
Introduction

This paper provides a brief introduction to the domain of ‘learning analytics’. We first explain the background and the idea behind the concept, then give an overview of current research issues, and finally list some of the more controversial issues before concluding.

Background and idea

Learning analytics can be introduced as a particular application of the more general movement towards the ‘quantified self’: in this community, researchers, hobbyists and enthusiasts focus on ‘self knowledge through self-tracking’ [1]. Typical examples include tracking of food intake, emotional well-being, sports activities, work productivity, etc. Some applications track activities automatically, using sensors. These sensors can be hardware-based: a typical example is the Fitbit, which tracks physical activity such as walking, running and stair climbing, and which is part of a ‘health operating system’ [2]. The sensors can also be software-based: a typical example is RescueTime, which tracks all activities on a computer system and enables a user to set goals for how much time she wants to spend on certain kinds of activities [3].
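To make the idea of software-based tracking more concrete, the following sketch (in Python) aggregates hypothetical time samples per activity category and compares them with user-set goals. The log format, the categories and the goals are assumptions for illustration only; they do not reflect how RescueTime actually represents its data.

    # A minimal sketch (assumed log format and goals; not the RescueTime API) of a
    # software-based "sensor": time samples are aggregated per activity category
    # and compared against the goals a user has set for those categories.
    from collections import defaultdict

    samples = [  # (activity category, minutes spent), as a tracker might record them
        ("writing", 40), ("email", 25), ("social media", 55),
        ("writing", 35), ("email", 10), ("social media", 20),
    ]
    goals = {"writing": ("min", 60), "social media": ("max", 45)}  # minutes per day

    totals = defaultdict(int)
    for category, minutes in samples:
        totals[category] += minutes

    for category, (kind, limit) in goals.items():
        spent = totals[category]
        on_target = spent >= limit if kind == "min" else spent <= limit
        status = "ok" if on_target else "off target"
        print(f"{category}: {spent} min (goal: {kind} {limit} min) -> {status}")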

Another way to introduce learning analytics is to situate it in the explosive trend towards ‘business analytics’ [4] and ‘Big Data’ [5]. In this domain, huge data repositories collect traces of where people go, whom they interact with, what they buy, etc. Analytical applications then try to make sense of these data, either algorithmically through data mining techniques, or through the information visualization techniques of visual analytics.

Learning analytics can then be defined as a research area that focuses on collecting traces that learners leave behind and using those traces to improve learning. In this domain, there are two major approaches:

  1. Educational Data Mining can process the traces algorithmically and point out patterns or compute indicators [6].

  2. Information visualization can present the traces in ways that help learners or teachers to steer the learning process [7].

One way to make the distinction clearer is to compare educational data mining with autonomous vehicles: the idea is to build algorithms that discover patterns in what learners do and then steer the learner in the right direction. The work we and others do on visualization, on the other hand, is more about building dashboards that support people in becoming better drivers: the idea is to help learners become more aware of what they do, to support self-reflection and to enable sense-making [22] [8] [9] [10].

There is also an opportunity to combine both approaches: for instance, we can use visualization techniques to help people understand what data mining algorithms come up with and why. In that way, work on visualization can help to increase understanding of and trust in what the educational data mining community achieves [11].
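To make the distinction, and the opportunity to combine both approaches, a bit more concrete, the following sketch applies both views to the same deliberately simplified activity traces: an indicator in the spirit of educational data mining that flags learners below an activity threshold, and a dashboard-style aggregate that leaves the interpretation to people. The event format and the threshold are assumptions for illustration only, not taken from the tools cited above.

    # A minimal, purely illustrative sketch of the two approaches on the same traces.
    from collections import Counter

    # Each trace is (learner, activity type) - a deliberately simplified format.
    traces = [
        ("alice", "blog_post"), ("alice", "tweet"), ("alice", "blog_comment"),
        ("bob", "tweet"),
    ]

    # (1) Mining-style indicator: flag learners whose total activity falls below a
    #     (purely illustrative) threshold, so that a system could "steer" them.
    totals = Counter(learner for learner, _ in traces)
    at_risk = sorted(learner for learner, n in totals.items() if n < 2)
    print("Low-activity learners:", at_risk)

    # (2) Dashboard-style view: present per-learner, per-activity counts and leave
    #     the interpretation to the learner or teacher (e.g. as a simple bar chart).
    per_type = Counter(traces)
    for (learner, kind), n in sorted(per_type.items()):
        print(f"{learner:6s} {kind:12s} {'#' * n}")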

Our research interests align mostly with the visualization approach, as it focuses on helping people rather than on automating the process [12]. It is inspired by a ‘modest computing’ approach [13] where the technology is used to support what we want people to be good at (being aware of what is going on, making decisions, …) by leveraging what computers are good at (repetitive, boring tasks…).

In any case, learning analytics is certainly a booming domain [14]: the first ‘International Conference on Learning Analytics and Knowledge’ was organized in 2011 [15]. LAK2012 sold out [16], and LAK2013 will mark the first time that the conference is organized in Europe (April 8-12, Leuven, Belgium) [17]. Recently, the Society for Learning Analytics Research (SoLAR) was set up, “an inter-disciplinary network of leading international researchers who are exploring the role and impact of analytics on teaching, learning, training and development” [18].

Research issues

There are, as in most research fields, many diverse interests and goals in research on learning analytics. Here, we list some of the issues that we feel are particularly important:

  1. A really tough problem is figuring out which learning traces are meaningful: this is all the more problematic in learning, where there seems to be little consensus on what the relevant criteria are… Maybe time of day or location is relevant. And maybe not. Maybe whom the learner is with or what device she is using is relevant. Maybe not. Maybe what the teacher had for lunch or the background noise level is important. Maybe not. As mobile devices proliferate, and as these devices integrate more and more sensors, there are more and more characteristics that we can measure. Still, the fact that we can measure them does not make them relevant. However, it is often difficult to figure out beforehand what will be relevant, why, or how...

  2. Specifically for visual approaches to learning analytics, translating those traces into visual representations and feedback that support learning is another challenge: the danger of presenting meaningless eye candy or networks that confuse rather than help is all too real [19].

  3. This kind of work is difficult to evaluate: we can (and do!) evaluate usability and usefulness, but assessing real learning impact is hard – both on a practical, logistical level (as it requires longitudinal studies) and on a more methodological level (as impact is ‘messy’ and it is difficult to isolate the effect of the intervention that we want to evaluate) [20]. One way to address this problem is to adopt a design research methodology [21], where, rather than trying to do double-blind studies, the focus is on the design, development and deployment of artefacts and on the study of the effects that these have in real learning contexts.

  4. While it is true that ‘if you don’t measure, you don’t know’, there is also a danger that ‘you become what you measure’. In our work, we track, for instance, how many Twitter messages students send with a course hashtag, how many blog posts they write, how many comments they make on the blogs of other students, how many lines of code they program, how many compilation errors they trigger, etc. [22] (a minimal sketch of such counting follows this list). Yet these are all quantitative measures, and more is sometimes ... just more. If students were to conclude from these measures that the most important goal is to tweet and blog as often as possible, then learning analytics might be more a problem than a solution. On the other hand, if they do not tweet, blog or program at all, they will not contribute to or benefit from our very community-of-practice oriented approach to learning.

  5. By collecting the traces that learners leave behind, we can build data sets that will help to turn learning research into more of an empirical science [23]. Sharing these data sets is a huge e-science challenge that we try to address with the EATEL Special Interest Group [24].

  6. Obviously, tracking all the traces that learners leave behind, sometimes in quite detailed ways [25], raises questions around privacy and ‘Big Brother’ aspects [26]: we advocate transparency in this context, where learners know that they are being tracked and what exactly is recorded, and where the learners, teachers, organization (and even the outside world) all have access to the same data.

  7. As support through learning analytics becomes more and more effective, there is also a danger that it becomes enslaving rather than empowering [27]. Although the line between persuasion and coercion can sometimes be thin, that does not mean that we should leave learners without any kind of support.
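As announced under issue 4, the following sketch illustrates how purely quantitative such measures are: it merely counts course-tagged tweets per student. The message format and the hashtag are hypothetical; the actual tracking described in [22] is considerably richer.

    # A minimal sketch (assumed message format, hypothetical course hashtag) of one
    # of the quantitative measures mentioned in issue 4. The count says nothing
    # about the quality of the contributions.
    COURSE_TAG = "#chikul13"  # hypothetical course hashtag

    tweets = [  # (student, message) - an assumed, simplified record of tracked tweets
        ("alice", "First prototype of our dashboard is online #chikul13"),
        ("alice", "#chikul13"),  # counts just as much as a substantive message
        ("bob", "Wrote up our evaluation plan, feedback welcome #chikul13"),
    ]

    counts = {}
    for student, message in tweets:
        if COURSE_TAG in message.lower():
            counts[student] = counts.get(student, 0) + 1

    print(counts)  # {'alice': 2, 'bob': 1} - 'more' is not necessarily 'better'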

All of these and many other issues are being actively researched in numerous projects and by a wide variety of parties from different backgrounds. As mentioned above, the SoLAR society [18] and the LAK conference series [17] try to build a community on this topic.

Conclusion

We hope that this short introduction to the new and exciting field of learning analytics may help to raise awareness, interest and reflection: the authors welcome feedback and comments!

References

[1] About the Quantified Self, http://quantifiedself.com/about/ (last check 2012-06-08)

[2] Fitbit, http://www.fitbit.com (last check 2012-06-08)

[3] RescueTime, http://www.rescuetime.com (last check 2012-06-08)

[4] Watson, H. J.; Wixom, B. H.: The Current State of Business Intelligence. In: IEEE Computer, 40(9), 2007, pp. 96-99. doi:10.1109/MC.2007.331

[5] Jacobs, A.: The Pathologies of Big Data. In: Communications of the ACM, 52(8), 2009, pp. 36-44.

[6] Romero, C.; Ventura, S.: Educational data mining: A survey from 1995 to 2005. In: Expert Systems with Applications, 33(1), 2007, pp. 135-146.

[7] Duval, E.: Attention please! Learning analytics for visualization and recommendation. Proceedings of LAK11: 1st International Conference on Learning Analytics and Knowledge, pp. 9-17. ACM, 2011.

[8] Leony, D.; Pardo, A.; de la Fuente Valentín, L.; Sánchez de Castro, D.; Kloos, C.D.: “GLASS”: A Learning Analytics Visualization Tool. In: Proceedings of the Second International Conference on Learning Analytics and Knowledge (LAK’12). Vancouver, Canada, 29 April - 2 May 2012.

[9] Bakharia, A.; Dawson, S.: SNAPP: A Bird’s-Eye View of Temporal Participant Interaction. In: Proceedings of the 1st International Conference on Learning Analytics and Knowledge (LAK '11). ACM, New York, NY, USA, 2011, pp. 168-173.

[10] Vatrapu, R.; Teplovs, C.; Fujita, N.; Bull, S.: Towards visual analytics for teachers' dynamic diagnostic pedagogical decision-making. In: Proceedings of the 1st International Conference on Learning Analytics and Knowledge (LAK '11). ACM, New York, NY, USA, 2011, pp. 93-98.

[11] Essa, A.; Ayad, H.: Student Success System: Risk Analytics and Data Visualization using Ensembles of Predictive Models. In: Proceedings of the Second International Conference on Learning Analytics and Knowledge (LAK’12). Vancouver, Canada, 29 April - 2 May 2012.

[12] Govaerts, S.; Verbert, K.; Duval, E.; Pardo, A.: The Student Activity Meter for Awareness and Self-reflection. In: Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems, Extended Abstracts. ACM, Austin, TX, USA, 2012, pp. 869-884.

[13] Dillenbourg, P.; Zufferey, G.; Alavi, H.; Jermann, P.; Do-Lenh, S.; Bonnard, Q.; Cuendet, S.; et al.: Classroom Orchestration: The Third Circle of Usability. In: CSCL2011 Proceedings, Vol. I, 2011, pp. 510-517.

[14] Siemens, G.: Learning Analytics: Envisioning a Research Discipline and a Domain of Practice. In: Proceedings of the Second International Conference on Learning Analytics and Knowledge (LAK’12). Vancouver, Canada, 29 April - 2 May 2012. ACM.

[15] LAK2011, https://tekri.athabascau.ca/analytics/ (last check 2012-06-08)

[16] LAK2012, http://lak12.sites.olt.ubc.ca/ (last check 2012-06-08)

[17] LAK2013, http://lakconference.org/ (last check 2012-06-08)

[18] SOLAR, http://www.solaresearch.org/ (last check 2012-06-08)

[19] Plaisant, C.: The challenge of information visualization evaluation. In: Proceedings of AVI '04: Working Conference on Advanced Visual Interfaces. ACM, 2004, pp. 109-116.

[20] Shneiderman, B.; Plaisant, C.: Strategies for Evaluating Information Visualization Tools: Multi-dimensional In-depth Long-term Case Studies. In: BELIV ’06: Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization. 2006, pp. 1-7.

[21] McKenney, S.; Reeves, T.: Conducting Educational Design Research. Routledge, 2012.

[22] Santos, J. L.; Verbert, K.; Govaerts, S.; Duval, E.: Goal-oriented visualizations of activity tool tracking. A case study with engineering students. In: Proceedings of LAK12: 2nd International Conference on Learning Analytics and Knowledge. ACM, Vancouver, Canada. 2012. Accepted.

[23] Verbert, K.; Duval, E.; Drachsler, H.; Manouselis, N.; Wolpers, M.; Vuorikari, R.; Beham, G.: Dataset-driven Research for Improving TEL Recommender Systems. In: 1st International Conference on Learning Analytics and Knowledge. ACM, Banff, Canada, 2011, pp. 44-53.

[24] dataTEL TELeurope, http://www.teleurope.eu/pg/groups/9405/datatel/ (last check 2012-06-08)

[25] Romero-Zaldivar, V.-A.; Pardo, A.; Burgos, D.; Delgado Kloos, C.: Monitoring student progress using virtual appliances: A case study. In: Computers & Education, 58(4), 2012, pp. 1058-1067.

[26] Jarvis, J.: Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live. Simon & Schuster, 2011.

[27] Purpura, S.; Schwanda, V.; Williams, K.; Stubler, W.; Sengers, P.: Fit4life: the design of a persuasive technology promoting healthy behavior and ideal weight. In: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems. ACM, 2011, pp. 423-432.
