At the nexus of learning and innovation


Learning Analytics - Can we rigorously measure impact on students?

When deciding whether to implement a learning analytics (LA) tool, institutions often seek evidence that it will produce measurable benefits. Since most LA interventions are likely to produce some positive effect, institutions look for tools that demonstrate the largest, most meaningful effects.

Course Signals claimed to improve student retention by 21%, and many institutions adopted it over other tools because it was seen as the leader in the field of predictive analytics. In late 2013, however, the veracity of its claims came into question when a simulation established that there was no causal link between taking a Signals-enabled class and staying in college. At that time the Course Signals study was one of the most cited LA papers, and as a result the research approach of the wider LA community was called into question.

As with many other users and developers of LA tools, we currently face the challenge of determining how best to rigorously measure impact on students. The Student Relationship Engagement System (SRES), developed at the University of Sydney, is a flexible tool that allows teachers to collect, curate, analyse and act upon data that is meaningful to their teaching practices. Although its human-centred approach distinguishes it from many LA systems that rely on blind data mining and one-size-fits-all predictive analytics, the challenge of establishing impact remains. It is compounded by the fact that, if the system is working well, students should be largely unaware of its existence as it becomes seamlessly integrated into teachers' practices.

We invite you to discuss and share your thoughts on how researchers and institutions can more rigorously measure the impact of both top-down and bottom-up learning analytics tools on students.

Our driving questions for this discussion are:

  1. How can the learning analytics community develop better measures of learning (i.e. moving beyond crude measures such as attendance)?
  2. How do you measure the impact of learning analytics when it is likely confounded by many other factors?
  3. How do you assess the impact of a human-centred LA tool (such as the SRES) with a range of functionality when it is being used in diverse contexts?
  4. Institutions have focused heavily on student retention. From a teacher's perspective, what data is the most important and useful?
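As a toy illustration of questions 2 and 3, the sketch below uses entirely hypothetical numbers (the contexts, scores and group sizes are all made up) to show how a single pooled "average effect" can mask opposite effects in different course contexts, a form of Simpson's paradox. Any serious analysis would of course also need to adjust for confounders rather than compare raw means.

```python
# Toy sketch (hypothetical numbers): the same tool can appear to help in one
# course context and hinder in another, while the pooled average hides both.

records = [
    # (context, tool_used, final_score) -- all values are invented
    ("large_lecture", True, 62), ("large_lecture", True, 64),
    ("large_lecture", False, 55), ("large_lecture", False, 57),
    ("small_seminar", True, 80), ("small_seminar", True, 82),
    ("small_seminar", False, 85), ("small_seminar", False, 87),
]

def mean(values):
    return sum(values) / len(values)

def effect(rows):
    """Difference in mean score between tool users and non-users."""
    treated = [score for _, used, score in rows if used]
    control = [score for _, used, score in rows if not used]
    return mean(treated) - mean(control)

pooled = effect(records)  # a tiny apparent benefit overall
per_context = {
    context: effect([r for r in records if r[0] == context])
    for context in sorted({r[0] for r in records})
}
print(pooled)        # 1.0
print(per_context)   # {'large_lecture': 7.0, 'small_seminar': -5.0}
```

The pooled estimate (+1 mark) suggests a negligible effect, yet the tool is associated with +7 marks in one context and -5 in the other; throwing away the context strata discards exactly the information a teacher would need.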

Emerging Comments

Submitted by Courtney Hilton on Tue, 05/08/2018, 00:01

To assess impact on learning more meaningfully, it might be interesting to support lecturers in delivering online 'micro-quizzes' and the like more often in their teaching, and then using this data. Not only is regular low-stakes testing well established as beneficial for learning in and of itself, but it can also be a valuable source of data for assessing impact on students over time.

Having large cohorts do this regularly, university-wide, would generate a substantial amount of data that might be combined with other data streams to build a clearer picture of student learning.

Reply

@Courtney Hilton I agree that low-stakes testing is good for learning overall, but I'm not sure it would build a "clearer picture of student learning" across an entire university. Building a meaningful assessment that actually measures the outcomes one wants to teach is challenging in itself; measuring such assessments meaningfully across an entire university is even more so. Each community of practice within a university has its own distinct culture, so it's hard to know whether what you measure in one discipline is equivalent to what you would measure in another. We lose the quality of the learning and replace it with discrete collections of data that can then be interpreted in a myriad of ways, and we are only measuring variables that represent a very small part of what the student is experiencing. I think the challenge for LA is to find a niche that informs the big picture of student learning, and to make it relevant in a broader conversation about learning for students and for teachers.

Submitted by Brian Bailey on Wed, 05/16/2018, 08:48

A 'commercial' approach

Not having an academic background, I will address this as though someone asked me in a corporate or industry context:

1. Identify the outcomes you want to measure, then work backwards to identify the drivers for those outcomes and create measures that will facilitate the evaluation of change. Start with good questions, then kick off the process of finding answers.

2. Multivariate analysis. More to the point, identify subject and control groups and do this longitudinally, and at scale. At the risk of being mildly controversial, I am not sure we have an institution-wide approach to doing this yet.

3. As with 1 and 2, I think. What are you trying to achieve? Multivariate analysis, at scale and at an institutional level, resourced accordingly. The SRES is many things, but appropriately resourced for an enterprise system it is not.

Reply

Brian, you make some very interesting points here.

I have previously seen instances where platforms have been implemented without a clear understanding of expected outcomes. Unless an organisation is clear about the outcomes that it is looking to achieve, it becomes difficult to measure impact.

Reply

You do make some interesting points there. (2) is a tricky one because a typical randomised controlled trial (or the like) is unethical to conduct on student cohorts. Teacher researchers are 'stuck' with broader measures of performance and engagement, which are noisy and subject to many confounding factors. The first post suggests some nice alternatives: more fine-grained regular assessments that are pedagogically sound and could also be leveraged for useful performance and engagement data.

On (1) and (3), I think this has been one of the things holding learning analytics back: people asking "what is the question?" instead of the perhaps deeper, more meaningful question of "what are people's needs?". The former assumes that people (staff, students) have questions, whereas they might just be struggling to teach or study. Instead, what we've seen with the SRES is that, in addressing the needs first, the questions then emerge. Allowing the complex system that is higher education learning and teaching to work this out for itself is an approach that is rarely taken. As a complex system, it may not lend itself to a more 'waterfall'-like approach.

Reply

One of the many aspects of the SRES that distinguishes it from other LA platforms is that its implementation has been primarily bottom-up. When a platform has a top-down implementation, one would assume there is a specific objective for implementing that particular platform, regardless of who in the institution is using it. With bottom-up implementation there are still objectives; however, these are specific to the individual teacher or coordinator who decides to use the tool in their course, and each person will likely have something different in mind that they are trying to achieve. Are we, therefore, better off using top-down measures for top-down implementations and bottom-up measures for bottom-up implementations?

Submitted by Peter Reimann on Mon, 05/21/2018, 20:16

I'm addressing 2 and 3. For human-centred LA tools like the SRES, which can be used in various forms and in many different learning contexts, it might be futile to ask what the impact is "in general" (which we automatically take to mean "on average"). There are too many uncontrolled variables, and too many differences between intended use and realised use. If the tool works well in one context and not so well in another, should we just use the average effect and throw away information about the role of the differences between the two contexts?

My general advice would be to think about determining the impact of complex socio-technical interventions along the lines developed in evaluation research under the label of theory-based evaluation. In a nutshell, the strategy is to test whether a theory about the impacts (consequences, effects) of an intervention is met, rather than trying to come up with a single quantitative (and often uninterpretable) impact measure. My favourite variant of theory-based evaluation methodology is realist evaluation.

Theory-based evaluation methodology also allows us to take into account the fact that LA tools are often more like socio-technical systems than single-purpose, single-user applications. It is unrealistic to assume that a single number for impact or effect can describe the effects of an intervention that takes the form of a socio-technical system. I note in passing that information systems researchers have also taken on the realist perspective that underlies theory-based evaluation (e.g. Carlsson, 2006).

Fitz-Gibbon, C. T., & Morris, L. L. (1996). Theory-based evaluation. Evaluation Practice, 17(2), 177–184.

Westhorp, G. (n.d.). Realist evaluation: An overview. Report from an expert seminar.

Bruce, B. C., Rubin, A., & An, J. (2009). Situated evaluation of socio-technical systems. In Handbook of Research on Socio-Technical Design and Social Networking Systems (pp. 685–698).

Carlsson, S. A. (2006). Towards an Information Systems Design Research Framework: A Critical Realist Perspective. Design Science Research in Information Systems and Technology, 192–212.
