The next step in real-time student risk alerts

Less may be more when it comes to creating scalable LMS-based risk alerts

Of all the available sources of real-time student success data, perhaps none holds as much untapped promise as the Learning Management System (LMS).

In theory, an LMS has the potential to collect a rich array of data on course engagement, grade performance, and learning behaviors, all in real time. For practitioners of Student Success Management (SSM), the metrics available today can feel frustratingly slow: course grades are updated at the end of the term, while graduation rates come in only once a year.

The possibility of using real-time LMS data can seem like a dream come true. So why aren't we using it already?

A brief history of LMS-based alert systems

The LMS isn’t a new technology; the longest-tenured platforms are roughly 20 years old. EAB profiled one of the very first alert systems, the Purdue “Signals” project, back in 2009. Signals used a wide range of LMS data to forecast student performance in a course and targeted students with specific interventions to increase their engagement and improve their final course grades. Purdue deployed Signals in roughly a dozen introductory courses.

At the time, it was thought that LMS-based early warning algorithms like Signals would eventually replace the more manual faculty early alert systems already in place at many schools. Indeed, Signals inspired countless LMS-based startups, few of which are still in the market today. But by 2017 the Signals project had been discontinued, unable to scale beyond pilot courses specifically designed to encourage LMS interaction. With so much promise, what went wrong with the first generation of LMS-based algorithms?

Inconsistency of data across courses

Predictive models rely on data that is consistent from cycle to cycle and from application to application. This is why transcript data was one of the first things we chose to include in the Student Success Collaborative risk algorithms. With some minor variation, professors tend to award grades in the same way from year to year, and thus past patterns can be used to predict future outcomes.

The same cannot be said for LMS data. LMS usage varies from course to course and from instructor to instructor, and it can even change from semester to semester within a single course if an instructor decides to try something new. Even now, most instructors use their LMS only for basic functions, such as gradebooks and posting course documents. As a result, a model that depends on sophisticated data inputs (such as discussion board posts) won't be usable for a large number of courses. This means that LMS-based models are difficult to develop, costly to maintain, and limited in scalability. Viewed this way, it's not hard to understand why we haven't made much progress here.

Achieve scale with less data

We wondered if it would be possible to design around these limitations and build an LMS-based risk algorithm optimized for scalability while still returning useful risk signals. As it turned out, the answer wasn't more data; it was less.

Partnering with George Mason University (GMU), we looked at what risk signals could be derived from the broadest and simplest possible LMS data element: logins. Logins have the advantage of being universal to every kind of LMS use case and thus could be used to understand risk in any course that uses an LMS. Across three semesters of data, we found that 93% of GMU students log in to the LMS at least once a week.
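
To make that measurement concrete, here is a minimal sketch of how a weekly login rate could be computed from raw login events. The file names and column names are illustrative assumptions, not GMU's actual schema.

```python
import pandas as pd

# Hypothetical extracts (file and column names are assumptions, not GMU's schema):
#   lms_logins.csv -- one row per LMS login: student_id, login_ts
#   enrollment.csv -- one row per enrolled student: student_id
logins = pd.read_csv("lms_logins.csv", parse_dates=["login_ts"])
enrolled = pd.read_csv("enrollment.csv")["student_id"].nunique()

# Bucket logins into ISO weeks and count distinct active students per week.
week = logins["login_ts"].dt.isocalendar().week
active = logins.groupby(week)["student_id"].nunique()

# Fraction of enrolled students logging in at least once in a given week.
weekly_login_rate = active / enrolled
print(weekly_login_rate.round(2))
```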

We also found that logins can be normalized by course, week, and term to determine a mean rate of normal usage for any one course. In this way, logins function as a kind of proxy for course attendance, a metric we have long known to be among the earliest and best indicators of course performance.
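
As a sketch of that normalization step, the snippet below computes a mean login rate for each course-week and flags students who fall below it. The schema is again hypothetical; this illustrates the approach rather than the production algorithm.

```python
import pandas as pd

# Hypothetical login extract (schema is an assumption): one row per login
# with student_id, course_id, term, and login_ts.
logins = pd.read_csv("lms_logins.csv", parse_dates=["login_ts"])
logins["week"] = logins["login_ts"].dt.isocalendar().week

# Per-student login counts for each course-week within each term.
counts = (
    logins.groupby(["term", "course_id", "week", "student_id"])
    .size()
    .rename("n_logins")
    .reset_index()
)

# Caveat: enrolled students with zero logins in a week are absent from this
# table; a real pipeline would join enrollment records and fill zeros.

# Baseline: the mean login count for each course-week, i.e. "normal" usage.
counts["baseline"] = counts.groupby(["term", "course_id", "week"])["n_logins"].transform("mean")

# Students below the course-week baseline are candidates for a risk signal.
flagged = counts[counts["n_logins"] < counts["baseline"]]
print(flagged[["term", "course_id", "week", "student_id"]])
```

Normalizing within each course-week keeps the baseline robust to differences in how heavily individual instructors use the LMS, which is the inconsistency problem described above.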

Our work with GMU revealed that students who fall below this baseline are more likely to earn a failing grade and less likely to return for the next semester.

[Figure: GMU analysis]

This encouraging result gives us hope that logins could be used to finally develop a scalable LMS-based risk algorithm.

What comes next for LMS?

Our pilot with GMU leaves open questions that require further work: Will this analysis hold for smaller institutions? How soon can we intervene with a student based on LMS risk signals? What other data sources could improve precision without sacrificing scalability?

Our data science team is in the early stages of a broader follow-up analysis, and we look forward to providing updates in the future.

View our full analysis from GMU

Learn more about the insights our EAB data scientists uncovered about developing a broadly scalable LMS-based risk algorithm in partnership with George Mason University.

Download the White Paper
