EC-TEL 2019 Workshop on Generating Actionable Predictions on Student Success: The Role of Learning Design (PredictWithLD)

Hands-on Workshop Session in EC-TEL 2019, Delft, Netherlands

Please use the form at the end of this page to tell me about yourself. This is important for me to tune the tutorial content. See you in Delft!

Background

Prediction research has typically been based on data from a single past course, using post-hoc approaches (e.g., cross-validation) to build and test predictive models. However, these approaches are not valid for real-world use, since they require the true training labels, which cannot be known until the target event (e.g., dropout) takes place (Bote-Lorenzo & Gómez-Sánchez, 2018; Gardner & Brooks, 2018). To overcome this limitation, Boyer et al. (2015) proposed the in-situ learning approach, which allows training a model based on proxy labels. With in-situ learning, predictions can be produced before the target activity takes place, so they can be used to create real-world interventions. Nonetheless, its use in MOOC prediction research remains very limited. If widely adopted and practised by researchers and practitioners, educational interventions based on predictive models could be promoted in real-world practice.

Another main limitation in the literature is that predictive models are designed independently of the contexts for which they are intended. Yet learning design and pedagogical intentions can substantially shape how students engage in a course, and can help determine which engagement indicators are critical to student success. For example, most studies include the total number of video views as a predictive feature. Such a cumulative feature, however, may not capture student engagement with a particular video that is especially critical to success in the target assignment or exam. Thus, research is needed to create predictive models that align more closely with the context for which they are intended.

Objectives

The first objective of this session is to teach how to generate actionable predictions on student engagement using the in-situ learning approach. Participants will be guided to compute relevant features and to train a predictive model with the in-situ learning approach in Python. Scikit-Learn, one of the most widely used machine learning libraries in the field, will be used in the tutorial.

The second objective is to show participants how to inform feature selection with learning design, which is largely overlooked in the literature. Features informed by the learning design of the context under study can lead to more powerful models. In the session, participants will experience how to interpret the learning design to create features that are more relevant to the context.

Schedule 

Theory (30 mins.)

  • Introduction to Machine Learning
  • Training Paradigms: Cross-Validation and In-Situ Learning

Hands-On Exercise - PART 1 (45 mins.)

  • Getting Familiar with Jupyter Notebook Environment
  • Introduction to Python, Pandas, and Scikit-Learn
  • Understanding the Data

Hands-On Exercise - PART 2 (90 mins.)

  • Building Machine Learning Models in Scikit-Learn with
    • Cross-Validation
    • Transfer Across Courses
    • In-Situ Learning

Concluding Remarks (15 mins.)

  • Reflecting on the session: Ideas to Put into Practice What Was Learned

WHAT YOU NEED: A fully charged laptop with Anaconda (Python 3.7 version) installed. Feel free to contact me (erkanererkaner@gmail.com) if you need any help before the session.

Organiser

Erkan Er received his PhD degree in Learning, Design, and Technology from the University of Georgia, USA, in 2016. He is currently working as a postdoctoral researcher in the GSIC-EMIC research group at the Department of Telecommunications Engineering, University of Valladolid, Spain. His recent research interests include using machine learning and educational data mining techniques to understand and support student learning in massive contexts.

He currently works on the project WeLearnAtScale.

Alejandro Ortega-Arranz received his BSc and MSc in telecommunications engineering from the University of Valladolid, Spain, in 2014 and 2015, respectively. He is currently a PhD candidate in the GSIC-EMIC Research Group at the University of Valladolid. His main research interests include game-based learning, gamification, and the technologies supporting the implementation of these strategies in online environments at different scales.

Tell Me About Yourself

This form is to help me know more about you. Any information you provide will help me fine-tune the content of the session so that you can make the most of it.

See you in Delft!

 

References

Bote-Lorenzo, M. L., & Gómez-Sánchez, E. (2017). Predicting the decrease of engagement indicators in a MOOC. In Proceedings of the Seventh International Conference on Learning Analytics and Knowledge (pp. 143–147). Vancouver, Canada. https://doi.org/10.1145/3027385.3027387

Bote-Lorenzo, M. L., & Gómez-Sánchez, E. (2018). An approach to build in situ models for the prediction of the decrease of academic engagement indicators in Massive Open Online Courses. Journal of Universal Computer Science. Accepted for publication.

Boyer, S., Gelman, B. U., Schreck, B., & Veeramachaneni, K. (2015). Data science foundry for MOOCs. In Proceedings of the IEEE International Conference on Data Science and Advanced Analytics (pp. 1–10). Paris, France. https://doi.org/10.1109/DSAA.2015.7344825

Gardner, J., & Brooks, C. (2018). Student success prediction in MOOCs. User Modeling and User-Adapted Interaction, 28(2), 127–203. https://doi.org/10.1007/s11257-018-9203-z

 


Predicting Student Success in a Future Assignment: How to Generate Actionable Predictions

In this tutorial, we will predict the likelihood that students will succeed (or fail) in a particular assignment in a course. For this prediction task, we will build a classification model that accurately categorises students into two groups: those who achieve a score above 60 (i.e., success) vs. the others.

To be able to make this classification, we would normally need labelled data to train a machine learning model. However, if we want to offer earlier help to the students who are likely to fail the assignment, we have to identify them before the submission period even starts. That is, we have to make the predictions without using the true class labels, so that they can be acted upon in context by the students (or instructors).

For this purpose, we will explore the use of two transfer learning techniques, namely in-situ learning and transfer across courses, to train models that can predict student engagement in an upcoming assignment. These techniques are simple to apply, yet powerful for generating actionable predictions.

In-situ Learning

In-situ learning is a transfer learning approach used in MOOCs (Massive Open Online Courses) [ref]. In a particular learning context, this approach involves (1) deriving a proxy label based on student engagement (or success) in a past activity (such as a completed assignment), (2) training a predictive model with the proxy-labelled data, and (3) using the model to make predictions for the targeted future task.

The accuracy of predictions with in-situ learning depends largely on whether the predictive power of the features on the proxy task carries over to the target task. That is, if the way students engaged or disengaged in learning activities (as measured by the features) predicted their success or failure in the proxy task, then students who maintain the same engagement patterns should obtain (more or less) the same outcomes in the upcoming target task.

Figure 1 below illustrates how we will apply the in-situ learning approach in the current context.


Figure 1. The use of the in-situ learning approach to predict student success in a future assignment (the subscript ‘1’ refers to the first assignment, and the subscript ‘2’ refers to the second assignment)

Let’s put into words what this figure tells us. First, we generate features (features1) using the student activity data up to the date when the first assignment (a1) starts. a1 is already completed, which means we can generate labels based on student submissions (class_labels1). So we have the feature set and the class labels to form the training set (trainingset1), which we use to train a classifier model. Then we repeat the same procedure to generate the same features (features2), this time using the student activity data up to when the second assignment (a2) starts. In the final step, we feed these features into the trained model to generate the predictions, which will be available before the activity itself starts.
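To make the workflow concrete, here is a minimal sketch in Scikit-Learn, assuming features1, class_labels1, and features2 have already been computed as in the figure (the classifier choice is just an example; the full code comes later in this tutorial):

    from sklearn.linear_model import LogisticRegression

    # trainingset1 = features1 + class_labels1 (proxy labels derived from a1 submissions)
    classifier = LogisticRegression()
    classifier.fit(features1, class_labels1)

    # features2 are computed from activity up to the start of a2,
    # so these predictions are available before a2 even opens
    predictions = classifier.predict(features2)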

Transfer Across Courses

The second technique for producing actionable predictions covered in this tutorial is transfer across courses. This technique involves training a model using the data from a past course and using this model in an ongoing course to make predictions about student engagement/success in an upcoming task. For this technique to produce accurate results, similarity (or sameness) between the courses in terms of their learning designs is a must. For this reason, transferring predictive models between different runs of the same course (with no or minor changes in the learning design) is highly likely to result in more accurate predictions. That is, the key is to transfer the effect of the learning design on student engagement and success.

For example, let’s say we train a model in the first run of a course where quizzes are mandatory elements connected with an assignment. Using several features about students’ quiz engagement and success, we train a model and transfer it to another course with a different learning design, where quizzes are optional and serve for practice. In this scenario, we should not expect good performance, since the role of quizzes in student learning differs between the two courses, which will influence how students engage in quizzes and how this engagement relates to their performance in the assignment.
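In code, the recipe is the same as for in-situ learning; only the source of the training data changes. A minimal sketch, with illustrative variable names:

    from sklearn.linear_model import LogisticRegression

    # Train on the completed first run of the course...
    classifier = LogisticRegression()
    classifier.fit(features_run1, labels_run1)

    # ...and predict for the same assignment in the ongoing second run
    predictions = classifier.predict(features_run2)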

The following figure visualises the use of transfer across courses approach to predict student success in an upcoming assignment using a model trained with the data from the first run of the same course.


Figure 2. The use of the transfer across courses approach to predict student success in an upcoming assignment. The subscript ‘1’ refers to the first assignment and ‘2’ to the second assignment; (1) refers to the first course and (2) to the second course.

The Experiment Data

The Course Structure

The experiment data belongs to an (imaginary) course with the learning design shown in Figure 3. Please note that, for the scope of this tutorial, we focus on the first two weeks (or modules).


Figure 3. The learning design of the course

Based on Figure 3, the first two weeks follow the same structure:

  • Introduction: The first page in the module, introducing the learning objectives
  • Content: The content page where the learning concepts are explained comprehensively
  • Video: The video explanation of the concepts
  • Practice Example: A worked example for applying the concepts
  • Discussion: A discussion forum for asking/answering questions about the concepts
  • Quiz: A multiple-choice quiz (quiz #1 can be taken only once; there are no constraints for quiz #2)
  • Assignment: The end-of-module assignment, submitted by each student individually (graded on a scale of 0-100)

The Log Data

The data about students’ learning activities (also called trace data) that you have at hand determines the features you can potentially generate to build your predictive model. The structure and content of educational data are affected by several factors. First, the general structure of the data is largely shaped by the learning design of the course. For example, if there are videos in the course, the dataset is likely to include tables recording students’ play/pause activities on each video. That is, the educational data maps onto the learning design of the course.

Second, the content of the data depends on the types of student interactions (with the course components) that are traced. For example, many MOOC databases include logs of page visits but not the duration of each visit, which prevents us from generating a feature about the time spent on ‘important’ pages (although it could be a significant indicator of student engagement).

In our experiment, we will use the following log data. Please download the files and store them in a specific folder on your computer. We will start to explore them in the next section.

Exploring the Data and Generating the Features

OK, let’s start coding! Run Jupyter Notebook and navigate to the folder where the data files are saved. Then create a new notebook file and name it “feature-generation”.

Before we continue, we need to import several Python libraries that we will use in this prediction task. The first library is pandas (which stands for Python Data Analysis Library). It is the most widely used library for performing data science with Python. Pandas can read a variety of data files (e.g., csv, tsv, xls, SQL databases) and stores the source file in a data object called a dataframe, which is composed of rows and columns (similar to tables in SQL, Excel, and R). The dataframe comes with a rich set of attributes and methods that greatly facilitate editing the data. We will also import numpy, a scientific computation library, and matplotlib, a data visualisation library.

In the following code, we import these three libraries. Please note that, by convention, the code block importing the libraries needed throughout the notebook always goes at the top of the page (generally in the first cell).

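A cell along these lines does the job:

    import pandas as pd                 # data analysis library
    import numpy as np                  # scientific computation library
    import matplotlib.pyplot as plt     # data visualisation library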

The data files that we will use in this exercise are the ones you downloaded from the Google Drive folder. They should be placed in the same directory as the notebook file you are currently working on.

Page view features

The first feature set that we will create concerns students’ page view activities. You may remember that there are three types of pages (i.e., introduction, content, practice example), and we want to generate one page view feature for each page type.

Let’s load the log data (called ‘page-view-logs.csv’) using the read_csv method of pandas, which receives the file name as a parameter. read_csv converts the content of the csv into a dataframe, which we assign to a variable called df_pw_logs. Then, using the .head() method we display the top 3 rows of the dataframe, and using the .shape attribute we display the number of rows and columns of the dataframe. The details are provided below:

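A sketch of these two cells (throughout this post I will assume the logs contain student_id, page_id, and page_type columns; the actual field names depend on the data files):

    # Cell 1: load the page view logs into a dataframe
    df_pw_logs = pd.read_csv('page-view-logs.csv')
    df_pw_logs.head(3)   # display the top 3 rows

    # Cell 2: the dimensions of the dataframe
    df_pw_logs.shape     # (number of rows, number of columns)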

In the following code, we create the page view counts for the Introduction pages. The explanation is provided in the comments within the code. Please note that the code in the first cell creates the view counts for the Introduction pages in the first module, and the second cell calculates the same count for the second module. Note that the page ids for the first module are 1 and 2, and for the second module 3 and 4.

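A sketch of these two cells, assuming the column names introduced above (with page types stored as strings such as 'introduction'):

    # Cell 1: view counts of the Introduction pages in Module 1 (page ids 1 and 2)
    intro_logs_m1 = df_pw_logs[(df_pw_logs['page_type'] == 'introduction') &
                               (df_pw_logs['page_id'].isin([1, 2]))]
    intro_views_m1 = intro_logs_m1.groupby('student_id').size()

    # Cell 2: the same computation for Module 2 (page ids 3 and 4)
    intro_logs_m2 = df_pw_logs[(df_pw_logs['page_type'] == 'introduction') &
                               (df_pw_logs['page_id'].isin([3, 4]))]
    intro_views_m2 = intro_logs_m2.groupby('student_id').size()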

You may notice that the code in the second cell repeats the code in the first cell. To prevent this redundancy, we can define a function that returns each student’s page views for a particular page, requested by its id. Please see the code below:

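The function might look like this:

    def get_intro_views(df, page_id):
        # Keep only the Introduction page requested by its id
        views = df[(df['page_type'] == 'introduction') & (df['page_id'] == page_id)]
        # Count the view events per student; name the feature after the page id
        return views.groupby('student_id').size().rename('introduction_' + str(page_id))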

Note that the code above is almost identical to the code we used before, with the small difference that we filter the page views by a specific page, provided in the page_id parameter of our function. With this function, we can compute the view counts for any page we want (i.e., the page view features) and then combine all these features into a single dataframe. Please check the following code:

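For example (pd.concat aligns the per-page counts by student, and fillna gives students with no views a count of 0):

    # One view-count feature per Introduction page, combined into one dataframe
    df_intro_features = pd.concat(
        [get_intro_views(df_pw_logs, pid) for pid in [1, 2, 3, 4]],
        axis=1).fillna(0)
    df_intro_features.head()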

Everything looks perfect so far. Now we will create the same type of features for the “Content” pages. We can create a new function adapted from the previous one:

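A sketch of the adapted function:

    def get_content_views(df, page_id):
        # Identical to get_intro_views, except for the page type
        views = df[(df['page_type'] == 'content') & (df['page_id'] == page_id)]
        return views.groupby('student_id').size().rename('content_' + str(page_id))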

But wait: we created the functions to prevent redundancy in our code in the first place, and now we are repeating almost the same function for a different type of page. To avoid this problem, we can create a more generic function that filters the page views by type, provided as another input. Please check the following code:

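A sketch, with the page type now passed as a parameter:

    def get_page_views(df, page_type, page_id):
        # Filter by page type AND page id, both provided as inputs
        views = df[(df['page_type'] == page_type) & (df['page_id'] == page_id)]
        return views.groupby('student_id').size().rename(f'{page_type}_{page_id}')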

Using this generic function, let’s generate all the page view features and combine them into a single dataframe:

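Something like this, with one call per page type and page id (I use 'practice_example' as the type label for the practice pages; the actual string depends on the data):

    df_pw_features = pd.concat([
        get_page_views(df_pw_logs, 'introduction', 1),
        get_page_views(df_pw_logs, 'introduction', 2),
        get_page_views(df_pw_logs, 'introduction', 3),
        get_page_views(df_pw_logs, 'introduction', 4),
        get_page_views(df_pw_logs, 'content', 1),
        get_page_views(df_pw_logs, 'content', 2),
        get_page_views(df_pw_logs, 'content', 3),
        get_page_views(df_pw_logs, 'content', 4),
        get_page_views(df_pw_logs, 'practice_example', 1),
        get_page_views(df_pw_logs, 'practice_example', 2),
        get_page_views(df_pw_logs, 'practice_example', 3),
        get_page_views(df_pw_logs, 'practice_example', 4),
    ], axis=1).fillna(0)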

The code looks a little messy, right? This is because we had to call the function not only for each page type, but also for every possible page id. Imagine what would happen if we had 10 different pages for each page type; it would probably become unmanageable.

This unsustainability calls for a better function: our generic function is actually not generic enough. One improvement is to automatically generate features for all pages available for a given page type. That is, our function can identify the page ids for a given page type and create a page view feature for each identified page id. Please take a look at the following function (the explanation is given in the comments):

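A sketch of the refined function:

    def get_views_by_type(df, page_type):
        # Keep only the pages of the requested type
        views = df[df['page_type'] == page_type]
        # Count views per student and per page: one column per page id found in the data
        counts = views.groupby(['student_id', 'page_id']).size().unstack(fill_value=0)
        # Name each feature after its page type and page id
        counts.columns = [f'{page_type}_{pid}' for pid in counts.columns]
        return counts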

Now let’s use our refined function to create the page view features and combine them all:

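One call per page type is now enough:

    df_pw_features = pd.concat(
        [get_views_by_type(df_pw_logs, ptype)
         for ptype in ['introduction', 'content', 'practice_example']],
        axis=1).fillna(0)
    df_pw_features.head()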

Discussion features

The next feature set that we will create is the number of discussion posts made by each student in each discussion forum. This task is easier, since discussions do not have a “type” (unlike pages). Let’s first load the discussion post logs.

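Assuming the posts are stored in a file named discussion-post-logs.csv with student_id and forum_id columns (both the file and column names are my assumptions), the loading cell might look like this:

    df_disc_logs = pd.read_csv('discussion-post-logs.csv')
    df_disc_logs.head(3)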

Now, deriving from the previous generic function, we can easily write a function for generating the features about students’ discussion activities:

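A sketch, using unstack to get one post-count column per forum:

    def get_discussion_posts(df):
        # Count posts per student and per forum; one column per forum id
        counts = df.groupby(['student_id', 'forum_id']).size().unstack(fill_value=0)
        counts.columns = [f'discussion_{fid}' for fid in counts.columns]
        return counts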

Using this function, we can easily create all the discussion features, as seen below:

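Like so:

    df_disc_features = get_discussion_posts(df_disc_logs)
    df_disc_features.head()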

Video and Quiz features

Next, we will create two sets of features: (1) the number of play/pause activities performed by each student on each unique video; and (2) the number of times students took each particular quiz. Let’s first take a look at these data:

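Assuming file names video-activity-logs.csv and quiz-attempt-logs.csv (with video_id and quiz_id columns, respectively; again, these names are my assumptions):

    df_video_logs = pd.read_csv('video-activity-logs.csv')
    df_quiz_logs = pd.read_csv('quiz-attempt-logs.csv')

    print(df_video_logs.head(3))
    print(df_quiz_logs.head(3))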

The structures of both datasets look very similar to the structure of the discussion dataset. Taking advantage of this similarity, we can build an even more generic function that works for the discussion, video, and quiz features. Please see the code below:

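A sketch of the generic function (here feature_name is used as the prefix of the generated feature names):

    def generate_count_features(logs, col, feature_name):
        # Count each student's records per unique value of `col`
        counts = logs.groupby(['student_id', col]).size().unstack(fill_value=0)
        # Prefix every resulting column with the requested feature name
        counts.columns = [f'{feature_name}_{c}' for c in counts.columns]
        return counts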

Note that this function requires 3 input parameters:

  • logs: the log data (discussion, video, or quiz activities)
  • col: the column where the unique id is stored (forum_id, video_id, quiz_id)
  • feature_name: the prefix used to name the features being created

Using this function, we can create all the features, as seen below:

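For example:

    df_disc_features = generate_count_features(df_disc_logs, 'forum_id', 'discussion')
    df_video_features = generate_count_features(df_video_logs, 'video_id', 'video')
    df_quiz_features = generate_count_features(df_quiz_logs, 'quiz_id', 'quiz')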

Now that all the features are created, we can save them to disk as CSV files.

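For example (the file names are my own; use whatever naming you prefer):

    df_pw_features.to_csv('page-view-features.csv')
    df_disc_features.to_csv('discussion-features.csv')
    df_video_features.to_csv('video-features.csv')
    df_quiz_features.to_csv('quiz-features.csv')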

Training and Testing the Model

We have created all the features that we will use to build our classification model. We will create a new notebook for this task, named ‘training-and-testing-the-models’, in the same working folder. First, let’s import the libraries that we will use:

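A cell along these lines (the exact set of imports is my assumption, based on what we use below):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, roc_auc_score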

Next, let’s import the grades of the first assignment. Remember that the grades for both assignments are stored in the file “assignment-grads.csv”.

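A sketch, assuming the file has a student_id column and stores the first-assignment grade in a column named grade1:

    df_grades = pd.read_csv('assignment-grads.csv')

    # Binarise the grade: above 60 -> 1 (success), otherwise 0 (failure)
    df_grades['PASS'] = (df_grades['grade1'] > 60).astype(int)
    df_grades['PASS'].value_counts()   # 1: 539 students, 0: 261 students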

As you can see in the code above, we converted the student grades into a binary variable and assigned it to a new column called “PASS”. Students with grades higher than 60 (n = 539) are labelled as 1 (i.e., success), and the others (n = 261) are labelled as 0 (i.e., failure).

Let’s load the features we created, starting with the page view features. You may remember that we stored all the page view features (for both modules) in a single file. To train the model, we will use the features created from the Module #1 data. To make predictions and test the model later, we will use the features created from the Module #2 data. So we basically need to separate these features into different sets, as seen in the code:

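A sketch of the verbose version, with every column name written out:

    df_pw = pd.read_csv('page-view-features.csv', index_col='student_id')

    # Module 1 features for training, listed one by one
    pw_train = df_pw[['introduction_1', 'introduction_2', 'content_1', 'content_2',
                      'practice_example_1', 'practice_example_2']]
    # Module 2 features for testing
    pw_test = df_pw[['introduction_3', 'introduction_4', 'content_3', 'content_4',
                     'practice_example_3', 'practice_example_4']]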

However, writing all the column names (i.e., feature names) out one by one is a little cumbersome. We know that the feature names ending with 1 or 2 belong to the first module, and the names ending with 3 or 4 belong to the second one, since the page ids are appended to the feature names. We can take advantage of this to filter by column name, as seen in the code below:

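For example:

    # Feature names end with the page id: 1/2 -> Module 1, 3/4 -> Module 2
    m1_cols = [c for c in df_pw.columns if c.endswith(('1', '2'))]
    m2_cols = [c for c in df_pw.columns if c.endswith(('3', '4'))]

    pw_train = df_pw[m1_cols]
    pw_test = df_pw[m2_cols]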

We can follow the same approach to separate the rest of the features:

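A sketch, assuming the discussion, video, and quiz ids follow the same module naming convention (whether they do depends on the data, so check the column names first):

    def split_by_module(df):
        # Same convention: names ending with 1/2 -> Module 1, 3/4 -> Module 2
        m1 = [c for c in df.columns if c.endswith(('1', '2'))]
        m2 = [c for c in df.columns if c.endswith(('3', '4'))]
        return df[m1], df[m2]

    disc_train, disc_test = split_by_module(
        pd.read_csv('discussion-features.csv', index_col='student_id'))
    video_train, video_test = split_by_module(
        pd.read_csv('video-features.csv', index_col='student_id'))
    quiz_train, quiz_test = split_by_module(
        pd.read_csv('quiz-features.csv', index_col='student_id'))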

Now we need to combine the features with the labels and display the result:

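For example (join aligns everything on the student_id index):

    # Combine all Module 1 features and attach the labels
    data1 = pw_train.join([disc_train, video_train, quiz_train])
    data1 = data1.join(df_grades.set_index('student_id')['PASS'])

    data1.describe()   # basic statistics for every feature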

Above, we used the describe() method to obtain basic statistics about all the features.

Training the Model

Let’s train the model. We will use logistic regression as the classifier algorithm. After training the model, we will first try to predict success/failure in the same assignment:

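A sketch of the training cell (max_iter is raised only to ensure convergence):

    X1 = data1.drop(columns='PASS')   # features
    y1 = data1['PASS']                # labels

    classifier_grade1 = LogisticRegression(max_iter=1000)
    classifier_grade1.fit(X1, y1)

    # Predict on the SAME data the model was trained on
    pred1 = classifier_grade1.predict(X1)
    print('Accuracy:', accuracy_score(y1, pred1))
    print('AUC:', roc_auc_score(y1, classifier_grade1.predict_proba(X1)[:, 1]))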

Unsurprisingly, the results are quite positive. This is because we used the same dataset for training and testing. Note that we also used the Area Under the Curve (AUC) measure to assess model performance; in the social sciences, AUC scores above .70 are considered good. We will soon use classifier_grade1 to predict the outcomes of the second assignment.

Let’s check the coefficients assigned to each feature:

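For example:

    # One coefficient per feature, sorted from most negative to most positive
    coefficients = pd.Series(classifier_grade1.coef_[0], index=X1.columns)
    print(coefficients.sort_values())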

You may see that some of the features are negatively related to the grades. These coefficients are likely to be affected by the learning design of the course, since the learning design shapes how students engage in learning activities, which in turn relates to their assignment grades.

Testing the Model

Let’s load the scores of the second assignment and then merge them with the features for the second module. We repeat the same steps as for the first module.

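A sketch, assuming the second-assignment grade is stored in a column named grade2:

    # Binarise the second-assignment grade with the same 60-point threshold
    df_grades['PASS2'] = (df_grades['grade2'] > 60).astype(int)

    # Combine all Module 2 features and attach the new labels
    data2 = pw_test.join([disc_test, video_test, quiz_test])
    data2 = data2.join(df_grades.set_index('student_id')['PASS2'])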

Now let’s feed the features to the classifier_grade1 model that we created above and make some predictions:

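A sketch (passing .values sidesteps the fact that the Module 2 features have different column names than the ones the model was trained on; the columns must be in the same order so each feature lines up with its Module 1 counterpart):

    X2 = data2.drop(columns='PASS2')
    y2 = data2['PASS2']

    pred2 = classifier_grade1.predict(X2.values)
    print('Accuracy:', accuracy_score(y2, pred2))
    print('AUC:', roc_auc_score(y2, classifier_grade1.predict_proba(X2.values)[:, 1]))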

As you can see from the results, the accuracy of the predictions has dropped. Still, the model trained on the first assignment did not perform badly in predicting achievement in the second assignment. The results could be improved by exploring the influence of the learning design on student activities and mining more features.

Let’s assume a base model in which the predictions are generated from the first assignment. That is, if students failed the first assignment, they are predicted to fail the second assignment as well. Here is the code:

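A sketch, assuming data1 and data2 are indexed by the same students:

    # Base model: a student's outcome in a1 is the prediction for a2
    base_pred = y1.loc[y2.index]
    print('Accuracy:', accuracy_score(y2, base_pred))
    print('AUC:', roc_auc_score(y2, base_pred))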

According to the performance scores, the base model performed much worse than our predictor model. If it had performed very similarly, one could argue that building the model is unnecessary, since success in the second assignment could be predicted simply from performance in the first.

Just out of curiosity, let’s train a model with cross-validation (CV) using data2 and check the coefficients assigned to the features. This way, we can see how well they match the coefficients of the model trained for the first assignment. There are probably many differences, which would explain the lower performance above.

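A sketch:

    from sklearn.model_selection import cross_val_score

    classifier_grade2 = LogisticRegression(max_iter=1000)
    scores = cross_val_score(classifier_grade2, X2, y2, cv=5, scoring='roc_auc')
    print('CV AUC:', scores.mean())

    # Fit on all of data2 to inspect the coefficients
    classifier_grade2.fit(X2, y2)
    coefficients2 = pd.Series(classifier_grade2.coef_[0], index=X2.columns)
    print(coefficients2.sort_values())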

If you compare these coefficients with the previous ones, you will notice many differences and some similarities. This comparison can help us understand how students behaved in each module, reflect on the reasons why we could not obtain very accurate results, and consider how we could improve them with new features. To do this properly, we would need the learning design of the course. Given that the data for this tutorial is imaginary, we will leave this task for the future.

Thanks!

Quick Introduction to Jupyter Notebook

Jupyter Notebook is an open-source web application for creating documents that contain live code. One programming language widely used in Jupyter Notebook is Python, and many powerful data science and machine learning libraries written in Python can be imported and used in it. The goal of this tutorial is to briefly introduce the Jupyter Notebook platform.

The Jupyter Notebook

Please run “Anaconda Navigator” and, in the window that opens, click “Launch” at the bottom of the Jupyter Notebook tile (see Figure 1).


Figure 1. Main interface of Anaconda Navigator

In a few seconds, Jupyter Notebook should start in your default browser (see Figure 2). Initially, it will show the existing folders on your computer. Jupyter Notebook allows us to write Python code using a very simple web interface, which is basically composed of some menu items and cells where we put and run our code. These cells help us organise our code and build our application step by step.


Figure 2. Opening page of Jupyter Notebook

From the opening page, you can navigate to the folder where you want to create and store your coding files. To create a new file, please click New at the top right and choose Python 3 from the list, as seen in Figure 3:


Figure 3. Creating new file in Jupyter Notebook

This will create a new empty notebook and open it in a new tab in your browser, as in Figure 4. Please click on Untitled and enter exercise-1 as the new name of the notebook.


Figure 4. Changing the file name

Initially, the notebook comes with one cell. We can create further cells by clicking the + button on the toolbar. As seen in the following figure, let’s create three cells.


Figure 5. Adding new cell

These cells can communicate with each other within the same notebook; that is, any variable created in one cell is accessible in any of the other cells. Let’s type the following code in the second cell and run it. To run the code in a cell, we press CTRL + ENTER or SHIFT + ENTER. With CTRL + ENTER, the focus stays in the cell we just ran, whereas with SHIFT + ENTER, the focus moves to the next cell (see Figure 6).


Figure 6. Running the code in a cell
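The cell contents are simple (the value 4 is what we will see printed below):

    # Cell 2
    x = 4
    print(x)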

Now, let’s try to print the x variable in the first and third cells as well, as seen in Figure 7.


Figure 7. Printing the x variable in different cells
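That is:

    # Cell 1
    print(x)   # prints 4

    # Cell 3
    print(x)   # prints 4 here too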

You should see that 4, the value of the x variable, is printed in all three cases. That is, from any cell we can still access all the variables created in this notebook, as they are stored in memory.

As shown in Figure 8, the toolbar has several buttons to create a new cell, cut or copy a cell, move a cell up or down, and run the code. The type of a cell can also be changed using the drop-down in the toolbar. For example, you can change the type of a cell from “Code” to “Markdown” if you want to provide descriptive text instead of code.


Figure 8. Toolbar in Jupyter Notebook

Please note that the numbers next to the cells (e.g., In [3], In [2]) indicate the order in which the cells were executed (incrementing by 1 each time code is run in a cell).

If you would like to dive deeper, here are some additional resources:

https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/

https://www.datacamp.com/community/tutorials/tutorial-jupyter-notebook

If you think some very fundamental features are missing from this blog post (or if you have any questions), please share them in the comments section. I will keep updating this post based on your input!