Data from Learning Games: Thoughts from Games+Learning+Society


I have spent this week at the Games+Learning+Society Conference in Madison, WI. This was my fourth year attending, and it never fails to leave me with tons of new ideas.

Given my role, I was at a lot of sessions on the use of data from games. I think that, just as there is a danger of over-hyping the potential of games, there is a danger of over-hyping the potential of data.

However, I also pick up on a nervousness about, or even outright dismissal of, the idea that we can learn from game data. Although I buy into the use of game data, I always:

1. Start by actually watching kids play and listening to them talk while they do so. This observation of what players are doing is essential for interpreting what is happening in the data.

2. Start with hypotheses and theory. No, I’m not going to throw every possible variable into a big pot and trumpet any correlation that happens to emerge.

3. Know that there are things we can’t capture in data streams. Last year Reed Stevens gave a talk analyzing conversations among players as they played. That kind of information is highly revealing. If we just analyzed log files without thinking about all the data that ISN’T there, we could certainly be misled.

How Do We Know We Have the Right Data?

This question popped up on the Twitter feed at the conference, and I don’t think it can be answered in 140 characters. Here are a couple of thoughts:

1. We start with hypotheses about what kinds of data are important for making the inferences we are interested in. When we design tasks, what do we think are the actions that kids will take that will tell us what we are interested in?

2. Now, do the play testing alluded to above. Watch kids play and do think-alouds. When I first started working with the team at GlassLab, I took a video of a kid playing SimCity, recorded my voice over it saying “Collect That” for each action I wanted captured, and sent it to the tech team implementing the telemetry as a first pass at what to collect. How did I decide? I listened for players saying things that provided evidence of their systems thinking or problem solving, and then noted what they were doing when they said them. (A rough sketch of what such a hypothesis-tagged event log might look like appears after this list.)

3. Iterate the design of data collection the same way you iterate the rest of game design. Get information from each round of testing and use it to modify the data you collect. Use each iteration to confirm what you saw in the previous iteration and to develop new hypotheses about the relationships between game actions and the knowledge, skills, or other attributes you want to measure.
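To make the idea in point 2 concrete, here is a minimal sketch (in Python) of what a hypothesis-tagged telemetry event might look like. The event names, fields, and evidence labels are illustrative assumptions on my part, not GlassLab’s or SimCity’s actual telemetry schema; the point is simply that each logged action carries the claim it is hypothesized to inform.

```python
# Hypothetical sketch: logging player actions tagged with the evidence they may provide.
# Names and fields are invented for illustration, not an actual game's telemetry format.
from dataclasses import dataclass


@dataclass
class TelemetryEvent:
    """One logged player action, tagged with the skill it is hypothesized to reveal."""
    player_id: str
    event_name: str   # e.g. "rezone_industrial_to_park"
    game_time: float  # seconds into the play session
    details: dict     # action-specific payload (what was placed, where, why)
    evidence_for: str # the knowledge or skill this action is hypothesized to inform


def log_event(stream: list, player_id: str, event_name: str,
              game_time: float, details: dict, evidence_for: str) -> None:
    """Append an event; in a real game this would go to a telemetry service."""
    stream.append(TelemetryEvent(player_id, event_name, game_time, details, evidence_for))


# Example: the "Collect That" decisions become explicit hypotheses in the log.
session: list = []
log_event(session, "player_01", "rezone_industrial_to_park", 312.5,
          {"trigger": "pollution_complaint"}, evidence_for="systems_thinking")
log_event(session, "player_01", "compare_power_plants", 355.0,
          {"options_viewed": ["coal", "solar"]}, evidence_for="problem_solving")
```

Structuring the log this way keeps the design hypotheses visible, so each iteration of play testing can confirm or revise which actions actually carry evidence.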

So, in summary, we need to use multi-method investigations and multiple iterations to gain the most benefit from game data.

Kristen DiCerbo

About Kristen DiCerbo

Kristen DiCerbo's research program centers on digital technologies in learning and assessment, particularly on the use of data generated from interactions to inform instructional decisions. Dr. DiCerbo, senior research scientist, has conducted qualitative and quantitative investigations of games and simulations, particularly focusing on the identification and accumulation of evidence. She previously worked as an educational researcher at Cisco and as a school psychologist. She holds doctorate and master’s degrees in Educational Psychology from Arizona State University.