Many educators, and especially those interested in educational technology, are currently obsessed with the idea of personalized learning. It’s at the heart of some well-hyped initiatives such as the School of One in New York, in which students have tailored schedules, called “Playlists,” that guide them from activity to activity, with computer algorithms generating specialized lesson plans based on each student’s prior performance. The EU’s iClass experiment is also based on this idea of personalization via technology.
The basic promise of personalization is easy to grasp: not every child in a classroom is at the same level, and there’s (presumably) no way for a teacher to teach to all of these differences effectively. In the past, many educators largely relegated personalized learning to those in need of remediation, the so-called “low performers” in a class. This remediation often took the form of tutors, an expensive but effective approach. Later we also saw (and continue to see) Individualized Education Plans (IEPs, in ed lingo), usually reserved for high-risk or special education students. The idea of having every student engage in some form of personalized learning, evidenced in initiatives like the School of One, may seem novel, but it is not new. Cognitive Tutors, computer programs that aim to replicate the effective guidance and adaptability that human tutors have been proven to provide, have been in the personalized learning game since at least the 1980s.
Cognitive Tutors essentially incorporate cognitive models of both novice and expert thinking around a certain domain, like math, into a computer program. A learner is challenged to solve a problem that relates to that knowledge, with the program providing hints while also taking into account multiple paths toward solving that problem as well as some common misconceptions represented in its novice models (Koedinger & Corbett, 2006). Most of the tutors I’ve seen deal with math and science, and are predicated on the idea that there is one right answer to a problem, though potentially multiple paths toward getting to that answer. Even the most progressive (and impressive) among technologies of this sort, Dan Schwartz’s Teachable Agents, which flip the model of the cognitive tutor by having the student school the computer rather than the other way around, are still predicated on there being one right answer to a particular problem. To me, these are highly sophisticated ways to teach the basics, i.e., the stuff that we as a society already know. But what about what we don’t know? Isn’t that the sort of thing we need our future leaders grappling with?
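The basic shape of such a tutor, an expert model of valid solution steps, a novice model of common misconceptions, and hints keyed to those misconceptions, can be sketched in a few lines. Everything here (the example equation, the step strings, the hint wording) is my own illustration, not drawn from any actual tutor:

```python
# Sketch of the cognitive-tutor pattern: an expert model (valid steps),
# a novice model (common misconceptions), and hints keyed to the latter.
# Problem for the example: solve 2x + 3 = 11.

EXPERT_STEPS = {
    "2x = 8",         # subtract 3 from both sides
    "x = 11/2 - 3/2",  # divide first, then subtract (an alternate path)
}

MISCONCEPTIONS = {
    "2x = 14": "It looks like you added 3 instead of subtracting it.",
    "x = 8": "Remember to divide both sides by 2 after isolating 2x.",
}

def check_step(step: str) -> str:
    """Classify a student's step and return feedback."""
    step = step.strip()
    if step in EXPERT_STEPS:
        return "Good - that's a valid step."
    if step in MISCONCEPTIONS:
        # The hint is targeted at the specific novice-model error.
        return MISCONCEPTIONS[step]
    return "I don't recognize that step. Try isolating the x term first."

print(check_step("2x = 8"))   # a valid expert path
print(check_step("2x = 14"))  # matches a known misconception
```

Note how the design bakes in exactly what I describe above: multiple valid paths, but ultimately one right answer.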
This leads me to what I believe cognitive tutors can shed light on in terms of the model of technology and learning that I’m developing for a course I’m taking, a model I originally introduced and contextualized here and which you can interact with here. I’ve included a static image of the model for reference here:
One of the key innovations that I include in the model is this “Technology Driven Personalization System,” and it’s this idea that I think cognitive tutors can speak to, not because of what they do but because of what they don’t do. The general idea behind this personalization system, for me, is some kind of coordinating body that pays attention to all the “nodes” in a youth’s learning ecology and, based on what it sees, recommends what might be best for the young person to pursue next from a learning perspective.
The proposal I’m making about personalization here is far less structured than how cognitive tutors conceive of the idea. It does not assume that there is one “right answer” as to the learning trajectory a learner should follow; indeed, it doesn’t envision an end goal at all. In contrast to heavily scaffolded learning technologies like cognitive tutors and many games (a technology I’m a fan of from an educational standpoint), what I’m envisioning is much more about resourcing the young person to pursue their own interests and their own values, as opposed to an imposed standard of what’s important to know. My model assumes that we must trust youth to become active learners, but it doesn’t assume that they already have access to the tools and opportunities they need to do so. That is the role of the system I’m presenting here.
At the same time, I acknowledge that every system has its own politics and priorities, and so the question of what kind of ideology is baked into the system is a very good one. Ideally, I’d like to see a system whose inherent ideology is itself based on having others bring their own ideologies to it and ‘make recommendations’ based on them. Since many teenagers are not yet at the stage of having clearly articulated value systems and interests, I can envision the system integrating data about them in multiple ways: some more explicit (profiles with interests they’ve filled out, information about programs they’ve gotten involved with, classes they’re currently taking) and others less explicit (a match-question system, common on dating sites, that doesn’t directly ask what you’re interested in or how you think but rather poses situations or hypotheticals for you to respond to, with your responses serving as indicators). All of this would then be integrated into a profile of a given learner and what they’d like to pursue, which brings us, of course, to the issue of privacy and surveillance.
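To make the explicit/implicit distinction concrete, here is a minimal sketch of how those signals might be merged into one weighted interest profile. The signal names and weights are my own assumptions, chosen purely for illustration:

```python
# Sketch: merge explicit signals (stated interests, enrolled programs)
# and implicit signals (topics inferred from hypothetical-scenario
# responses) into weighted interest scores. Weights are arbitrary.
from collections import Counter

def build_profile(stated_interests, enrolled_programs, scenario_answers):
    """Combine signal sources into a single Counter of interest scores."""
    scores = Counter()
    for topic in stated_interests:   # explicit: self-reported
        scores[topic] += 3.0
    for topic in enrolled_programs:  # explicit: observed choices
        scores[topic] += 2.0
    for topic in scenario_answers:   # implicit: inferred indicators
        scores[topic] += 1.0
    return scores

profile = build_profile(
    stated_interests=["robotics"],
    enrolled_programs=["robotics", "music"],
    scenario_answers=["music", "writing"],
)
# robotics: 3 + 2 = 5.0, music: 2 + 1 = 3.0, writing: 1.0
```

The point of the sketch is only that self-reported data and behavioral indicators can be folded into one picture of the learner; any real system would need far more care about how those inferences are made.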
As someone deeply concerned about issues relating to exploitation and privacy online, my own proposal makes me nervous. Most of us are currently in a situation online where we’re not the customer in places like Facebook, Twitter and Google; we’re the product. Personal data is being packaged and sold to the highest bidder in the form of marketers, and governments are increasingly surveilling their citizens in these spaces. And it’s exactly the kind of personalization and recommendation engines that exist in places like Netflix, Amazon and Facebook, ones based on the existing data about a user, that I would imagine powering a personalized learning system of the sort I’m envisioning. That’s why it makes me nervous, and it’s also why the point I make above about politics and priorities being embedded in the system is so important: given the level of information that something like this would have about a young person, it’s essential that it be clearly designed around the principle of resourcing a young person to pursue their own interests according to their own values.
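A toy version of that recommendation step, scoring each “node” in the learning ecology (a program, a class, a mentor) against the learner’s profile, might look like this. The catalog entries and the overlap-based scoring rule are assumptions for the sake of illustration, not how any real engine works:

```python
# Sketch: rank opportunities by how well their topic tags overlap
# with the learner's weighted interest profile.

def recommend(profile, catalog, top_n=2):
    """Return the names of the top_n best-matching opportunities."""
    def score(opportunity):
        # Sum the learner's interest weight for each matching tag.
        return sum(profile.get(tag, 0.0) for tag in opportunity["tags"])
    ranked = sorted(catalog, key=score, reverse=True)
    return [o["name"] for o in ranked[:top_n]]

profile = {"robotics": 5.0, "music": 3.0}
catalog = [
    {"name": "After-school robotics club", "tags": ["robotics"]},
    {"name": "Community choir",            "tags": ["music"]},
    {"name": "Debate team",                "tags": ["writing"]},
]
print(recommend(profile, catalog))  # robotics club first, then choir
```

Even this trivial version makes the worry above tangible: the quality of the recommendation depends entirely on how much the system knows about the young person.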
Finally, I’d envision the system incorporating some of the designs that drive Diaspora*, the open source social network that arose in response to Facebook privacy issues in 2010. In Diaspora, users have full ownership of their data, can share (or not share) with whomever they want, and simple privacy controls are put at the forefront. I would imagine the same, and more, for a system that would hold so much data about a young person. And if I truly believe in self-determination on the part of the young person, putting them in full control of their footprint within this system only makes sense.