Humanizing adaptive learning for ELT: Part 1

Part 1:  Knewton, adaptive learning, and ELT
Part 2:  Open platforms and teacher-driven adaptive learning

The debate over adaptive learning at eltjam, Philip Kerr’s blog, and Nicola Prentis’ Simple English has been both fascinating and instructive, not only due to the posts but also the great dialogue in the comments. It’s a fun topic because it involves our core beliefs regarding language acquisition, effective teaching, and the roles that technology can play.

That said, I can’t help but feel that in some respects we’re thinking about adaptive learning in a limited way, and that this limited perspective, combined with Knewton confusion, is distorting how we approach the topic, making it into a bigger deal than it really is. But, given the potential power that the new “adaptive learning” technology may indeed have, we do need to see clearly how it can help our teaching, and where it can potentially go wrong.

Adaptive learning in context
I wrote “adaptive learning” in scare quotes above because I think the name itself is misleading. First, in a very important way, all learning is adaptive learning, so the phrase itself is redundant.  Second, the learning, which is carried out by the learner, is not what the vendors provide: “Knewton…constantly mines student performance data, responding in real time to a student’s activity on the system. Upon completion of a given activity, the system directs the student to the next activity.” That is not adaptive learning, but rather adaptive content; it is the content sequence (of “activities”) that adapts to the learner’s past performance. We can call adaptive content “micro-adaptation”, since it happens at a very granular level.
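What micro-adaptation amounts to can be sketched in a few lines of Python. This is a toy illustration only: the activity names, difficulty ordering, and mastery threshold are all invented here, and Knewton's actual algorithms are statistical and far more sophisticated.

```python
# A minimal sketch of "adaptive content" (micro-adaptation): after each
# scored activity, the system picks the next one from past performance.
# Names and the 0.8 threshold are illustrative, not Knewton's actual model.

def next_activity(history, activities, threshold=0.8):
    """Pick the easiest activity the learner hasn't yet mastered.

    history:    {activity_id: score between 0 and 1}
    activities: list of (activity_id, difficulty) pairs
    """
    for activity_id, _difficulty in sorted(activities, key=lambda a: a[1]):
        if history.get(activity_id, 0.0) < threshold:
            return activity_id
    return None  # everything mastered; hand control back to the teacher

activities = [("past_simple_1", 1), ("past_simple_2", 2), ("past_perfect_1", 3)]
history = {"past_simple_1": 0.9, "past_simple_2": 0.6}
print(next_activity(history, activities))  # past_simple_2
```

The point of the sketch is only that the sequencing decision is made per activity, from objectively scored performance data, which is why anything that can't be scored this way falls outside the loop.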

Now, good teachers have been adapting to our students for how long…millennia? We assign homework based on what we know of our students’ strengths and weaknesses (adaptive content).  In the communicative classroom, we are always adjusting our pacing, creating new activities, or supporting spontaneous discussion based on our perception of the students’ needs in that moment (the adaptive classroom).  Dogme is one kind of adaptive learning in the classroom. And, when the stars align, educators can successfully design and deploy a curriculum, including methods and approaches, that iteratively adapts to student needs over time (the adaptive curriculum). We can call the adaptive curriculum “macro-adaptation”.

So how does the new, algorithmic adaptive learning, of the kind Knewton helps deliver, address each of these categories?

  • As we saw above, the content level is where Knewton focuses, and it’s limited to task-level online content that can be objectively scored (micro-adaptation). But it can do amazing things with this limited data, especially when the data is aggregated (“big data”). Knewton can change the activity sequence in real time to better fit the student’s performance, and can then make statistical inferences about the quality of specific activities and sequences of activities.
  • For the classroom, students would need tablets or smartphones in order to input the data that Knewton needs. I can think of some very cool pairwork and groupwork tasks involving tablet-based activities, but these aren’t individualized and so would be out of Knewton’s scope. Presumably the student data can only be created by individual tasks, which would severely limit its utility in a communicative classroom. However, the content level input resulting from student online work (e.g. homework, or from a blended course) could be valuable for teachers to have and could help optimize classroom lesson planning.
  • For the curriculum category, algorithmic adaptive learning can analyze the student performance data resulting from the content level, and then deliver insights that can potentially be fed into the curriculum, helping certain aspects of the curriculum iterate and adapt over time (there are limitations here that are discussed below).

So as a tool, Knewton has potential for the ELT profession. But whether the tool is used appropriately does not depend on Knewton, but rather on the publishing partners that use Knewton’s tools. All Knewton does is provide publisher LMSs with a hook into the Knewton data infrastructure. Knewton is a utility. It’s the publishers that decide how best to design courses that use Knewton in a way that is pedagogically appropriate and leverages Knewton’s strengths to provide adaptive content, classrooms, and curricula. It is the publishers that must understand Knewton’s limitations. As a tool, it can’t do everything – it can’t “take over” language learning, or relegate teachers to obsolescence, although Knewton’s marketing hyperbole might make one think that.

Limitations for ELT
If Knewton’s ambition is one concern, then another is that it is not specifically designed for ELT and SLA, and therefore may not understand its own limitations. Knewton asks its publishing partners to organize their courses into a “knowledge graph” where content is mapped to an analyzable form that consists of the smallest meaningful chunks (called “concepts”), organized as prerequisites to specific learning goals.  You can see here the influence of general learning theory and not SLA/ELT, but let’s not concern ourselves with nomenclature and just call their “knowledge graph” an “acquisition graph”, and call “concepts” anything else at all, say…“items”.  Basically our acquisition graph could be something like the CEFR, and the items are the specifications in a completed English Profile project that detail the grammar, lexis, and functions necessary for each of the can-do’s in the CEFR.  Now, even though this is a somewhat plausible scenario, it opens Knewton up to several objections, foremost the degree of granularity and linearity.
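A minimal sketch of what such an “acquisition graph” might look like as a prerequisite map, assuming nothing about Knewton's actual implementation. The item names are invented, CEFR-flavoured placeholders, not real English Profile specifications.

```python
# Sketch of a "knowledge graph" as a prerequisite map: each item lists the
# items that must be mastered first. All item names are invented examples.

prerequisites = {
    "can_order_food_A2": ["countable_uncountable_A1", "polite_requests_A1"],
    "polite_requests_A1": [],
    "countable_uncountable_A1": [],
}

def available_items(mastered, graph):
    """Items not yet mastered whose prerequisites are all mastered."""
    return sorted(
        item for item, prereqs in graph.items()
        if item not in mastered and all(p in mastered for p in prereqs)
    )

print(available_items({"polite_requests_A1"}, prerequisites))
# ['countable_uncountable_A1']
```

The cause-effect objection raised further down amounts to saying that, in language learning, items can enter the “mastered” set from outside the graph entirely, which is exactly what a prerequisite model does not anticipate.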

A common criticism of Knewton is that language and language teaching cannot be “broken down into atomised parts”, but can only be looked at as a complex, dynamic system.  This touches on the grammar McNugget argument, and I’m sympathetic. But the reality is that language can indeed be broken down into parts, and that this analytic simplification is essential to teaching it. Language should not be taught only this way, and of course we need to always emphasize the communicative whole rather than the parts, and use meaning to teach form.  But to invalidate Knewton because it uses its algorithms on discrete activities is to misunderstand the problem. Discrete activities are essential, in their place. The real problem that Knewton faces in ELT is that both the real-time activity sequencing, and the big data insights that are delivered based on these activity sequences, are less valuable and could be misleading.

They are less valuable in ELT for at least two reasons. First, the big data insights come from a limited subset of activities, and much student-produced language and learning data is never captured.

Second, language learning is less linear than other, general learning domains (e.g. biology, maths). Unlike in those domains, most language students are exposed to ungraded, authentic, acquirable language (by their teacher, by the media, etc.) that represents an approximate entirety of what is to be learned. Algebra students are not exposed to advanced calculus on an ongoing basis, and if an algebra student were exposed to calculus, the student wouldn’t be able to “acquire” calculus the way humans can acquire language.  Therefore, for ELT, the cause-effect relationship of Knewton’s acquisition graph and the map of prerequisite items is to some extent invalidated, because the student may have acquired a prerequisite item by, say, listening to a song in English the night before, not by completing a sequence of items. That won’t happen in algebra.

Because of these limitations, Knewton will need to adapt its model considerably if it is to reach its potential in the ELT field.  They have a good team with some talented ELT professionals on it, who are already qualifying some of the stock Knewton phraseology (viz. Knewton’s Sally Searby emphasising that, for ELT, knowledge graphing needs to be “much more nuanced” in the last Knewton response on eltjam).  And, hopefully Knewton’s publishing partners will design courses with acquisition graphs that align with pedagogic reality and recognize these inherent limitations.

Meanwhile, as they work to overcome these limitations, where can Knewton best add value to publisher courses? I would guess that some useful courses can be published straightaway for certain self-study material, some online homework, and exam prep – anywhere the language is fairly defined and the content more amenable to algorithmic micro-adaptation. Then we can see how it goes.

In Part 2 of this post we’ll focus on the primary purpose of adaptive learning, personalization, and how this can be achieved by Knewton as well as by teacher-driven adaptive learning using open platforms.  As Nicola points out, we need to start with “why”….

Featured Photo Credit: Billy Wilson Photography via Compfight cc. Text added by eltjam.

31 thoughts on “Humanizing adaptive learning for ELT: Part 1”

  1. Hi Cleve and thanks for this. A well-argued middle ground is what I like to see, and you have obviously thought about this a lot.
    I have never (knowingly) seen eLearning with an adaptive learning algorithm in action, except of course for stuff based on ‘If score > 80 Goto Page X’; ‘If score <50 Goto Page Y'. This has been available in authoring tools for a decade or more. In the very first Knewton post he talks about this and says it is not what Knewton is about, so let's leave that aside.
    In my mind I keep returning to a point I made in an earlier post, and I wonder if you could deal with it in part 2? It is this: who knows what's best to do on-screen next, an algorithm or the learner themselves?
    Let's take a concrete example. One of the good things about eLearning is that it offers the chance to easily go back to material and revise and recycle. Lots of stuff out there about 'spaced repetition' and 'distributed practice' that argues that our brains need this repetition for memory to develop: it helps the neurons to make connections. All that Ebbinghaus Forgetting Curve stuff: http://en.wikipedia.org/wiki/Forgetting_curve
    OK. I do an online exercise and get a good score. Presumably the adaptive learning algorithm moves me on. And I would also choose to move on, so no conflict so far. But as a learner what I need to do is go back and revise that material again, one day later, then again one week later, then again one month later. That's the way to beat the Forgetting Curve. As a learner, I can choose whether or not to do that. But does the algorithm give me that choice? I imagine that in practice it adapts to my good score and thinks that the material is now learned.
    So, to rephrase the above paragraph, how does adaptive learning know whether material is learned/acquired in my brain, rather than simply covered in the course with a good score but then forgotten? The alternative, that only the learner themselves knows this, also has problems of course: revision may be perceived as boring and therefore skipped. But at least the learner is aware that things get forgotten, and knows which things.
    Let's take a second concrete example. I give 30 minutes a day to my online language learning and yesterday I did an activity where I read a text, did follow-up comprehension Qs and then did some vocabulary development. I enjoyed it and felt it was valuable. I log on: what should I do today? A little grammar maybe, I haven't done that in a while. Or some listening. Or X. Or Y. Or Z. Hmm … what do I fancy? The answer depends on my mood, on my feelings, on what catches my eye on the screen. I think I'll try X. No, after a minute I don't like that, let's leave that and try Y. Hmm … let's go somewhere on the site I'm not 'meant' to go – too high level. Hey! It's hard but I'm having fun. Let's go somewhere I am 'meant' to go – the next unit. Hey, I already know most of this stuff – let's leave. And so on. It's all a human process, under my control, and part of the fun. Does adaptive learning simulate that process? If it doesn't, then I will get increasingly annoyed with its attempts to second-guess my interests/moods/needs right now.
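The forgetting-curve scheduling described above can be sketched very simply: model retention as exponential decay and flag an item for review once predicted retention drops below a threshold. The strength values and the 0.5 threshold are purely illustrative, not anything a real system is known to use.

```python
import math

# Ebbinghaus-style retention: R = exp(-t / S), where t is days since the
# last review and S is memory "strength". S values here are illustrative.

def retention(days_since_review, strength):
    return math.exp(-days_since_review / strength)

def due_for_review(days_since_review, strength, threshold=0.5):
    """Schedule a review once predicted retention falls below the threshold."""
    return retention(days_since_review, strength) < threshold

print(round(retention(1, 2.0), 3))   # 0.607 -> not yet due
print(due_for_review(7, 2.0))        # True: retention has decayed below 0.5
```

In a real spaced-repetition scheme each successful review would also increase the strength value, pushing the next review further out (the one day / one week / one month pattern mentioned above).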

    • Hi Paul

      Thanks for your comment – you bring up some very interesting questions, which point out nicely some of the key issues of the adaptive learning debate. Let me say upfront that I haven’t sat down as a student and done online ELT study with publisher content connected to Knewton’s infrastructure, and am relying on what I have read from Knewton and through discussions with people who would know. And a fair amount of supposition 😉

      That said, on to your questions, and I’m not answering them so much as just brainstorming here:

      Who knows what’s best to do on-screen next, an algorithm or the learners themselves?

      Maybe the way to approach this is to break down the idea of “on-screen” into the various types of activities and tasks that can be done online, and then decide who is “directing” the activity, the student, teacher, or algorithm. For students, my thought is that they will mostly interact with English through web content that is intrinsically interesting to them, like music or cat videos or Facebook or sports highlights or playing WoW with English native speakers. Then you’ve got your online activities that are more focused on language, but have been designed by teachers: webquest-y things, mobile activities, project-based work (e.g. some of the fantastic stuff that Vicky Saumell has been doing). These are more creative and open and meaning-focused; language is often a means to an end (the end being e.g. the project), and generally are a great way to engage students. And then you have the type of online content that is more focused on the language as an end in itself. Things like drilling and the type of exercises that have an objective answer, and could be either discrete or integrative items (as the testing folks put it). In this case, I think the algorithm should be in charge. The type of recycling and ongoing assessment and adjustment to student performance that is possible here will be really valuable. Now, a good teacher could do the same thing, more or less, but not instantly and individually for 5 classes of 20 students every day.

      So another way to respond to your question of who is in charge, the student or the algorithm, would be: the student, always, decides what to do at the macro level: do I look for music lyrics because I like the song? Do I work on my project assigned by my teacher? Or do I work on some homework drills? In the latter case, then the algorithm will identify the optimal sequence of activities through micro-adaptation.

      How does adaptive learning know whether material is learned/acquired in my brain, rather than simply covered in the course with a good score but then forgotten? 

      I’m speculating here, but I assume that there are statistically-generated recycling and assessment built into the algorithms. The Ebbinghaus curve you reference is exactly the type of thing those statistics folks love, and I’m sure they have a range of tools like this. This is exactly what the Knewton data infrastructure is great at.

      Does adaptive learning simulate that process? If it doesn’t, then I will get increasingly annoyed with its attempts to second-guess my interests/moods/needs right now.

      My guess is that you will be annoyed in the early stages of algorithmic adaptive learning, but that Knewton will soon include some “exploratory” functionality that allows this type of self-directed wandering. They thrive on data, and you can get data doing a lot of different things. Because Knewton’s goal is personalization, this type of exploration by a student would actually yield excellent data about what interests and motivates the student.

      But, maybe the restricted type of activities that Knewton works on is not the best place for online exploration and wandering…maybe we leave that to the student when looking at song lyrics or doing teacher-assigned project work. This is where the publishers need to step up and use the Knewton toolset in the right way.

  3. Cleve, great post! Looking forward to the next one.

    You note that “good teachers have been adapting to [their] students for how long…millennia?”

    Indeed. Which raises the question as to what adaptive learning software adds that, say, good teacher training could add just as effectively. Or, to re-phrase Neil Postman, What is the problem for which adaptive learning software is the answer? A cynic might suppose that it’s not what adaptive learning software adds (that motivates the enthusiasm surrounding its development), it’s what it takes away – i.e. the classroom teacher. Removing the teacher from the equation would eliminate a factor that is costly, capricious, inherently conservative, and, on the whole, unloved by publishers and ministries alike.

    In a very recent book, a self-styled polyglot called Benny Lewis (‘Fluent in 3 months’, HarperCollins 2014) describes a language learning system called HB 2.0. Among other features of this system he lists these:

    Advanced voice-recognition and feedback-based correction; […]
    Context-based recognition: Even if you do make mistakes, the system automatically adjusts for this and derives what you mean from the context […]
    An almost infinite data-base of interactive conversations; […]
    Built-in positive reinforcement: This system automatically detects when you are running into difficulty and provides encouraging messages to get you back on track […]
    (pp. 101-102)

    You guessed it: HB 2.0 is simply another HUMAN BEING. Lewis comments, ‘We keep trying to find language-learning solutions through courses, software, apps, flights abroad, books, schools, and a host of other methods, some of which can be useful but they are nothing but accessories to the true core of language-learning: the people we speak with and hear’.

    I might qualify this (very dogmetic) assertion, by adding that, of all the people we speak with and hear, the teacher is potentially the most effective. HB 3.0 perhaps. 🙂

    • I still think the conversation about Adaptive Learning is going down the wrong route. How many, many times have I heard teachers wanting their students to be more autonomous? Like we try to train them lexically to pick out chunks so they can do it for themselves. Like we want them to watch original version film and TV to get better at listening. Like we tell them to go and put themselves in real life situations where they have to speak without the teacher there. Adaptive Learning is just another tool for that. It is not, cannot and will not be a replacement for teachers any more than the Language Lab headphones and cassettes were that we used in French class at school. Was there all this fuss then?!

      • Nicola, I don’t think we can say that adaptive learning is ‘just another tool’ for the sorts of things we already do in classrooms. It is a tool that is being promoted by a massive industry with huge lobbying power. It is being presented, very explicitly, as a way of increasing class sizes and of reducing educational costs. In the US (see http://www.classsizematters.org/ as a starting point for more information), adaptive learning is intimately connected with the privatization lobby, and this will follow elsewhere. What happens in ELT on a global scale will probably be determined very little by ELT considerations: ELT content will be just a relatively small part of a much bigger adaptive package.
        At the moment, there isn’t much fuss in ELT – just the occasional blog post here and there. Most people haven’t a clue what adaptive learning means. It could be the case that what is happening in the US (both K12 and tertiary) will not be replicated in global ELT, but I can see no reasons to be overly optimistic.

        • Very good points Philip, and as someone who lives in the US I suffer daily from watching the lobbying and influence of industry on our education system. It’s important to work towards providing a strong voice against these interests. But I think we should be careful not to conflate the misuse of adaptive learning for these ends with its intrinsic value when used correctly… we need to keep these arguments separate.

  4. Hi Cleve,
    a speaker at a conference in Rustaq, Oman, quoted Sugata Mitra’s phrase that a ‘teacher who can be replaced by a machine should be’, which is a nice discussion point! Adaptive learning (the models so far on offer, like cyber homework, Duolingo etc.) can’t yet replace the teacher because, however nuanced they are now or are likely to be in the near future, there are so many things – empathetic and delicate – that they can’t do yet. But if they could? Would Mitra be right?
    We’ll get to hear him in Harrogate in a couple of weeks to see!
    Jeremy

  5. Hi Scott, Jeremy

    I think I got so enamored with cutting through the details of Knewton that I didn’t explain simply enough my thinking on the “replace the teacher” issue 🙂

    There is no way that algorithmic adaptive learning can replace the teacher, whether it’s in the classroom, or designing a course or curriculum. No one needs to worry about that, despite certain marketing claims or click-bait media coverage. After 25 years in the ELT field I’m still amazed by the near-infinite number of variables in every classroom: each student is different, each teacher is different, each culture is different, schools are different, every day is different (this is one reason SLA research has been so difficult – too many variables). And since language is the foundational cognitive structure we use to manage everything else in the world, it’s not like learning maths – it’s too subtle, variable, subjective, a bit mysterious still. Only another human has, as Jeremy puts it, the empathy and delicacy to provide optimal instruction.

    But guys, machines are fricking awesome at marking objectively-scored homework or self-study. Say you are teaching in secondary school: Knewton can finish off scoring the homework of all 150 of your students in about 0.5 seconds and provide a report listing everyone’s scores, strengths and weaknesses, time spent…tons of useful data.

    Why have a human spend hours and hours doing this? The teacher can spend that time preparing better communicative activities, project work, etc. for the classroom, where the real magic happens. We need that time for human imagination. We worry about machines taking over human work…we should flip that around, and ask “why have a human waste time doing something a lowly machine can do instantly and effortlessly?”. Imagine a teacher grading homework – much of the time the teacher is acting like a machine. It’s a waste of what is essentially human.

    Now you can tell I’m coming from my specific professional focus on blended learning – where classroom and online work are blended together. What works here is that the classroom is freed up for purely human interaction, and the heads-down activities can be done online without taking up time in class.

    In short, Knewton will be good for online homework and self-study material, especially exam prep. It can improve teachers’ lives by instantly and effortlessly managing homework or online work for blended courses. But all this is just support for a teacher-driven program.
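The kind of instant report described here (scores, strengths, and weaknesses for 150 students at once) is exactly the sort of aggregation machines do well. A hedged sketch with an invented data layout; no claim that this resembles Knewton's actual reporting.

```python
# Sketch of an instant "scores, strengths and weaknesses" class report.
# The submissions layout and skill names are invented for illustration.

def class_report(submissions):
    """submissions: {student: {skill: score 0-1}} -> per-student summary."""
    report = {}
    for student, scores in submissions.items():
        average = sum(scores.values()) / len(scores)
        weakest = min(scores, key=scores.get)  # lowest-scoring skill
        report[student] = {"average": round(average, 2), "weakest_skill": weakest}
    return report

submissions = {
    "ana":  {"grammar": 0.9, "vocabulary": 0.6},
    "luis": {"grammar": 0.4, "vocabulary": 0.8},
}
print(class_report(submissions))
```

Scaling this from 2 students to 150 is effectively free, which is the whole argument: the machine handles the mechanical aggregation so the teacher's hours go into the classroom.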

    • Cleve, you have made your point very well here.

      In your OP you acknowledge that Knewton’s marketing hyperbole might lead people to believe that they wanted to take over language learning and relegate teachers to obsolescence. I would suggest that, in order to counter this accusation, technology companies such as Knewton would do well to try and engage with teachers much more than they are currently doing. I understand that the marketing hyperbole is necessary because they have a product to sell. But surely, for reasons of corporate ethics if nothing else, Knewton has a responsibility to engage and consult with the end users of their own products?

      David Liu’s response on this blog a couple of weeks ago was a step in the right direction.

      • Cleve,

        I find it quite interesting that the more you dissect this the more worried you sound. I think teachers who look at this system, not as a teacher replacement device (too many limitations RIGHT NOW), but as a way to reduce costs (and friction in the system) come away from this remarking on the intent behind the system. Many of the teachers who comment positively on this development seem to think this is being developed as a machine for teachers to use (homework correction, student memory aid, training partner). I however think this is being developed with a far different intent. And I think with beginning technology of this importance that INTENT is even more important than the tool.

        You advocate that others speak for David Liu. I say it is great that he speaks for himself. I applaud him for revealing his intent. But his intent isn’t to create something that helps the teacher. His intent is to support big businesses.

        I am sure machine intelligence can be harnessed to the job of supporting the teacher. Like you say, machines can do many great things. It would be great, for example, if a machine could help prepare each of my students for class and tell me if they did in fact prepare. But I want to be in control. I don’t want the machine telling ME what to teach (and even scripting my interactions). Instead, I want it to operate in reverse. I want the machine to help ME teach listening, speaking, vocab (NOT so much grammar). I don’t so much want a machine that tells students what they don’t know; rather, I want a machine that shows students how to communicate using what they already know and shows students how to politely adapt to conversations when they don’t understand.

        David might argue that in ELT they are only picking the low-hanging fruit, all the easy to do stuff. He is a first adopter trying to establish his position before his competitors can establish their places in the eco-system. I however would like to encourage the people nipping at his heels to lay a place at the table for teachers. To these others I say, plan your system with teachers in mind and please do not view teachers as the friction in the system that you are trying to eliminate.

        One day we will have teaching machines. Of that I am certain. But if the intent behind the machines is “correct” from the beginning (e.g. the Three Laws of Robotics) the development could produce a system that will benefit everyone in the teaching/learning ecosystem.

        I believe that Capitalism, for all of its purported good, is by nature incredibly shortsighted. So, intent is important because intent with regard to technology does create the future. I think teachers should collectively wait to support a system that puts them near the center of a new system. Teachers should reject Knewton in its present design as a QWERTY inspired system that can only hamper our future development.

        (Writer’s note: the QWERTY system, while once a boon to typists typing on mechanical devices, is now seen as an impediment to greater typing speeds. QWERTY, now a universal standard, has instead become the friction in the system.)

    • Hi Nicola

      I agree with you 100% on your “what is all the fuss about?” question, in that Knewton should be only a tool for teachers that can do really neat stuff very quickly and accurately, and add another dimension to homework, self-directed study, and autonomous learning (although the types of activities that Knewton’s infrastructure can manage are limited – autonomous learning exclusively with Knewton material would not be appropriate for most students IMO).

      That said, there are a few reasons why there is concern. I tend to not share that concern personally, but only because I happen to be overly optimistic 😉

      The problem is a combination of three things: a very interesting technology innovation (algorithmic adaptive learning), combined with marketing hyperbole on the part of the vendors, combined with the desire and need of educational organizations to reduce costs. There will be some decision makers who, in order to save money, will reduce F2F classes with teachers and replace them with adaptive learning self-access components, because they have been told this is possible or even advisable. So if some people (wrongly) believe that adaptive learning can “replace” a teacher, then it will happen (i.e. in philosophy terminology, that is a descriptive statement, whereas I’ve been – optimistically – making a normative argument).

      This is exactly what happened to me when I ran a BE school back in the late ’90s. Global English had just launched, went around to all the multinationals, and told them they could save money by replacing teachers (supplied by me) with the online Global English courses. From one day to the next I lost one of our largest clients, who adopted Global English worldwide. Of course, six months later we were back in the company teaching again, because the Global English experience failed in this case. So the same thing may (ok, will) happen occasionally due to Knewton-driven publisher products.

    • Nicola, I don’t think we can say that adaptive learning is ‘just another tool’ for the sorts of things we already do in classrooms. It is a tool that is being promoted by a massive industry with huge lobbying power. It is being presented, very explicitly, as a way of increasing class sizes and of reducing educational costs. In the US (see http://www.classsizematters.org/ as a starting point for more information), adaptive learning is intimately connected with the privatization lobby, and this will follow elsewhere. What happens in ELT on a global scale will probably be determined very little by ELT considerations: ELT content will be just a relatively small part of a much bigger adaptive package.
      At the moment, there isn’t much fuss in ELT – just the occasional blog post here and there. Most people haven’t a clue what adaptive learning means. It could be the case that what is happening in the US (both K12 and tertiary) will not be replicated in global ELT, but I can see no reasons to be overly optimistic.

      • Very good points, Philip, and as someone who lives in the US I suffer daily from watching the lobbying and influence of industry on our education system. It’s important to work towards providing a strong voice against these interests. But I think we should be careful not to conflate the misuse of adaptive learning for these ends with its intrinsic value when used correctly… we need to keep these arguments separate.

    • Nicola asks, ‘Was there all this fuss then?!’. According to Selwyn (2011), yes, there was.

      ‘Fears for the technology-assisted “disappearance” of the teacher are not without theoretical precedence.[…] In one sense, Skinner’s notion of the teaching machine and programmed learning imply the technological displacement of the teacher. As the reinforcement theorist Fred Keller (1968) put it in a provocative article titled “Goodbye Teacher…”, the behaviourist-inspired model of programmed learning leaves little room for the teacher to continue in her role of provider of instruction. According to Keller, at best the teacher was expected to take the role of “proctor” or “assistant” — accompanying the use of tape recorders, computers and textbooks as small segments of instruction were given to learners at their own pace and with frequent feedback’ (p. 120).

      ‘Small segments of instruction … given to learners at their own pace and with frequent feedback’ would seem to characterize adaptive learning technology.

      • Interesting! Seems like their fears came to nothing, though…
        However, I take people’s point about mainstream education trends and big companies. A game changer, maybe.

  7. Hi Thomas

    Agree completely that Knewton should engage with teachers more, and differently, than they have been. My concern is that they will rely on their publishing partners’ expertise in ELT, and not recognize that publisher and teacher perspectives are not always aligned.

    I appreciated that David Liu took the time to engage with us on eltjam, and he is clearly a bright guy and passionate about his project. But, to be honest, I think that he failed to take into account two key points: his audience, and the differences between gen ed and ELT. I considered writing a post on “What Knewton Should Have Said” but in the end was too lazy :-). They brought Sally Searby into the later installments, which was a good move – she knows her stuff. But she comes from publishing.

    What Knewton should do is set up an expert panel of teachers to feed in and provide pushback. I nominate the commenters on this post as a start.

    Unfortunately, I think the answer to your last question is that Knewton should listen to the end user (us), but like any company they may listen more carefully to the people who decide on purchasing. If you can convince a university CFO that he can cut $3 million in costs for English courses by paying $1 million for Knewton-connected Pearson self-study products, then it’s tempting to do that. Both for Knewton and for Pearson.

    If you look at my first reply compared to this one – I’ve talked myself into being worried!

  8. Laurie,

    Thanks for the slow pitch strike and for hosting this great discussion.

    I pay attention to what people do, not what they say. Talk is cheap. Businesses, especially software businesses, are well known for their vaporware and “visions” that never materialize.

    I personally believe that a business goes increasingly where its investments and relationships take it. Since I don’t have access to its investments, its partnerships are the best I can do for the time being. Take a look at who Knewton lists as its partners (http://www.knewton.com/partners/). While this is not definitive evidence, it is quite instructive. Indeed, given this list, I would maintain that the burden of proof re: your question is no longer on my shoulders.

    Also, David stated, in terms of sharing with teachers what they will learn from students, that “we haven’t announced any specific plans, but we’re looking forward to uncovering and sharing insights over time.” He further said, “we’d be happy to share our findings.”

    Great, the words above are now one litmus test for me in terms of how they are willing to work with educators. Yet, they have not announced any plans. Is this because they have none? An intention to announce plans for data sharing is not the same as actually having plans (and allowing open access to significant insights).

    Given the above, Laurie, I say “show me,” with the full understanding that they are under no obligation to do so. But I also say: don’t tease me with vague promises when your partnership agreements are probably pulling you in a very different direction (by contract). If I were a lawyer for Pearson, you can bet I would try my best to make sure that any insights that come from my material stayed exclusively with me. Forget sharing them with teachers – I would be worried about competitors first. Given that this is the way the world normally works, I believe the onus is on Knewton to prove otherwise.

    You see, vague promises aside, it is these agreements, partnerships and contracts that are determining the forward vector on these matters. Teachers love to be “included”, but the reality is that these very practical things will push us out of the conversation.

    Laurie, in concentrating on these practical matters, do you feel I am barking up the wrong tree? I must confess that when I look at a company, the only statements I truly accept are those in annual reports and statements to prospective investors. Those words are vetted by lawyers, not the PR/ad departments.

    • Hi Michael

      Thanks for your perspective. I’ll try to strike a balanced note here. I really like your idea of the intent of a technology being a key indicator of where it will go. So, to follow your lead here, we need to look at the intent that Knewton has.

      So how do you identify Knewton’s intent? “Intent” reflects a plan, purpose, or goal…things that are in the future. Yet you specifically reject anything Knewton writes regarding their intentions as “vaporware”, “visions”, “vague promises”, “PR” etc.

      Then you ask about their intent for working with teachers, and note that “they have not announced any plans.” But following your own criteria, if they had announced plans, that would be just another vague promise you would dismiss; and since they haven’t, the lack of plans is taken as evidence for your point. You can’t have it both ways; there seems to be confirmation bias in your argument.

      And, actually, they do have plans: “In the near future, Knewton plans to release a free service to allow students, teachers, and parents around the world to build their own adaptive learning experiences.” (from their website FAQs).

      That sounds good to me. Looking forward to seeing if they can overcome the very real problems for ELT with their general education focus (as outlined in my post).

      Furthermore, I really don’t understand your argument that Knewton’s selection of publishing partners indicates that Knewton has bad intentions. I’m no defender of publishers – I’ve spent the last 10 years on a project that is predicated on my belief that publishers need to radically up their game and involve teachers as partners when it comes to the personalization and localization of materials design (see my post tomorrow). That said, English360 has a range of publishing partners, and yet our intention is not to push teachers out, but rather to pull teachers in. So having Pearson and CUP as partners does not prima facie indicate an anti-teacher bias.

      Finally, OF COURSE they are going to keep some of the big data from proprietary content exclusively for the authors and copyright holders. This isn’t a smoking gun – it’s common sense (as you put it, “the way the world works”). That doesn’t invalidate anything, and certainly not the good things that Knewton has the potential to do, even if we don’t believe them yet. And David said they will open up some of the analytics. If Pearson or Macmillan want to share their data, then they can. We should be pushing them to open it up, not Knewton.

      OK, I said I’d be balanced: I found one piece of evidence for you where Knewton implicitly threatens teachers. They have an infographic that in a straightforward way shows the cost savings of online learning vs. instructor-led learning. That’s bad. But I think it’s mostly an indication of their math-centric experience to date (most of their case studies are about math so far). I’m an ELT guy and not general education, so I don’t know the student outcomes with new approaches using online tools instead of traditional teacher lectures. But I think we need to understand that Knewton does not yet understand ELT, and we shouldn’t pre-judge them (yet) using examples from the other disciplines that have been their focus.

  9. Cleve,

    Thanks for the added perspective. And I am trying hard not to pre-judge them. Heck, I have a complete fascination with this service. I can think of many ways to use it as I create materials. I have a couple of “theories” I would like to test with mass data. I am particularly interested in lexical aspect for example.

    With regard to intent, I mostly meant the intent of the technology. Is it designed in a “teacher-centric” way? I think I will leave this for others to comment on. Please don’t misunderstand me on one point, however: I definitely believe that the future of teaching IS one of working with intelligent machines. The question is: whose vision will we be working with? And how will we decide if a given system puts teachers at the center – based on what a company says, or on some other criteria inherent in its technology? (Again, I would like to invoke the Three Laws of Robotics and their clear intent.)

    As for my thoughts on vaporware, I would say that promises are easy to make, especially for software companies. In putting Knewton in this category I could be wrong, but I don’t think so. As such, promises don’t particularly excite me; actual services do. But even so, how many times have we watched valuable services get “thrown aside” by software companies because they don’t add to the bottom line? So, I say: don’t woo me with promises, woo me with actual services that are designed to at least break even and be self-sustaining.

    And insofar as the partnerships go, your thoughts are appreciated. I think it is instructive to go through your list of partners and theirs. I can see a difference. Still, this is indicative, not conclusive, evidence.

    I was responding to Laurie’s question about evidence. Is the evidence absolutely clear? No, by no means. But does that mean we must accept their version of the story without some critical thinking on our part?

    BTW, looking forward to part 2 and I appreciate the effort you are putting into this in what otherwise must be a very busy day!
