Humanizing adaptive learning, part 1

Part 1:  Knewton, adaptive learning, and ELT
Part 2:  Open platforms and teacher-driven adaptive learning

The debate over adaptive learning at eltjam, Philip Kerr’s blog, and Nicola Prentis’ Simple English has been both fascinating and instructive, not only because of the posts themselves but also because of the great dialogue in the comments. It’s a fun topic because it involves our core beliefs regarding language acquisition, effective teaching, and the roles that technology can play.

That said, I can’t help but feel that in some respects we’re thinking about adaptive learning in a limited way, and that this limited perspective, combined with confusion about what Knewton actually does, is distorting how we approach the topic and making it into a bigger deal than it really is. But, given the potential power that the new “adaptive learning” technology may indeed have, we do need to see clearly how it can help our teaching, and where it can potentially go wrong.

Adaptive learning in context
I wrote “adaptive learning” in scare quotes above because I think the name itself is misleading. First, in a very important way, all learning is adaptive learning, so the phrase itself is redundant.  Second, the learning, which is carried out by the learner, is not what the vendors provide: “Knewton…constantly mines student performance data, responding in real time to a student’s activity on the system. Upon completion of a given activity, the system directs the student to the next activity.” That is not adaptive learning, but rather adaptive content; it is the content sequence (of “activities”) that adapts to the learner’s past performance. We can call adaptive content “micro-adaptation”, since it happens at a very granular level.
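
To make the distinction concrete, here’s a minimal sketch in Python of what micro-adaptation amounts to. Everything here (the names, the numbers, the update rule) is invented for illustration; Knewton’s actual models are proprietary and far more sophisticated. The point is simply that the content sequence adapts, while the learning is still done by the learner.

```python
# A minimal sketch of "adaptive content" (micro-adaptation).
# All names and numbers are invented for illustration; this is not
# Knewton's actual model or API.

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    difficulty: float  # 0.0 (easy) to 1.0 (hard)

@dataclass
class LearnerState:
    ability: float = 0.5                         # running ability estimate
    history: list = field(default_factory=list)  # (activity, score) pairs

def record_result(state: LearnerState, activity: Activity, score: float) -> None:
    """Nudge the ability estimate toward the observed score (0..1)."""
    state.history.append((activity.name, score))
    state.ability += 0.2 * (score - state.ability)

def next_activity(state: LearnerState, pool: list) -> Activity:
    """Adaptive *content*: choose the activity whose difficulty is closest
    to the current ability estimate. Only the sequence adapts; the
    learning itself is still done by the learner."""
    return min(pool, key=lambda a: abs(a.difficulty - state.ability))

# Usage: a strong result nudges the estimate up, so a harder activity follows.
pool = [Activity("gap-fill: past simple", 0.3), Activity("error correction", 0.6)]
state = LearnerState()
record_result(state, pool[0], score=0.9)
print(next_activity(state, pool).name)  # -> "error correction"
```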

Now, we teachers have been adapting to our students for how long…millennia? We assign homework based on what we know of our students’ strengths and weaknesses (adaptive content). In the communicative classroom, we are always adjusting our pacing, creating new activities, or supporting spontaneous discussion based on our perception of the students’ needs in that moment (the adaptive classroom). Dogme is one kind of adaptive learning in the classroom. And, when the stars align, educators can successfully design and deploy a curriculum, including methods and approaches, that iteratively adapts to student needs over time (the adaptive curriculum). We can call the adaptive curriculum “macro-adaptation”.

So how does the new, algorithmic adaptive learning, such as that which Knewton helps deliver, address each of these categories?

So, as a tool, Knewton has potential for the ELT profession. But whether or not the tool is used appropriately does not depend on Knewton; it depends on the publishing partners that use Knewton’s tools. All Knewton does is provide publisher LMSs with a hook into the Knewton data infrastructure. Knewton is a utility. It’s the publishers that decide how best to design courses that use Knewton in a pedagogically appropriate way and leverage its strengths to provide adaptive content, classrooms, and curricula. It is the publishers that must understand Knewton’s limitations. As a tool, it can’t do everything: it can’t “take over” language learning, or relegate teachers to obsolescence, although Knewton’s marketing hyperbole might make one think so.

Limitations for ELT
If Knewton’s ambition is one concern, another is that it is not specifically designed for ELT and SLA, and therefore may not understand its own limitations. Knewton asks its publishing partners to organize their courses into a “knowledge graph”, in which content is mapped into an analyzable form consisting of the smallest meaningful chunks (called “concepts”), organized as prerequisites to specific learning goals. You can see here the influence of general learning theory rather than SLA/ELT, but let’s not concern ourselves with nomenclature: call their “knowledge graph” an “acquisition graph”, and call the “concepts” anything else at all, say…“items”. Basically, our acquisition graph could be something like the CEFR, with the items being the specifications of a completed English Profile project, detailing the grammar, lexis, and functions necessary for each of the can-do’s in the CEFR. Now, even though this is a somewhat plausible scenario, it opens Knewton up to several objections, foremost among them the degree of granularity and linearity it assumes.
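
To give a rough idea of the shape of such a graph, here’s a toy sketch in code. The item names are my own invented examples, loosely in the spirit of CEFR can-do statements; they are not actual English Profile specifications, and this is not Knewton’s real data structure.

```python
# A toy "acquisition graph": items (Knewton's "concepts") organized as
# prerequisites to learning goals. Item names are invented examples,
# not actual English Profile specifications.

prerequisites = {
    "describe past events (A2 can-do)": [
        "past simple of regular verbs",
        "past simple of common irregular verbs",
        "time expressions (yesterday, last week)",
    ],
    "past simple of common irregular verbs": [
        "past simple of regular verbs",
    ],
}

def ready_to_attempt(item: str, mastered: set) -> bool:
    """The linearity assumption: an item becomes available only once all
    of its prerequisites have been mastered inside the system."""
    return all(p in mastered for p in prerequisites.get(item, []))
```

Note how the linearity assumption is baked into `ready_to_attempt`: an item becomes available only once its prerequisites have been completed inside the system. The objections below target exactly that assumption.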

A common criticism of Knewton is that language and language teaching cannot be “broken down into atomised parts”, but can only be looked at as a complex, dynamic system. This touches on the grammar McNugget argument, and I’m sympathetic. But the reality is that language can indeed be broken down into parts, and that this analytic simplification is essential to teaching it. Language should not be taught only this way, of course, and we need always to emphasize the communicative whole rather than the parts, and use meaning to teach form. But to invalidate Knewton because it runs its algorithms on discrete activities is to misunderstand the problem. Discrete activities are essential, in their place. The real problem that Knewton faces in ELT is that both the real-time activity sequencing and the big-data insights derived from those activity sequences are less valuable, and could even be misleading.

They are less valuable in ELT for at least two reasons. First, the big-data insights come from a limited subset of activities: much of the language students produce, and much of their learning data, is never captured.

Second, language learning is less linear than other, general learning domains (e.g. biology, maths). Unlike students in those domains, most language students are exposed to ungraded, authentic, acquirable language (from their teacher, from the media, etc.) that represents, in approximate form, the entirety of what is to be learned. Algebra students are not exposed to advanced calculus on an ongoing basis, and even if an algebra student were exposed to calculus, the student wouldn’t be able to “acquire” calculus the way humans can acquire language. Therefore, for ELT, the cause-effect relationship in Knewton’s acquisition graph and its map of prerequisite items is to some extent invalidated: the student may have acquired a prerequisite item by, say, listening to a song in English the night before, not by completing a sequence of items. That won’t happen in algebra.
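
Continuing the toy graph from above (again, invented names and thresholds), here is what that looks like from the algorithm’s side: performance data that the prerequisite model can’t explain.

```python
# Continuing the toy acquisition graph above (invented names/thresholds).
# The learner has completed nothing in the system, yet performs
# near-perfectly on irregular past forms, picked up from a song.
mastered_in_system = set()
observed_score = 0.95

item = "past simple of common irregular verbs"
if observed_score > 0.8 and not ready_to_attempt(item, mastered_in_system):
    print("Mastery without in-system prerequisites: the graph's "
          "cause-effect assumption does not hold for this learner.")
```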

Because of these limitations, Knewton will need to adapt its model considerably if it is to reach its potential in the ELT field. It has a good team, with some talented ELT professionals on it, who are already qualifying some of the stock Knewton phraseology (viz. Knewton’s Sally Searby emphasising, in the last Knewton response on eltjam, that knowledge graphing for ELT needs to be “much more nuanced”). And hopefully Knewton’s publishing partners will design courses whose acquisition graphs align with pedagogic reality and recognize these inherent limitations.

Meanwhile, as they work to overcome these limitations, where can Knewton best add value to publisher courses? I would guess that some useful courses can be published straightaway for certain self-study material, some online homework, and exam prep: anywhere the language is fairly well defined and the content more amenable to algorithmic micro-adaptation. Then we can see how it goes.

In Part 2 of this post we’ll focus on the primary purpose of adaptive learning, personalization, and how this can be achieved by Knewton as well as by teacher-driven adaptive learning using open platforms.  As Nicola points out, we need to start with “why”….

Featured Photo Credit: Billy Wilson Photography via Compfight cc. Text added by eltjam.

Comments

  1. Cleve,

    Thanks for the added perspective. And I am trying hard not to pre-judge them. Heck, I have a complete fascination with this service. I can think of many ways to use it as I create materials. I have a couple of “theories” I would like to test with mass data. I am particularly interested in lexical aspect, for example.

    With regards to intent, I mostly meant the intent of the technology. Is it designed in a “teacher-centric” way? I think I will leave this for others to comment on. Please, however, don’t misunderstand me on one point: I definitely believe that the future for teaching IS one of working with intelligent machines. The question is: whose vision will we be working with? And how will we decide if a given system puts teachers at the center? Based on what a company says, or on some other criteria inherent in their technology? (Again, I would like to invoke the Three Laws of Robotics and their clear intent.)

    As for my thoughts on vaporware, I would say that promises are easy to make, especially for software companies. In putting Knewton in this category I could be wrong, but I don’t think so. As such, promises don’t particularly excite me. Actual services do. But even so, how many times have we watched valuable services get “thrown aside” by software companies because they don’t add to the bottom line? So I say: don’t woo me with promises, woo me with actual services that are designed to at least break even and be self-sustaining.

    And insofar as the partnerships go, your thoughts are appreciated. I think it is instructive to go through your list of partners and theirs. I can see a difference. Still, this is indicative, not definitive, evidence.

    I was responding to Laurie’s question about evidence. Is the evidence absolutely clear? No, by no means. But does that mean we must accept their version of the story without some critical thinking on our part?

    BTW, looking forward to part 2, and I appreciate the effort you are putting into this on what must otherwise be a very busy day!

  2. Laurie,

    Thanks for the slow pitch strike and for hosting this great discussion.

    I pay attention to what people do, not what they say. Talk is cheap. Businesses, especially software businesses, are well known for their vaporware and “visions” that never materialize.

    I personally believe that a business goes increasingly where its investments and relationships take it. Since I don’t have access to its investments, its partnerships are the best I can do for the time being. Take a look at who Knewton lists as its partners (http://www.knewton.com/partners/). While this is not definitive evidence, it is quite instructive. Indeed, given this list, I would maintain that the burden of proof re: your question is no longer on my shoulders.

    Also, David stated, in terms of sharing with teachers what they will learn from students, that “we haven’t announced any specific plans, but we’re looking forward to uncovering and sharing insights over time.” He further said, “we’d be happy to share our findings.”

    Great, the words above are now one litmus test for me in terms of how they are willing to work with educators. Yet, they have not announced any plans. Is this because they have none? An intention to announce plans for data sharing is not the same as actually having plans (and allowing open access to significant insights).

    Given the above, Laurie, I say, “show me,” with the full understanding that they are under no obligation to do so. But I also say: don’t tease me with vague promises when your partnership agreements are probably pulling you in a very different direction (by contract). If I were a lawyer for Pearson, you bet I would try my best to make sure that any insights that come from my material stayed exclusively with me. Forget sharing this with teachers; I would be worried about competitors first. Given that this is the way the world normally works, I believe the onus is on Knewton to prove otherwise.

    You see, vague promises aside, it is these agreements, partnerships, and contracts that are determining the forward vector on these matters. Teachers love to be “included”, but the reality is that these very practical things will push us out of the conversation.

    Laurie, in concentrating on these practical matters, do you feel I am barking up the wrong tree? I must confess that when I look at a company, the only statements I truly accept are those in annual reports and statements to prospective investors. Those words are vetted by lawyers, not the PR/ad departments.

    1. Hi Michael

      Thanks for your perspective. I’ll try to strike a balanced note here. I really like your idea of the intent of a technology being a key indicator of where it will go. So, to follow your lead here, we need to look at the intent that Knewton has.

      So how do you identify Knewton’s intent? “Intent” reflects a plan, purpose, or goal…things that are in the future. Yet you specifically reject anything Knewton writes regarding their intentions as “vaporware”, “visions”, “vague promises”, “PR” etc.

      Then you ask about their intent for working with teachers, and say “they have not announced any plans.” But following your own criteria, if they had announced plans, that would be just another vague promise for you to dismiss; and since they haven’t, the lack of plans becomes evidence for your point. You can’t have it both ways; there seems to be confirmation bias in your argument.

      And, actually, they do have plans: “In the near future, Knewton plans to release a free service to allow students, teachers, and parents around the world to build their own adaptive learning experiences.” (from their website FAQs).

      That sounds good to me. Looking forward to seeing if they can overcome the very real problems for ELT with their general education focus (as outlined in my post).

      Furthermore, I really don’t understand your argument that Knewton’s selection of publishing partners indicates that Knewton has bad intentions. I’m no defender of publishers – I’ve spent the last 10 years on a project that is predicated on my belief that publishers need to radically up their game and involve teachers as partners when it comes to the personalization and localization of materials design (see my post tomorrow). That said, English360 has a range of publishing partners, and yet our intention is not to push teachers out, but rather to pull teachers in. So having Pearson and CUP as partners does not prima facie indicate an anti-teacher bias.

      Finally, OF COURSE they are going to keep some of the big data from proprietary content exclusively for the authors and copyright holders. This isn’t a smoking gun; it’s common sense (as you put it, “the way the world works”). That doesn’t invalidate anything, and certainly not the good things that Knewton has the potential to do, even if we don’t believe them yet. And David said they will open up some of the analytics. If Pearson or Macmillan want to share their data, then they can. We should be pushing them to open it up, not Knewton.

      OK, I said I’d be balanced, so here is one piece of evidence for you where Knewton implicitly threatens teachers: they have an infographic that shows, in a straightforward way, the cost savings of online learning vs. instructor-led learning. That’s bad. But I think it’s mostly an indication of their math-centric experience to date (most of their case studies so far are about math). I’m an ELT guy, not a general education guy, so I don’t know the student outcomes of new approaches that use online tools instead of traditional teacher lectures. But I think we need to understand that Knewton does not yet understand ELT, and we shouldn’t pre-judge them (yet) using examples from the other disciplines that have been their focus.

  3. Hi Thomas

    Agree completely that Knewton should engage with teachers more, and differently, than they have been. My concern is that they will rely on their publishing partners’ expertise in ELT, and not recognize that publisher and teacher perspectives are not always aligned.

    I appreciated that David Liu took the time to engage with us on eltjam, and he is clearly a bright guy and passionate about his project. But, to be honest, I think that he failed to take into account two key points: his audience, and the differences between gen ed and ELT. I considered writing a post on “What Knewton Should Have Said” but in the end was too lazy :-). They brought Sally Searby into the later installments, which was a good move – she knows her stuff. But she comes from publishing.

    What Knewton should do is set up an expert panel of teachers to feed in and provide pushback. I nominate the commenters on this post as a start.

    Unfortunately, I think the response to your last question is that Knewton should listen to the end user (us), but, like any company, they may listen more carefully to the people who decide on purchasing. If you can convince a university CFO that he can cut $3 million in costs for English courses by paying $1 million for Knewton-connected Pearson self-study products, then it’s tempting to do that. Both for Knewton and for Pearson.

    If you look at my first reply compared to this one – I’ve talked myself into being worried!

