Adaptive learning and personalisation are big topics in ELT and in education more generally. Knewton is at the forefront of this and is one of the highest profile EdTech companies out there. They’ve been moving into ELT in a big way recently – we’ve covered their partnership agreements with Macmillan and Cambridge University Press. Knewton’s technology is very impressive stuff, using ‘big data’ to provide adaptive learning and analytics for students, teachers, schools and publishers.

Rather than just report on what they’ve been up to, we thought it would be interesting to go to the source and get Knewton’s own take on what they’re doing, what their ambitions are, and what they think technology and the use of big data is going to mean for ELT. So we spoke to David Liu, Knewton’s Chief Operating Officer. We were also joined by Sally Searby, until recently Publishing Director at Cambridge University Press, and now at Knewton as Partnership Manager, and Molly Gerth, Knewton’s Communications Manager. They were happy to talk at length on a range of topics, and were refreshingly open.
We’ve serialised the interview into four parts:
- What is Knewton, why is data such a big deal, and how can data analysis help learners?
- Is sharing of data a problem for publishers?
- How can Knewton’s approach help to continually improve ELT products?
- This is all very well for subjects like maths and science, but how can it really work for language learning?
If you have any questions for the Knewton team on these topics, please ask them in the comments sections of the posts.
So, let’s dive into Part 1, and find out more about what Knewton is, and what all the fuss is about.
eltjam:
Can you give us an overview of what Knewton is, and how the technology works?
David:
Knewton personalises digital courses so every student is engaged and no student slips through the cracks.
As students work through online lessons, Knewton analyses vast amounts of anonymised data to work out what a student knows and how they learn best. Based on that, we are able to recommend what to study next at any given moment, helping students at any level succeed. Teachers use Knewton-powered real-time predictive analytics to detect gaps in knowledge and differentiate instruction. Students who have different needs, interests, strengths, and weaknesses can work toward goals in a sequence and at a pace that continuously adjust to fit their needs. Knewton’s recommendations are continually optimised for each student over time.
A lot of the leading educational publishers around the world, including Pearson, Cambridge University Press, Macmillan, and Houghton Mifflin Harcourt, already use Knewton to personalise their content. Using advanced data science and machine learning, we help publishers deliver differentiated instruction for students from preschool to higher education, adult learning, and corporate training. Knewton personalises learning for subjects including math, sciences, reading, writing, business, language learning, and more.
We believe personalisation is most effective via one large platform that analyses vast amounts of data from classrooms around the globe. Our heavy duty infrastructure and team of data scientists and experts handle adaptivity so publishers can focus on creating quality content. Knewton analyses which lessons resonate best, for whom, and why — enabling publishers and content creators to evaluate content efficacy at the concept level and create more effective learning materials.
eltjam:
One of the key things Knewton does is to analyse everything a student does, not just test scores, which is how some other ‘adaptive’ products work. Can you briefly explain here how adaptive learning is different from adaptive testing?
David:
When people use the term “adaptive learning,” they are often referring to either single-point adaptivity or adaptive testing. Single-point adaptivity considers a student’s performance at one point in time in order to determine the level of instruction or material he or she receives from that point forward. Adaptive testing determines a student’s exact proficiency level using a fixed number of questions. An example of single-point adaptivity would be a course that includes a diagnostic exam at one point in time, the results of which then determine subsequent course content. There is often no further data mining and personalisation.
When we refer to adaptive learning, we mean a system that is continuously adaptive – one that responds in real time to each individual’s performance and activity on the system. It maximises the likelihood a student will understand a certain concept by recommending the right instruction, at the right time, about the right thing. Many apps use human-created decision trees as the basis of their adaptivity; Knewton does not. We don’t guess, and keep guessing, at what a student knows and what they should study.
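Knewton hasn’t published its algorithms, but the difference David describes can be made concrete with a toy sketch. In the snippet below, every concept name, the update rule, and the weights are invented for illustration only – the point is simply that the proficiency estimate is revised after *every* response, rather than fixed once by a diagnostic test:

```python
def update_estimate(p, correct, weight=0.15):
    """Nudge a 0-1 proficiency estimate toward 1 or 0 after each response (toy rule)."""
    target = 1.0 if correct else 0.0
    return p + weight * (target - p)

def recommend(estimates):
    """Recommend the concept with the lowest current estimate, i.e. the weakest area."""
    return min(estimates, key=estimates.get)

# Toy learner state: one estimate per (invented) concept, revised continuously.
estimates = {"past simple": 0.5, "articles": 0.5, "prepositions": 0.5}
for concept, correct in [("articles", False), ("articles", False), ("past simple", True)]:
    estimates[concept] = update_estimate(estimates[concept], correct)

print(recommend(estimates))  # prints "articles" – the weakest concept so far
```

A single-point adaptive course would run something like this loop once, after the diagnostic exam, and then never update the estimates again; a continuously adaptive one runs it on every interaction.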
eltjam:
Knewton is one of the leading ‘big data’ EdTech companies. Why are you focused on data, and how do you believe technology and data analysis can really help learners?
David:
We believe education is an incredibly data-rich industry that has not been able to capture much of that data in the past. Now, with cloud computing, machine learning, and data science, and as educational materials shift to digital, we have the ability to capture a lot of the data in and out of the classroom that has never been captured before. And so one of our fundamental beliefs is that a student should have the right to maximise performance, or to hit their own personal potential, in anything that they’re studying.
So how do we do that? We help students and teachers optimise the course that they’re studying or teaching, based upon skill level, and based upon their experience.
I think one thing that’s very interesting is the network effect, or the longitudinal benefit of having a learning history travel with you. This learning history, of what each student knows and how they learn best, travels with the student. This is useful because classrooms are filled with unique individuals who learn differently and pick up ideas at different rates, come from different educational backgrounds, and have different needs and interests. This is something that we don’t talk much about because we are at the early stages of it, but we’re already beginning to see it intra-publisher, meaning course-to-course within a publisher that we’re powering. We’re going to begin to see it inter-publisher soon, when we launch multiple products across different publishers around the world.
So let’s bring it to a real concrete example in ELT. As you move from one level of English language training or teaching to another, you’re working through prerequisites. There are certain things that you need to learn before you move on. Unfortunately, today, when you move from one class to another, we pretend that we’ve never met you before. Now we can change that.
One of the benefits of the data platform and infrastructure we provide to all of our publishing partners is that publishers can personalise a course for each student and generate data reports showing teachers what each student needs at any moment. For example, as a student progresses in adult ELT, the course they take next year might not be with the same publisher as the year before. So typically, it’s like starting over – students are treated as if we’ve never met them before; it is a missed opportunity to help the student reach their full potential. Until now, there has not been a platform that allows you to take your learning history with you as you grow.
It really all happens behind the scenes. You’re not necessarily interacting with it – it’s not like a social media profile, which is an in-your-face kind of interface – this is much more in the background. Our technology allows each course to maximise understanding of how you learn best, and adjust to fit your needs.
And so, if we learn about you, especially from one ELT course to another in a series, we can do a much more thorough root cause analysis to determine what it is that is slowing you down. Extend that now beyond language training: maybe there are core concepts around reading comprehension that are just lacking. If we had you in a reading course previously, we would know your strengths and weaknesses and could apply that knowledge to the ELT course that you’re taking today. It may not be a language thing at all; it may just be a reading comprehension issue.
And so, you begin to now see the power of having this learning history inform every course that you’re taking. Knewton is creating an agnostic, open platform that allows even competitive products to thrive, and that really benefits students. You’re taking courses that are using many different materials from many different publishers. And so we can weave a lot of these seemingly disparate courses together, because we can understand how one series of concepts that you learn in a course can impact another. We become sensitive to student strengths and weaknesses across domains and can help publishers and learning companies create more individualised products for each customer across a lifetime of learning.
Another example that I always give is from science, where a lot of students struggle in biology or chemistry. Traditionally, when students get stuck in biology, they get more biology questions and practice. But what if that’s not the problem? At Knewton, we can help figure out when the problem isn’t the actual science part of it at all. It may be that the student has a fundamental issue with algebra, which is an absolute prerequisite to learning some of these sciences. And so, if we can understand that, we can again get to the root cause and solve some of those issues.
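The root-cause idea David describes amounts to walking down a prerequisite graph. As a toy illustration (the graph, subject names, estimates, and threshold below are all invented – this is not Knewton’s implementation), here is a sketch that surfaces weak prerequisites beneath a struggling concept instead of just assigning more practice on the concept itself:

```python
def weak_prerequisites(concept, prereqs, estimates, threshold=0.6):
    """Walk the prerequisite graph below a concept and collect weak links."""
    weak = []
    for pre in prereqs.get(concept, []):
        if estimates.get(pre, 0.0) < threshold:
            weak.append(pre)
        # Recurse: a weak prerequisite may itself rest on a weaker one.
        weak.extend(weak_prerequisites(pre, prereqs, estimates, threshold))
    return weak

# Hypothetical graph: chemistry depends on algebra, algebra on arithmetic.
prereqs = {"chemistry": ["algebra"], "algebra": ["arithmetic"]}
estimates = {"chemistry": 0.3, "algebra": 0.4, "arithmetic": 0.9}

print(weak_prerequisites("chemistry", prereqs, estimates))  # prints ['algebra']
```

Here the system would recommend algebra remediation rather than more chemistry questions, because the learning history shows algebra, not arithmetic or chemistry content per se, is the weak link.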
Another example is when kids are having issues in math, and a lot of times, what happens? They get sent down the path of more practice problems, which results in kids hitting their heads against a wall over and over. In reality, a student might actually be struggling because they have a reading issue; a lot of kids who are having problems with math are fundamentally not able to read at a high enough grade level. And so they’re having reading comprehension issues with the math problems.
We believe that not all courses are linked in this way, but many are. Once we have enough of these courses powered [by Knewton], we can begin to connect these learning histories to benefit the students as they continue on. And, to us, one of the most powerful parts of what we’re doing at Knewton is to not only optimise your individual experience with the course that you’re working on today, but it is to take those learnings, and be able to apply it to anything you’ll learn in the future.
eltjam:
So, what about ELT?
David:
It’s just really exciting that we can now enable publishers to leverage adaptive learning technology, whereas in the past it was simply premature. In ELT we think it’s an incredible advantage, especially in this day and age when we’re hearing about so many ambitious things that are happening.
The other part about being able to link a lot of these learning histories [is that] we build what we call knowledge graphs. Basically, these are just a translation of the content from a course or textbook, in the order that it should be taught, into our system. And so if we work with a partner to build a large enough knowledge graph, we can encompass all of the different standards that are necessary. ELT has a lot of different standards in different regions to keep track of, so it’s going to be really important, no matter where you go, to be able to compare yourself to where you should be within any given standard. It’s very powerful if we can graph the content that aligns to those standards, all in one place, and analyse the graph to tell a student where they stand.
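One simple way to picture “telling a student where they stand within a standard” is to treat the standard as an ordered list of concepts from the knowledge graph and find the first one the student hasn’t yet mastered. The concept names, the CEFR-style label, and the ordering below are all hypothetical, purely for illustration:

```python
# Hypothetical: a standard represented as an ordered list of concept IDs.
CEFR_A2 = ["present simple", "past simple", "comparatives", "present perfect"]

def position_in_standard(standard, mastered):
    """Return the next unmastered concept and the fraction of the standard covered."""
    for i, concept in enumerate(standard):
        if concept not in mastered:
            return concept, i / len(standard)
    return None, 1.0  # every concept in the standard is mastered

mastered = {"present simple", "past simple"}
concept, progress = position_in_standard(CEFR_A2, mastered)
print(concept, progress)  # prints: comparatives 0.5
```

The same mastered-concept set could be checked against several regional standards at once, which is roughly the “all in one place” benefit described above.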
If we have more content, more assessments, more choice for what to recommend for each student to study next, we’re able to help publishers continue to build a richer experience around proficiency estimations and recommendations, which is really the core of what we do.

Knewton interviews
Part 1 – Big data and adaptive learning in ELT
Part 2 – Sharing data and competitive advantage
Thanks for a really informative interview – look forward to reading Part 2. I have a question for David: I am a co-author of a Business English course published by Cambridge University Press and am interested to understand more about Knewton’s business model vis-à-vis ELT publishers, and how that in turn affects payments to authors. I can understand that the details of individual contracts will be commercially sensitive, but could you flesh out in general how publishers are paid for putting their content on your platform? Or does the model work the other way round, where the ELT publisher pays you to adapt their content and then uses your platform to distribute it?
Hi Martin – let me jump in, since I know the answer to your question. Knewton is not actually a publishing platform in itself, but a service designed to enhance publishing platforms. It receives data from publishers’ platforms and analyses that data, then sends back recommendations, analytics etc. Publishers pay to have Knewton’s capabilities added to their platforms. Any content adaptation is done by the publisher, with support from Knewton. So, it doesn’t have any direct bearing on the commercial arrangement between publisher and author, although of course it is an additional cost that publishers incur in supporting the online product. However, that should be offset by increased sales if the Knewton features are felt by teachers and students to be valuable. One thing that is very interesting from an author’s point of view is the nature and structure of content that would work best in an adaptive learning environment.
Thanks for your answer Laurie – very clear. All we need now is for ALL the ELT publishers to develop viable secure pirate-proof cloud streaming platforms for distribution of existing print content to work with the Knewton analytic software. I don’t see that yet. Will I ever see it?!
Yes, agree Laurie. The contractual terms continue to be between the author and the publisher. Knewton’s API is the next evolution in digital or online product, so if a publisher decides to partner with Knewton to provide the adaptive learning experience for students and teachers, it’s no different than if they were partnering with an LMS or other supplier to deliver digital content. All costs of producing and delivering ELT content to students and teachers are factored in by the publisher, including author fees or royalties.
Oh my Goodness! I see it now. Holy Cyberdyne Systems, Batman. You will hook this system to a machine intelligence that can “understand” basic language and then take students through written, reading and speaking practice. The end result of this system will be to eliminate the need for language teachers. At first this system will act as an aid to teachers, but the end result will be to eliminate teachers. I can now see the end game for the big publishers. Their end game is to use big data to create individualized pathways for large numbers of students and so eliminate the need for teachers, beginning with teachers who pass rigorous 12-week ESL accreditation “courses”.
Your system, once fully operational, will be able to: 1. recognize spoken language; 2. locate mistakes; 3. make suggestions for corrections; 4. measure to see if the mistakes have been systematically eliminated, while at the same time moving forward with vocabulary acquisition etc.
Obviously, the goal of the publishers now is to create cradle-to-grave teaching bots. Language learning is a great place to start because you need to create conversational systems…
This isn’t Big Data. That makes this sound somewhat regal. This is Big Caretaker at a still infant stage.
Hi Michael,
Hmm… a system that could effectively recognise spoken language, locate mistakes, make suggestions for corrections, measure to see if the mistakes have been systematically eliminated while at the same time moving forward with vocabulary acquisition etc. I’m struggling to see how such a tool could in any way be a bad thing to have available for language learners. Shame it’s such a long way from being possible right now.
Laurie,
My only concern is that initially this will be sold to teachers as “Progress” when in fact it will start to result in the end of their jobs (in, say, 10 years’ time). I say, call a spade a spade. It probably will result in better learning in many instances, but it will also result in the elimination of many jobs.
This system should come with a warning: “To the teacher: this system is ultimately hazardous to your well-being. It will ultimately eliminate your job for the primary financial benefit of the Big Caretaker.”
The irony of this is that it could be built to benefit individual teachers, but money talks and the end result will probably be systems that benefit big business at the expense of teachers and small schools. In the end this is a system that will tend to concentrate wealth – distributing it from the many to the few in the name of efficiency.
So, I do see a problem with this version of progress!
Wow … in my mind, this whole idea just raises loads of questions about how quantifiable and trackable language teaching and learning can (and should!) be. Over the years I’ve come across so many students who’ve gone through systems that are all about testing and box-ticking who have no feel for the language and very poor real communication skills. I worry that too much emphasis on the quantifiable elements of language learning (even if they’re approached quite creatively and flexibly) risks pushing out the more touchy-feely elements that are such a vital part of both education and communication.
I look forward to reading the rest of the interview though and will be happy to be proved wrong 😉
I’m largely unconvinced:
“CAI is now back as ‘adaptive learning systems.’ Some of the old programs have been repurposed with more interactivity. McRae states it as: ‘adaptive learning systems still promote the notion of the isolated individual, in front of a technology platform, being delivered concrete and sequential content for mastery. However, the re-branding is that of personalization (individual), flexible and customized (technology platform) delivering 21st century competencies (content).’”
http://barbarabray.net/2013/12/30/this-time-its-personal-and-dangerous/