Following Tim’s recent post on bots and their potential application in ELT, and after our own foray into the world of bots with our Ame product, we decided it was time to do another ELTjam review, this time of the Duolingo Language Bots.
In an update to our previous review criteria, we are now using our Learner Experience Design (LXD) framework to inform the areas of focus. You can find more information on the framework here, but essentially it means analysing the product from four different angles: pedagogy, content, UX and interaction.
After completing a number of units in the Duolingo product, you unlock the ability to interact with chatbots that ‘prepare you for real-life conversations’ and remove the ‘awkwardness and anxiety [of human-to-human communication]’. Once in the chat, you get introduced to different people whom you chat to, following prompts and answering questions from these new ‘friends’. The messages are exchanged in a chat interface, similar to iPhone’s iMessage. You get suggested answers when you are responding, corrections when you are wrong, plus positive feedback and ‘XP’ points when you do well. There is also a ‘help me reply’ button in case you get stuck.
Duolingo says that their bots “are powered by artificial intelligence and react differently to thousands of possible answers.” The bots feature is currently available for Duolingo’s Spanish, French and German courses. More languages are apparently on their way.
Pedagogically, I feel that this is a step in the right direction for Duolingo, and there are a lot of benefits to the new feature. Whereas previous exercises were decontextualised and a bit odd, the chatbot provides a framework for the exchange, making the use of language more meaningful and more realistic. The input is comprehensible, matching what you have done in previous Duolingo lessons, and therefore offers opportunities to recycle lexis and structures encountered before. There is also a good degree of scaffolding in the exchanges; you are able to click on words in the prompts and get a translation, and when replying you are given suggestions for what to write and can use the ‘help me reply’ button if you are really stuck.
The chatbot also provides an opportunity for productive practice, in the form of semi-authentic output. You are required to focus on the form of your output as you type, and suggestions are made if the spelling is inaccurate. Feedback is then given, plus encouragement if you answer well. That said, the feedback was at times ambiguous, and I was congratulated for answers which were different to the reply which came from the bot, with no explanation as to why there was a difference.
Pedagogically, I think the main drawback is how prescriptive the conversations are. Despite the claim that the bots can respond to thousands of possible answers, you are unable to deviate at all from the script or ask a question about what you are learning. In the conversations I had, there was only a very small number of responses the bot would accept and let me send. I imagine that as the AI models get more data they will be able to cope with a wider range of input, but at the moment it quite quickly stops feeling like a conversation and starts to feel like an exercise. For example, in the restaurant chat I was asked what I wanted to drink, but my answer could only be the noun of the drink I wanted; I wasn’t able to say ‘Me pones una cerveza’, only ‘Una cerveza (por favor)’. I was then told they only had tea, coffee, water or hot chocolate. What if I want a PARSNIP!? Not only is this prescriptiveness a bit dull, but it also interferes with the learner’s ability to experiment, to get things wrong and be corrected. The models and the output options seem too basic for users to notice any shortfalls in their own linguistic understanding and use this to update their interlanguage. This may change at the higher levels and as the algorithms improve, but at the moment this prescriptiveness is a pedagogical weak point.
Additionally, this is not the sort of exchange that you would have by text, and so I question the extent to which the bot fulfils its stated aim of preparing users for real-life conversations. For more on this, let’s look at content…
I think this is the most disappointing aspect of the bot. There was a real opportunity here for the conversations to be structured around functional language, high-frequency words and collocations and meaningful interaction, but in the end we get the sort of nonsensical rubbish that Duolingo has become famous for. In the second conversation I was asked to name a zookeeper’s animals and the things they are eating (tortoise, bear, elephant, carrot, fish). I’ve never done that anywhere, let alone via text message! A later conversation was on helping an artist paint a picture (naming the colours she should paint the sky, grass etc.). The only realistic interaction of the first four chats made available to me was the restaurant dialogue where I had to choose my food and drink.
So, while the bots are a move in the right direction, it’s disappointing that Duolingo claim that such fake conversations could actually prepare users for what they might encounter in real life. I imagine even the less self-aware learners might realise this is not a natural conversation structure, which is definitely better than them thinking it is and then trying to initiate similar conversations with random people they meet. Given the content, I don’t feel there is much likelihood of improvement in communicative competence for users.
As you would expect from Duolingo, the user experience is one of the strongest aspects of the product. The experience feels just like a real messaging app and has a lot of the features that we would expect to see (indication of the other person typing, spell-check, suggested words to use, deactivation of the ‘send’ button when you are unable to send a message) – and these all work well. There are a couple of things which get in the way, though. For example, a double space doesn’t insert a full stop as it normally would on iOS, and you can’t move the cursor back into the middle of what you are writing, so you have to delete in order to go back and change something you’ve written. Generally, though, the experience is consistent and intuitive.
It’s also impressive how some of the educational aspects of the product have been incorporated into the messaging UI: the ‘help me reply’ button sits in the suggested-words section above the keyboard, and the reactions to your messages appear under the message, where you are used to seeing ‘delivered’ or other indicators that the recipient has received the message. Rather than UI issues, it’s the underlying algorithms which most get in the way of the user experience. Not being able to type what you want, or to send whatever you type, jars slightly in an interface that looks so much like it should accept that behaviour.
Visually and graphically the feature works well and fits with the other features of the Duolingo product.
The main question I have around interaction here is whether the human-to-computer interaction in the feature will encourage learners to get out and converse more confidently with other people. In theory, setting up exercises in this way should help learners and make them feel more confident, but I feel that the content and restrictive conversational options get in the way of this. The interactions aren’t meaningful enough and don’t really give a sense of engaging with another person. Learners are not encouraged to try real-life chats or to interact with other learners they could have a conversation with. In fact, the way that the bot is pitched to us is that real-life chat is awkward and anxiety-inducing, and that we’re much better off with the bot as a safe starting point. But then it doesn’t follow through by encouraging learners to make that challenging transition into real-world interaction, or even really give them the tools and language they need to do so.