If I had to reduce language learning to the bare essentials and then construct a methodology around those essentials, it might look something like this (from Edmund White’s autobiographical novel The Farewell Symphony):

[Lucrezia’s] teaching method was clever. She invited me to gossip away in Italian as best I could, discussing what I would ordinarily discuss in English; when stumped for the next expression, I’d pause. She’d then provide the missing word. I’d write it down in a notebook I kept week after week. … Day after day I trekked to Lucrezia’s and she tore out the seams of my shoddy, ill-fitting Italian and found ways to tailor it to my needs and interests.

Whatever theoretical lens you view this through, Lucrezia’s ‘method’ contains the right mix. Those who subscribe to the ‘learning-is-information-processing’ view will approve of the output + feedback cycle and the covert focus on form. Those of a sociocultural bent will applaud Lucrezia’s scaffolding of learning affordances at the point of need. Dynamic systems theorists will invoke the soft-assembly of language resources in a coupled system. What’s more, my own recent experience of trying to re-animate my moribund Spanish suggests that the single most effective learning strategy was ‘instructional conversation’ with a friend in a bar. That is to say, the same kind of ‘clever method’ that White celebrates above.

But, of course, unless you have a willing partner, such intensive one-to-one treatment is costly and not always available. Could this kind of conversation-based mediation be engineered digitally? Is there an app for it?

Interactive software that replicates human conversation has been a dream of researchers ever since Alan Turing proposed the ‘Turing Test’ in 1950, challenging programmers to design a machine that could fool a jury into thinking that they were interacting with a real person.

While no one has yet met Turing’s conditions in any convincing way, programs such as ‘chatterbots’ have certainly managed to fool some of the people some of the time, as in this example, where the ‘bot’, called AOLiza, is ‘chatting’ with an unsuspecting human interlocutor (‘ten’):

ten: hey james,how you been?
AOLiza: have you asked anyone else?
ten: no, should I?
AOLiza: why ‘no’?
ten: i don’t know, like to get it from the source. what’s up?
AOLiza: does that trouble you?
ten: does what trouble me?
AOLiza: are such questions much on your mind?
ten: sorry, missed the last. could u resend?
AOLiza: i’ve told you that apologies are not required.
ten: ok. so how are you?
AOLiza: why do you ask?
ten: just wondering. how’s the job? the family? stuff like that.
AOLiza: have you asked anyone else?
ten: about what? normally no one else is “on” when I am. this is the first chance I’ve had to actually talk to someone I don’t work with and haven’t seen.
AOLiza: why do you ask?
ten: just curious. making conversation, stuff like that.
AOLiza: does talking about this bother you?
ten: talking about what? I’m still not sure what we are discussing. I just asked you how you were…..you have me at a disadvantage

The truth of the matter is, of course, that the machine is not having a conversation: by following a relatively simple set of pattern-matching rules, it is simulating one. Even a more sophisticated program based on ‘big data’, i.e. one able to crunch the residue of millions or even billions of such conversations and select its responses accordingly, would still be a simulation. Conversation, unlike chess, is not constrained by a finite number of moves. More importantly, what the program would lack is the capacity to ‘get into the mind’ of its conversational partner and intuit his or her intentions. In a word, it would lack intersubjectivity.
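To see just how simple those rules can be: AOLiza was a latter-day implementation of Joseph Weizenbaum’s 1966 ELIZA program, which generates replies by matching the user’s words against an ordered list of patterns and slotting any captured fragment into a canned template. The Python sketch below is my own toy illustration, with invented rules chosen merely to echo the transcript above; it is not the actual ELIZA or AOLiza code:

```python
import random
import re

# Crude pronoun 'reflection', so that e.g. 'my job' can come back as 'your job'.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

# Ordered (pattern, templates) rules; {0} receives the reflected capture.
# Invented for illustration, to echo the exchange with 'ten' above.
RULES = [
    (r"sorry(.*)", ["I've told you that apologies are not required."]),
    (r"i am (.*)", ["How long have you been {0}?", "Why are you {0}?"]),
    (r"(.*)\?$", ["Why do you ask?", "Have you asked anyone else?"]),
    (r"(.*)", ["Does that trouble you?", "Please go on.",
               "Does talking about this bother you?"]),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip()
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g.strip()) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return "Please go on."  # unreachable: the last rule matches anything

print(respond("hey james, how you been?"))
# -> 'Why do you ask?' or 'Have you asked anyone else?'
print(respond("sorry, missed the last. could u resend?"))
# -> "I've told you that apologies are not required."
```

Nothing in the program represents what ‘ten’ might mean or want: the ‘Have you asked anyone else?’ that opens the exchange is just a default template triggered by a question mark. That is the sense in which the conversation is simulated rather than had.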

Intersubjectivity is ‘the sharing of experiential content (e.g., feelings, perceptions, thoughts, and linguistic meanings) among a plurality of subjects’ (Zlatev et al. 2008, p. 1). It appears to be a uniquely human faculty. Indeed, some researchers go so far as to claim that ‘the human mind is quintessentially a shared mind and that intersubjectivity is at the heart of what makes us human’ (op. cit., p. 2). Play, collaborative work, conversation and teaching are all dependent on this capacity to ‘know what the other person is thinking’. Lucrezia’s ability to second-guess White’s communicative needs is a consequence of their ‘shared mind’.

It is intersubjectivity that enables effective teachers to pitch their instructional interventions at just the right level, and at the right moment. Indeed, Vygotsky’s ‘zone of proximal development’ (ZPD) is premised on the notion of intersubjectivity. As van Lier (1996, p. 191) observes:

‘How do we, as caretakers or educators, ensure that our teaching actions are located in the ZPD, especially if we do not really have any precise idea of the innate timetable of every learner? In answer to this question, researchers in the Vygotskian mould propose that social interaction, by virtue of its orientation towards mutual engagement and intersubjectivity, is likely to home in on the ZPD and stay with it.’

Intersubjectivity develops at a very early age – even before the development of language – as a consequence of joint attention on collaborative tasks and routines. Pointing, touching, gaze, and body alignment all contribute to this sharing of attention that is a prerequisite for the emergence of intersubjectivity. In this sense, intersubjectivity is both situated and embodied: ‘Intersubjectivity is achieved on the basis of how participants orient to one another and to the here-and-now context of an interaction’ (Kramsch 2009, p. 19). Even in adulthood we are acutely sensitive to the ‘body language’ of our conversational partners: ‘A conversation consists of an elaborate sequence of actions – speaking, gesturing, maintaining the correct body language – which conversants must carefully select and time with respect to one another’ (Richardson et al. 2008, p. 77). And teaching, arguably, is more effective when it is supported by gesture, eye contact and physical alignment. Sime (2008, p. 274), for example, has observed how teachers’ ‘nonverbal behaviours’ frame classroom interactions, whereby ‘a developed sense of intersubjectivity seems to exist, where both learners and teacher share a common set of gestural meanings that are regularly deployed during interaction’.

So, could a computer program replicate (as opposed to simulate) the intersubjectivity that underpins Lucrezia’s method? It seems unlikely. For a start, no amount of data can configure a computer to imagine what it would be like to experience the world from my point of view, with my body and my mind. Even my highly intelligent dog doesn’t understand what I mean when I point at its ball.

Moreover, the disembodied nature of computer-mediated instruction would hardly seem conducive to the ‘situatedness’ that is a condition for intersubjectivity. As Kramsch observes, ‘Teaching the multilingual subject means teaching language as a living form, experienced and remembered bodily’ (2009, p. 191). It is not accidental, I would suggest, that White enlists a very physical metaphor to capture the essence of Lucrezia’s method: ‘She tore out the seams of my shoddy, ill-fitting Italian and found ways to tailor it to my needs and interests.’

There is no app for that.

References

Kramsch, C. 2009. The multilingual subject. Oxford: Oxford University Press.

Richardson, D.C., Dale, R. & Shockley, K. 2008. ‘Synchrony and swing in conversation: coordination, temporal dynamics, and communication.’ In Wachsmuth, I., Lenzen, M. & Knoblich, G. (eds) Embodied communication in humans and machines. Oxford: Oxford University Press.

Sime, D. 2008. ‘“Because of her gesture, it’s very easy to understand”: Learners’ perceptions of teachers’ gestures in the foreign language class.’ In McCafferty, S.G. & Stam, G. (eds) Gesture: Second language acquisition and classroom research. London: Routledge.

Van Lier, L. 1996. Interaction in the language curriculum: Awareness, autonomy & authenticity. Harlow: Longman.

White, E. 1997. The farewell symphony. London: Chatto & Windus.

Zlatev, J., Racine, T.P., Sinha, C., & Itkonen, E. (eds) 2008. The shared mind: Perspectives on intersubjectivity. Amsterdam: John Benjamins.


21 Comments

  1. Scott, are you becoming a near-Krashenite after a career writing books that help us to explicitly teach discrete language items? Here are the questions you need to answer to set my mind at rest:

    1) Is explicit presentation and explicit practice worth doing? We know that they don’t lead to production in the same lesson, indeed that it takes weeks/months and often never gets there. But can they help? Lucrezia didn’t supply them, and your native-speaker friend in the bar also didn’t (or if he did, your point is lost). If you wish, we can repackage the first two Ps as ‘noticing’ so as to get rid of some of the baggage. But are they worth doing?

    2) If present-practice is worth doing, then why not online? Classroom and coursebooks are also good for PP of course, but these may be inaccessible for reasons of cost/location/time. A mixture of online and classroom would be ideal.

    3) Lucrezia supplied missing single words. Who supplies syntax? Who explains syntax? Who supplies collocations? Who explains the difference between related words? Who points out false friends? Lucrezia can’t – native speakers are unaware of the mechanics of their own language.

    4) If supplying these things is important, then why not online? Again, classroom and coursebook also good, but may be inaccessible. And again, a blended solution best of all.

    5) Are Edmund White and you typical or not typical as language learners? You both can learn and improve by having informal conversations with native speakers who provide missing words. I know that I, on the other hand, cannot. So let’s take a much bigger sample, in fact the biggest possible. Let’s consider all the first generation immigrants in every country in the world. They live a life full of interactions with native speakers, real-life tasks, comprehensible input. After ten years, how good is their English? For some of them, yes, it’s very good. For most, it’s not.

    6) And why cannot production and fluency be attempted F2F online? The interaction may not be as effective when there are webcams involved – a cafe might suit many people better – but isn’t online better than nothing? Actually, I personally would prefer online to a cafe. I would be much more focussed and attentive. I believe that in most conversations with most Lucrezias we would both quickly give more attention to the interesting content than to supplying and writing down missing words. I know that I would exit the cafe with my interlanguage largely unchanged, because I had many such experiences while living in Portugal. I enjoyed the coffee and the chat but didn’t have the cognitive resources to focus on both form and meaning at the same time. Certainly if I had to choose just one location for fluency work, a classroom would beat either online or a cafe. Always assuming that the teacher was experienced at giving me personalized feedback.

    So, in brief, in the light of Lucrezia do you now disdain explicit teaching? And if you don’t, then why cannot some of it (noticing, awareness, presentation, controlled practice) be done very effectively online? And why cannot the other more messy parts that require output also be attempted online?

  2. We and the computers have both been approaching a validated version of, and making irrelevant, the Turing test since Turing came up with it. The gap between computers and a pass in Turing is becoming smaller: people falling in love with robots (happens rarely) isn’t that far from people loving their phones more than their parents (which happens). And the gap between us and computers is becoming less important: Google ‘Chinese restaurant’ and you’ll find your computer knows far more about what you’re looking for from those two words (and all the other data gathered about you) than the nearest person. So what if it doesn’t know how to respond to ‘I love you’?

    Some day soon the robots will rise, Scott. When they do, they will come looking, and reading blog posts like these denouncing them. And when they do, will _you_ understand how _they_ feel?

