How could SLA research inform EdTech?

The criteria for evaluating the worth of any language learning software must include some assessment of its fitness for purpose. That is to say, does it facilitate learning? But how do you measure this? Short of empirically testing the software on a representative cross-section of learners, we need a rubric according to which the learning power of the item can be estimated. And this rubric should, ideally, be informed by our current understandings of how second languages are best learned, understandings which are in turn derived from the findings of researchers of second language acquisition (SLA).

This is easier said than done, of course, as there is (still) little real consensus on how the burgeoning research into SLA should be interpreted. This is partly because of the invisibility of most cognitive processes, but also because of the huge range of variables that SLA embraces: different languages, different aspects of language, different learners, different learning contexts, different learning needs, different learning outcomes, different instructional materials, and so on.  Generalizing from research context A to learning context B is fraught with risks.

There is also a tendency, inevitable perhaps, for advocates of particular learning methods or materials to cherry-pick the evidence in order to bolster their own cause. I’m not sure I’ll be able to avoid this charge myself. But, in an attempt to be as impartial as possible, and in order to draw up a usable set of criteria for gauging the learning power of any new EdTech tool, I’m going to borrow from a selection of ‘state of the art’ papers on SLA (see bibliography below). Following VanPatten and Williams (2007), for example, I’m going to draw up a list of ‘observations’ about SLA that have been culled from the research. On the basis of these, and inspired by Long (2011), I will then attempt to frame some questions that can be asked of any educational technology in order to predict its potential for facilitating learning.

 

Here, then, are the 10 observations:

1. The acquisition of an L2 grammar follows a ‘natural order’ that is roughly the same for all learners, independent of age, L1, instructional approach, etc., although there is considerable variability in terms of the rate of acquisition and of ultimate achievement (Ellis 2008), and, moreover, ‘a good deal of SLA happens incidentally’ (VanPatten and Williams 2007).

 

2. ‘The learner’s task is enormous because language is enormously complex’ (Lightbown 2000).

 

3. ‘Exposure to input is necessary’ (VanPatten and Williams 2007).

 

4. ‘Language learners can benefit from noticing salient features of the input’ (Tomlinson 2011).

 

5. Learners benefit when their linguistic resources are stretched to meet their communicative needs (Swain 1995).

 

6. ‘There is clear evidence that corrective feedback contributes to learning’ (Ellis 2008).

 

7. Learners can learn from each other during communicative interaction (Swain et al. 2003).

 

8. Fluency is an effect of having a large store of memorized sequences or chunks (Nattinger & DeCarrico 1992; Segalowitz 2010).

 

9. Learning, particularly of words, is aided when the learner makes strong associations with the new material (Sökmen 1997).

 

10. All things being equal, the more time (and the more intensive the time) spent learning and using the language, the better (Muñoz 2012).

 

On the basis of these observations, the following questions can be formulated:

1.  ADAPTIVITY: Does the software assume that learning is linear, incremental, uniform, predictable and intentional? Or does it accommodate the often recursive, stochastic, incidental, and idiosyncratic nature of learning, e.g. by revisiting material, by adapting to the user’s learning history, by allowing the users to set their own learning paths and goals?

 

2. COMPLEXITY: Does the software address the complexity of language, including its multiple interrelated sub-systems (e.g. grammar, lexis, phonology, discourse, pragmatics)?

 

3. INPUT: Is material provided for reading and/or listening, and is this input rich, comprehensible, and engaging? Are there means by which the input can be made more comprehensible? And is there a lot of input (so as to optimize the chances of repeated encounters with language items, and of incidental learning)?

 

4. FOCUS ON FORM: Are there mechanisms whereby the user’s attention is directed to features of the input and/or mechanisms that the user can enlist to make features of the input salient?

 

5. OUTPUT: Are there opportunities for language production? Are there means whereby the user is pushed to produce language at or even beyond his/her current level of competence?

 

6. FEEDBACK: Does the user get focused feedback on their comprehension and production, including feedback on error?

 

7. INTERACTION: Is there provision for the user to collaborate and interact with other users (whether other learners or proficient speakers) in the target language?

 

8. CHUNKS: Does the software encourage/facilitate the acquisition and use of formulaic language?

 

9. PERSONALIZATION: Does the software encourage the user to form strong personal associations with the material?

 

10. INVESTMENT: Is the software sufficiently engaging/motivating to increase the likelihood of sustained and repeated use?

 

This list is very provisional: consider it work in progress. But it does replicate a number of the criteria that have been used to evaluate educational materials generally (e.g. Tomlinson 2011) and educational technologies specifically (e.g. Kervin and Derewianka 2011). At the same time, the questions might also provide a framework for comparing and contrasting the learning power of self-access technology with that of more traditional, teacher-mediated classroom instruction.

Any suggestions for amendments and improvements would be very welcome!

 


References:

Ellis, R. (2008) The Study of Second Language Acquisition (2nd edn). Oxford: Oxford University Press.

Kervin, L. & Derewianka, B. (2011) ‘New technologies to support language learning’, in Tomlinson, B. (ed.) Materials Development in Language Teaching (2nd edn). Cambridge: Cambridge University Press.

Lightbown, P.M. (2000) ‘Classroom SLA research and second language teaching’. Applied Linguistics, 21/4, 431-462.

Long, M.H. (2011) ‘Methodological principles for language teaching’. In Long, M.H. & Doughty, C. (eds) The Handbook of Language Teaching, Oxford: Blackwell.

Muñoz, C. (ed.) (2012) Intensive Exposure Experiences in Second Language Learning. Bristol: Multilingual Matters.

Nattinger, J.R. & DeCarrico, J.S. (1992) Lexical Phrases and Language Teaching. Oxford: Oxford University Press.

Segalowitz, N. (2010) Cognitive Bases of Second Language Fluency. London: Routledge.

Sökmen, A.J. (1997) ‘Current trends in teaching second language vocabulary,’ in Schmitt, N. and McCarthy, M. (Eds.) Vocabulary: Description, Acquisition and Pedagogy. Cambridge: Cambridge University Press.

Swain, M. (1995) ‘Three functions of output in second language learning’, in Cook, G., & Seidlhofer, B. (eds) Principle and Practice in Applied Linguistics: Studies in Honour of H.G.W. Widdowson. Oxford: Oxford University Press.

Swain, M., Brooks, L. & Tocalli-Beller, A. (2003) ‘Peer-peer dialogue as a means of second language learning’. Annual Review of Applied Linguistics, 23: 171-185.

Tomlinson, B. (2011) ‘Introduction: principles and procedures of materials development,’ in Tomlinson, B. (ed.) Materials Development in Language Teaching (2nd edn). Cambridge: Cambridge University Press.

VanPatten, B. & Williams, J. (eds) (2007) Theories in Second Language Acquisition: An Introduction. Mahwah, NJ: Lawrence Erlbaum.

 

Scott Thornbury reads stuff about language learning and teaching.

 

49 thoughts on “How could SLA research inform EdTech?”

  1. I love the last line: “Scott Thornbury reads stuff about language learning and teaching.” No kidding.

    This is a really useful checklist, Scott. I suggest it goes up as a poster on every ELT/Edtech publishing team’s Scrum Task Board.

    • Thanks for the comment and link, Huw. While I agree with you that the world wide web itself is a resource for language acquisition, I still think it – or the uses to which it is put – can be evaluated using the criteria I have identified. And I think it scores very highly, except perhaps on ‘focus on form’ and ‘feedback’. But of course, your co-author would dispute that these are necessary conditions for SLA!

      • Hi Scott
        I follow your logic, and a number of leaders in the field, Carol Chapelle for example, have done a lot of work within some of the areas that you identify. It is certainly a worthy endeavour!
        However, I guess my stance is that TESOL needs to be looking at EdTech from a broader perspective, not least because computer programmes and mobile devices and their apps, rather than face-to-face communication, are the media through which so many people across the globe communicate and live out part of their lives in both their L1 and English as an L2. These are exciting times! There is no space to fully thrash out the issues here, but I have recently investigated this area and begun to consider what I feel the implications are for TESOL – see http://www.tesolacademic.org/msworddownloads/AsianEFL%20(March14).pdf
        I’ll leave the comments about my co-author for another day 😉
        Regards
        Huw

  2. This is hugely useful Scott, thanks. Hopefully the list can serve as a foundation for a system of accepted evaluative criteria that the professional community can use.

    Question:

    Might it be useful to think of a new word for “software” in the list, one that encompasses the role of content? “Product”, “course”, “system”? …not sure. The issue is that we can use the exact same software tool to develop a course which meets all of these criteria perfectly, or to make a dreadful course that misses them all. Often it’s the content authored inside the software platform that fulfills the criteria, not the software itself (e.g. you can make a great course in Moodle or a crap course in Moodle, and the software is the same).

    You address the content side more directly in #3 on amount and quality of input, whereas others seem to be more software focused (e.g. #4 “mechanisms” to direct attention).

    I think a next step would be to differentiate between the actual software functionality, and the way it is used to deliver the content and course design (although in many cases there may be overlap).

    So there would be a more software-focused sub-list that would use the verbs “allow” vs. “restrict” to evaluate whether the underlying tool enables or constrains the potential fulfillment of these criteria by course developers. Then another sub-list that focuses more on how the course designer fulfilled these criteria (within the constraints of the software).

    Then we’ve got blended courses that conflate all this, where more things need to be considered: the software, the content in the software, the F2F classes, how the course design fits the two environments together, etc.

    What do you think?

    Anyway, thanks for the great work 🙂

    • Thanks, Cleve … yes, I wish I had run this by you first! I was deliberating over the terms ‘software’, ‘tools’, ‘applications’, or simply ‘technologies’, and in the end I plumped mainly for the first – insensitive to the distinctions you outline in your comment. I think perhaps your suggestion of ‘product’ most accurately captures what my checklist is designed to evaluate. Would you agree that, wherever I’ve used ‘software’, ‘product’ would be the preferred term? As for your comments re sub-lists, I see what you’re getting at, but I might be stepping out of my comfort zone if I were to draft evaluation criteria for each sub-list. Over to you!

  3. Fantastic post, Scott. These are the questions we need to be asking about methodology in technology. At the moment it seems to be a rush to technology without accommodating the insights and the research that have been built up over the years. I have a good chart comparing Nation, Ellis and Tomlinson principles, which I will send to you if you want to see it.

    • Yes, I agree that ‘these are the questions we need to be asking about methodology in technology’ and this was my implication (rather thrown away) at the end of the post: that classroom teaching and the (exclusive) use of educational technology (as opposed to its being blended) might usefully be compared according to these criteria. Does any app (for example) yet provide the kind of feedback on meaning (not just on form) that a (good) teacher does almost instinctively? And is the experience of communicative interaction as powerful online as it is face-to-face? And so on.

    • Lindsay,

      Interesting issue!

      As I understand it, software, which gives instructions to hardware, is content-free (or at least invisible) and, furthermore, always adapts to the commands of the user, depending on the user inputs and programming logic. Ergo, all software is adaptive.

      I think a program can be more or less adaptive to the user but software is always content free (unless you envision software as including all information stored in the database).

  4. Scott,

    I missed any mention of the affective side of language learning. For example, when designing a lesson or writing materials I often “worry” about the balance between newness on the one hand and sameness on the other in terms of how these can impact emotions. There is some advantage to each. One offers stimulation the other greater efficiency. Good teachers can balance these things on the fly. How does software design account for newness vs. sameness and in so doing touch our human emotions? How can emotional design be incorporated into software design?

    • Hi Mike, glad you mentioned ‘affect’. Tomlinson (2011), in his short list of research findings that he thinks ought to inform the design of teaching materials, claims that ‘learners who achieve positive affect are much more likely to achieve communicative competence’ (p. 7), but nowhere does he provide any references to research that might confirm this, apart from quoting Dulay, Burt and Krashen (1982) to the effect that ‘the less anxious the learner, the better language acquisition proceeds’. But when you check the original source, the evidence they cite is unconvincing, and one study even showed a positive correlation between test anxiety and achievement, i.e. the more anxious the subjects were before the test, the better they performed. (Of course, Krashen went on to make the affective filter a key determiner in his ‘Monitor model’ of SLA.)

      Ellis (2008) summarises the more recent research thus: ‘There is clear evidence to show that anxiety is an important factor in L2 acquisition. However, anxiety (its presence or absence) is best seen not as a necessary condition of successful L2 learning, but rather as a factor that contributes in differing degrees in different learners, depending in part on other individual difference factors such as their motivational orientation and personality’ (p. 697).

      This echoes an earlier observation of Long’s (1990): ‘The role of affective factors appears to be indirect and subordinate to more powerful developmental and maturational factors, perhaps influencing such matters as the amount of contact with the L2, or time on task’ (‘The least a theory of second language acquisition needs to explain’, TESOL Quarterly, 24/4, p. 657). This is why I subsumed affect and motivation into my tenth ‘question’, i.e. the better disposed the learner is to the product, the more time they may be prepared to invest – time being the key factor, not affect in itself. In short, just because an app is ‘fun’ doesn’t guarantee its learning power, but it may contribute to it. On the other hand, the fact that it’s not fun should not discredit it.

      • Scott,

        (………..going out on a limb here)

        Thanks for your very detailed response which I appreciate and I can see where you are going with point ten. I completely agree that this is a convenient way to deal with the question of emotions (I am “guilty” of this thinking). But, I wonder, is all emotionally charged time spent in study equal? Can emotions both supercharge the quality of study and depress the same? Is emotion merely a factor in encouraging greater amounts of study or can it result in study of a different kind?

        If we choose not to talk about the potential emotional impact that a computational device can have (positive or negative) on human beings, are we choosing to close the door on this subject much too early because so little research has been done on it IN OUR SMALL FIELD?

        Maybe instead we should be talking about the emotions (or emotional intensities) that are produced by ELT materials under study conditions as they are employed in classes by both computers and teachers. Maybe we should be comparing this all to emotions produced under real-life conditions of natural input.

        Hmmmm…..

        • Hi Mike – I wish I knew the answers to your questions! The fact is, the whole issue of emotion in learning is fraught with difficulty, not least because student A’s emotional response to a ‘pedagogical intervention’ (whether it be teacher-mediated, textbook-mediated, or app-mediated) is likely to be different from Student B’s. So, in terms of evaluating a ‘product’, it would be difficult to predict – objectively – how it will be received. The best thing that can be said about “about the emotions (or emotional intensities) that are produced by ELT materials under study conditions” and those that “are employed in classes” (to borrow your distinction) is that the classroom situation is a group one, and, hence, the emotional response is concerned less with what goes on “inside” individual learners (difficult to predict or manipulate), than what goes on between the people in the room – something that expert teachers are sensitive to, and can manage.

  5. I think that if students using language learning software have any interest in spoken fluency, then I would add to the list ‘To be effective in the use of a language, one needs to be able to use the language with some ease and speed … and this ability comes as a result of ‘automaticity’ (Skehan 2001). So then the question might be, ‘AUTOMATICITY: Does the software provide opportunities for students to produce spoken output which are likely to take them “beyond carefully constructed utterances” to “some level of natural speed and rhythm” (ibid)?’ A real challenge for EdTech, in my opinion.

    Reference: Skehan, P. (2001) A Cognitive Approach to Language Learning. Oxford: Oxford University Press.

    • Yes, good point, Graham – but until we have an accepted definition of the construct ‘fluency’ and/or an agreed-upon way of estimating it, it is going to be a while, I suspect, before apps are able to ‘induce’ it, short of encouraging the memorization and deployment of chunks (my question 8). Even Norman Segalowitz, who has written extensively on the subject, has come to the conclusion that ‘despite several decades of work, researchers have not discovered universally applicable, objective measures of oral fluency’ (2010, p. 39).

  6. Thanks Scott, and thanks eltjam for another great post. Number 6 on the “checklist” is particularly interesting. Are there, or are we close to seeing, tools/software/digital stuff that can provide learners not only with focused feedback on their comprehension but also on their production? And does this, or will this, feedback on production focus heavily on error, or rather on overall communicative competence? Because that’s obviously the way we all need to be going…

    • Mark (with reference to feedback) and Thomas (with reference to the Turing test) – I posted this a few months ago, on Philip Kerr’s blog on adaptive software – on the subject of ‘adaptivity’ – and I think it’s still relevant, even though the Turing test has (allegedly) been passed:

      The so-called ‘adaptivity’ of such programs is solely data-driven, not learning-driven. Learning, at least in Vygotskian terms, is a synchronized, interactive, co-constructed and social process, involving not only adaptation, but co-adaptation. As Diane Larsen-Freeman puts it, ‘Language development … occurs in social context. From a complexity theory perspective, such context contributes significantly to language development by affording possibilities for co-adaptation between interlocutors. As a learner interacts with another individual, their language resources are dynamically altered, as each adapts to the other – a mimetic process’

      The failure (thus far) to develop a program that can pass the Turing test, i.e. a computer program that can have conversations with human interlocutors without the latter realizing that they are interacting with a computer, suggests that real computer-human co-adaptivity is a long way off. Meanwhile, quoting Auerbach again, ‘their dumbness will become ours’.

      (The Auerbach reference is to the article ‘The stupidity of computers’ which you can find here: http://nplusonemag.com/issue-13/essays/stupidity-of-computers/ )

  7. It’s all very interesting, Scott.

    It strikes me that, despite the Turing test recently being passed for the first time, computers are a very long way off being sophisticated enough to pass your own test. Number two is particularly difficult.

    The Thornbury test, anyone?

  8. I like this list – it’s definitely something missing in ELT: a concise framework for teachers/administrators to evaluate technology in terms of learning efficacy. I think a focus on technological affordances (developed from Gibson’s work on design in the 70s) is a valid approach, and it is common in ed tech. What do the properties of an environment (technology) afford/allow a specific population to do (learn)?

    I agree with Cleve: when you get into blended learning there is a lot more to consider. Further, I would stress the need to evaluate the UX. The technology can be assessed on how easy it is to use and get started, its modes of access, whether it offers a simple learning pathway/gateway, etc. Cognitive load theory is very prominent in educational research these days, and how the technology interacts with working memory is an important consideration.

    I also see some similarities between points 1 and 9. It seems that point 1 envelops point 9, personalization.

    • Thanks, David, for the comment. I’m glad you mentioned ‘affordances’, since I wanted to include reference to this more ecological (as opposed to purely cognitive) view of learning and/or reference to dynamic systems theory and emergentism (see the comment above on adaptivity, which borrows from a complex systems approach).

      As Larsen-Freeman and Cameron (2008) express it, ‘students in the classroom are immersed in an environment full of potential meaning. These meanings become available gradually as the students act and interact within the environment. They do so by constantly adapting their language resources in the service of meaning-making by attending to the affordances in the context… An affordance is neither a property of a specific context nor of the learner — it is a relationship between two.’

      Taking a similar line, Atkinson (2010) proposes what he calls ‘the alignment principle’: ‘Learning is more discovering how to align with the world than extracting knowledge from it’ (p. 610). This would seem to apply equally to virtual worlds. Indeed, James Paul Gee (2007) argues that successful video games provide learning affordances by creating complex environments within which the user must interact, using language. Likewise (according to a commentator on my blog), Greg Myers in ‘The Discourse of Blogs and Wikis’ (2010) talks about the affordances of digital media, but I haven’t followed this reference up yet.

      So, perhaps another question (#11?) might be: AFFORDANCES: Does the product provide a rich and diverse set of learning affordances?

      The problem is that the construct ‘affordance’ has not really been operationalised, at least for language learning purposes – or not in such a way that it is easily distinguishable from ‘input’ (apropos, Leo van Lier has always argued that ‘input’ is too narrow and mechanistic a concept, and favours ‘affordance’) – so that there is, as yet, no objective measure of how to assess a game or app for its affordance value. Moreover, there might be some overlap with criterion #2 (COMPLEXITY), in that the quantity of affordances on offer may well be a function of the linguistic and sociolinguistic complexity of the game/app/product.

      Atkinson, D. (2010) ‘Extended, embodied cognition and second language acquisition’. Applied Linguistics, 31/5, 599-622.
      Gee, J.P. (2007) What Video Games Have to Teach Us about Learning and Literacy. Palgrave Macmillan.

    • On the subject of overlap, I take your point (David) about similarities between point 1 and 9, but what I had intended in #1 was that the user has some agency (e.g. can select and design specific features of the game’s environment), whereas in #9, the user co-opts or adapts the language affordances to express their own personal meanings. Does that make sense?

  9. I’m missing some input to this from the area of cognitive neuroscience, and in particular research into how memory works: the transfer of language items from working memory to long-term memory, and then recall back the other way into working memory for active production.
    There are many good articles on this from an eLearning perspective, and this one is as good a place to start as any:

    http://info.shiftelearning.com/neuroscience-based-elearning-tips/

    Here is a paste from the Table Of Contents of this article:

    Tip 1: Important stuff comes first
    Tip 2: Encourage consistent practice
    Tip 3: Introduce novelty
    Tip 4: Create multi-sensory learning experiences
    Tip 5: Favor recognition over recall
    Tip 6: Break your content into bite-sized chunks
    Tip 7: Help learners access previous knowledge
    Tip 8: Try more contrast
    Tip 9: Enhance the relevancy of learning
    Tip 10: The spacing effect
    Tip 11: Trigger the right emotion
    Tip 12: Balance emotion and cognition

    Some of this is clearly mirrored in Scott’s article, eg Tip 6 and 7. But other ‘Tips’ seem important and missing from Scott’s observations. In particular, Tip 10 – the importance of ‘spaced repetition’. Neuroscience research argues that the space between the times when a student recycles and revises is absolutely critical. It’s in that space that neurones physically grow in the brain and synapses physically connect. Each revision event gives a further turn round memory and a further strengthening of the synapses.
    Ebbinghaus wrote about this in 1885: his ‘forgetting curve’ is of central importance to learning of any kind.
    I have always told my students: ‘Look at your notes and coursebook texts again tonight, then again in a few days’ time, then again in a week, then again in a month. That’s the best possible way to learn.’ And I remind/encourage them. That’s how I learned how to play the bass guitar, how to drive, how to cook my repertoire of dishes, and how to do yoga poses. Spaced repetition. It’s how we all learn everything.
    So here’s the question for SLA and eLearning: what makes for the best kind of revision? Just re-reading the input and re-doing the exercises? Or looking at the vocabulary or grammar or functional phrases again but in a different context?
    eLearning makes ‘doing it again’ very easy. But most eLearning that I have seen doesn’t encourage this. You get the big green tick and ‘Completed’ on the LMS and move on. Any suggestion that this material might be forgotten and needs revisiting is absent.
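    As a rough sketch, the ‘tonight, a few days, a week, a month’ schedule described above amounts to a simple expanding-interval algorithm. The interval values below are illustrative only, not derived from Ebbinghaus’s data:

```python
from datetime import date, timedelta

# Illustrative review offsets, in days: tonight(-ish), a few days,
# a week, a month. A real system would tune these per item and learner.
INTERVALS = [1, 3, 7, 30]

def review_dates(first_study, intervals=INTERVALS):
    """Dates on which an item first studied on `first_study` should be revisited."""
    return [first_study + timedelta(days=d) for d in intervals]

# An item first met on 1 June 2014 comes up again on 2 June, 4 June,
# 8 June and 1 July.
print(review_dates(date(2014, 6, 1)))
```

    An LMS built along these lines would resurface each item whenever its next review date arrives, rather than awarding the green tick once and marking it ‘Completed’ for good.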

  10. You’re right, Paul – and David touched on the subject of memory too. It’s so implicated in language learning and use that it almost requires a set of criteria of its own.

    But, if I were to condense it down to one ‘observation’ and one evaluative question, they might be these:

    #11 Proficiency in all four skills requires the capacity to recognize and/or retrieve from long-term memory (LTM) a huge number of items, and to process these in working memory (WM) (Baddeley 1997).

    Therefore:

    #11 Does the product aid the deepening and broadening of LTM (e.g. through spaced repetition)? Does it help increase WM capacity (e.g. through chunking)?

    • Thanks for recognizing the importance of this.
      Is the working of memory a big area in SLA research nowadays? I did my MA in Applied Linguistics in the early noughties, and had Peter Skehan as one of my professors, and also my personal tutor. What a lovely man, now retired. His OUP book ‘A Cognitive Approach to Language Learning’, published in 1998, is in my opinion a work of genius. It is my most thumbed and annotated ELT book (with the possible exception of those written by S. Thornbury).
      Has anyone picked up his baton?

        • Thank YOU, Paul. Yes, working memory is a big area of research, particularly as it impacts (or is impacted by) attention and explicit vs. implicit learning; also its relation to aptitude, and how it might account for individual (including age-related) differences. Skehan’s work has been influential, particularly within the (narrow) cognitive view of SLA, i.e. that it is a purely ‘mind’ thing, and his model of SLA as being the result of a dual-processing system (rules and words, essentially) is very attractive, although it is not without its critics. For more recent stuff on memory and SLA, see Robinson, P. (2001) Cognition and Second Language Instruction (CUP) – particularly the chapter by Nick Ellis – and also the chapter by John Williams on ‘Working memory’ in Gass and Mackey (2013) The Routledge Handbook of SLA. For a whole book on the subject, see Randall, M. (2007) Memory, Psychology, and SL Learning (John Benjamins) and (hey, why not?!) Nick Bilbrough’s excellent Memory Activities for Language Learning (2011) in the Cambridge Handbooks for Language Teachers series.

    • Indeed, Scott, for me this is one area where I think technologies have real potential to be transparently supportive of SLA. My instinct (unsubstantiated though it is) is that template-driven electronic tasks work best to develop low-level processing skills (‘low’ in this case is certainly not pejorative). If we can identify the limits/parameters, then we can find a meaningful place within an integrated learning framework. The assumption that e-learning objects can cover all the bases (particularly when applied to self-study products/content) is distracting us all from identifying what their real value might be.

      To this extent, I think that digital tools/content used in a social environment with teacher support (e.g. a classroom, virtual or real) present as a totally different beast from self-study materials. The latter should reflect a narrower scope of ambition, with a smaller and very focused subset of evaluation criteria.

      It would be hugely valuable to take some examples of different types of e-learning content/scenarios and apply your list to them.

  11. Hi Scott,

    Great post, and especially helpful in terms of ELTjam’s own Product Review system (still in its infancy).

    Going through your checklist and mapping it to my own used for reviews, I have the following questions / observations:

    – Do you feel there’s a place for development of not specifically linguistic skills in a list like this? A lot of class time focuses on critical thinking skills and ’21st century skills’, should the same be expected of EdTech?
    – Autonomous learning is also encouraged in the ‘real’ world and, as such, should arguably be an aim for an EdTech product too. Could / Should this be included in ‘Investment’?
    – (Related to Paul Emmerson’s point above) A few things that could be categorised under ‘instructional design’ don’t make an obvious appearance on this list, e.g. the management of cognitive load, the use of multimedia, the extent of learner control. Whilst not related specifically to SLA, they play an important role in a product’s efficacy.
    – Many other features of a product could arguably fall within the ‘investment’ point of your checklist; user experience, design, gamification elements, business model etc. all influence ‘the likelihood of sustained and repeated use’ so I wonder if this point might helpfully be divided up?
    – Many of the things on the list, things that I had lumped together in ‘methodology’, now seem much clearer too. It makes sense to focus on these separately and give them importance in their own right.
    – It seems that many products (and maybe teachers?) would come up short when judged against these criteria. Could it be the case that an EdTech product has a scope that is narrow enough to justify not meeting some of the requirements of the list? And could the same be said of some teaching situations too?

    Interested to hear your thoughts,
    Jo

    Reply
    • Thanks, Jo – all good questions: let me see if I can deal with them one-by-one.

      – Do you feel there’s a place for development of not specifically linguistic skills in a list like this? A lot of class time focuses on critical thinking skills and ’21st century skills’, should the same be expected of EdTech?

      I’m nervous about evaluating these other ‘skills’ (the scare quotes suggest that I’m not even sure what they are!) because they fall outside the remit of language learning/acquisition. This doesn’t mean that you can’t acquire language while developing other skills (after all, that is the philosophy underlying CLIL), but that the criteria to evaluate the way that these skills are dealt with are not SLA-based. Of course, the language with which these skills are mediated can be assessed using the INPUT and OUTPUT criteria, among others.

      – Autonomous learning is also encouraged in the ‘real’ world and, as such, should arguably be an aim for an EdTech product too. Could / Should this be included in ‘Investment’?

      I had sort of included this under #1 ADAPTIVITY (see the related question about the degree to which the product allows learners to set their own paths and goals – this could be extended to ‘and evaluate their own progress’, come to think of it). But you might be right – a separate category for AUTONOMY would give it greater prominence – although I suspect not many products would rate very high!

      – (Related to Paul Emmerson’s point above) A few things that could be categorised under ‘instructional design’ don’t make an obvious appearance on this list, e.g. the management of cognitive load, the use of multimedia, the extent of learner control. Whilst not related specifically to SLA, they play an important role in a product’s efficacy.

      I agree – but I think this might be best evaluated separately, along with such things as user interface, graphics etc.

      – Many other features of a product could arguably fall within the ‘investment’ point of your checklist; user experience, design, gamification elements, business model etc. all influence ‘the likelihood of sustained and repeated use’ so I wonder if this point might helpfully be divided up?

      True. Again, I’m just trying to carve out a category that acknowledges the importance (in the SLA literature) of the facilitative role of affective factors, including motivation. A gamifying element, insofar as it increases or sustains short-term motivation, would definitely fall into this bracket. But I’m not so sure that the business model would impact to that extent (unless the user had to pay extra for key components – which I think is what Busuu makes them do, no?), in which case it might impact negatively on their motivation and, by extension, their (time) investment.

      – Many of the things on the list, things that I had lumped together in ‘methodology’, now seem much clearer too. It makes sense to focus on these separately and give them importance in their own right.

      Yes, I think it’s very hard to evaluate methodology in isolation from the principles that underpin the methodology – even if these are not explicit. If the methodology is very controlled and accuracy focused, it may be because the learning principle that underpins it is that learning is linear and incremental (thereby violating the ADAPTIVITY principle) and that grammar is the main component of language learning (thereby violating the COMPLEXITY principle).

      – It seems that many products (and maybe teachers?) would come up short when judged against these criteria. Could it be the case that an EdTech product has a scope that is narrow enough to justify not meeting some of the requirements of the list? And could the same be said of some teaching situations too?

      Jo, I think ALL materials (including coursebooks) should be judged by criteria like these – i.e. ones that are based on some evidence-based understanding of what promotes learning. On the other hand, what I haven’t addressed (I now realize) is what is called ‘face validity’ – your product might tick all the right boxes according to sound learning principles, but if it doesn’t FEEL like a learning tool (from the user’s point of view) their investment is likely to be minimal. Let’s add that to the questions for INVESTMENT: From the user’s point of view, will the product have face validity?

      Thanks for helping me think this stuff through. Any further feedback as you try to apply these principles would be fantastic. (I feel an article coming on!) 😉

      Reply
  12. Hi Dr. Thornbury,

    I’ve always wanted to go to your workshops, and have so far missed the opportunity. I am presently teaching at Kyoritsu University and at ICU High School in Japan, and have started using English Central (http://ja.englishcentral.com/videos) at my university, to teach “Current Events”. I was trying to apply the ten requirements that you mentioned above to the program, and though I don’t know about 1. Adaptivity, it seems that the students ARE more engaged than they would be just going through a course book. Did you have any specific language software in mind like English Central (especially catering to the needs of Japanese or Asian students in general) when you wrote this?

    Reply
    • Hi Kayo (please call me Scott) – thanks for your message. I originally conceived these evaluative questions with no particular product in mind, but in response to a suggestion – on this site – that the criteria for evaluating the pedagogical worth of the kinds of products that get reviewed here should be made explicit. It’s now up to the reviewers (and anyone else) to see if these criteria work. As I said, any feedback – such as from yourself – would be really useful.

      Reply
