AI’s attention deficit


The deployment of large language models (LLMs) in education rubs up against a fundamental human problem: attention and its scarcity.

Caricature of a Victorian classroom. Credit: Heritage Image Partnership Ltd / Alamy Stock Photo

Equations Show

Demonstrate how to solve equations with a video in the style of a cooking show. Use props to represent variables and numbers. Combine it with other videos in Clips to create an equations cookbook.

Math Man

Create an illustrated book about a superhero who uses math in everyday life. Sketch characters who encounter challenges in which they need to use math to solve a problem. Have the superhero enter the scene and use math to save the day!

Apple, Everyone Can Create

If you want an example of why technology hasn’t improved education, you could do worse than flick through Everyone Can Create, a glossy document setting out Apple’s vision of how technology can be used in the classroom. Barely any of it pays any attention to the science of how humans learn. Most of the recommended lessons are designed to be fun activities that maximise students’ time using Apple devices and applications.

The guide is a perfect example of the problems with education technology: too often it is all about the technology and not about the education. If technology is going to improve education, then first we have to acknowledge the reality of how humans learn. Then, we have to work out how technology can – and cannot – support that process.

If you typed the prompt ‘traditional classroom’ into an AI image generator, you might see a picture of desks in rows, a teacher at the front talking and writing on a blackboard, and some bored students with their heads slumped on the desks. This is the caricature of the traditional classroom, and it is one that has come in for some legitimate criticism. Many students have sat in classrooms like this and learned very little as the teacher’s words washed over them.

The progressive alternative is a classroom where students are less passive and can make choices about what and how they learn. Instead of inauthentic lectures and book learning, students would engage with real tasks and naturally discover the knowledge they need to solve them.

These solutions to the passivity of the traditional classroom create more problems. If students are left to discover the knowledge they need, they are very likely to discover the wrong facts and acquire misconceptions. Real-world tasks might seem intrinsically motivating and exciting, but they are also complex, and easily overwhelm working memory. It is also quite hard to come up with genuine real-world problems that school students are capable of understanding, which is why you end up with contrived activities like maths cookbooks.

The most successful methods of classroom instruction address the weaknesses of both the traditional and progressive methods. They involve carefully constructed curriculums, which break down complex tasks into smaller steps, provide explicit instruction for each step, and can be adjusted based on frequent feedback from student responses. These methods involve strong teacher guidance and high-quality curriculums, which stop students discovering the wrong thing. They also involve frequent questions and answers, which keep students actively engaged.

Synthetic phonics reading programmes are perhaps the most famous and successful application of these principles. These programmes break down the complex skill of reading into smaller steps by teaching the correspondences between letters and sounds. They also provide plenty of opportunities for practice and repetition, so that students are able to read fluently and automatically, without having to stop and think about what they are doing.

These principles can be applied to any content area, and are generally referred to as the principles of direct instruction. One of the leading exponents of this approach, Siegfried Engelmann, said that direct instruction is all about the ‘picky, picky detail’. Tiny differences in the examples presented to students can make a big difference to their understanding. Suppose you want to teach students what a verb is – you could start with a short explanation and a few examples.

Jack sprinted to the shop.

She plays football every day.

Trains are a type of transport.

The problem with these examples is that the verb is always the second word in the sentence, so students might assume that this is in fact a defining feature of a verb. Suppose you then give them the following sentence and ask them to find the verb.

Every day, she plays football.

If they reply ‘day’, you’ll know they’ve made the wrong inference.

Engelmann wrote a fascinating book called Could John Stuart Mill Have Saved Our Schools? in which he argued that the principles Mill outlines in A System of Logic could be used very effectively in education as a means of structuring examples and predicting the inferences that will be made from them.

Is direct instruction traditionalist or progressive? One might see it as a blend of the two: it retains the traditionalist authority of the teacher, but combines it with an intense focus on feedback from students. In practice, direct instruction is generally disliked by progressives and viewed more favourably by traditionalists. In England, a group of teachers sympathetic to these principles have nicknamed themselves ‘neo-trads’, which is probably as good a label as any.

If we apply these insights to education technology, they explain a lot about what has and hasn’t worked in the past and what is more likely to work in the future. Providing high-quality content at scale does not on its own solve the education problem. Education does require interaction. It requires mental activity on the part of the student, and it requires teachers to adapt their instruction based on student responses. On their own, books cannot provide this. If they did, then technology would have solved the problem of education decades, even centuries, ago: if not with the printing press then with the public library. But while the printing press and public libraries are wonderful, they are not interactive.

Interactivity on its own is not enough, and this explains why many modern and progressive forms of education technology don’t work either. They are high on interactivity, but low on structure, guidance and quality content. They assume that students can learn by just tootling around on the internet and clicking on links to their favourite YouTube videos. They fail to recognise the careful sequencing of content and the amount of practice required for students to grasp new ideas.

The most successful forms of education technology are, therefore, those that combine interactivity with structured curriculums. Some programmes like this exist already, typically for maths and foreign languages, where the sequencing of content is more easily agreed upon. In developed countries, they are often used for homework, but in developing countries that lack enough schools and teachers, they are sometimes used as the main form of instruction. Perhaps the most famous consumer-facing programme is Duolingo, which teaches foreign languages using many of the principles discussed above.

These applications were all designed before LLMs existed. Could LLMs improve on these programmes, and if so, how? There are a couple of promising avenues. Practice and repetition are a vital part of learning, and creating enough varied examples and questions for students to practise is one of the most time-consuming parts of curriculum creation. LLMs are very good at creating content, so they could potentially be used to create questions and examples. They could also be used to create entirely new curriculum models for niche subjects. One of the most difficult parts of creating a direct instruction curriculum is breaking down a complex skill into its constituent parts. Really good models tend to exist only for big, popular subjects like maths, early reading and widely spoken foreign languages.

Are LLMs capable of performing these tasks? Recall what Engelmann said about direct instruction: that it is all about the picky detail. LLMs are terrible at the picky detail. They frequently make up entirely new facts, get things wrong, change their minds and do so in a way that is incredibly plausible. A few examples: I asked GPT-4, probably the most cutting-edge model, to list all the letter combinations that can represent the sound /p/. This is an elementary part of any synthetic phonics programme. It identified ‘p’, but missed ‘pp’. I asked it to list all the letter combinations that can represent the sound /t/: not only did it miss some, but it listed some sounds represented by the letter ‘t’, a classic misconception that frequently confuses students. On other topics it makes errors, too. I asked it what Joseph Chamberlain’s view of the 1870 Elementary Education Act was. It said he was supportive (he wasn’t). I asked it to list sportsmen who had played cricket and football for England, and it included Andrew Flintoff (he didn’t). These hallucinations, as they are termed, are a major block to using LLMs for any job where accuracy really matters.

It’s a sign of how much faith our societies have in technological progress that most people assume that future models will easily eliminate these errors. LLM advocates will point to a couple of reasons for optimism. New models will be trained on more data, which they say will make them more powerful. And other applications can be used to patch up weaknesses: for example, LLMs are terrible at maths, but the latest models work around that problem by sending maths problems away to external calculators. Educational applications might be able to combine LLMs with other techniques: a new maths chatbot, Rori, combines 500 micro-lessons with an LLM-powered chat function, and has shown some promise in raising attainment in schools in Ghana.

LLM sceptics, however, say that errors are features of LLMs, not bugs. They’ll argue that hallucinations won’t be solved by more powerful models, because they are fundamental to the way that LLMs work. They’ll say that expecting current models to keep improving is like trying to reach the moon by growing ever-taller trees: you might see a lot of rapid initial progress, but it will level off long before you get close to your final goal. They’ll also say that even the workarounds with external calculators can fail. One popular maths education website, Khan Academy, has integrated an LLM-powered chatbot into its system, but a recent review in the Wall Street Journal suggested it was still making elementary maths errors.

Who is right? We will see fairly soon whether high-quality, accurate LLM-powered applications are possible and whether they get widely used. But even if LLMs are not capable of the required accuracy, existing educational programmes could continue to be developed using content created by human experts instead.

Education technology is powerful because it is scalable, easily personalised and flexible. A virtual tutoring system can provide individually tailored instruction to millions of students whenever they want it, something that is impossible for even the very best human teachers. But digital education has its downsides, too. Most programmes require students to have their own laptop or tablet, and most of the instruction they provide is screen-based. Currently, it does not seem as though students can learn as well through a screen as they can from a human. The enforced school closures brought about by Covid-19 gave the whole world the chance to experiment with many different types of on-screen education, and the results were underwhelming. Generally speaking, countries with the longest school closures tended to have the steepest falls in student attainment. Not only that, but online adult education courses have always had extremely high drop-out rates.

Perhaps that’s partly because current on-screen education is not as good as it could be, but it may also be because learning on-screen is intrinsically hard. In order to learn, we have to pay attention to whatever it is we are learning. Modern digital devices are distraction machines, often featuring dozens of applications all competing for our attention. They are plugged into the internet, where the biggest companies profit according to how much attention they can capture, and have developed all kinds of sneaky features, such as endless scrolling and autoplay videos, that make it hard to tear yourself away.

Educational apps can use some of these techniques themselves: many have elements of gamification, ‘streaks’ that reward you for logging in every day, and social features such as leaderboards and like buttons. There is nothing wrong with these, but aping the features of consumer entertainment means taking part in an arms race that education will always lose. Most educational apps are already many times more engaging than an old-fashioned textbook, but they’re not competing against old-fashioned textbooks: they’re competing against apps that can use all the same features, but aren’t constrained by having to teach something.

What’s needed is a space where this competition doesn’t exist, and students can focus on learning without the pull of social media or instant messaging. Blending the strengths of technology with the strengths of a distraction-free classroom may require technological innovation itself: devices may need to be adapted to have less functionality and connectivity, so message notifications don’t interrupt students trying to study. Some lessons will always need to be screen-free, and they could be integrated with online curriculums by scanning and digitising paper scripts.

As far back as the 1970s, the economist Herbert Simon observed that in a world where information is free, what will become scarce is whatever it is that information consumes. And information consumes human attention. I’ve spent most of this essay discussing the technical difficulties of creating good education materials based on sound principles of learning, and getting LLMs to work reliably is one of the biggest. But the implication of Simon’s argument is that this may be the easy part: the hard part is getting students to pay attention to whatever you create. Technological change happens not just through the invention of new technologies, but through the integration of those technologies into human routines and institutions. If we want to improve education, then that is the challenge we have to meet.


Author

Daisy Christodoulou