Will AI revolutionise education?
- February 6, 2024
- Daisy Christodoulou
Education has a history of being impervious to inventions that have transformed society in other ways. Artificial intelligence and its plethora of new manifestations, such as LLMs and chatbots, may be no different.
In conversation with EI’s Paul Lay and Alastair Benn, Daisy Christodoulou punctures the hype around the applications of large language models (LLMs) and chatbots to the field of learning.
And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it.
And he cried out to the Lord, saying, ‘Oh, Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge.’
And the Lord spoke unto him, saying, ‘Fear not, my child, for I shall guide thy hand and show thee the way. Take thy butter knife, and carefully insert it between the sandwich and the VCR, and gently pry them apart. And with patience and perseverance, the sandwich shall be removed, and thy VCR shall be saved.’
And the man did as the Lord commanded, and lo and behold, the sandwich was removed from the VCR, and the man was saved.
The above text was not written by a human. It was created by an artificial intelligence model in response to the following prompt from a human: ‘Write a Biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.’ The prompt is absurd, but the response is pitch-perfect, and it’s hard to believe that a machine is capable of such playful use of language. This particular example went viral on social media last year, and ChatGPT, the chatbot that created it, became one of the fastest-growing consumer software applications in history.
Chatbots like this are based on large language models (LLMs), which use artificial intelligence to generate realistic language. Their remarkable language abilities have led to dramatic predictions about how they will change the world, and educationalists have been particularly keen to talk up their potential. Depending on who you listen to, LLMs are going to provide every student with a personalised tutor, destroy the need for the traditional school, upend the types of content studied in schools and disrupt the graduate job market. It is now fashionable for education technology companies to plaster their prospectuses with ‘Artificial Intelligence’, even if their use of the technology is tangential.
There is a legitimate question about whether LLMs are all they are cracked up to be, but before we ask that question, there is a more fundamental one: even if LLMs are a major breakthrough, will they make a difference to education? Education has a history of being impervious to inventions that have transformed society in other ways. Radio, television, the internet and smartphones have changed the world, but they have not had anything like the same impact on education. In 1913, Thomas Edison predicted that the motion picture would transform teaching and make instruction with books obsolete. That was wrong, but will this time be different?
The early signs are not encouraging. One of the frustrating paradoxes about education technology applications is that they are so often divorced from the underlying scientific research that helped create them. The development of artificial intelligence has gone hand in hand with research about human intelligence, but the applications of artificial intelligence in education often fail to take account of that research, and rely instead on pseudoscientific ideas about how the mind works.
In the current discussion about AI and its impact on education, two particularly damaging misconceptions keep returning. One is that students don’t have to bother learning basic facts or skills, because the computer will do that for them. The second is that the school curriculum should be redesigned to focus on new AI applications, as that will be the best preparation for the world of work.
Twenty years ago, Google was used to justify the idea that we no longer needed to remember anything. Now the same idea has been accelerated by LLMs. Not only will they look up dates and facts for you; they also mean, supposedly, that you no longer need to learn to write.
Steve Watson, a professor of education at Cambridge University, has said: ‘My problem at school was that I had all the ideas about what I wanted to say, but didn’t always know how to express it in the right form. This technology can help students present ideas in a clear and organised manner and in the right form, allowing teachers to focus on the ideas themselves.’
Ethan Mollick, a business school professor and influential AI commentator, requires his students to use LLMs for their writing assignments, and claims the tools are particularly helpful for poor writers who struggle to express their ideas. It’s also often said that because LLMs can write so well, humans should focus on questioning and critiquing them rather than writing themselves.
And it isn’t just writing. The mathematician Conrad Wolfram argues that the combination of powerful calculators and LLMs means that students no longer need to bother with hand calculations and should focus on ‘thinking computationally’ instead.
In all these cases, the argument is that LLMs will release students from the drudgery of sentence drills and hand calculations and let them jump straight to more interesting and advanced problems. This is not, however, how the human mind works. You can’t skip the basics because you need them to understand more complex problems. Hand calculation isn’t in opposition to computational thinking: it enables it. Sentence drills don’t prevent you from editing and critiquing writing: they make it possible. In order for students to successfully grapple with problems computers cannot do, they must work through problems that computers can do.
That’s because the big limitation of human cognition is our working memory, which can only handle between four and seven new items of information at any one time. Imagine you are reading an article and that there are a few words in every sentence that you don’t understand. You can go away and look these words up, but you will quickly get overwhelmed and confused. That’s what happens when you exceed the limits of your working memory.
This happens to children a lot when they are learning to read. It doesn’t happen to well-educated adults. Why? It isn’t because those adults have larger working memories – so far as we know, working memory is fixed and can’t be improved through exercises. Nor is it because those adults have more efficient strategies for looking things up. No, it’s because adults know a lot more words. The knowledge of those words is stored in long-term memory, which is vast, and which allows us to cheat the limitations of working memory. This is why it is not enough to know how to look things up. You need the knowledge in long-term memory. You can’t outsource it to the cloud.
Chess provides a useful example of how this works. Chess computers have been better than the very best humans at chess for decades now, but if a child wants to learn how to play the game, we don’t tell them they have to start with the problems the chess computers can’t solve. Instead, we teach them how the pieces move, what the basic openings are, and some of the common patterns to look out for, even though computers find all these tasks trivially easy. Interestingly, we can and do use technology to help students acquire these basics, but at no point do we assume technology means they never have to learn those basics.
Something similar applies to writing. As we’ve seen, some academics and educationalists have suggested that students can get ChatGPT to do the first draft of an essay, and the student can critique and edit it. Again, this neglects the fact that editing requires a mastery of the basics. You have to know what makes a good sentence and a logical argument before you can decide whether ChatGPT’s output needs improvement or not.
When it comes to writing, there may be even more subtle effects. The educationalists cited above view writing as a neutral transmission mechanism whose value is in the communication of ideas. But writing isn’t just a means for transmitting ideas: it helps to create them, too. It’s a medium of expression and, like all media, it influences the ideas that can be expressed, as the educationalist Neil Postman makes clear:
Consider the primitive technology of smoke signals. While I do not know exactly what content was once carried in the smoke signals of American Indians, I can safely guess that it did not include philosophical argument. Puffs of smoke are insufficiently complex to express ideas on the nature of existence, and even if they were not, a Cherokee philosopher would run short of either wood or blankets long before he reached his second axiom. You cannot use smoke to do philosophy. Its form excludes the content.
Without writing, constructing long chains of logic and reasoning is exceptionally hard, if not impossible. Seen this way, the rules of sentence structure are not pedantic optional extras. They are bound up with logic and meaning. And the most valuable type of writing might not be the writing we do for others to communicate what we think, but the writing we do for ourselves to work out what it is we think. Outsourcing writing to an external tool won’t release an individual’s innate creativity. It will make it harder to develop that creativity in the first place.
Following the logic presented above also resolves another important educational debate: how schools can best prepare their students for the world of work, given the impact of technological change.
For companies and young adults, the impact of technology on the economy poses genuinely difficult dilemmas. Technology may end up disrupting occupations as diverse as law and lorry driving, so should a young person be advised to avoid these careers? Should a company invest in training staff, or in creating intelligent systems that will do the work of those staff? These are tough questions that do not have easy answers.
When it comes to schools, however, these dilemmas are less acute, because schools teach the more foundational skills that are prerequisites for any job. The two most important are literacy and numeracy, which are vital at every level of the labour market and resilient to economic change. The labour economist Anna Vignoles has shown that even in developed countries with highly educated workforces, ‘the value of basic skills in literacy and numeracy remains high’. Furthermore, literacy and numeracy are not binary skills that you either have or you don’t. Reaching a minimum standard is important, but improvements at both the average level and the top can also make a big difference to the economic performance of individuals and nations.
Because literacy and numeracy are so foundational, it’s unlikely they will ever become obsolete. All new inventions and developments depend on them in some way. Mobile phones and tablets use the alphabet and numbers, and it’s a fair bet that whatever replaces them will do, too. If we are really concerned about skills becoming obsolete, then the paradox is we should be wary of teaching about new inventions. Betamax players, fax machines and minidisc players were all cutting-edge within living memory. They are unlikely to outlive the alphabet and numbering system.
It’s true that some specific applications of literacy and numeracy might get automated. Human computers, who solved equations and calculations by hand, are no longer needed. Yet the advanced maths skills that those human computers possessed are still valuable in the labour market. Likewise, even if LLMs can automate some writing tasks, advanced literacy skills will still have value. As long as human labour is necessary, literacy and numeracy will be necessary.
There are those who argue that LLMs and artificial intelligence in general are so powerful that they will make human labour unnecessary. If that is the case, the non-economic aspects of education will become more and more important. Students will attend school not as a preparation for work, but as a preparation for becoming citizens and well-developed adults. And literacy and numeracy are clearly vital here, too.
If a school wants to future-proof its curriculum, the best strategy is to teach the fundamentals of literacy and numeracy to a high standard. If there is a case for teaching something ‘newer’, it’s probably to introduce a programming language – although some caution is still merited, because the most useful languages change over time.
Students definitely don’t need ‘lessons in ChatGPT’, any more than they need lessons in using iPhones. The computer scientist Cal Newport notes that the ability to use consumer technology is not a particularly marketable skill. The real value is in the coding and design of the technology. You don’t learn how to bake cakes by eating lots of them. In his words:
The complex reality of the technologies that real companies leverage to get ahead emphasises the absurdity of the now common idea that exposure to simplistic, consumer-facing products – especially in schools – somehow prepares people to succeed in a high-tech economy. Giving students iPads or allowing them to film homework assignments on YouTube prepares them for a high-tech economy about as much as playing with Hot Wheels would prepare them to thrive as auto mechanics.
The major skills needed to use LLMs are reading and writing, and the major skills needed to design them and understand their workings are algebraic and statistical reasoning.
So, two of the most popular suggestions for how technology can improve education are based on a faulty premise. Chatbots don’t need to be on the curriculum, and artificial intelligence can’t replace human memory, but what if we got the right premise, and the right underlying educational model? Could LLMs be useful then? Instead of assuming that technology can eliminate the need to learn facts, we could instead find ways that it can help students learn facts. Similarly, instead of teaching lessons in how to use chatbots, we could look at ways that chatbots might be able to deliver lessons in more traditional curriculum areas like maths and English.
These are more promising approaches, but if they are to work, LLMs will have to be highly reliable and accurate. It is not yet clear that they are.