In this column, in my textbook, and in a speech, “What Society Must Require from AI,” that I am currently giving around the world, I document some of the hype, exaggerated claims, and unrealistic predictions that workers in the field of artificial intelligence (AI) have been making for over 50 years. Here are some examples. Herb Simon, an AI pioneer at Carnegie Mellon University (CMU) who later won a Nobel Prize in Economics, predicted in 1958 that a program would be the world’s chess champion by 1967. Marvin Minsky of MIT and Ray Kurzweil, both AI pioneers, made absurd predictions (in 1967 and 2005, respectively) that AI would achieve general human intelligence by 1980 and by 2045. John Anderson, discussed below, made the absurd prediction in 1985 that it was already feasible to build computer systems “as effective as intelligent human tutors.” IBM has recently made numerous false claims about the effectiveness of its Watson technology in domains as diverse as customer support, tax filing, and oncology.
I am particularly interested in the use of computers in education. I have watched and participated in computer innovations for education since I worked with Seymour Papert and Wally Feurzeig on the first version of the LOGO language in 1966, and since I taught a course focusing on social issues raised by technology in education in 1972.
The field of intelligent tutoring is an exciting area of AI research. It was pioneered by John Anderson and his collaborators at CMU in the 1980s. However, progress has been slow because of difficulties in specialized topics such as user modelling, that is, understanding what a student knows, what misconceptions he or she may have, and how he or she derives an answer to a question. The biggest successes have been in teaching subjects such as mathematics, where answers and methods of reasoning are well defined. There have been few other successes.
This past week, I participated in a day-long seminar at the UNESCO Mahatma Gandhi Institute of Education for Peace and Sustainable Development (MGIEP). The topic was the use of AI for teaching social and emotional learning, which MGIEP defines as comprising empathy, mindfulness, compassion, and critical inquiry (EMC2). EMC2 is a wonderful idea, but I argued that AI could not yet play a fundamental role in such teaching because of the following serious problems:
1. It is often unclear whether one is communicating with a person or an artificial agent.
2. AIs are often incompetent, unreliable, and inconsistent.
3. AIs have no common sense and no intuition.
4. AI decisions and actions, especially those of machine learning, are not transparent and cannot be understood.
5. AI decisions and actions are often biased and unfair.
6. AIs exercise no discretion or good judgment in deciding what to say to people and when to say it.
7. We have no reasonable way of assigning and enforcing accountability and responsibility for algorithmic decisions and actions.
8. Finally, we use AIs even though we do not trust them.
The temptation to view AI as a near-term solution for educational systems with insufficient budgets and resources manifests itself around the globe. For example, in my home province of Ontario, where conservative governments are typically at odds with teachers’ unions over issues including salaries and benefits, the current government has in the past year been discussing allowing high school students to do all their work online and making e-learning courses a requirement for high school students, with the goals of slashing education budgets and raising average class sizes to 35.
FOR THINKING AND DISCUSSION
Should we entrust the teaching of empathy, compassion, and critical thinking (or, for that matter, history or literature) to robot teachers that are not competent, reliable, patient, empathic, sensitive, and wise? Does the answer change in venues such as India, where the student-teacher ratio in rural schools is often as high as 80?