Human-Centered AI — an important new book by Ben Shneiderman

Ben Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science at the University of Maryland and a much-honoured pioneer in the field of Human-Computer Interaction. His recent book, Human-Centered AI, is a valuable contribution to the literature on the challenges of using artificial intelligence appropriately, proposing approaches and steps toward a safer and more humane future that incorporates the likely increased use of AI.

Although there is much that I could discuss, I shall focus primarily on Part 3, Design Metaphors, and Part 4, Governance Structures. 

Part 3 elaborates the distinction between mostly autonomous AI systems, intended to do things for humans while operating on their own, promising convenience and implying safety, and systems designed to be used and controlled by humans, with tasks aided by machine intelligence. He compares and contrasts pairs of metaphors that illustrate this distinction: intelligent agents and supertools, teammates and tele-bots, assured autonomy and control centers, and social robots and active appliances. He discusses the extent to which systems viewed as intelligent agents, collaborative workers, autonomous beings, or social robots are beginning to achieve some of the goals sought by AI researchers and developers, but stresses that today's systems are far from reliable and trustworthy. Shneiderman is quite right to advise against the use of human form and anthropomorphism in robots.

Part 4 proposes a novel and useful framework of governance structures for human-centered AI, organized in nested levels: reliable systems grounded in software engineering practices, safety cultures built through organizational management, trustworthiness certified by independent industry oversight, and government regulation.

Where possible, he bases his proposals on methods already proven in other domains, such as airline and drug safety, hospital practice, and the US military.

The discipline of software engineering suggests the use of audit trails, verification and validation testing, fairness testing, and explainable user interfaces. The latter is a subset of the important and difficult problem of explainable AI, which in my view is the most pressing problem AI must solve if it is to be trusted: why did you make that decision or carry out that action?

Shneiderman stresses the importance of AI developers and adopters having a safety culture, one in which an organization's top leadership is publicly and privately committed to safety and imaginative in designing methods to ensure that products are safe and are used safely. Technology failures and near misses must be rigorously reported and investigated.

Industry must adopt standard practices based on safety and accountability, including planning oversight, continuous monitoring, and retrospective analysis of disasters. Principles used in financial audits could be adapted for AI audits. There could be insurance against AI failure. Non-governmental, civil society, professional, and research organizations must provide oversight.  

The author also discusses government regulation, noting that despite active statements by government agencies expressing concerns and proposing safeguards, few laws have been enacted.

Ben is an optimist, and his book radiates hope that systems will be designed for human control and oversight, and that governance structures will be in place and effective. Yet here is the dilemma and my concern. There are many uses of AI, which I call non-consequential, such as speech recognition and language translation, where errors have minor consequences and where human flexibility and adaptability can overcome them, for example by speaking to Siri again, slowly and with better enunciation. Yet AI is also being proposed for many consequential uses, as in self-driving cars, senior care, and autonomous weapons, where there will be insufficient time for human control, or where human control will not be deemed possible. It is in these domains that we still face the greatest challenge: ensuring that AI actions and decisions are consistent and reliable, fair, explainable, and trustworthy, and that they remain subject to human accountability and responsibility.

Ben’s book is scholarly yet very readable. It is an important resource for managers, researchers, developers, and students in AI, human-computer interaction, computers and society, and computer ethics. 

More information may be found at https://cacm.acm.org/magazines/2021/8/254306-responsible-ai/fulltext?mobile=false, https://bdtechtalks.com/2022/03/21/human-centered-ai-ben-shneiderman/, and https://global.oup.com/academic/product/human-centered-ai-9780192845290?cc=ca&lang=en&

FOR THINKING AND WRITING AND DISCUSSING 

What changes will you make to your plans for developing or using AI based on Ben’s concerns and recommendations? 
