Opinion

ChatGPT and the AI (‘Artificial Imperfections’) test

One big current idea is AI—artificial intelligence—where we seek to make computers do the sorts of things minds can do. Some of these activities are normally described as ‘intelligent’ (e.g., reasoning); others are not (e.g., vision). Unfortunately, as you can experience from the latest manifestation—ChatGPT—AI is so over-hyped with fantasies of power that, with AI’s publicity machine, it’s not just the pigs that are flying; I like to say it’s the whole farm. Regular readers of this newsletter will know my position on ‘AI’. The term ‘AI’ itself is highly misleading. At present it nowhere equates with human intelligence. At best it refers to the use of machine learning, algorithms, large data sets, and traditional statistical reasoning driven by prodigious computing power and memory. It is in this sense that I will use the term here. The media and businesses subsume a lot of other lesser technologies under the term ‘AI’—it is a convenient shorthand, and it suggests a lot more is going on than is actually the case.

ChatGPT is representative of a number of natural language processing tools driven by AI technology. It allows you to have human-like conversations, and much more, with the chatbot. The language model can answer questions and assist you with tasks like composing emails, essays, and code. On initial use ChatGPT is impressive. But in the chat and essay-writing functions I tried, it does get things wrong. You could put this down to the fact that it is officially still at the ‘research’ stage. The problem here is that software is never out of the development stage: it is forever correcting mistakes and updating to fit with more powerful technologies. That is partly why we have had versions 3, 4, 5—and no doubt ad infinitum. This would not matter except for two facts. Firstly, ChatGPT will have mass adoption—it already has a strong user base, not least because at the moment it is free. Secondly, most of its users will be naïve as to what goes into this AI software. They will not understand its limitations.

Stepping back, we need a better steer on what the limitations of AI are. As a counter-weight, I am going to look at how these technologies come with serious in-built practical and ethical challenges. To start off, here are three things you need to bear in mind:

    • 1. AI is not born perfect. These machines are programmed by humans (and possibly machines) who do not know what their software and algorithms do NOT cover, cannot fully grasp the massive data sets being used (e.g., their biases, accuracy and meaning), do not understand their own output, and cannot anticipate the contexts in which the products will be used, nor the likely impacts.
    • 2. AI is not human and never will be, despite all our attempts to persuade ourselves otherwise. The ‘brain like a computer, computer like a brain’ metaphor is shown by recent neuroscience to be a thin and extremely misleading simplification. And our language tricks us into saying AI is intelligent, feels, creates, empathises, thinks, and understands. It does not and will not, because AI is not alive and does not experience embodied cognition. Meredith Broussard, in her book Artificial Unintelligence, summed this up neatly: if it’s intelligent, it’s not artificial; if it’s artificial, it’s not intelligent.
    • 3. AI creates very large ethical and social responsibility challenges. As today’s ChatGPT examples illustrate, the wider its impact, the more these challenges reveal themselves.


In preparing a new book, Globalisation, Automation and Work: Prospects and Challenges, I have created what I am calling an AI (Artificial ‘Imperfections’) Test to guide practical and ethical use of these technologies. There are nine benchmark points in this test—see Figure 1. Let’s look at these one by one …

    • 1. AI is brittle. This means AI tends to be good at one or two things, rather than showing, for example, the considerable flexibility and dexterity of humans and their skills. You will hear plenty of examples of what AI cannot do. To quote the Moravec paradox (Moravec 1988):
      “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.

      ChatGPT is very attractive because it is unusually adept and widely applicable, due to its fundamental natural language model and the vast data lake it has established and continually updates.

    • 2. AI is opaque. It forms a technological ‘black box’: it is not clear how AI works, how decisions are made, or how far its judgements and recommendations can be understood and relied upon. A lot of people are working on how to counter the intangibility and lack of transparency that come with AI. They also have to wrestle with the immense data loads and the speed at which AI operates—much of it beyond human understanding. In such a world, small errors that are not easily identifiable can accumulate into massive misunderstandings. ChatGPT is an open field for these problems.
    • 3. AI is greedy. It requires large training data sets, and thereafter is set up to deal with massive amounts of variable data. Processing power and memory race to keep up. The problem is that a great deal of data is not fit for purpose, and bad data can create misleading algorithms and results. The idea that very big samples solve the problem—what is called ‘Big Data’ (as used in ChatGPT, for example)—is quite a naïve view of the statistics involved. It is not really possible to correct for bad data. And the dirty secret of Big Data is … that most data is dirty! (See the short sketch after this list for why bigger samples do not fix this.)
    • 4. AI is shallow. It skims the surface of data. As said earlier, it does not understand, feel, empathise, see, create or even learn in any human sense of these terms. Michael Polanyi is credited with what has become known as the Polanyi Paradox: people know more than they can tell. Humans have a lot of tacit knowing that is not easily articulated. With AI there is actually a Reverse Polanyi Paradox: AI tells more than it knows or, more accurately, tells what it does not know.
    • 5. AI is eminently hackable. It does not help that well-funded state organisations often do the hacking, and online bad actors abound. The global cybersecurity market continues to soar, reaching some US$200 billion by 2023, with a 12 percent compound annual growth rate thereafter. Of course, AI can be used both to strengthen cybersecurity and to attack it.
    • 6. AI is amoral. It has no moral compass, except what the designer encodes in the software. And designers tend not to be specialists in ethics or in anticipating unintended outcomes.
    • 7. AI is biased. Every day there are further examples of how biased AI can be, including ChatGPT and similar systems. Biases are inherent in the data collected, in the algorithms that process the data, and in the outputs in terms of decisions and recommendations. The maths, the quants and the complex technology throw a thin veil of seeming objectivity over discriminations that are often misleading (e.g., predictions of future criminal acts) and that can be used for good or ill.
    • 8. AI is invasive. Shoshana Zuboff leads the charge on AI invasiveness with her recent claim that ‘privacy has been extinguished. It is now a zombie’. AI, amongst many other things, is contributing to that outcome.
    • 9. AI is fakeable. We have also seen plenty of illustrative examples of successful faking. Indeed, with a positive spin, there is a whole industry devoted to this called augmented reality.
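To make the ‘bigger samples do not fix bad data’ point concrete, here is a minimal illustrative sketch of my own. Everything in it—the two groups, their averages, the drop-out rate—is invented for the illustration and is not taken from ChatGPT or any real system. It simulates a data pipeline that systematically under-records one group, then shows that the error in the estimated average stays roughly the same as the sample grows from a hundred records to a million: the estimate becomes more precise, but no more accurate.

```python
# Illustrative sketch only: a hypothetical population with two equally common
# groups (A and B), where the collection pipeline silently drops most group-B
# records. More data narrows the random noise but leaves the systematic bias.
import random

random.seed(42)

TRUE_MEAN = 60.0  # real population average: group A (mean 50) and group B (mean 70), equally common

def draw_person():
    """Draw one person from the (hypothetical) real population."""
    group = random.choice(["A", "B"])
    outcome = random.gauss(50.0 if group == "A" else 70.0, 10.0)
    return group, outcome

def biased_sample_mean(n, keep_b=0.2):
    """Collect n records, but keep group-B records only 20% of the time,
    mimicking a pipeline that systematically misses part of the population."""
    kept = []
    while len(kept) < n:
        group, outcome = draw_person()
        if group == "B" and random.random() > keep_b:
            continue  # record silently dropped: the 'dirty' part of the data
        kept.append(outcome)
    return sum(kept) / len(kept)

for n in (100, 10_000, 1_000_000):
    estimate = biased_sample_mean(n)
    print(f"n = {n:>9,}  estimate = {estimate:6.2f}  error = {estimate - TRUE_MEAN:+6.2f}")
# Typical output: the error hovers around -6 to -7 whatever the sample size,
# because the bias lies in how the data is collected, not in how much of it there is.
```

The point of the sketch is simply that scale buys precision, not validity: if the collection process is skewed, a million records reproduce the skew as faithfully as a hundred.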


So, looking across these challenges and AI’s likely impacts, you can see that technologies like ChatGPT contain enough practical and ethical dilemmas to fill a textbook. New technologies historically tend to have a duality of impacts: simultaneously positive and negative, beneficial and dangerous. With emerging technologies like AI and ChatGPT we have to keep asking the question: does ‘can’ translate into ‘should’? In trying to answer this question, Neil Postman set out, many years ago, the complex dilemmas new technologies throw up. We always pay a price for technology; the greater the technology, the higher the price. There are always winners and losers, and the winners always try to persuade the losers that they are winners. There are always epistemological, political and social prejudices embedded in great technologies. Great technologies (like AI) are not additive but ecological: they can change everything. And technology can become perceived as part of the natural order of things, and can therefore control more of our lives than may be good for us. These wise warnings were given in 1998, and have even more urgency today.

My own conclusion—and fear—is that collectively our responses to AI display an ethical casualness and lack of social responsibility that put us all in peril. Once again professional, social, legal and institutional controls lag almost a decade behind where accelerating technologies are taking us. It is time, I think, to start issuing serious digital health warnings to accompany these machines.
