Reading Comprehension: AI

This reading comprehension exercise about AI is part of a complete B2 English course.

About this exercise.

Type: Reading Comprehension

Question Types: Multiple choice, multiple matching

Time: 10 minutes

Level: B2 / C1

Instructions: Read the text about Artificial Intelligence (AI) and answer the questions below.

AI: The Dawn of a New Era?

Artificial intelligence, or AI, has recently become a hot topic, with some amazing developments being made in the field. Among the most talked-about of these developments are “chatbots” and image generators such as ChatGPT and DALL-E, which are already revolutionising the way we work and communicate. This article will focus on the examples of ChatGPT and DALL-E, discussing their strengths and weaknesses and looking at how they will affect our future.

What is AI?

First, however, we need to clarify what AI is. Artificial intelligence is an attempt to simulate human intelligence with machines, specifically computers. These efforts concentrate particularly on producing language that sounds human, recognising and responding appropriately to human speech, and recognising visual objects. A long-standing test of whether a computer can be “intelligent” is the Turing test. Named after the pioneering computer scientist Alan Turing, this test simply requires a machine to convince a human that they are talking to another human. Although many people might say that current AI is able to present as human, it is still relatively easy to identify the AI as non-human. This article will explore some of the ways in which AI still fails to pass as human.

What are ChatGPT and DALL-E?

ChatGPT is a chatbot developed by OpenAI that uses a language model to generate human-like responses to user input, giving the impression that the user is speaking with another human. Chatbots are commonly used, for example, to facilitate sales on websites. In the case of ChatGPT, it has been trained on a large dataset of human conversations and can handle a wide range of topics, from general chit-chat to more complex discussions. OpenAI, the organisation responsible for ChatGPT, has also made waves with its image-creation utility, DALL-E. Rather than producing text, this is a neural network that generates images from text descriptions. It has been trained on a dataset of text-image pairs and can generate a wide range of images, from photorealistic to highly stylised, based on a given text input.

The Advantages of Commercial AI

One of the main strengths of both applications is their ability to generate human-like responses and images. ChatGPT’s responses are often indistinguishable from those of a human. This ability to mimic human interaction makes it a useful tool for customer service or personal assistant applications. New applications for the software are being discovered daily. Social media is full of ingenious examples of AI creation, from coding websites to writing the content that is hosted on those websites. DALL-E’s generated images are also impressive, showing a level of creativity and style that is difficult for humans to replicate. This makes them both valuable tools for creative tasks, such as generating content for social media or advertising.

Another strength of ChatGPT and DALL-E is their efficiency. Both models can generate responses far faster than any human could. This speed makes them incomparably useful for generating content at scale, which could be especially valuable in sectors like marketing, where the ability to generate ideas quickly is a major advantage.

The Downsides

However, ChatGPT and DALL-E also have their limitations and problems. One already apparent problem with ChatGPT is that it can generate responses that are incorrect, inappropriate or offensive. This is because it has been trained on a dataset of human conversations that includes a wide range of topics and language, including incorrect, offensive or inappropriate material. This can be addressed with appropriate filters and moderation, but it is still an issue that needs to be considered when using this type of software. Notable examples so far include the software generating an article justifying antisemitism and instructions on how to build a bomb.

Fundamentally, the quality of the output still depends on the quality of the training data. Both models generate responses and images based on patterns and associations learned from that data. If the training data is biased or limited in some way, this will be reflected in the output. For example, if the training data includes an overwhelming number of conversations with a certain demographic, the model is likely to generate responses that are more characteristic of that demographic. Similarly, if the training data for DALL-E is stylistically limited, the model will have the same limitations.

Reflecting our Biases

This bias from the source material can cause various other problems. For example, many people have used AI image generation to make profile pictures of themselves for social media. However, many women were disappointed to find that their AI-generated pictures were highly sexualised. This is because women were represented in a much more sexualised way than men in the images that the AI was trained on.

This issue of dependence on the input material reveals the most important limitation of any software of this kind: it is only really able to imitate and has no capacity to innovate. Any definition of intelligence that did not include the ability to innovate would surely be somewhat incomplete. Unable to provide any real innovation, such software cannot yet compete with human ingenuity or creativity.

Another issue related to the datasets that AI software is trained on is that of intellectual property. Many artists whose work, even copyrighted material, has been used as a stimulus for this type of software are not happy. Some have even pointed out examples where partial remnants of signatures appear in images produced by AI. The creator of DALL-E, however, claims that there is no database where copies of other people’s work are stored. Rather, he says, any apparent reproduction is more a matter of inspiration than plagiarism or intellectual property theft.

Unfair Competition

The use of AI to generate artistic output also creates another ethical problem: does AI art devalue human art? A contestant in a state fair art competition recently won first prize with a beautiful digital painting. However, other competitors and commentators were angry. The reason? The winning painting had been produced by an AI. Many competitors felt that it was unfair to enter AI artwork into the competition. Admittedly, the artwork was really quite impressive, despite being produced by software.

Academics have also raised the alarm about how easily students can use AI to cheat. Before they even knew what ChatGPT was, many teachers and professors became aware that something was happening. Many reported that students were submitting work that was far beyond their normal ability or effort. Plagiarism is hardly a new problem for academia. In fact, academic platforms like Google Classroom have integrated plagiarism detectors that scan the work for combinations of words that are identical to existing work. There are already applications which can identify AI text with a good degree of certainty, but students can easily avoid detection by copying and paraphrasing AI content.

The main limitation of current AI is that, despite providing responses that seem quite sophisticated, at least superficially, it has no conceptual understanding. For example, simple logical or mathematical problems that could be solved by most children are still impossible for AI. Queries like “If it takes 2 shirts 2 hours to dry in the sun, how long will it take for 16 shirts to dry in the sun?” produce comically incorrect answers arrived at through needless calculations; the correct answer is, of course, still two hours, because all the shirts dry at the same time. The reason for this? Although the chatbot recognises the words and can assemble a response, it has no ability to understand the subject at all. This is perhaps the greatest weakness of contemporary AI software and exposes the enormous gap between human intelligence and the superficial nature of AI.

Adapting to the Future

Despite these limitations, ChatGPT and DALL-E have the potential to revolutionise the way we work and communicate. When used as a virtual assistant to automate tasks in bulk, AI can free up humans to focus on more complex work.

Even with this lack of innovation, personal experience or any sort of real understanding, this type of software will bring about enormous change in the way we live and work. If AI can reliably replace human labour, it is highly likely that many jobs will completely disappear. This change will likely be comparable with the industrial revolution. Many jobs in agriculture became obsolete when farming became mechanised. Textile workers, inspired by the mythical figure Ned Ludd, smashed the machines that were replacing them.

Both of these examples had enormous economic and social consequences, and given the scope of the change at hand, we should anticipate a good deal of uncertainty. This time the machines aren’t just coming for menial jobs; no sector will remain unaffected by this technology. Whether it be trading stocks, teaching, customer relations or therapy, AI will make redundant many jobs that were previously considered safe.

Unless we, like the Luddites, sabotage this wave of mechanisation, we will need to carefully manage the change that it brings in order to avoid socioeconomic chaos.