What is Generative AI?
ChatGPT
ChatGPT (the letters stand for Generative Pre-trained Transformer) is an example of what has come to be known as ‘Generative AI’. These computer algorithms produce artificial content such as text, computer programs, images, audio or video based on the data they have been trained on. Generative AI generates output in response to a text prompt from a user, such as ‘write a short children’s story about a family of rabbits in the woods’.
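As a minimal sketch, a prompt like the one above can be sent to a generative service in a few lines of Python. This assumes the OpenAI Python client is installed and an API key is set in the environment; the model name is purely illustrative.

    # A minimal sketch, assuming the OpenAI Python client (openai>=1.0)
    # is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; any chat model would do
        messages=[{"role": "user",
                   "content": "Write a short children's story about a family of rabbits in the woods."}],
    )
    print(response.choices[0].message.content)  # the generated story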
Large Language Models
Generative AI makes use of Large Language Models (LLMs) that have been trained on vast quantities of diverse text, such as articles, blogs and digital books, scraped from the internet. The naturalness of the output is the result of fine-tuning the model with human feedback so that its responses read as close to natural language as possible. The same techniques are being used for audio as well as static and moving images.
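Under the hood an LLM is a next-token predictor. The sketch below, which assumes the Hugging Face transformers library and uses the small GPT-2 model purely for illustration, shows a language model continuing a prompt one token at a time.

    # A rough sketch of next-token generation, assuming the Hugging Face
    # 'transformers' library is installed; GPT-2 is used only as a small example.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Once upon a time, a family of rabbits lived in the woods and"
    inputs = tokenizer(prompt, return_tensors="pt")

    # The model repeatedly predicts a likely next token and appends it.
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))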
Discriminative AI
‘Discriminative AI’, on the other hand, is based on algorithms designed to classify something, such as identifying a person from a database of faces or a cancerous tumour from an MRI scan.
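For contrast, here is a minimal sketch of a discriminative model. It assumes scikit-learn is installed and uses its bundled breast-cancer dataset simply as a stand-in for a real diagnostic task; the point is that the model assigns a label rather than generating new content.

    # A minimal discriminative-model sketch, assuming scikit-learn is installed.
    # The bundled breast-cancer dataset stands in for a real diagnostic task.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The classifier learns a decision boundary between 'malignant' and 'benign'.
    clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print("Test accuracy:", clf.score(X_test, y_test))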
Rapid take up
In November 2022, ChatGPT was launched by OpenAI on an unsuspecting world; by January 2023 it had reached 100 million users, a feat that took Facebook four years to achieve. Microsoft is a major backer, having invested $11 billion in the company to date.
Not Intelligent
For many people, the responses of these Generative AI chatbots seem completely human, even intelligent, yet in reality they have no intelligence at all. They are ‘stochastic parrots’: statistical pattern-matching engines that generate output based on what people have already said.
The danger that generative AI applications present is not so much that they are intelligent but that we think that they are! What appears to be intelligence is simply a simulation of human creativity, whether it be holding a conversation, writing an essay, producing a piece of artwork or generating new computer code.
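To make the ‘stochastic parrot’ point concrete, the toy sketch below (plain Python, with a hypothetical two-sentence corpus) builds a word-level Markov chain. It understands nothing, yet it produces fluent-looking text purely by replaying statistical patterns from what it has already seen.

    # A toy 'stochastic parrot': a word-level Markov chain that only ever
    # recombines patterns it has already seen. The tiny corpus is hypothetical.
    import random
    from collections import defaultdict

    corpus = "the rabbits lived in the woods . the rabbits played in the sun .".split()

    # Count which word follows which.
    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    # Generate text by repeatedly sampling a likely next word.
    word, output = "the", ["the"]
    for _ in range(12):
        word = random.choice(transitions[word])
        output.append(word)
    print(" ".join(output))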
Hallucinations
Generative AI has a propensity to produce unexpected responses, dubbed ‘hallucinations’ or ‘confabulations’, that programmers cannot fully explain because they are either incorrect or just simply weird. This is a property of the algorithms: they are statistically based, producing the most likely sequence of tokens to follow the input prompt, with no check on whether that sequence is true.
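A rough sketch of that token-by-token choice, using plain Python and made-up probabilities, shows why plausible but unchecked continuations emerge: the model simply picks likely next tokens, not true ones.

    # A rough sketch of statistical next-token choice, with made-up numbers.
    # Nothing in this step checks whether the chosen continuation is factually true.
    import random

    # Hypothetical probabilities for the token after "The capital of Australia is"
    next_token_probs = {
        "Sydney": 0.45,    # plausible but wrong
        "Canberra": 0.40,  # correct
        "Melbourne": 0.15,
    }

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    chosen = random.choices(tokens, weights=weights, k=1)[0]
    print("Next token:", chosen)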
Real Intelligence
Human reasoning involves deduction, induction and abduction processes³, and these are used by doctors making a medical diagnosis or lawyers determining a case. AI algorithms are missing one crucial aspect, abduction (reasoning to the best explanation), a process that no one yet has a theory for, so we cannot encode it. In that sense the ‘I’ in AI is a misnomer: there is no intelligence at all.