I have been listening to podcasts about AI for some time now. Most people fear the new and the unknown, something understandable in the human psyche, since fear is what has allowed us to survive over the long span of our existence on Earth. The stimulus that triggers fear can be imagined or real: a mere thought, or a concrete threat (the presence of a lion, for example). Some authors claim that certain fears are inherent to human beings, with an almost instinctive component: darkness, uncertainty, or death.

However, what has changed, in my humble opinion, is that nowadays we have become more comfortable, less combative, and much more overprotective. We hardly know how to face challenges or new trials, and sometimes fear paralyzes us.

Perhaps that is why we fear AI entering our lives: the fear that it might surpass us, the uncertainty of not knowing what might happen or how to act.

From my point of view, I am certain it will surpass us in many of the functions we perform today, just as every invention and new technology created by human hands has done throughout history. It happened with the radio (think of the panic around the broadcast of H.G. Wells’s The War of the Worlds) and with the telephone (in its day, some people refused to use it because they did not understand how it worked). More recent technological examples include the fax machine, the computer, the CD, the printer, the DVD, audiovisual platforms, podcasts, and so on.

In conclusion, society advances, and we with it.

However, what makes GPT-3 different? It can pick up a new task from the examples it is given. So… can we come to think that one day it might surpass us? It’s difficult. As I mentioned at the beginning of the article, what differentiates us from Artificial Intelligence is fear, and that, my dear ones, is a feeling, something that, for now, a machine does not possess.

Circumlocutions aside, here is some technical background so that you can better understand what this AI (Artificial Intelligence) consists of.

What is GPT-3?

GPT-3 is a computer program created by the private company OpenAI in San Francisco. It is a gigantic neural network and, as such, is part of the deep learning segment of machine learning, which itself is a branch of the computer science field known as Artificial Intelligence or AI. The program is better than any previous program at producing lines of text that sound as if they had been written by a human.

What can it do?

Natural Language Processing includes, as one of its main components, Natural Language Generation, which focuses on generating natural text in human language. However, generating content understandable to humans is a challenge for machines that do not truly know the complexities and nuances of language. Using text from the Internet, GPT-3 is trained to generate realistic human text.

GPT-3 has been used to create articles, poetry, stories, news reports, and dialogues using only a small amount of input text, which can be used to produce large amounts of quality copy.

It is also used for automated conversational tasks, responding to any text a person types on the computer with a new piece of text appropriate for the context. GPT-3 can create anything with a text structure, and not just human-language text. It can also automatically generate text summaries and even programming code.

How does it work?

GPT-3 is an example of what is known as a language model, which is a particular type of statistical program. In this case, it was created as a neural network.

The name GPT-3 is an acronym for “Generative Pre-trained Transformer,” of which this is the third version so far. It is “Generative” because, unlike neural networks that output a numerical score or a yes/no answer, GPT-3 can generate long sequences of original text. It is “Pre-trained” because it was not built with any domain knowledge, yet it can complete domain-specific tasks, such as foreign-language translation. And it is a “Transformer” because it is built on the Transformer architecture, a neural-network design based on attention that has become the standard for modern language models.

A language model, in the case of GPT-3, is a program that calculates the probability of a word appearing in a text given the other words in the text. This is what is known as conditional word probability.

For example, in the sentence “I wanted to make an omelet, so I went to the fridge and took out some ____,” the blank could be filled with any word, even gibberish, given the open-ended way language can be constructed. But the word “eggs” is far more likely to fill that blank in most ordinary texts, far likelier than, say, “elephants.”
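That idea of conditional word probability can be made concrete with a minimal sketch. The snippet below counts, over a tiny hypothetical corpus, how often each word follows “some,” which is exactly the kind of statistic a language model estimates (a real model does this over billions of words and conditions on the whole preceding context, not just one word):

```python
from collections import Counter

# Toy corpus (hypothetical): a handful of fridge-related sentences.
corpus = [
    "i went to the fridge and took out some eggs",
    "i went to the fridge and took out some milk",
    "i went to the fridge and took out some eggs",
    "she opened the fridge and took out some eggs",
]

# Count how often each word follows the word "some".
following = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        if prev == "some":
            following[nxt] += 1

total = sum(following.values())
# Conditional probability P(word | previous word = "some")
probs = {word: count / total for word, count in following.items()}

print(probs["eggs"])  # 0.75
print(probs["milk"])  # 0.25
```

In this toy corpus, “eggs” follows “some” three times out of four, so the model would rate it far more probable than any word it has never seen in that position.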

During the training phase, when the neural network is being developed, GPT-3 receives millions and millions of text samples and converts words into what are called vectors: numerical representations. That is a form of data compression. Afterward, the program attempts to decompress that compressed text back into a valid sentence. The cycle of compressing and decompressing hones the program’s accuracy in calculating conditional word probability.
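The “words into vectors” step can be sketched in a few lines. This is a deliberately crude stand-in: the vocabulary, the four-dimensional size, and the random values are all my own illustrative choices, whereas a real model learns embeddings with hundreds or thousands of dimensions during training:

```python
import random

random.seed(0)  # make the illustrative vectors reproducible

text = "i went to the fridge and took out some eggs"

# Step 1: build a vocabulary and map each word to an integer ID.
vocab = sorted(set(text.split()))
word_to_id = {word: i for i, word in enumerate(vocab)}

# Step 2: give each word a numeric vector. Here the values are random;
# in a real model they are learned so that related words end up nearby.
EMBED_DIM = 4  # real models use hundreds or thousands of dimensions
embeddings = {word: [random.random() for _ in range(EMBED_DIM)]
              for word in vocab}

# The sentence becomes a sequence of IDs, each backed by a vector.
ids = [word_to_id[w] for w in text.split()]
print(ids)
print(embeddings["eggs"])
```

The point is only the shape of the pipeline: text in, numbers out, and everything the model “knows” lives in those numbers.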

Once the model has been trained, that is, once its conditional probability calculations across billions of words are as accurate as possible, it can predict the words that come next when a person prompts it with a starting word or phrase. In machine learning, that act of prediction is known as inference.
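Inference, in its simplest form, is a loop: look at the last word, pick the most probable next word, repeat. The sketch below does exactly that over a hand-built table of conditional probabilities (the table is hypothetical; a real model computes these probabilities with a neural network and usually samples rather than always taking the top word):

```python
# Hand-built table of P(next word | current word) -- illustrative data only.
bigram_probs = {
    "i":      {"went": 1.0},
    "went":   {"to": 1.0},
    "to":     {"the": 1.0},
    "the":    {"fridge": 0.8, "store": 0.2},
    "fridge": {"and": 1.0},
    "and":    {"took": 1.0},
    "took":   {"out": 1.0},
    "out":    {"some": 1.0},
    "some":   {"eggs": 0.75, "milk": 0.25},
}

def generate(start, max_words=10):
    """Greedy inference: repeatedly append the most probable next word."""
    words = [start]
    while words[-1] in bigram_probs and len(words) < max_words:
        nxt = max(bigram_probs[words[-1]], key=bigram_probs[words[-1]].get)
        words.append(nxt)
    return " ".join(words)

print(generate("i"))  # i went to the fridge and took out some eggs
```

Starting from “i”, the loop follows the highest-probability word at every step until it reaches a word with no known continuation.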

This leads to a surprising mirror effect. Not only do probable words emerge; the texture and rhythm of a genre, or the form of a written task such as a set of questions and answers, are reproduced as well. So, for example, GPT-3 can be given the names of famous poets and samples of their work, then the name of another poet and only the title of an imaginary poem, and it will produce a new poem consistent with the rhythm and syntax of the poet whose name was suggested.

The ability of GPT-3 to respond in a way consistent with an example task, including forms to which it was never previously exposed, makes it what is called a “few-shot” language model. Instead of being extensively fine-tuned, or “trained,” on a given task, GPT-3 already holds so much information about the many ways words combine that it can be given only a handful of examples of a task and acquire the ability to perform that new task as well.
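A few-shot prompt is nothing more than plain text: a short instruction, a handful of solved examples, and an unsolved case for the model to complete. The sketch below only assembles such a prompt as a string (the task, example pairs, and “=>” format are my own hypothetical choices, not anything prescribed by GPT-3 itself); the assembled text would then be sent to the model, which continues it:

```python
# Assemble a few-shot prompt for a toy translation task (hypothetical examples).
examples = [
    ("cheese", "fromage"),
    ("house", "maison"),
]
query = "bread"

prompt = "Translate English to French.\n"
for english, french in examples:
    prompt += f"{english} => {french}\n"
prompt += f"{query} =>"

print(prompt)
```

Given a prompt like this, a model that has seen enough English and French during pre-training will tend to continue the pattern, even though it was never explicitly trained on this exact format.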

We Writers Will Keep Writing

AI is on the rise and will keep learning day by day. Some will use it as a tool to map out an idea while developing it, but I doubt it will come to replace us. Nonetheless, several factors suggest we will keep writing:

  • Creativity: It is true that it can process data and transform it into written content. But it is light-years away from possessing its own creativity like people do.
  • Lexical Variation: AI is improving at emulating the way we communicate, but it has not yet mastered the richness of our natural language. Solid writing avoids repeating the same words, phrases, sentence lengths, and sentence structures. We like to change things up. AI tends to sound a bit robotic after a while, although it is improving.
  • Feeling: When you read a book, article, poem, or any other written piece, you can often feel the writer’s soul bleeding through every word. No matter how sophisticatedly we program AI processes, there is simply no way to recreate the level of feeling, passion, and commitment that allows us to connect with one another so deeply through indirect means and not exclusively through intelligence.
  • Empathy: Empathy, from the Greek empátheia, is also called interpersonal intelligence (a term coined by Howard Gardner) and refers to a person’s cognitive ability to understand another’s emotional universe. It allows us to perceive the feelings of others and make them feel less alone. For empathy to exist, we must set aside moral judgments and affective reactions such as sympathy or antipathy, so that our attitude toward the other’s circumstance is one of understanding rather than compassion. It is an objective, rational effort at intellectual understanding that allows us to comprehend the feelings of the other.
  • Experience: Without experiences to draw from, AI writing is often superficial—it makes sense, but something is missing.
  • Literary Creative Fluency: AI needs parameters to draw from; without them, it cannot write or develop a narrative. Many writers “go with the flow” and let the writing guide them naturally, sometimes in directions they didn’t plan to go. We make decisions as we write. AI has not yet become intuitive and adaptable at that level. Instead of switching to a related theme if that is how the piece flows, AI will often double back and repeat information because it is tied to a specific outline or set of topics.

A Story Written by AI and the Same Written by Me

To finish, I have put this AI to the test. I gave it certain parameters so that it would create a story following the same premise as one I wrote a few months ago. I share both here so that you can compare them.

Enjoy the read.


UNUSUAL HAPPINESS (AI Version)

Inspector Rafael and Sub-inspector Cristina decided to take a well-deserved break in the Sierra Nevada mountains after having solved a complicated drug trafficking case. During the first day of their trip, while walking in the countryside, Cristina twisted her foot and suffered a sprain.

Worried about his partner’s injury, Rafael decided to find shelter for the night. Fortunately, they found a small refuge in the mountains, where they took shelter from the cold and snow.

While Rafael took care of Cristina, he began to notice some strange details in the refuge. The walls were covered with bloodstains, and there was a shovel in the corner. Rafael decided to investigate a bit more and discovered that the refuge had been used by a serial killer who had been sought by police for years.

Rafael and Cristina quickly realized that the killer could return at any moment, so they prepared a trap to catch him. After a long wait, the killer finally appeared and began to threaten Rafael and Cristina with a knife.

However, thanks to the quick intervention of the two inspectors, they managed to capture the killer and bring him to justice. After that exciting experience, Rafael and Cristina returned home, satisfied to have been able to resolve another complicated case and to have saved their lives in a dangerous situation.

UNUSUAL HAPPINESS (Human Version)