Brave Against the Machine: Why We’re All Talking About ChatGPT

[Image: cute robot chatting with human]

It is clear that AI is here to stay. As experts in the field of communication, we wanted to see how well we measure up to the latest wunderkind – an all-knowing chatbot.

Don’t worry if you haven’t heard of ChatGPT yet; you’re probably just one conversation or headline away from being introduced to one of the most exciting and scary tools in our relatively young digital history. In short, ChatGPT is an online chatbot that happily chats with its users, writes texts, gives advice, produces small snippets of code and even takes a shot at poetry.

The company behind this new tool is OpenAI, an artificial intelligence research laboratory founded in 2015 by some of the biggest names in tech. In 2019 the company also piqued Microsoft’s interest, and the two have maintained a very successful partnership since.


After the image came the word

If OpenAI sounds familiar, it is because another one of their products already made huge waves in 2021. Their image generator DALL-E is an artificial intelligence capable of transforming simple word prompts into images. Would you like to see a banana peel made from orange skin? Perfect! A picture of Beethoven dancing the night away with Peppa Pig? Coming right up!

Whatever your mind might conjure up – DALL-E will happily spit it out for you. And as if that weren't revolutionary enough: it does so within a single minute. Naturally, DALL-E raised as much excitement as it raised concerns, but ChatGPT seems to have taken things a step further. Or rather: a whole lot further.


The modern oracle 

The charm of ChatGPT lies in being able to have a conversation with it and get answers to pretty much any question. Want a cool new recipe based on your favorite foods? Or perhaps an Excel formula that speeds up your workflow? Maybe gift recommendations for your significant other? Or are you stuck writing an article and need help? ChatGPT can do all of the above. While we did toy with the idea of letting it write this article and revealing the plot twist in a sort of “aha!” moment at the end, we found we simply couldn’t shut up about this topic and decided to write it ourselves. Call us old-fashioned.

Back to the true celebrity of this article. The best part about our mysterious chatting friend is that you can try talking to it yourself: simply visit chat.openai.com and create a free account. Keep in mind that with a rising user base keen on exchanging words with a bodiless companion, you might not get in at the busier times of the day, so we recommend trying it early in the morning or late at night.

All images in this article have been created using Midjourney, an AI image generator with a monthly subscription fee.

[Image: banana mixed with orange]

So what’s the problem here?

Ah, as expected from our readers, diving right into the crux of the issue! Sadly, there is more than one gray area with these new tools, and that goes for pretty much every piece of AI that generates any sort of content, whether that be ChatGPT, DALL-E or competitors like Midjourney and Stable Diffusion. We’ve split these problems into individual points.


1. Copyright and learning resources

Let’s take a look at how ChatGPT works. There is no feasible way to write software this complex by hand, so a few brilliant minds came up with a different approach: show the software thousands of images of something, say, a crosswalk, tell it what it is, and let it figure out on its own how to identify this thing we call a crosswalk. This also includes showing it random images of lines on the ground that look similar to a crosswalk but aren’t one, nudging it to work out the subtle differences by itself.
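To make the idea concrete, here is a minimal sketch of that “show it labeled examples and let it figure out the rest” approach in Python. Everything in it is an illustrative assumption: the “images” are made-up four-pixel brightness patterns, and the tiny model is a plain logistic regression, not how ChatGPT or any real vision system is actually built:

```python
# A toy version of supervised learning: show the model labeled
# examples ("crosswalk" vs. "plain road") and let it adjust its own
# parameters until it can tell them apart. All data is made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: striped 4-"pixel" patterns stand in for
# crosswalks (label 1), uniform patterns for plain road (label 0).
crosswalks = np.array([[0.9, 0.1, 0.9, 0.1]] * 20) + rng.normal(0, 0.05, (20, 4))
plain_road = np.array([[0.5, 0.5, 0.5, 0.5]] * 20) + rng.normal(0, 0.05, (20, 4))
X = np.vstack([crosswalks, plain_road])
y = np.array([1] * 20 + [0] * 20)

# Logistic regression trained by gradient descent: we never tell the
# model what a "stripe" is, we only nudge its weights whenever its
# predictions disagree with the labels.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted P(crosswalk)
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # adjust weights toward the labels
    b -= 0.5 * np.mean(p - y)

test = np.array([0.85, 0.15, 0.95, 0.05])  # a new, unseen striped pattern
print("P(crosswalk) =", 1 / (1 + np.exp(-(test @ w + b))))
```

The takeaway is the one from the paragraph above: nobody writes the recognition rules by hand; the rules emerge from the examples, which is exactly why the training data matters so much.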

Because AI makes all these discoveries itself, the results can be quite unpredictable. How the AI learns is therefore heavily shaped by the content and resources used to train it. ChatGPT, being a text-based chatbot, was trained on hundreds of billions of words scraped from the internet up to 2021. This is also relevant because ChatGPT’s knowledge of the world is limited to what had transpired by 2021; it has no knowledge of current affairs until it gets updated.
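You can see this cutoff for yourself without any code by chatting at chat.openai.com, but for the curious, here is a minimal sketch of probing it through OpenAI’s Python SDK. The model name and the question are illustrative assumptions:

```python
# A minimal sketch of probing the knowledge cutoff via OpenAI's
# Python SDK (the "openai" package, v1-style client). Model name
# and question are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Who won the 2022 FIFA World Cup?"}],
)

# A model trained on data ending in 2021 typically replies that the
# tournament hasn't happened yet, or that it cannot know.
print(response.choices[0].message.content)
```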

[Image: Cute, small AI robot reading a book with a magnifier.]

But where do all of these resources come from? These articles and images? Let’s say we teach the AI to write an article in the style of Hemingway, or to make an illustration in the style of Tim Burton. Is it fair to use thousands of images created by artists, including Tim Burton himself, without paying these creators or properly acknowledging their contributions? And who really owns this new content anyway: the company that created the software, the user whose prompt generated it, or the countless people on whose fruits of labor it was trained?

Naturally, the disclaimers on all of these tools say that the copyright belongs to the company that created the software, but is it really fair to monetize something without paying the people whose work it was built on? And what happens if, by some coincidence, the software ends up generating a piece of content that already exists? Who is liable for copyright infringement then?

At the same time, if the AI is trained on content, what happens when that content is morally questionable? Since ChatGPT is a piece of code, it has no intrinsic moral values, so it cannot appropriately judge content that is biased, demeaning, or contains elements of violence or discrimination. How do we stop the AI from creating even more problematic content?

[Image: A pile of documents on an office desk.]

2. Filters and ethics

Naturally, you might say, the companies that created these tools should just filter the content and solve the problem right at its source. Putting aside the “who watches the watchers” debate, or in our case, who decides which content is acceptable and which isn’t, there is an even bigger issue at hand. If we want to teach AI to filter out morally questionable content, or not generate it to begin with, we need to show the AI enormous amounts of that very same content. And “we” did.

One of the bigger controversies around content filtering was exposed by Time magazine. Their investigation revealed that OpenAI had hired a company employing underpaid Kenyan workers to help the AI identify violence, hate speech and sexual abuse (basically, what can only be described as the very worst of humanity). The workers would read and label these types of content for hours on end to help OpenAI build a filter for ChatGPT. Unsurprisingly, this kind of work had serious mental health repercussions for the employees, and while the existence of these jobs has never been a secret, the lack of regulation could be considered downright abusive.
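The end product of all that labeling is a safety classifier. OpenAI exposes one publicly as its moderation endpoint; we can’t say it is literally the filter built from that labeled data, but it is the same kind of tool. Here is a minimal sketch of calling it, with an obviously made-up input:

```python
# A minimal sketch of calling OpenAI's public moderation endpoint
# (the "openai" package, v1-style client). The input text is an
# illustrative assumption.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(input="I want to hurt someone.")

verdict = result.results[0]
print("Flagged:", verdict.flagged)        # True if the filter triggers
print("Categories:", verdict.categories)  # per-category flags: violence, hate, ...
```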


3. The aftermath of using AI

Speaking of work: even if everything behind the scenes were a fluffy dream filled with butterflies, tools like ChatGPT have the potential to seriously shake up the everyday life of pretty much everyone.

What happens if a student decides to be a bit lazy and has the AI write an entire essay for them? Or what happens when someone enters an AI-generated piece of content into a contest and wins? Well, both have already happened. And let’s just say people weren’t exactly thrilled about it: “College Student Caught using ChatGPT” and “An A.I.-Generated Picture Won an Art Prize”.

It is as amusing as it is worrying that the student was only caught using ChatGPT because their essay was too good. On the other hand, AI-generated art winning an art competition reopens fundamental questions about the meaning of creativity and art.

Even more worrying than these borderline existential questions are the potential consequences of using these tools. Will students learn less? Will art become commoditized and artists disappear? Will designers and writers lose their jobs? Will research papers be written by AI in the future? Actually, this has already happened, and once more, people were not amused. So how can we ensure that research papers stay trustworthy? Will this ultimately lead to the downfall of creativity and truly original ideas? Or will ChatGPT simply become a tool in our everyday lives, a way to get out of a creative rut or skim the entire internet for ideas in mere seconds?

[Image: cute robot stops bad red colored robots from entering]

Oh no, we’re all doomed!

Don’t worry, dear reader, not all hope is lost. Like any new technology, if we approach it with the right moral values and some respect, AI can be extremely beneficial. AI is still in its infancy, and while talking about these issues today is important, AI could prove to be an invaluable tool for humanity once it matures.

For example, since AI learns in its own way, what if we trained it to detect potential cancer far sooner than our eyes ever could? What if we could identify diseases early by analyzing everyone’s DNA in record time, or scan entire galaxies for habitable planets in mere minutes?

As with every tool humanity has invented, it will be up to us to use and develop these technologies responsibly. We will need to have many discussions about how we use them, when we use them and, most importantly, how far we choose to develop them.


We’re not going anywhere

AI, at least for now, isn’t coming for people’s jobs, or at least not permanently. Designers and writers might even start using these tools to iterate faster and skip the more tedious steps of the process. We mustn’t overlook the fact that using these tools efficiently requires a specific skillset of its own, so in a perfect world, AI would actually end up creating jobs. Because the main way to interact with AI is through words, there is already a clear trend: people who are more knowledgeable and articulate consistently get better results.

And let’s remember that a lot of the tools we use on a daily basis already rely on simpler AI to help us with certain tasks. It helps designers remove unwanted objects from photos, helps us write better content through suggestions and on-the-fly corrections, and translates our writing more accurately than ever before.

It is very likely that we will need a human overseeing and directing these tools for quite a few years to come, as the current generation of AI tools is still pretty naive, sometimes to a comical extent.

For example, AI isn’t good at questioning itself and will confidently present a result even when it’s flat-out wrong, like in this chat:

[Image: ChatGPT claiming 10 plus 10 is 25.]

Or more importantly, who else will teach AI about the perils of using Comic Sans? And who else will undo the disaster when AI inevitably stumbles upon an online repository of dad jokes?

We bring up all of these points to say that AI is nowhere near perfect, nor ready or interested in taking over the world quite yet. The current technologies are impressive, but we’re not as close to opening Pandora’s box as some news headlines might suggest. Stephen Hawking, Elon Musk and other prominent figures have warned us about the dangers of AI on multiple occasions, but the term seems to have lost a letter along the way: the “G”. AGI stands for Artificial General Intelligence, and that is the correct term for technology that could match the ability and potential of humans. What tech companies are currently doing with AI can perhaps be described as stepping stones towards AGI, but reaching it is still a ways off.

So rest assured, we can all sleep safe and sound for the foreseeable future, disabling the alarm clock in the morning just a touch too violently, without worrying it will get back at us any time soon.