Jayne Webb | March 3, 2023
Ominous warnings about artificial intelligence (AI) are everywhere — the new AI language model tools mean that original human thought, artistry, and writing are doomed!
But are they really? History records many warnings over the centuries as newfangled things arrived: books, machinery, the motor-car, the wireless (radio), TV, the internet. Only a century ago, many men believed giving women the vote was recklessly dangerous too.
The most prominent AI language tool so far is OpenAI’s ChatGPT, developed with funding and computing power from Microsoft. GPT stands for ‘generative pre-trained transformer’. Language models like ChatGPT are trained on vast amounts of text, then refined with human instructions and feedback, so they can predict which words are likely to come next and string them together in a meaningful way. Competitors are also on the scene: Google’s Bard, and Chinese tech giant Baidu’s Ernie (Enhanced Representation through Knowledge Integration).
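To make that ‘predicting the next word’ idea concrete, here’s a deliberately tiny sketch in Python. It is not how GPT models actually work (they use transformer neural networks with billions of learned parameters); it simply counts which word most often follows another in one sample sentence, then chains predictions together.

```python
# A toy illustration of next-word prediction.
# Real language models learn from enormous text corpora and consider long
# stretches of context; this sketch only looks at the single previous word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a simple bigram table)
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the sample text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# String predictions together, one word at a time
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # prints a short, plausible-looking run of words
```

The sentences such a toy produces look grammatical but are shallow; the large models do the same kind of prediction with vastly more data and context, which is why their output reads so convincingly.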
Kneejerk reactions to new tools are not new. Even when women started to vote, the sky didn’t fall in. But with any new tool, it’s important to weigh up both the possibilities and the pitfalls.
Instead of only seeing negatives, we could try seeing the possibilities of these AI language models.
Some experts believe that generative chatbots are the future of search engines, with the AI distilling what it finds on the internet into a single, direct answer to a search query rather than serving up pages of links.
If we’re saving ourselves the ‘trouble’ of searching, sifting, and synthesising information, do we then get smarter in different ways? We might not be able to remember the phone numbers of our nearest and dearest, but perhaps we’re now a genius at something else.
Another obvious advantage is how fast these models can pull together something that, on the surface at least, is accurate and articulate. If you’re staring at a blank screen, this could be the ignition you need. If you already have some good thoughts but want to stretch your thinking a little, or test out opposing assumptions, a generative language model could be just the ticket. You can keep asking questions to discover more, burrow deeper, and play devil’s advocate.
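As a rough illustration of that back-and-forth, here’s a minimal sketch using OpenAI’s Python library as it stood in early 2023. The model name, prompts, and placeholder API key are assumptions for the example; the point is simply that you can feed the model’s answer back in and ask it to argue the other side.

```python
# A minimal sketch of iterative prompting (OpenAI Python library, early 2023).
# The API key, model name, and prompts are placeholders for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"

messages = [
    {"role": "user",
     "content": "Give me three arguments for moving our FAQs into a chatbot."},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
first_answer = response["choices"][0]["message"]["content"]
print(first_answer)

# Burrow deeper and play devil's advocate with a follow-up question
messages.append({"role": "assistant", "content": first_answer})
messages.append({"role": "user",
                 "content": "Now argue the opposite: why might a chatbot be a bad idea?"})

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```

Whatever comes back still needs a human eye, for the reasons we cover below.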
Speed is something that excites companies that deal with thousands of customer enquiries. Companies heralding the new tech are hoping they can use it for predictable, repeatable tasks to free up time for more complex things.
If you already approach writing as a process, with distinct steps, an AI tool could help in the gathering and planning stages of your writing. At Write, we encourage writers to spend most of their time thinking and planning, and these new generative language models could help with that. This relies on information being accurate though, and we’ll talk more about that below.
Generative language models could also support students in education. Some educators are concerned that students will pass off generated text as their own. Others feel they should embrace the new technology and use its strengths to improve teaching and learning. Ironically, some have argued that educators may need to build more genuine connections with their students so they can recognise each student’s authentic voice and tell it apart from generated text.
Fear of ChatGPT in schools and universities is misplaced anxiety, argues education professor Carol Mutch.
Read the article on Newsroom’s website
A new tool is only as useful as the person wielding it or the information it has been given. Used wisely and with care, it should help rather than harm.
When we think about what could go wrong, we might picture driverless cars running amok, amplified misinformation and disinformation, and real damage done because people based their decisions on wrong information.
The technology is still in its infancy. This will change, but ChatGPT is currently limited to content gathered from the internet up to 2021, and it does not update itself (although the next generation is being developed). It presents its answers as fact, when we know the net is full of deliberate or accidental inaccuracies, some of them more dangerous than others. ChatGPT does make mistakes. But because the sentences are grammatically correct and ‘human-like’, we’re much more likely to believe what we read without verifying it. That’s how our fast-thinking systems work.
Language models churning out vast amounts of content will also add to the overload of information that we already have to cope with. So we’ll rely much more on mental shortcuts, which don’t always serve us well when deeper understanding is needed.
Too much content is a problem now, but how much worse might it get when you don’t need people to produce it? Knowing what to write and what to cull is a skill that will probably always need a human in the driving seat.
Replication is the name of the game. Yes, you can get AI to write with a personality, but it has to use a personality that already exists, or an amalgamation of ones that already exist. AI technology paraphrases rather than quotes. And when information is continually summarised and copied, some of it gets lost (like when music is compressed). Clarity decreases each time, but we aren’t aware of this because all the sentences are nice and tidy and seem complete.
‘ChatGPT is a blurry JPEG of the web’, argues technical writer Ted Chiang.
Read the article on the New Yorker website
Here’s what ChatGPT came up with when we asked it about the differences between human and AI writing.
Human writing and AI writing differ in several ways, including the following:
1. Creativity and Originality: One of the key differences between human writing and AI writing is creativity and originality. Humans can create unique and original ideas that AI cannot generate. Human writers can bring their personal experiences, emotions, and perspectives into their writing, resulting in a unique style and voice that is difficult for AI to emulate.
2. Contextual understanding: Humans have a better understanding of the context in which they are writing. They can use their intuition, background knowledge, and cultural awareness to produce more meaningful content. AI writing, on the other hand, relies on algorithms that analyze patterns and data to generate text.
3. Purpose and Intent: Human writers have a specific purpose and intention when they write. They may be trying to persuade, entertain, inform, or educate their readers, and they adjust their language and style accordingly. In contrast, AI writing is programmed to generate text based on specific parameters and criteria, without any specific intent or purpose.
4. Emotional content: While AI writing can sometimes mimic human emotions, it lacks the depth and authenticity of human emotions. Human writing can convey a range of emotions, including humor, sadness, joy, anger, and more, in a way that AI cannot replicate.
5. Human error: Human writing can contain mistakes, errors, and imperfections, which can add character and authenticity to the writing. AI writing, on the other hand, is generally error-free and consistent, which can make it seem less human-like.
Overall, while AI writing can be a useful tool for generating content quickly and efficiently, it cannot fully replicate the creativity, context, purpose, and emotional content of human writing.
We asked ChatGPT other questions about plain language, writers, humans, and communication. It generated answers like this in seconds. They each followed the same format, echoing point 1 above about originality and creativity. And did you notice the bold claim that AI writing tends to be error-free? This isn’t true.
At Write we emphasise human-centred writing. This means having empathy and understanding for your readers. We recommend you spend plenty of time on thinking and planning — to understand what your readers think and know, what questions they have, and what they need to get things done.
Even though we’re plain language experts and advise writers to weed out the waffle, we also know that light and shade, contrast, and heft and twang are what bring writing alive for people. If you continually write 15-word sentences with no variation or contrast, your readers will soon be asleep. No one style of writing appeals to all people; we need a mix of voices and styles that are still easy to understand. And writing needs to be, and feel, authentic.
When we talk about large language models like ChatGPT, let’s be careful we don’t portray them as ‘alive’. If we start talking about ‘robots’, ‘sentient beings’, and ‘runaway trains’, for example, it becomes harder for people to see that human decisions and actions are behind the tech and behind how it affects people’s lives. If we talk about a ‘tool’, it’s easier for people to see the humans involved, because humans need to wield the tool and give it the right information.
Human critical thinking skills will be in high demand. These language models will become widespread, so it’s vital that we teach and learn the critical thinking skills needed to assess information. Faced with information overload, our fast-thinking brains tend to accept information at face value, and the more something is repeated (even to negate it), the more we believe it. From a young age, we’ll need to learn how to sift and assess information critically before accepting it.
Other humans, and the tech they create, could be what’s needed to balance the power of new AI tools. Edward Tian, a 22-year-old computer science student at Princeton University, has created an app, GPTZero, that can sniff out AI in writing. It scores a piece of text on how predictable and how varied the writing is, and is fairly accurate at detecting whether a human hand is behind it. He says that the popularity of his app speaks to ‘a human urge to know the truth’. Educators can breathe out now.
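GPTZero’s real scoring is more sophisticated than anything we can show here, but the ‘variability’ idea can be sketched crudely: human writing tends to mix long and short sentences more than generated text does. The sample texts and the sentence-length measure below are illustrative assumptions only, not GPTZero’s method.

```python
# A crude, illustrative proxy for the 'variability' signal used by AI-writing
# detectors: how much sentence length varies across a piece of text.
# This is NOT GPTZero's algorithm, just a toy measure for discussion.
import re
from statistics import mean, pstdev

def sentence_length_variability(text):
    """Return the mean and standard deviation of sentence lengths (in words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return mean(lengths), pstdev(lengths)

human_like = ("I wrote this quickly. Then, after a long walk and far too much "
              "coffee, I rewrote every single sentence. Short ones too.")
machine_like = ("The report covers the main findings. The findings are clear. "
                "The results support the conclusion. The conclusion is sound.")

for label, text in [("human-like", human_like), ("machine-like", machine_like)]:
    average, spread = sentence_length_variability(text)
    print(f"{label}: average {average:.1f} words per sentence, spread {spread:.1f}")
```

A real detector weighs many more signals than this, and even then it’s only fairly accurate rather than infallible.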
At Write, we believe in using the power of words for good. Our vision is that everyone, no matter what their circumstances, can get the information they need, understand it easily, and act on it.
It’s early days, but this new technology could be a great support if we approach it with care. When you’re talking about new generative AI tech, we suggest you talk about it as a tool that humans wield and feed with information, rather than as something ‘alive’ with a mind of its own.
If you’re using the tech yourself, we suggest you check what it tells you against reliable sources, spend the time you save on thinking and planning, and keep a human in the driving seat for the final decisions.
So, we’re not at the mercy of a machine that’s out to replace us. Instead, we have in our hands a tool that can support us to give people the information they need, when they need it. Used with care and caution, this new tool has the potential to support humans to do what humans do best.
We gave ChatGPT the last word. We asked it for one-word answers:
In its favour, we can see that it is obliged to answer honestly!
Would you still prefer to get help from the good humans at Write? We can help with whatever you’re stuck on.
If you need help starting your AI writing journey, come and join our AI Writing Insights workshop for an introduction to writing with AI at work.
AI Writing Insights: Balancing opportunity and risk
If you need help writing for real readers, we can support you with consulting, coaching, training, or doing the writing and editing for you.
Write’s services
If you need to sharpen your critical thinking skills, we have the perfect workshop.
Critical Thinking workshops
If you need help connecting with humans using your website, sign up for our web content training.
Web Content workshops
And our instant online training is always at your fingertips.
Write Online free trial