Jacob Lister | August 16, 2023
Love it or hate it, artificial intelligence (AI) is here to stay. From writing entire books to reading people’s minds, AI has pushed the boundaries of what humans thought was possible.
If you write for work, you’re probably wondering how to get the most from AI based on large language models like the popular ChatGPT. As with any tool, the quality of what AI can make depends on how well you use it.
In the same way calculators changed the way we do maths, AI is a tool that could revolutionise the way we write — but only if we use it correctly. So how can you get the most from AI?
Every AI based on a large language model works by guessing what comes next in a sequence. It’s like if I asked you what number comes next:
2, 4, 6, 8, ?
You know the answer is 10 because you’ve seen this sequence thousands of times. AI does this exact same thing, but with massive amounts of words.
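To make that idea concrete, here’s a toy ‘language model’ in Python. It only counts which word follows which in a tiny sample sentence — nothing like the billions of patterns ChatGPT has learned — but the principle of predicting the most likely next word is the same. The sample text and function name are made up purely for illustration.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in some training
# text, then predict the most frequent follower. Real models like ChatGPT
# learn far richer patterns, but the core idea is the same.
training_text = "the cat sat on the mat and the cat slept on the mat"

followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return followers[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — the only word ever seen after "sat"
```

Ask it what follows ‘sat’ and it answers ‘on’, for exactly the same reason you answered ‘10’ above: that’s the continuation it has seen before.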
Simply put, AI works in three steps: it reads your input, predicts the most likely next word, then adds that word to the sequence and repeats until the response is complete.
Large language model AIs like ChatGPT aren’t magic. They’re just excellent at guessing which words should come next.
Prompt engineering is about making your input more specific, so the AI can give a more specific output. By doing this, you’re helping the AI ignore the parts of its training data that aren’t relevant to the sequence it’s trying to guess.
For example, when I ask ChatGPT ‘How to bake a cake?’, it responds with a generic recipe: a standard ingredient list and 12-step instructions.
This is okay, but what if I’m gluten-free, vegan, and still want a cake that’s delicious? I can change my input to say ‘How to bake a delicious, gluten-free, and vegan cake?’ This time, every ingredient in the list it generates is gluten-free and vegan.
By making my prompt more specific, the AI knows to ignore any ingredients that aren’t gluten-free or vegan.
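If you build prompts in code rather than typing them by hand, this kind of specificity can be as simple as string assembly. A minimal sketch — the function name and wording are my own, for illustration only:

```python
def build_prompt(task, constraints=()):
    """Join a base task with explicit constraints into one more specific prompt."""
    if not constraints:
        return task
    # Spelling out the constraints tells the AI what to filter out.
    return f"{task} It must be: {', '.join(constraints)}."

# A vague prompt invites a generic answer...
print(build_prompt("How do I bake a cake?"))
# ...while a constrained prompt narrows the AI's guessing.
print(build_prompt("How do I bake a cake?", ("delicious", "gluten-free", "vegan")))
```

The second call produces ‘How do I bake a cake? It must be: delicious, gluten-free, vegan.’ — the same prompt-engineering move as rewriting the question yourself.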
The more complex the task, the more important prompt engineering becomes. Prompt engineering helps the AI create an output that’s as useful and relevant as possible.
As the amount of information the AI searches through increases, so does the chance that irrelevant or untrue information will be present in the output.
The same is true for the size of the output. If the AI gives a 2,000-word output instead of a 50-word one, there’s a much greater chance it will contain unhelpful information.
Researchers and user testers have identified some useful strategies.
We can help AI filter unnecessary information by asking it to pretend to be an expert. When we do this, AI is more likely to only use information that actual experts would use. More importantly, the AI is more likely to respond with follow-up questions an actual expert would ask.
For example, if you ask ChatGPT ‘How do I lose weight?’, it responds with 12 generic pieces of advice like ‘Mind your eating’. This is okay, but we can do better.
To give the AI a personality, you give a more detailed input. For example:
You are a professional personal trainer with 30 years of experience. Make me a plan on how I can lose weight.
This time, ChatGPT responds with seven specific areas of life to focus on, including nutrition, rest, and exercise. For each area, it gives two to four specific instructions, with clear guidance on how to proceed.
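If you use ChatGPT through its API rather than the chat window, the same trick applies. In the OpenAI-style chat format, the persona usually goes in a ‘system’ message ahead of your actual request. This is a sketch of the message structure only — the exact client call depends on your library version:

```python
# Role prompting in the OpenAI-style chat message format:
# the persona lives in a "system" message, the task in a "user" message.
messages = [
    {
        "role": "system",
        "content": "You are a professional personal trainer "
                   "with 30 years of experience.",
    },
    {"role": "user", "content": "Make me a plan on how I can lose weight."},
]

# This list is what you would pass as the `messages` argument of a
# chat-completion call; here we just print the structure.
for message in messages:
    print(f"{message['role']}: {message['content']}")
```

Separating the persona from the task this way keeps the ‘expert’ framing in force for every later turn of the conversation.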
Chat-based AI models like ChatGPT are designed to be used like a conversation. There’s a good chance the output you get won’t be right the first time round. This is okay.
Unlike a Google search, ChatGPT remembers the previous inputs and outputs in a conversation, so it can improve its responses. When you instruct the AI to ask you follow-up questions, you’re helping it understand what output you want.
For example, we can add two instructions to the previous prompt to make it better.
You are a professional personal trainer with 30 years of experience. Make me a plan on how I can lose weight. Ask follow-up questions to help guide you to a diagnosis, or to make your advice more specific. Do you understand?
Instead of giving generic advice, the AI asks us important questions.
When you answer these questions, the AI might ask further questions, or give specific guidance based on your answers.
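Under the hood, this ‘memory’ is nothing mysterious: the full history of messages is sent back to the model with every new turn. A sketch of how follow-up questions and answers accumulate — the roles follow the OpenAI-style chat format, and the example turns are invented:

```python
# A chat model has no memory of its own: every request carries the
# whole conversation so far, which is how earlier answers shape later ones.
history = [
    {"role": "user", "content": "Make me a plan on how I can lose weight."},
]

def add_turn(role, content):
    """Record one turn; the full history is resent with the next request."""
    history.append({"role": role, "content": content})

# The model asks a follow-up question, and we answer it.
add_turn("assistant", "How many days a week can you exercise?")
add_turn("user", "Three days a week, mostly mornings.")

# All three turns travel together in the next request.
print(len(history))  # 3
```

Each answer you give becomes part of the next request, so the advice keeps narrowing towards your situation.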
If you’ve ever tried giving instructions to a child, you’ll know what I mean here: they’re more likely to follow instructions correctly when you tell them what to do, instead of telling them what they shouldn’t do. AI is the same.
When you give instructions, include examples or a description of how you want the output presented. Do you want a bullet-point list? A table? A poem of exactly 476 words written for a 9-year-old? AI can do all these things, and more — you just need to empower it.
On the other hand, AI is more likely to produce unhelpful results if you give examples of what you don’t want.
Basically, avoid saying things like:
Don’t use sources from before the year 2000.
Instead, say something like:
Only use sources written after the year 2000.
From the AI’s perspective, the amount of information you want in your output is much smaller than the amount of information to exclude. You’re not being helpful by adding to the already massive list of things to exclude.
AI works best with clear instructions, and instructions are clearest with basic verbs (action words). Whenever possible, try to use the most basic form of verbs you know.
Use ‘Describe’, not ‘Give a description of’.
Use ‘Suggest’, not ‘Make a suggestion about’.
Use ‘Analyse’, not ‘Make an analysis of’.
Use ‘Solve’, not ‘Provide a solution for’.
AI can lie. Researchers have found that AI will often present information as fact even when that information is untrue. Sometimes, if the AI isn’t sure about something, it will simply make something up, a behaviour researchers call ‘hallucination’. And they still don’t fully understand why it happens!
You won’t be surprised to hear that the internet is full of biased content, like racism and homophobia, as well as content that is simply untrue. Despite the best efforts of the researchers developing AI, this content inevitably creeps into the data AI models are trained on.
The danger is that AI presents information so convincingly that it’s easy to believe anything it says. For these reasons, subject-matter experts (like you) will always be needed to check that AI’s outputs are as unbiased and true as possible.
To make your job as the fact checker easier, you can ask the AI to include sources of information in the output it provides. This way, you can check the accuracy of the source itself.
ChatGPT’s default data privacy settings allow its parent company OpenAI to keep your data to train future AI models. This is necessary for it to develop more capable AI. However, it also means OpenAI can keep and reuse any data you put into ChatGPT. Planning to process any confidential data? You may want to reconsider.
You can change your privacy settings in ChatGPT so it doesn’t save your data, but your data will still stay in OpenAI’s systems for 30 days. Any information you put into ChatGPT is at risk of being stolen or leaked within those 30 days.
If you work in government, the guidance below from the Office of the Privacy Commissioner is useful. It sets out potential privacy risks to consider and steps you should take before using AI in your work.
Read guidance on using AI from the Office of the Privacy Commissioner
As with anything you put on the internet, you never know where info you put into AI will end up, or how it will be used. Stay safe out there!
If using AI isn’t for you, then Write is here to help! We offer a range of writing workshops for all your work needs.
Check out our AI Writing Insights workshop for an introduction to writing with AI at work.
AI Writing Insights: Balancing opportunity and risk
Check out our Business Writing Essentials workshop for tips on all things writing.
Business Writing Essentials Workshop
Check out our Critical Thinking Workshop to out-think the machines!