Prompt engineering, the ability to write great AI prompts, has rapidly become one of the most in-demand skills in the workforce. As Generative AI tools for creating written content or images take centre stage, a new skill set is emerging around crafting great prompts to get the best results.
The AI prompt is what you input into an AI model to request your results. It can be a statement, a question, or even just conversational. Large Language Models (LLMs) are getting very smart these days, but they still need a starting point, and as with all technology over the years, the results will vary depending on the quality of the input.
Being good at writing AI prompts can help you bridge that results gap, and we have seven useful tips that you can apply to your prompts to help you get better results from your queries.
Be Specific with Your Requests
Use Clear and Concise Language
Provide Context When Necessary
Utilise Follow-Up Questions for Clarity
Prime the AI Persona
Leverage Keywords and Commands
Set the Tone or Style
Generative AI tools respond best when your prompts are highly specific. They work best when they are finely tuned to your specific needs, so take the time to outline exactly what you want in clearly defined detail from the start. You can then further tune your request with more specific follow-ups once you see your initial results.
AI content generation is rarely a one-and-done process. You will need to tweak each request to get exactly what you want. Drill down into the finer details if you want to see the best results.
Basic Prompt: “Write a blog post.”
Specific Prompt: “Write a 500-word blog post about the top 5 SEO strategies for small businesses in 2024, including keywords, backlinking, and local search optimization.”
This specific prompt clearly outlines the topic, word count, and key points to cover, leading to a more targeted and useful response. You will need to tune it further to meet your needs exactly, but it will give you a good start.
As with all good communication, clarity is key. The more information you can give your AI model the more likely it is to understand and return the results you need. Clear commands will also allow you to be more specific, which we have already seen above helps the model enormously.
Remember, your prompt is data that the model will need to analyse and compare to its existing data. AI models are very good at predicting from existing data, but they are not mind readers; only you can give them the exact directions to the information they need to find.
Basic Prompt: “I need some info on that thing where you make websites show up more on Google.”
Clear Prompt: “Provide an overview of basic SEO techniques for improving Google search rankings.”
The clear and concise prompt directly communicates the need without ambiguity, making it easier for the AI to generate relevant information. However, if you can’t remember the name for something, why not ask the AI tool first? Then you can be more specific in your next prompt.
This tip is especially helpful when troubleshooting an issue, as the additional context can really help your AI model shine. As you would if you were speaking to a doctor or mechanic, you want to give as many details as possible.
Again, this is an example of using prompt engineering to fine-tune your request and narrow down the options that the program needs to assess before returning your result. If you are searching on a specific error message, always add the additional context for the best results.
Basic Prompt: “How do I fix the ‘ModuleNotFoundError’?”
Contextual Prompt: “I’m working with Python 3.8, and I’m getting a ‘ModuleNotFoundError’ when trying to import Pandas. How do I fix it?”
By providing the extra context of the tool and version you are using alongside the error, you can help the AI find a much clearer and more exact answer.
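To illustrate, here is a minimal sketch of the kind of answer that contextual prompt tends to lead to: in Python, a ‘ModuleNotFoundError’ usually means the package is not installed in the environment you are running, and the usual remedy is a pip install. The helper function name here is ours, purely for illustration.

```python
import importlib
import sys

def try_import(module_name: str):
    """Return the module if it is available, else print an install hint."""
    try:
        return importlib.import_module(module_name)
    except ModuleNotFoundError:
        # The usual fix; note the package name can differ from the
        # module name for some libraries.
        print(f"'{module_name}' is missing. Try: {sys.executable} -m pip install {module_name}")
        return None

pandas_module = try_import("pandas")
```

If pandas is installed the helper simply returns the module; otherwise it prints the install command for the exact Python interpreter in use.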
When using prompts, you may want to think of the process as more of a conversation than a one-step process. Rather than searching for information with a short, quick phrase as you would in a search engine, you can converse back and forth with the AI tool using open questions to get the results you want.
The AI tool can remember the context of the conversation you have had so far, so if you want it to return to an earlier step, or need it to expand on any part of the process, you can use follow-up questions.
Basic Prompt: “Explain quantum computing.”
Follow-Up Question: “Can you give a simpler explanation of how qubits work compared to classical bits?”
The follow-up question narrows down the topic to a specific aspect that may not have been sufficiently clarified in the initial response. This allows for a deeper understanding and better responses.
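Under the hood, follow-up questions work because each new request carries the whole conversation so far. Most chat-style LLM APIs model this as a growing list of role-tagged messages; the sketch below assumes that common shape, and the function names are illustrative rather than any particular library's API.

```python
# Each turn is appended to one running list; the model receives the
# full list every time, which is how it "remembers" earlier context.
conversation = []

def ask(question: str) -> list:
    """Append the user's question and return the full history to send."""
    conversation.append({"role": "user", "content": question})
    return conversation  # the model sees every earlier turn as context

def record_reply(answer: str) -> None:
    """Store the model's answer so later follow-ups can refer back to it."""
    conversation.append({"role": "assistant", "content": answer})

ask("Explain quantum computing.")
record_reply("Quantum computing uses qubits, which...")
ask("Can you give a simpler explanation of how qubits work compared to classical bits?")
```

The second ‘ask’ call sends three messages, not one, which is why the follow-up can say “simpler explanation” without restating the topic.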
Booting up your AI tool of choice and getting straight to prompting is fine, but you may be missing an opportunity to prime the tool for better results.
Prime your model by defining its personality before you start asking your questions or giving commands. You can ask the AI tool to act as an expert in any given field and the results you receive will be very different from the basic persona.
Basic Prompt: “Write a blog on cars.”
Persona-led Prompt: “Acting as an expert content creator, write me an engaging blog post about cars.”
Priming the AI tool with a persona allows it to narrow down the type of content you are trying to create and work within the conventions of that role.
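In chat-style APIs, this priming typically maps to a “system” message sent before any user turns, so every later answer is shaped by that persona. A minimal sketch, assuming that common message format; the helper name is our own invention.

```python
def primed_conversation(persona: str, first_prompt: str) -> list:
    """Build a message list that leads with a persona-setting instruction."""
    return [
        # The system message sets the persona before the user speaks.
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": first_prompt},
    ]

messages = primed_conversation(
    "an expert content creator",
    "Write me an engaging blog post about cars.",
)
```

You only set the persona once at the top; every follow-up in the same conversation then inherits it.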
Using certain keywords in your prompts can help you achieve your goals more quickly than less specific verbs. Just as in a programming language, AI tools respond better to certain keywords and commands that they understand clearly. So instead of open verbs like “write” or “create”, you can niche down to the specifics with commands.
A few examples would be:
Explain: Use this to request detailed explanations or to have complex concepts broken down into simpler terms.
Summarize: Ideal for getting a concise overview of a longer text or concept. You can provide the text to the AI tool for it to summarise.
List: When you want information presented in a bullet-point or numbered format for easy reading.
Compare and contrast: Use this to ask for differences and similarities between two or more items.
Advantages/Disadvantages: Use this command to inquire about the pros and cons of a specific topic, method, or item.
Don’t forget that you can always prompt your AI tool for a list of specific commands that will get the best responses. There’s no better place to ask than in the tool itself.
Keyword Prompt: “What keywords can I use in my prompts to get the best results?”
You can give your AI tool more information about what kind of voice you are trying to achieve in your content. Make it clear whether your goal is to produce something in a professional tone or a casual one.
You can also specify what variation of a language you would like in your text. For example, specify whether you want the tool to produce text in American or British English.
Style Prompt: “Write a professional email to a client apologizing for a delayed project delivery.”
If the style is not quite right on the first iteration, try specifying further with additional adjectives, like “professional and persuasive”, “casual and conversational”, or “technical and detailed”. You will need to experiment to find the voice that best suits your goals.
Getting good at writing AI prompts may take a little time and practice, but if you keep your prompts specific, clear, and concise, you will soon start to see better results from your AI model.
However, if you ever get stuck don’t forget you can always ask the expert: the LLM itself!
Reach out using the form below if you would like to learn more about our bespoke AI solutions, which can help you generate and translate your content more efficiently and beat your competitors to market. Or click the links below for some great deep-dive articles on LLMs and the future of AI.
Generative AI seems to be everywhere these days. But what is it and how can this technology help your business grow? Find out more.
Can LLMs keep your data safe? Discover the importance of data security when working with Large Language Models.