How to Make the Most of Prompt Engineering

Introduction to Prompts
Large Language Models (LLMs) are designed to understand and generate human language. These models are trained on very large datasets that include books, articles, and websites – basically the entire public internet. This allows them to understand the intricacies of human language. Through this training, LLMs have learned to generate text, answer questions, and carry on conversations in a very human-like way.

At the heart of interacting with an LLM is the concept of a prompt. A prompt is a question or instruction provided by the user, serving as a starting point for the model’s response. This natural language input could be as simple asking what actors have ever played Batman. It could also involve a more complex task, such as asking it to debug a piece of software that’s behaving in unexpected ways.
The importance of prompts cannot be overstated for anyone trying to make the most out of their interactions with an LLM. They help the model to understand the user’s intent and determine how to construct an appropriate response. Building an effective prompt enables more accurate and contextually relevant outputs and in turn, better user experiences.
What is prompt engineering?
Prompt engineering refers to the process of structuring a request you have for an LLM in such a way that the LLM will reliably produce the output you desire. As more and more people use LLMs as part of their everyday work, specific techniques have been discovered that help achieve certain results. For instance, when asking an LLM to write an article, many prompt engineers have realized that asking first to produce a list of key points or an outline, then asking the LLM to write the article based on those key points yields better results than simply asking for a draft of the article. This is a very basic example of prompt engineering!

How does prompt engineering work?
Prompt engineering is part art and part science. It generally involves crafting questions or instructions to produce the most effective response from your large language model. The good news is that, in practice, this doesn’t require you to be an expert in data science with a deep understanding of how generative AI models work. After all, the beauty of LLMs is that they are built to understand the same language you use naturally.
However, there are times when knowing the nuances of what works well in a particular model will be helpful. For example, if you provide the same prompt to three different image generators like DALL-E, stable diffusion, and Midjourney, each model’s output is likely to produce three very different images!
The prompt engineering process almost always starts the same way – with defining your objective clearly. What exactly is the desired outcome you would like the LLM to achieve? With this goal in mind, you can experiment with various prompt styles to fit the situation. For example, asking broad, open-ended questions about an existing essay or academic paper can help you assess if your paper conveys the appropriate high level messages you intended. On the other hand, providing very specific instructions, or even structured templates, can be more suitable when you are asking the generative AI model to assist with software development tasks where you need to generate specific code.

Moreover, prompt engineering often involves a blend of creativity and analytical thinking, as finding the most effective prompt can be akin to solving a puzzle. Each adjustment can significantly impact the model’s performance, making prompt engineering a key skill for anyone looking to leverage the full potential of large language models. Through an iterative process of experimentation and refinement, prompt engineers can vastly improve the quality of output they can generate with these artificial intelligence systems.
Why is prompt engineering important in generative AI?
Simply put, prompt engineering is an essential technique required to make generative AI useful for almost any non-trivial application. While certainly powerful, upon even moderate scrutiny the creative content generated by artificial intelligence models, such as blog posts, articles and essays, is usually fairly terrible. Likewise, LLMs are often prone to hallucinate, or put it politely, make stuff up, when they are lacking knowledge about subjects they are being asked about.
Prompt engineering gives users a way to instruct the generative AI model on how to behave appropriately in other important ways as well. For example, we mentioned earlier that LLMs are often trained on a corpus of the entire public internet. As you likely are aware, the public internet is full of many truly awful people who say many truly awful things. Prompt engineering can help to ensure that those offensive statements aren’t reflected in your applications. This is particularly important as companies build generative AI applications for customer facing tasks like responding to complaints or providing first level responses to customers making a sales inquiry.
Prompt engineering techniques
Now that we’ve explained what prompt engineering is and why it’s important, let’s now turn our attention to the actual techniques that are commonly employed when creating the specific prompts that reliably trigger the model’s ability to produce the desired outcomes.

Chain-of-thought prompting (CoT)
CoT prompting is a key technique in AI prompt engineering that instructs the LLM how to perform complex tasks by explaining the reasoning behind arriving at a given response. Introduced in a paper by Jason Wei, et. al., the authors used this example to illustrate how CoT help the LLM to identify the desired keywords necessary to perform reasoning in its response:

As you can see, this technique involves explaining to the LLM, using a few examples, not only what the correct answer to a question is, but also the logical reasoning that was used to arrive at that answer.
Especially effective for complex, multi-step problems, Chain of Thought prompting can enable the AI system to the problem sequentially, at each step identifying the key pieces of information it should pay attention to, and how it should use those pieces of information to produce the desired response.

By instructing the generative AI models how to reach a conclusions, CoT prompting augments an LLMs ability to produce text with a highly surprising ability to apply complex commonsense reasoning to solve problems.
Request, task, format framework
The Request, Task, Format (RTF) technique in prompt engineering systematically structures interactions with generative AI tools, focusing on multiple steps: defining the request, specifying the complex tasks, and formatting the desired output. This method introduces a disciplined approach to crafting effective prompts, ensuring each element of the interaction is explicitly outlined for optimal clarity and effectiveness.
At the core of the RTF technique is the importance of defining the request clearly. This involves stating precisely what information or action is being sought from the generative AI. Whether the user is looking for an answer to a question or creative content generation, articulating the request clearly sets the stage for a relevant and targeted AI response.

Specifying the task is the next critical step, where the prompt clearly outlines the specific action the AI is expected to perform along with intermediate steps that might be required. This specificity guides the generative artificial intelligence in understanding not just the goal but also intermediate steps along the pathway to achieving it, whether through logical reasoning, data processing, or creative synthesis. It’s about making the ‘how’ as clear as the ‘what’.
The final component, formatting the response, dictates the structure or form in which the AI’s output should be delivered. Whether the user needs a bulleted list, a detailed paragraph, a summary, or an existing code snippet, setting these expectations upfront ensures the response meets the user’s needs both in content and presentation.
Employing the RTF technique not only enhances the precision and clarity of generative AI interactions but also improves the user experience. By making the exchange between human and generative AI more predictable and understandable, it streamlines the process of achieving meaningful communication, leading to outputs that are directly aligned with user intentions.
An example of a prompt that uses RTF might look something like this:
Request: Explain the concept of machine learning bias.
Task: Outline the causes of bias in machine learning algorithms and its impact on natural language processing.
Format: Write a detailed explanation in three paragraphs, with one paragraph dedicated to causes, one to impacts, and one to potential mitigation strategies.
In this example, we clearly describe the overall request we have for the large language model, then describe the complex task required to complete the request and finish with information describing how we would like the response formatted.
Persona adoption prompt engineering techniques
By employing persona adoption techniques, prompt engineers can direct a language model to adopt a specific tone, style, or perspective in its responses, enhancing user engagement. This approach allows for a high degree of customization, enabling generative AI systems to deliver responses that are not just contextually accurate but also stylistically aligned with the user’s expectations or the specific scenario at hand. Whether the aim is to mimic a friendly advisor, a strict coach, or a neutral informant, persona adoption tailors AI interactions to be more engaging and relatable.

The application of persona adoption spans various domains, from customer support bots that convey empathy and patience, to educational platforms where a nurturing or motivational persona can significantly impact learning outcomes. Creative fields also benefit, with AI generating in context learning that can create text that reflects a particular authorial voice or character perspective.
Persona adoption in prompt engineering not only enriches the AI’s interaction capabilities but also introduces a layer of personalization, making generative AI tools more versatile and impactful across a range of applications.
Effective prompts that use this techniques would look something like these:
Prompt: “As a fitness coach known for your motivational and upbeat attitude, what message would you send to someone who feels guilty for missing their workout session?”
Prompt: “As a customer service representative who is always helpful and understanding, how would you inform a customer that their order will be delayed due to unforeseen circumstances?”
This approach to prompt engineering instructs the generative AI model to adopt the target persona which in turn helps to produce optimal outputs.
Directional stimulus prompting techniques
The technique of directional stimulus in AI prompt engineering precisely guides AI responses to fit a desired narrative or factual framework. Described initially in a paper by Zekun Li, et. al, this technique allows prompt engineers to steer AI content toward specific objectives, enhancing the output’s relevance and accuracy. The example given in the research paper is shown here:

This method is effective in use cases where you can easily provide hints or clues to tell the LLM where you would like it to focus as it is working to produce the final answer. This approach is helpful in ensuring your expected response takes into account the key pieces of information you want the LLM to focus on.
A prompt engineer using this technique might create prompts that look something like these:
Prompt: Given the context of the Industrial Revolution, analyze how technological advancements influenced urban development. Hint: Specific innovations and their direct impact on city growth during the 18th and 19th centuries and describe the most commonly reached conclusion by historians.
Prompt: Write a short article about generated knowledge prompting. Emphasize the contrast between this technique and other complexity based prompting techniques such as least to most prompting along with approaches that leverage in context learning. Highlight how these approaches are used with open source language models and how they can be used in refining large language models excluding any techniques that are specific to text to image models or inconsistent explanation trees.
Tips and best practices for writing prompts
To elevate the quality of AI-generated content, understanding the art of prompt writing is key—beginning with these foundational tips and best practices.
Clarity and Specificity
The cornerstone of effective prompt writing is clarity. When writing prompts, the prompt engineer should be concise yet specific enough to guide the generative AI toward the desired outcome. Paying attention to details such as word order and thinking about the complex reasoning that might be required can be helpful to achieve an effective prompt. Ambiguous prompts can lead to irrelevant or off-target responses, so it’s crucial to define the specific task clearly and set precise expectations.
Zero Shot Prompting Techniques
Zero shot prompting requires generative AI tools to respond based on pre-existing knowledge within the large model, without any examples. The key here is to craft prompts that are self-contained, providing all necessary context and details within the prompt itself. This zero shot, technique is useful for gauging the AI’s baseline understanding and creativity.
Few-Shot Prompting Techniques
Few-shot prompting involves iterative refinement of the output by providing a first prompt followed by subsequent prompts that either refine the output of the previous steps or instruct the LLM how to proceed based on the output produced so far. This technique can significantly enhance the AI’s ability to produce specific and relevant outputs.
Incorporating Context
Adding sufficient context to your prompts can drastically improve the AI’s responses. Context helps the AI understand not just what you’re asking for but why, which can be particularly important for complex requests or when the desired outcome involves nuanced understanding. Techniques like Retrieval Augmented Generation (RAG) can be especially effective at improving responses. For text to image AI systems, this may involve providing example images to help create the desired response.

Response Formatting
Specifying the format you expect for your response can help produce the desired output. For example, with natural language text responses, you can indicate whether you’re looking for a detailed report, a list, a summary, or creative prose. As a prompt engineer, you must clearly state your formatting preferences to ensure the generative AI response is immediately usable.
Testing and Iteration
Prompt writing is as much an art as it is a science. It requires testing different formulations and iterating based on the AI’s responses. Each iteration offers insights into how the AI interprets prompts, allowing for continuous refinement and improvement.

This structured approach to prompt writing provides a comprehensive framework for generating high-quality generative AI responses. By mastering these techniques, prompt engineers can enhance the effectiveness and accuracy of AI-generated content across various applications.
Prompt engineering security
In the domain of prompt engineering, similar to software engineering, security plays a pivotal role, particularly in addressing and mitigating prompt injection attacks.
Prompt Injection
Prompt injection occurs when malicious inputs are designed to alter the behavior of generative AI models, leading to unauthorized outcomes. These attacks exploit the way AI models process inputs, tricking them into executing unintended actions or divulging sensitive information. The risks associated with these attacks are significant, compromising the integrity and reliability of AI systems, and potentially causing harm if sensitive data is exposed or manipulated.
Mitigate Prompt Injection Attacks
To counteract prompt injection threats, a multi-layered approach to security is essential:
Preventive Measures
Developing robust prompt validation protocols as part of your software development process is crucial. By scrutinizing input prompts for anomalies or malicious patterns, engineers can prevent many attacks before they occur. Additionally, training AI models to recognize and reject injection attempts enhances system resilience.
Detection and Response
Implementing real-time monitoring systems that detect unusual AI responses or usage patterns is a vital part of a secure software development process. Upon detection, swift response mechanisms, including prompt isolation and analysis, can mitigate impacts and prevent further exploitation.

Ongoing Monitoring and Updates
Continuous monitoring and regular updates to security measures and generative AI services are imperative. As new types of prompt injection threats emerge, prompt engineering practices must evolve, incorporating the latest security advancements and threat intelligence.
By prioritizing security within prompt engineering practices and adopting comprehensive measures against prompt injection, the safety and integrity of AI-generated content can be significantly enhanced, protecting both the systems and the users they serve.
Career opportunities for prompt engineers
As companies increasingly recognize the value of tailored AI interactions, the demand for skilled professionals in this area is growing, leading to more firms hiring prompt engineers.
What skills does a prompt engineer need?
Essential skills for prompt engineers include a basic understanding of AI and machine learning principles, familiarity with natural language processing, creativity in prompt design, and the ability to analyze and improve AI responses.

Prompt engineering jobs
Careers in prompt engineering span various industries, from tech companies focused on AI development to businesses seeking to enhance customer interaction through AI. For those with the right blend of technical knowledge and creativity, the field offers a dynamic and evolving career path.
Conclusion
Prompt engineering is an important emerging skillset, offering opportunities for innovation. As we’ve explored, from crafting effective prompts to securing AI interactions and understanding career paths, the field is ripe with potential. For those ready to dive in, an interesting world of chain of thought rollouts to maieutic prompting awaits you to text to image generators awaits you!