Prompt Engineering: Definition, Examples, Tips and More
Natural language processing (NLP) has long been a field rich in practical applications built on underlying technologies and techniques. In recent years, and especially since the start of 2022, NLP and generative AI have improved dramatically. This has made prompt engineering an essential skill for anyone who wants to master language models (LMs). In this article, we will cover:
- Understanding prompts, prompt engineering, and examples
- Tips on refining your prompts
- Elements of prompts and prompt patterns
- Prompting techniques
Knowledge of prompt engineering helps you understand the capabilities and limitations of large language models (LLMs).
This article was published as a part of the Data Science Blogathon.
Table of contents
- What is Prompt Engineering?
- Examples of Prompt Engineering
- How to Engineer your AI Prompts?
- Elements of a Prompt
- Standard Prompts Pattern
- Prompting Techniques
- What to Avoid When Creating Prompts?
- Frequently Asked Questions
What is Prompt Engineering?
Prompt engineering is a practice in the natural language processing (NLP) field of artificial intelligence in which text describes what the AI is asked to do. Guided by this input, the AI generates an output. The intent is to communicate with models conversationally, using human-understandable text. Because the task description is embedded in the input, the model can respond flexibly to a wide range of tasks.
What are Prompts?
A prompt is a detailed description of the output expected from the model. It is the point of interaction between a user and the AI model. This should give us a better understanding of what the engineering is about.
Examples of Prompt Engineering
The prompts used with large language models such as ChatGPT and GPT-3 can be simple text queries. Even so, output quality depends on how much relevant detail you provide. Prompts can be used for text summarization, question answering, code generation, information extraction, and more.
Since LLMs could be used to solve complex problems where many instructions are included, it is vital to be detailed. Let’s see some examples of basic prompts:
Antibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body's immune system to fight off the infection. Antibiotics are usually taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance. Summarize the above into 2 sentences:
This prompt produces the following summary:
Antibiotics treat bacterial infections by killing or preventing their reproduction, enabling the immune system to fight off infections. Oral or intravenously administered, they are not effective against viral infections and can lead to antibiotic resistance.
We just saw an illustration of using LLMs; the possibilities are endless. Prompt engineering can be tailored for various outputs. Here are examples for different content types:
Text – ChatGPT, GPT
- What’s the difference between generative AI and traditional AI?
- Provide 10 variations for the headline, “Top generative AI use cases for the enterprise.”
- Write an outline for an article on the benefits of generative AI for marketing.
- Generate 300 words for each section of the article.
- Create engaging headlines for each section.
- Craft a 100-word product description for ProductXYZ in five styles.
- Define prompt engineering in iambic pentameter, Shakespearean style.
Code – ChatGPT, Codex
- Act as an ASCII artist translating object names into ASCII code.
- Identify mistakes in a given code snippet.
- Write a function multiplying two numbers and returning the result.
- Develop a basic REST API in Python.
- Explain the function of a provided code.
- Simplify a given code.
- Continue writing a provided code.
Images – Stable Diffusion, Midjourney, Dall-E 2
- Depict a dog in a car wearing sunglasses and a hat in the style of Salvador Dali.
- Illustrate a lizard on the beach in the style of claymation art.
- Create an image of a man using a phone on the subway, 4K resolution with bokeh blurring.
- Design a sticker illustration of a woman drinking coffee at a table with a checkered tablecloth.
- Visualize a jungle forest with cinematic lighting and nature photography.
- Generate a first-person image looking out at orange clouds during a sunrise.
How to Engineer your AI Prompts?
The quality of the prompt is critical. There are ways to improve prompts and get better outputs from your models. Here are some tips:
- Role Playing: The idea is to make the model act as a specified system, creating a tailored interaction that targets a specific result. This saves time and complexity yet achieves tremendous results. For example, the model could act as a teacher, code editor, or interviewer.
- Clearness: This means removing ambiguity. Sometimes, in the course of trying to be detailed, we end up including unnecessary content. An excellent way to achieve clarity is to be brief.
- Specification: This is related to role-playing, but the idea is to be specific and channel the model in a streamlined direction, avoiding scattered output.
- Consistency: Consistency means maintaining flow in the conversation. Keep a uniform tone to ensure the conversation stays readable.
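The tips above can be sketched in code. Below is a minimal sketch (plain Python string handling, no real API calls) of assembling a role-playing prompt with clear, specific constraints; the `build_role_prompt` helper and its arguments are hypothetical illustrations, not part of any library.

```python
def build_role_prompt(role, task, constraints):
    """Combine a role, a clear task, and specific constraints into one prompt."""
    lines = [f"Act as {role}.", task]          # role-playing + clear task
    for c in constraints:                      # specific, brief constraints
        lines.append(f"- {c}")
    return "\n".join(lines)

prompt = build_role_prompt(
    role="an experienced Python tutor",
    task="Explain list comprehensions to a beginner.",
    constraints=["Be brief and unambiguous.", "Keep a consistent, friendly tone."],
)
print(prompt)
```

The resulting text can then be sent to any chat-style model; keeping role, task, and constraints separate makes each tip easy to adjust independently.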
Elements of a Prompt
These are the attributes that make up the skeleton of a prompt. They can include:
- Instruction: A statement tasking the model to perform something.
- Context: Information that streamlines the model toward the problem. Without it, the model can go completely off topic and give poor responses.
- Input Data: The content the model should operate on, supplied as part of the prompt.
- Output Indicator: A marker for the type or format of the expected output (for example, code when role-playing as a code editor). This element helps the model channel its output suitably.
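The four elements above can be combined into a single prompt string. The sketch below uses a hypothetical `build_prompt` helper that simply joins the elements in order, reusing the sentiment example from later in this article.

```python
def build_prompt(instruction, context, input_data, output_indicator):
    """Assemble the four prompt elements into one newline-separated string."""
    return "\n".join([
        instruction,       # what the model should do
        context,           # background that keeps the model on topic
        input_data,        # the content to operate on
        output_indicator,  # the expected output format
    ])

prompt = build_prompt(
    instruction="Classify the text into neutral, negative, or positive.",
    context="You are a sentiment analysis assistant.",
    input_data="Text: I think the presentation was awesome.",
    output_indicator="Sentiment:",
)
print(prompt)
```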
Standard Prompts Pattern
Let us look at an overview of what a prompt format looks like. Below is an example of an exchange between a user and the model with straightforward instructions.
User: <Instruction>
Model: <Response>
Few-Shot: This is a prompt pattern that uses in-context learning: examples are provided in the prompt itself, allowing the model to process them before answering. We will look at this more in the next section. Few-shot can be formatted as follows:
<Instruction> <Response>
<Instruction> <Response>
<Instruction> <Response>
<Instruction>
In a question-and-answer pattern, we have:
Q: <Question>?
A: <Answer>
Q: <Question>?
A: <Answer>
Q: <Question>?
A: <Answer>
Q: <Question>?
A:
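The Q&A few-shot pattern above can also be generated programmatically. The `format_qa_few_shot` helper and the arithmetic examples below are hypothetical illustrations of the pattern.

```python
def format_qa_few_shot(examples, final_question):
    """Render (question, answer) pairs, then a final unanswered question."""
    parts = [f"Q: {q}?\nA: {a}" for q, a in examples]
    parts.append(f"Q: {final_question}?\nA:")  # leave the last answer blank
    return "\n".join(parts)

prompt = format_qa_few_shot(
    [("What is 2+2", "4"), ("What is 3+3", "6")],
    "What is 5+5",
)
print(prompt)
```

The model is expected to complete the final `A:` line by imitating the answered pairs that precede it.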
Prompting Techniques
There are different techniques used in writing prompts; they are the backbone of prompt engineering.
1. Zero-Shot Prompting
Zero-shot prompting gives the model a prompt for a task it was not explicitly trained on, yet it still performs as desired. In a nutshell, LLMs can generalize.
Classify the text into neutral, negative, or positive.
Text: I think the presentation was awesome.
Sentiment:
Because the model knows the meaning of “sentiment,” it could classify the text zero-shot, even though it was not given any labeled examples of text classification to work from. There can be a pitfall, though, since no descriptive data is provided in the prompt; in that case, we can use few-shot prompting.
2. Few-Shot Prompting/In-Context Learning
In simple terms, few-shot prompting gives the model a few examples (shots) of what it must do. The model takes insight from this demonstration to perform the task; instead of relying solely on what it was trained on, it builds on the shots provided.
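To illustrate, a few-shot sentiment-classification prompt might be assembled like this; the labeled shots are made-up examples.

```python
# Labeled examples ("shots") the model can imitate.
shots = [
    ("This is awesome!", "Positive"),
    ("This is bad.", "Negative"),
    ("The movie was okay.", "Neutral"),
]

# Render each shot, then the unlabeled text for the model to classify.
lines = [f"Text: {text}\nSentiment: {label}" for text, label in shots]
lines.append("Text: What a horrible show!\nSentiment:")
prompt = "\n".join(lines)
print(prompt)
```

The model infers the labeling scheme (Positive/Negative/Neutral) purely from the in-context examples and completes the final `Sentiment:` line.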
3. Chain-of-thought (CoT)
CoT allows the model to achieve complex reasoning through intermediate reasoning steps. It involves creating and refining intermediate steps, called “chains of reasoning,” to foster better language understanding and outputs. It can act as a hybrid, combining few-shot prompting with more complex tasks.
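As a sketch, a few-shot CoT prompt pairs a worked example (with its intermediate reasoning spelled out) with a new question. The arithmetic word problems below are illustrative examples of the kind commonly used in the CoT literature.

```python
# One worked example whose answer shows the reasoning steps explicitly.
cot_example = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

# The new question; the model is expected to reason step by step
# before giving its final answer, imitating the example above.
new_question = (
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\nA:"
)

prompt = cot_example + new_question
print(prompt)
```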
What to Avoid When Creating Prompts?
Before rounding up, there are some things we should avoid when creating prompts:
- Information Overload (Ambiguity): Provide only the information needed; excess detail becomes noise and reduces the accuracy of the results.
- Open-Ended Questions: It is recommended that we avoid asking inexact or open-ended questions. A vague question might be: “Can you help me find my way home?” Such questions are non-specific and too generic, and they cause imprecise, less useful responses.
- Poor Use of Constraints: Constraints are boundaries and limitations that keep a task from becoming scattered. Provide specific requirements; one way to do this is to have the model role-play.
We have seen a detailed guide to prompt engineering, with insights into the fundamentals and how prompts function in AI models such as ChatGPT. AI has experienced a complete revolution in its use cases, with endless possibilities and futuristic applications. Prompts can guide AI models much like human instructions. Understanding these principles and pillars is crucial for effective AI use.
- NLP and Generative AI have experienced improvements since 2022, making prompt engineering crucial for mastering language models.
- Prompt engineering uses human-readable text to describe tasks to a model, which then responds flexibly based on that input.
- Refining prompts is essential for improving output quality; techniques such as role-playing save time, and a consistent tone keeps conversations flowing and readable.
Frequently Asked Questions
Q1. What does a prompt engineer do?
A. A prompt engineer specializes in developing and refining text prompts for AI models. They know state-of-the-art approaches for eliciting the desired responses from AI models.
Q2. Who can become a prompt engineer?
A. Anyone with a basic knowledge of how the models work and good computer skills can hone the skills needed to become a prompt engineer.
Q3. Does prompt engineering require coding?
A. Even though you may sometimes want to write a few lines of code, which is still part of the input, it is not a requirement. A significant goal of prompt engineering is to eliminate complex coding and interact with AI via human-readable language.
Q4. What are the major prompt engineering techniques?
A. There are three major approaches to prompt engineering. Practitioners may have their own ways of carrying out this art, but the most commonly used techniques are n-shot prompting, chain-of-thought (CoT) prompting, and generated knowledge prompting.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.