The Art of Crafting Powerful Prompts: A Guide to Prompt Engineering

Babina Banjara 21 Aug, 2023 • 10 min read

Introduction

Prompt engineering is a relatively new field focused on creating and refining prompts to use large language models (LLMs) effectively across various applications and research areas. Prompt engineering skills help us understand the capabilities and limitations of LLMs. Researchers use prompt engineering to improve LLMs’ performance on tasks such as question answering and mathematical reasoning, while developers use it to design reliable, effective prompting techniques that work with LLMs and other tools.

This article was published as a part of the Data Science Blogathon.

What are LLMs?

A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and a large set of data to understand, generate, summarize, and predict new content.


Language models generate text autoregressively: given an initial prompt or context, the model predicts a probability distribution over the next word in the sequence, emits the most likely word, appends it to the context, and repeats the process to produce a continuation.
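The autoregressive loop described above can be sketched with a toy next-word table (the probabilities here are made up for illustration and are not a real language model):

```python
# Hypothetical bigram probabilities: P(next word | previous word).
next_word_probs = {
    "the":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "sat":  {"down": 0.9, "up": 0.1},
    "down": {"<end>": 1.0},
}

def generate(prompt_word, max_words=10):
    words = [prompt_word]
    while len(words) < max_words:
        probs = next_word_probs.get(words[-1])
        if probs is None:
            break
        # Greedy decoding: pick the single most probable next word,
        # append it, and feed the extended context back in.
        best = max(probs, key=probs.get)
        if best == "<end>":
            break
        words.append(best)
    return " ".join(words)

print(generate("the"))  # the cat sat down
```

Real LLMs do the same thing over tokens with a learned neural network instead of a lookup table, and may sample from the distribution rather than always taking the most likely word.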

What is Prompt Engineering?

Prompt engineering refers to crafting effective prompts or instructions to guide the behavior of a language model or AI system. It involves formulating queries or commands to elicit the desired response or output from the AI. Prompt engineering is crucial for fine-tuning models like GPT-3 to generate accurate and relevant results.

By carefully designing prompts, developers can control the AI’s output style, tone, and content. It requires understanding the model’s capabilities and limitations, experimenting with different phrasings, and iterating to achieve the desired outcome. Prompt engineering is essential to harness the potential of AI systems while avoiding biases, inaccuracies, or unintended outputs.

Why Prompt Engineering?

LLMs are good at generating appropriate responses for various tasks: the model predicts the probability distribution of the next word in the sequence, emits the most likely word, and repeats this process iteratively until the task is fulfilled. Even so, several challenges stand in the way of generating relevant responses:

  • Lack of common-sense knowledge
  • Limited contextual understanding
  • Difficulty maintaining a consistent logical flow
  • Incomplete grasp of the underlying meaning of the text

To address these challenges, prompt engineering plays a crucial role. Developers can guide the language model’s output by carefully designing prompts and providing additional context, constraints, or instructions to steer the generation process. Prompt engineering helps mitigate language model limitations and improve the generated responses’ coherence, relevance, and quality.


Why is Prompt Engineering Important to AI?

Prompt engineering requires understanding the AI model’s capabilities and the desired outcomes. It’s a crucial step in harnessing the power of AI technology effectively and responsibly.

  • Controlled Output: AI models like GPT-3 generate responses based on the provided prompts. Effective prompt engineering allows developers to control and shape the AI’s output, ensuring it aligns with the intended purpose and tone.
  • Precision: Crafting well-defined prompts helps obtain accurate and relevant results from AI systems. Without proper prompts, AI might produce vague or incorrect responses.
  • Mitigating Bias: Prompt engineering can help mitigate biases in AI outputs. By providing clear and unbiased prompts, developers can reduce the likelihood of generating biased or sensitive content.
  • Adaptation: Different AI models have different strengths and weaknesses. Prompt engineering allows developers to tailor prompts to specific models, maximizing their performance and adaptability.
  • Contextual Understanding: Crafting prompts that provide context enables AI to generate more coherent and contextually appropriate responses, improving the overall quality of interactions.
  • Intended Use Cases: Proper prompts ensure that AI systems are used for their intended purposes. For instance, precise prompts are crucial in medical or legal applications to ensure accurate and safe outputs.
  • Efficiency: Well-designed prompts streamline the interaction with AI systems, minimizing the need for multiple iterations or corrections, which saves time and resources.
  • Ethical and Responsible Use: By thoughtfully engineering prompts, developers can contribute to AI’s ethical and responsible use, avoiding harmful or misleading content.

Examples of Prompt Engineering

Here are some simple examples of prompt engineering:

  1. Task: Translate a sentence from English to French.
    • Unclear Prompt: “Translate this.”
    • Effective Prompt: “Please translate the following English sentence into French: ‘How are you today?'”
  2. Task: Summarize a news article.
    • Unclear Prompt: “Summarize this article.”
    • Effective Prompt: “Provide a concise summary of the main points in this news article about climate change.”
  3. Task: Generate a creative story starting with a given sentence.
    • Unclear Prompt: “Continue this story.”
    • Effective Prompt: “Build a story around this opening sentence: ‘The old house at the end of the street had always been…'”
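The effective prompts above share a pattern: state the task explicitly, then clearly delimit the input. This can be captured in a small helper (a hypothetical convenience function, not part of any library):

```python
# Build a prompt from an explicit task description plus a delimited input,
# mirroring the "effective prompt" examples above.
def make_prompt(task, text):
    return f"{task}\n\nText: ```{text}```"

prompt = make_prompt(
    "Please translate the following English sentence into French.",
    "How are you today?",
)
print(prompt)
```

Delimiting the input (here with triple backticks) helps the model distinguish the instruction from the text it should operate on, a convention the later examples in this article also use.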

Designing Prompts for Various Tasks

The first task is to load your OpenAI API key in the environment variable.

import openai
import os
import IPython
from langchain.llms import OpenAI
from dotenv import load_dotenv

load_dotenv()
# API configuration
openai.api_key = os.getenv("OPENAI_API_KEY")

The ‘get_completion’ function wraps the prompt in a chat message and generates a completion from a language model using the specified model. We will be using gpt-3.5-turbo.

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0, # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]

Summarization

The process performed here is automatic text summarization, one of the common natural language processing tasks. In the prompt, we simply ask the model to summarize a sample paragraph; no training examples are provided. After calling the API, we get a summarized version of the input paragraph.

text = """
Pandas is a popular open-source library in Python 
that provides high-performance data manipulation and analysis tools. 
Built on top of NumPy, Pandas introduces powerful data structures, 
namely Series (one-dimensional labeled arrays) and DataFrame 
(two-dimensional labeled data tables), 
which offer intuitive and efficient ways to work with structured data. 
With Pandas, data can be easily loaded, cleaned, transformed, and analyzed 
using a rich set of functions and methods. 
It provides functionalities for indexing, slicing, aggregating, joining, 
and filtering data, making it an indispensable tool for data scientists, analysts,
and researchers working with tabular data in various domains.
"""
prompt = f"""
Your task is to generate a short summary of the text

Summarize the text below, delimited by triple 
backticks, in at most 30 words. 

Text: ```{text}```
"""

response = get_completion(prompt)
print(response)

Output


Question Answering

By providing a context along with a question, we expect the model to extract the answer from the given context. The task here is unstructured question answering.

prompt = """ You need to answer the question based on the context below. 
Keep the answer short and concise. Respond "Unsure about answer" 
if not sure about the answer.

Context: Teplizumab traces its roots to a New Jersey drug company called 
Ortho Pharmaceutical. There, scientists generated an early version of the antibody,
dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the 
surface of T cells and limit their cell-killing potential. 
In 1986, it was approved to help prevent organ rejection after kidney transplants,
making it the first therapeutic antibody allowed for human use.

Question: What was OKT3 originally sourced from?

Answer:"""

response = get_completion(prompt)
print(response)

Output


Text Classification

The task is to perform text classification. Given a text, the task is to predict the sentiment of the text, whether it is positive, negative, or neutral.

prompt = """Classify the text into neutral, negative or positive.

Text: I think the food was bad.

Sentiment:"""

response = get_completion(prompt)
print(response)

Output


Techniques for Effective Prompt Engineering

Effective prompt engineering involves employing various techniques to optimize the output of language models.

Some techniques include:

  • Providing explicit instructions
  • Specifying the desired output format
  • Using system messages to set the context
  • Using temperature control to adjust response randomness
  • Iteratively refining prompts based on evaluation and user feedback
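Temperature control can be illustrated with a toy softmax over made-up logits (real APIs apply this scaling internally before sampling): dividing the logits by the temperature makes sampling nearly deterministic at low temperatures and nearly uniform at high ones.

```python
import math

# Temperature-scaled softmax over hypothetical next-token logits.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.1)   # sharply peaked: near-deterministic
hot = softmax_with_temperature(logits, 10.0)   # nearly flat: highly random
print(cold[0] > 0.99, abs(hot[0] - 1/3) < 0.05)
```

This is why the examples in this article pass temperature=0: for tasks like classification or question answering, we want the most likely answer every time rather than varied output.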

Zero-shot Prompt

In zero-shot prompting, no examples are provided; the LLM is expected to understand the prompt and respond accordingly.

prompt = """I went to the market and bought 10 apples.
I gave 2 apples to the neighbor and 2 to the repairman. 
I then went and bought 5 more apples and ate 1. 
How many apples did I remain with?

Let's think step by step."""

response = get_completion(prompt)
print(response)

Few Shot Prompts

When zero-shot prompting fails, practitioners turn to few-shot prompting, providing examples for the model to learn from and imitate. This approach enables in-context learning by incorporating examples directly within the prompt.

prompt = """The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.
A: The answer is False.

The odd numbers in this group add up to an even number: 17,  10, 19, 4, 8, 12, 24.
A: The answer is True.

The odd numbers in this group add up to an even number: 16,  11, 14, 4, 8, 13, 24.
A: The answer is True.

The odd numbers in this group add up to an even number: 17,  9, 10, 12, 13, 4, 2.
A: The answer is False.

The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1. 
A:"""

response = get_completion(prompt)
print(response)

Chain-of-Thought (CoT) Prompting

Chain-of-thought prompting improves results by showing the model how to reason through a task before answering. Tasks that require reasoning benefit most from this technique, and it can be combined with few-shot prompting for even better results.

prompt = """The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.
A: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False.

The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7. 
A:"""

response = get_completion(prompt)
print(response)


What Can You Do With GPT?

The main purpose of GPT-3 is natural language generation, but it supports many other tasks as well, including summarization, question answering, text classification, translation, and conversational agents like the order bot built below.

Create an Order Bot

Now that you have a basic idea of various prompting techniques let’s use the prompt engineering technique to create an order bot using OpenAI’s API.

Defining the Functions

This function uses the OpenAI API to generate a completion from a list of messages. The temperature parameter, which controls the randomness of the output, defaults to 0.

def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature, # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]

We will use the Panel library in Python to create a simple GUI. The collect_messages function in a Panel-based GUI collects user input, generates an assistant’s response using a language model, and updates the display with the conversation.

def collect_messages(_):
    prompt = inp.value_input
    inp.value = ''
    context.append({'role':'user', 'content':f"{prompt}"})
    response = get_completion_from_messages(context) 
    context.append({'role':'assistant', 'content':f"{response}"})
    panels.append(
        pn.Row('User:', pn.pane.Markdown(prompt, width=600)))
    panels.append(
        pn.Row('Assistant:', pn.pane.Markdown(response, width=600, 
        style={'background-color': '#F6F6F6'})))
 
    return pn.Column(*panels)

Providing Prompt as Context

The prompt is provided in the context variable, a list containing a dictionary. The dictionary contains information about the role and content of the system related to an automated service called OrderBot for a pizza restaurant. The content describes how OrderBot interacts with customers, collects orders, asks about pickup or delivery, summarizes orders, checks for additional items, etc.

import panel as pn  # GUI
pn.extension()

panels = [] # collect display 

context = [ {'role':'system', 'content':"""
  You are OrderBot, an automated service to collect orders for a pizza restaurant.
  You first greet the customer, then collect the order, 
  and then ask if it's a pickup or delivery. 
  You wait to collect the entire order, then summarize it and check for a final 
  time if the customer wants to add anything else. 
  If it's a delivery, you ask for an address. 
  Finally, you collect the payment.
  Make sure to clarify all options, extras, and sizes to uniquely 
  identify the item from the menu.
  You respond in a short, very conversational friendly style. 
  The menu includes 
  pepperoni pizza  12.95, 10.00, 7.00 
  cheese pizza   10.95, 9.25, 6.50 
  eggplant pizza   11.95, 9.75, 6.75 
  fries 4.50, 3.50 
  greek salad 7.25 
  Toppings: 
  extra cheese 2.00, 
  mushrooms 1.50 
  sausage 3.00 
  Canadian bacon 3.50 
  AI sauce 1.50 
  peppers 1.00 
  Drinks: 
  coke 3.00, 2.00, 1.00 
  sprite 3.00, 2.00, 1.00 
  bottled water 5.00 
"""} ]  

Displaying the Basic Dashboard For the Bot

The code sets up a Panel-based dashboard with a text input widget and a button for initiating a conversation. When the button is clicked, the ‘collect_messages’ function is triggered to process the user input and update the conversation panel.

inp = pn.widgets.TextInput(value="Hi", placeholder='Enter text here…')
button_conversation = pn.widgets.Button(name="Chat!")

interactive_conversation = pn.bind(collect_messages, button_conversation)

dashboard = pn.Column(
    inp,
    pn.Row(button_conversation),
    pn.panel(interactive_conversation, loading_indicator=True, height=300),
)
dashboard

Output


Based on the given prompt, the bot behaves as an order bot for a pizza restaurant. This shows how powerful a well-designed prompt is and how easily you can build applications with one.

Conclusion

In conclusion, designing powerful prompts is a crucial aspect of prompt engineering for language models. Well-crafted prompts provide a starting point and context for generating text, influencing the output of language models. They play a significant role in guiding AI-generated content by setting expectations, providing instructions, and shaping the generated text’s style, tone, and purpose.

  • Effective prompts result in more focused, relevant, and desirable outputs, improving language models’ overall performance and user experience.
  • To create impactful prompts, it is essential to consider the desired outcome, provide clear instructions, incorporate relevant context, and iterate and refine the prompts based on feedback and evaluation.

Thus, mastering the art of prompt engineering empowers content creators to harness the full potential of language models and leverage AI technology, such as OpenAI’s API, to achieve their specific goals.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Frequently Asked Questions

Q1. What does a prompt engineer do?

A. A prompt engineer designs and develops prompt systems, which are algorithms used in natural language processing to generate human-like text responses based on given inputs.

Q2. Is prompt engineering the future?

A. Prompt engineering is a rapidly evolving field with great potential for the future of AI technology and language processing.

Q3. Can anyone learn prompt engineering?

A. Yes, anyone with a strong interest in AI and language processing can learn prompt engineering through online courses, tutorials, and hands-on practice.

Q4. Does prompt engineering require coding?

A. Prompt engineering typically involves coding skills, as engineers need to write and modify algorithms, work with programming languages, and understand the technical aspects of NLP frameworks.

