Build AI Apps with Python: Prompt Engineering Patterns — Few-Shot, Chain-of-Thought, Role | Episode 24

Celest Kim

Video: Build AI Apps with Python: Prompt Engineering Patterns — Few-Shot, Chain-of-Thought, Role | Episode 24, by Taught by Celeste AI - AI Coding Coach

Build AI Apps with Python: Prompt Engineering Patterns — Few-Shot, Chain-of-Thought, Role

Improving AI responses often comes down to how you craft your prompts. The example below demonstrates four key prompt engineering patterns: zero-shot vs. few-shot prompting, direct answers vs. chain-of-thought reasoning, a generic assistant vs. role prompting, and unformatted vs. format-controlled output. Applying each pattern to the same question shows how it shapes the quality and clarity of the AI-generated answer.

Code

# Requires: pip install langchain langchain-openai
from langchain_openai import ChatOpenAI  # ChatOpenAI moved out of langchain.chat_models
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain

# Initialize the chat model (expects OPENAI_API_KEY in the environment)
chat = ChatOpenAI(model="gpt-4", temperature=0)

# Example question to improve answers for
question = "Why does the sky appear blue during the day?"

# Zero-shot prompt: just the question
zero_shot_prompt = PromptTemplate(
  input_variables=["question"],
  template="Answer the following question concisely:\n{question}"
)

# Few-shot prompt: provide examples before the question
few_shot_prompt = PromptTemplate(
  input_variables=["question"],
  template=(
    "Q: Why is grass green?\n"
    "A: Grass appears green because it reflects green wavelengths of light due to chlorophyll.\n\n"
    "Q: Why is the ocean blue?\n"
    "A: The ocean looks blue because water absorbs colors at the red end of the spectrum and scatters blue light.\n\n"
    "Q: {question}\n"
    "A:"
  )
)

# Chain-of-thought prompt: encourage step-by-step reasoning
cot_prompt = PromptTemplate(
  input_variables=["question"],
  template=(
    "Answer the question with detailed step-by-step reasoning:\n"
    "{question}"
  )
)

# Role prompt: specify the assistant's role and tone
role_prompt = PromptTemplate(
  input_variables=["question"],
  template=(
    "You are a science teacher explaining concepts to curious students.\n"
    "Use analogies and simple language.\n"
    "Question: {question}\n"
    "Answer:"
  )
)

# Format control prompt: request numbered list with no fluff
format_prompt = PromptTemplate(
  input_variables=["question"],
  template=(
    "Answer the question in a numbered list format with concise points and no extra fluff:\n"
    "{question}"
  )
)

# Create chains for each prompt pattern
zero_shot_chain = LLMChain(llm=chat, prompt=zero_shot_prompt)
few_shot_chain = LLMChain(llm=chat, prompt=few_shot_prompt)
cot_chain = LLMChain(llm=chat, prompt=cot_prompt)
role_chain = LLMChain(llm=chat, prompt=role_prompt)
format_chain = LLMChain(llm=chat, prompt=format_prompt)

# Get answers
answer_zero_shot = zero_shot_chain.run(question)
answer_few_shot = few_shot_chain.run(question)
answer_cot = cot_chain.run(question)
answer_role = role_chain.run(question)
answer_format = format_chain.run(question)

print("Zero-shot answer:\n", answer_zero_shot)
print("\nFew-shot answer:\n", answer_few_shot)
print("\nChain-of-thought answer:\n", answer_cot)
print("\nRole prompt answer:\n", answer_role)
print("\nFormat controlled answer:\n", answer_format)
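When iterating on patterns like these, it helps to inspect the exact text each prompt sends to the model before spending API calls. A minimal sketch using plain Python string templates (mirroring two of the templates above; no LangChain or API key required):

```python
# Render prompt templates locally to see what the model will receive.
# Plain str.format stands in for PromptTemplate; no API call is made.
templates = {
    "zero_shot": "Answer the following question concisely:\n{question}",
    "chain_of_thought": (
        "Answer the question with detailed step-by-step reasoning:\n"
        "{question}"
    ),
}

question = "Why does the sky appear blue during the day?"

for name, template in templates.items():
    rendered = template.format(question=question)
    print(f"--- {name} ---\n{rendered}\n")
```

Because the patterns are ordinary template strings, any of them can be diffed, versioned, or unit-tested exactly like other code.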

Key Points

  • Few-shot prompting improves AI responses by providing examples that guide the model's format and style.
  • Chain-of-thought prompts encourage step-by-step reasoning, leading to more thorough and accurate answers.
  • Role prompting tailors the AI's tone and depth by assigning it a specific persona or expertise.
  • Controlling output format, such as requesting numbered lists, makes responses clearer and easier to follow.
  • Combining these patterns can significantly enhance the quality and usefulness of AI-generated content.
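The patterns compose naturally because each one is just more prompt text. A minimal sketch of a role + few-shot + format-control combination, using a plain Python function (a hypothetical helper, not part of the code above) whose output can be passed to any chat model:

```python
def build_combined_prompt(question: str) -> str:
    """Compose the role, few-shot, and format-control patterns into one prompt."""
    # Role pattern: assign a persona and tone.
    role = (
        "You are a science teacher explaining concepts to curious students.\n"
        "Use analogies and simple language.\n\n"
    )
    # Few-shot pattern: one worked example that also models the output format.
    few_shot = (
        "Q: Why is grass green?\n"
        "A: 1. Chlorophyll in grass absorbs red and blue light.\n"
        "   2. The remaining green wavelengths are reflected to your eyes.\n\n"
    )
    # Format-control pattern: constrain the shape of the answer.
    format_rule = "Answer as a numbered list with concise points and no extra fluff.\n\n"
    return f"{role}{few_shot}{format_rule}Q: {question}\nA:"

combined = build_combined_prompt("Why does the sky appear blue during the day?")
print(combined)
```

The composed string can be dropped into a PromptTemplate or sent to the model directly; the point is that each pattern contributes an independent, stackable block of prompt text.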