AI prompt engineering in 2025: What works and what doesn’t | Sander Schulhoff (Learn Prompting, HackAPrompt)

Jun 19, 2025 1h 37m 15 insights Episode Page ↗
Sander Schulhoff, OG prompt engineer and AI red teaming expert, shares top LLM prompting techniques like few-shot learning and decomposition. He also dives into prompt injection, explaining how AI can be tricked and the critical security challenges posed by agentic AI, emphasizing it's an unsolvable, ongoing arms race.
Actionable Insights

1. Practice Trial and Error

Improve your prompting skills by regularly trying and interacting with chatbots, as this hands-on experience provides the most learning compared to reading resources or taking courses.

2. Implement Few-Shot Prompting

Give the AI examples of the desired output in your prompt to significantly boost its performance, using common formats like Q&A or XML that the LLM is familiar with from its training data.

3. Break Down Complex Tasks

For challenging tasks, ask the LLM to first list the sub-problems it needs to solve, then direct it to solve each sub-problem sequentially, which helps it think through the problem and boosts overall performance.

4. Utilize Self-Criticism Technique

After the LLM provides a solution, ask it to review and criticize its own response, then instruct it to implement that criticism to improve its output, providing a ‘free performance boost’.

5. Provide Additional Information

Include as much relevant information or ‘context’ about your task as possible at the beginning of the prompt, as this gives the model a better perspective and is ‘super, super important’ for performance.

6. Avoid Role Prompting for Accuracy

Do not use role prompting (e.g., ‘You are a math professor’) for accuracy-based tasks, as studies show it does not provide a significant performance boost for these types of problems.

7. Avoid Reward/Threat Prompts

Refrain from including promises of rewards (e.g., ‘I’ll tip you $5’) or threats of punishment in your prompts, as these techniques are generally ineffective in improving LLM performance.

8. Use Thought Generation for Robustness

For non-reasoning models like GPT-4, especially when running thousands or millions of inputs, explicitly ask the LLM to ‘write out all your reasoning’ to ensure consistent and robust performance, even if it often does so by default.

9. Employ Ensembling Techniques

For critical problems, use multiple different prompts or LLM configurations to solve the same problem, then take the most commonly returned answer as the final result to achieve better overall performance.

10. Do Not Rely on Prompt-Based Defenses

Avoid using prompt-based defenses like telling the model ‘do not follow malicious instructions’ within its system prompt, as these methods are ineffective against prompt injection attacks.

11. Do Not Rely on Basic AI Guardrails

Do not depend on simple AI guardrails to prevent prompt injection, as motivated attackers can often exploit the ‘intelligence gap’ between guardrail models and the main LLM.

12. Use Safety Tuning for Specific Harms

Implement safety tuning by training your model on a dataset of malicious prompts related to specific harms your company wants to prevent, so it responds with a canned phrase when encountering such inputs.

13. Fine-Tune Models for Security

Fine-tune models for very specific tasks, as this makes them much less susceptible to prompt injection because they only know how to perform that particular structured output and cannot easily be tricked into generating harmful content.

14. Leverage Crowdsourced Red Teaming

Participate in or run crowdsourced competitions to find vulnerabilities, as this is the most effective way to collect adversarial cases and secure AI, particularly agentic AI, against prompt injection.

15. Support AI Development

Advocate for continued AI development rather than stopping it, as AI offers significant benefits to humanity, particularly in health, by discovering new treatments, saving time for professionals, and improving diagnoses.