Prompt Engineering Basics
Practical techniques for writing better requests to AI agents
Prompt engineering is the practice of writing clear, effective requests to AI agents. You don't need to be a technical expert — these are practical communication skills that help you get better results from any AI interaction.
The Fundamentals
Be Explicit About What You Want
AI agents are literal. They do what you ask, not what you meant. Compare:
- Ambiguous: @Assistant look into our competitors
- Explicit: @Assistant create a document comparing our top 3 competitors (Acme, Globex, Initech) on pricing, features, and target market. Use a table format.
The explicit version tells the agent exactly what to research, how many competitors, which dimensions to compare, and what format to use.
Specify the Output Format
Tell the agent how you want results delivered:
- "Summarize in 3 bullet points"
- "Create a table with columns for name, price, and rating"
- "Write a one-paragraph executive summary followed by detailed findings"
- "Create a task for each action item"
Provide Relevant Context
Even though agents have access to your workspace, explicitly referencing resources helps:
@Analyst using the data in the "Customer Feedback" database,
identify the top 5 feature requests from enterprise customers
in the last 30 days. Cross-reference with our current roadmap
document to flag which ones we're already planning.
Common Patterns
The Research Request
@Researcher find information about [topic].
Focus on [specific angle].
Sources should be [recency/quality criteria].
Compile findings into a document titled "[title]".
The Task Breakdown
@Assistant break down this project into tasks:
[project description]
Create a task group called "[name]" and add each task with:
- Clear title and description
- Estimated complexity (tag as simple/medium/complex)
- Suggested assignee based on our team's roles
The Analysis Request
@Analyst query the [database name] for [data criteria].
Calculate [specific metrics].
Compare against [benchmark or previous period].
Highlight anything unusual or noteworthy.
The Content Creation Request
@Writer draft a [content type] about [topic].
Audience: [who will read this]
Tone: [formal/casual/technical]
Length: [word count or section count]
Include: [specific elements to cover]
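If you send many similar requests, the bracketed templates above can be filled in programmatically. A minimal sketch in Python, assuming nothing beyond string formatting; the field names simply mirror the placeholders in the content-creation pattern:

```python
# Sketch: fill the content-creation template with concrete values.
# All field names and values here are illustrative.
CONTENT_TEMPLATE = (
    "@Writer draft a {content_type} about {topic}.\n"
    "Audience: {audience}\n"
    "Tone: {tone}\n"
    "Length: {length}\n"
    "Include: {include}"
)

def build_content_prompt(content_type, topic, audience, tone, length, include):
    """Return a fully specified prompt string from the template."""
    return CONTENT_TEMPLATE.format(
        content_type=content_type,
        topic=topic,
        audience=audience,
        tone=tone,
        length=length,
        include=", ".join(include),
    )

prompt = build_content_prompt(
    content_type="blog post",
    topic="our Q3 feature launch",
    audience="existing customers",
    tone="casual",
    length="about 600 words",
    include=["key features", "pricing changes", "migration steps"],
)
print(prompt)
```

Templating this way keeps every request explicit about audience, tone, length, and required elements, so no dimension gets forgotten.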
Iteration and Refinement
Your first prompt rarely produces the perfect result. Iteration is normal and expected:
In-Thread Refinement
Reply in the thread to refine the agent's output:
- "Make the tone more casual"
- "Add a section about pricing"
- "The third point is wrong — our product launched in 2024, not 2023"
- "Good, now save this as a document"
Building on Results
Chain requests that build on previous outputs:
1. @Researcher find the top 10 SaaS companies in HR tech
2. @Researcher for each company in that list, find their latest funding round and employee count
3. @Writer create an executive briefing from the research above
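If you drive agents through an API rather than a chat thread, the same chaining idea applies: include each step's output in the next request. A hypothetical sketch, where `ask()` is a stand-in for whatever call your platform actually provides:

```python
def ask(agent, request):
    """Hypothetical stand-in for sending a request to an agent and
    returning its text response. Replace with your platform's real API."""
    return f"[{agent} response to: {request}]"

# Step 1: initial research.
companies = ask("@Researcher", "find the top 10 SaaS companies in HR tech")

# Step 2: build on the previous output by including it verbatim.
details = ask(
    "@Researcher",
    "for each company in this list, find their latest funding round "
    f"and employee count:\n{companies}",
)

# Step 3: hand the accumulated research to a different agent.
briefing = ask("@Writer", f"create an executive briefing from:\n{details}")
```

Passing the earlier output forward explicitly is what makes the chain work: each request is self-contained, so the agent never has to guess what "that list" or "the research above" refers to.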
What to Avoid
Don't Assume Prior Knowledge
Each new conversation starts with fresh context. If you discussed something yesterday, reference it explicitly or ask the agent to recall it:
@Assistant recall what we discussed about the API migration plan
Don't Overload a Single Request
Break complex workflows into steps rather than putting everything in one message. This gives you checkpoints to verify quality and correct course.
Don't Be Vague About Constraints
If there are limits, state them:
- "Use only public sources — no paid databases"
- "Keep the document under 500 words"
- "Only include companies founded after 2020"
Key Takeaways
- Clarity beats brevity — A longer, specific prompt gets better results than a short, vague one
- Specify format — Tell agents how you want results structured
- Iterate naturally — Refine in-thread just like you would with a colleague
- Provide context — Reference specific resources, timeframes, and constraints
- Break down complexity — Multi-step workflows work better as sequential requests