Vendor Evaluation
Systematically evaluate and compare vendors using structured research and scoring
Evaluating vendors — whether for software, services, or partnerships — involves gathering scattered information and making apples-to-apples comparisons. This recipe structures the process so agents handle the research while you focus on the decision.
The Pipeline
DEFINE → Set evaluation criteria and weight them
RESEARCH → Gather data on each vendor
SCORE → Rate vendors against criteria
RECOMMEND → Final comparison with recommendation
Step 1: Define Criteria
Start by creating a database to structure the evaluation:
@Assistant create a database called "Vendor Evaluation - CRM"
with columns: vendor (text), pricing_score (number),
features_score (number), ease_of_use_score (number),
support_score (number), integration_score (number),
total_score (number), pricing_notes (text),
features_notes (text), recommendation (text)
Then define what you're looking for:
@Assistant create a document titled "CRM Vendor Criteria" with:
Evaluation criteria for a CRM serving a 50-person sales team:
1. Pricing (weight: 25%) - Total cost for 50 seats, contract terms
2. Features (weight: 30%) - Pipeline management, reporting, automation
3. Ease of Use (weight: 20%) - Onboarding time, UI complexity
4. Support (weight: 15%) - Response time, channels, documentation
5. Integration (weight: 10%) - API quality, existing integrations
Vendors to evaluate: HubSpot, Salesforce, Pipedrive, Close, Freshsales
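If you want to sanity-check the criteria before handing them to the agent, the weights can be expressed as plain data. This is an illustrative sketch, not part of any agent's API; the category names mirror the criteria document above.

```python
# Evaluation criteria weights from the criteria document (illustrative).
CRITERIA_WEIGHTS = {
    "pricing": 0.25,
    "features": 0.30,
    "ease_of_use": 0.20,
    "support": 0.15,
    "integration": 0.10,
}

# The weights must cover the whole decision: they should sum to 100%.
assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
```

Catching a weighting mistake here is cheaper than discovering it after five vendors have already been scored.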
Step 2: Research Each Vendor
Create research tasks for each vendor:
@Researcher for each vendor in the CRM Vendor Criteria document,
create a subtask and research:
- Current pricing for 50 seats (monthly and annual)
- Key features and any missing features from our requirements
- G2/Capterra ratings and common complaints
- Integration options with our existing stack
- Support options and SLA
- Any recent news (acquisitions, outages, pivots)
Write findings into a document titled "[Vendor Name] - CRM Evaluation"
The agent will create five research subtasks, one per vendor, and work through them systematically, producing an evaluation document for each vendor.
Step 3: Score and Compare
Once research is complete, populate the scoring database:
@Analyst review each vendor evaluation document and score
each vendor in the Vendor Evaluation database:
- Score each category 1-10 based on the research findings
- Calculate total_score as weighted average using the criteria weights
- Add brief notes explaining each score
- Set recommendation to "strong yes", "yes", "maybe", or "no"
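The weighted-average math the agent is asked to apply is simple enough to verify by hand. Here is a minimal sketch, assuming the weights from Step 1 and 1-10 category scores; the vendor scores below are made up for illustration, not real evaluation results.

```python
# Criteria weights from Step 1 (must sum to 100%).
WEIGHTS = {
    "pricing": 0.25,
    "features": 0.30,
    "ease_of_use": 0.20,
    "support": 0.15,
    "integration": 0.10,
}

def total_score(scores: dict) -> float:
    """Weighted average of 1-10 category scores; the result stays on the 1-10 scale."""
    return round(sum(scores[category] * weight for category, weight in WEIGHTS.items()), 2)

# Hypothetical scores for a single vendor, used only to show the calculation.
example = {"pricing": 7, "features": 9, "ease_of_use": 8, "support": 6, "integration": 8}
print(total_score(example))  # 7.75
```

Spot-checking one or two of the agent's computed totals against a calculation like this is a quick way to confirm the weights were applied correctly.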
Step 4: Final Recommendation
Generate the decision document:
@Writer create a document titled "CRM Vendor Recommendation" using
the scoring database and individual vendor documents:
1. Executive Summary (recommended vendor and why, 2 sentences)
2. Scoring Comparison Table (all vendors side by side)
3. Top Pick: Detailed Analysis
- Why this vendor wins
- Key risks and mitigations
- Implementation timeline estimate
4. Runner-Up: Why it's second
5. Vendors Not Recommended: Brief explanation
6. Next Steps (trial, POC, contract negotiation)
Making the Decision
Together, the recommendation document and the per-vendor research give your team everything needed for a decision meeting:
@Assistant post in #leadership:
"CRM vendor evaluation complete. Recommendation document ready
for review. Key finding: [top vendor] scores highest on features
and integration, with competitive pricing at $X/seat/month.
Please review before Thursday's decision meeting."
Variations
Software Tool Evaluation
Same pattern but add criteria for:
- Security and compliance (SOC2, GDPR)
- Data migration complexity
- API documentation quality
Agency/Consultant Selection
Adapt criteria to:
- Portfolio quality and relevance
- Team expertise and availability
- Pricing model (fixed vs. hourly)
- References and case studies
Cloud Infrastructure Comparison
Focus criteria on:
- Performance benchmarks
- Geographic availability
- Cost modeling at different scales
- Managed service offerings