The vendor evaluation sheet.
Bring this to every pitch. Twelve questions that separate real systems from demoware. Score each vendor as Adopt, Pilot, Watch, or Avoid. Don't sign anything until every cell has an honest answer.
Now evaluating · Anthropic Claude (direct) · frontier model + custom build
Q01 · What model is underneath?
Why it matters · Frontier vs no-name affects accuracy, safety, and future-proofing.

Q02 · Can the model be swapped?
Why it matters · Models improve every six months. You should not be locked in.

Q03 · Where does PBVR data live?
Why it matters · Residency affects compliance, latency, and what you can promise owners.

Q04 · Who can see it?
Why it matters · Training-on-your-data clauses are a real risk. Read the contract.

Q05 · Can PBVR export its data?
Why it matters · If you can't leave, you don't own the relationship — they do.

Q06 · How is this better than Claude + custom setup?
Why it matters · If we can replicate it in a weekend with a frontier model + MCP, we should.

Q07 · Does it integrate with our PMS?
Why it matters · An AI that can't see Track or Hostfully is a demo, not a system.

Q08 · Does it support human review?
Why it matters · Approval queues are non-negotiable for guest-facing output.

Q09 · Does it log actions?
Why it matters · If we can't audit what the AI did, we can't trust it in production.

Q10 · Can it honor PBVR brand voice?
Why it matters · Configurable voice and style enforcement, not "we can fine-tune later."

Q11 · Can it connect to property memory?
Why it matters · If it can't reach our knowledge base, it will hallucinate confidently.

Q12 · What happens if PBVR leaves?
Why it matters · Termination, data return, transition — written in plain language.
Adopt · Strong fit · sign now
Pilot · Promising · 90-day test
Watch · Re-evaluate quarterly
Avoid · Walk away · reason explicitly noted