Part 2: Translate Technical Claims
The Problem
Your developer says: “The API rate limits are causing the webhook to fail intermittently.”
You nod. You have no idea what that means. And now you can't tell if this is a 2-hour fix or a 2-week problem.
The 12 Terms You'll Hear Most
API (Application Programming Interface)
What they say:
“We'll build an API integration between your CRM and marketing platform.”
What it actually means:
Two software tools talking to each other automatically. One sends data, the other receives it.
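You don't need to write code, but seeing what an "API integration" boils down to takes the mystery out of it. Here's a minimal Python sketch of one tool pushing a new lead into a CRM; the endpoint, key, and field names are made up for illustration:

```python
# Minimal sketch of an API call: our script (one tool) sending a new lead
# to a CRM (the other tool). URL, key, and fields are hypothetical.
import requests

CRM_URL = "https://api.example-crm.com/v1/contacts"  # hypothetical endpoint
API_KEY = "your-api-key-here"

lead = {"email": "jane@example.com", "source": "webinar-signup"}

response = requests.post(
    CRM_URL,
    json=lead,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()  # fail loudly if the CRM rejects the request
print("Contact created:", response.json())
```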
Why you care:
- Real cost: Usually $5K-$25K to build, depending on complexity
- Real timeline: 2-6 weeks for custom integrations
- Red flag: If a vendor says "native integration" but actually means "API integration," you're paying extra
Questions to ask:
- →“Is this a pre-built integration or custom API work?”
- →“What happens if the API goes down?”
- →“Are there any rate limits?”
Rate Limits
What they say:
“We hit the API rate limit, so the sync failed.”
What it actually means:
The software has a speed limit. Like Netflix limiting you to 4 streams at once. If you try to send 10,000 emails in 1 second, the system says "slow down."
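Under the hood, "hitting the rate limit" usually means the server answers with HTTP 429 ("Too Many Requests") and the developer's code has to wait and retry. A minimal Python sketch, against a hypothetical endpoint:

```python
# What "respecting a rate limit" looks like in code: retry with
# exponential backoff whenever the server says "slow down" (HTTP 429).
import time
import requests

def post_with_backoff(url, payload, max_retries=5):
    for attempt in range(max_retries):
        response = requests.post(url, json=payload, timeout=10)
        if response.status_code != 429:
            return response
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, 8s, 16s
    raise RuntimeError("Still rate-limited after retries")
```

If a vendor's sync "just fails" instead of backing off like this, that's worth asking about.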
Why you care:
- This can break your campaigns mid-flight
- You're often paying for higher rate limits without knowing it
- It affects real-time personalization promises
Questions to ask:
- →“What are the rate limits for our plan tier?”
- →“What happens when we hit them?”
- →“Can we pay to increase them if needed?”
Webhook
What they say:
“We'll set up a webhook to trigger the workflow.”
What it actually means:
An automated notification. When X happens in Tool A, it instantly tells Tool B to do Y. Example: When someone fills out a form (Tool A), it instantly adds them to your CRM (Tool B).
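For the curious, here's roughly what the receiving end of a webhook looks like: a tiny Python (Flask) service waiting for Tool A to POST a JSON payload. The URL and payload fields are illustrative:

```python
# Minimal sketch of a webhook receiver. When someone fills out a form,
# Tool A POSTs the event here, and this code reacts instantly.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/form-submitted", methods=["POST"])
def form_submitted():
    event = request.get_json()
    print("New form submission:", event.get("email"))
    # ...add the contact to the CRM here...
    return "", 204  # acknowledge receipt so the sender doesn't retry

if __name__ == "__main__":
    app.run(port=5000)
```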
Why you care:
- Webhooks = real-time automation (good)
- But they can fail silently (you won't know unless you check)
- They're often the reason your "automated" workflow stopped working
Questions to ask:
- →“What happens if the webhook fails?”
- →“Do you send error notifications?”
- →“How do we test this before going live?”
Machine Learning vs "AI"
What they say:
“Our AI predicts customer behavior.”
What it actually means:
“AI” is a broad label that can mean anything from basic if/then automation to systems that genuinely learn. Machine Learning (ML) is the subset where the system learns patterns from data without being explicitly programmed.
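The difference fits in a few lines of Python. The first function is a rule a human wrote; the second learns the pattern from labeled examples using scikit-learn (the data here is toy data):

```python
from sklearn.linear_model import LogisticRegression

# NOT ML: a rule a human wrote.
def is_engaged(emails_opened):
    return emails_opened >= 3

# IS ML: the system finds the pattern itself from labeled examples.
X = [[0], [1], [2], [3], [5], [8]]  # emails opened per customer
y = [0, 0, 0, 1, 1, 1]              # 1 = stayed engaged, 0 = churned
model = LogisticRegression().fit(X, y)
print(model.predict([[4]]))         # learned prediction, no rule written
```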
Why you care:
- NOT ML: "If customer opens 3 emails, tag them as engaged" — That's a rule you wrote
- IS ML: "System analyzes 10,000 customers' behavior and predicts who will churn" — System learned the pattern
- The catch: vendors often slap "AI" on basic if/then rules, so you end up paying AI prices for plain automation features
Questions to ask:
- →“Is this machine learning or rule-based automation?”
- →“What data are you training the model on?”
- →“How often does the model retrain?”
Token (in AI context)
What they say:
“Our plan includes 100,000 tokens per month.”
What it actually means:
Tokens = chunks of text the AI processes. Roughly 750 words = 1,000 tokens. Both input (your prompt) and output (AI's response) count against your limit.
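You can check token counts yourself with OpenAI's open-source tiktoken library. Exact counts vary by model, and the price below is illustrative, not a quote:

```python
# Count tokens in a prompt with tiktoken, then do back-of-envelope math.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
prompt = "Write a 500-word welcome email for new trial users."

tokens = enc.encode(prompt)
print(f"{len(tokens)} tokens in the prompt")

# At a hypothetical $0.01 per 1,000 tokens:
print(f"~${len(tokens) / 1000 * 0.01:.4f} for the input alone")
```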
Why you care:
- You can blow through token limits fast
- Overages cost real money (usually $0.01-$0.10 per 1K tokens)
- Longer prompts = more tokens = higher costs
Questions to ask:
- →“What happens when we exceed token limits?”
- →“Do image generations count as tokens?”
- →“Can we see real-time usage tracking?”
Latency
What they say:
“The model has 200ms latency on average.”
What it actually means:
Latency = delay. How long it takes for the system to respond. 200ms = 0.2 seconds (feels instant). 2000ms = 2 seconds (feels slow).
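Latency is easy to verify yourself: note the time before and after a request and subtract. A minimal Python sketch against a hypothetical endpoint:

```python
# Measure round-trip latency: time before, time after, subtract.
import time
import requests

start = time.perf_counter()
requests.get("https://api.example-vendor.com/v1/health", timeout=10)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{elapsed_ms:.0f} ms")  # under ~300 ms feels instant; seconds feel slow
```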
Why you care:
- High latency breaks user experience
- If your "AI chatbot" takes 5 seconds to respond, customers leave
- Affects real-time personalization promises
Questions to ask:
- →“What's the average response time under load?”
- →“What happens during traffic spikes?”
- →“Is there an SLA for response times?”
Training Data
What they say:
“Our model is trained on industry-specific data.”
What it actually means:
The examples the AI learned from. Like how you learned to write by reading thousands of examples. The quality and relevance of training data determines how well the AI performs for your use case.
Why you care:
- If trained on generic data, it won't understand your industry
- Outdated training data = outdated outputs
- "Industry-specific" could mean anything — ask for specifics
Questions to ask:
- →“What data was the model trained on?”
- →“When was it last updated?”
- →“Can we provide our own training data?”
Model Fine-Tuning
What they say:
“We can fine-tune the model for your brand voice.”
What it actually means:
Taking a pre-built AI model and training it further on your specific data. Like hiring someone with general experience, then training them on your company's way of doing things.
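Concretely, "providing your data" usually means a file of example conversations. Here's a minimal Python sketch that writes one in the JSONL chat format OpenAI's fine-tuning API expects; the example content is invented:

```python
# Build a fine-tuning dataset: one JSON object per line, each a full
# example conversation showing the model what "good" looks like.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You write in Acme Corp's brand voice."},
            {"role": "user", "content": "Announce our new reporting feature."},
            {"role": "assistant", "content": "Big news: reporting just got smarter..."},
        ]
    },
    # ...vendors typically want 1,000+ examples like this...
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```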
Why you care:
- Real fine-tuning costs $5K-$50K+ depending on complexity
- Requires significant data (usually 1,000+ examples)
- Many vendors claim "fine-tuning" but just mean prompt customization
Questions to ask:
- →“How much data do I need to provide?”
- →“What's the timeline and cost?”
- →“Can I see before/after examples from other clients?”
Prompt Engineering
What they say:
“Our prompt engineering ensures consistent results.”
What it actually means:
The art of writing instructions for AI to get the output you want. Like learning how to ask Google the right question to get the right answer.
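Here's the same request twice, once vague and once engineered, to show what vendors actually mean. Both prompts are illustrative:

```python
vague_prompt = "Write an email about our sale."

# Audience, constraints, and format spelled out: far less room to guess.
engineered_prompt = """You are an email copywriter for a B2B SaaS brand.
Write a promotional email about our 20% annual-plan sale.

Constraints:
- Subject line under 50 characters
- Body under 120 words, no exclamation marks
- One clear call to action linking to /pricing
Audience: existing monthly subscribers."""
```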
Why you care:
- Bad prompts = inconsistent or wrong outputs
- You may need to invest time learning this
- Some vendors hide behind "prompt engineering" when their product doesn't work well
Questions to ask:
- →“Can I customize the prompts?”
- →“What happens when the output is wrong?”
- →“Do you provide prompt templates?”
Vector Database
What they say:
“We use a vector database for semantic search.”
What it actually means:
A special database that stores information by meaning, not just keywords. Lets AI find content that's conceptually similar, even if it doesn't use the exact words you searched for.
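A vector database in miniature fits in a dozen lines of Python: store vectors, return the closest one by meaning. The 3-dimensional vectors below are toy values; real systems use hundreds of dimensions at scale:

```python
# Tiny vector "database": find the stored document whose vector points
# in the most similar direction to the query vector (cosine similarity).
import numpy as np

store = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.2]),
    "brand guidelines": np.array([0.0, 0.2, 0.9]),
}

def search(query_vec):
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(store, key=lambda doc: cosine(store[doc], query_vec))

# "money back" would embed near "refund policy", even with no shared words:
print(search(np.array([0.85, 0.15, 0.05])))  # -> refund policy
```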
Why you care:
- Enables "smart" search and recommendations
- Required for RAG (Retrieval Augmented Generation) — making AI reference your company's docs
- Can be expensive to scale
Questions to ask:
- →“How much data can we store?”
- →“What's the cost at scale?”
- →“How do we update or delete information?”
LLM (Large Language Model)
What they say:
“We use a proprietary LLM for content generation.”
What it actually means:
The AI brain that powers ChatGPT, Claude, and similar tools. Trained on massive amounts of text to understand and generate language. Most "proprietary LLMs" are actually OpenAI or Anthropic models with custom prompts.
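This is what a "proprietary LLM" often reduces to in code: a call to an existing model with the vendor's own system prompt wrapped around your input. A sketch using OpenAI's official Python client (the system prompt is invented, and model names change over time):

```python
# A "proprietary AI product", minus the branding: wrap an existing model
# with a custom system prompt and pass the user's request through.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are VendorBot, an expert marketing copywriter."},
        {"role": "user", "content": "Draft a product launch tweet."},
    ],
)
print(response.choices[0].message.content)
```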
Why you care:
- "Proprietary LLM" usually isn't — building one costs $10M-$100M+
- Most vendors wrap existing LLMs (GPT-4, Claude) with their own interface
- What matters is how they use it, not whether they built it
Questions to ask:
- →“Which base model are you using?”
- →“What makes your implementation unique?”
- →“What happens when the underlying model is updated?”
Embeddings
What they say:
“We create embeddings of your content for better retrieval.”
What it actually means:
Converting text into numbers that capture meaning. Like giving every piece of content a GPS coordinate in "meaning space" so similar ideas are close together.
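A minimal Python sketch of the idea: phrases with similar meaning get vectors pointing in similar directions, which cosine similarity measures. The vectors are toy stand-ins for real embedding output:

```python
# Embeddings as coordinates in "meaning space": similar phrases score
# high on cosine similarity, unrelated phrases score low.
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

refund    = np.array([0.9, 0.1, 0.1])  # "refund my order"
moneyback = np.array([0.8, 0.2, 0.1])  # "I want my money back"
shipping  = np.array([0.1, 0.9, 0.3])  # "when does my package arrive"

print(cosine_similarity(refund, moneyback))  # high: same meaning
print(cosine_similarity(refund, shipping))   # low: different meaning
```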
Why you care:
- Powers semantic search and content recommendations
- Quality of embeddings affects AI accuracy
- Different embedding models have different costs and capabilities
Questions to ask:
- →“Which embedding model do you use?”
- →“How often are embeddings updated?”
- →“What's the storage cost?”
Quick Reference Cheat Sheet
| Term | Plain English | Red Flag |
|---|---|---|
| API | Two tools talking automatically | "Native" but requires custom work |
| Rate Limits | Speed limits on data | No mention of limits until you hit them |
| Webhook | Instant notification trigger | No error monitoring |
| ML vs AI | Learning from data vs any automation | Calling rules "AI" |
| Token | Chunk of text AI processes | Unclear overage costs |
| Latency | Response delay time | No SLA for response times |
| Training Data | Examples the AI learned from | "Industry-specific" with no specifics |
| Fine-Tuning | Extra training on your own data | "Fine-tuning" that's really prompt customization |
| Prompt Engineering | Writing better instructions for AI | Blaming your prompts for a weak product |
| Vector Database | Storage by meaning, not keywords | No straight answer on cost at scale |
| LLM | The language model under the hood | "Proprietary" model that's really a wrapper |
| Embeddings | Text as coordinates in meaning space | No detail on model or update costs |
What You Just Learned
- You can now decode the 12 most common technical terms in vendor meetings
- You know which questions expose unclear pricing or capabilities
- You won't get trapped by jargon in your next martech review
Next: Learn the AI capability ladder—what AI can actually do today vs vendor promises.
Want daily intelligence like this? Join thousands of marketing leaders getting AI Ready CMO.