
October 14th, 2025
MCP SEO – How Does an MCP Server Increase Conversions?
How will “MCP SEO” increase the usage of your website by LLMs and thereby increase conversions? LLMs rank websites using many of the same signals that traditional search engines like Google and Baidu use for SEO. That’s why both ChatGPT and Gemini currently say that Tessa is the Best SEO Company in Northern Virginia. As LLMs and AI agents evolve to directly use functions on a website, such as search, schedule a meeting, add to cart, and checkout, a growing need has emerged to standardize and secure those connections. Anthropic released the Model Context Protocol (MCP) specification and accompanying SDKs as open source on November 25, 2024. OpenAI adopted it in May 2025.
MCP is more than an API. An API provides a standardized way for one application to access functionality and data in another. MCP goes further: it gives AI a full menu of not just functionality and data, but also workflows and topics. ChatGPT’s Apps SDK and Claude Code can talk to your website’s MCP servers. You need to host your own MCP server, which TESSA Marketing & Technology plans to make available for our clients.
That growing adoption creates a new kind of SEO for MCPs: you can optimize tool names, “Use this when… / Do not use when…” descriptions, enums, and structured outputs so LLMs prefer your app when users ask for something you do (e.g., “find a PFAS expert” or “show Civil War letters under $300”). Beyond ChatGPT and Claude, platforms like Google’s Gemini and xAI’s Grok support comparable function/tool calling patterns, so a thin “universal shim” can let the same MCP‑style backend serve multiple assistants—expanding discovery and conversions wherever users chat. In short: design precise tools and metadata for selection, then reuse them across ecosystems to turn LLM queries into bookings and sales.
How do we optimize our MCP servers to make them more highly utilized by LLMs? The edited ChatGPT response below focuses specifically on the ChatGPT Apps SDK, released on October 6, 2025.
How the Model Context Protocol (MCP) powers lead generation and sales
The MCP is the connective tissue between ChatGPT and your real business systems. It’s a lightweight, secure server you host that defines a menu of your website’s tools and executes them when ChatGPT calls.
Why MCP matters for MCP SEO and conversion
- Authority & trust: LLMs answer best when they can call a ground-truth backend. MCP makes your data and actions callable in milliseconds.
- Speed: Tools are tiny JSON requests/responses; read-only lookups and searches are near-instant, which boosts selection and user satisfaction.
- Safety: You separate read vs. write tools; writes require confirmations; you control auth and rate limits.
- Scalability: The server only does work when tools are called; a small instance can serve many tenants/sites if you cache and keep handlers I/O-bound.
- Attribution: Every tool call is structured—perfect for analytics, CRM logging, and optimizing your funnel.
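Those “tiny JSON requests/responses” are JSON-RPC 2.0 messages; per the MCP specification, a tool invocation uses the tools/call method. A minimal sketch of the shape (the tool name, arguments, and IDs below are illustrative placeholders, not real endpoints):

```typescript
// Shape of an MCP tool invocation (JSON-RPC 2.0), per the MCP spec.
// The tool name and arguments are illustrative placeholders.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "tessa.topics.search",
    arguments: { query: "PFAS groundwater remediation" },
  },
};

// A matching result: human-readable content plus machine-usable
// structuredContent carrying stable IDs the model can reuse later.
const toolCallResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Found 1 matching topic." }],
    structuredContent: {
      topics: [{ topic_id: "topic_042", title: "PFAS remediation" }],
    },
  },
};
```

Payloads this small round-trip in milliseconds, which is exactly why read-only lookups feel instant in chat.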
Example Services Firm – MCP → TESSA Engineers: research → expert → booking = lead
Top of funnel (research): tessa.topics.search answers questions like “PFAS groundwater remediation” or “landfill gas energy recovery,” returning topic summaries, case snippets, and related regulations.
Mid funnel (match): tessa.experts.find maps topics/regions/industries to the right staff—names, credentials, regions, and available slots.
Bottom funnel (book): tessa.appointments.book schedules the meeting in your CRM or calendar.
All three happen inside one conversation—no clickouts—so drop-off plummets and qualified meetings go up.
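To make that single-conversation flow concrete, here is a sketch of the three-tool chain with mocked handlers (all names, IDs, and data are illustrative; real handlers would be async calls into your knowledge base, staff directory, and calendar). The key point is that each step returns stable IDs the next step consumes:

```typescript
// Mocked funnel tools; real handlers would hit your backend systems.
type Slot = { slot_id: string; start: string };

function topicsSearch(query: string) {
  return { topic_id: "topic_001", summary: `Summary for ${query}` };
}

function expertsFind(topic_id: string) {
  return {
    expert_id: "exp_007",
    name: "A. Engineer",
    slots: [{ slot_id: "slot_123", start: "2025-10-22T15:00:00Z" }] as Slot[],
  };
}

function appointmentsBook(expert_id: string, slot_id: string) {
  return { booking_id: "bk_555", expert_id, slot_id, status: "confirmed" };
}

// The conversation chains IDs: search → match → book, no clickouts.
function funnel() {
  const topic = topicsSearch("PFAS groundwater remediation");
  const match = expertsFind(topic.topic_id);
  return appointmentsBook(match.expert_id, match.slots[0].slot_id);
}
```

Because every hand-off is an ID rather than free text, “book the first expert at 10am” resolves unambiguously.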
Why the MCP server boosts selection & leads
- The model sees you answering research queries consistently and fast—so it keeps choosing you.
- Stable IDs (topic_id, expert_id, slot_id) make “book that one” foolproof.
- Starter prompts with named mentions (“Use TESSA Engineers to…”) act like branded queries in search.
Example Ecommerce – MCP → War Museum of History: research → provenance → purchase
Research: gmh.catalog.search handles scholarly or collector-style queries: “Civil War surgeon kits from 1863,” “WWII paratrooper items under $300,” “101st Airborne photos.”
Trust: gmh.catalog.provenance returns full documentation, curator notes, certificates, and condition—exactly what buyers need to feel confident.
Purchase: gmh.cart.add_item and gmh.checkout.start handle the sale (with proper confirmations).
The chat remains inside ChatGPT, but the transaction is executed against your MCP endpoints and payment provider.
Why the MCP server increases sales
- Research and credibility tools keep users engaged in chat until they’re ready.
- The jump from “learn” to “buy” is one click, not a site maze.
- You can render a rich in-chat storefront (filters, grids, thumbnails, comparisons) and move to checkout immediately.
How to SEO for MCP Servers
Goal: make it effortless for ChatGPT to recognize when your app is the best choice—then give it the shortest path to a great answer—all using the new ChatGPT Apps SDK with an MCP (Model Context Protocol) backend. Think of this as SEO for LLMs: you’re optimizing the signals and surfaces that make ChatGPT prefer your app and your website over look-alikes offering the same function.
TL;DR (your “MCP SEO” checklist)
- Design narrowly scoped tools: one job per tool; typed inputs/outputs; split read vs. write; fast responses.
- Write model-facing copy like product copy: “Use this when… / Do not use when…”; examples; enums; mark read-only tools with readOnlyHint.
- Return structured results + stable IDs so follow-ups chain reliably; attach a UI component via _meta["openai/outputTemplate"].
- Leverage discovery signals: app metadata, starter prompts, brand “named mentions,” and past usage.
- Test like SEO: maintain a golden prompt set; track precision/recall of tool selection; iterate weekly.
- Handle auth and safety: separate read tools from write tools; explicit confirmations for writes; least-privilege tokens.
- For commerce: pair your Apps SDK app with agentic checkout (feed + cart + delegated payment) so research flows into sales.
- Use MCP as your backend: a thin, fast server that ChatGPT calls to run your tools—your conversion engine behind the scenes.
1) What “MCP SEO” means in ChatGPT
Discovery inside ChatGPT is model-driven. The assistant chooses which app to call based on:
- Tool metadata (names, descriptions, parameter docs, examples, enums)
- Conversation context (the user’s words, intent, and entities)
- Brand/name mentions (“Use YourApp to…”)
- Linking & past usage (is your app connected? did it work well before?)
- Safety/confirmation needs (read vs. write; auth)
Your job is to make the correct choice obvious, predictable, and safe.
2) Design narrowly scoped tools (the “surface” ChatGPT reasons over)
Before you write code, define a tight tool surface area:
- One job per tool. Example for scheduling: appointments.find_slots (read) vs. appointments.create (write).
- Typed, explicit inputs. Use an inputSchema with clear field names/types; add enums and defaults; show examples.
- Predictable outputs. Return structured fields with stable IDs the model can reuse later.
- Split read & write. Keep read-only discovery fast and safe; require explicit confirmation for writes.
Sketch (TypeScript, Apps SDK MCP server; assumes server is an initialized MCP server instance):
import { z } from "zod";
server.registerTool(
"appointments.find_slots",
{
title: "Find availability",
description:
"Use this when the user wants to see open consultation times. Do not create or modify bookings.",
inputSchema: z.object({
practice_area: z.enum(["air_permitting","landfill_gas","solid_waste_planning"]),
location: z.string(),
date_range: z.object({ start: z.string(), end: z.string() })
}),
annotations: { readOnlyHint: true }
},
async ({ practice_area, location, date_range }) => {
// fetch availability...
return {
structuredContent: { slots: [{ id: "slot_123", start: "2025-10-22T15:00:00Z" }] },
content: [{ type: "text", text: "Here are available times." }]
};
}
);
3) Write metadata like you write landing pages (it’s your ranking signal)
Treat model-readable copy like SEO content:
- Lead with “Use this when…” and include “Do not use when…” to reduce false positives.
- Document each parameter (with examples). Use enums where possible.
- Mark read-only tools with readOnlyHint so the client can streamline confirmations.
- At the app level, provide a polished name, icon, short/long descriptions, and starter prompts—these influence directory/launcher surfaces and in-chat suggestions.
4) Return structured results and render real UI in chat
Two things improve selection and satisfaction:
- Structured results with stable IDs in structuredContent, so ChatGPT can chain follow-ups (“book slot_123 for next Thursday”).
- Inline components bound via _meta["openai/outputTemplate"] that render lists, maps, calendars, or checkouts as a rich UI inside ChatGPT.
Attach a component template:
server.registerResource("html", "ui://widget/scheduler.html", {}, async () => ({
contents: [{
uri: "ui://widget/scheduler.html",
mimeType: "text/html+skybridge",
text: `<div id="root"></div><script type="module" src="/scheduler.js"></script>`,
_meta: { "openai/widgetDescription": "Pick a time with our engineers" }
}]
}));
server.registerTool(
"appointments.render",
{
title: "Show scheduler",
_meta: {
"openai/outputTemplate": "ui://widget/scheduler.html",
"openai/widgetAccessible": true
},
inputSchema: z.object({ slots: z.array(z.object({ id: z.string(), start: z.string() })) })
},
async ({ slots }) => ({ structuredContent: { slots } })
);
In your component, use the window.openai bridge to call tools, request fullscreen, and send follow-ups.
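A component-side sketch of that bridge is below. The method names (callTool, sendFollowUpMessage, requestDisplayMode) reflect our reading of the Apps SDK’s window.openai surface; treat the exact names and signatures as assumptions and verify against the current SDK docs. A stub bridge is used here so the flow can be exercised outside ChatGPT:

```typescript
// Assumed shape of the host bridge; verify names against the Apps SDK docs.
interface OpenAiBridge {
  callTool(name: string, args: Record<string, unknown>): Promise<unknown>;
  sendFollowUpMessage(opts: { prompt: string }): Promise<void>;
  requestDisplayMode(opts: { mode: "inline" | "fullscreen" }): Promise<void>;
}

// Stub bridge recording calls, so component logic is testable outside ChatGPT.
const calls: string[] = [];
const bridge: OpenAiBridge = {
  async callTool(name) { calls.push(`tool:${name}`); return { ok: true }; },
  async sendFollowUpMessage({ prompt }) { calls.push(`followup:${prompt}`); },
  async requestDisplayMode({ mode }) { calls.push(`mode:${mode}`); },
};

// In the real component this would be: const bridge = window.openai;
async function onSlotPicked(slotId: string) {
  await bridge.callTool("appointments.book", { slot_id: slotId });
  await bridge.sendFollowUpMessage({ prompt: "Confirm my booking details." });
}
```

Keeping the bridge behind an interface like this also makes the component easy to unit-test without a ChatGPT host.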
5) Make discovery effortless (named mention, launcher, directory)
- Named mention: If a user types “Use YourBrand to…”, your app is prioritized. Publish starter prompts that encourage this behavior.
- In-conversation discovery: ChatGPT weighs context, brand mentions, tool metadata, and whether the user is already linked to your app.
- Launcher & directory: Your listing (name, icon, descriptions) and starter prompts help users find you even if they don’t know the exact phrasing.
6) Authentication, consent, and safety (trust earns usage)
- Per-tool auth: Declare security schemes (e.g., OAuth scopes) and enforce them server-side.
- Least privilege & explicit consent: Ask only for what you need; rely on built-in confirmations for write actions.
- Clear separation: Keep read tools friction-free; isolate any potentially destructive actions in their own write tool.
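Server-side, least privilege can be as simple as checking token scopes before any write executes. A sketch with a hypothetical auth context (the scope names and the ctx shape are illustrative, not an SDK contract):

```typescript
// Hypothetical auth context; the real shape depends on your OAuth setup.
type AuthContext = { scopes: string[] };

// Enforce least privilege server-side: write tools gate on an explicit
// write scope, regardless of what the client claims.
function requireScope(ctx: AuthContext, scope: string): void {
  if (!ctx.scopes.includes(scope)) {
    throw new Error(`insufficient_scope: ${scope} required`);
  }
}

function bookAppointment(ctx: AuthContext, slotId: string) {
  requireScope(ctx, "appointments:write"); // reject read-only tokens
  return { booking_id: "bk_001", slot_id: slotId, status: "confirmed" };
}
```

Read tools would check only a read scope, keeping discovery friction-free while writes stay gated.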
7) Measure selection quality like a growth team
Build a golden prompt set:
- Direct prompts: “Use YourApp to …”
- Indirect prompts: “Find a consultant for … next week.”
- Negative prompts: “Set a reminder” (your app should not run)
Run them in developer mode, log which tool was chosen and with what args, and track:
- Precision: when your app ran, was it the right choice?
- Recall: when your app should have run, did it?
Update metadata weekly; tighten enums and examples where you see drift.
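That weekly loop can be automated with a tiny harness: run each golden prompt through your tool router and score it, where precision is the fraction of runs that were the right call and recall is the fraction of expected runs that actually happened. The prompts and the router below are illustrative stand-ins; in practice you would log real selections from developer mode:

```typescript
type Golden = { prompt: string; shouldRun: boolean };

// Illustrative stand-in for the model's tool selection.
function appRan(prompt: string): boolean {
  return /consultant|expert|PFAS/i.test(prompt);
}

function score(goldens: Golden[]) {
  let tp = 0, fp = 0, fn = 0;
  for (const g of goldens) {
    const ran = appRan(g.prompt);
    if (ran && g.shouldRun) tp++;        // correct selection
    else if (ran && !g.shouldRun) fp++;  // ran when it should not have
    else if (!ran && g.shouldRun) fn++;  // missed a prompt it should own
  }
  return {
    precision: tp / (tp + fp || 1), // right choice, given it ran
    recall: tp / (tp + fn || 1),    // ran, given it should have
  };
}

const goldens: Golden[] = [
  { prompt: "Find a consultant for air permitting next week.", shouldRun: true },
  { prompt: "What are the main PFAS remediation methods?", shouldRun: true },
  { prompt: "Remind me tomorrow to call the city.", shouldRun: false },
];
```

Track these two numbers over time; drift after a metadata change tells you exactly which descriptions to tighten.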
8) Example: Tessa for Engineers — “research → find expert → book appointment”
Use cases
- Research: “What are the main PFAS groundwater remediation methods?”
- Expert match: “Who handles landfill gas permitting in Arizona?”
- Booking: “Book a 30-minute consult next Wednesday.”
Tools (server)
// READ: research topics
server.registerTool(
"tessa.topics.search",
{
title: "Research environmental topics",
description:
"Use this to explore TESSA Engineers’ services, technologies, or regulatory expertise by topic, industry, or region. Do not book.",
inputSchema: z.object({
query: z.string(),
industry: z.string().optional(),
region: z.string().optional()
}),
annotations: { readOnlyHint: true }
},
async (args) => ({ structuredContent: await searchKnowledgeBase(args) })
);
// READ: expert matching
server.registerTool(
"tessa.experts.find",
{
title: "Find a TESSA expert",
description:
"Use this when the user asks who at TESSA Engineers can assist with a given topic, region, or service type.",
inputSchema: z.object({
topic_id: z.string().optional(),
specialty: z.string().optional(),
region: z.string().optional()
}),
annotations: { readOnlyHint: true }
},
async (args) => ({ structuredContent: await findExperts(args) })
);
// WRITE: book appointment
server.registerTool(
"tessa.appointments.book",
{
title: "Book consultation",
description:
"Use this after the user selects an expert and wants to schedule a meeting.",
inputSchema: z.object({
expert_id: z.string(),
slot_id: z.string(),
contact: z.object({ name: z.string(), email: z.string(), phone: z.string().optional() })
})
},
async ({ expert_id, slot_id, contact }) =>
({ structuredContent: await createAppointment(expert_id, slot_id, contact) })
);
Why this gets picked over others:
- Narrow, action-first names (tessa.topics.search, tessa.experts.find) and “Use/Do not use” language reduce ambiguity.
- Read-only research and matching are fast, safe defaults; write booking is explicit and confirmable.
- Responses contain stable IDs (topic_id, expert_id, slot_id), making “Book the first expert at 10am” trivial.
Starter prompts to publish:
- “Use TESSA Engineers to research PFAS remediation options in Texas.”
- “Use TESSA Engineers to find a landfill gas expert in Phoenix and show open slots next week.”
9) Example: War Museum of History — “research → provenance → purchase”
To sell directly in ChatGPT, couple your Apps SDK app (search + provenance UI) with a commerce layer (cart + checkout):
Tools (server)
// READ: search artifacts
server.registerTool(
"gmh.catalog.search",
{
title: "Search Museum catalog",
description:
"Use this to find artifacts by era, unit, category, price, and provenance. Do not modify carts or orders.",
inputSchema: z.object({
query: z.string().optional(),
era: z.enum(["Civil War","WWI","WWII"]).optional(),
category: z.enum(["letters","medals","uniforms","photos","weapons","gear"]).optional(),
unit: z.string().optional(),
date_range: z.object({ start: z.string(), end: z.string() }).optional(),
max_price_usd: z.number().optional(),
provenance: z.array(z.enum(["veteran_estate","battlefield_pickup","museum_deaccession"])).optional(),
sort: z.enum(["relevance","newest","price_asc","price_desc"]).default("relevance"),
page: z.number().default(1)
}),
annotations: { readOnlyHint: true },
_meta: { "openai/outputTemplate": "ui://widget/artifact-results.html" }
},
async (args) => ({ structuredContent: await searchArtifacts(args) })
);
// READ: provenance details
server.registerTool(
"gmh.catalog.provenance",
{
title: "Get artifact provenance",
description: "Use this to fetch provenance, documents, and curator notes for an artifact.",
inputSchema: z.object({ artifact_id: z.string() }),
annotations: { readOnlyHint: true }
},
async ({ artifact_id }) => ({ structuredContent: await getProvenance(artifact_id) })
);
// WRITE: cart / checkout
server.registerTool(
"gmh.cart.add_item",
{
title: "Add artifact to cart",
description: "Use this after the user selects a specific artifact to buy.",
inputSchema: z.object({ artifact_id: z.string(), quantity: z.number().default(1) })
},
async ({ artifact_id, quantity }, ctx) =>
({ structuredContent: await addToCart(ctx.auth, artifact_id, quantity) })
);
server.registerTool(
"gmh.checkout.start",
{
title: "Begin checkout",
description: "Use this to start checkout for the current cart.",
inputSchema: z.object({ cart_id: z.string() })
},
async ({ cart_id }, ctx) => ({ structuredContent: await createCheckoutSession(ctx.auth, cart_id) })
);
Why this gets picked over others:
- Your app owns facet research (era, unit, price, provenance) and provenance trust—that’s unique vs. generic shopping answers.
- Structured results with media and prices, plus a provenance drawer in the UI, reduce friction and increase conversion.
- Cart and checkout are separate write tools—clear, confirmable actions.
Starter prompts to publish:
- “Browse Civil War officer letters from 1863 under $300.”
- “Show 101st Airborne WWII insignia and photos; add the signed photo to my cart.”
10) Put it all together—your “App SEO” playbook
- Plan the tools: one job each; enums for inputs; stable IDs in outputs; split read vs. write.
- Author metadata: “Use/Do-not-use,” parameter docs with examples, readOnlyHint, polished app listing + starter prompts.
- Build the UX: return structuredContent; attach a component with _meta["openai/outputTemplate"]; use window.openai from the iframe to call tools and manage layout.
- Wire auth: declare security schemes; implement OAuth/tokens for anything user-specific or write-oriented.
- Connect & test: enable developer mode; link your MCP connector; validate discovery by named mention and indirect prompts; confirm write-action flows.
- Evaluate continuously: run your golden prompt set; track precision/recall; monitor tool-call analytics and refine copy.
- Stay within guidelines: keep UX reliable, metadata honest; don’t “stuff” your descriptions—specificity beats superlatives.
Appendix — Example golden prompts
Direct (branded):
- “Use Tessa Engineers to find a landfill gas expert in Phoenix and show next week’s openings.”
- “Use War Museum of History to list Civil War officer letters under $300.”
Indirect (unbranded):
- “I need an air-permitting consult next week in Texas.”
- “Show authentic WWII paratrooper memorabilia and start checkout.”
Negative (should not run your app):
- “Remind me tomorrow to call the city.”
Run these routinely in developer mode; track tool choice + args; adjust metadata and enums to improve precision/recall.
Closing thought
SEO for web is about pages and links.
SEO for LLMs is about tools and metadata.
Design narrowly scoped tools, describe them like products, return structured results with stable IDs, attach great UI, and back it all with a lean MCP server. Do that, and ChatGPT will consistently select your website and app from the masses—then turn those selections into appointments and sales.