Customer Experience News is a weekly newsletter about the most important news and discussions for Customer Experience and Customer Support Leaders.
This is all the weekly news you need in around 5 minutes.
Our main story today is about leveraging MCP for Customer Support. We’re not publishing a video this week because we’re on the road. We will be back next week with a video!
Today’s headlines are from: Attio, Plain, Brainfish, Gorgias, Guru, Linear, ServiceNow, Salesforce, Notion, and Richpanel.
MCP Is Everywhere This Week. Here’s What Support Leaders Need to Know Before Turning It On.
Four of the platforms covered in this week’s roundup shipped MCP-related features:
- Guru launched an MCP Server for its Knowledge Agents
- Linear expanded its MCP server for product management
- Plain opened its infrastructure to third-party AI agents via API
- ServiceNow’s Anthropic partnership positions Claude as the default model inside its Build Agent
That’s not a coincidence. MCP is becoming the connective layer between enterprise tools and AI systems, and support teams are directly in the path of adoption.
But before you start connecting everything, it’s worth understanding what MCP actually does, what it doesn’t do, and where support leaders should focus first.
What MCP Actually Is (Without the Jargon)
MCP, or Model Context Protocol, is an open standard created by Anthropic that lets AI tools talk to other software. Think of it as a universal adapter. Before MCP, if you wanted an AI assistant to pull information from your helpdesk, your knowledge base, and your CRM, each connection required its own custom integration. MCP standardizes that connection so any AI tool that supports the protocol can plug into any system that exposes an MCP server.
For support teams, this means an AI agent or copilot could theoretically search your knowledge base, look up a customer’s account history, check a ticket’s status, and reference your internal process docs, all through a single protocol rather than a patchwork of API integrations.
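To make the "universal adapter" idea concrete: most MCP-capable clients connect to a server through a short configuration entry rather than custom integration code. The sketch below is illustrative only — the server package name and token variable are hypothetical, and exact keys vary by client — but it shows the general shape of registering a local MCP server in a client such as Claude Desktop:

```json
{
  "mcpServers": {
    "knowledge-base": {
      "command": "npx",
      "args": ["-y", "@acme/helpdesk-mcp-server"],
      "env": { "HELPDESK_API_TOKEN": "<your-token>" }
    }
  }
}
```

Once registered, the client can call whatever tools that server exposes (searches, record lookups, article drafts) without a bespoke integration for each underlying system.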
That sounds like a breakthrough. And technically, it is. The problem is what happens when teams turn on every connection at once without thinking about what their AI should actually know.
The “Spray-and-Pray” Problem in Support
Rick Nucci, co-founder and CEO of Guru, described this pattern on LinkedIn this week: companies connecting MCP servers for Notion, Google Drive, Slack, Zendesk, and every other tool they use, then expecting useful results when an AI searches across all of them simultaneously.
As Nucci put it, this federated search approach surfaces “outdated pricing from 2 years ago, process docs from people who left the company, information nobody’s verified in months.” There’s no sense of what’s current, what’s approved, or what’s accurate.
For customer support, this problem is especially dangerous. A support agent using an AI copilot that pulls from six unfiltered sources might get a pricing answer from a deprecated Google Doc, a troubleshooting step from an archived Slack thread, and a policy reference from a Notion page that hasn’t been updated since the last product overhaul. The AI doesn’t know the difference. It just returns whatever it finds.
Now put that in front of a customer. An AI agent responding to a live chat with outdated return policy information or referencing a feature that was sunset six months ago doesn’t just create a bad experience. It creates ticket escalations, erodes trust, and generates the exact kind of rework that AI was supposed to eliminate.
Start With One Verified Knowledge Layer
The guardrail here isn’t turning MCP off. It’s choosing where to turn it on first.
Support teams should identify a single, verified knowledge source as their primary MCP connection. This is the system that contains your approved, current, customer-facing knowledge: product documentation, troubleshooting guides, pricing and policy information, and the workflows your team actually follows today.
For many teams, that’s their knowledge base platform, whether that’s Guru, an internal wiki that’s actively maintained, or the knowledge module inside their helpdesk. The key criteria: it needs to be a source where content is reviewed, updated on a regular cycle, and treated as the single source of truth for how your company communicates with customers.
Nucci’s framing is useful here: “One verified layer that sits between your raw data and your AI tools. When you fix something once in that layer, it’s fixed everywhere.”
What This Means for Product Knowledge vs. Customer-Facing Knowledge
Support teams operate with two distinct types of knowledge: what customers need to know (help articles, product docs, FAQs, policies) and what agents need to know (internal procedures, escalation paths, workarounds, product context that isn’t public-facing).
MCP doesn’t distinguish between these. If you connect your AI to a system that contains both public knowledge articles and internal notes about a product bug your engineering team hasn’t patched yet, the AI will treat both as equally valid responses.
Before connecting any MCP server, support leaders should audit the knowledge source for this separation. Can you scope what the AI accesses? Does the platform support permission levels that carry through the MCP connection? Guru’s MCP Server, for example, explicitly preserves existing permission structures. Not every implementation will.
This is also where the conversation about which platform becomes your primary MCP matters most. Your customer-facing AI agent and your internal agent copilot may need different primary sources. The AI responding to customers should pull from verified, public-facing documentation. The copilot assisting your agents might also need access to internal process docs and escalation criteria. These are different knowledge scopes with different risk profiles, and they should be configured accordingly.
A Practical Starting Point For Customer Support
For support leaders evaluating MCP adoption, the sequence matters more than the speed:
First, identify which knowledge source in your stack is the most current, verified, and actively maintained. That becomes your primary MCP connection.
Second, audit that source for content freshness, accuracy, and proper separation between customer-facing and internal knowledge. If the knowledge base hasn’t been reviewed in months, connecting it to an AI via MCP just amplifies the existing problem faster.
Third, add secondary connections deliberately and one at a time. If you add your CRM as a second MCP source, define what data the AI can access and test the outputs before it touches a customer interaction. If you add Slack or internal docs, scope it to specific channels or folders rather than the entire workspace.
Fourth, treat MCP connections like you would any customer-facing channel: monitor what the AI surfaces, review it regularly, and build a feedback loop so agents can flag when the AI returns outdated or incorrect information.
The platforms shipping MCP features this week are building the infrastructure. The operational question for support leaders isn’t whether to adopt MCP. It’s whether your knowledge is ready for it, and how you control what your AI learns to say on your behalf.
Attio Launches Ask Attio for CRM Queries
Attio, a CRM platform, launched Ask Attio, a conversational AI interface that lets users query their entire CRM through natural language. The feature indexes records, emails, calls, notes, product data, and connected tools into what Attio calls a “Universal Context” layer.
Operational Impact: This CRM feature is relevant for CX leaders who use Attio to manage customer relationships. Customer Success teams can generate account transition briefs from prior conversations, sales context, and product usage. Sales teams report cutting meeting prep to 30 seconds. The feature handles pipeline updates, task creation, and follow-up drafts after calls.
Implementation Considerations: Ask Attio requires data from multiple connected sources (email, calls, product data) to deliver its full value. If your CRM data is sparse or inconsistent, the AI will reflect that. The feature is available on Plus, Pro, and Enterprise plans with Free plan access coming later. Teams should evaluate data completeness before expecting the “full picture” results Attio describes in its marketing.
Plain Launches Bring Your Own AI Agent
Plain, a B2B customer support platform, released “Bring Your Own Agent” on February 9. The feature lets teams connect any third-party AI agent to Plain’s infrastructure via API. Supported agents include Parahelp, Sierra, Decagon, Intercom Fin, and Decimal, as well as custom-built solutions.
Operational Impact: This decouples AI agent selection from helpdesk selection. A support team can run their preferred AI agent while keeping Plain’s channels, queues, workflows, and escalation paths intact. When a human agent steps in, Plain automatically transitions the thread status so it surfaces in the correct queue. All agent activity appears under a unified Activity view with basic stats and status breakdowns. Human-centric queues like “Needs First Response” automatically filter out AI-handled threads.
Implementation Considerations: Your agent subscribes to webhooks, responds to threads, and manages status through the API. This requires engineering effort to integrate. Teams evaluating this should confirm their AI agent vendor supports webhook-based integrations and can handle Plain’s thread status model. The automatic filtering of AI-handled threads from human queues is a useful default, but teams should verify the handoff logic matches their escalation criteria.
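To make the integration pattern above concrete, here is a minimal, hypothetical sketch of the queue-routing decision a team might implement when verifying the handoff logic. The event field names and queue names are illustrative assumptions, not Plain’s actual API schema:

```python
# Hypothetical sketch: deciding which queue a webhook thread event surfaces in.
# Field names ("assignee_type", "status", "escalated") and queue names are
# illustrative assumptions, not Plain's real schema.

def route_thread(event: dict) -> str:
    """Return the queue a thread event should surface in."""
    if event.get("escalated"):
        # A human has stepped in, so the thread belongs in a human queue
        return "human_escalations"
    if event.get("assignee_type") == "ai_agent":
        # AI-handled threads are filtered out of human-centric queues
        return "ai_activity"
    if event.get("status") == "needs_first_response":
        return "needs_first_response"
    return "general"
```

The real integration would subscribe to Plain’s webhooks and update thread status through its API; the point of the sketch is that your AI agent vendor must emit events your routing and escalation logic can act on, and that is what teams should test before going live.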
Brainfish Ships Self-Serve Agents and Integrations
Brainfish, an AI-powered customer support and knowledge management platform, released several February updates including Self-Serve Agents that deliver contextual answers within the product, expanded system integrations for pulling data from more sources, and a Follow-Up Widget that checks whether users successfully completed their task after receiving help.
Operational Impact: The Self-Serve Agents aim to resolve queries without creating tickets. The Follow-Up Widget is the more interesting piece for support ops: it tracks whether an AI answer actually solved the problem. That feedback loop could help teams measure true deflection rates rather than assuming a non-escalated interaction was successful. The expanded integrations let Brainfish pull from more product data sources, which improves answer accuracy.
Implementation Considerations: Brainfish’s value depends on the quality and freshness of your existing documentation. Their platform can ingest from videos, docs, and product data, but if source content is outdated or poorly organized, the AI answers will reflect that. Teams should review their content health before expecting high self-service resolution rates.
Gorgias Launches Chat 2.0 With In-Chat Product Pages
Gorgias, an e-commerce customer support platform, released three updates this week. Chat 2.0 redesigns the chat widget into a shopping surface with in-chat product pages, embedded AI FAQs on product pages, and an expanded desktop view. Separately, Shopping Assistant now syncs Shopify automatic discounts to surface relevant promotions during conversations.
Gorgias also added version history for AI Agent knowledge, letting teams track changes to guidance and help center articles.
Operational Impact: Chat 2.0 is a conversion play, not a support play. The in-chat product pages and automatic discount surfacing aim to keep shoppers inside the chat through discovery and purchase. For support teams, the more operationally significant update is version history for AI Agent knowledge. Teams can now see who changed what, when, and compare performance metrics across different content versions. That makes it possible to debug unexpected AI responses by identifying recent knowledge edits.
Implementation Considerations: Chat 2.0 rolls out automatically to all AI Agent customers. If your team has customized the current chat widget extensively, review the new settings immediately. The embedded AI FAQs on product pages require adding a script or Shopify app block. For version history, note that restoring a previous version creates a new draft rather than overwriting the current published version, so there is a deliberate review step before changes go live.
Guru Launches MCP Server for Knowledge Agents
Guru, a knowledge management platform, launched an MCP Server that connects its Knowledge Agents to external AI tools including Claude, ChatGPT, Cursor, and other MCP-compatible clients. The server lets external tools ask questions, perform searches, and draft knowledge articles using Guru’s governed, permission-aware content.
Operational Impact: For support teams using Guru as their internal knowledge base, this means agents can query Guru content from whatever AI tool they already use without context-switching. The MCP connection preserves Guru’s role-based permissions and verification workflows, so answers remain cited and auditable. All interactions log to Guru’s AI Agent Center, giving admins visibility into what external tools are accessing.
Implementation Considerations: Authentication options include OAuth (pre-approved for some tools) or API tokens. Some applications require whitelisting by Guru Support before they can connect via OAuth. Teams should also note that MCP is still an emerging standard. If your AI tool stack changes frequently, evaluate whether maintaining MCP connections adds integration overhead. The value proposition depends on whether your team actually uses external AI tools regularly enough to justify the setup.
Linear Expands MCP Server for Product Management
Linear, a project management tool used by product and engineering teams, expanded its MCP server with support for initiatives, project milestones, and updates. The expanded toolset lets product managers create and edit initiatives, write project updates, manage milestones, and handle project labels from AI tools like Cursor and Claude.
Operational Impact: This is relevant to CX teams that use Linear to track product requests and bug reports. Support leaders who need to communicate customer impact to product teams can now update initiatives and project milestones from their AI assistant without switching to Linear directly. The MCP server also added support for loading images and improved token usage, which reduces costs for teams using AI tools heavily.
Implementation Considerations: Linear is deprecating the /sse MCP endpoint, moving to HTTP streams at /mcp. Teams currently connected via the old endpoint need to update their configuration within two months. The deprecation will roll out gradually, but waiting until the last minute risks disruption. Verify your MCP client supports the newer HTTP streams transport.
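For teams making the switch, the change is typically a small edit to the client’s MCP configuration: replacing the old /sse URL (such as https://mcp.linear.app/sse) with the /mcp HTTP streams endpoint. As an illustrative sketch only — exact keys vary by MCP client — the updated entry might look like:

```json
{
  "mcpServers": {
    "linear": {
      "url": "https://mcp.linear.app/mcp"
    }
  }
}
```

After updating, confirm the connection still authenticates and that existing tool calls resolve before the old endpoint is retired.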
ServiceNow Makes Claude Default for Build Agent
ServiceNow, an enterprise workflow automation platform, announced a multi-year partnership with Anthropic to integrate Claude models across its core products. Claude is now the default model powering ServiceNow Build Agent, the company’s tool for building applications and agentic workflows using natural language.
Operational Impact: For CX teams running ServiceNow, this means the Build Agent can now create more complex automated workflows without dedicated developer support. Citizen developers on support teams could build custom case-routing logic or escalation automations using plain-language prompts. ServiceNow claims early internal results show up to 95% reduction in seller preparation time using Claude-powered tools, and the company is targeting a 50% reduction in implementation timelines.
Implementation Considerations: The partnership is still in early stages. ServiceNow also announced a separate deal with OpenAI just one week prior, signaling a multi-model strategy where different models power different functions. CX leaders should watch for which specific Service Cloud workflows get Claude integration first, and whether the “50% faster implementation” claim holds for complex existing deployments or only applies to net-new rollouts. ServiceNow is also building agentic workflows for healthcare and life sciences, which suggests regulated-industry use cases will get priority attention.
Salesforce Spring ’26 Adds Case Timeline to Service Cloud
Salesforce, the enterprise CRM platform, released its Spring ’26 update with several Service Cloud improvements. The headline feature is Case Timeline, a chronological view of key case events including milestones, updates, and status changes. The release also includes bidirectional milestone visibility across parent and child records, improved voice routing through Omni-Channel Flows, and a rename of Omni-Channel Supervisor to “Command Center for Service.”
Operational Impact: Case Timeline addresses a real daily friction point. Agents currently reconstruct case history by clicking through related lists, feeds, and tabs. A linear view of milestones and key events reduces that ramp-up time, especially on complex or long-running cases. The voice routing upgrade lets admins use skills-based logic and case context to route transferred calls, which could reduce misrouted transfers.
Implementation Considerations: The bidirectional milestone feature only applies to new orgs created in Spring ’26 or later. Existing orgs keep their current milestone behavior. Salesforce is also retiring Service Setup and Service Setup Assistant for new orgs, replacing them with Salesforce Go. If your team is mid-implementation, verify whether your org was provisioned before or after the Spring ’26 cutoff, as feature availability differs.
Notion Adds Asana Connector and Claude Opus 4.6
Notion, the workspace and knowledge management platform, shipped two updates. On February 4, Notion launched an Asana AI Connector that lets Notion Agent search Asana tasks and projects using natural language queries. On February 9, Notion added Claude Opus 4.6 as an available AI model, calling it the strongest model Anthropic has released.
Operational Impact: The Asana connector is useful for support teams that track bug reports or feature requests in Asana alongside their Notion knowledge base. Agents can ask questions like “show me overdue tasks grouped by assignee” without switching tools. The Claude Opus 4.6 upgrade improves Notion AI’s ability to handle multi-step requests, which could help with generating more complex documentation or summarizing cross-functional project data.
Implementation Considerations: The Asana connector respects existing Asana permissions, so data access depends on the connected account’s role. Teams should verify which Asana projects the connected account can see before assuming comprehensive coverage. The Claude model upgrade is available on Notion’s AI-enabled plans, but the announcement does not specify whether users can choose between models or whether Opus 4.6 replaces the previous default.
Richpanel Integrates Okendo for Reviews and Loyalty
Richpanel, an e-commerce helpdesk platform, launched an Okendo integration that brings loyalty program data and review management into the helpdesk. Agents can view customer loyalty points, VIP tier status, and manage reviews directly from the customer profile.
Operational Impact: For e-commerce support teams, seeing a customer’s loyalty tier and points balance alongside their ticket context helps agents make faster decisions about compensation or escalation. The ability to convert reviews into tickets based on star rating (for example, auto-creating tickets for 3-star and below) routes dissatisfied customers to support proactively rather than waiting for them to reach out.
Implementation Considerations: This integration requires both Richpanel and Okendo on Shopify. Teams need to configure which review ratings trigger ticket creation to avoid flooding the queue with every review. The Okendo User ID is required for setup, and the integration only covers reviews and loyalty data, not Okendo’s other products like surveys or quizzes.
Now this is:
Strategic Support for CX Leaders.
You’ve got ambitious Support targets and new metrics but you’re not sure what to prioritize first.
The list is long, the queue is getting longer, and you don’t have time to step back and think about CX strategically.
What if you could pressure-test your thinking with someone who’s spent 20 years building Customer Support operations?
No pitch, just a conversation.

