Our main story today is about Linear shipping bi-directional MCP support and what that means for how software companies are organized. With Agents that consume context from external tools and a server that exposes Linear to external Agents, the translation layer between Sales, CX, Support, and Engineering is starting to compress, and the competitive picture against Jira and Asana now hinges on architecture, not feature count.
Customer Experience News is a weekly newsletter about the most important news and discussions for Customer Experience and Customer Support Leaders.
This is all the weekly news you need in around 5 minutes.
Today’s headlines are from Linear, ServiceNow and Google Cloud, Missive, Kustomer, and Notion.
Linear, the project management platform built for software teams, shipped MCP support for Linear Agent, and the move quietly redrew the lines of where issue tracking sits in the modern software org. The new release lets Linear Agent connect outward to any MCP server in the ecosystem, pulling in meeting takeaways from Granola, enterprise context from Glean, interview notes from Notion, or product analytics from PostHog, all grounded inside a Linear workflow. Combined with Linear’s existing MCP server that exposes the workspace to external AI clients, Linear is now bi-directional: it both serves data to outside Agents and consumes data from external systems on behalf of its own Agent.
This week’s integrations from Birdie.so, a screen recording tool for support teams, and Attio, the AI-native CRM, are the first signals of what teams will build on top of that.
The architectural distinction matters because the competitive landscape looks different than it did six months ago.
- Atlassian’s Rovo MCP server connects Jira and Confluence to external AI clients, with rate limits ranging from 500 calls per hour on Free up to 10,000 per hour on Enterprise.
- Asana’s MCP server publishes a similar surface to Claude, ChatGPT, Cursor, Codex, Microsoft Teams, and a long list of other clients.
Both are well-built, both are widely deployed, and both are designed around a single direction of travel: external Agents reach in to read or write.
👉 Linear shipped that direction earlier and then shipped the other direction too, so the same backlog can now act as both a destination for AI-driven work and an originating point for AI-driven research across a team’s full toolchain.
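For readers who want to see what the two directions look like in practice, here is a minimal sketch using the open-source MCP Python SDK. The server half exposes an issue-creation tool the way Linear’s MCP server exposes workspace actions; the client half is the newer direction, an Agent-side session pulling context from an external server. The tool names, server commands, and data shapes are illustrative assumptions, not Linear’s actual API.

```python
# Direction 1: expose workspace actions to outside Agents over MCP.
# Illustrative only -- the tool name and fields are assumptions, not Linear's real schema.
from mcp.server.fastmcp import FastMCP

server = FastMCP("issue-tracker")

@server.tool()
def create_issue(title: str, description: str, label: str = "feature-request") -> str:
    """Create a tracked issue from whatever context the calling Agent supplies."""
    # A real implementation would write to the tracker's API here.
    return f"Created issue '{title}' with label '{label}'"

if __name__ == "__main__":
    server.run()  # serves over stdio by default
```

And the other direction, the one Linear just shipped: the product’s own Agent acting as an MCP client against whatever servers the workspace allows.

```python
# Direction 2: the Agent consumes context from an external MCP server.
# "granola-notes-mcp" and "get_recent_notes" are stand-ins for any allowed server and tool.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def pull_meeting_context() -> None:
    params = StdioServerParameters(command="granola-notes-mcp", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()                # discover the server's tools
            print("Available:", [t.name for t in tools.tools])
            notes = await session.call_tool("get_recent_notes", {"days": 7})
            print(notes.content)                              # grounded context for the Agent

asyncio.run(pull_meeting_context())
```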
For software companies, this changes who can put work into the engineering pipeline and what shape that work arrives in.
- A Sales lead in Attio can ask their Agent to file a feature request from a deal note, and the Agent creates a tracked Linear issue with the customer context, deal stage, and competitive notes already attached.
- A Support Specialist using Birdie.so can capture a screen recording of a customer-reported bug, and the recording, console logs, and reproduction steps land in the Linear backlog as a structured issue.
- A Product Manager can ask Linear Agent to read three weeks of #voice-of-customer Slack threads, cross-reference them with Granola meeting notes from prospect calls, and draft a spec grounded in both.
The handoff that used to require a PM, a triage meeting, and a translation step from one tool’s vocabulary to another collapses into a single Agent action. The org structure implications are larger than the workflow ones.
Most software companies still run a triage layer whose primary job is to translate between functions: a PM who turns Sales requests into engineering tickets, a Support lead who turns customer issues into bug reports, a Solutions Engineer who turns prospect questions into roadmap items. That layer exists because every function speaks a different vocabulary, uses a different tool, and lacks the context to file work the way engineering needs it.
When any function can produce a structured Linear issue with the right context attached on the first try, the translation layer compresses. The PM still owns prioritization and trade-offs, but the busywork of converting raw input into trackable work moves to the system itself.
The status meeting economy changes too. A meaningful portion of every weekly cross-functional standup is spent on the same loop: someone asks what is happening with a customer issue, someone else opens Jira, someone else opens Salesforce, someone else opens the support tool, and the group reconciles competing views of the same work.
With a bi-directional MCP architecture, an Agent can read across all of those systems in a single prompt and produce the reconciled view directly. That does not eliminate meetings, but it eliminates the meeting work that exists only because the data is scattered across tools that cannot talk to each other.
🔥 For CX and Support leaders, the operational implication is concrete. Your team is the largest source of structured product feedback in the company, and historically that feedback has been the hardest to get into engineering’s planning cycle because it sits in Zendesk, Intercom, Front, or Pylon while the backlog sits somewhere else.
Linear’s existing CX integrations already provide direct paths from a support conversation to an issue. The MCP layer adds a new path: an Agent in your support tool can read a week of conversations, group them by theme, deduplicate against existing Linear issues, and propose a prioritized list of feature requests with linked customer evidence. The work that took a Support manager half a day becomes a five-minute Agent run, and the engineering team gets cleaner input than they have ever received from CX.
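If you scripted that run yourself against the two MCP surfaces, it could look roughly like the sketch below. Everything here is an assumption for illustration: the server commands, the tool names (search_conversations, list_issues, create_issue), and the two placeholder helpers; the real tool names exposed by your support tool and by Linear may differ.

```python
# Sketch of a weekly feature-request triage run across a support tool and the backlog.
# All server commands and tool names are hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

def group_by_theme(conversations) -> list[dict]:
    """Placeholder: in practice an LLM call or clustering step groups conversations by theme."""
    return []

def already_tracked(theme: dict, existing_issues) -> bool:
    """Placeholder: fuzzy-match the theme against existing issue titles to deduplicate."""
    return False

async def weekly_feature_request_triage() -> None:
    helpdesk_params = StdioServerParameters(command="support-tool-mcp", args=[])
    backlog_params = StdioServerParameters(command="linear-mcp", args=[])

    async with stdio_client(helpdesk_params) as (hr, hw), stdio_client(backlog_params) as (br, bw):
        async with ClientSession(hr, hw) as helpdesk, ClientSession(br, bw) as backlog:
            await helpdesk.initialize()
            await backlog.initialize()

            # 1. Read a week of conversations from the support tool.
            convos = await helpdesk.call_tool("search_conversations", {"since_days": 7})

            # 2. Group them by theme.
            themes = group_by_theme(convos.content)

            # 3. Deduplicate against issues that already exist in the backlog.
            existing = await backlog.call_tool("list_issues", {"label": "feature-request"})

            # 4. Propose new issues with the customer evidence attached.
            for theme in themes:
                if not already_tracked(theme, existing.content):
                    await backlog.call_tool("create_issue", {
                        "title": theme["title"],
                        "description": theme["summary_with_customer_links"],
                    })

asyncio.run(weekly_feature_request_triage())
```

The point of the MCP layer is that a script like this is the fallback, not the product: once both servers are allowed in the workspace, the same steps can run as a single Agent prompt.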
❗️ There are real implementation considerations before a Support org leans into this. The first is that bi-directional MCP only works as well as the data on both sides; if your customer profiles in Zendesk are inconsistent, your Linear labels are unmaintained, or your team has no convention for what counts as a feature request versus a bug, the Agent will produce confidently structured work that is just as messy as what your team produced manually.
The second is that admin controls matter. Linear added workspace-level MCP permissions and allowlists alongside the launch, and any organization rolling this out needs to take that seriously, because an Agent that can read across tools is also an Agent that can leak information across permission boundaries if access is not scoped carefully. The third is that engineering’s tolerance for tooling change is not infinite. Asking engineering to switch from Jira to Linear is a real political project, and Linear’s existing bi-directional Jira Epic sync makes the transition cleaner but does not make it free.
The thought-provoking question for any CX leader is whether your operating model still requires a translation layer between functions, or whether the architecture has caught up to the point where every function can contribute structured work directly.
The vendors that are building toward that future are doing it bi-directionally, with Agents that consume context as readily as they produce it. The vendors that are still building one-way connections are shipping useful features, but the architectural ceiling is lower.
The rest of 2026 will sort companies into two groups: the ones that rewire how their functions hand off work, and the ones that keep their existing handoffs and just speed them up. Both will get value out of MCP. Only one will see the org chart change.
ServiceNow and Google Cloud unite Agents for autonomous operations
ServiceNow, the enterprise workflow and IT service management platform, expanded its partnership with Google Cloud at Google Cloud Next to deliver Agents that work across both platforms for autonomous enterprise operations. The new solutions span 5G networking, retail, and IT systems, with ServiceNow Agents on Google’s Gemini Enterprise platform handling anomaly detection and connecting to the ServiceNow AI Platform for remediation. Underpinning the integration is a shared interoperability framework built on Agent-to-Agent (A2A), Agent-to-UI (A2UI), and Model Context Protocol (MCP) standards, with unified governance through ServiceNow’s AI Control Tower.
For CX leaders running on ServiceNow, the most operationally relevant piece is the autonomous IT operations integration featuring ServiceNow Autonomous Workforce and Gemini Enterprise for CX, currently in preview. The Level 1 Service Desk AI Specialist is the first out-of-the-box Agent, built to autonomously diagnose and resolve common IT support requests end-to-end with built-in governance and human oversight. Through the AI Control Tower integration, every Agent and MCP Server across both platforms appears in a unified registry, giving teams a continuous view of what Agents are running, what they are accessing, and how they are behaving.
Implementation is gated on enterprise scale. The full general availability target is later this year, and the partnership is built around customers already running both ServiceNow and Google Cloud at depth, with the data fabric and governance infrastructure to support cross-platform Agent operation. The promise of “self-healing” operations is contingent on clean telemetry, well-defined SLAs, and a service catalog mature enough for Agents to act on. Smaller Support orgs or those still consolidating their CX stack should treat this as a directional signal about where enterprise Agent governance is headed rather than a near-term deployment plan.
Missive launches AI Credits and time series analytics
Missive, an inbox collaboration platform built for teams that run on email, released version 11.25.0 with two changes that matter for Support operations. The first is Missive AI Credits: organizations can now purchase AI credits directly from Missive instead of bringing their own provider key, with credits shared across the team for usage monitoring. The second is time series charts in Analytics, which let users click any metric cell in a report to see how that number has trended over time for a specific user, team, account, or organization label.
For Support teams using Missive as a shared inbox, the credit model removes a real adoption barrier. Setting up a separate AI provider account, managing keys, and budgeting tokens at the team level is friction that has kept smaller support orgs from using AI features they technically have access to. The trended metrics in Analytics address a related gap: knowing whether a Specialist’s reply time, ticket volume, or resolution rate is improving or just spiking on a single bad day. Both changes shift Missive closer to a self-contained Support analytics surface rather than a tool that exports to a separate BI layer.
A note on the credit model before deploying: usage costs scale with how aggressively your team uses AI drafting, summarization, and AI Rules, and Missive has not published per-action credit costs in the changelog. Teams running high message volumes through AI Rules should pilot credit consumption against a known workload before committing to a budget. Paid organizations get five dollars of free credit to test with, which is enough to run a small pilot but not enough to project monthly cost across a full team without measurement.
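One way to turn that five-dollar pilot into a monthly estimate is simple linear scaling, sketched below. The numbers are placeholders, and the assumption that credit spend scales roughly with conversation volume is yours to verify, since Missive has not published per-action costs.

```python
# Back-of-the-envelope projection from a measured pilot; all inputs are your own measurements.
def project_monthly_cost(pilot_spend_usd: float, pilot_conversations: int,
                         monthly_conversations: int) -> float:
    """Scale observed pilot spend linearly to expected monthly conversation volume."""
    cost_per_conversation = pilot_spend_usd / pilot_conversations
    return cost_per_conversation * monthly_conversations

# Example: the $5 free credit covered 250 AI-assisted conversations,
# and the team handles roughly 4,000 conversations a month.
print(f"${project_monthly_cost(5.00, 250, 4_000):.2f}/month")  # -> $80.00/month
```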
Kustomer launches Signals for proactive customer intelligence
Kustomer, the AI customer experience platform, unveiled Kustomer AI Signals, a capability inside its AI for Reps product that surfaces customer context to Support Specialists the moment a conversation begins. Signals analyzes customer behavior, conversation history, and sentiment to identify what matters most before a Specialist starts typing a response. Notably, Kustomer is also offering Signals through its standalone AI offering, which means teams running on Zendesk or other helpdesks can use the capability without migrating their core platform.
The operational value depends on whether your team currently has a “first thirty seconds” problem, where Specialists spend the opening of every interaction reading the timeline, checking past tickets, and piecing together what happened before this conversation. Signals condenses that work into a summary of relevant context that is ready before the Specialist types a word. For high-volume B2C teams handling repeat issues across long customer histories, that compression saves real handle time and improves first-response quality. For low-volume B2B teams where every conversation already gets Specialist review and pre-call research, the lift is smaller.
Implementation depends entirely on the quality of the underlying customer data. Signals reads behavior, history, and sentiment, which means the surfaced context is only as accurate as your customer timeline, your conversation tagging discipline, and your sentiment scoring inputs. Teams with messy customer records, inconsistent tagging, or fragmented identity resolution will get fragmented signals. The cross-platform pitch is appealing, but the standalone version still requires a clean data feed from your existing helpdesk, which is a project most teams underestimate.
Notion adds Skills, Kimi K2.6, and new connectors for Agents
Notion, the workspace and knowledge management platform, shipped a sequence of Agent updates this month. Notion 3.4 part 2 introduced Skills, which let users save a frequently-used Agent workflow such as a weekly update draft as a reusable command, alongside AI Autofill in databases and direct integrations between the Notion Agent and Calendar, Mail, and Slack. Separately, Notion added the Kimi K2.6 open-weight model from Moonshot AI to its model picker and integrated Anthropic’s Claude Opus 4.7, which Notion says uses fewer tokens and produces three times fewer tool errors on complex workflows.
For CX teams using Notion as a knowledge base or runbook source, three of these updates matter directly. Skills turn institutional knowledge into Agent muscle memory: a Support lead can encode “draft the weekly escalation summary using these databases and this format” once, and any Specialist can run it from chat. AI Autofill is a quieter shift: continuous enrichment, extraction, and categorization of database rows means a customer-issue tracker, a knowledge gap log, or a feedback database can stay current without manual upkeep. And the new Salesforce and Box connectors fill in two large gaps that previously forced Notion users to context-switch out of the workspace to gather call prep or contract data.
Two implementation considerations are worth flagging. The first is pricing: Custom Agents are free to try through May 3, 2026, but switch to a credit-based model on May 4, with credits available as an add-on to Business and Enterprise plans. Teams piloting Custom Agents during the free window need to track real usage now to forecast credit needs accurately, and Notion’s credits dashboard is the place to do it. The second is that Skills are only as reliable as the underlying knowledge: a Skill that points to a messy collection of pages will produce inconsistent outputs, and the cost of cleaning that up sits with the team that wants the Skill, not with Notion.
Now this is:
Strategic Support for CX Leaders.
You’ve got ambitious Support targets and new metrics, but you’re not sure what to prioritize first.
The list is long, the queue is getting longer, and you don’t have time to step back and think about CX strategically.
What if you could pressure-test your thinking with someone who’s spent 20 years building Customer Support operations?
No pitch, just a conversation.

