Our main story today is about how CX leaders are reinventing everything that we’ve known about customer experience systems and processes.
Customer Experience News is a weekly newsletter about the most important news and discussions for Customer Experience and Customer Support Leaders.
This is all the weekly news you need in around 10 minutes.
Today’s headlines are from: HubSpot, CXperiences, Zendesk, Front, Notion, Plain, Kustomer, Maven AGI, Decagon, GitBook, and Document360.
Unresolved.cx is built on the idea that every CX leader is in the same moment right now.
So many CX leaders are rebuilding from the foundation up, and nobody has a map.
The AI Agent is live. Tickets are resolving. And the team is now managing a support operation that looks different in ways nobody fully documented before launch.
Routing logic built for human queues, knowledge bases on weekly update cycles, metrics calibrated to a volume mix that no longer exists. What the operation actually needs now is harder to specify.
The composition of what Support Specialists handle is shifting. As AI Agents absorb lower-complexity volume, what stays in the queue gets harder. Interactions that required less skill also served as recovery time between more demanding conversations. When those are gone, the rhythm of the day changes.
Handle time on a complex case going up does not mean something went wrong. It might mean the team is doing exactly what they should be doing with the work that is left.
Most operations are not measuring it that way yet.
At the same time, projects that were impractical a year ago are moving. Behavioral data that could signal a disengaging customer before they cancel has lived in most CRM systems for years. Getting to it operationally required resources most support teams did not have. AI changes that equation. But unlocking those projects surfaces new ones: who owns the signal when multiple departments are each looking at a different slice of the same customer, how you audit resolutions you never see, and where the line sits between what an Agent can answer and what it should answer.
Brett Rush launched Unresolved.cx to document this transition as it is happening.
The podcast brings Manager and Director-level CX leaders into honest conversations about the problems they are living with right now.
Watch the first episode with Sarala Conlan from Kojo.
If you are building through this reinvention, submit to be a guest.
Every one of us has to reinvent everything we’ve known about customer experience systems and processes.
You’re not alone, so let’s take this journey together.

HubSpot Expands Customer Agent and Help Desk AI
HubSpot, a CRM and customer service platform, announced a set of updates to Customer Agent, Help Desk, and Knowledge Vault that expand AI deployment controls, internal collaboration, and knowledge management capabilities. Customer Agent now lets service teams route a controlled percentage of conversations to the AI agent before scaling to full volume, while Help Desk gained ticket snooze, CRM-synced Notes, and a beta calling workspace with live captions and AI summaries in a single ticket thread. Knowledge Vault can now connect directly to HubSpot Knowledge Base and index up to ten inline images per article so AI can reason over visual content alongside text.
Operational Impact
The coverage controls in Customer Agent give support managers a practical way to test AI performance on a real subset of traffic before expanding deployment. Ticket snooze removes conversations waiting on external parties from the active queue, reducing noise for agents managing inboxes with mixed-priority work. The new Leads tab in Customer Agent Insights tracks conversion rate and lead qualification breakdowns from AI conversations, a metric support teams are increasingly asked to report on but rarely have tooling to track without manual tagging. Knowledge Vault image indexing means teams whose support content relies on screenshots, diagrams, or annotated visuals can now have that content accessible to AI agents rather than relying only on the surrounding text.
Implementation Considerations
Coverage controls require that teams define what acceptable AI performance looks like before increasing the percentage of traffic routed to the agent. Without that benchmark, teams tend to scale up on volume metrics alone and miss quality issues that emerge at higher traffic levels. Draft mode testing in Customer Agent Insights shows which sources and rules were used before publishing, but teams still need someone to review those outputs against real edge cases. Knowledge Vault image indexing only applies to content published or updated after the feature is live, so large static knowledge bases will not benefit until teams run a content refresh cycle.
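One way to make that benchmark concrete is a simple quality gate: coverage only increases when every quality metric clears its floor, not when volume alone looks healthy. The sketch below is illustrative; the metric names and thresholds are assumptions, not HubSpot's, and each team would substitute its own.

```python
# Hypothetical sketch: gate a coverage increase behind a quality benchmark,
# not volume alone. Metric names and thresholds are illustrative.

def can_increase_coverage(metrics: dict, benchmark: dict) -> bool:
    """Return True only if every quality metric meets its benchmark floor."""
    return all(metrics.get(name, 0.0) >= floor for name, floor in benchmark.items())

benchmark = {"resolution_rate": 0.70, "csat": 4.2, "grounded_answer_rate": 0.95}

week_one = {"resolution_rate": 0.74, "csat": 4.4, "grounded_answer_rate": 0.96}
week_two = {"resolution_rate": 0.78, "csat": 4.5, "grounded_answer_rate": 0.91}

print(can_increase_coverage(week_one, benchmark))  # every floor met: safe to expand
print(can_increase_coverage(week_two, benchmark))  # groundedness slipped: hold coverage
```

Note that week two's resolution rate improved while groundedness degraded, which is exactly the kind of quality issue a volume-only review would miss.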
CXperiences Launches Three Free Testing Tools for Zendesk Teams
CXperiences, a customer experience tooling practice focused on Zendesk environments, launched two free apps on the Zendesk Marketplace and opened a beta for a third standalone product.
The AI Ticket Generator reads a team’s Help Center and produces realistic inbound tickets with varied personas, writing styles, and urgency levels. The AI Customer Simulator tags training tickets and plays the role of a live customer, responding and escalating based on what the Support Specialist writes. A third product, TestMyAgent.io, is a pre-deployment testing platform in private beta that benchmarks a Zendesk AI Agent against historical ticket data, scores it across five dimensions including groundedness and tone, runs adversarial security scenarios, and produces a containment rate estimate and ROI projection before any customer sees the agent live.
Operational Impact
The two Marketplace apps address a gap that affects most teams building out Agent workflows: the absence of realistic training volume before go-live. New hires typically either shadow live queues or work through scripted scenarios that do not reflect actual ticket variation. The AI Ticket Generator removes the dependency on real ticket volume for onboarding and routing validation. The AI Customer Simulator adds the dynamic element that static tickets cannot provide, giving Support Specialists practice with escalation paths and tone calibration before they encounter those situations with real customers.
TestMyAgent.io targets a different but related problem: teams deploying Zendesk AI Agents often commit to outcome-based pricing without a reliable performance baseline. The platform generates a scored benchmark report covering containment rate, hallucination detection, knowledge base gap analysis, and prompt injection risk, all within Zendesk’s own data environment via OAuth.
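As a rough illustration of what a containment baseline means in practice, the sketch below computes a naive containment rate from labeled historical tickets. This is not TestMyAgent.io's methodology, which is not described in detail here; the data structure and labels are assumptions.

```python
# Illustrative only: a naive containment-rate estimate from labeled
# historical tickets. The real platform's scoring is more involved.

from dataclasses import dataclass

@dataclass
class Ticket:
    resolved_by_ai: bool  # the agent produced a resolution
    escalated: bool       # a human had to take over

def containment_rate(tickets: list) -> float:
    """Share of tickets the AI agent resolved without a human handoff."""
    if not tickets:
        return 0.0
    contained = sum(1 for t in tickets if t.resolved_by_ai and not t.escalated)
    return contained / len(tickets)

sample = [
    Ticket(resolved_by_ai=True, escalated=False),
    Ticket(resolved_by_ai=True, escalated=True),
    Ticket(resolved_by_ai=False, escalated=True),
    Ticket(resolved_by_ai=True, escalated=False),
]
print(f"{containment_rate(sample):.0%}")  # 2 of 4 tickets contained
```

Even a crude baseline like this gives a team a number to compare against a vendor's ROI projection before committing to outcome-based pricing.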
Zendesk Agentic Email AI Now Generally Available
Zendesk, a customer service software platform, announced general availability of Agentic AI for email handling and launched an Automation Potential Report that identifies which conversation types are candidates for AI automation. Agentic AI enables the platform to receive emails, understand intent, execute multi-step procedures, and escalate to a human agent when needed. The Automation Potential Report analyzes historical conversation data and produces brand-specific recommendations with sample tickets to illustrate what automation would cover.
Operational Impact
GA status removes the access restriction that previously required a Zendesk rep to enable email AI agents for individual accounts. Teams that went through pilot programs or were on waitlists can now deploy without waiting for provisioning. The Automation Potential Report gives support operations a data-backed starting point for internal conversations about AI scope, replacing the process of manually pulling ticket exports and classifying them by type to build the case. For teams already using AI for chat, extending automation to email closes the most common channel gap in a self-service strategy.
Implementation Considerations
Agentic AI for email performs best when inbound request types are predictable and internal procedures are documented clearly enough for the system to follow. Teams with highly variable email content, such as open-ended technical questions, account disputes, or regulatory correspondence, will need substantial configuration and tuning before the agent can handle those reliably. The Automation Potential Report reflects historical volume and does not account for the supervision and correction overhead required during the early deployment period, which can be significant for teams without dedicated AI operations resources.
Front Ships Four AI Accuracy and Reach Updates
Front, a customer communications platform, announced four AI updates: Fact Invalidation for AI Replies, AI Translate across chat channels, Guru and Confluence as connectable AI knowledge sources, and Front Resolve, a new embedded self-service widget. Fact Invalidation lets admins block specific facts drawn from past conversations from being reused in future AI replies, with invalidated facts marked throughout the platform and re-enablement available at any time. Front Resolve runs Autopilot Playbooks directly inside a company’s app or website, handling requests from start to finish and creating a ticket with full conversation context when a human handoff is needed.
Operational Impact
Fact Invalidation addresses a specific failure mode where AI replies surface information from old conversations that no longer reflects current policy, pricing, or product state. For teams whose products or procedures change regularly, this is a meaningful risk reduction mechanism, provided someone is actively reviewing AI output to catch the problem. AI Translate removing the channel restriction is relevant for teams handling SMS or WhatsApp at volume where language barriers have been managed manually or through separate tooling. Front Resolve’s context-capture on handoff means agents receive a ticket with the full conversation history already recorded, rather than an empty ticket and a customer who has to explain their issue again.
Implementation Considerations
Fact Invalidation requires an active QA process for AI replies. The tool provides the mechanism to block a problematic fact, but it does not surface inaccurate facts automatically. Teams without structured AI review workflows will still have stale information in use until someone spots it in a specific reply. Front Resolve is available on a waitlist as part of the Autopilot add-on, so teams evaluating it cannot deploy immediately. Connecting Guru or Confluence as knowledge sources also requires the underlying content to be organized and maintained. A poorly structured or outdated knowledge base connected to AI will produce inconsistent responses regardless of the integration quality.
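The core mechanism behind fact invalidation can be pictured as a blocklist filter applied to retrieved facts before reply generation. The sketch below is a minimal illustration of that idea only; the IDs, field names, and structure are hypothetical, not Front's implementation.

```python
# Minimal sketch of the fact-invalidation idea: filter retrieved facts
# against an admin-maintained blocklist before reply generation.
# All names and structures here are hypothetical.

invalidated_facts = {"fact_1437", "fact_2210"}  # IDs an admin has blocked

def usable_facts(candidates: list) -> list:
    """Drop any retrieved fact whose ID has been invalidated."""
    return [f for f in candidates if f["id"] not in invalidated_facts]

retrieved = [
    {"id": "fact_1437", "text": "Refunds take 30 days."},   # stale policy, blocked
    {"id": "fact_8801", "text": "Refunds take 5-7 days."},  # current policy
]
print(usable_facts(retrieved))  # only the current fact survives
```

The important operational point survives the simplification: nothing populates the blocklist automatically, so someone still has to notice the stale fact in a real reply first.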
Notion Agents Now Work in Private Slack Channels
Notion, a connected workspace platform, announced that custom AI agents can now be added to private Slack channels, enabling them to read and reply within those channels. The update extends AI agent reach beyond public channels into spaces where internal team conversations happen, with admins controlling access through Notion’s AI connector settings. Agents only see the specific private channels they are explicitly invited to.
Operational Impact
Support and CX operations teams that run private Slack channels for escalation coordination, on-call communication, or internal knowledge-sharing can now bring AI agents into those conversations without exposing them to the full workspace. An agent with access to Notion documentation can surface relevant articles or answer process questions in channels where that lookup would otherwise require someone to stop and search manually. The per-channel access model allows selective deployment rather than a broad grant of access across all private channels.
Implementation Considerations
Private channel access introduces a trust boundary that teams should evaluate before inviting agents. The underlying Notion content those agents draw from needs to be accurate and current. An agent with access to a sensitive escalation channel that also pulls from outdated runbooks could surface incorrect procedures at the wrong moment. Organizations with multiple teams sharing a Notion workspace will want to clarify who has permission to configure AI connector settings and which agents are authorized to join which channels before enabling the feature broadly.
Plain Connects HubSpot Ticket Creation to Workflows
Plain, a customer support platform built for technical teams, announced the ability to trigger HubSpot actions directly from its workflow builder, including creating tickets and notes associated with a customer’s company record in HubSpot. Both actions support variable interpolation using Plain’s double-brace syntax, and the built-in thread URL variable links the HubSpot record back to the originating Plain conversation. The feature is available to any Plain workspace with HubSpot already connected.
Operational Impact
Support teams that use Plain as their primary customer interface and HubSpot as their CRM can now close a gap that previously required manual handoffs or custom integrations. When a support thread opens, a HubSpot ticket can be created automatically with the thread URL attached, giving account managers and sales teams visibility into open issues without waiting for a support rep to cross-post. Notes can carry context from the conversation, such as error codes or customer-reported symptoms, directly into the HubSpot company record where those details are visible to teams outside of support.
Implementation Considerations
The feature requires HubSpot to be connected to the Plain workspace before any workflow actions are available. Variable interpolation requires that Plain threads contain the expected data fields. Teams building these workflows should test with real thread data to confirm variables resolve as expected before enabling automation at volume. Custom ticket properties are supported in both actions, but teams will need to verify that the property names in their HubSpot instance match the configuration in Plain.
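The double-brace interpolation pattern, and the failure mode of an unresolved variable, can be sketched in a few lines. The variable names below are hypothetical placeholders, not Plain's documented field names; the point is that a template should make missing fields loudly visible during testing rather than silently passing empty values into HubSpot.

```python
# Rough sketch of double-brace variable interpolation with explicit
# missing-variable flagging. Variable names are hypothetical, not
# Plain's documented schema.

import re

def interpolate(template: str, thread: dict) -> str:
    """Replace {{variable}} tokens with thread fields, flagging any missing."""
    def resolve(match):
        key = match.group(1).strip()
        if key not in thread:
            return f"<MISSING:{key}>"  # surfaces unresolved variables in tests
        return str(thread[key])
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", resolve, template)

template = "Issue reported in {{thread.title}}: {{thread.url}}"
thread = {
    "thread.title": "Login failure",
    "thread.url": "https://app.plain.com/thread/123",
}
print(interpolate(template, thread))
print(interpolate("Error code: {{thread.error_code}}", thread))  # flags the gap
```

Running a pass like this over real thread data before enabling the workflow at volume is the cheap version of the testing step described above.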
Kustomer Launches AI Evals and Brand-Aware Knowledge Routing
Kustomer, a CRM-native customer service platform, announced Evals Assistant for AI agent testing and Brand-Aware AI for Customers 2.0 as part of its April 2026 releases. Evals Assistant provides a conversational interface within the AI for Customers testing screen that generates test cases, runs evaluations, surfaces coverage gaps, and explains failure reasons with procedure recommendations. Brand-Aware AI automatically selects knowledge articles and external knowledge sources by brand based on conversation context, supported by URL Filtering and Nightly Re-indexing that gives admins control over which URLs are included in AI responses, with bulk retry options for failed syncs.
Operational Impact
Evals Assistant moves AI agent testing from a manual, time-intensive process to a guided workflow. Support teams that have found it difficult to build comprehensive test suites because of the time required to write individual test cases now have a tool that generates coverage suggestions and identifies gaps in escalation scenarios. Brand-Aware knowledge routing is relevant for businesses operating multiple brands through a single Kustomer instance, where a shared knowledge base previously required agents to manually filter relevant content or risk surfacing information from the wrong brand. URL Filtering with nightly re-indexing addresses knowledge freshness without requiring manual sync triggers.
Implementation Considerations
Evals Assistant generates test suggestions based on existing procedures and conversation history. Teams deploying AI for Customers 2.0 without a well-documented procedure library will find the evaluations flag failures without enough procedure context to address them. Brand-Aware knowledge routing requires that knowledge sources be properly tagged and associated with brands within Kustomer settings before the routing logic can function. Teams that have not organized their knowledge base by brand will need to complete that work first. URL Filtering rules require teams to audit which URLs are currently indexed so they know what to exclude before enabling the feature.
Maven AGI Structures Ticket Data with Intelligent Fields
Maven AGI, an enterprise AI platform for customer support, announced Intelligent Fields, a feature that extracts and structures data from unstructured support ticket transcripts into searchable, organized records that update automatically after each interaction. The feature is designed to surface information for business decisions, product feedback loops, and upsell identification without requiring manual ticket tagging. Maven AGI positions this as a way to activate support data that would otherwise remain unused in conversation logs.
Operational Impact
Support teams that want to report on trends in customer issues, feature requests, or account health signals typically spend time manually tagging tickets or exporting data to BI tools for analysis. Intelligent Fields creates a structured layer from conversation content automatically, reducing the data preparation burden and making trend analysis available inside the support platform. Teams building QBRs, product feedback reports, or churn risk analyses from ticket data are the most direct beneficiaries, particularly those working without a dedicated analytics function.
Implementation Considerations
The accuracy of Intelligent Fields depends on the quality and consistency of conversation transcripts. Tickets with incomplete information, heavy technical jargon, or multiple unrelated topics in a single thread may produce inaccurate field values. Teams should validate field outputs against a sample of real tickets before using the data for business decisions. Maven AGI operates at the enterprise tier, so Intelligent Fields is not available to smaller teams or those evaluating the platform without a formal contract in place.
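Validating extracted fields against a hand-labeled sample is straightforward to operationalize. The sketch below compares automatically extracted values with human labels on a per-field basis; the field names are illustrative, not Maven AGI's schema.

```python
# Hedged sketch: spot-check automatically extracted field values against
# a hand-labeled sample before trusting them for reporting. Field names
# are illustrative, not a real product schema.

def field_accuracy(extracted: list, labeled: list, field: str) -> float:
    """Fraction of sampled tickets where the extracted field matches the label."""
    if not labeled:
        return 0.0
    matches = sum(
        1 for ext, lab in zip(extracted, labeled) if ext.get(field) == lab.get(field)
    )
    return matches / len(labeled)

extracted = [{"issue_type": "billing"}, {"issue_type": "bug"}, {"issue_type": "billing"}]
labeled   = [{"issue_type": "billing"}, {"issue_type": "bug"}, {"issue_type": "feature_request"}]
print(round(field_accuracy(extracted, labeled, "issue_type"), 2))  # 2 of 3 agree
```

A per-field accuracy number like this tells a team which extracted fields are safe to put in a QBR and which need a larger labeled sample or a prompt adjustment first.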
Decagon Releases Automatic AI Agent Optimization
Decagon, an AI agent platform for customer support, announced its Spring ’26 release, including Automatic Optimization, Root Cause Analysis, and Proactive Agents with user memory and outbound voice capabilities. Automatic Optimization enables AI agents to identify patterns in conversation failures and refine performance over time without requiring manual intervention for each improvement cycle. Root Cause Analysis surfaces the underlying reasons for agent failures so teams can address systemic issues rather than individual conversation instances.
Operational Impact
Support teams running Decagon agents at scale spend meaningful time reviewing failure logs and manually updating agent configurations. Automatic Optimization shifts some of that maintenance work to the system, which is relevant for teams managing large volumes of edge cases without dedicated AI operations staff. Proactive Agents with user memory give the system a way to personalize interactions based on prior contact history, which is useful in subscription support environments where conversation context about a customer’s account history matters before the interaction begins.
Implementation Considerations
Automatic Optimization that runs without human review introduces the risk of the agent drifting in a direction that improves one measurable metric while creating problems that are harder to detect. Teams should confirm what constraints are in place before enabling auto-optimization and define clearly what the system is permitted to change and what requires human approval. Outbound voice capabilities require integration with existing telephony infrastructure and compliance review, particularly in regulated industries where outbound AI contact is restricted. User memory raises data retention and privacy questions that teams will need to address in their AI governance documentation before enabling the feature.
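The constraint question above can be made concrete with an allowlist: changes the optimizer may apply on its own, changes that queue for human approval, and everything else rejected by default. The categories below are hypothetical, not Decagon's configuration model; the sketch shows the shape of the policy, not the product's API.

```python
# Sketch of an approval constraint for auto-optimization: an allowlist
# of autonomous changes, a review queue, and fail-closed defaults.
# Change-type names are hypothetical.

AUTO_APPROVED = {"rephrase_response", "reorder_retrieval_sources"}
NEEDS_REVIEW  = {"change_escalation_rule", "edit_procedure", "expand_scope"}

def route_change(change_type: str) -> str:
    """Decide whether a proposed optimization applies, queues, or is rejected."""
    if change_type in AUTO_APPROVED:
        return "apply"
    if change_type in NEEDS_REVIEW:
        return "queue_for_human_review"
    return "reject"  # unknown change types fail closed

print(route_change("rephrase_response"))       # apply
print(route_change("change_escalation_rule"))  # queue_for_human_review
print(route_change("delete_all_procedures"))   # reject
```

Failing closed on unknown change types is the design choice that matters: it prevents the optimizer from drifting into categories nobody explicitly decided to delegate.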
GitBook Surfaces User Questions with AI Insights
GitBook, a documentation platform, announced AI Insights to show teams what questions users are asking within their documentation sites, alongside improvements to site search speed and result relevance. AI Insights gives documentation owners visibility into where users are finding answers and where they are not, surfacing gaps that would otherwise require analysis of support ticket volume to identify. The search improvements deliver faster results with better relevance matching for documentation readers.
Operational Impact
Support teams that maintain external-facing documentation often have limited visibility into which content is working and which is sending users to contact support. AI Insights connects documentation usage to user intent, so teams can identify high-traffic questions that are not well-answered in existing content and prioritize those for documentation updates. For teams running a ticket deflection strategy, this creates a direct feedback loop between documentation quality and inbound support volume without requiring a separate analytics integration.
Implementation Considerations
AI Insights reflects what users are asking, not whether the documentation is answering those questions correctly. Teams will need to review surfaced questions alongside the corresponding content to determine whether the issue is missing content, poorly written content, or content that is difficult to find through the current site structure. The feature requires meaningful user interaction with documentation search or the AI assistant for data to accumulate. Documentation sites with lower traffic will not have enough signal to draw actionable conclusions immediately after launch.
Document360 Adds Multilingual Guides and Security Controls
Document360, a knowledge base platform, released version 12.4.1 in April 2026, adding multilingual support for step-by-step guides, improved anchor link stability across edits and language versions, advanced Content Security Policy controls, and AI search improvements in Eddy AI. The multilingual step-by-step guide feature extends language support to a content type that was previously limited to the platform’s default language, addressing a gap for teams serving global audiences with procedural documentation. Eddy AI received updates to search relevance and heading recognition to improve answer accuracy.
Operational Impact
Support teams maintaining knowledge bases for international customers often manage step-by-step guides separately from the multilingual content pipeline, creating duplicate maintenance work whenever a procedure changes. Bringing step-by-step guides into the multilingual workflow consolidates that process. The CSP controls are relevant for teams hosting the knowledge base on corporate portals with strict security policies, where misconfigured CSP settings can silently break portal functionality or block required scripts without clear error messages to diagnose the problem.
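For readers less familiar with CSP, a policy is delivered as an HTTP response header listing which origins each resource type may load from. A rough illustration of the kind of policy involved when embedding a knowledge base on a corporate portal is shown below; the domain is a placeholder, the exact directives depend on how the knowledge base is embedded, and the header is wrapped across lines here only for readability.

```http
Content-Security-Policy: default-src 'self';
  script-src 'self' https://kb.example-docs.com;
  img-src 'self' https://kb.example-docs.com data:;
  frame-src https://kb.example-docs.com
```

If a required origin is missing from a directive, the browser blocks the resource silently from the user's perspective, which is why misconfigured CSP tends to break portal functionality without an obvious error.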
Implementation Considerations
Multilingual guide support requires translation workflows to be in place for the new content type. Teams that do not yet have a translation process, whether managed in-house or through a vendor, will need to resource that before the feature delivers value. Anchor link stability improvements apply to content edited after the update. Teams with large volumes of static articles published before the update may still encounter anchor link issues in that older content until those articles are reviewed and re-saved.
Now this is:
Strategic Support for CX Leaders.
You’ve got ambitious Support targets and new metrics, but you’re not sure what to prioritize first.
The list is long, the queue is getting longer, and you don’t have time to step back and think about CX strategically.
What if you could pressure-test your thinking with someone who’s spent 20 years building Customer Support operations?
No pitch, just a conversation.

