CX-News: Apr 2, 2026 – Guest Intelligence for Live Events Going Beyond the Helpdesk



Customer Experience News is a weekly newsletter about the most important news and discussions for Customer Experience and Customer Support Leaders.

This is all the weekly news you need in around 5 minutes. Our top story is about Guest Intelligence for Live Events going beyond the Helpdesk.

Today’s 10 headlines are from: ASQR, Intercom, Decagon, Pylon, Notion, Sierra, Zendesk, Linear, Plain, and Zipchat.


ASQR |ˈaskər|, a guest intelligence platform built for venues, festivals, conferences, and sports teams, published a detailed breakdown of its platform category this week. The argument is direct: live events organizations have outgrown general-purpose helpdesks, and the architecture that works for SaaS support does not work when your ticket volume goes from 20 to 2,000 in 48 hours around a single on-sale.

Every event has its own policies, FAQs, venue logistics, and brand voice. A general helpdesk flattens all of that into a single ticket stream, which means generic AI responses, manual tag taxonomies, and post-event reporting assembled by hand.

The platform’s answer is per-event AI configuration. Each conversation is tied to the specific event it belongs to, and the Agent pulls answers only from that event’s knowledge base. When a guest asks about parking at a music festival, AI is not pulling from a combined pool that also contains arena policies or conference schedules.
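The per-event scoping described above is, at its core, a retrieval filter keyed on the event a conversation belongs to. A minimal illustrative sketch of that pattern (all names and structures here are hypothetical, not ASQR's actual API):

```typescript
interface KBArticle {
  eventId: string;   // every article belongs to exactly one event
  topic: string;
  answer: string;
}

// Toy in-memory knowledge base spanning several events.
const kb: KBArticle[] = [
  { eventId: "festival-2026", topic: "parking", answer: "Lot C opens at 10am." },
  { eventId: "arena-open", topic: "parking", answer: "Use the North Garage." },
];

// Answer retrieval scoped to one event: articles from other events
// can never leak into this conversation's context.
function answersFor(eventId: string, query: string): string[] {
  return kb
    .filter((a) => a.eventId === eventId && a.topic.includes(query))
    .map((a) => a.answer);
}
```

The key property is that a festival guest asking about parking only ever sees the festival's answer, even though the arena stores one under the same topic.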

Intent and sentiment analysis runs automatically across all conversations in real time, scoped to the individual event rather than a blended aggregate. That means operational problems surface before a post-event debrief, not after.

Read more →


Intercom, a customer service platform, launched Fin Apex 1.0, a purpose-built large language model trained in-house by its AI team, now powering the Fin Agent across all existing customers. Apex replaces the frontier model previously at the core of Fin’s answers and is already handling over one million conversations per week. For existing Fin customers, the upgrade is automatic and included at no additional cost.

Operational Impact

Support teams running Fin should see a higher share of conversations resolved without escalation to a Support Specialist. Fewer escalations mean lower queue load for human teams and less time spent on contacts that could have been handled without intervention. Apex also reduces hallucinations, which has a direct quality impact: fewer incorrect answers reaching customers, fewer follow-up contacts to correct bad information, and less risk of reputation damage from confidently wrong responses. Faster response times reduce the likelihood that customers abandon a conversation mid-session before getting an answer.

Read more →


Decagon, an AI Agent platform for enterprise customer experience, announced Decagon Labs, the internal research team responsible for its in-house model stack. Rather than relying on a single foundation model, Decagon runs a system of specialized models, each responsible for a specific function in the conversation: detecting end-of-speech, executing workflows, identifying hallucinations, and others. The company states that over 80% of its production model traffic now runs on models trained in-house.

Operational Impact

For support teams evaluating or already running Decagon, the announcement signals that the platform’s performance is increasingly decoupled from the general-purpose model releases that every other platform has access to. Specialized models trained on enterprise CX interactions are designed to behave more predictably on the types of conversations support teams actually handle: ambiguous questions, multi-step account issues, edge cases that fall outside clean knowledge base coverage. The practical effect, if the architecture delivers as described, is an Agent that fails less often at the moments that matter in real customer interactions.

Implementation Considerations

Decagon Labs is an organizational and research announcement, not a new feature customers can configure or enable. It reflects infrastructure investment, not a near-term capability change. Teams currently evaluating Decagon should not treat this as evidence of a deployment-ready improvement. Decagon’s model stack runs entirely under the hood; customers do not select or configure individual models. The core evaluation question remains unchanged: how does the Agent perform on your specific ticket types, with your knowledge base, at your support volume?

Read more →


Pylon, a B2B customer support platform, launched native phone support, adding inbound and outbound calling directly alongside its existing email, SMS, chat, and Slack channels. Inbound calls automatically create tickets and surface full account context in the issue sidebar the moment the phone rings: health scores, previous conversations, and customer details. Call transcripts and recordings feed into Pylon’s AI fields and notebooks. Pricing is $35 per seat per month with phone and SMS included and no per-minute fees, overage caps, or extra charges for AI features.

Operational Impact

For B2B support teams that still handle calls but route them through a separate phone system, this closes a real context gap. A Support Specialist picking up an inbound call from an enterprise account currently has to pull that account up manually while the customer is already talking. Pylon’s approach surfaces that context automatically before the conversation starts. Outbound calling from inside an issue removes the step of copying a number to a dialer. The flat per-seat pricing is the other operational story: teams running separate voice tooling on per-minute billing can model a straightforward cost comparison.

Implementation Considerations

Pylon is a B2B-focused platform, so teams evaluating this for high-volume consumer support should assess whether the platform fits their scale. The phone feature is gated behind a demo, which means there is no self-serve trial. Teams with existing phone infrastructure and number portability requirements should confirm migration path before committing. The AI transcript features are only as useful as the downstream fields and notebooks they feed into — teams with underdeveloped Pylon configurations will see less value from that layer on day one.

Read more →


Notion, a connected workspace platform, released version 3.4 with Dashboard view and AI Skills as the headline additions for operational teams. Dashboard view assembles charts, KPIs, and multiple database views into a single configurable layout without requiring separate pages of linked views. AI Skills package repeatable AI tasks into reusable commands that any team member can trigger from the text menu or call directly inside an Agent chat.

Operational Impact

Support and CX operations teams that use Notion to manage knowledge, track projects, or run reporting now have a native way to assemble a live operational view without building and maintaining workarounds. A team tracking ticket volume, team workload, and open issues across multiple Notion databases can consolidate those into one dashboard that stays current. AI Skills reduce the manual overhead of recurring AI-assisted tasks: a weekly queue summary, a ticket categorization pass, or a knowledge gap review can be written once and reused by anyone on the team with Notion AI access, without re-prompting from scratch each time.

Implementation Considerations

Dashboard view and AI Skills are both limited to Business and Enterprise plans. Teams on Plus are excluded. Dashboard views cap at 12 widgets, and Notion’s documentation warns that performance degrades with large, unfiltered datasets. Support teams managing high-volume queues should filter dashboard widgets deliberately rather than pulling full unfiltered databases. AI Skills are only as consistent as the prompts behind them. A vague or poorly scoped skill page will produce inconsistent output across team members.

Read more →


Sierra, a customer experience AI Agent platform, announced Agents as a Service, introducing Ghostwriter, an Agent that builds and improves other Agents autonomously. Rather than requiring teams to manually configure conversation journeys, write integrations, or triage issues in a product interface, Ghostwriter accepts plain-language instructions, uploaded SOPs, support call transcripts, process documentation, and audio recordings, then produces a production-ready Agent across voice, chat, email, and over 30 languages. A companion feature called Explorer continuously analyzes real customer conversations to surface trends and identify areas for Agent improvement.

Operational Impact

For CX leaders, the most direct implication is a reduction in the time and technical overhead required to build and iterate on AI Agents. Teams that have deferred Agent improvements because configuring journeys requires a developer can now describe changes in plain language and let Ghostwriter handle the translation to production logic. Explorer addresses a persistent gap in Agent management: understanding why conversations fail. By analyzing interactions directly and surfacing improvement opportunities automatically, it moves teams from reactive fixes to a continuous optimization loop without requiring manual conversation review at scale.

Implementation Considerations

Ghostwriter is a significant rearchitecting of Sierra’s platform. Teams already running Sierra Agents should expect a learning curve in how they interact with and manage the platform. The shift from clicking through menus to prompting an Agent is genuinely different, and teams will need to develop skill in writing clear, bounded instructions to get reliable output. The autonomous improvement cycle is only as good as the signal from real customer conversations. Teams with low volume or highly variable use cases may see slower iteration cycles. Sierra has not published detailed plan availability or pricing for Ghostwriter at this time.

Read more →


Zendesk, a customer service platform, launched the AI Agents – Essential tab in its AI agents dashboard, rolling out from March 26, 2026. The tab consolidates conversation volume, automated resolutions, escalation rates, drop-off rates, customer satisfaction with AI responses, and knowledge content reference data into a single view. It is available at no additional cost to any Zendesk customer using AI agents, with no setup required.

Operational Impact

Support leaders on Zendesk’s Essential AI tier previously had no centralized view of how their AI Agents were performing. This closes the most basic visibility gap. The content reference data is the most operationally significant signal: it shows which help articles the Agent is pulling when generating answers. That points directly to which content is doing work and which topics are generating escalations because the knowledge base does not cover them adequately. Teams that have been managing knowledge base updates reactively now have a more targeted starting point for identifying what to write or fix first.

Implementation Considerations

This tab is available for AI agents – Essential, included in Suite plans. Teams on the Advanced add-on already have the Conversation Journey Explorer, which provides a more detailed visualization of how conversations flow through dialogue paths. The Essential tab is a summary view. If the tab appears empty after enabling, Zendesk notes that AI Agents need to be actively configured and generating data before metrics populate. The tab updates once daily, so it does not reflect real-time shifts during a high-volume period or incident.

Read more →


Linear, a project and issue tracking platform, launched Linear Agent in public beta. The Agent is built directly into Linear and has full access to the workspace, including issues, roadmaps, backlogs, and customer requests. It can synthesize context, surface patterns across issues, draft specs, create issues from Slack discussions, and answer questions about what is at risk or behind schedule. A Skills feature lets teams save successful workflows as reusable commands. Automations, which trigger Agent workflows when issues enter triage, are available on Business and Enterprise plans.

Operational Impact

For support teams that route customer feedback and bug reports into Linear, the most immediate value is in the triage automation. When new issues arrive from support tooling, Linear Agent can automatically summarize customer impact, group related requests, and surface recurring themes without a team member manually reviewing each item. For CX leaders attending cross-functional planning, the Agent’s ability to pull patterns from the backlog and draft a starting point for prioritization reduces the preparation work that usually falls on whoever is managing the customer feedback loop.

Implementation Considerations

Linear Agent is in public beta, which means behavior and pricing are subject to change. Linear has indicated that core chat functionality will remain in the base seat price at general availability, but high-volume features like Automations may move to usage-based pricing above a threshold. Automations, the feature most directly relevant to support-to-engineering workflows, are currently limited to Business and Enterprise plans. The quality of Agent output depends heavily on how well the Linear workspace is structured. Teams with inconsistent issue labeling, sparse descriptions, or unmaintained backlogs will get less useful synthesis.

Read more →


Plain, a composable support platform, shipped Workflow Triggers in Customer Cards. The feature adds workflow buttons directly to customer cards, controlled via API, that let Support Specialists trigger a Plain workflow from inside a thread with a single click. The button’s presence and the workflow it runs are defined by the team’s own system, and the button shows a loading state while the workflow executes before surfacing the result.

Operational Impact

Customer cards in Plain have historically surfaced context but required Support Specialists to switch tools to act on it. Workflow triggers remove that step. A Support Specialist reviewing a customer account mid-conversation can now initiate an incident workflow, escalate to a Slack channel, or call an internal API without leaving the thread. For B2B support teams handling enterprise accounts, where the right response often requires coordinating across multiple systems, this reduces context switching and the time between identifying what needs to happen and making it happen.

Implementation Considerations

This is a developer-dependent feature. Setup requires adding a componentWorkflowButton to the customer card API response and pointing it at an existing workflow ID in Plain’s Workflows settings. Teams without a technical resource maintaining their customer card API will need engineering support to implement it. Teams with existing customer cards pulling live account data are the most immediate candidates. Teams using minimal or static customer cards will need foundational work before this feature adds value.
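To make the developer dependency concrete, here is a hedged sketch of what a customer card response carrying a workflow button might look like. The `componentWorkflowButton` name comes from Plain's announcement; the surrounding field names (`key`, `components`, `label`, `workflowId`) are illustrative assumptions, not Plain's documented schema:

```typescript
// Hypothetical card component shapes -- illustrative only, not Plain's schema.
type CardComponent =
  | { componentText: { text: string } }
  | { componentWorkflowButton: { label: string; workflowId: string } };

interface CustomerCard {
  key: string;
  components: CardComponent[];
}

// Build the payload a team's customer card endpoint might return,
// pairing live account context with a one-click workflow trigger.
function buildAccountCard(planName: string, incidentWorkflowId: string): CustomerCard {
  return {
    key: "account-overview",
    components: [
      { componentText: { text: `Plan: ${planName}` } },
      {
        componentWorkflowButton: {
          label: "Open incident",
          workflowId: incidentWorkflowId, // points at a workflow defined in Plain's Workflows settings
        },
      },
    ],
  };
}
```

The point of the sketch is the division of labor: the team's own API decides when the button appears and which workflow it runs, while Plain handles rendering, the loading state, and surfacing the result.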

Read more →


Zipchat, an AI chat and customer engagement platform for e-commerce, launched Follow-ups for Messenger, Instagram, and WhatsApp. The feature lets the Zipchat AI Agent automatically reconnect with customers after a configurable delay, based on the context of the prior conversation, without requiring a new inbound message from the customer to trigger the follow-up.

Operational Impact

For support and CX teams, outbound follow-ups on social and messaging channels have historically required either manual effort or a separate campaign tool. This closes that gap for teams already using Zipchat. A customer who asked about an order status but did not get a resolution, or whose conversation dropped off before completion, can be re-engaged automatically in the same channel and thread without a Support Specialist initiating it. For e-commerce teams, this is directly relevant to post-conversation recovery scenarios: carts that stalled, issues that were escalated and resolved, or inquiries that went unanswered.

Implementation Considerations

Follow-ups are context-driven, meaning the relevance of the outbound message depends on the quality of the prior conversation and how well Zipchat’s AI interpreted customer intent. If the original conversation was ambiguous or the AI misread the customer’s state, the follow-up may land as irrelevant or intrusive. Teams should test follow-up timing and message framing across different conversation types before rolling out broadly. Messenger, Instagram, and WhatsApp each have platform-level policies governing outbound messaging. Teams should verify that their Zipchat configuration aligns with Meta’s messaging window rules for each channel.

Read more →


Now this is:

Strategic Support for CX Leaders.

You’ve got ambitious Support targets and new metrics, but you’re not sure what to prioritize first.

The list is long, the queue is getting longer, and you don’t have time to step back and think about CX strategically.

What if you could pressure-test your thinking with someone who’s spent 20 years building Customer Support operations?

No pitch, just a conversation.

Clarity starts with a conversation

In 30 minutes, we can discuss your biggest support opportunities and outline what to do next.