Our main story today is about Intercom shipping Fin for Sales, turning the playbook I built as a custom workflow last year into a native capability.
Customer Experience News is a weekly newsletter about the most important news and discussions for Customer Experience and Customer Support Leaders.
This is all the weekly news you need in around 5 minutes.
Intercom launches Fin for Sales, extending the Fin AI Agent into inbound sales
Intercom, the company behind Fin, released Fin for Sales, taking the Fin AI Agent from customer service into inbound sales. Fin for Sales engages prospects the moment they land on the site, answers pricing and feature questions, handles objections, qualifies leads the way an SDR would, and either books meetings, starts trials, or hands qualified buyers off to the sales team with full conversation context.
Intercom CEO Eoghan McCabe called the launch the most significant new customer service agent technology the company has shipped in some time, and pointed to case studies from Attio, Brightwheel, Fellow, and Breathe where Fin is already driving pipeline and closing deals, including one Attio customer that returned after a year, had a first real conversation with Fin, and converted to a paying customer at six times the average contract value. The Fin for Sales role is available now through the standard Intercom and Fin setup, with no separate product to install for existing Fin customers.
I built a Fin Sales Qualifier workflow last year to do exactly this.
The goal was to automate small subscription sign-ups and route enterprise leads directly to the sales team. That meant building out the qualification logic inside Fin’s workflows, wiring up the handoff to sales for anything that met the enterprise threshold, and letting Fin close the simple conversions on its own.
What Intercom has shipped with Fin for Sales takes that pattern and makes it native, which means the qualification logic, objection handling, personalized plan recommendations, data enrichment from CRM and real-time research, and context-rich handoff to live reps are all first-class product capabilities now, not workflow hacks.
For CX leaders who have been running custom sales-qualification logic on top of Fin or another AI Agent, the operational improvement is significant. The heavy lifting moves from your playbook into the platform, which frees the team to focus on the higher-value judgment calls (what qualifies as enterprise for your business, what the handoff threshold should be, which plan recommendations to surface) rather than the mechanics of the conversational flow.
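To make those judgment calls concrete, here is a minimal sketch of the kind of enterprise gate this involves. The field names, thresholds, and routing labels are illustrative assumptions for the sketch, not Fin’s configuration schema:

```python
from dataclasses import dataclass

# Illustrative lead-qualification gate. Field names, thresholds, and
# routing outcomes are assumptions for this sketch, not Fin's API.

ENTERPRISE_SEAT_THRESHOLD = 50      # assumed enterprise cutoff
ENTERPRISE_ARR_THRESHOLD = 25_000   # assumed contract-value cutoff, USD

@dataclass
class Lead:
    requested_seats: int
    estimated_arr: float
    wants_sso: bool   # security and compliance asks often signal enterprise

def route(lead: Lead) -> str:
    """Return the handoff decision for an inbound lead."""
    if (lead.requested_seats >= ENTERPRISE_SEAT_THRESHOLD
            or lead.estimated_arr >= ENTERPRISE_ARR_THRESHOLD
            or lead.wants_sso):
        return "handoff_to_sales"   # book a meeting, pass full conversation context
    return "self_serve_checkout"    # let the Agent close the small subscription

print(route(Lead(requested_seats=120, estimated_arr=60_000, wants_sso=True)))
# -> handoff_to_sales
```

The value of making this explicit is that the thresholds become something the team reviews against won and lost deals, rather than logic buried in a conversational flow.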
The implementation consideration that matters most is the unified customer profile. Fin for Sales is not a separate product; it is a role the same Fin Agent can move into and out of. The Agent knows whether the person on the other end is a prospect or an existing customer and switches between sales and support context accordingly. That is powerful when the CRM and the Support platform are properly connected, because the Agent can deliver truly seamless experiences across the full customer journey.
It is also risky when those systems are not in sync, because the Agent will make confident decisions based on whatever data it has, even if that data is incomplete or out of date. Before rolling out Fin for Sales broadly, CX leaders should audit the completeness of the contact and account data Fin will rely on, validate the qualification criteria against recent deals won and lost, and define clear escalation paths for the moments when Fin’s confidence should trigger a human handoff rather than an automated close. The pricing model for Fin for Sales is outcome-based, consistent with Fin’s customer service pricing, so teams should also validate the cost structure against expected volume before committing to broad deployment.
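Intercom has not published Fin for Sales rates in this announcement, so any cost model is a placeholder, but the shape of the validation is simple: expected billable outcomes times price per outcome. A back-of-envelope sketch with assumed numbers:

```python
# Back-of-envelope cost model for outcome-based pricing. Every number
# below is an illustrative placeholder, not Intercom's published rate.
monthly_prospect_conversations = 4_000
billable_outcome_rate = 0.18    # share of conversations that qualify or convert
price_per_outcome = 0.99        # placeholder, USD

expected_monthly_cost = (monthly_prospect_conversations
                         * billable_outcome_rate
                         * price_per_outcome)
print(f"${expected_monthly_cost:,.2f}/month")  # -> $712.80/month
```

Running this against three traffic scenarios (current volume, seasonal peak, post-launch growth) is usually enough to know whether the pricing holds up.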
Pylon launches Memories for account context at the top of every ticket
Pylon, a B2B customer support platform built for companies that serve complex accounts, launched Memories, an account intelligence feature that surfaces important context at the top of every ticket. Memories capture things like a recent severe incident the account experienced, an active upsell motion, legacy infrastructure or unusual setup details, or a renewal-risk flag that should trigger CSM involvement on specific question types. Pylon’s AI automatically extracts memories from the team’s interactions with an account, pulling context from issues, call recordings, and other sources, and keeps that context up to date without manual entry. Teams can also pin specific memories to ensure they always appear at the top of the list and on every new issue the account opens.
For Support leaders running on Pylon, this closes a gap that every B2B Support organization feels. Context about an account lives in the heads of specific Support Specialists, the CSM, the AE, or the engineering team that handled a prior escalation. When a new issue comes in and lands with a Specialist who was not part of those earlier conversations, the context is lost and the response quality suffers. Memories turn that tribal knowledge into something the platform surfaces automatically. The AI extraction matters because the hardest part of context management is keeping it current, not creating it the first time. A feature that required Specialists to manually record and update account context would decay fast, which is exactly why prior attempts at account notes and tags have not worked well in practice. The citation behavior is also worth noting: when Specialists ask questions through Pylon’s Ask AI or other AI features, memories get cited as sources, which means the context does not just sit in a panel but actually influences how the Agent responds to related queries.
The implementation consideration is what the AI chooses to remember versus what it should forget. Pylon’s AI extracts memories automatically from team interactions, which is powerful but raises two practical questions. First, does the memory reflect the current state of the account, or a historical state that should no longer influence how the team engages? A memory extracted from an incident six months ago may be stale, even if it remains technically accurate. Second, what happens when a customer’s situation changes in a way the AI has not yet detected? Support leaders should establish a review cadence on pinned memories in particular, confirm who on the team has permission to edit or delete memories, and align with CSM and account management on how the memory layer should reflect current account health rather than just historical events. Teams that already have account-context workflows in other systems (CRM account notes, CSM briefing docs) should plan how Memories interacts with or replaces that data, to avoid having two sources of truth that drift apart.
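Pylon has not published a Memories API, so the following is only a sketch of what that review cadence could look like, with a hypothetical record shape and staleness window:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical review-cadence sketch: surface pinned memories that have
# not been re-confirmed within a staleness window. Pylon has not
# published a Memories API; the record shape here is an assumption.
STALENESS_WINDOW = timedelta(days=90)

memories = [
    {"text": "Sev-1 incident during Q3 migration", "pinned": True,
     "last_confirmed": datetime(2025, 6, 1, tzinfo=timezone.utc)},
    {"text": "Active upsell motion owned by CSM", "pinned": True,
     "last_confirmed": datetime(2026, 1, 10, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
for m in memories:
    if m["pinned"] and now - m["last_confirmed"] > STALENESS_WINDOW:
        print(f"Review: {m['text']} "
              f"(last confirmed {m['last_confirmed']:%Y-%m-%d})")
```

Whether this runs as a script against an export or as a standing agenda item in a CSM sync matters less than the fact that someone owns it.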
Zendesk redesigns Academy with 20+ new AI-focused learning experiences
Zendesk, a customer service platform, redesigned the Zendesk Academy with a refreshed experience, more engaging course formats, over 20 new learning experiences, and more than 10 hours of AI-focused content. The new catalog includes learning paths built specifically for an AI-first support world, covering how to prepare help-center content for AI retrieval, how to configure and deploy AI Agents at the Essential and Advanced tiers, how to use Copilot alongside Support Specialists for real-time guidance, and how to build skills in empathy, de-escalation, and complex problem-solving that AI does not handle well. The Academy also includes a personalized 30-day action plan exercise for AI-first service rollouts, a Relate 2026 badge, and a Zendesk AI Practitioner certification path for administrators.
For Support leaders, the value here is not the platform-specific training, it is the framing. Zendesk has structured the AI content around the actual questions CX leaders need answers to: how do I prepare my knowledge base for AI retrieval, how do I configure AI Agents that resolve without breaking trust, how do I coach Support Specialists to work alongside Copilot rather than resist it, and how do I build the human skills that matter most in an AI-first operation. Teams that are not Zendesk customers can still use the public Academy catalog as a reference for what a well-structured AI enablement program looks like, including the sequencing of knowledge readiness before Agent deployment and the separation of administrator skills from Support Specialist skills. The empathy and de-escalation track is particularly worth attention, because it reflects a shift in what human Support Specialists are expected to be exceptional at when AI handles the straightforward cases.
The implementation consideration for Zendesk customers is fitting the Academy into actual Support Specialist and administrator schedules. Ten hours of AI content is meaningful, but Support Specialists rarely have ten consecutive hours of training time available without pulling them off the queue. Leaders should map the learning paths against a multi-week rollout plan, not expect completion in a single training week. The certification paths (AI Practitioner for administrators, Agent Practitioner for Specialists) are worth investing in for leads and senior Specialists who will carry AI rollouts, but not necessary for every Specialist on the team. For non-Zendesk teams, the Academy is useful as a reference model for structuring internal training, but the specific course content will not map directly to other platforms, so do not expect the courses themselves to transfer. The broader pattern (AI readiness of content, then Agent deployment, then human skill refinement) is the takeaway regardless of platform.
Ada ships interstitial audio for Voice and smarter auto-reply detection for email
Ada, an AI-native customer experience platform for enterprise support, shipped two updates that address specific friction points in Voice and email AI Agents. The first adds configurable interstitial audio to Voice, replacing the silence that happens when an Agent is retrieving information from external systems. Callers previously heard nothing after an initial acknowledgement message, which caused some to speak over the Agent or hang up assuming the call had dropped. Admins can now select from four sound options (Calm Contour, Bright Motif, Pure Cascade, or Slow Gallop), preview each one, and set a delay threshold between 2.0 and 4.0 seconds before the audio begins playing. The second update adds AI-powered auto-reply detection to email Agents, catching out-of-office messages and delivery confirmations that lack the standard email headers (Auto-Submitted, X-Autoreply, Precedence) that detection systems have historically relied on. When an auto-reply is detected, the Agent stops responding, the conversation closes automatically, CSAT surveys are suppressed, and webhook events continue to fire normally.
For CX leaders running Voice Agents, the interstitial audio change closes a gap that shows up in call abandonment data but rarely gets diagnosed as a product problem. Callers interpret silence as a dropped line, which generates repeat calls, longer average handle times on the retry, and lower CSAT on what should have been a resolved interaction. The delay threshold control matters because it lets teams tune the audio to their specific Agent behavior. Agents that routinely hit 3-plus-second processing windows during identity verification or order lookups will behave differently than Agents handling mostly knowledge-base queries. On the email side, auto-reply loops are one of the more embarrassing failure modes for AI Agents. A customer’s out-of-office response triggers a reply from the Agent, which then triggers another auto-reply, and the loop continues until someone notices. Header-based detection catches the well-formed cases but misses a surprising share of automated email, particularly from older systems and consumer email clients. Closing those conversations automatically and suppressing the CSAT survey prevents the Agent’s resolution metrics from being polluted by non-customer interactions.
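For reference, the header-based baseline that Ada’s AI detection extends looks roughly like this in Python. It catches RFC 3834-compliant auto-replies and the common non-standard headers, and misses everything that omits them, which is exactly the gap the new detection targets:

```python
from email.message import EmailMessage

def looks_like_auto_reply(msg: EmailMessage) -> bool:
    """Header-based auto-reply check (the baseline AI detection extends).

    Catches well-formed auto-replies per RFC 3834 plus common
    non-standard headers, but misses senders that omit them entirely.
    """
    auto_submitted = (msg.get("Auto-Submitted") or "").lower()
    if auto_submitted and auto_submitted != "no":
        return True  # RFC 3834: auto-generated / auto-replied
    if msg.get("X-Autoreply"):
        return True  # non-standard but widely used
    precedence = (msg.get("Precedence") or "").lower()
    return precedence in {"auto_reply", "bulk", "junk"}

msg = EmailMessage()
msg["Auto-Submitted"] = "auto-replied"
msg["Subject"] = "Out of office"
print(looks_like_auto_reply(msg))  # -> True
```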
Both features are enabled through configuration rather than a migration, which reduces deployment risk. For interstitial audio, the practical consideration is sound selection. Support Specialists reviewing calls will hear the chosen audio repeatedly, and callers on extended processing cycles will hear it multiple times per call, so the choice should be tested with a sample of calls before rolling out broadly. The delay threshold should be calibrated against actual Agent processing time data rather than set at the default. For auto-reply detection, the main consideration is visibility into what the Agent is classifying as auto-reply versus legitimate customer message. Ada surfaces this in the Conversation Resolution Analysis event in the Coaching UI, which is where CX managers should validate that the model is not over-indexing on short or formatted messages. Teams that rely on webhook events for downstream automation should confirm that the change in conversation close timing does not break any SLA calculations or routing logic that assumed a human-driven close.
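Calibrating that threshold is straightforward once processing times are exported. A sketch, assuming a list of per-retrieval latencies in seconds (the sample data here is made up):

```python
import statistics

# Calibrate the interstitial-audio delay threshold from observed Agent
# processing latencies (seconds). Sample data is illustrative; Ada
# accepts thresholds between 2.0 and 4.0 seconds.
latencies = [0.8, 1.1, 1.4, 1.9, 2.3, 2.8, 3.4, 4.1, 5.0, 6.2]

# Start the audio just below the latency most callers will actually
# experience, e.g. the 75th percentile, clamped to the 2.0-4.0s range.
p75 = statistics.quantiles(latencies, n=4)[2]
threshold = min(max(p75, 2.0), 4.0)
print(f"p75 latency: {p75:.1f}s -> threshold: {threshold:.1f}s")
```

An Agent whose p75 sits well under two seconds probably does not need the audio at all; one that sits above four should be investigated as a latency problem, not papered over with sound.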
Plain adds real-time presence indicators to thread view
Plain, a developer-first B2B support platform, added Presence Indicators to the thread view. When multiple teammates have the same thread open, their avatars appear in the thread header, with a count overflow when more than a few people are active. When a teammate starts typing a reply or an internal note, their avatar appears in the dock alongside an ellipsis indicator. Presence updates automatically as teammates move between tabs: navigating away removes the avatar, returning brings it back. The feature requires no configuration.
Collision on a single thread is a real problem for B2B support teams, particularly on Slack-connected accounts where multiple teammates may monitor the same channel and open the same conversation. Two Specialists drafting replies to the same customer issue produces duplicate effort at best and conflicting responses at worst, and the collision is often invisible until both messages go out. Presence Indicators surface that collision before the reply sends, which lets teams coordinate in real time rather than reconcile after the fact. The typing indicator in the dock is the more operationally useful of the two signals, because it tells a Specialist that someone else is actively working on the same thread, not just looking at it. For Support leaders, this reduces the need for explicit claiming conventions where one Specialist announces ownership of a thread before anyone else picks it up.
The feature works without setup, but getting value from it depends on team habits. Presence is most useful when multiple Specialists actually open the same thread, which is more common in B2B operations with shared Slack channels and account-based routing than in traditional ticket queues with assignment logic that enforces single ownership. Teams with strict round-robin assignment may see presence indicators less often, which is fine. The privacy consideration is worth flagging: presence makes every thread-open event visible to teammates, which some Specialists may experience as monitoring. A clear internal communication about what the indicators show and why the team is using them will reduce friction. There is no reporting layer yet, so leaders cannot use presence data to analyze collaboration patterns or identify threads where collision happens most often.
Birdie ships MCP connector for Claude, ChatGPT, Gemini
Birdie, a customer insights platform that unifies feedback from reviews, NPS, CSAT, support tickets, and research interviews, released an MCP Connector that exposes structured customer intelligence to Claude, ChatGPT, Gemini, Glean, and Cursor. The connector uses OAuth 2.1 with PKCE for authentication and gives AI tools read-only, scoped access to themes, sentiment signals, verbatim quotes, feedback volume, segments, areas, opportunities, and impact metrics across connected data sources. Setup is done through each AI tool’s native connector settings, with a single server URL and an OAuth sign-in that resolves the organization from the user’s Birdie account. The server includes a get_schema tool that tells the AI which fields, metrics, and filter values are available before running queries, which Birdie uses to reduce hallucinated outputs.
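For teams that want to probe the connector outside the supported AI tools, a minimal client using the official MCP Python SDK might look like the sketch below. The server URL is a placeholder, and the OAuth 2.1 + PKCE sign-in is assumed to be handled out of band rather than shown here:

```python
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder URL; Birdie provides the real server URL during setup.
SERVER_URL = "https://example-birdie-mcp.invalid/mcp"

async def main() -> None:
    # NOTE: Birdie's OAuth 2.1 + PKCE sign-in is assumed to have been
    # completed out of band; this sketch only shows the
    # discovery-then-query pattern the connector is built around.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Ask the server which fields, metrics, and filter values
            # exist before querying -- the step Birdie uses to reduce
            # hallucinated outputs.
            schema = await session.call_tool("get_schema", {})
            print(schema.content)

asyncio.run(main())
```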
The practical value for CX leaders is that natural-language queries against customer feedback stop requiring an analyst in the loop. Questions like “what are the top five complaints this month, with verbatims” or “show me NPS trends over the last six months, broken down by segment” get answered against structured, pre-processed feedback rather than raw text extracted into a prompt window. For weekly leadership updates, board briefings, and ad-hoc investigations into emerging themes, this compresses the time from question to answer from hours to minutes. The citation model matters more than the speed: Birdie returns evidence with each answer, which addresses the standard concern about LLMs fabricating patterns that are not in the underlying data. Executives who have been skeptical of AI-generated customer insights have a concrete reason to reconsider when every claim ties to specific feedback.
The implementation dependency is having a functional Birdie workspace with feedback sources already connected. The MCP connector is a surface over existing Birdie data, so teams that have not yet unified their feedback channels into Birdie will not get immediate value from the connector. The access model is scoped per user: every teammate who connects their AI tool gets access to the same data their Birdie account permits, and there is no administrator audit trail of what questions are being asked of the AI or what customer data is flowing into third-party AI tools. Teams with compliance requirements around customer feedback handling should work with Legal and Security to confirm that read-only access to themes and verbatims through an MCP connector fits within existing data-handling policies. The OAuth 2.1 with PKCE authentication model is consistent with current best practices, but the broader question of what gets logged on the Anthropic, OpenAI, or Google side is worth reviewing before broad rollout.
HubSpot launches AEO to track AI search visibility
HubSpot, a CRM and marketing automation platform, launched HubSpot AEO, an Answer Engine Optimization tool that tracks and analyzes how brands appear in ChatGPT, Gemini, and Perplexity responses. The tool provides a brand visibility score with sentiment analysis, competitor share-of-voice tracking, citation analysis showing which domains and content types drive AI mentions, and prioritized recommendations for what to create or update. HubSpot AEO is available as a standalone product at $50 per month or embedded in Marketing Hub Pro and Enterprise plans, with additional prompt capacity sold in packs of 1,000 answers for $20 per month. The launch sits on top of HubSpot’s October 2025 acquisition of XFunnel, an answer engine optimization platform.
For CX leaders, AEO is adjacent to support but increasingly relevant. Help Center articles, customer reviews, community discussions, and product documentation are among the sources AI answer engines cite when buyers research vendors. A Support organization that publishes clear, well-structured troubleshooting content is already producing AEO-friendly material, which means visibility into how that content shows up in AI answers has strategic value beyond the marketing team. Citation analysis specifically surfaces which content types and source channels drive mentions in a given category, which helps CX leaders make the case internally that Support documentation investments produce measurable brand outcomes. The sentiment dimension matters too: if AI answers are describing a brand negatively when Support is mentioned, that is a signal worth surfacing to Support leadership before it shows up in pipeline data.
Implementation for Support leaders is lightweight but requires coordination with Marketing. The standalone product at $50 per month is priced low enough to experiment with, but the embedded version in Marketing Hub Pro and Enterprise uses CRM data to generate prompts specific to the business, which produces more useful results than generic prompt libraries. The prompt tracking feature depends on defining which questions matter, so a Support leader participating in the AEO rollout should contribute the questions customers ask during onboarding, troubleshooting, and escalation, not just top-of-funnel research questions. One realistic limitation: AEO tracking cannot see the actual prompts buyers use because those conversations are private to the AI platforms. The tool generates representative prompts based on business context, which means the measurement is directional rather than precise. Teams should treat week-over-week trend changes as more reliable than absolute visibility numbers.
Now this is:
Strategic Support for CX Leaders.
You’ve got ambitious Support targets and new metrics but you’re not sure what to prioritize first.
The list is long, the queue is getting longer, and you don’t have time to step back and think about CX strategically.
What if you could pressure-test your thinking with someone who’s spent 20 years building Customer Support operations?
No pitch, just a conversation.

