Customer Experience News is a weekly newsletter about the most important news and discussions for Customer Experience and Customer Support Leaders.
Next week is Spring Break, so I’m covering a lot in this post and will return the following week.
Today’s 14 headlines are from:
- Zendesk – Adds Auto Assist Pre-Approvals and AI Writing
- Gorgias – Ships Content Versioning and Instagram Rules
- Linear – Links Issues Directly to AI Coding Tools + Notion Agents
- ServiceNow – Launches Autonomous Workforce with Moveworks
- Front – Adds AI Knowledge Sources and Reply Monitoring
- Pylon – Adds Cross-Platform Issue Comments
- Plain – Launches MCP and Introduces AI Tone of Voice Rules
- Dixa – Launches Conversation Redaction and Auto-Translations
- Notion – Adds Fullscreen Page Presentations
- Birdie.so – Integrates Screen Recording with Pylon
- Zipchat – Generates Monthly Conversation Reports
- Miro – Adds AI-Powered Slide Creation
- Salesforce – Ships Self-Learning Knowledge and Role-Based Summaries
- HubSpot – Ships MCP Server for App and CMS Development
Several vendors this week shipped features that let AI act without waiting for agent approval. Zendesk now lets admins pre-approve low-risk actions that auto assist executes on its own. ServiceNow launched an Autonomous Workforce where AI specialists resolve L1 IT cases end-to-end, reporting 99% faster resolution in their own internal deployment. Salesforce made Self-Learning Knowledge generally available, which analyzes historical cases to surface where knowledge base articles are missing or outdated.
When AI starts acting on behalf of your team, you need to see what it is doing. Front released an AI Replies Hub that shows every message its AI generated across Copilot and Autopilot. Salesforce added a dashboard that ties AI feature usage to Average Handle Time. Plain introduced tone of voice rules that govern how AI sounds across all customer-facing replies, alongside an MCP server with 30 tools that lets external AI assistants read, respond to, and manage Plain threads directly.
Knowledge and escalation workflows got faster across several platforms. Gorgias added content diff for AI agent guidances so teams can compare changes before publishing. Linear lets engineers launch coding tools directly from an issue with full context loaded. Pylon added cross-platform commenting into Linear and Jira. Dixa released conversation redaction that removes PII without anonymizing full messages, plus one-click knowledge article translation. HubSpot also made its Developer MCP Server generally available, connecting AI coding tools to its developer platform for building apps and CMS assets.
Fourteen updates this week.
The full breakdown with operational impact and implementation considerations for each is below.

Zendesk Adds Auto Assist Pre-Approvals and AI Writing
Zendesk, a customer service platform, shipped three notable updates in its March 2026 release. Admins can now pre-approve certain low-risk custom actions for auto assist to execute without agent confirmation. The same release added generative AI writing tools with five free uses per agent per month on Suite and Support Professional plans and above, and a theme updater tool that adds generative search to help center themes without custom code.
Pre-Approved Actions
Admins define which custom actions and action flows auto assist can execute automatically. When a qualifying scenario appears, auto assist handles it without waiting for the agent to click approve. This targets repetitive, low-risk actions where agent confirmation adds time without adding value.
Copilot AI Writing Tools
Generative writing tools are now available at no extra cost on Suite and Support Professional plans and above. Each agent gets five uses per month to enhance ticket comment content, with usage pooled across agents. A dashboard tracks consumption. Teams needing unlimited access can upgrade to the full Copilot add-on.
Generative Search in Help Center
The quick answers theme updater tool lets admins implement generative search in their help center theme without writing or modifying code. Admins can preview what users will see before publishing, and can deselect the helper to leave the current experience unchanged.
Operational Impact:
Pre-approved actions reduce the number of times agents manually confirm routine steps during assisted conversations. For teams handling high volumes of password resets, account lookups, or status checks, this removes a repeated click from each interaction. The pooled AI writing usage means a team of 20 agents shares 100 monthly uses, which favors teams where a few agents handle most of the complex writing. Generative search addresses the common complaint that help center keyword search returns irrelevant results, but only for teams using Zendesk’s help center product.
Implementation Considerations:
Pre-approved actions require admins to evaluate which actions are genuinely low-risk in their specific environment. An action that is low-risk for one business (updating a shipping address) may not be for another (modifying an order). Start with a small set of actions and monitor outcomes before expanding. The five-use-per-agent writing cap is pooled, which means heavy users can exhaust the team’s allocation. Track the dashboard early to understand actual consumption patterns before deciding whether to upgrade. Generative search requires the Zendesk help center theme and may not work with heavily customized themes.
Gorgias Ships Content Versioning and Instagram Rules
Gorgias, an ecommerce customer support platform, released two features: content diff comparison for knowledge hub articles and Instagram profile-driven rule conditions. Content diff lets teams compare draft versus published versions and historical versus current versions of Guidances and Help Center Articles before publishing or restoring.
Content Diff for Guidances and Articles
Teams can now compare versions for Guidances and Help Center Articles in the Knowledge Hub. Draft-to-published comparison shows additions, deletions, and edits inline before going live. Historical-to-current comparison lets teams validate older content against what is currently published before restoring from version history. A quick compare toggle provides instant side-by-side review directly in the editor.
Instagram Profile-Driven Rule Conditions
Teams using the Instagram integration can now build Rules using attributes from a customer’s Instagram profile across DMs, comments, ad comments, and mentions. Available conditions include verified status, follower count thresholds, whether the customer follows your business, whether your business follows them, and username pattern matching. These conditions enable routing, tagging, prioritization, and assignment based on who the customer is on Instagram.
Operational Impact:
Content diff addresses a real pain point in knowledge management: publishing changes without being certain what actually changed. Teams maintaining AI agent guidances can now catch errors before they affect automated responses. For Instagram, the profile-driven rules let teams automatically route messages from verified accounts or high-follower creators to priority queues, which matters for brands where influencer relationships drive revenue.
Implementation Considerations:
Content diff only covers Guidances and Help Center Articles. If your team maintains content in other systems, this creates a partial solution that does not cover the full knowledge lifecycle. For Instagram rules, follower count and verified status are evaluated at the time the rule runs. Accounts that gain verification or followers after initial contact will not retroactively trigger rules set for earlier interactions. Teams should also consider that prioritizing by follower count could mean slower response times for loyal customers who happen to have small followings.
Linear Links Issues Directly to AI Coding Tools
Linear, a project management platform, added the ability to launch AI coding tools directly from any issue with a single keyboard shortcut. The feature prefills a prompt with the issue ID, description, comments, updates, linked references, and images. Supported tools include Claude Code, Cursor, GitHub Copilot, Codex, Conductor, OpenCode, Replit, v0, and Zed.
Deeplink to AI Coding Tools
Engineers press Cmd+Option+. (Mac) or Ctrl+Alt+. (Windows/Linux) to launch their most recently used coding tool with full issue context loaded. Prompt templates can be customized with standing instructions for how the agent should approach issues.
Linear in Notion Custom Agents
The same release introduced integration with Notion’s Custom Agents, enabling users to create and update Linear issues and projects directly from Notion’s autonomous agent system.
Operational Impact:
For support teams that escalate bugs to engineering through Linear, the deeplink feature shortens the handoff. Engineers no longer need to read a Linear issue and then separately paste the details into their coding environment. The context arrives pre-formatted. The Notion integration benefits teams that run project planning in Notion but track engineering work in Linear, reducing the manual sync between the two systems.
Implementation Considerations:
The feature requires engineers to configure their preferred coding tool in Linear preferences. Prompt template customization is per-user, so teams that want consistent agent behavior across engineers will need to coordinate templates manually. The Notion Custom Agents integration depends on Notion’s agent system, which is a new feature with its own learning curve. Teams should evaluate whether the Linear-Notion connection adds genuine workflow value or just another integration to maintain.
ServiceNow Launches Autonomous Workforce with Moveworks
ServiceNow, an enterprise workflow platform, launched Autonomous Workforce, a suite of AI specialists that execute jobs alongside human teams, and ServiceNow EmployeeWorks, a conversational interface built on Moveworks technology acquired for $2.85 billion. The first AI specialist available out-of-the-box is a Level 1 Service Desk AI Specialist that autonomously diagnoses and resolves common IT support requests.
Autonomous Workforce
Unlike individual AI agents that complete single tasks, Autonomous Workforce orchestrates teams of AI specialists with defined roles such as Level 1 Service Desk AI Specialist, Employee Service Agent, and Security Operations Analyst. These specialists follow established organizational processes and policies, escalate when human intervention is needed, and improve over time based on outcomes and employee feedback. The L1 Service Desk AI Specialist autonomously handles password resets, software access provisioning, and network troubleshooting using enterprise knowledge bases and historical incident data. ServiceNow reports its own deployment handles over 90% of internal employee IT requests. The specialist is in controlled release, with general availability targeted for Q2 2026.
ServiceNow EmployeeWorks
EmployeeWorks combines Moveworks’ conversational AI and enterprise search with ServiceNow’s workflow automation. Available in Teams, Slack, or any browser, it turns natural language requests into governed, end-to-end execution across systems. The platform understands organizational structure, approvals, and authorization, executing tasks that require multi-system coordination while maintaining audit trails. EmployeeWorks is generally available now.
Operational Impact:
The L1 Service Desk AI Specialist targets the highest-volume, lowest-complexity IT tickets: password resets, software access provisioning, and network troubleshooting. ServiceNow claims 99% faster resolution on assigned cases in their own internal deployment. EmployeeWorks puts a conversational front end on workflows that previously required navigating the ServiceNow portal, which lowers the barrier for employees who avoid submitting tickets through formal channels.
Implementation Considerations:
The 90% handling rate and 99% speed improvement come from ServiceNow’s own environment, where data quality, workflow maturity, and system integration are likely more advanced than most customer deployments. Teams evaluating this should assess their own knowledge base completeness, historical incident data quality, and existing workflow automation before expecting similar results. The L1 Specialist is in controlled release with GA targeted for Q2 2026. EmployeeWorks depends on how well Moveworks’ technology handles your specific organizational terminology and approval chains.
Front Adds AI Knowledge Sources and Reply Monitoring
Front, a customer operations platform, released three updates: the ability to connect Notion, Google Drive, and SharePoint as AI knowledge sources; an AI replies hub for monitoring all AI-generated messages; and branching rule testing before publishing. All knowledge sources can be managed in a new section within Front AI workspace settings.
AI Knowledge Sources from Notion, Google Drive, and SharePoint
Teams can now connect external knowledge repositories as sources for Front’s AI. The new knowledge sources section in workspace settings lets admins add sources, filter by type, and see the last synced date. This extends Front’s AI beyond content stored directly in Front.
AI Replies Hub
A centralized view of every message generated by Front AI, including Copilot suggested reply drafts, Copilot suggested replies that were sent, and Autopilot replies. This gives teams visibility into what AI is producing across conversations.
Branching Rule Testing
Teams can now test branching rules to preview how conditions and actions will work before turning them on or saving updates. Testing supports multiple rule paths in a single run and provides actionable feedback on each condition and action.
Operational Impact:
The knowledge sources integration solves a common problem: teams that maintain SOPs and troubleshooting guides in Notion or Google Drive but have no way to surface that content in their support tool’s AI. The AI replies hub addresses the oversight gap that emerges as teams automate more responses. Managers can review AI output quality without reading every individual conversation. Branching rule testing prevents the situation where a misconfigured rule routes tickets to the wrong queue during business hours.
Implementation Considerations:
External knowledge sources need to stay current, and sync frequency matters. Check whether changes in Notion or Google Drive propagate to Front in minutes or hours, since stale knowledge feeding AI responses creates a worse experience than no AI at all. The AI replies hub is a monitoring tool, not a quality control gate. Teams still need to define what good looks like and build review habits around the data. Branching rule testing only covers rule logic; it does not simulate real ticket volume or edge cases that appear at scale.
Pylon Adds Cross-Platform Issue Comments
Pylon, a B2B customer support platform, released updates in its February 23 changelog including the ability to leave comments in external systems like Linear and Jira directly from a Pylon issue. The same release added richer context on Linear ticket completion, showing more detail on linked issues when Linear tickets are marked done.
Operational Impact:
Support agents can now comment on engineering tickets without leaving Pylon. This reduces context switching during escalation follow-ups and keeps the communication trail connected between support and engineering workflows. The richer completion context helps agents understand what was actually fixed when an engineering ticket closes, rather than getting a generic notification with no detail.
Implementation Considerations:
Cross-platform commenting requires active integrations with Linear or Jira. Teams should establish guidelines for what belongs in Pylon comments versus external system comments to avoid fragmenting the conversation history. The richer completion context depends on engineers writing meaningful resolution notes in Linear.
Plain Launches MCP and Introduces AI Tone of Voice Rules
Plain, a customer support platform, shipped two updates: an MCP server that connects AI tools directly to Plain workspaces, and configurable tone of voice rules for AI-generated replies.
MCP Server
The Plain MCP server lets teams connect tools like Claude, ChatGPT, and Cursor to their Plain workspace. Once connected, the AI assistant can read threads, look up customers, check tenant details, and browse help center content. It can also take action: reply to threads, assign them, change priority, add labels, create notes, and mark threads as done. The server includes 30 tools across threads, customers, tenants, help center articles, and workspace data. Authentication uses your existing Plain account, so the MCP server inherits your user permissions.
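For teams wondering what connecting a client actually involves: MCP-capable tools are typically pointed at a server through a JSON configuration entry. The sketch below is illustrative only — the server name, endpoint URL, and auth header are placeholders, not Plain's documented values, so check Plain's MCP setup guide for the real connection details:

```json
{
  "mcpServers": {
    "plain": {
      "url": "https://example-plain-mcp-endpoint.invalid/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_PLAIN_SESSION_TOKEN"
      }
    }
  }
}
```

Because authentication rides on your existing Plain account, whatever credential the client presents determines which of the 30 tools it can actually invoke.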
AI Tone of Voice Rules
Teams can now define tone of voice rules that control how AI sounds in customer-facing replies. Plain-language instructions are organized across five categories (Empathy, Language, Formality, Personality, and Warmth) and apply to all AI-generated messages, including Ari responses, suggested responses, Sidekick draft replies, and Help Center AI chat. Teams can write their own rules or generate suggestions from an existing style guide.
Operational Impact:
The MCP server lets teams use external AI tools with full workspace context without copying data between systems. An agent using Claude or Cursor can pull up a customer’s thread history, check their tenant details, and draft a reply without leaving their AI tool. Combined with tone rules, teams can enforce brand voice consistency across both Plain’s native AI and any connected external tool. Instructions like “Use simple language and avoid jargon” propagate to every AI-generated message, reducing the need for agents to manually edit drafts before sending.
Implementation Considerations:
The MCP server authenticates with individual user credentials, which means each connected AI tool operates with that user’s permission level. Teams should audit who connects external AI tools and what access those accounts have. For tone rules, results depend on how precisely the instructions are written. Vague rules like “be friendly” will produce inconsistent results compared to specific rules like “address the customer by first name and acknowledge their wait time.” The five-category structure may not map cleanly to every brand’s voice framework, and teams should test rules against sample tickets before applying them workspace-wide.
Dixa Launches Conversation Redaction and Auto-Translations
Dixa, a customer service platform, made Conversation Redaction and Knowledge Auto-Translations available to all customers. Conversation Redaction lets teams remove PII from conversations without anonymizing the entire message, preserving full context while ensuring sensitive data is not stored. Knowledge Auto-Translations provide one-click localization of knowledge articles into multiple languages.
Operational Impact:
Redaction at the field level rather than the message level means teams can comply with data privacy requirements without losing the conversational context needed for quality reviews, training, and dispute resolution. A conversation about a billing error can have the credit card number removed while keeping the rest of the exchange intact. Auto-translations reduce the time and cost of maintaining multilingual knowledge bases, which is particularly relevant for teams supporting customers across language regions with a single content team.
Implementation Considerations:
Teams should establish clear policies for what constitutes PII in their context and who has permission to redact. Over-redaction destroys useful data; under-redaction creates compliance exposure. For auto-translations, one-click convenience does not guarantee accuracy in domain-specific or technical content. Teams should have native speakers review translated articles for critical topics before relying on them for customer-facing support.
Notion Adds Fullscreen Page Presentations
Notion, a productivity platform, added the ability to turn any page into a fullscreen presentation. Users add dividers to separate slides, and content auto-scales to fit the screen with a clean title slide and fully functional toggles.
Operational Impact:
Support leaders who already document processes, training materials, or team updates in Notion can now present that content directly without exporting to a separate presentation tool. This eliminates the duplication step where content lives in Notion but gets reformatted into slides for team meetings.
Implementation Considerations:
Presentation formatting depends entirely on how dividers are placed within existing pages. Content designed for reading may not translate well to slide format without restructuring. Teams should test with existing pages before committing to Notion as a presentation tool for recurring meetings.
Birdie.so Integrates Screen Recording with Pylon
Birdie.so, a screen recording platform, launched an integration with Pylon that embeds screen recording directly into the Pylon issue sidebar. Support agents can access Birdie Screen Recording from within a Pylon issue without switching tools.
Operational Impact:
Agents working tickets in Pylon can now capture and attach screen recordings without navigating to a separate application. This speeds up bug reproduction documentation and reduces the back-and-forth where engineering asks support to demonstrate what happened.
Implementation Considerations:
The integration requires active accounts on both Birdie and Pylon. Teams should evaluate whether the embedded sidebar experience provides enough recording functionality for their needs or whether they still need Birdie’s standalone features for complex reproduction scenarios.
Zipchat Generates Monthly Conversation Reports
Zipchat, an AI-powered ecommerce chat platform, added automated monthly conversation reports that aggregate customer interactions into structured categories. The reports organize insights into brand perception, common product questions, marketing angles, website UX problems, customer search patterns, and growth action items.
Operational Impact:
Instead of manually reviewing chat transcripts, ecommerce teams get a structured summary of what customers are asking about, complaining about, and looking for. The categorization into actionable buckets like website UX problems and marketing angles routes insights to the right teams without requiring a support manager to interpret raw data.
Implementation Considerations:
Report quality depends on conversation volume and diversity. Teams with low chat volume may get reports that reflect a handful of interactions rather than meaningful trends. The automated categorization uses AI interpretation, so teams should validate the first few reports against actual conversations to gauge accuracy before using them for business decisions.
Miro Adds AI-Powered Slide Creation
Miro, a visual collaboration platform, added the ability to create presentation slides using AI. Users describe what they need or select content already on their Miro board as context, and AI generates a Miro Slides deck that can be edited, customized, and presented.
Operational Impact:
Teams that brainstorm and plan on Miro boards can now convert that content into presentation format without rebuilding it in a separate tool. This reduces the preparation time for meetings where board content needs to be shared with stakeholders who do not work directly on the board.
Implementation Considerations:
AI-generated slides inherit the structure and terminology from the source board content. Boards with disorganized or incomplete content will produce slides that need significant editing. Teams should treat the AI output as a first draft rather than a finished product, especially for presentations going to external audiences.
Salesforce Ships Self-Learning Knowledge and Role-Based Summaries
Salesforce, a CRM and service platform, released several AI-driven features in its Spring 2026 update, headlined by Self-Learning Knowledge (now generally available), Role-Based Enhanced Summaries, and a Service AI Adoption and Analytics Dashboard. Self-Learning Knowledge analyzes past cases, chats, voice interactions, and surveys to identify gaps in the knowledge base and surface where articles need to be created or updated.
Self-Learning Knowledge (Generally Available)
The feature shifts knowledge base maintenance from a manual editorial process to a data-driven one by analyzing historical case and conversation data to identify where articles are missing or outdated. For teams struggling to keep knowledge current, this targets the root problem: KB gaps are invisible until an agent cannot find the answer.
Role-Based Enhanced Summaries
AI-generated case summaries now surface different information depending on the viewer’s role. Support specialists see different context than managers reviewing the same case. This replaces the Work Summaries beta, which retires on June 30, 2026, creating a migration deadline.
Service AI Adoption and Analytics Dashboard
A new dashboard correlates AI feature usage directly with operational KPIs like Average Handle Time. For CX leaders justifying AI investment or identifying where adoption is stalling, this provides visibility into which Einstein features are producing measurable results.
Translation for Case Comments and Service Replies
Teams supporting multilingual customers can now translate case comments and service replies without a separate translation tool in the stack.
Operational Impact:
Self-Learning Knowledge addresses the most common complaint about knowledge bases: they go stale because no one has time to audit them. By surfacing gaps based on actual case data, the feature tells content teams exactly where to focus. Role-Based Summaries solve a real friction point where supervisors and frontline reps need different context from the same case. The AI Adoption Dashboard gives leadership the data to answer whether their AI investment is working with specifics rather than anecdotes.
Implementation Considerations:
Self-Learning Knowledge requires sufficient historical case and conversation data for the analysis to be meaningful. Teams that recently migrated to Salesforce or have incomplete case data will see limited value initially. Role-Based Enhanced Summaries replace the Work Summaries beta retiring June 30, 2026, so teams currently using that feature need to plan the migration now. The AI Adoption Dashboard only generates useful data if your team is already using Einstein features at sufficient volume. Translation accuracy depends on the underlying model’s handling of your specific domain vocabulary.
HubSpot Ships MCP Server for App and CMS Development
HubSpot, a CRM and marketing platform, made its Developer MCP Server generally available after a beta period. The server connects AI coding tools like VS Code, Claude Code, Cursor, OpenAI Codex, and Gemini CLI directly to HubSpot’s developer platform, with full access to HubSpot’s developer documentation as context.
Once connected, developers can scaffold and iterate on HubSpot apps, create and manage CMS themes, templates, and modules, manage serverless functions, query app analytics, and troubleshoot issues using natural language. The server runs locally and handles tasks that previously required manually parsing documentation, running CLI commands, or implementing APIs directly.
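Since the server runs locally, a typical setup registers it as a local command in the coding tool's MCP configuration. The entry below is a generic sketch of that pattern — the package name is a placeholder, not HubSpot's actual distribution; consult HubSpot's developer documentation for the real install command and any required auth setup:

```json
{
  "mcpServers": {
    "hubspot-developer": {
      "command": "npx",
      "args": ["-y", "<hubspot-mcp-server-package>"]
    }
  }
}
```

Each developer installs and authenticates the server on their own machine, which is why rollout is per-seat rather than workspace-wide.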
Operational Impact:
For support and CX teams that rely on custom HubSpot integrations or CMS-based help centers, this lowers the barrier to building and maintaining those assets. A developer troubleshooting a broken serverless function or updating a CMS template can work through their AI tool with HubSpot context already loaded. Teams that have avoided custom development because of the overhead involved in learning HubSpot’s APIs may find this reduces the initial ramp-up.
Implementation Considerations:
This is a developer tool, not an end-user feature. Support teams will only benefit if they have engineering resources building on HubSpot’s platform. The MCP server runs locally, so each developer needs to install and authenticate it individually. Teams should verify that the documentation context the server provides stays current with HubSpot’s API changes, since stale documentation feeding an AI coding tool will produce code that fails silently.
Now this is:
Strategic Support for CX Leaders.
You’ve got ambitious Support targets and new metrics, but you’re not sure what to prioritize first.
The list is long, the queue is getting longer, and you don’t have time to step back and think about CX strategically.
What if you could pressure-test your thinking with someone who’s spent 20 years building Customer Support operations?
No pitch, just a conversation.

