CX-News: Feb 19, 2026 – Software sheds its interface and Support needs to govern AI agents



Customer Experience News is a weekly newsletter and video about the most important news and discussions for Customer Experience and Customer Support Leaders.

Our main story today is an opinion piece about the direction software is heading and how Customer Support becomes the operational authority.

Today’s headlines are from:

  • Gorgias – launches native ticket translations & adds voice SLA tracking
  • Help Scout – adds Google Docs/Sheets as knowledge sources for AI Agents
  • Linear – launches project creation from mobile devices & private sharing views
  • Pylon – adds AI-powered issue tagging & integrates Cursor
  • Forethought – launches instant content snippets
  • Zipchat – integrates Facebook and Instagram DMs

I spent years teaching my Support team to memorize our software. They knew every shortcut, every filter, every workaround. They could navigate a platform faster than the Product Managers and Engineers who built it. That expertise took months to build, and it was the standard for hiring, evaluation, and promotion.

I started using Photoshop 5.5 back in 2001. When I opened it for the first time, I was overwhelmed by all of the buttons. Toolbars stacked on toolbars. Menus nested inside menus. It’s 2026, and there are still buttons everywhere.

Most people figure out maybe 10% of what’s available. They learn the crop tool, the text tool, a few filters. The rest sits there unused because the learning curve is steep and the interface assumes you already know what you’re looking for.

🤔 Now picture a version of Photoshop where ALL of the buttons are gone.

You select your files. You type what you want. The software executes, step by step, on its own layers. You can see what it did, refine any step, or undo a specific action without touching anything else.

You didn’t need to know where the magnetic lasso lives. You didn’t need to Google “how to create a hex pattern.” You described the outcome. The tool executed.

That’s the shift. A year ago, the conversation was still about chatbots replacing FAQs.

Answering “where’s my order” or “how do I reset my password” is not the frontier anymore. Chatbots have been doing that for years. Any team still treating AI-powered FAQ deflection as a strategic achievement is measuring the wrong thing.

Intercom’s 2026 Customer Service Transformation Report puts the gap in plain terms: only 10% of teams have reached mature AI deployment, where AI is fully integrated into operations and working at scale. The other 90% have launched AI at a surface level. They’ve automated the easy questions and called it transformation.

Mature deployment looks different. It means AI agents that take action.

  • Refunds processed without a human in the loop.
  • Discount thresholds applied based on customer context.
  • Bug tickets created, categorized, and routed to Engineering with a structured reproduction path.
  • Feature requests logged, tagged to customer segments, and surfaced to Product with usage data attached.
  • Customer sentiment captured systematically across every interaction, not just the ones someone remembered to tag.

This is where the operational authority question becomes unavoidable. An AI agent that can issue a refund is not a chatbot. It’s a team member with access to your systems, your policies, and your customer relationships. Who trains it? Who audits its decisions? Who defines what “good judgment” looks like when a high-value customer threatens to churn at 11pm on a Saturday?

The MIT finding that most leaders missed

An MIT Sloan study found that nearly half of the performance improvement from upgraded AI models came from how users adapted their prompts. The model upgrade alone wasn’t enough. People had to learn how to describe what they wanted.

Most coverage of that study focused on the prompting angle. The more important insight for support leaders is this: if half the performance gain comes from how well the human articulates the task, then your team’s ability to write clear, structured, outcome-oriented instructions is a direct lever on your AI’s effectiveness.

That’s not a soft skill anymore. That’s your core infrastructure.

What AI agents actually need from your team

The 2026 Intercom report found that 40% of teams now report Customer Specialists spending more time training and optimizing AI systems. That number will grow. The question is whether teams are building the right capabilities to do that work well.

Training an AI agent to handle a refund request is not the same as writing a refund policy. The agent needs to understand customer intent when it isn’t clearly stated. It needs to know when a situation falls outside its authority. It needs a decision tree that doesn’t assume the customer will present their problem cleanly.

Writing those instructions requires support professionals who can think from the customer’s outcome backward. What is the customer actually trying to accomplish? What would a poor resolution look like, even if the policy was technically followed? Where does the edge case live, and what should happen there?
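To make the shape of this concrete, here is a minimal sketch of thinking from the customer's outcome backward in a refund workflow. Everything in it is a hypothetical illustration: the thresholds, field names, and escalation rules are assumptions, not any vendor's API or your actual policy. The point is that unclear intent and out-of-authority cases get routed to a human rather than guessed at.

```python
from dataclasses import dataclass

# Hypothetical policy values for illustration only; real thresholds and
# escalation rules would come from your own documented refund policy.
AUTO_REFUND_LIMIT = 50.00      # agent may refund up to this amount alone
RETURN_WINDOW_DAYS = 30

@dataclass
class RefundRequest:
    amount: float
    days_since_purchase: int
    stated_reason: str          # free text from the customer
    customer_tier: str          # e.g. "standard" or "enterprise"

def decide(req: RefundRequest) -> str:
    """Return an action, or escalate when the case is outside the
    agent's authority or the customer's intent is unclear."""
    if not req.stated_reason.strip():
        return "clarify_intent"              # don't guess what they want
    if req.customer_tier == "enterprise":
        return "escalate_to_human"           # relationship risk outranks policy
    if req.days_since_purchase > RETURN_WINDOW_DAYS:
        return "escalate_to_human"           # outside policy, needs judgment
    if req.amount <= AUTO_REFUND_LIMIT:
        return "auto_refund"
    return "escalate_to_human"               # above the agent's authority limit

# A small, in-policy, clearly stated request resolves automatically.
print(decide(RefundRequest(19.99, 5, "item arrived damaged", "standard")))
```

Notice how much of the sketch is escalation paths rather than the happy path. That ratio is the work: most of an agent instruction is defining where its authority ends.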

👉 These are not questions that require less expertise than the old model. They require more. The expertise just looks different.

VOC is no longer a quarterly slide deck

Voice of the Customer has historically been a reporting function. Someone pulls CSAT data, tags a sample of conversations/tickets, and presents quarterly themes to Product leadership.

AI agents change this entirely. Every interaction becomes a structured data point. Refund requests map to product failure categories. Feature request language, across thousands of conversations, identifies terminology gaps between how customers describe outcomes and how your product describes features. Bug patterns cluster before your engineering team has seen a single ticket.
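The "structured data point" idea can be sketched in a few lines. This is an illustration only, the record fields and category names are hypothetical, not a real VOC schema. The shift it shows: once every interaction produces a record, themes are a running query over the data instead of a quarterly sampling exercise.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical VOC record; fields and category names are assumptions
# chosen for illustration, not any platform's actual schema.
@dataclass
class VocRecord:
    channel: str
    failure_category: str       # e.g. mapped from a refund reason
    feature_terms: list[str]    # the customer's own words for what they wanted

interactions = [
    VocRecord("chat", "shipping_damage", ["replacement"]),
    VocRecord("email", "billing_error", ["invoice history"]),
    VocRecord("chat", "shipping_damage", ["sturdier packaging"]),
]

# Themes fall out of the data continuously, not from a quarterly sample.
themes = Counter(r.failure_category for r in interactions)
print(themes.most_common(1))
```

Three records make a toy example; at thousands of interactions, the same query is your product intelligence feed.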

👀 The support team that governs the AI agents handling those interactions becomes the primary intelligence function for the business: a continuous reporting feed.

Intercom’s report found that 32% of customer service teams are already leading their organization’s AI transformation strategy. In mature deployments, 66% of senior leaders view support as a value driver, compared to 51% in early-stage teams. The strategic position is available. Most teams haven’t claimed it because they’re still focused on ticket volume.

A reality check before the projection slides

Gartner, Cisco, and BCG all have projections in this space. Some of those numbers are aggressive. Vendors with skin in the game produced several of them. The direction is consistent even if the exact figures are contested.

What’s not contested is the operational gap. Most support teams do not have the knowledge infrastructure required to train AI agents well. Documentation is scattered. Tagging is inconsistent. Policies live in someone’s head or in a Notion page nobody maintains. An AI agent trained on that foundation will perform exactly as well as the foundation deserves.

Before your team can govern AI agents, it needs structured knowledge. Before it can measure AI resolution quality, it needs a definition of what resolution actually means. Before it can write agent instructions that hold up across edge cases, it needs people who understand customer intent deeply enough to anticipate those edges.

That’s where most of the real work sits right now.

Five roles your support team should be developing

  • AI Agent Architects who write decision frameworks for complex, action-oriented workflows
  • Customer Outcome Analysts who translate interaction data into product intelligence
  • Knowledge Engineers who build and maintain the structured content AI agents rely on
  • Resolution Quality Reviewers who audit agent decisions against defined standards
  • AI Escalation Specialists who handle the cases agents flag as outside their authority

Some companies are hiring for these now. Most haven’t updated a job description.

Mature AI deployment, as Intercom’s data defines it, produces better metrics, clearer ROI, and freed capacity that can move toward revenue-generating work. Getting there requires treating your support team as the operational layer that makes AI agents reliable, not just the group that answers what the agents miss.

Ask whether your team can write agent instructions that hold up at 2am when your best human agent isn’t available. Ask whether your AI is generating structured VOC data or just deflecting tickets. The ROI calculation changes significantly depending on the answer.

Audit your team’s writing and critical thinking skills. The ability to anticipate how a customer will present a problem, and write instructions that account for that variation, is now a core competency. Start building it deliberately.

Create a rotation where agents review AI decisions against outcomes, not just policy compliance. That feedback loop is how agent quality improves. Without it, you’re deploying AI and hoping it holds.

Where this lands

The best support professionals I’ve worked with share a trait that translates directly into this model: they understand what a customer is trying to accomplish, even when the customer can’t articulate it. That skill doesn’t go away. It becomes the foundation of every AI agent instruction they’ll ever write.

The shift isn’t from support to automation. It’s from reactive response to proactive governance. The teams that understand that now will be the ones building the infrastructure everyone else buys later.

The question is whether you’ll prepare your team before the shift decides for them.

Brett Rush
RushtoResolution.com


Gorgias, an ecommerce customer support platform, released two features this week: native ticket translations and Voice SLAs.

Automatic Translations

Inbound and outbound messages now translate automatically into each agent’s preferred language, directly inside the ticket. Agents configure their own settings under Settings > Your Profile > Ticket Translation Settings, choosing a default translation language and marking languages they already speak. When replying, a single click in the editor translates the draft into the customer’s language. Agents can hover over any message to see the original text and re-translate before sending. The feature covers any language recognized by LLMs, including right-to-left languages. Internal notes and macros are not translated. Available on all helpdesk plans at no extra cost.

Voice SLAs

Teams can now set a call answer target (for example, 90% of calls answered within 1 minute) and track performance on the SLA Dashboard, Live Voice Dashboard, and Voice Overview Dashboard. Wait time measurement starts when a caller enters the queue, aligning with industry-standard contact center methodology. Only inbound calls during business hours count. Cancelled calls and callbacks are excluded.
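The attainment math behind a target like this is worth seeing explicitly, since the exclusion rules do most of the work. Here is a minimal sketch under the rules Gorgias states (inbound, business hours, cancelled calls and callbacks excluded, wait measured from queue entry); the `Call` structure and field names are my own assumptions for illustration, not Gorgias's data model.

```python
from dataclasses import dataclass
from typing import Optional

TARGET_SECONDS = 60   # example target: answered within 1 minute
TARGET_RATE = 0.90    # example target: 90% of eligible calls

@dataclass
class Call:
    direction: str            # "inbound" or "outbound"
    in_business_hours: bool
    cancelled: bool
    is_callback: bool
    queue_wait_seconds: int   # measured from queue entry, per the release notes

def sla_attainment(calls: list[Call]) -> Optional[float]:
    # Only inbound calls during business hours count; cancelled calls
    # and callbacks are excluded, matching the stated rules.
    eligible = [c for c in calls
                if c.direction == "inbound" and c.in_business_hours
                and not c.cancelled and not c.is_callback]
    if not eligible:
        return None
    answered = sum(c.queue_wait_seconds <= TARGET_SECONDS for c in eligible)
    return answered / len(eligible)

calls = [
    Call("inbound", True, False, False, 45),
    Call("inbound", True, False, False, 95),
    Call("inbound", False, False, False, 10),  # after hours: excluded
    Call("inbound", True, True, False, 5),     # cancelled: excluded
]
rate = sla_attainment(calls)
print(rate, rate is not None and rate >= TARGET_RATE)
```

Note that two of the four calls never enter the denominator. That is exactly why after-hours traffic is invisible to the tracker, and why the business-hours configuration matters so much.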

Operational Impact: Multilingual support teams no longer need external translation tools or separate browser tabs. Each agent controls their own language settings, so a German-speaking agent on a team that defaults to English only sees translations on tickets they actually need help with. For voice, the SLA dashboard gives managers a clear metric to identify coverage gaps by hour and day, which feeds directly into staffing decisions.

Implementation Considerations: Translation accuracy varies by language pair and domain-specific terminology. Teams selling technical products should spot-check translations for vocabulary that may not convert cleanly. The per-agent configuration model means managers have no centralized control over translation quality or language coverage. For Voice SLAs, policy changes do not apply retroactively, so define targets before relying on historical data. Teams running 24/7 support should verify their business hours configuration reflects actual operating schedules, since after-hours calls are invisible to the SLA tracker.

Read more →


Help Scout, a customer support platform, now supports Google Docs and Google Sheets as knowledge sources for its AI Agents. The integration syncs content in real time as documents change, keeping AI-generated answers current without manual updates. Google Docs and Sheets join existing supported sources including websites, PDFs, Word documents, and Help Scout Docs sites.

Operational Impact: Teams that already maintain SOPs, internal wikis, or troubleshooting guides in Google Docs can feed that content directly to AI Answers without reformatting or duplicating it. When someone updates a Google Doc, the AI picks up those changes automatically. This removes a common bottleneck where support content sits in shared drives but never makes it into the help desk knowledge base.

Implementation Considerations: Each Google Doc or Sheet must be shared with “anyone who has the link” and added individually. There is no bulk import. For teams with dozens of internal documents, the one-by-one setup creates real onboarding friction. Initial sync time scales with document size and can take several minutes for large files. Teams should audit which documents contain customer-facing content versus internal-only notes before connecting them, since the AI will treat all connected content as equally authoritative.

Read more →


Linear, a project management platform, shipped two updates: advanced filters with private issue sharing, and mobile project creation.

Advanced Filters and Private Sharing

Filters now support multiple AND/OR conditions across searches, views, and dashboards. An AI filter option lets users describe criteria in natural language. The same release added the ability to share individual issues from private teams with specific users outside the team, available on Enterprise plans. Shared issues display a visibility banner so recipients know the context.

Mobile Projects and Initiatives

Android and iOS apps now support creating projects and initiatives directly from mobile. Users can write project summaries and set properties on the go, then build out full descriptions and milestones later on desktop. iOS also added the ability to compose project and initiative updates from Pulse.

Operational Impact: Support-adjacent teams using Linear for bug tracking or escalation management can build targeted views that match real workflows. Filtering by Priority, Label, and Customer status in a single view lets a support escalation manager see only high-priority bugs tied to enterprise accounts. Private issue sharing solves a recurring problem where compliance or security teams need to loop in an individual contributor without exposing the full team backlog. Mobile project creation lets support leaders capture project ideas during meetings immediately rather than losing them to a reminder note.

Implementation Considerations: Private issue sharing requires an Enterprise plan. Teams should establish clear norms about what gets shared externally to prevent accidental exposure of sensitive context in comments or linked issues. The AI filter depends on how well Linear’s model interprets natural language, so validate results against manual filter configurations during initial adoption. Mobile project creation is limited to summaries and basic properties; full descriptions, milestones, and team assignments still require the desktop app.

Read more →


Pylon, a B2B customer support platform, shipped a series of updates across its February 9 and February 16 changelogs, headlined by AI Suggest Mode for issue fields and a Cursor IDE integration.

AI Suggest Mode

Teams using AI-powered issue fields (for tagging question type, priority, product area, and similar categories) can now switch from automatic fill to “Suggestions only” mode. When a new issue arrives, AI proposes a tag that the agent can accept, reject, or edit. Each decision gets logged as training data to improve accuracy over time. Admins write descriptions for each field value to guide the model, and the feedback loop means tagging quality should improve the more the team uses it.
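The accept/reject/edit loop described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Pylon's API: the class, field names, and logging shape are all assumptions. What it shows is that every human decision doubles as a labeled training example, and that acceptance rate gives a simple running proxy for suggestion quality.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a suggestion-review log; not Pylon's actual API.
@dataclass
class SuggestionLog:
    records: list = field(default_factory=list)

    def review(self, issue_id: str, suggested_tag: str,
               agent_decision: str, final_tag: str) -> str:
        # Every decision becomes a labeled training example: the issue,
        # what the model proposed, and what the human actually kept.
        self.records.append({
            "issue": issue_id,
            "suggested": suggested_tag,
            "decision": agent_decision,   # "accept" | "reject" | "edit"
            "label": final_tag,           # ground truth for the next model pass
        })
        return final_tag

log = SuggestionLog()
log.review("ISS-101", "billing", "accept", "billing")
log.review("ISS-102", "billing", "edit", "refunds")

# Acceptance rate is a simple proxy for suggestion quality over time.
accept_rate = sum(r["decision"] == "accept" for r in log.records) / len(log.records)
print(accept_rate)
```

The edit case is the valuable one: an edited tag tells the model both that it was wrong and what right looked like, which is more signal than a plain rejection.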

Cursor IDE Integration

Engineers can now @mention Cursor in a Pylon issue copilot, which feeds full issue context directly into Cursor’s AI agent. This lets engineers start investigating a reported bug without manually copying details from the support ticket into their development environment.

Additional Updates

Pylon also released omnichannel NPS surveys with drip campaigns and audience preview, SLA breach warning triggers that fire before a commitment is missed, first-class call recording objects with custom fields, feature request notebook blocks, linked feature requests in the issue sidebar, and AI report duplication.

Operational Impact: Manual ticket tagging is one of the most inconsistent parts of support operations. Suggest mode keeps humans in the loop while building a continuous improvement cycle: agents review tags, and their corrections train the model. The Cursor integration shortens the handoff between support and engineering by eliminating the context-copying step. Linked feature requests in the issue sidebar give agents visibility into related product requests while working a ticket, helping them set accurate expectations with customers.

Implementation Considerations: AI suggestions require at least 100 manually tagged issues before activating, so new fields or new teams face a ramp-up period. Writing clear descriptions for each field value directly affects suggestion quality. The Cursor integration only benefits teams already using Cursor as their primary IDE. For NPS surveys, teams should use the audience preview feature to control drip campaign frequency and avoid survey fatigue across email, chat, Slack, and Microsoft Teams channels.

Read more →


Forethought, an AI-powered customer support platform, released Content Snippets, short pieces of focused content that go live instantly across all support channels. Unlike traditional knowledge base articles that require writing, review, and publishing cycles, Snippets skip that process and deploy immediately. The feature targets urgent updates and edge-case answers that agents need quickly.

Operational Impact: When a product team pushes a breaking change or a billing issue surfaces at scale, support teams typically scramble to update KB articles through a review queue. Snippets let a team lead write a targeted answer and make it available to agents and AI across every channel in minutes. This addresses the gap between when a support team knows the answer and when that answer becomes findable in their tools.

Implementation Considerations: Skipping review cycles is the feature’s selling point, but it is also its risk. Teams without governance around who can publish Snippets may end up with conflicting or outdated content fragments scattered across their knowledge base. Consider who gets publish access and how Snippets get audited or retired after the initial urgency passes. Without a clear deprecation process, Snippets could become the new version of sticky notes that never get cleaned up.

Read more →


Zipchat, an AI-powered ecommerce chat platform, added Facebook Page DMs and Instagram DMs as supported channels. Both integrations allow Zipchat’s AI to respond automatically to social media messages and centralize conversations alongside website chat, WhatsApp, and email. Setup requires connecting the relevant account from the Channels settings.

Operational Impact: Ecommerce brands fielding purchase questions through Instagram and Facebook DMs can now route those conversations through the same AI and workflow engine as their website chat. This eliminates the need to staff a separate social media inbox or manually check platforms throughout the day. Responses stay consistent because the AI draws from the same product catalog and knowledge base across all channels.

Implementation Considerations: Social media conversations tend to be more casual and context-dependent than website chat. Teams should review how their AI handles informal language, slang, and image-based questions common on Instagram. Facebook and Instagram DM integrations also mean the AI is operating in public-facing channels where a poor response could be screenshotted and shared. Testing the AI on sample social messages before going live is worth the effort.

Read more →


Now this is:

Strategic Support for CX Leaders.

Your competitors are deploying AI agents that handle refunds, route bug tickets, and capture customer intelligence around the clock. Most support leaders know it’s coming. Few know where to start without breaking the customer experience they’ve spent years building.

If you’re trying to figure out how to move from where your team is today to a support operation that runs on governed AI, that’s exactly the kind of thinking I can help pressure-test.

No pitch, just a conversation.

Clarity starts with a conversation

In 30 minutes, we can discuss your biggest support opportunities and outline what to do next.