The challenge
Peloton's international support organization was growing faster than its self-service surface could absorb. Members were bouncing out of help articles into live agent queues for issues that were, on inspection, already answered in content that nobody could find. Each of the 8 markets had its own localization requirements, its own ticket mix, and its own expectations about what self-service should feel like. The help surface had been built for a smaller, simpler Peloton. It had not kept up with the product.
The symptom the business named was rising support cost. The real problem was a self-service experience that had been accumulated rather than designed. Navigation fought with search. Procedural content sat in the middle of conceptual content. The same member question had three competing answers depending on which article they landed on first. None of these were agent problems. They were design problems that had compounded.
Constraints
- Eight markets, one voice. Global consistency was a brand requirement. Local relevance was a usability requirement. Every design decision had to reconcile both.
- No support downtime. Members could not feel the rebuild. All redesign work landed behind feature flags or in phased rollouts.
- Regional teams, not a central team. Decisions had to be made collaboratively with market leads who owned their local content and tone.
- Evidence over opinion. Every redesign claim had to be defensible against real ticket data, usability observations, or both. No "we think."
My approach
I led the redesign as a two-track program. One track rebuilt the self-service surface itself. The other track mined what members asked and fed product-gap recommendations back into the core roadmap. Both tracks ran continuously for two years, with improvements shipped in phases by market.
- Support-ticket mining. I built a ticket analysis pipeline that classified inbound volume by intent, market, and resolution path. The top quintile of intents accounted for a disproportionate share of volume. That quintile became the first redesign target.
- Conversation analysis. Beyond ticket classification, I read the actual transcripts: what members said when the automated path failed, where the vocabulary mismatch lived, which questions had an obvious product-level answer that was not documented anywhere self-service could reach.
- Information architecture rebuild. I redesigned the self-service IA around member-facing tasks rather than internal product structure. Procedural content and conceptual content got separated. Titles became informational. Canonical answers replaced the three-competing-answers problem.
- Decision-tree redesign. For the automated paths, I rebuilt the decision trees based on the ticket mining classification. Branches that never fired were pruned. Branches that misrouted were rewritten. Fallbacks were made explicit rather than implicit.
- Regional rollout. I partnered with market leads across the 8 regions on localization decisions that balanced global consistency with cultural and linguistic specificity. Each region tested a candidate redesign with a sample of members before promotion.
- Product-gap loop. The support-ticket mining pipeline also produced a product-gap backlog: questions that no amount of content design could answer, because the answer required a product change. That backlog was sent to product management and drove the 20% support-volume reduction we eventually attributed to core product improvements.
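The ticket-mining step can be sketched in miniature. This is an illustrative toy, not the production pipeline: the tickets, keyword rules, and intent labels below are all hypothetical, and the real classifier worked over full transcripts across eight markets rather than keyword matching on subjects.

```python
from collections import Counter

# Hypothetical inbound tickets: (market, free-text subject).
TICKETS = [
    ("US", "bike screen frozen"),
    ("DE", "subscription billing charge twice"),
    ("UK", "reset password for account"),
    ("US", "screen frozen after update"),
    ("US", "billing question about charge"),
    ("DE", "frozen screen on tread"),
]

# Hypothetical intent taxonomy: keyword -> intent label.
RULES = {
    "frozen": "hardware.screen_freeze",
    "billing": "account.billing",
    "charge": "account.billing",
    "password": "account.access",
}

def classify(subject: str) -> str:
    """Map a ticket subject to an intent, with an explicit fallback bucket."""
    for keyword, intent in RULES.items():
        if keyword in subject.lower():
            return intent
    return "unclassified"

def top_share(tickets, fraction=0.2):
    """Share of total volume covered by the top `fraction` of intents."""
    counts = Counter(classify(subject) for _, subject in tickets)
    ranked = counts.most_common()
    k = max(1, round(len(ranked) * fraction))
    return sum(n for _, n in ranked[:k]) / len(tickets)
```

The point of the aggregation is the quintile cut: ranking intents by volume and measuring how much demand the top slice covers is what identified the first redesign target.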
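The decision-tree rebuild turned implicit dead ends into explicit, instrumented fallbacks. A minimal sketch of that shape, with node names and routing that are purely illustrative:

```python
# Each node maps member answers to next nodes. "_fallback" is an explicit,
# named branch rather than an implicit dead end, so misroutes are measurable.
TREE = {
    "start": {
        "screen_frozen": "restart_flow",
        "billing": "billing_flow",
        "_fallback": "agent_handoff",
    },
    "restart_flow": {
        "resolved": "done",
        "_fallback": "agent_handoff",
    },
}

def route(node: str, answer: str) -> str:
    """Follow one edge; any unrecognized answer takes the explicit fallback."""
    branches = TREE[node]
    return branches.get(answer, branches["_fallback"])
```

Making the fallback a first-class branch is what allowed pruning: branches that never fired and answers that always fell through both show up directly in routing counts.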
Global framework, local adaptation. The temptation in an 8-market rollout is to either centralize everything (which kills local nuance) or decentralize everything (which kills consistency). I designed a framework that specified what could and could not vary per market: taxonomy was global, titles were translated and localized, and the top of the information architecture matched across all 8 markets. Tone, examples, and procedural content were locally owned. The framework made the tradeoff legible instead of political.
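One way to make that framework legible is to encode it as a schema check. The field names below are illustrative, not the production content model; the idea is simply that global fields are validated against a reference market while local fields are never checked at all.

```python
# Hedged sketch of the localization decision framework.
GLOBAL_FIELDS = {"taxonomy_node", "ia_top_level"}       # identical in all markets
LOCALIZED_FIELDS = {"title"}                            # translated per market
LOCAL_FIELDS = {"tone", "examples", "procedure_steps"}  # owned by market leads

def check_article(article: dict, reference: dict) -> list:
    """Return the global fields where a market's article drifted from the reference."""
    return sorted(f for f in GLOBAL_FIELDS if article.get(f) != reference.get(f))
```

A market lead can change tone and examples freely; only drift in the global fields raises a flag, which keeps the global-vs-local boundary a mechanical check rather than a negotiation.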
Artifacts I authored or led
- Support-ticket taxonomy classifying inbound volume across all 8 markets
- Information architecture for the redesigned self-service surface
- Decision-tree workflows covering the top quintile of support intents
- Localization decision framework (global consistency vs local relevance)
- Product-gap backlog with prioritization framework, integrated into the product roadmap
- Usability-testing protocols and member-research reports used to validate each redesign phase
Results
The member-facing redesign drove the three direct CX metrics: faster resolution, higher self-service adoption, and regional CSAT. The product-gap feedback loop drove the fourth and arguably most durable outcome: a 20% reduction in total support volume, because the product itself got better at the most common failure points. Support became a better source of product signal, not just a place where signal went to die.
About these numbers
The figures on this page are drawn from internal program reporting I authored or co-authored as the practitioner on the engagement. They are reproduced here in rounded form. They were not produced by an independent third party, and proprietary detail has been omitted where required by the engagement.
Lift figures (resolution time, self-service adoption, CSAT) reflect pre/post comparisons against a matched baseline using the cohort, time window, and measurement instrument noted in the case study. Volume and adoption figures come from production analytics dashboards. Cost figures reflect either avoided spend or unlocked budget in the named fiscal period.
- 40% faster resolution: median time to resolution on the redesigned self-service surface vs. pre-redesign baseline, measured on the top quintile of intents that drove the rebuild.
- 35% self-service adoption lift: share of inbound demand that resolved within self-service without an agent, post-rollout vs. pre-rollout, measured per market.
- 20% support-volume reduction: attributed to product changes shipped from the product-gap backlog over the two-year program window. This is a multi-cause attribution; the product team contributed the engineering work.
- 90%+ regional CSAT: post-interaction CSAT averaged across the 8 markets after the per-market rollout completed.
- Ticket-mining classification was the source of intent volumes; usability-testing protocols and member research validated each redesign phase before promotion.
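The pre/post comparison behind a figure like "40% faster resolution" is simple arithmetic on matched cohorts. The numbers below are made up for illustration; the real measurement used the cohort and time window noted above.

```python
from statistics import median

# Illustrative time-to-resolution samples (hours) for the top intents,
# before and after the redesign. Values are invented for the sketch.
pre_hours = [10.0, 12.0, 8.0, 20.0, 10.0]
post_hours = [6.0, 7.0, 5.0, 12.0, 6.0]

def median_lift(pre, post):
    """Fractional reduction in median time to resolution."""
    return (median(pre) - median(post)) / median(pre)
```

Medians rather than means keep the comparison robust to a handful of long-tail tickets that would otherwise dominate the baseline.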
What I would do differently
Invest in the product-gap pipeline on day one, not in month six. Every week the loop went uninstrumented was a week of member friction that could have been captured as product signal instead. The second lesson was about pacing the rollout: I learned to promote per market rather than per feature. Market by market isolated the variables; feature by feature entangled them.
Collaborators
Worked with market leads across 8 international regions on localization decisions and regional rollouts. Partnered with engineering on the decision-tree infrastructure and the product-gap pipeline. Partnered with product management on roadmap prioritization of gap-derived features. Partnered with research on the usability protocols that validated each phase.
Skills demonstrated
- Workflow optimization
- Information architecture
- Decision-tree design
- Support-ticket mining and classification
- Conversation analysis
- International localization
- Product-gap feedback loops
- Usability testing and member research
- Cross-regional stakeholder alignment