
Case study · 2024–2025 · 10 min read

Operational Excellence Inside an AI Product Org

Rebuilt how a cross-functional AI product team plans, ships, and reviews. Converted reactive firefighting into predictable, measurable delivery and enabled 200+ non-technical employees to leverage agentic AI tooling without a drop in quality.

  • 200+ employees enabled on agentic AI
  • 4 mo to predictable delivery cadence
  • 0 rituals retired post-engagement
  • $500K+ budget unlocked via business cases
Role
Product Strategy Lead, Certified Scrum Master, AI adoption lead
Timeline
~4 months embedded, rituals persist post-engagement
Organization
Cross-market AI product organization
Stack
  • Scrum with adapted rituals
  • Notion
  • Confluence
  • Jira
  • LangFuse (delivery instrumentation)
  • Thoughtful Execution framework
  • Async-first documentation patterns

The challenge

The AI product org at GoDaddy was shipping, but nobody could tell you what was shipping, why, or whether it was working. Tickets lived in three tools. Context lived in direct messages. Handoffs happened in meetings that should have been documents and in documents that should have been conversations. The team had the talent. The system around the team had run out of coherence.

The symptom everyone named was velocity. The real problem was visibility. Work was not scattered because people were disorganized. It was scattered because the system they were working inside had no single source of truth. Worse, the broader organization outside the AI team, the 200+ support, sales, and operations colleagues who would eventually run the AI tooling themselves, had no reliable path from curiosity to competence.

Constraints

  • No new tools. The org already had Notion, Jira, Confluence, and three Slack canvases per team. New process had to use what was there.
  • Multiple time zones. Rituals had to work async first, sync second.
  • No authority to mandate. New process had to earn adoption, not demand it.
  • Preserve team strengths. The team shipped. I was not there to replace what worked. I was there to make what worked visible and repeatable.
  • Serve a broader audience. Any enablement design had to scale to the 200+ non-technical employees who would be the AI tooling's eventual operators.

My approach

I applied the Thoughtful Execution framework I use on every engagement: identify multiple friction points, test different approaches against real work, keep what moves the metric. Specifically:

  1. Audit, then ask. Two weeks of watching how work actually flowed before proposing a single change. I mapped every artifact touched by a single feature, end to end, across product, engineering, data science, and support operations.
  2. One source of truth per artifact type. Roadmap lives here. Decisions live here. Specs live here. Everything else links in. The discipline was not the tool choice. It was the commitment to a canonical location.
  3. Async-first rituals. Written standups replaced verbal ones. Decision logs replaced "we talked about this." Sync time was reserved for things that needed faces, not updates.
  4. Rituals designed to survive me. Every recurring meeting had an owner who was not me, a template that did not need me, and a deprecation plan so unused rituals died on schedule.
  5. Change management for the 200+. In parallel with the internal process rebuild, I authored a training curriculum, product documentation, and an internal enablement program that moved 200+ non-technical employees from curiosity to competent operators of the agentic AI tooling.
  6. Business case literacy. I wrote and co-wrote the business cases that justified new AI features using cost-benefit analysis, ROI projections, and total cost of ownership modeling. $500K+ in strategic budget was unlocked through documents the team could reuse as templates; a sketch of the underlying arithmetic follows below.
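
A minimal sketch of that cost-benefit arithmetic, for shape only; every figure, name, and time horizon below is an illustrative placeholder, not a number from the actual business cases.

```python
# Illustrative business-case arithmetic. Every figure is a placeholder;
# none of these numbers come from the engagement's actual documents.

def total_cost_of_ownership(upfront: float, annual_run_cost: float, years: int) -> float:
    """TCO: one-time build/licensing cost plus run cost over the horizon."""
    return upfront + annual_run_cost * years

def roi(total_benefit: float, total_cost: float) -> float:
    """Simple ROI: net benefit expressed as a fraction of total cost."""
    return (total_benefit - total_cost) / total_cost

# Hypothetical three-year case for a single AI feature.
tco = total_cost_of_ownership(upfront=150_000, annual_run_cost=40_000, years=3)
benefit = 120_000 * 3  # e.g., projected annual support-cost avoidance

print(f"3-year TCO: ${tco:,.0f}")              # 3-year TCO: $270,000
print(f"3-year ROI: {roi(benefit, tco):.0%}")  # 3-year ROI: 33%
```

The value of a shared template is comparability: when every proposal computes TCO and ROI the same way, approvers can weigh options against each other instead of re-litigating methodology.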

Architecture decision

Async-first, sync only when necessary. The default across the industry is sync-first. I inverted it. The reason was mathematical: across three time zones and a 40-person org, sync meetings were imposing a tax of fifteen hours per person per week on the wrong people. Async-first rituals cut that tax and, more importantly, produced a written record the rest of the company could read later. The sync time that remained was about decisions, not updates.
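
A back-of-envelope version of that math, using the headcount and hours named above; the split between status updates and genuine decision-making is an illustrative assumption, not a measured figure.

```python
# Sync-meeting tax, back of the envelope. Headcount and hours per week
# come from the case study text; the update share is an assumed figure.
headcount = 40
sync_hours_per_person_per_week = 15
update_share = 0.7  # assumed fraction of sync time spent on status updates

weekly_tax = headcount * sync_hours_per_person_per_week  # 600 person-hours/week
recoverable = weekly_tax * update_share                  # 420 person-hours/week

print(f"Weekly sync tax: {weekly_tax} person-hours")
print(f"Recoverable by moving updates async: {recoverable:.0f} person-hours")
```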

Rituals that survive without me. A consultant who builds rituals only they can run has optimized for their own indispensability. I optimized for the opposite. Every template came with an owner who was not me, a rotation plan, and a kill switch.

Artifacts I authored or led

  • Decision log template, adopted across product, engineering, and data science
  • Roadmap artifact with explicit links to prompt specifications and evaluation results
  • Async standup and retrospective templates, adopted beyond the original team
  • Ownership matrix for every recurring artifact (who owns it, who is backup, when it deprecates); sketched as data after this list
  • Business case template with cost-benefit analysis, ROI projection, and TCO model sections
  • AI enablement curriculum for 200+ non-technical employees: hands-on training, product documentation, stakeholder communication templates
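
As one concrete shape for the ownership matrix above, here is a minimal sketch; the artifact names, owners, and review dates are hypothetical placeholders.

```python
# Minimal shape of the ownership matrix; artifact names, roles, and dates
# are hypothetical placeholders, not the engagement's actual roster.
from dataclasses import dataclass
from datetime import date

@dataclass
class ArtifactOwnership:
    artifact: str    # the recurring artifact or ritual
    owner: str       # accountable owner (deliberately not the consultant)
    backup: str      # named backup so handoffs survive absences
    review_by: date  # deprecation checkpoint: renew the ritual or retire it

matrix = [
    ArtifactOwnership("Async standup", "Eng lead", "PM", date(2025, 3, 1)),
    ArtifactOwnership("Decision log", "PM", "DS lead", date(2025, 6, 1)),
]

# Anything past its checkpoint without an explicit renewal is retired.
overdue = [a.artifact for a in matrix if a.review_by < date.today()]
```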

Results

Within a quarter, product, engineering, and data science were all pulling from the same roadmap artifact. Decision logs were being linked in pull requests and prompt reviews. Retrospectives stopped surfacing the same five "we should document this" complaints. The 200+ employee training program ran twice, then continued without me.

The team was never the problem. The system they were working inside was. Fix the system, and good people ship faster without anyone working harder.

Post-engagement, the rituals kept running. The decision log template is still in use. The async standup format is still in use. That is the metric I care about most: not whether something works while I am in the room, but whether it works when I leave.

About these numbers

The figures on this page are drawn from internal program reporting I authored or co-authored as the practitioner on the engagement. They are reproduced here in rounded form. They were not produced by an independent third party, and proprietary detail has been omitted where required by the engagement.

Where lift figures (CSAT, accuracy, handle time, hallucination rate) are cited, they reflect pre/post comparisons against a matched baseline using the cohort, time window, and measurement instrument noted in the relevant case study. Volume and adoption figures come from production analytics dashboards. Cost figures reflect either avoided spend or unlocked budget in the named fiscal period.

  • 200+ employees enabled: counted as unique participants who completed the AI enablement curriculum across cohorts during and immediately after the engagement.
  • $500K+ unlocked: aggregate of strategic budget approved against business cases I authored or co-authored during the engagement, in the named fiscal period.
  • 4 months to predictable delivery cadence: measured as the point at which retrospectives stopped surfacing process-coherence complaints and roadmap, decision log, and spec artifacts converged on canonical locations.
  • 0 rituals retired post-engagement: tracked at the time of writing, based on direct check-ins with the team. Subject to organic change as the team evolves.
  • Thoughtful Execution: a framework I authored and use across engagements; not a third-party methodology.

What I would do differently

Start the "owners who are not me" conversation in week one, not week four. The longer I held an artifact, the harder it was to hand off cleanly, even with a clean template. A related note: build the 200+ enablement curriculum in parallel with the internal rituals rather than after them. Internal process and external enablement are two faces of the same question, and they are strongest when designed together.

Collaborators

Worked directly with product and engineering leads, data scientists, and L2 support operations. Coordinated with scrum masters in adjacent teams to keep rituals compatible at the program level. Partnered with enablement and L&D on the 200+ employee training curriculum and its follow-on cohorts.

Skills demonstrated

  • Scrum with adapted rituals
  • Async-first process design
  • Documentation architecture
  • Thoughtful Execution framework
  • Cross-functional facilitation
  • Change management without mandate
  • Business case writing (ROI, TCO, build vs. buy)
  • Enterprise AI enablement curriculum design
  • Measuring what persists post-engagement

Let's build

Seriously, let's chat about your next AI project.

I take a small number of engagements each quarter through Intelligent CX Consulting. If what you're reading here sounds like the thing you need, get in touch.