Making HR Leaders Feel Like Data Scientists (With AI)
Designed an AI-powered self-service platform to scale Alioth's proven organizational health consulting from 70 manually served clients to hundreds by automating the qualitative analysis that previously required expert analysts.
The platform didn't ship before the company closed, but the internal analyst tool validated the approach and proved the core product hypothesis.
Business Context
Alioth had proven its OrgDx consulting model, growing to 70 clients and achieving an NPS of 100, using entirely manual processes. Analysts would survey employees via SurveyMonkey, manually analyze responses in spreadsheets, and deliver insights as beautifully designed PDFs created in InDesign. Each engagement took weeks.
My role was to design the software platform that would automate this proven service—enabling clients to explore their own data while maintaining the quality of insights that made the manual process successful. With one developer and limited runway, every design decision had to balance ambitious vision with pragmatic execution.
The Challenge
Clients loved the expert analysis and actionable recommendations, but wanted faster delivery, real-time data access, and the ability to ask their own questions of the data.
The hardest problem was automating qualitative insight synthesis. Analysts would read hundreds of open-text responses, identify recurring themes, find representative quotes, and write observations like "There are morale issues at the California location concentrated in the Engineering team." We needed AI that could find these patterns while maintaining the quality and nuance that made the manual analysis valuable.
With one full-stack developer, we had to choose what to automate first and what to keep human.
Before and After
My Approach
Designed for progressive capability, not big-bang launch:
Phase 1: Internal analyst tool (automated quantitative rollups)
Phase 2: Customer-facing exploration (filtering, demographic comparisons)
Phase 3: AI-generated observations with analyst oversight
This let us ship value immediately while building toward the full vision
Made strategic trade-offs with one developer:
Kept survey authoring in SurveyMonkey initially rather than rebuilding what worked
Focused on quantitative data first, so users could explore it while the qualitative analysis was still being written
Designed full vision but built in phases based on value delivered
Designed AI-assisted analysis to augment, not replace, expert judgment:
AI would auto-generate observations from qualitative data using theme detection and sentiment analysis
Analysts could edit, approve, or reject AI-generated insights before clients saw them (see the sketch after this list)
System would flag significant demographic deltas and surface representative quotes as evidence
This hybrid approach let us ship faster than waiting for AI to be "perfect" while maintaining quality
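The hybrid workflow is easier to see as data. Here is a minimal sketch in Python, assuming the AI step returns a drafted observation with its detected theme, sentiment, and supporting quotes; all names and fields are illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """An AI-drafted insight that an analyst reviews before any client sees it."""
    text: str                # e.g. "Morale issues at the California location..."
    theme: str               # detected theme, e.g. "morale"
    sentiment: float         # -1.0 (negative) to +1.0 (positive)
    quotes: list[str] = field(default_factory=list)  # representative quotes as evidence
    status: str = "draft"    # draft -> approved | rejected

def review(obs: Observation, approve: bool, edited_text: str | None = None) -> Observation:
    """The analyst-oversight step: edit, approve, or reject an AI draft."""
    if edited_text:
        obs.text = edited_text
    obs.status = "approved" if approve else "rejected"
    return obs

def publishable(observations: list[Observation]) -> list[Observation]:
    """Clients only ever see analyst-approved observations."""
    return [o for o in observations if o.status == "approved"]
```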
Prioritized "superpower" feeling over comprehensive features:
Designed filtering UI that let HR leaders instantly compare segments: "How do millennials at our Boston office feel vs the company overall?"
Auto-surfaced significant deltas ("Female employees scored -4% on this dimension"), as sketched below
Made qualitative responses explorable, not buried in PDF appendices
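The delta surfacing itself is simple arithmetic on top of the filters. A rough sketch, assuming each response already carries a normalized 0-100 dimension score and demographic fields; the threshold and field names are made up for illustration:

```python
def segment_delta(responses, dimension, segment_filter, threshold=3.0):
    """Compare a segment's average score on one dimension to the company overall
    and flag the gap if it exceeds a display threshold (in points)."""
    overall = [r[dimension] for r in responses]
    segment = [r[dimension] for r in responses if segment_filter(r)]
    if not segment:
        return None
    delta = sum(segment) / len(segment) - sum(overall) / len(overall)
    return {"delta": round(delta, 1), "flagged": abs(delta) >= threshold}

# e.g. "How does our Boston office feel vs the company overall?"
print(segment_delta(
    responses=[
        {"engagement": 72, "office": "Boston"},
        {"engagement": 64, "office": "Remote"},
        {"engagement": 80, "office": "Boston"},
    ],
    dimension="engagement",
    segment_filter=lambda r: r["office"] == "Boston",
))  # {'delta': 4.0, 'flagged': True}
```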
Lo-Fi Exploration
Key Decision
Why We Automated Quantitative Analysis First
The analysts spent hours rolling up Likert scale responses, calculating NPS, and breaking down demographics in spreadsheets—mechanical work that delayed delivery of their real value: qualitative insight synthesis.
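For context, that rollup work is straightforward arithmetic once responses are structured. A minimal sketch of the kind of calculation the tool took over, with made-up field names rather than the actual survey schema:

```python
from collections import defaultdict

def nps(scores):
    """Net Promoter Score from 0-10 responses: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def breakdown(responses, by):
    """Roll up NPS per demographic segment, e.g. by='location' or by='team'."""
    groups = defaultdict(list)
    for r in responses:
        groups[r[by]].append(r["score"])
    return {segment: nps(scores) for segment, scores in groups.items()}

responses = [
    {"score": 9,  "location": "Boston",     "team": "Sales"},
    {"score": 6,  "location": "California", "team": "Engineering"},
    {"score": 10, "location": "Boston",     "team": "Engineering"},
]
print(nps([r["score"] for r in responses]))   # company-wide NPS: 33
print(breakdown(responses, by="location"))    # {'Boston': 100, 'California': -100}
```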
By automating the quantitative analysis first, we could:
Give clients instant access to basic metrics (response rates, scores, demographic breakdowns)
Free analysts to focus on the hard problem: finding themes in open-text responses
Validate the platform with internal users before exposing clients to it
Learn what clients actually wanted to explore in their data
This meant the customer-facing platform could launch with real analyst-written insights alongside self-service exploration, rather than waiting for AI to be "good enough" to replace expert analysis entirely.
The trade-off: slower path to full automation, but higher quality insights throughout the transition.
Hi-Fidelity Survey Progress
Hi-Fidelity Deep Dives
Validation & Outcome
What shipped:
Internal analyst tool went live, automating quantitative analysis
Designed the complete customer-facing platform across all survey states (running, processing, analyzed)
Created a design system for rapid exploration and iteration
Full UI for data exploration with filtering, demographic comparisons, and quote browsing
Customer-facing platform didn't launch before company closure
What this validated:
Internal analysts immediately adopted the tool—proved the automation worked
AI successfully performed sentiment analysis and identified statistically significant demographic differences (see the sketch after this list)
Clients consistently asked about response rates and wanted to "see the data themselves" during manual engagements—validated the self-service need
Learned that AI quality in 2020 wasn't ready for unsupervised qualitative insight generation—analyst oversight was essential
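The case study doesn't record exactly how significance was checked; one standard way to test whether a demographic gap is real rather than noise is a two-sample Welch's t-test on per-group dimension scores, sketched here with made-up numbers:

```python
from scipy import stats

# Hypothetical scores on one survey dimension (0-100) for two demographic groups
group_a = [72, 68, 75, 80, 66, 74, 71]   # e.g. one office or demographic segment
group_b = [78, 82, 76, 85, 79, 81, 77]   # e.g. the rest of the company

# Welch's t-test (unequal variances): is the gap between the group means significant?
result = stats.ttest_ind(group_a, group_b, equal_var=False)
gap = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
print(f"gap = {gap:.1f} points, p = {result.pvalue:.3f}")  # flag if p < 0.05
```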
What I Learned
Designing AI features in 2019-2020 taught me that the most practical approach is augmentation, not replacement. The analyst-approval workflow we designed is similar to how modern AI tools work: AI does the mechanical work, humans provide judgment. That design instinct—knowing when AI needs human oversight—is more relevant now than ever.
The failure to ship taught me the importance of prioritization at resource-constrained startups. We were a team of three trying to build out two products simultaneously, while still producing the manual version of OrgDx.
OrgDx reinforced the design philosophy I'd developed with SearchDx: the best automation doesn't replace experts—it frees them to do what only humans can do. For SearchDx recruiters, that meant nurturing client relationships. For OrgDx analysts, that meant synthesizing insights from messy qualitative data. Software should handle the mechanical work so people can focus on the irreplaceable human work.