Problem
Analytically related record combinations were difficult to locate and configure consistently with their component elements. Specialists had to wait for a large table to load, search it manually with Ctrl+F, and juggle multiple browser tabs at once. Because of adaptive challenges in the company, my team also lacked robust automated analytics to measure A/B/C test results.
My scope
  • Leading and carrying out problem discovery
  • Product Design, testing method ideation, setup, and analysis
Solutions
  • A/B/C testing of three design variants: a numbered list, a sidebar, and a combination of both, followed by interviews with each cohort after roughly one month of usage
  • A Google Forms survey attached to related Jira tasks to measure CSAT and CET
  • Instructions for users to record Loom clips walking through their impressions and challenges with their variant after completing related Jira tasks
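The survey metrics above can be aggregated with simple arithmetic. The sketch below is a minimal, hypothetical example, assuming 5-point scales and the common convention that ratings of 4 or 5 count as "satisfied" for CSAT; the actual thresholds and scale used in the study may differ, and the sample ratings are illustrative only.

```python
def csat(ratings, satisfied_threshold=4):
    """Percentage of respondents rating at or above the threshold."""
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return 100 * satisfied / len(ratings)

def mean_effort(ratings):
    """Average effort score (CET) across respondents."""
    return sum(ratings) / len(ratings)

# Hypothetical responses for one variant's cohort
variant_ratings = [5, 4, 3, 5, 4]
print(f"CSAT: {csat(variant_ratings):.0f}%")       # 4 of 5 ratings >= 4 -> 80%
print(f"CET:  {mean_effort(variant_ratings):.1f}")  # (5+4+3+5+4)/5 = 4.2
```

Comparing these two numbers across the three cohorts is what the post-task survey was intended to enable.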
Results
  • Adaptive challenges: because of the high workload in the specialist teams, two of the three non-automated data-collection methods (the post-task survey and the Loom walkthrough recordings) could not be applied, so we received feedback only from the post-launch interviews
  • Users compared their variants against having had no solution at all, so each one felt like a major improvement to their workflows: “Record combinations ceased to be a hot potato in our team”
  • Users preferred the scroll-to list (higher CSAT and CET, better discoverability, and more familiar navigation patterns). However, they later complained that identifying similar combinations was difficult and that scrolling fatigue became an issue
[Image: arrow-right icon that opens a drawer]