Leading Foundational Research
More user feedback is better, right? Not when it's siloed and poorly understood. You need a foundation. I started with a meta-analysis of all the information we had. Then, I backfilled knowledge gaps with desk research and created Archetypes. We soon put it all to the test for a critical product launch.
DELIVERABLES
Archetypes, testing plans, weekly research share-outs, findings presentation, videos
METHODS
Ethnographic study, moderated and unmoderated interviews, remote usability testing, affinity diagrams, surveys, concept testing
ROLE
Lead. I was a team of one, but consulted with partners in Product, Data Science, and CS.
Objectives
Start a qualitative research practice
Foster a deeper understanding of customers—especially outside the problem space
Support product beta testing for a poorly-understood user segment
Execution
I studied 2nd year and 3rd year students’ behaviors and emotional experiences during prep for crucial medical exams—the product’s core use case.
I analyzed existing data and feedback (from sources like Mode Analytics and marketing surveys). There was still a lot we didn’t know.
Filling Knowledge Gaps: Answering three big questions
What are people saying? (about the tests generally and OME specifically)
Where are they saying it? (e.g., forums where OME and competitors are discussed)
How do they feel during their journey and how can we help?
Collaboration
Discovering answers by mining data scattered across departments:
Marketing: Customer experience feedback in SurveyMonkey, Net Promoter Score responses in HubSpot, and interview access to AdvoCats (our student loyalty program).
Customer Support: Feedback from our Teamwork account.
Data Science: Segmentation of email lists to recruit testers from various subscription types and demographics.
Social Listening: To strengthen the user archetypes, I used tools like BuzzSumo and Boardreader to discover patterns in study habits, tool preferences, techniques, and, most importantly, emotional highs and lows.
Impact
Invaluable Archetypes:
Proto-archetypes are defined as "assumptive behavioral constructs of a market profile to be validated with future user research." Mine focused on one key question:
What happens when a med student’s struggle to determine what to study becomes a greater barrier to success than the exam itself?
We used these insights about patterns and preferences to form robust screener questions for user testing.
Roadshow: Everyone’s invited
Taking a page from the Facebook research team, I showcased my initial findings in a mini-gallery in our training room.
Beta Testing:
I created a task force with partners from Product, Data Science, and CS to help execute the testing, and assigned everyone clear roles. We used the HEART framework (Happiness, Engagement, Adoption, Retention, Task success) to establish goals for the beta tests.
Reflection
Testing uncovered product shortcomings that led us to postpone the launch and refine the product—likely avoiding financial and reputational damage. The templates I created here (test plans, discussion guides, email templates, incentive structure) formed the foundation of the fully functioning research program I oversee today.