How Glassdoor Gained Confidence in Integral AI Product Decisions with Outset
The Challenge: A New Pressure to Innovate
Glassdoor, a go-to resource for millions of job seekers with more than 200 million employer reviews, salaries, and insights, had an ambitious goal: transform from a job-search destination that people historically visited only when considering a switch into a comprehensive career advice hub that professionals would engage with regularly.
At the same time, like many companies, Glassdoor was under pressure to explore where AI made the most sense in its business.
"We have high-profile competitors that are a lot bigger than us," explained Athena Petrides, Senior Lead User Experience Researcher. "They have a lot more people, they have a lot more resources, and we know that they are leveraging AI, and AI is rapidly expanding the rate at which they're able to innovate."
Glassdoor wanted to learn how website visitors would interact with AI before committing to a full AI rollout strategy. They started with a chatbot because it could give professionals career guidance in between job searches, while also giving Glassdoor a chance to experiment with AI.
But with the MVP came a slew of questions. Would users trust AI-generated advice? Would the tone feel too robotic? Would it be able to pull the right insights from Glassdoor's content-heavy site? Was the information useful and actionable? Were the responses too long, too short, or just right?
The team was already running chat log analysis, but it couldn’t answer deeper questions about tone, trust, or resonance. Qualitative research was a must, but traditional human-moderated interviews would be too slow to keep up with the pace of development. And surveys alone wouldn’t capture the nuance that product, design, and engineering needed to move forward with confidence.
"Our leadership and development team had opinions on how this chatbot was responding to people," said Petrides. "We wanted to make sure that we were getting feedback from a great representation of people, not just people who work for Glassdoor.”
To get high-quality feedback directly from users, without bottlenecks, they turned to Outset.
The Solution: Deep UX Research at Unparalleled Speed and Scale
Petrides partnered with Outset to launch a study powered by AI-moderated interviews. She admits to being a little hesitant at first. What would the participant experience be like? Was it going to take her a long time to learn a new tool?
"As a researcher, sometimes you can get used to the tools that you're constantly using, and the thought of using another tool is really intimidating. The thing that I love about Outset is that it's very, very easy to just pick up and use."
In less than a day, Outset completed 50 in-depth interviews with chatbot users across a variety of job roles and regions — a feat that would have been impossible with traditional interviews. In fact, Petrides couldn't have run any interviews on that timeline herself: she was on a plane to Japan while the study was in progress.
"I would have never been able to moderate 50 sessions in a reasonable amount of time, but with Outset, you can run 50 interviews while you’re on a 20-hour flight,” said Petrides.
It wasn't just fast interviews, though. Outset's instant synthesis and analysis capabilities armed Petrides with an initial batch of insights before her plane landed.
"I was able to analyze sessions and play with the data in little chunks of time. Four hours here, two hours there. So we were saving time and moving faster in a number of different ways."
She was able to pair qualitative depth with broad quantitative insights, exactly what the team needed to make confident product decisions.
"Before, we always had to choose between scale and depth. That's where we feel that Outset really shines. You're able to get both the qualitative depth and quantitative scale."
The Results: Surprising Roadmap-Shaping Insights
When the results of the study were shared widely across the organization — including with some of Glassdoor’s top leaders — there were surprising insights that immediately influenced the product roadmap.
For example, one feature had been intentionally excluded from the MVP, but the research revealed it was table stakes for users, a finding that reshaped Glassdoor's roadmap on the spot.
Another insight revealed an unintended consequence of the chatbot's content strategy. In an attempt to provide users with hyper-specific responses, the chatbot sometimes gave no answer at all, leaving users confused or unsure how to proceed. That feedback prompted the team to shift toward fallback responses that offered broader guidance and next steps, rather than leaving users at a dead end.
The research also gave the team the confidence to define tone, length, and quality standards for future AI responses, all grounded in what real users wanted and expected.
For a team building its first generative AI experience, Outset didn’t just deliver answers — it created a new way to work. Now Petrides and her team can launch and synthesize dozens of interviews in days and capture rich, conversational insights that surveys can’t generate. When speed and scale come together, it's easy to align cross-functional teams with confidence and clarity.
"I was really impressed, because for the first time in a long time, I really felt like this company was doing something different," said Petrides. "Outset is filling a gap we didn't have a tool for."
She took away another valuable lesson, too: AI-moderated research is here to augment her work, not eliminate it.
“It’s not about AI replacing researchers. It’s about what you can do when AI takes care of the logistics, so you can focus on meaning.”
Interested in learning more? Book a personalized demo today!