
Five Things We Learned From Indeed's AI Research Playbook

Aaron Cannon

For most insights teams, qualitative research means a familiar trade-off: rich consumer feedback takes time and budget that not every project can justify. AI-moderated research is changing that equation, and Indeed is one of the companies that has been testing how far that change can go.
Indeed's Brand Research and Insights Lead Angela Kesselman has been using Outset across some of the company's most consequential projects. That includes the messaging framework behind Career Scout — Indeed's biggest product launch in 20 years — as well as iterative creative testing for a new brand campaign launching in Germany and research into the naming for a new product feature.
In a recent webinar with Outset's CEO Aaron Cannon, Kesselman walked through what her team learned about how the research process changes with AI moderation. Here are five takeaways worth bringing back to your team.
1. AI-Moderated Research Holds Up Against Traditional Methods — And the Data Backs It Up
Indeed ran a deliberate parallel test: the same messaging study, same stimulus, same recruit criteria, run through traditional video focus groups and through Outset simultaneously. Both modalities arrived at the same core insights, but the focus groups cost $80,000 and took five weeks while Outset was dramatically cheaper and faster.
One underappreciated reason: AI-moderated sessions cut the dead air. Focus groups carry a lot of logistical friction, from group dynamics and moderator small talk to waiting for someone to respond. In a 1:1 AI session, you get to deep insights faster. Kesselman barely had to cut any content when compressing a 90-minute focus group guide into a 40-minute AI session.
2. The Tool Unlocks Research That Would Never Have Otherwise Happened
Cost reduction is an easy sell internally, but it's not the most important outcome. The bigger shift is that when research is faster and cheaper, more questions can get answered before decisions are made.
Kesselman shared a telling example. Indeed needed to finalize a messaging framework by the end of March, and multiple teams across brand, product marketing, and category marketing were all weighing in on it. A product marketing manager asked almost as an afterthought whether they should validate the new messaging with consumers before locking it in. Normally the timeline would have made that impossible, but with Outset the team got 100 completed interviews in just three days.
Those results turned out to have a large impact: one of the three core messages they tested performed significantly better with active job seekers, while another resonated more with people who weren't actively looking. That's a finding with real implications for how you roll out messaging, and it would have been completely invisible if the team hadn't had a fast enough tool to squeeze in the research.
3. Iterative Testing Is Where the Methodology Really Earns Its Keep
The most instructive case study involved two rounds of creative testing for a new brand campaign launching in Germany. Indeed's agency had developed a concept that tested well on the surface — people responded to it emotionally — but both rounds of research kept surfacing the same underlying problem: the spot didn't make a clear case for Indeed's role in helping job seekers get to the positive outcome it was depicting. That was enough for the insights team to push stakeholders toward a different concept that they felt had more upside, even though it was the riskier choice.
The broader lesson is that because AI-moderated research is fast and affordable enough to run multiple rounds, you can actually test a revised concept after addressing feedback rather than treating every study as a single evaluative pass. That makes for a fundamentally different relationship with creative partners — one where problems get caught earlier and fewer hours get spent developing work in the wrong direction.
4. Knowing What to Do With the Added Volume Is the Hard Part
A common concern with AI-moderated research is that you end up buried in data from dozens or hundreds of interviews and lose the thread of what actually matters. Kesselman's view is that the platform's synthesis tools help a lot here, but researcher judgment is still what separates a useful insight from noise. She uses Outset's proportional theme breakdowns to tell the difference between a finding that's coming from a broad swath of respondents and one that's coming from a vocal few.
She also uses the platform's "chat with your data" feature to pressure-test her own conclusions before writing them up. In one recent project, a theme around negative reactions to an illustration style turned out to be coming from just 5 out of 145 participants — which meant it wasn't worth flagging as a real finding. The tool made that easy to confirm rather than leaving her to wonder.
5. AI-Moderated Research Rewards Researcher Involvement, Not Just Researcher Oversight
AI-moderated research is sometimes pitched as a way to hand off the execution of research so your team can focus on other things. Kesselman's experience at Indeed is closer to the opposite. The researchers who get the most out of the tool are the ones who stay closely involved: sweating the guide design, configuring how the AI probes on specific questions, and owning the synthesis and stakeholder framing at the end. What the AI takes off your plate is facilitation and first-pass analysis.
The harder judgment calls still require someone who understands the business well enough to know what the answer should actually mean: deciding what's a drumbeat versus an outlier, figuring out how to frame a finding for a skeptical stakeholder, or knowing when the data is telling you to walk away from a concept everyone loves.
The Main Takeaway
AI-moderated research isn't a replacement for research rigor. It's a way to apply that rigor to a much wider surface area. The teams getting the most out of it aren't using it to do research more cheaply. They're using it to do research more often, earlier in the process, and on questions that previously wouldn't have made the cut. If you're curious what that could look like for your team, we'd love to show you.
About the author

Aaron Cannon
CEO - Outset
Aaron is the co-founder and CEO of Outset, where he’s leading the development of the world’s first agent-led research platform powered by AI-moderated interviews. He brings over a decade of experience in product strategy and leadership from roles at Tesla, Triplebyte, and Deloitte, with a passion for building tools that bridge design, business, and user research. Aaron studied economics and entrepreneurial leadership at Tufts University and continues to mentor young innovators.
Interested in learning more? Book a personalized demo today!
Book Demo






