
The User Research Strategist: UXR | Impact | Career

Nikki Anderson

Available Episodes (5 of 75)
  • Reframing Democratization | Ned Dwyer (Great Question)
    Listen now on Apple, Spotify, and YouTube.

    Ned Dwyer is the Co-Founder and CEO of Great Question, the all-in-one UX research platform designed to democratize research at scale. After two successful exits as a founder, Ned launched his biggest idea to date: helping enterprise teams better understand their users. He has led Great Question in empowering UX researchers, designers, and product teams to collaborate seamlessly and uncover the insights needed to build something great. With over a decade of experience at the intersection of product, design, and research, Ned has driven innovation and scaled businesses that solve complex challenges for enterprises. Outside of his professional pursuits, Ned loves spending time in sunny Oakland, California with his wife, two kids, and three cats.

    In our conversation, we discuss:
    * What democratization really means and why it's not just about "everyone doing research."
    * The shift in sentiment and adoption, from early-stage startups to 16,000-person enterprises.
    * How researchers can avoid being sidelined by becoming facilitators, not gatekeepers.
    * The role of tools, policies, and AI in scaling high-quality research safely across teams.
    * Strategies for building the business case for tools and training, especially in resource-limited orgs.

    Some takeaways:
    * Democratization is already happening whether you're involved or not. Ned emphasizes that research is already being done across organizations by non-researchers, just not always well. The opportunity for researchers is to step into a facilitator role: setting standards, defining guardrails, and ensuring quality without hoarding control.
    * Big orgs are leading the way, not just scrappy startups. Contrary to early assumptions, the most aggressive adopters of democratization aren't just startups; they're enterprises with thousands of employees. The difference? These organizations invest in scalable infrastructure, permissions, and training to empower safe, responsible research at scale.
    * Guardrails matter more than gatekeeping. With the right systems, democratization doesn't have to mean chaos. Great Question includes features like eligibility criteria, access controls, incentive limits, study approval flows, and AI-powered report validation. These guardrails enable research at scale without compromising integrity or participant experience.
    * Make your case by speaking leadership's language. To advocate for democratization tools or training, tie your request to business goals: reduced legal risk, better participant experience, efficiency gains, and fewer headcount needs. Use the "researcher effort score" to quantify pain points and show progress over time.
    * Want more influence? Get close to the money. Strategic researchers don't wait for requests; they go to sales, marketing, and product to understand pain points and proactively solve them. Running win/loss research or unblocking customer access helps build trust, grow research demand, and elevate your role beyond usability testing.

    Where to find Ned:
    * Website
    * LinkedIn: Great Question
    * LinkedIn: Ned
    * Twitter/X

    Interested in sponsoring the podcast?
    I'm always looking to partner with brands and businesses that align with my audience. Book a call or email me at [email protected] to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    --------  
    34:53
  • Resume critique series - Part one
    Hi all - this is a free series where I critique anonymized resumes that were submitted to me. If you love the work I do, please consider becoming a paid subscriber to this newsletter. It helps me continue what I do and put this kind of work out into the community. Check out part two here.

    Land your dream job
    Want even more help on your resume, case studies, and job hunt? Take a look at my UXR job bundle, created to help you land your dream job!

    Formulas:
    * [Verb] + [what you did] + [quantifier], which resulted in + [measurable or strategic impact]
      Example: Ran 4 onboarding interviews with new clients, which resulted in redesigned setup steps and a 25% drop in support tickets.
    * [Verb] + [insight you generated] + by [method], leading to + [decision/outcome]
      Example: Uncovered usability issues by synthesizing 12 support calls, leading to a streamlined payment flow.
    * [Verb] + [collaboration/project] + across [team/org], resulting in + [alignment/change]
      Example: Facilitated quarterly review across Product and Ops, resulting in better prioritization and fewer miscommunications.
    * [Verb] + [process/tool/project you led or improved] + [how many/who/what], which resulted in + [business/user impact]
      Example: Improved onboarding workflow used by 3 teams, which resulted in a 25% reduction in support queries.
    * [Verb] + [insight or decision you contributed to] + by [action taken], leading to + [impact on project/team/metric]
      Example: Informed product roadmap by synthesizing 30 customer interviews, leading to launch of 2 new features.
    * [Verb] + [communication or output you created] + that influenced + [stakeholders/team] + to [do what]
      Example: Created user insight brief that influenced PMs to prioritize accessibility fixes.
    * [Verb] + [collaboration you facilitated] + across [teams/functions] + to [goal], resulting in + [change or outcome]
      Example: Facilitated weekly cross-functional syncs across Design and Ops to align on support triage, resulting in 30% faster escalation resolution.
    * [Verb] + [project or task] + within [timeline or budget], resulting in + [measurable business or user value]
      Example: Delivered usability testing project within 2 weeks, resulting in simplified checkout flow and 15% conversion uplift.
    * [Verb] + [problem you solved] + by [how you solved it], which + [impact/result]
      Example: Resolved data duplication issue by implementing a shared tracking template, which reduced manual rework by 80%.

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    --------  
    44:59
  • Inside Games User Research | Steve Bromley (Games User Research)
    Listen now on Apple, Spotify, and YouTube.

    Steve is a games user research consultant, helping teams use player insight to create successful games. He works with publishers, platforms, and studios of all sizes to transform their game development process and build product strategies that combine player data with creativity. He works from ideation to post-launch in order to de-risk game development and make games players love. Prior to this he was a senior user researcher for PlayStation and worked on many of their top European titles, including Horizon Zero Dawn, SingStar, the LittleBigPlanet series, and the PlayStation VR lineup.

    Steve started the Games User Research mentoring scheme, which has linked hundreds of students with industry professionals from top games companies such as Sony, EA, Valve, Ubisoft, and Microsoft. He wrote the bestselling book How To Be A Games User Researcher to share the expertise needed to work in the games industry. He regularly speaks at games industry conferences and on podcasts about games user research and playtesting, and has been recognised as a member of BAFTA. He also wrote the bestselling book Building User Research Teams, and helps teams build impactful research practices in-house.

    In our conversation, we discuss:
    * The evolution of Steve's career from early days at PlayStation to running his own games UX consultancy.
    * The difference between research in games vs. traditional tech, especially around the lack of discovery work.
    * How to measure subjective experiences like "fun," and why that starts by redefining what "fun" even means.
    * The influence of secrecy, creative ownership, and marketing pressure on research methods in the games industry.
    * Real-world methods used in games UX, like mass playtesting labs and segment-based multiplayer analysis.

    Some takeaways:
    * Research in games is heavily evaluative. Unlike traditional UX, which often starts with uncovering user needs, games UX usually kicks in once there's a playable prototype. Because the "user need" in games is often just "make it fun," research focuses more on assessing emotional impact and usability than on early-stage exploration.
    * Measuring fun is both subjective and contextual. Teams often ask, "Is this fun?", but that question is too broad to act on. Steve explains that researchers must first help define what kind of fun is intended, whether that's emotional engagement, replay behavior, or challenge. Only then can appropriate metrics or qualitative signals be applied.
    * Creative ownership adds complexity to stakeholder management. Games are seen as artistic work. Designers may be deeply emotionally invested in their ideas, which can make it harder to embrace critical feedback. This makes relationship-building, empathy, and framing feedback constructively especially important in games UX.
    * Secrecy shapes everything, from methods to sampling. Due to high financial stakes and aggressive marketing timelines, games researchers often can't test publicly. This leads to lab-based studies with high participant control. Mass playtesting labs (20–80 people at once) are common for running controlled, large-scale tests without leaking content.
    * Toxicity and matchmaking need research too. Games with multiplayer or social components must test how players interact, especially when strangers are thrown together online. Teams look at voice/chat features, segmentation by playstyle, and matchmaking fairness to reduce toxicity and create balanced experiences.

    Where to find Steve:
    * Website
    * LinkedIn
    * Twitter/X
    * BlueSky

    Interested in sponsoring the podcast?
    I'm always looking to partner with brands and businesses that align with my audience. Book a call or email me at [email protected] to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    --------  
    31:17
  • Inside Insight: How I use Optimal to set up a prototype test
    In this episode, I cover:
    * Common mistakes teams make when prototype testing becomes routine or rushed.
    * A method for deciding whether a prototype test is even the right approach.
    * Clear goal-setting techniques that make your test focused and relevant.
    * How to define metrics that show both research quality and product value.
    * Writing user tasks that reflect real behavior and reveal friction points.

    Key Takeaways:
    * Low-fidelity prototypes limit learning. If your design doesn't give people room to explore, or fail, you won't see how they truly interact with it. Higher-fidelity versions are much more effective for unmoderated studies.
    * Not every question needs a usability test. If you're looking to understand motivations or needs, observing task flows may not be the right method. Start by asking what kind of data you're actually trying to gather.
    * Goals guide everything. Strong prototype tests begin with clear goals. They shape the tasks, help with team alignment, and create a direct line between what you learn and what changes.
    * Track outcomes that matter to your team. Define a few ways you'll measure success before the test begins, such as friction points found, task completion behaviors, or whether changes from the study affect real usage.
    * Write tasks people can relate to. Use short, specific scenarios rooted in familiar behavior. Instead of vague prompts, give people a purpose and context so their actions reflect how they'd use the product in real life.

    The prototype guide:
    Grab the full prototype guide with all the examples and formulas here and try it out with your next project (or with a project you recently did!).

    Try Optimal:
    Want to try this out on Optimal? You can grab a 20% discount using code Prototype2025 at checkout.

    Interested in sponsoring the podcast?
    I'm always looking to partner with brands and businesses that align with my audience. Reach out to me at [email protected] to learn more about sponsorship opportunities!

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    --------  
    42:56
  • Designing for the Real World | Erik Stoltenberg Lahm (The LEGO Group)
    Listen now on Apple, Spotify, and YouTube.

    Erik is a behavioral scientist with a passion for understanding how people, especially kids, interact with digital experiences. He works at The LEGO Group, where he leads behavioral research to create safer, more inspiring, and more playful digital spaces for children. He specializes in using behavioral science, experimentation, and innovative research methodologies to uncover what kids need and love in digital play. Beyond his professional role, he is a self-proclaimed research methodology nerd, always exploring better ways to understand and test how kids engage with the digital world.

    In our conversation, we discuss:
    * Why ecological validity is critical to meaningful product testing and what it means in practice.
    * How Erik approaches testing with kids at LEGO, including the need for playful environments and cognitive load considerations.
    * The pitfalls of lab-based research and why researchers must move beyond "zoo-like" conditions to see real-world behavior.
    * Ways to mitigate social desirability and authority bias, especially when conducting research with children.
    * How remote research, diary studies, and mixed methods can provide deeper behavioral insights, if done with context in mind.

    Some takeaways:
    * Validity is about realism. Erik defines ecological validity as the extent to which research reflects real-world behavior. While traditional labs optimize for internal validity, in product development what matters is whether your findings will translate when people are distracted, tired, or juggling multiple tasks.
    * Don't study lions at the zoo. One of Erik's standout metaphors urges researchers to avoid overly sanitized environments. Testing products in sterile labs might remove variables, but it also strips away the chaotic, layered reality where your product must actually succeed. Aim for the "Serengeti," not the zoo.
    * Researching with kids requires creativity, play, and caution. Kids aren't small adults; they process and respond differently. Erik emphasizes using play as a language, minimizing cognitive load, and focusing on behavioral observation over verbal responses. A child saying "I loved it" means little if they looked disengaged the whole time.
    * Remote testing can work if grounded in real-life context. Remote methods like diary studies and follow-up interviews can capture valuable insights, especially if paired with contextual in-person research first. The key is triangulating methods and validating self-reports with observed behavior.
    * Think beyond usability; map the behavior chain. A product's ease of use in isolation means little if the behavior it enables is derailed by real-life obstacles. Erik illustrates this with a simple example: refilling soap sounds easy until you're cold, wet, and have other priorities. Designing for behavior means understanding the entire chain around your product.

    Where to find Erik:
    * LinkedIn

    Interested in sponsoring the podcast?
    I'm always looking to partner with brands and businesses that align with my audience. Book a call or email me at [email protected] to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    --------  
    33:03

About The User Research Strategist: UXR | Impact | Career

Interviews with amazing user researchers to uncover concrete, actionable, and tactical advice to help you maximize your user research impact and excel in your career. https://userresearchacademy.substack.com/
