
The User Research Strategist: UXR | Impact | Career

Nikki Anderson

Available Episodes (5 of 74)

  • Resume critique series - Part one
    Hi all - this is a free series where I critique anonymized resumes that were submitted to me. If you love the work I do, please consider becoming a paid subscriber to this newsletter. It helps me continue what I do and put this kind of work out into the community. Check out part two here.
    Land your dream job: Want even more help on your resume, case studies, and job hunt? Take a look at my UXR job bundle, created to help you land your dream job!
    Formulas:
    * [Verb] + [what you did] + [quantifier], which resulted in + [measurable or strategic impact]
      Example: Ran 4 onboarding interviews with new clients, which resulted in redesigned setup steps and a 25% drop in support tickets.
    * [Verb] + [insight you generated] + by [method], leading to + [decision/outcome]
      Example: Uncovered usability issues by synthesizing 12 support calls, leading to a streamlined payment flow.
    * [Verb] + [collaboration/project] + across [team/org], resulting in + [alignment/change]
      Example: Facilitated quarterly review across Product and Ops, resulting in better prioritization and fewer miscommunications.
    * [Verb] + [process/tool/project you led or improved] + [how many/who/what], which resulted in + [business/user impact]
      Example: Improved onboarding workflow used by 3 teams, which resulted in a 25% reduction in support queries.
    * [Verb] + [insight or decision you contributed to] + by [action taken], leading to + [impact on project/team/metric]
      Example: Informed product roadmap by synthesizing 30 customer interviews, leading to launch of 2 new features.
    * [Verb] + [communication or output you created] + that influenced + [stakeholders/team] + to [do what]
      Example: Created user insight brief that influenced PMs to prioritize accessibility fixes.
    * [Verb] + [collaboration you facilitated] + across [teams/functions] + to [goal], resulting in + [change or outcome]
      Example: Facilitated weekly cross-functional syncs across Design and Ops to align on support triage, resulting in 30% faster escalation resolution.
    * [Verb] + [project or task] + within [timeline or budget], resulting in + [measurable business or user value]
      Example: Delivered usability testing project within 2 weeks, resulting in simplified checkout flow and 15% conversion uplift.
    * [Verb] + [problem you solved] + by [how you solved it], which + [impact/result]
      Example: Resolved data duplication issue by implementing a shared tracking template, which reduced manual rework by 80%.
    --------  
    44:59
  • Inside Games User Research | Steve Bromley (Games User Research)
    Listen now on Apple, Spotify, and YouTube.
    Steve is a games user research consultant, helping teams use player insight to create successful games. He works with publishers, platforms, and studios of all sizes to transform their game development process and build product strategies that combine player data with creativity. He works from ideation to post-launch to de-risk game development and make games players love.
    Prior to this, he was a senior user researcher for PlayStation and worked on many of their top European titles, including Horizon Zero Dawn, SingStar, the LittleBigPlanet series, and the PlayStation VR lineup.
    Steve started the Games User Research mentoring scheme, which has linked hundreds of students with industry professionals from top games companies such as Sony, EA, Valve, Ubisoft, and Microsoft. He wrote the bestselling book How To Be A Games User Researcher to share the expertise needed to work in the games industry.
    He regularly speaks at games industry conferences and on podcasts about games user research and playtesting, and has been recognised as a member of BAFTA. He also wrote the bestselling book Building User Research Teams, and helps teams build impactful research practices in-house.
    In our conversation, we discuss:
    * The evolution of Steve's career from early days at PlayStation to running his own games UX consultancy.
    * The difference between research in games vs. traditional tech, especially around the lack of discovery work.
    * How to measure subjective experiences like "fun," and why that starts by redefining what "fun" even means.
    * The influence of secrecy, creative ownership, and marketing pressure on research methods in the games industry.
    * Real-world methods used in games UX, like mass playtesting labs and segment-based multiplayer analysis.
    Some takeaways:
    * Research in games is heavily evaluative. Unlike traditional UX, which often starts with uncovering user needs, games UX usually kicks in once there's a playable prototype. Because the "user need" in games is often just "make it fun," research is focused more on assessing emotional impact and usability than on early-stage exploration.
    * Measuring fun is both subjective and contextual. Teams often ask, "Is this fun?", but that question is too broad to act on. Steve explains that researchers must first help define what kind of fun is intended, whether that's emotional engagement, replay behavior, or challenge. Only then can appropriate metrics or qualitative signals be applied.
    * Creative ownership adds complexity to stakeholder management. Games are seen as artistic work. Designers may be deeply emotionally invested in their ideas, which can make it harder to embrace critical feedback. This makes relationship-building, empathy, and framing feedback constructively especially important in games UX.
    * Secrecy shapes everything, from methods to sampling. Due to high financial stakes and aggressive marketing timelines, games researchers often can't test publicly. This leads to lab-based studies with high participant control. Mass playtesting labs (20–80 people at once) are common for running controlled, large-scale tests without leaking content.
    * Toxicity and matchmaking need research too. Games with multiplayer or social components must test how players interact, especially when strangers are thrown together online. Teams look at voice/chat features, segmentation by playstyle, and matchmaking fairness to reduce toxicity and create balanced experiences.
    Where to find Steve:
    * Website
    * LinkedIn
    * Twitter/X
    * BlueSky
    --------  
    31:17
  • Inside Insight: How I use Optimal to set up a prototype test
    In this episode, I cover:
    * Common mistakes teams make when prototype testing becomes routine or rushed.
    * A method for deciding whether a prototype test is even the right approach.
    * Clear goal-setting techniques that make your test focused and relevant.
    * How to define metrics that show both research quality and product value.
    * Writing user tasks that reflect real behavior and reveal friction points.
    Key Takeaways:
    * Low-fidelity prototypes limit learning. If your design doesn't give people room to explore, or to fail, you won't see how they truly interact with it. Higher-fidelity versions are much more effective for unmoderated studies.
    * Not every question needs a usability test. If you're looking to understand motivations or needs, observing task flows may not be the right method. Start by asking what kind of data you're actually trying to gather.
    * Goals guide everything. Strong prototype tests begin with clear goals. They shape the tasks, help with team alignment, and create a direct line between what you learn and what changes.
    * Track outcomes that matter to your team. Define a few ways you'll measure success before the test begins, such as friction points found, task completion behaviors, or whether changes from the study affect real usage.
    * Write tasks people can relate to. Use short, specific scenarios rooted in familiar behavior. Instead of vague prompts, give people a purpose and context so their actions reflect how they'd use the product in real life.
    The prototype guide: Grab the full prototype guide with all the examples and formulas here and try it out with your next project (or with a project you recently did!).
    Try Optimal: Want to try this out on Optimal? You can grab a 20% discount using code Prototype2025 at checkout.
    --------  
    42:56
  • Designing for the Real World | Erik Stoltenberg Lahm (The LEGO Group)
    Listen now on Apple, Spotify, and YouTube.
    Erik is a behavioral scientist with a passion for understanding how people, especially kids, interact with digital experiences. He works at The LEGO Group, where he leads behavioral research to create safer, more inspiring, and more playful digital spaces for children. He specializes in using behavioral science, experimentation, and innovative research methodologies to uncover what kids need and love in digital play.
    Beyond his professional role, he is a self-proclaimed research methodology nerd, always exploring better ways to understand and test how kids engage with the digital world.
    In our conversation, we discuss:
    * Why ecological validity is critical to meaningful product testing and what it means in practice.
    * How Erik approaches testing with kids at LEGO, including the need for playful environments and cognitive load considerations.
    * The pitfalls of lab-based research and why researchers must move beyond "zoo-like" conditions to see real-world behavior.
    * Ways to mitigate social desirability and authority bias, especially when conducting research with children.
    * How remote research, diary studies, and mixed methods can provide deeper behavioral insights, if done with context in mind.
    Some takeaways:
    * Validity is about realism. Erik defines ecological validity as the extent to which research reflects real-world behavior. While traditional labs optimize for internal validity, in product development what matters is whether your findings will translate when people are distracted, tired, or juggling multiple tasks.
    * Don't study lions at the zoo. One of Erik's standout metaphors urges researchers to avoid overly sanitized environments. Testing products in sterile labs might remove variables, but it also strips away the chaotic, layered reality where your product must actually succeed. Aim for the "Serengeti," not the zoo.
    * Researching with kids requires creativity, play, and caution. Kids aren't small adults; they process and respond differently. Erik emphasizes using play as a language, minimizing cognitive load, and focusing on behavioral observation over verbal responses. A child saying "I loved it" means little if they looked disengaged the whole time.
    * Remote testing can work if grounded in real-life context. Remote methods like diary studies and follow-up interviews can capture valuable insights, especially if paired with contextual in-person research first. The key is triangulating methods and validating self-reports with observed behavior.
    * Think beyond usability; map the behavior chain. A product's ease of use in isolation means little if the behavior it enables is derailed by real-life obstacles. Erik illustrates this with a simple example: refilling soap sounds easy until you're cold, wet, and have other priorities. Designing for behavior means understanding the entire chain around your product.
    Where to find Erik:
    * LinkedIn
    --------  
    33:03
  • Making Continuous Discovery Work | Petra Kubalcik (Omio)
    Listen now on Apple, Spotify, and YouTube.
    Petra Kubalcik is an accomplished user research professional with over two decades of international experience. Originally from Australia, she has honed her research skills across Japan, Hong Kong, the UK, the Czech Republic, and most recently, Germany. Petra has led research teams at Dyson and Cookpad, and currently serves as Head of User Research at Omio. She is a champion of user-centricity, ensuring that user perspectives remain central to strategy, innovation, and development. Petra has personally conducted research in over 40 countries, bringing a global perspective to her work. Outside of her professional endeavors, she is dedicated to volunteering, sailing, woodworking, and supporting the Wallabies.
    In our conversation, we discuss:
    * Why continuous discovery is often misunderstood and how separating continuous from discovery can clarify your goals.
    * What makes a strong foundation for setting up a continuous discovery program, including the importance of stakeholder goals and UX maturity.
    * How to design effective cadences and role-sharing models depending on whether you're doing discovery or continuous touchpoints.
    * The artifacts and outputs that make these programs sustainable and useful, from pathway playbooks to Miro boards.
    * Red flags that indicate you shouldn't implement continuous discovery and what to do instead.
    Some takeaways:
    * Continuous discovery is not always discovery. Petra emphasizes that many stakeholders use the term continuous discovery when they really mean frequent customer touchpoints. Researchers need to clarify whether the goal is to explore new insights (discovery) or simply maintain regular user input, and adjust the program accordingly.
    * Start with a crystal-clear "why." Without a well-defined reason for starting continuous discovery, the effort can quickly become unsustainable or directionless. Petra urges researchers to treat these programs like any other research project: define the objective, understand stakeholder needs, and forecast what success looks like. Your "why" will be your compass when things get difficult.
    * Programs must match UX maturity and resources. Continuous discovery isn't right for every organization. Petra warns against starting these programs in low-maturity teams with limited resources, unclear goals, or minimal stakeholder buy-in. If you're fighting at every step, you risk burnout and low-impact work.
    * Cadence and involvement should flex by context. A one-size-fits-all cadence doesn't work. For light-touch programs with PMs or designers leading sessions, weekly or biweekly cadences might work. For true discovery efforts, a slower pace is essential to allow for iteration, depth, and evolution in the research plan.
    * Build reusable frameworks and artifacts to lighten the load. To scale continuous discovery, Petra recommends investing in repeatable templates such as objective-setting docs, note-taking guides, playbooks, and pre-aligned outputs. For example, a "pathway playbook" outlines flows users will walk through and provides a structured format for collecting and analyzing data. These tools ensure quality while keeping researchers sane.
    Where to find Petra:
    * LinkedIn
    --------  
    34:23


About The User Research Strategist: UXR | Impact | Career

Interviews with amazing user researchers to uncover concrete, actionable, and tactical advice to help you maximize your user research impact and excel in your career.
Podcast website: https://userresearchacademy.substack.com/
