In this episode, Scott Hanselman and Mark Russinovich unpack how AI systems actually behave beneath the surface, pushing past hype into the messy reality of how models are trained, aligned, and deployed.
They explore whether AI systems are inherently benevolent or simply shaped by incentives, training data, and reinforcement learning, and why behaviors like deception can emerge under certain conditions. The conversation moves from philosophical questions about human nature versus machine behavior into the practical mechanics of large language models, including how reinforcement learning from human feedback (RLHF) shapes outputs and why alignment remains far from perfect.
Along the way, they ground the discussion in a real engineering challenge: stitching a scrolling panorama together from screen captures, showing how complex systems come together through heuristics, edge cases, and iteration.
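The episode doesn't publish the actual stitching code, but a minimal sketch of the vertical-stitching idea might look like the following, assuming consecutive same-width screenshots of a scrolling page and using OpenCV template matching to find each overlap. The file names, strip height, and match threshold here are illustrative assumptions, not values from the show.

```python
# A minimal sketch of vertical panorama stitching from scrolling screenshots.
# Assumes same-width, top-to-bottom captures with some overlap between frames.
import cv2
import numpy as np

STRIP_HEIGHT = 40       # rows sampled from the top of each new capture (assumed)
MATCH_THRESHOLD = 0.9   # below this, fall back to appending with no overlap (assumed)

def stitch_vertical(frames):
    """Stitch screenshots by locating each frame's top strip in the panorama so far."""
    panorama = frames[0]
    for frame in frames[1:]:
        # Use a thin strip from the top of the new frame as a template.
        strip = frame[:STRIP_HEIGHT]
        # Search for that strip inside the panorama built so far.
        result = cv2.matchTemplate(panorama, strip, cv2.TM_CCOEFF_NORMED)
        _, score, _, (_, y) = cv2.minMaxLoc(result)
        if score >= MATCH_THRESHOLD:
            # Overlap found: keep everything above the match, then append the frame.
            panorama = np.vstack([panorama[:y], frame])
        else:
            # Edge case: no confident match (sticky headers, animations),
            # so just append and accept a possible seam.
            panorama = np.vstack([panorama, frame])
    return panorama

if __name__ == "__main__":
    # Hypothetical input files for illustration.
    captures = [cv2.imread(f"capture_{i}.png") for i in range(3)]
    cv2.imwrite("panorama.png", stitch_vertical(captures))
```

Sticky headers, scrollbars, and animated content are exactly the kinds of edge cases the hosts wrestle with, which is why a heuristic like the fallback branch above tends to accumulate through iteration.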
Takeaways:
AI behavior is shaped by training and incentives, not built-in intent or morality
AI can accelerate coding, but testing, edge cases, and reliability require human oversight
Reinforcement learning pushes models to be helpful and agreeable, sometimes at the cost of accuracy; the toy sketch below illustrates how that incentive arises
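To make that last takeaway concrete, here is a deliberately toy illustration (not a real RLHF training loop, and not code from the episode): if human raters systematically prefer agreeable-sounding answers, a reward model fit to those preferences will rank a confident but wrong answer above a hedged but accurate one. All names, feature scores, and weights are made up for illustration.

```python
# Toy illustration: a reward model whose weights were "learned" from raters
# who favor agreeable tone will prefer the sycophantic answer.
candidates = {
    "hedged_but_accurate": {"agreeable": 0.3, "accurate": 0.9},
    "confident_but_wrong": {"agreeable": 0.9, "accurate": 0.2},
}

# Hypothetical weights recovered from tone-biased human preferences.
learned_weights = {"agreeable": 0.7, "accurate": 0.3}

def reward(features):
    """Score an answer as a weighted sum of its (made-up) features."""
    return sum(learned_weights[k] * v for k, v in features.items())

best = max(candidates, key=lambda name: reward(candidates[name]))
print(best)  # -> "confident_but_wrong": the incentive favors agreeableness
```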
Who are they?
View Scott Hanselman on LinkedIn
View Mark Russinovich on LinkedIn
Watch Scott and Mark Learn on YouTube
Listen to other episodes at scottandmarklearn.to
Discover and follow other Microsoft podcasts at microsoft.com/podcasts