
Lock and Code

Malwarebytes
Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.

Available Episodes

5 of 125
  • The new rules for AI and encrypted messaging, with Mallory Knodel
    The era of artificial intelligence everything is here, and with it come everyday surprises about exactly where the next AI tools might pop up. Major corporations are pushing customer support functions onto AI chatbots, Big Tech platforms offer AI image generation for social media posts, and even Google now includes AI-powered overviews in everyday searches by default.

    The next gold rush, it seems, is in AI, and for a group of technical and legal researchers at New York University and Cornell University, that could be a major problem. But to understand their concerns, some explanation is needed first, and it starts with Apple’s own plans for AI.

    Last October, Apple unveiled a service it calls Apple Intelligence (“AI,” get it?), which provides the latest iPhones, iPads, and Mac computers with AI-powered writing tools, image generators, proofreading, and more. One notable feature in Apple Intelligence is Apple’s “notification summaries.” With Apple Intelligence, users can receive summarized versions of a day’s worth of notifications from their apps. That could be useful for an onslaught of breaking news notifications, or for an old college group thread that won’t shut up.

    The summaries themselves are hit-or-miss with users. One iPhone customer learned of his own breakup from an Apple Intelligence summary that said: “No longer in a relationship; wants belongings from the apartment.”

    What’s more interesting about the summaries, though, is how they interact with Apple’s messaging and text app, Messages. Messages is what is called an “end-to-end encrypted” messaging app. That means that only a message’s sender and its recipient can read the message itself. Even Apple, which moves the message along from one iPhone to another, cannot read the message.

    But if Apple cannot read the messages sent on its own Messages app, then how is Apple Intelligence able to summarize them for users? That’s one of the questions that Mallory Knodel and her team at New York University and Cornell University tried to answer with a new paper on the compatibility between AI tools and end-to-end encrypted messaging apps.

    Make no mistake, this research isn’t into whether AI is “breaking” encryption by doing impressive computations at never-before-observed speeds. Instead, it’s about whether the promise of end-to-end encryption, of confidentiality, can be upheld when the messages sent under that promise can be analyzed by separate AI tools.

    And while the question may sound abstract, it’s far from being so. Already, AI bots can enter digital Zoom meetings to take notes. What happens if Zoom permits those same AI chatbots to enter meetings that users have chosen to be end-to-end encrypted? Is the chatbot another party to that conversation, and if so, what is the impact?

    Today, on the Lock and Code podcast with host David Ruiz, we speak with lead author and encryption expert Mallory Knodel about whether AI assistants can be compatible with end-to-end encrypted messaging apps, what motivations could sway current privacy champions into chasing AI development instead, and why these two technologies cannot co-exist in certain implementations.

    “An encrypted messaging app, at its essence is encryption, and you can’t trade that away—the privacy or the confidentiality guarantees—for something else like AI if it’s fundamentally incompatible with those features.”

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.
    --------  
    47:06
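    The end-to-end encryption promise described above (only the sender and recipient hold the keys, so the server relaying the message cannot read it) can be illustrated with a toy key exchange. This is a deliberately simplified sketch, not real cryptography: real messengers like Messages or Signal use vetted primitives (X25519, AES or ChaCha20) and protocols such as the double ratchet, and every name below is hypothetical.

    ```python
    # Toy illustration of end-to-end encryption: Alice and Bob agree on a
    # shared secret via Diffie-Hellman, so the server that relays their
    # messages never holds a decryption key. NOT real cryptography.
    import hashlib
    import secrets

    P = 2**127 - 1  # a small Mersenne prime; real systems use X25519
    G = 3

    def keypair():
        """Generate a private exponent and the public value to share."""
        priv = secrets.randbelow(P - 2) + 1
        return priv, pow(G, priv, P)

    def shared_key(priv, other_pub):
        """Both endpoints derive the same 32-byte key; the relay cannot."""
        return hashlib.sha256(str(pow(other_pub, priv, P)).encode()).digest()

    def xor_stream(key, data):
        """Toy keystream cipher: applying it twice restores the input."""
        stream = bytearray()
        counter = 0
        while len(stream) < len(data):
            stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
            counter += 1
        return bytes(b ^ k for b, k in zip(data, stream))

    alice_priv, alice_pub = keypair()
    bob_priv, bob_pub = keypair()

    # Only the public values cross the wire; the server sees them but
    # cannot compute the shared key from them.
    k_alice = shared_key(alice_priv, bob_pub)
    k_bob = shared_key(bob_priv, alice_pub)
    assert k_alice == k_bob

    ciphertext = xor_stream(k_alice, b"see you at 6")
    # The server relays `ciphertext` without being able to read it;
    # only Bob's copy of the key recovers the plaintext.
    assert xor_stream(k_bob, ciphertext) == b"see you at 6"
    ```

    An AI assistant that summarizes messages has to sit somewhere in this picture: either it runs on an endpoint that already holds the key, or plaintext must be handed to some third party, which is exactly the tension Knodel's paper examines.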
  • Is nowhere safe from AI slop?
    You can see it on X. You can see it on Instagram. It’s flooding community pages on Facebook and filling up channels on YouTube. It’s called “AI slop” and it’s the fastest, laziest way to drive engagement.

    Like “clickbait” before it (“You won’t believe what happens next,” reads the trickster headline), AI slop can be understood as the latest online tactic for getting eyeballs, clicks, shares, comments, and views. This time around, however, the methodology is turbocharged with generative AI tools like ChatGPT, Midjourney, and Meta AI, which can all churn out endless waves of images and text with few restrictions.

    To rack up millions of views, a “fall aesthetic” account on X might post an AI-generated image of a candle-lit café table overlooking a rainy, romantic street. Or, perhaps, to make a quick buck, an author might “write” and publish an entirely AI-generated crockpot cookbook—they may even use AI to write the glowing reviews on Amazon. Or, to sway public opinion, a social media account may post an AI-generated image of a child stranded during a flood with the caption “Our government has failed us again.”

    There is, currently, another key characteristic of AI slop online, and that is its low quality. The dreamy, Vaseline sheen produced by many AI image generators is easy (for most people) to spot, and common mistakes in small details abound: stoves have nine burners, curtains hang on nothing, and human hands sometimes come with extra fingers.

    But little of that has mattered, as AI slop has continued to slosh about online. There are AI-generated children’s books being advertised relentlessly on the Amazon Kindle store. There are unachievable AI-generated crochet designs flooding Reddit. There is an Instagram account described as “Austin’s #1 restaurant” that only posts AI-generated images of fanciful food, like Moo Deng croissants, Pikachu ravioli, and Obi-Wan Canoli. There’s the entire phenomenon on Facebook that is now known only as “Shrimp Jesus.”

    If none of this is making much sense, you’ve come to the right place. Today, on the Lock and Code podcast with host David Ruiz, we’re speaking with Malwarebytes Labs Editor-in-Chief Anna Brading and ThreatDown Cybersecurity Evangelist Mark Stockley about AI slop—where it’s headed, what the consequences are, and whether anywhere is safe from its influence.

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

    For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    --------  
    38:38
  • A day in the life of a privacy pro, with Ron de Jesus
    Privacy is many things for many people. For the teenager suffering from a bad breakup, privacy is the ability to stop sharing her location and to block her ex on social media. For the political dissident advocating against an oppressive government, privacy is the protection that comes from secure, digital communications. And for the California resident who wants to know exactly how they’re being included in so many targeted ads, privacy is the legal right to ask a marketing firm how they collect their data.

    In all these situations, privacy is being provided to a person, often by a company or that company’s employees. The decisions to disallow location sharing and block social media users are made—and implemented—by people. The engineering that goes into building a secure, end-to-end encrypted messaging platform is done by people. Likewise, the response to someone’s legal request is completed by either a lawyer, a paralegal, or someone with a career in compliance.

    In other words, privacy, for the people who spend their days at these companies, is work. It’s their expertise, their career, and their to-do list. But what does that work actually entail?

    Today, on the Lock and Code podcast with host David Ruiz, we speak with Transcend Field Chief Privacy Officer Ron de Jesus about the responsibilities of privacy professionals today and how experts balance the privacy of users with the goals of their companies.

    De Jesus also explains how everyday people can meaningfully judge whether a company’s privacy “promises” have any merit by looking into what the companies provide, including a legible privacy policy and “just-in-time” notifications that ask for consent for any data collection as it happens.

    “When companies provide these really easy-to-use controls around my personal information, that’s a really great trigger for me to say, hey, this company, really, is putting their money where their mouth is.”

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

    For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    http://creativecommons.org/licenses/by/4.0/
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
    --------  
    33:44
  • These cars want to know about your sex life (re-air) (Lock and Code S05E25)
    Two weeks ago, the Lock and Code podcast shared three stories about home products that requested, collected, or exposed sensitive data online. There were the air fryers that asked users to record audio through their smartphones. There was the smart ring maker that, even with privacy controls in place, published data about users’ stress levels and heart rates. And there was the smart, AI-assisted vacuum that, through the failings of a group of contractors, allowed an image of a woman on a toilet to be shared on Facebook.

    These cautionary tales involved “smart devices,” products like speakers, fridges, washers and dryers, and thermostats that can connect to the internet. But there’s another smart device that many folks might forget about that can collect deeply personal information—their cars.

    Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what types of data modern vehicles can collect, and what the car makers behind those vehicles could do with those streams of information. In the episode, we spoke with researchers at Mozilla—working under the team name “Privacy Not Included”—who reviewed the privacy and data collection policies of many of today’s automakers.

    To put it shortly, the researchers concluded that cars are a privacy nightmare. According to the team’s research, Nissan said it can collect “sexual activity” information about consumers. Kia said it can collect information about a consumer’s “sex life.” Subaru passengers allegedly consented to the collection of their data by simply being in the vehicle. Volkswagen said it collects data like a person’s age and gender and whether they’re using their seatbelt, and it could use that information for targeted marketing purposes. And those are just the highlights.

    Explained Zoë MacDonald, content creator for Privacy Not Included: “We were pretty surprised by the data points that the car companies say they can collect… including social security number, information about your religion, your marital status, genetic information, disability status… immigration status, race.”

    In our full conversation from last year, we spoke with Privacy Not Included’s MacDonald and Jen Caltrider about the data that cars can collect, how that data can be shared, how it can be used, and whether consumers have any choice in the matter.

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

    For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    --------  
    45:00
  • An air fryer, a ring, and a vacuum get brought into a home. What they take out is your data
    This month, a consumer rights group out of the UK posed a question to the public that they’d likely never considered: Were their air fryers spying on them?

    By analyzing the associated Android apps for three separate air fryer models from three different companies, a group of researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone. “In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason,” the group wrote in its findings.

    While it may be easy to discount the data collection requests of an air fryer app, it is getting harder to buy any type of product today that doesn’t connect to the internet, request your data, or share that data with unknown companies and contractors across the world.

    Today, on the Lock and Code podcast, host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. These include kitchen appliances that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.

    These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.

    Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

    For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

    Show notes and credits:
    Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    http://creativecommons.org/licenses/by/4.0/
    Outro Music: “Good God” by Wowa (unminus.com)

    Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
    --------  
    26:59


About Lock and Code

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.