5 Things Microsoft’s “Red Team” Has Learned About AI (Video)

Video Transcript

Whether it relates to a consumer or business application, it seems now that not a day goes by that we don’t hear about artificial intelligence (AI). And while it appears as though tools are cropping up left and right, top companies like Microsoft have had a finger on the pulse of this emerging tech for years.

At the heart of Washington-based Microsoft’s AI legacy sits the “Red Team.” According to the software giant, “red teaming” is essentially the practice of seeking vulnerabilities within a technology, emulating the approach of a nefarious actor, for example, in order to patch potential failures.

Microsoft has its own Red Team specifically for AI and says this group has been working behind the scenes since 2018 to make AI safer and more secure. Now the company says it hopes to share some of the insights this group gained after several years of probing AI in search of failures, security gaps, or the potential for the technology to create harmful content.

The fear surrounding these risks applies to nearly every industry. Could AI in a healthcare setting lead to the exploitation of private information? Could it be deadly for patients if errors were to take place? And when it comes to manufacturing, the concerns center more on potential cyber-attacks, the need for sophisticated skills to support them, and, of course, high costs.

Despite these barriers, there are a lot of possibilities for AI, and Microsoft believes that what it’s learned throughout its analysis should be shared — including these five key points:

  1. AI red teaming is more expansive than traditional red teaming in the sense that it explores not only security but also outcome-based concerns unique to AI, such as bias.
  2. Failures can come from both nefarious actors and the actions of regular users, and Microsoft learned that good troubleshooting accounts for both.
  3. AI systems appear to change at a faster rate than traditional software, meaning reviews must establish systematic analysis that monitors the applications over time.
  4. Generative AI monitoring may require multiple rounds and attempts when you consider the content can change over time.
  5. Response to failures within AI often requires multiple solutions, something Microsoft refers to as a “defense-in-depth approach.”

Ultimately, Microsoft calls its AI Red Team “front and center” in an effort to safeguard AI products and earn the trust of its customers. This all comes down to adhering to the company’s principles that its AI tools feature “fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.”

Ray Diamond
Ray is an expert in grinding polycrystalline diamond (PCD) and cubic boron nitride (CBN) tools. He works with technologies like laser machining, EDM, and CBN wheels to deliver ultra-precise results for hard and brittle tool materials.