This Microsoft security team stress-tests AI for its worst-case scenarios
The company’s Red Team simulates attacks to uncover risks before bad actors do.
[Photo: Panitan/Adobe Stock]
As soon as new AI products are released, security researchers and pranksters begin probing them for weaknesses, trying to push systems to violate their own safety precautions and coax them into producing anything from offensive content to instructions for building weapons.
After all, AI risks are not just theoretical. In recent months, various AI companies have faced criticism for software that allegedly contributed to mental illness and suicide, generated nonconsensual fake nude images of real people, and aided hackers in committing cybercrime. At the same time, techniques…