Microsoft has reiterated its position on facial recognition, saying US law enforcement may not use Azure OpenAI Service for that purpose.
Facial recognition has been a controversial topic, with many issues surrounding bias and racial profiling, not to mention privacy concerns. AI has supercharged facial recognition, but Microsoft isn't backing away from its stance.
In its Code of conduct for Azure OpenAI Service, Microsoft makes clear that the service cannot be used for certain surveillance purposes or for facial recognition by law enforcement.
Integration with Azure OpenAI Service must not:
- without the individual’s valid consent, be used for ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal information, including biometric data; or
- be used for facial recognition purposes by or for a police department in the United States; or
- be used for any real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individuals in uncontrolled, “in the wild” environments, which includes (without limitation) police officers on patrol using body-worn or dash-mounted cameras using facial recognition technology to attempt to identify individuals present in a database of suspects or prior inmates.
Microsoft’s stance illustrates how the rules around fledgling technology continue to shift, as companies and governments try to balance adopting new tech with protecting users.