The U.S. Federal Trade Commission has repeatedly warned businesses not to use biased artificial intelligence, citing ways such tools could violate consumer protection laws. A recent blog post from the FTC explains how AI tools can reflect "troubling" gender and racial biases, especially when they are applied to decisions in employment or housing while being advertised as unbiased, or when they are trained on improperly gathered data.
From now on, whenever the FTC perceives this happening, it may intervene, according to the post on the agency's official website. But what does this mean for the future of AI as digital infrastructure continues to evolve?
The FTC discourages falsely advertising AI as 'unbiased'
"In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver," wrote Elisa Jillson, an FTC attorney, in the blog post. She stressed that this advice matters most when companies promise decisions free of gender or racial bias. "The result may be deception, discrimination — and an FTC law enforcement action." The FTC's acting chair, Rebecca Kelly Slaughter, recently said bias in algorithms constitutes "an economic justice issue," according to a Protocol report.
Both Jillson and Slaughter have said companies may face enforcement under the Fair Credit Reporting Act or the Equal Credit Opportunity Act if their AI-powered decisions prove discriminatory, and deceptive claims about such tools could fall under Section 5 of the FTC Act, which prohibits unfair or deceptive practices. "It's important to hold yourself accountable for your algorithm's performance," said Jillson. "Our recommendations for transparency and independence can help you do just that."
"But keep in mind that if you don't hold yourself accountable, the FTC may do it for you," Jillson warned.
The FTC may have its sights on big tech
In case you missed it, AI is a double-edged sword: it can mitigate human bias in hiring processes, but it can also reproduce or exacerbate prejudice already in play, especially if it's trained on data that takes a skewed status quo as normal. Facial recognition, for example, is known to misidentify the faces of Black people, which has led to false identifications and even wrongful arrests by police. In 2019, Google's "hate speech" detector was revealed to be twice as likely to flag posts by Black users as those by users of other racial backgrounds, a disparity that reflects inequities in the data such systems learn from. Nonbinary and transgender people are also often misclassified by automated gender recognition software.
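The mechanism behind "skewed status quo as normal" is easy to demonstrate. Below is a minimal sketch using entirely synthetic data and hypothetical group names (none of this comes from the FTC post): historical hiring labels under-approve qualified candidates from one group, and a naive model that simply learns the historical approval rates inherits that gap as if it were a fact about the candidates.

```python
import random

random.seed(0)

# Hypothetical synthetic history: equally qualified candidates, but
# biased past decisions approved qualified "group_b" candidates only
# 60% of the time, versus 90% for "group_a".
def make_history(n=1000):
    history = []
    for _ in range(n):
        group = random.choice(["group_a", "group_b"])
        qualified = random.random() < 0.5
        approve_rate = 0.9 if group == "group_a" else 0.6
        hired = qualified and random.random() < approve_rate
        history.append((group, qualified, hired))
    return history

# A naive "model" that learns each group's historical hire rate among
# qualified candidates -- it treats the skewed labels as ground truth.
def learned_rates(history):
    rates = {}
    for group in ("group_a", "group_b"):
        outcomes = [hired for g, q, hired in history if g == group and q]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

rates = learned_rates(make_history())
# The model now scores group_b candidates lower, not because of their
# qualifications but because of the bias baked into the training labels.
```

The point of the sketch is that no step in the "model" mentions group membership as a criterion; the disparity arrives entirely through the labels, which is exactly the failure mode the FTC post warns about.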
And recently, the European Union signaled it may be less forgiving of some AI applications, and could ban uses such as "indiscriminate surveillance" and social credit scoring, according to an initial Politico report. Whatever form they take, the FTC's next steps are coming, but some critics are skeptical of the agency's ability to enforce new AI rules in future confrontations with tech companies. At a recent Senate hearing, FTC Commissioner Rohit Chopra said that "time and time again, when large firms flagrantly violate the law, the FTC is unwilling to pursue meaningful accountability measures," pushing for Congress and others to "turn the page on the FTC's perceived powerlessness." In the tech world, where AI is built and marketed, this could hint at stronger responses to corporate juggernauts like Microsoft, Google, Facebook, and even Amazon.