
Artificial Intelligence: US Federal Trade Commission warns publishers

In a blog post last week, the US Federal Trade Commission (FTC) singled out ‘media companies’ as one of the sectors where bias in artificial intelligence (AI) technology could harm wider society.

Ominously, the Commission reminded media companies of their obligation to hold themselves accountable – otherwise ‘be ready for the FTC to do it for you’. The FTC added that it has ‘decades of experience enforcing three laws important to developers and users of AI’: Section 5 of the FTC Act, the Fair Credit Reporting Act (FCRA), and the Equal Credit Opportunity Act (ECOA).

The FTC is particularly concerned that biases in well-intentioned algorithms could result in discriminatory outcomes that ‘perpetuate racial inequity’. The Commission reminded companies that it was essential to test AI algorithms on an ongoing basis to make sure they didn’t ‘discriminate on the basis of race, gender, or other protected class’.

The FTC pointed, by way of example, to its PrivacyCon 2020 showcase, where researchers presented work showing that algorithms developed for benign purposes such as advertising can nonetheless produce racially biased results.

To assist companies innovating with artificial intelligence, the FTC laid out a series of concerns and guidelines that businesses should adhere to, abridged below:

  • Start with the right foundation: ‘If a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.’
  • Watch out for discriminatory outcomes: ‘It’s essential to test your algorithm – both before you use it and periodically after that – to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class.’ (A minimal sketch of such a periodic check follows this list.)
  • Embrace transparency and independence: ‘Use transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.’
  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results: ‘In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver. The result may be deception, discrimination – and an FTC law enforcement action.’
  • Tell the truth about how you use data: ‘In the FTC’s guidance on AI last year, we advised businesses to be careful about how they get the data that powers their model. We noted the FTC’s complaint against Facebook, which alleged that the social media giant misled consumers by telling them they could opt in to the company’s facial recognition algorithm, when in fact Facebook was using their photos by default.’
  • Do more good than harm: ‘To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good. Let’s say your algorithm will allow a company to target consumers most interested in buying their product. Seems like a straightforward benefit, right? But let’s say the model pinpoints those consumers by considering race, color, religion, and sex – and the result is digital redlining.’
  • Hold yourself accountable – or be ready for the FTC to do it for you: ‘For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA. Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously.’
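
The FTC’s guidance stops at the principle and does not prescribe a particular testing method. Purely as an illustration of the periodic check described under ‘Watch out for discriminatory outcomes’ above, the sketch below compares a model’s favourable-outcome rates across groups and flags any group falling below the common ‘four-fifths’ disparate-impact heuristic. The function name, toy data, and threshold are assumptions chosen for illustration, not FTC requirements.

```python
# Illustrative sketch only: the FTC does not mandate a specific test.
# The 4/5 ("four-fifths") threshold is a common disparate-impact
# heuristic, not a regulatory requirement.
from collections import defaultdict

def disparate_impact_report(outcomes, groups, threshold=0.8):
    """Compare favourable-outcome rates across groups.

    outcomes: iterable of 0/1 model decisions (1 = favourable).
    groups:   iterable of group labels for the same records.
    Returns per-group rates and any groups whose rate falls below
    `threshold` times the highest group's rate.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

if __name__ == "__main__":
    # Toy data: decisions from a hypothetical targeting model.
    decisions = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
    labels = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    rates, flagged = disparate_impact_report(decisions, labels)
    print("rates:", rates)            # {'a': 0.6, 'b': 0.4}
    print("flagged groups:", flagged) # {'b': 0.4} falls below 0.8 * 0.6
```

In the spirit of the FTC’s guidance, a check like this would be run both before deployment and on a recurring schedule afterwards, since outcome disparities can emerge as the data a model sees in production drifts.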

As your company launches into the new world of artificial intelligence, keep your practices grounded in established FTC consumer protection principles.

The FTC’s full blog post can be read on the Commission’s website.