D.C.'s proposed "Stop Discrimination by Algorithms Act" takes direct aim at automated decision-making that produces discriminatory outcomes.
In Washington, D.C., a new piece of legislation is making waves in the tech and legal communities. The Stop Discrimination by Algorithms Act, unveiled by D.C. Attorney General Karl Racine last December, aims to prevent discriminatory outcomes in automated decision-making. The proposed law applies to any organization that meets at least one of the following conditions: it holds personal information on more than 25,000 D.C. residents; it averaged more than $15 million in annual revenue over the prior three years; it is a data broker; or it is a service provider that performs algorithmic decision-making for others.
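The applicability test is a simple disjunction of the four conditions. A minimal sketch in Python (the thresholds come from the bill as summarized above; the function itself is a hypothetical illustration, not anything the Act defines):

```python
def act_applies(dc_residents_with_data: int,
                avg_annual_revenue_3yr: float,
                is_data_broker: bool,
                provides_algorithmic_services: bool) -> bool:
    """Return True if an organization meets at least one of the
    Act's four coverage conditions (thresholds per the bill)."""
    return (dc_residents_with_data > 25_000
            or avg_annual_revenue_3yr > 15_000_000
            or is_data_broker
            or provides_algorithmic_services)

# A mid-sized lender holding data on 30,000 D.C. residents is covered
# even though it falls under the revenue threshold:
print(act_applies(30_000, 5_000_000, False, False))  # True
```

Note that because the conditions are disjunctive, a small data broker with minimal revenue and few D.C. records would still be covered.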
The Act contains four main provisions. First, organizations must disclose how they use personal information in AI-enabled algorithmic decision-making. This disclosure must be provided to individuals before any algorithmic decision is made, and a separate notice must be sent if the organization takes an adverse action against them. A detailed report of this information must also be filed with the D.C. attorney general's office.
Second, the Act prohibits organizations from using algorithms to discriminate against individuals in certain situations, specifically access to "important life opportunities" such as credit, education, employment, housing, insurance, and places of public accommodation. The traits protected under this provision are race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, and disability.
Third, organizations must undergo annual third-party audits of their algorithmic decision-making that look for disparate-impact risks. The audit trail must document each type of algorithmic decision-making process, the data used in that process, the data used to train the algorithm, and any test results, and it must be retained for at least five years.
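The Act itself does not prescribe a disparate-impact metric, so what auditors would actually compute is an open question. One common heuristic from U.S. employment law is the "four-fifths rule": a group whose selection rate falls below 80% of the highest group's rate is flagged for review. A sketch of such a check (assumed methodology, purely illustrative):

```python
def disparate_impact_flags(outcomes: dict, threshold: float = 0.8) -> dict:
    """Flag groups whose selection rate is below `threshold` times
    the highest group's rate (the four-fifths rule).

    outcomes: dict mapping group name -> (selected_count, total_count).
    Returns a dict mapping group name -> True if flagged.
    """
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Example: group_b's 60% approval rate is ~0.67 of group_a's 90%,
# below the 0.8 cutoff, so it is flagged for closer review.
audit = disparate_impact_flags({
    "group_a": (90, 100),
    "group_b": (60, 100),
})
print(audit)  # {'group_a': False, 'group_b': True}
```

A flag under this rule is a signal for deeper investigation, not proof of unlawful discrimination; a real audit would also document the data and test results the Act requires.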
Finally, the Act establishes enforcement mechanisms. The D.C. attorney general is empowered to investigate potential violations and seek fines of up to $10,000 per violation. The Act also creates a private right of action, allowing individuals to bring civil suits against organizations that violate it.
However, the Act has drawn criticism. Key concerns include the limited scope of bias testing, the difficulty of eliminating algorithmic bias and enforcing compliance, AI transparency challenges, and the need for human-review safeguards to prevent unintended harms. Critics also worry that the private right of action could open the floodgates to frivolous lawsuits, imposing substantial costs on organizations.
Despite these challenges, the Stop Discrimination by Algorithms Act is a significant step toward ensuring fairness and accountability in AI decision-making. As the D.C. Council takes up the Act at hearings, it will need to weigh the potential benefits against the potential unintended consequences, striking a balance that promotes innovation while preventing discrimination.
- The Stop Discrimination by Algorithms Act, presented by D.C. attorney general Karl Racine, is aimed at preventing discriminatory outcomes in automated decision-making.
- The Act includes a policy that requires organizations to disclose how they use personal data in AI-enabled algorithmic decision-making.
- The Act prohibits organizations from using algorithms to discriminate against individuals in crucial life areas, such as credit, education, employment, housing, insurance, or a place of public accommodation.
- To ensure fairness and accountability in AI decision-making, the Act demands annual audits of algorithmic decision-making by third parties, looking for disparate-impact risks.
- The Act provides enforcement mechanisms, empowering the D.C. attorney general to investigate potential violations and seek fines, and allowing individuals to bring civil suits against organizations that breach the Act.
- The Act has faced critiques and concerns, including the limited scope of bias testing, AI transparency issues, and the need for human-review safeguards to prevent unintended harms, but it remains a significant step toward regulating artificial intelligence in the digital economy.