Senators Ron Wyden and Cory Booker and Representative Yvette Clarke have introduced the Algorithmic Accountability Act of 2023, which aims to protect individuals affected by artificial intelligence (AI) systems used in critical decisions such as those related to housing, credit, and education.
The bill not only applies to new generative AI systems used for critical decisions but also covers other AI and automated systems. It seeks to address the issue of biased algorithms in AI systems that have led to discriminatory outcomes in various sectors. These biased algorithms have resulted in flawed decisions, such as automated processes in hospitals understating the health needs of Black patients, discriminatory recruiting and hiring tools, and facial recognition systems with higher error rates among people with darker skin.
By requiring companies to conduct impact assessments for effectiveness, bias, and other factors, the bill aims to ensure accountability and transparency in AI systems. It also calls for the creation of a public repository at the Federal Trade Commission (FTC) to store information about these systems, and it would add 75 staff members to the FTC to enforce the law.
The Algorithmic Accountability Act has garnered broad support from Democratic senators and representatives, who see the need for regulations and standards to protect people from bias and discrimination as AI and algorithmic decision-making become more prevalent in sectors like healthcare, finance, housing, and education.
The bill has also been endorsed by experts and civil society organizations who recognize the importance of safeguards in AI systems. Transparency, accountability, and monitoring of these systems' impacts are crucial for ensuring their responsible use and preventing harm.
In conclusion, the Algorithmic Accountability Act of 2023 seeks to address biased AI systems and algorithms by requiring impact assessments, establishing a public repository, and adding enforcement staff to the FTC. With the support it has received, it has the potential to create a safer and more inclusive future for AI technology.