It’s been nearly a year since the release of ChatGPT, the milestone that sparked today’s AI frenzy. While the hype surrounding AI sometimes runs ahead of reality, top corporate leaders are now rallying behind AI regulation, a notable shift, given that new technologies have traditionally fought regulatory oversight. Sundar Pichai (CEO of Alphabet), Brad Smith (President of Microsoft), and Sam Altman (CEO of OpenAI) are among the supporters of AI regulation, and there are several reasons for their stance.
First, well-crafted regulations would give AI companies the confidence to invest in products without fear that those products will later be regulated out of existence. With clear rules in place, companies can commit millions, or even billions, of dollars to new technologies, knowing their investments won’t be outlawed down the line.
Regulations would also give companies a single, unified set of rules rather than a patchwork of disparate laws across different states. Complying with 50 separate state-level rulebooks would be costly and inefficient; a consistent national framework would make it far easier for companies to operate across state lines.
By actively calling for regulation, companies also get a hand in shaping the rules that will govern AI. That lets them push for requirements they consider reasonable and cost-effective, and steer away from heavy-handed mandates that would be expensive to comply with.
Regulations benefit consumers as well as companies: without them, people have no way of knowing whether the AI they use is safe. Clear rules can help address problems such as AI-driven phone scams, AI-based discrimination in financial services, and bias in areas like the justice system and the housing market.
Looking abroad, the European Union has already made significant progress on AI legislation. In April 2021, the EU proposed rules that tier obligations by the level of risk a system poses: outright bans on certain applications, assessment requirements for high-risk uses, and transparency requirements for other AI systems. China has also introduced AI regulations of its own, requiring companies to submit their algorithms for review.
Although US lawmakers and the Biden administration have begun engaging with major AI companies, they will need to move quickly to keep pace with the technology. Other countries have already built precedent and momentum, and it’s crucial for the US to establish its own rules so that AI can flourish responsibly and ethically.
In conclusion, AI leaders are advocating for regulation because it offers stability: companies can invest in AI without fear that their products will later be outlawed. Regulation also gives consumers confidence in the safety of AI products and addresses the risks and biases those products can introduce. By learning from the progress other countries have made, the US has an opportunity to shape its own rules and foster the growth of AI while protecting society’s interests.