Congress Must Exercise Caution in AI Regulation
Artificial intelligence (AI) technologies are all the rage in Washington, D.C. these days. Policymakers are hearing stories of utopian opportunities and certain doom from technologists, CEOs, and public interest groups, and trying to figure out when and how Congress should intervene.
Congress should be paying attention to AI technologies. Many are tools with extraordinary potential. They can help users distill large volumes of information, manage numerous tasks more efficiently, and change how we work – for good and for ill, depending on where you sit. Influential corporate and government actors recognize the ability of AI to redistribute power in ways they can’t control, which is one reason so many are seeking Congressional intervention now.
But Congress should regulate with extreme caution, if at all, and focus on use of the tools rather than the tools themselves. If policymakers are worried about privacy, they should pass a strong privacy law. If they are worried about law enforcement abuse of face recognition, they should restrict that use. And so on. Above all, they must reject the binary thinking that AI technologies are going to lead to either C-3PO or the Terminator.
Unfortunately, policymakers seem more inclined to move fast and break things.
AI Technologies Should Not Be Regulated by a Commission
At recent hearings, several Members of Congress proposed creating an independent government commission with extraordinary powers over AI technology, including the ability to license AI technology development.
This is a bad idea. Historically, agencies like these have been created when an industry has reached a certain level of maturity and become an essential part of our society and economy. For example, independent commissions oversee telecommunications, medicine, energy, and financial securities. AI technologies, by contrast, are in the early stages of development and are already being integrated across many different industries. As a practical matter, it’s hard to imagine how a single agency could operate effectively.
What is worse, forcing developers to get permission from regulators is likely to lead to stagnation and capture. An army of lobbyists and access to legislators through campaign contributions and revolving doors will ensure that such an agency will favor only the most well-connected corporations with licenses.
Expanding Copyright Will Undermine AI Potential
The same holds true for another set of proposals focused on copyright reform. Rightsholders insist that they are owed compensation for things like the use of training data, even though the use of training data is likely protected under fair use. Much of this stems from a major misunderstanding of how AI generative tools work, which we explain here. Simply put, machine learning does not rest on copyright infringement.
Others may realize as much, so they are looking to change the law to make it so. We’ve seen this before with broadcast television and cable systems. Broadcasters claimed a copyright in the free over-the-air broadcast signals that cable companies were retransmitting on cable TV. The Supreme Court disagreed, finding that no copyright interest rests in the broadcast signal, so TV broadcasters got Congress to create a new right to compensation for “retransmission.”
But even if Congress were to do that for AI training data, who should get paid, and how much? Training data could touch billions of points of information to formulate an output that’s worth very little money. No one wants to believe they will be given only a millionth of a penny or less per use, which is what happens if you divide the value of the output by the vast volume of inputs. And no one will be able to create an AI tool that relies on billions of data points if the costs of doing so are increased to unsustainable levels.
Wednesday 24th May 2023 6:16 pm