Silicon Valley is racing to have a say in how AI is regulated before Congress takes action; OpenAI and Anthropic are both setting up permanent Washington offices.
The New York Times reports that OpenAI's new D.C. office will advocate for expanded data center capacity and push to allow the use of copyrighted materials in AI training without obtaining permissions. Like Anthropic, OpenAI is positioning itself to lobby federal lawmakers on the multiple proposed AI legislative frameworks Congress will consider this session.
The timing appears intentional. With the 18-month review period for President Biden's AI Executive Order beginning soon and Congress considering at least three pieces of AI-related legislation, the window to shape how AI will be used in the U.S. may be closing. OpenAI appears to view the lawsuits currently seeking to stop AI training practices as a real threat to its business model, and thus sees copyright exemptions as part of that fight.
The push by Silicon Valley's AI companies into lobbying comes as they face mounting legal challenges over their use of copyrighted material. According to Built In, U.S. law prohibits copyrighting anything generated by an AI system, since no human was involved in creating the content. Whether using someone else's copyrighted works (such as books) to train an AI system constitutes "fair use," however, remains an open question, with several lawsuits still making their way through the courts.
According to The New York Times, OpenAI's lobbying agenda specifically includes advocating for the use of others' copyrighted material in AI training without obtaining permissions. This puts OpenAI directly opposite creators who argue that the use of their work without permission or compensation amounts to large-scale theft.
Beyond the legal questions, the potential financial stakes for businesses built around AI video content are high. As Scroll News notes, one Spain-based video AI startup reportedly generates nearly $230M in annual recurring revenue based solely on creating video content with AI. If restrictions were placed on AI training data, many businesses that rely on low-cost training data could suffer significantly.
Unlike past tech-industry attempts to influence regulatory policy, which typically ran through trade associations or retained outside firms, both OpenAI and Anthropic are building internal teams with permanent staff in Washington to advocate for their interests. Both companies are also calling for vast investments in data centers.

As we previously discussed, there is a widening gap between organized advocacy from Silicon Valley and grassroots organizing around AI issues. Discussions on Reddit forums from mid-April reveal frustration among AI supporters about the lack of well-organized, well-funded organizations promoting AI compared with those focused on warning about AI risks. One discussion noted that while tech companies lobby heavily on behalf of AI, some form of decentralized activism may be needed to promote the technology's benefits. The imbalance between corporate and citizen voices in the debate keeps growing.
One additional variable could further disrupt the lobbying picture: AI's growing application in cybersecurity. ProjectMetrics reported in April that an AI system found a 16-year-old vulnerability in FFmpeg (software widely used by video platforms) for less than $50 in computing costs. The finding suggests that AI can play both offensive and defensive roles in cybersecurity, and can do so at very low cost.
This dual-use capability creates a dilemma for lobbyists. Advocating for limited regulation becomes harder once the same systems that produce video content can discover significant vulnerabilities in critical infrastructure at minimal cost.
Creators of AI video content should anticipate continued copyright uncertainty through at least 2026, which will affect every kind of training data strategy. The infrastructure investments AI companies are seeking may create opportunities in data center and power grid development. Growing security applications of AI may lead federal agencies to increase support for both advancing and regulating the technology. Until stronger pro-AI grassroots movements emerge, industry voices will dominate the AI policy conversation. For now, lobbying efforts focus on avoiding limits on usage rather than building positive frameworks for AI use.
