Feeling the Elephant: What Can and Can’t We Learn from Lobbying Disclosures on AI Policy?
Capitol Hill Climbing Part 2
Hello again! I’m still at the Cambridge AI Safety Hub attempting to explain how Big Tech actually influences AI policy in the US legislature. If you haven’t read part 1 of this series, which explains the mechanisms by which money is converted into political influence (lobbyists, non-profits, etc.), you can find it here.
Today, we’ll explore publicly available data on lobbying by special interest groups concerned with AI, what they reveal to the public, and what they keep hidden.
Before I dive into my own analysis, I’d like to highlight the source par excellence for understanding money in US politics, OpenSecrets. If you have a particular company, lobbying firm or donor you’d like to explore further, they will almost certainly have a page of high-quality, somewhat depressing information available.
Where I think OpenSecrets currently falls short is in how it defines industries: would you classify lobbying about AI across the value chain under Computer Software, Electronics Manufacturing & Equipment, Telecom Services & Equipment, or Venture Capital? The answer is probably all four. However, as we’ll discuss when going through my preliminary data viz, the sheer number of parties interested in lobbying on AI development and the variety of products that large tech firms offer make neat groupings difficult.
So, as noted in Part 1, wining and dining is no longer lobbyists’ modus operandi, as they’re constrained by several pieces of legislation, including the Honest Leadership and Open Government Act of 2007. Most relevant for our purposes is the Lobbying Disclosure Act (LDA) of 1995.
Under the LDA, special interest groups (companies, non-profits and the rest) are required to file publicly available reports called LD-2s, which include juicy information such as which individual lobbyists they used, what topics were discussed and how much they were paid.
All these filings to the Senate are standardised and easily accessible via an API, which I hypothesised might reveal some interesting aggregate data on which groups were allocating the most resources and which AI issues parties felt were worth lobbying Congress about. The results were… mixed, but I’ve found that even a failure to feel out the whole elephant over a few hours’ work tells us something about the nature of tech lobbying.
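For the curious, a minimal sketch of what querying these filings might look like is below. The endpoint path and query parameter names are my assumptions about the public Senate LDA REST API (they may differ from the live service), and the AI keyword list is my own; treat this as a starting point, not the scraper behind the visualisation.

```python
from urllib.parse import urlencode

# Assumed base URL for the Senate Lobbying Disclosure Act REST API.
LDA_API = "https://lda.senate.gov/api/v1/filings/"

# Hypothetical search terms for flagging AI-related filings.
AI_TERMS = ["artificial intelligence", "machine learning", "frontier model"]

def build_query_url(term: str, year: int) -> str:
    """Build a filings query URL; parameter names are assumptions."""
    params = {"filing_specific_lobbying_issues": term, "filing_year": year}
    return LDA_API + "?" + urlencode(params)

def mentions_ai(filing: dict) -> bool:
    """Check whether any lobbying activity in a filing mentions an AI term."""
    for activity in filing.get("lobbying_activities", []):
        text = (activity.get("description") or "").lower()
        if any(term in text for term in AI_TERMS):
            return True
    return False

# Example filing shaped like a simplified API response (invented for illustration).
sample = {
    "client": {"name": "Microsoft Corporation"},
    "lobbying_activities": [
        {"description": "Issues related to emerging technologies; "
                        "artificial intelligence; cloud computing"}
    ],
}
print(build_query_url("artificial intelligence", 2024))
print(mentions_ai(sample))  # True
```

In practice you would page through the JSON results and run the keyword check over each filing’s issue descriptions, which is where the bundling problems discussed below start to bite.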
LD-2 Filings on AI Policy Issues in the Current Congressional Session
Your first thought on seeing this visualisation (besides “Wow, he’s really sticking to that colour scheme.”) is probably “Where are state preemption, child safety, defense contractors, financial institutions and the rest?” The answer, in part, lies in the noise inherent to these documents, thanks to issue bundling.
Take, for example, this 2024 Q4 Microsoft report, which falls under the Safety and Risk/Big Tech cell. The report is rich in content. It shows that Microsoft paid $60,000 to West Front Strategies (now S3 Group), a DC-based lobbying firm. We can explore the LinkedIn profiles of the three lobbyists disclosed and see that —
All three are trained lawyers.
One was a former Democratic congressional staffer, including for Amy Klobuchar.
One spent eight years working for former Senate Majority Leader Mitch McConnell.
One spent five years in the Department of Justice.
Yet, the issues that these lobbyists discuss are bundled together. This filing discloses —
“Issues related to emerging technologies; artificial intelligence; cloud computing; supply chain security; Section 230 of the Communications Act of 1934; and, privacy; legislative proposals related to government surveillance and data collection, including issues of transparency and ECPA reform.”
We cannot assess which of these issues were discussed in a perfunctory vs. thorough manner, which ones were prioritised, or which representatives/staff on the Hill were spoken with. Moreover, two further sections, which were not picked up in the data scrape, included lobbying on —
“issues related to antitrust and competition policy”
“Promoting and Respecting Economically Vital American Innovation Leadership (PREVAIL) Act, S 2220. Patent Eligibility Restoration Act of 2023, S. 2140.”
Are these two sections relevant to our discussions about AI? Almost certainly, but counting all filings that include the terms “antitrust and competition policy” would create so much noise as to render the visualisation useless. Another subject matter where this problem arises is “safety” — a word used both in the context of protecting children from access to harmful content and in discussions of mitigating misaligned AI — both issues on which a tech firm would reasonably lobby. Another area where nearly all identified filings were noise was federal preemption of state AI regulations. Given that the issue only gained salience with regard to AI in mid-2025, nearly all prior disclosures in this congressional session related to financial institutions.
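To make the bundling problem concrete, here’s a toy illustration. The category names and keyword lists are my own inventions, not the scheme behind the visualisation: a single bundled issue string lights up several cells at once, while a generic term like “safety” matches a filing that has nothing to do with AI risk.

```python
# Hypothetical category -> keyword map; not the actual classification scheme.
CATEGORIES = {
    "Safety and Risk": ["safety", "risk"],
    "Privacy and Surveillance": ["privacy", "surveillance", "data collection"],
    "Antitrust": ["antitrust", "competition policy"],
    "Intellectual Property": ["patent", "copyright"],
}

def match_categories(issue_text: str) -> list[str]:
    """Return every category whose keywords appear in a filing's issue text."""
    text = issue_text.lower()
    return [cat for cat, kws in CATEGORIES.items()
            if any(kw in text for kw in kws)]

# One bundled disclosure matches multiple cells at once...
bundled = ("Issues related to artificial intelligence; privacy; "
           "government surveillance and data collection; "
           "antitrust and competition policy; supply chain security")
print(match_categories(bundled))    # ['Privacy and Surveillance', 'Antitrust']

# ...while an unrelated child-safety filing still lands in "Safety and Risk".
unrelated = "Child online safety; age verification requirements"
print(match_categories(unrelated))  # ['Safety and Risk']
```

Any keyword scheme broad enough to catch real AI lobbying also sweeps in filings about children’s online safety or bank preemption, which is exactly the trade-off described above.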
However, there are some lessons we can draw from this data. For one, it strengthens the claim I made in part 1 that big tech “floods the zone,” dominating the information flow to Hill staffers and representatives compared to other actors. Another is that VC firms (A16Z) and industry groups (Business Software Alliance, ITIF, National Artificial Intelligence Association) take up plenty of volume, lobbying consistently on export controls, copyright law, safety, and beyond. An area I’d be curious to explore in the next congressional session is frontier lab filings on national security, given Anthropic’s recent stoush with the Pentagon.
The difficulty of drawing high-confidence conclusions from this data demonstrates that disclosure does not equal honesty. The first finding of the 1995 LDA is that “responsible representative Government requires public awareness of the efforts of paid lobbyists.” Yet public awareness is hampered by the fact that the computational capacities of these non-human actors far exceed those of any one individual, while they enjoy the same rights as persons, such as free speech, under Citizens United v. FEC. For individuals to have any chance of critically evaluating how technology firms shape AI policy, and of determining whether those firms are acting for the general good, they need more tools from the government than just an API key.