Microsoft and a16z put aside their differences and join forces to oppose AI regulation

The two biggest forces in two deeply intertwined tech ecosystems, large incumbents and startups, have taken a break from counting their money to jointly plead that governments stop even pondering regulations that might affect their financial interests. Call it innovation.

a16z founding partners Marc Andreessen and Ben Horowitz, along with Microsoft CEO Satya Nadella and President and Chief Legal Officer Brad Smith, make for a group with very different perspectives and interests. As they put it, “Our two companies might not agree on everything, but this is not about our differences.” A truly intersectional collective representing both big business and big money.

But it’s the little guys they claim to be looking out for: that is, all the companies that would have been affected by the recent attempt at regulatory overreach, SB 1047.

Imagine being charged for improperly disclosing an open model! a16z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by the Big Tech companies that, unlike Midha and his impoverished peers, could afford the lawyers needed to comply.

Except that was all disinformation promulgated by Andreessen Horowitz and other moneyed interests that, as backers of multibillion-dollar enterprises, might actually have been affected. In reality, smaller models and startups would have been only marginally affected, because the proposed law specifically protected them.

It’s odd that the very kind of purposeful carve-out for “Little Tech” that Horowitz and Andreessen routinely champion was distorted and minimized by the lobbying campaign they and others ran against SB 1047. (California State Senator Scott Wiener, the bill’s architect, recently spoke about all this at Disrupt.)

The bill had its problems, but its opponents overstated the costs of compliance and failed to meaningfully substantiate claims that it would chill or burden startups.

It’s part of the established playbook that lets Big Tech (with which Andreessen and Horowitz are working closely, despite their posturing) operate at the state level, where it can win (as with SB 1047), while calling for federal solutions it knows will never come, or will have no teeth, thanks to partisan bickering over technical issues and congressional incompetence.

This joint statement on “policy opportunity” is the latter part of that play: after torpedoing SB 1047, they can say they did so only to support federal policy. Never mind that we’re still waiting on the federal privacy law that tech companies have pushed for a decade while fighting state bills.

And what policies do they support? “A variety of responsible market-based approaches.” In other words: hands off our money, Uncle Sam.

Regulation should take “a science- and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology” and should “focus on the risk of bad actors misusing AI.” What this means is that instead of ex ante regulation, there should be ex post punishment when unregulated products are used by criminals for criminal purposes. This approach worked so well for the whole FTX situation that I can see why they espouse it.

“Regulation should only be implemented if its benefits outweigh its costs.” It would take thousands of words to unpack all the interesting ways this idea plays out in this context. But basically, what they are proposing is to put the fox on the henhouse planning committee.

Regulators “must allow developers and startups the flexibility to choose which AI model to use whenever they build a solution, and should not tilt the playing field in favor of any one platform.” The implication is that there is some kind of plan requiring permission to use one model or another. Since there isn’t, this is a straw man.

Here’s one big point that deserves to be quoted in full:

Right to learn: Copyright law is designed to promote the progress of science and useful arts by extending protections to publishers and authors that encourage them to bring new works and knowledge to the public, but not at the expense of the public’s right to learn from those works. Copyright law should not be co-opted to imply that machines should be prevented from using data, the foundation of AI, to learn in the same way as people. Knowledge and unprotected facts, whether or not contained in protected subject matter, should remain free and accessible.

To be clear, the explicit claim here is that software run by multibillion-dollar corporations has a “right” to access any data because it should be able to learn from it “in the same way as people.”

First of all: no. These systems are not like people; they produce data that mimics the human output in their training data. They are complex statistical projection software with a natural language interface. They have no more “right” to any document or fact than Excel does.

Second, the idea that “facts,” by which they mean “intellectual property,” are all these systems are interested in, and that some kind of fact-hoarding cabal is working to prevent them, is a fabricated narrative we have seen before. Perplexity invoked the “facts belong to everyone” argument in its public response to accusations of systematic content theft, and its CEO Aravind Srinivas repeated the fallacy to me onstage at Disrupt, as if the company were being accused of knowing trivia like the distance from the Earth to the Moon.

This isn’t the place for a full breakdown of this particular straw man, but I will briefly point out that while facts may indeed be free agents, the way they are created, say through original reporting and scientific research, involves real costs. That is why the copyright and patent systems exist: not to prevent intellectual property from being widely shared and used, but to incentivize its creation by ensuring that real value can be assigned to it.

Copyright law is far from perfect, and it is probably abused as often as it is used properly. But it was not “co-opted to imply that machines should be prevented from using data”; it exists to prevent bad actors from circumventing the systems of value built around intellectual property.

That is quite plainly the ask here: let the systems we own, operate, and profit from freely use the valuable work of others without compensation. To be fair, that part is “in the same way as people,” because it is people who design, direct, and deploy these systems, and people don’t want to pay for what they don’t have to. They don’t want regulations to change that.

This little policy document contains many other recommendations, which no doubt get more detail in the versions sent directly to lawmakers and regulators through official lobbying channels.

Some ideas are undoubtedly good, if also a little self-serving. “Fund digital literacy programs that help people understand how to use AI tools to create and access information.” Good! Of course, the authors are heavily invested in those tools. Support an “Open Data Commons: accessible data pools managed for the public benefit.” Great! “Examine procurement practices to enable more startups to sell technology to the government.” Awesome!

But these more general, positive recommendations are the kind we see from the industry every year: invest in public resources, speed up government processes. The palatable but inconsequential suggestions are just vehicles for the more consequential ones described above.

Ben Horowitz, Brad Smith, Marc Andreessen, and Satya Nadella want the government to roll back regulation of this lucrative new development, let industry decide which regulations are worth the trade-offs, and invalidate copyright in a way that amounts to a general amnesty for the illegal or unethical practices many suspect enabled AI’s rapid rise. Those are the policies that matter to them, whether or not the kids get digital literacy.
