The Spear In AI’s Back

Authored by Charles Hugh Smith via OfTwoMinds blog,

That real harm will result from the use of AI tools is a given.

AI is like the powerful character in an action movie who looks invincible until they turn around, revealing a fatal spear embedded in their back. The spear in AI’s back is the American legal system, which has been issuing free passes to tech companies and platforms for decades on the idea that limiting innovation will hurt economic growth, so we’d best let tech companies run with few restrictions.

The issuance of free passes to tech monopolies / cartels and platforms may be ending. Letting Big Tech run with few restrictions has led to the smothering of innovation, as tech monopolies do what every monopoly excels at: buying up potential competitors, suppressing competition, pursuing regulatory capture via lobbying, and spending freely on deceptive PR.

Now antitrust regulators are finally looking at the uncompetitive wastelands created by Big Tech and recognizing the union-busting tactics of quasi-monopolies like Starbucks and Amazon. The bloom might be off the Big Tech / monopoly rose.

Enter AI, which offers the thrilling prospect of trillions of dollars in additional profits for purveyors of AI and all those companies which use their AI tools.

The American legal system deals with new technologies much as a reptile digests a meal: slowly. I get email from readers about defending the Constitution, something we all support. I am not an attorney, but my impression of Constitutional law is that it is a tediously complex thicket of case law that must be carefully picked through before we can even begin to understand exactly what we’re defending: every issue anyone might be concerned about has already accumulated an immense load of rulings and arguments.

This is American jurisprudence: advocacy goes to trial and rulings are issued, some setting precedents that will pertain to all future cases and some that will not. The law advances in new fields such as AI as positions are argued before judges / juries and then reviewed by higher courts as losers appeal judgments / rulings.

A great many things we might think are novel have long been settled. Isn’t the Selective Service Act a form of involuntary servitude? Nope, that was settled long ago. The government’s right to draft you to fight in a war of choice is unquestionably the law of the land.

AI has certain novel features which have yet to be decided by the processes of advocacy, rulings and appeals. In general, corporations selling / giving away AI tools claim that these tools create no liability for their issuers because they are akin to software that, for example, adds HTML markup to plain text: a tool that performs a process.
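
To make the distinction concrete, here is a minimal sketch (in Python, my choice; the original names no implementation) of the kind of purely mechanical “process” tool this argument invokes: it transforms the form of the text and exercises no judgment, so it makes no implicit claim beyond the transformation itself.

```python
# A purely mechanical "process" tool: it wraps plain text in HTML paragraph
# tags. It adds no judgment and no intellectual value; the output is fully
# determined by the input. This is the category AI vendors claim to occupy.
import html

def text_to_html(text: str) -> str:
    """Wrap each non-empty line of plain text in a <p> tag."""
    paragraphs = [f"<p>{html.escape(line)}</p>"
                  for line in text.splitlines() if line.strip()]
    return "\n".join(paragraphs)

print(text_to_html("Hello, world.\nA second paragraph."))
# <p>Hello, world.</p>
# <p>A second paragraph.</p>
```

An AI diagnostic tool, by contrast, produces output that is not mechanically determined by its input; it purports to add judgment, which is exactly the implicit claim of utility at issue.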

This strikes me as incomplete. It seems to me that AI, by its very name and nature, is making implicit claims of utility far beyond mere processing of data or text: AI is called AI because it is adding intellectual value to data or text.

All the disclaimers in the world cannot dissolve this implicit claim of utility that adds value. Since I’m not an attorney, I’m not able to put this in proper legal terms; I am using the terminology of philosophy. But the law is a system based on philosophic principles, and so the language of philosophy plays a key role in broadly applicable legal rulings.

Now let’s consider a real-world example. A patient receives a misdiagnosis and suffers harm as a direct result. In our system of law, some person or entity is liable for the consequences of the error, and must pay restitution to those harmed by it.

As fact-finding proceeds, it turns out that an AI tool was used in the initial scanning of the patient’s data. The company that created the AI tool will naturally claim that the tool was intended to be used only under the supervision of a human professional, and that no claims were made as to the accuracy of the tool’s output.

This is a specious argument, as the clear intent of the AI tool is to replace human expertise as a means of lowering the costs of diagnosis by accelerating the process and increasing the accuracy of the diagnosis.

Clearly, the tool was designed for exactly this purpose, and therefore deficiencies in its performance that contributed to the misdiagnosis (for example, the AI tool rating its diagnostic result as highly likely to be accurate) are the responsibility of the company that issued the AI tool.

Should the court find the AI company 1% liable for the misdiagnosis, the principle of joint and several liability means the monetary judgment can be collected from whichever liable parties are able to pay. Should the other parties found liable be unable to cover a $10 million settlement, the AI company might end up paying $9 million of the $10 million, despite its nominal 1% share of fault.
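
A minimal sketch of that arithmetic, for those who want to see it worked out (the function and the defendants’ ability-to-pay figures are hypothetical; only the 1% fault share and the $10 million / $9 million numbers come from the example above):

```python
# Hypothetical illustration of joint and several liability, not legal advice.
# Each defendant first pays its fault-based share (capped by what it can pay);
# the unpaid remainder is then shifted onto whoever can still pay.

def allocate_settlement(settlement, fault_shares, ability_to_pay):
    owed = {d: settlement * share for d, share in fault_shares.items()}
    paid = {d: min(owed[d], ability_to_pay[d]) for d in owed}
    shortfall = settlement - sum(paid.values())
    for d in paid:  # shift the shortfall onto defendants with capacity left
        extra = min(ability_to_pay[d] - paid[d], shortfall)
        paid[d] += extra
        shortfall -= extra
    return paid

# The article's scenario: the AI company is only 1% at fault, but its
# co-defendants can cover just $1 million of a $10 million settlement.
fault_shares   = {"ai_company": 0.01, "others": 0.99}
ability_to_pay = {"ai_company": 50_000_000, "others": 1_000_000}
print(allocate_settlement(10_000_000, fault_shares, ability_to_pay))
# {'ai_company': 9000000.0, 'others': 1000000} -> $9M of $10M despite 1% fault
```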

Off the top of my head, I can foresee dozens of similar examples in which the issuer of an AI tool could be found partially liable for misrepresentations, errors of omission, unauthorized use of confidential intellectual property, and so on, in what can easily become an endless profusion of liability claims.

If the bloom is off the rose of Big Tech, the likelihood of a court assigning liability to those issuing AI tools increases accordingly. If such a ruling is upheld by an appeals court, it will generally enter case law and become the basis for similar lawsuits assigning liability to entities issuing AI tools.

That real harm will result from the use of AI tools is a given. The idea that those issuing these tools should be given a free pass because “we really didn’t mean that you could use the tools to reduce human labor and increase accuracy” does not pass the sniff test, nor will it negate advocacy claiming that these tools implicitly make claims about utility that incur liability.

Use an AI tool, get sued. The Wild West of AI’s claims of zero liability will soon enter the meat grinder of jurisprudence, and implicit claims of utility will be more than enough to incur liability in a court of law–as they should.

The legal spear in AI’s back could prove fatal. A 1% error rate and 1% liability will add up fast.
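
To see why small percentages “add up fast,” consider a back-of-envelope calculation. Every input below is a hypothetical of mine, not a figure from the article, except the 1% error rate and the point that joint and several liability can push the collected share far above a 1% fault share:

```python
# Back-of-envelope annual exposure for a high-volume AI tool; all inputs
# are hypothetical placeholders chosen only to show how the terms multiply.
uses_per_year  = 10_000_000  # diagnoses run through the tool
error_rate     = 0.01        # 1% of outputs contribute to harm
claim_rate     = 0.10        # fraction of errors that become successful claims
avg_settlement = 1_000_000   # average settlement per successful claim
share_paid     = 0.50        # portion collected from the AI company (joint and
                             # several liability can push this far above 1%)

exposure = uses_per_year * error_rate * claim_rate * avg_settlement * share_paid
print(f"Annual exposure: ${exposure:,.0f}")  # Annual exposure: $5,000,000,000
```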

*  *  *

Become a $3/month patron of my work via patreon.com.

Subscribe to my Substack for free

Tyler Durden
Thu, 04/18/2024 – 07:20
