
Injustice Robots: Real and Present Danger of Police Overconfidence in AI

This New Yorker story about unjustified overconfidence in AI — expensive and flashy policing toy technology — reminds me of the trouble with radar detectors.

…technology has only grown more ubiquitous, not least because selling it is a lucrative business, and A.I. companies have successfully persuaded law-enforcement agencies to become customers. […] Last fall, a man named Randal Quran Reid was arrested for two acts of credit-card fraud in Louisiana that he did not commit. The warrant didn’t mention that a facial-recognition search had made him a suspect. Reid discovered this fact only after his lawyer heard an officer refer to him as a “positive match” for the thief. Reid was in jail for six days and his family spent thousands of dollars in legal fees before learning about the misidentification, which had resulted from a search done by a police department under contract with Clearview AI. So much for being “100% accurate.”

You think that’s bad?

Imagine how many people since the 1960s in America have tangled into fines, jail or even being killed due to inaccurate and unreliable “velocity” sensors or “plate recognition” used for racial profiling law enforcement. The police know about technology flaws, and judges too, yet far too often they treat their heavy investments in poorly measured and irregularly operated technology as infallible.

They also have some clever court rules to protect their players in the game. For example, try walking into a court and saying this:

a_max = ±v_acc / t_i

where:
a_max = maximum acceleration
±v_acc = velocity accuracy
t_i = sample time

A speed sensor typically measures the velocity of an object traveling a set distance (between “gates” within range of the sensor). Only targets within these parameters can be fairly detected and read.

…accelerations must not be neglected in the along-track velocity estimation step if accurate estimates are required.

If a radar sensor samples once every second, any velocity change greater than 1.0 mph per second exceeds what it can accurately read. A half-second sample time raises that limit to 2.0 mph per second, a quarter-second sample time to 4.0 mph per second, and so forth.
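To make that arithmetic concrete, here is a minimal Python sketch of the limit calculation, assuming a hypothetical ±1.0 mph velocity accuracy (the real figure depends on the specific radar unit’s specifications):

    # Maximum acceleration a radar can resolve: a_max = v_acc / t_i
    # The velocity accuracy below is a hypothetical example, not a real device spec.
    def max_measurable_acceleration(velocity_accuracy_mph, sample_time_s):
        """Largest velocity change per second the sensor can track accurately."""
        return velocity_accuracy_mph / sample_time_s

    velocity_accuracy = 1.0  # mph, assumed accuracy of the device

    for sample_time in (1.0, 0.5, 0.25):
        limit = max_measurable_acceleration(velocity_accuracy, sample_time)
        print(f"sample time {sample_time:.2f} s -> accurate up to {limit:.1f} mph/s of acceleration")

Any target whose speed is changing faster than that limit between samples is, by the device’s own math, outside what it can reliably report.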

In other words, you step up to the judge and tell them their beloved expensive police toy technology is unable to measure vehicle velocity when it changes faster than a known, calculable limit of the radar device, a problem especially pronounced around common road curves and with vehicle angles (e.g. the “cosine effect” popularized in school math exams).
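The cosine effect is just trigonometry: a radar measures only the component of velocity along its line of sight, so its reading is the true speed multiplied by the cosine of the angle between the beam and the vehicle’s direction of travel. A quick sketch, with hypothetical speeds and angles:

    # Cosine effect: measured speed = true speed * cos(angle between beam and travel path)
    # The speed and angles below are hypothetical examples.
    import math

    def radar_reading(true_speed_mph, angle_degrees):
        """Speed the radar reports for a target moving at an angle to the beam."""
        return true_speed_mph * math.cos(math.radians(angle_degrees))

    true_speed = 60.0  # mph
    for angle in (0, 10, 25, 45):
        print(f"angle {angle:>2} deg -> radar reads {radar_reading(true_speed, angle):.1f} mph")
    # 0 deg reads 60.0, 10 deg reads 59.1, 25 deg reads 54.4, 45 deg reads 42.4

For a stationary radar that error favors the driver, but in moving mode the same cosine error applied to the patrol car’s own ground-speed measurement can push the target reading the other way, toward an inflated speed.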

Any trustworthy court assessment would take a look at radar specs and acceleration risk to the sensor …to which the judge might spit their chew into a bucket and say “listen here Mr. smarty-math-pants big-city slicker from out-of-town, you didn’t register with our very nice and welcoming court here as an expert, therefore you are very rude and nothing you say can be heard here! Our machine says you are… GUILTY!” as they throw out any or all evidence that proves technology can be wrong.

Not saying this actual court exchange really happened in rural America, or that I gave a 2014 BlackHat talk about this happening (to warn that big data systems are highly vulnerable to breaches of integrity), but… anyway, have you seen Blazing Saddles?

It’s like saying guns don’t kill people, AI with guns kills people.

AI is just technology and it makes everything worse if we allow it to escape the fundamental social sciences of where and how people apply technology.

Fast forward (pun not intended) and my warnings from 2014 big data security talks have implications for things like “falsification methods to reveal safety flaws in adaptive cruise control (ACC) systems of automated vehicles”.

…we present two novel falsification methods to reveal safety flaws in adaptive cruise control (ACC) systems of automated vehicles. Our methods use rapidly-exploring random trees to generate motions for a leading vehicle such that the ACC under test causes a rear-end collision.
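This is not the authors’ implementation, but a toy Python sketch of the idea under made-up parameters: grow a random tree of leading-vehicle motions and check whether a naive constant-time-gap ACC follower model ever rear-ends the leader. The headway, gains, braking limits, and initial gap are all invented for illustration.

    # Toy illustration of RRT-style falsification (not the paper's implementation).
    # All vehicle and controller parameters here are made-up assumptions.
    import random

    DT = 0.5                    # s, simulation step
    HEADWAY = 1.5               # s, assumed ACC desired time gap
    K_GAP, K_SPEED = 0.5, 0.8   # assumed ACC feedback gains
    A_MIN, A_MAX = -2.5, 3.0    # m/s^2, assumed (comfort-limited) ego acceleration bounds

    class Node:
        def __init__(self, t, lead_pos, lead_v, ego_pos, ego_v, parent=None, action=None):
            self.t, self.lead_pos, self.lead_v = t, lead_pos, lead_v
            self.ego_pos, self.ego_v = ego_pos, ego_v
            self.parent, self.action = parent, action

    def acc_accel(node):
        """Naive constant-time-gap ACC: regulate toward a desired gap behind the leader."""
        gap = node.lead_pos - node.ego_pos
        desired_gap = 5.0 + HEADWAY * node.ego_v
        a = K_GAP * (gap - desired_gap) + K_SPEED * (node.lead_v - node.ego_v)
        return max(A_MIN, min(A_MAX, a))

    def step(node, lead_accel):
        """Advance both vehicles one step given the leader's chosen acceleration."""
        lead_v = max(0.0, node.lead_v + lead_accel * DT)
        ego_v = max(0.0, node.ego_v + acc_accel(node) * DT)
        return Node(node.t + DT,
                    node.lead_pos + lead_v * DT, lead_v,
                    node.ego_pos + ego_v * DT, ego_v,
                    parent=node, action=lead_accel)

    def falsify(iterations=5000, horizon=30.0):
        """Search for a leading-vehicle motion that makes the ACC model collide."""
        root = Node(0.0, lead_pos=25.0, lead_v=25.0, ego_pos=0.0, ego_v=25.0)
        tree = [root]
        for _ in range(iterations):
            target_v = random.uniform(0.0, 35.0)                      # sample a lead speed
            near = min(tree, key=lambda n: abs(n.lead_v - target_v))  # nearest node in tree
            if near.t >= horizon:
                continue
            child = step(near, random.uniform(-8.0, 2.0))             # extend with a random accel
            tree.append(child)
            if child.ego_pos >= child.lead_pos:                       # rear-end collision found
                trace, n = [], child
                while n.parent is not None:
                    trace.append(n.action)
                    n = n.parent
                return list(reversed(trace))
        return None

    counterexample = falsify()
    if counterexample:
        print(f"Collision-inducing lead accelerations over {len(counterexample)} steps:")
        print([round(a, 1) for a in counterexample])
    else:
        print("No collision found within the iteration budget.")

The flaw this toy tends to surface, a follower whose assumed braking authority is comfort-limited and lags the leader, is the class of problem the quoted work goes after: the search does not prove the controller safe, it hunts for the one motion profile that breaks it.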

Falsification in AI safety has literally become a matter of life and death over the past decade, with some (arguably racist) robots already killing over 40 people.

Cars don’t kill people, AI in cars kills people. In fact, since applying AI to cars in a rush to put robots in charge of life-or-death decisions, Tesla has killed more people in a few short years than were killed by all robots in all of prior history.

That’s a fact, as we recently published in The Atlantic. A predictable disaster, I say, because I have been warning about exactly this result for the past ten years (e.g. my 2016 Ground Truth keynote presentation at BSidesLV). Perhaps all these deaths are evidence of what courts now refer to as product “harm by design,” attributable to a documented racist and antisemite.

Look at how the NHTSA frames the safety of radar sensors for police use in its Conforming Product List (CPL) of Speed-Measuring Devices, meant to maintain trust in the technology:

…performance specifications ensure the devices are accurate and reliable when properly operated and maintained…

Show me the comparable setup from NIST: a conforming product list for AI image-reading devices used by police, not to mention definitions of their proper operation.

Let’s face it (pun not intended): any AI solution based on sensor data of any kind, including cameras, should have come under the same scrutiny as any other review (human or machine) of sensor data, to avoid repeating the inexcusable rookie mistakes and injustices of overconfident, technology-laden police over several prior decades.

And on that note, the police themselves should expect to be severely harmed by careless operation of AI.

Cluster of testicular cancer in police officers exposed to hand-held radar

Where are all the social scientists when you need them?

“No warning came with my radar gun telling me that this type of radiation has been shown to cause all types of health problems including cancer,” [police Officer] Malcolm said. “If I had been an informed user I could have helped protect myself. I am not a scientist but a victim of a lack of communication and regulation.” […] “We’re putting a lot of people at risk unnecessarily,” [Senator] Dodd said. “The work of police officers is already dangerous, and officers should not have to worry about the safety of the equipment they use.”

Which reminds me of the police officers who have been suing gun manufacturers over a lack of safety. You’d think, given the track record of high-risk technology in law enforcement, no police department in its right mind would apply any AI to its work without clear and tested safety regulations. If you find a police department foolishly buying the notoriously deadly AI of Tesla, for example, it is headed directly into a tragic world of injustice.

Judge finds ‘reasonable evidence’ Tesla knew self-driving tech was defective
