Most people are aware of the risks of unreliable computers, yet tend to be oblivious to the distinct risk of too reliable computers. If computers were as unreliable as people, we would not be at risk of excess trust and overconfidence.
One particular anecdote from last night’s Newshour highlights the risks of computers that are too reliable, yet not perfect. When it comes to medicine (or robotic weapons), too reliable computers can cause harm, including death. When it comes to elections, too reliable computers can kill democracy.
This week the Newshour is covering Artificial Intelligence, a subject first covered in the MacNeil/Lehrer Report in 1985, if I recall correctly. Last night’s segment was Why We’re Teaching Computers to Diagnose Cancer <read/video>
Here is the critical excerpt:
DR. ROBERT WACHTER: A lot of medicine kind of lives in that middle ground, where it’s really messy. And someone comes in to see me and they have a set of complaints and physical exam findings all that. And it could be — if you look it up in a computer, it could be some weird — it could be the Bubonic plague, but it probably is the flu.
HARI SREENIVASAN: Wachter is also concerned about fatal implications that can result from an over-reliance on computers. In his book, he writes about a teenage patient at his own hospital who barely survived after he was given 39 times the amount of antibiotics he should have received.
DR. ROBERT WACHTER: So, in two different cases, the computers threw up alerts on the computer screen that said, this is an overdose. But the alert for a 39-fold overdose and the alert for a 1 percent overdose looked exactly the same. And the doctors clicked out of it. The pharmacists clicked out of it. Why? Because they get thousands of alerts a day, and they have learned to just pay no attention to the alerts.
Where the people are relegated to being monitors of a computer system that’s right most of the time, the problem is, periodically, the computer system will be wrong. And the question is, are the people still engaged or are they now asleep at the switch because the computers are so good?
There are several related problems, all of which increase the risk posed by too reliable computers:
- High Reliability: Most of the time the computers are more accurate than people, especially when the people are unsure of the diagnosis or remedy.
- Irrational Trust: The people are told, and correctly believe, that the machine is more reliable than they are, especially when they are unsure or outside their expertise. It’s likely in our nature, instilled by evolution, to trust what has proven accurate. It’s only irrational when the trust exceeds the risk. People are good at estimating accuracy, but not so good at intuiting the risks of lower-probability events. We have biases for both irrational fear and irrational trust; both can be costly, yet in different ways.
- Mesmerization: We get jaded or used to things going a particular way and miss the details that may indicate something is different. Here it is medical staff, used to seeing irrelevant or low-level warnings, missing the implications of a similar-looking but significant warning. Airline pilots, railroad engineers, drivers, doctors, and dentists, among many others, are subject to mesmerization.
Another similar situation is too great a trust in vehicle electronics: either a manufacturer relying on electronics to always apply the brake or accelerator correctly when the pedal is pushed, or people trusting that car computers always work as designed and tested, with no danger of being hacked.
How does this apply by analogy to elections and too reliable voting machines?
It seems that almost everyone trusts electronic voting machines. We are used, for the most part, to computers working when they seem to work. When we use a spreadsheet we tend to assume it is working properly. Yet, beyond the chance of error in the spreadsheet software, we tend to trust the formulas put into spreadsheets by people. Even though we are flawed individuals, we tend to forget that equally flawed individuals (even ourselves) may have made a simple error in creating formulas, e.g. adding up only some of the numbers, double counting others, or making a “small, harmless” change after testing the spreadsheet.
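To make the spreadsheet point concrete, here is a hypothetical sketch (the precinct totals and variable names are invented purely for illustration) of a perfectly reliable computer faithfully executing a flawed human formula:

```python
# Hypothetical illustration: the software computes exactly what it
# was told to compute -- the error is in the human-written "formula".
precinct_totals = [120, 98, 143, 87, 202]

# Intended formula: add up every precinct.
correct_total = sum(precinct_totals)       # 650

# A plausible human slip: the range stops one row short, silently
# omitting the last precinct. The computer is entirely reliable;
# the formula it was given is wrong.
buggy_total = sum(precinct_totals[:-1])    # 448
```

Nothing in the output flags the discrepancy; only an independent check of the totals would catch it.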
Election officials tend to trust voting machines. They are told that voting machines and online voting systems are created by very smart people, and come with certification and “military grade” security. Yet we are given no effective proof of those claims, and typical officials are not able to judge such proofs. Officials see reports of tests and post-election audits that claim the machines are flawless, increasing their trust in the machines. Typically, if a hand count of ballots does not match the machine count, they count again, and usually find that the machine was accurate.
On the other hand, those who are familiar with election equipment, computers, and computer science know:
- No computer or software can ever be proven error free. In fact, even modestly complex software is very likely to have multiple undetected bugs.
- It has not happened often, but computers and computer systems have counted incorrectly, including in California, Florida, New Jersey, Washington, D.C., and Connecticut.
- Without paper ballots and effective post-election audits there is no reason to trust that machines count accurately, or to know how often they do not.
- Machines are programmed for each election and voting district, so errors can be introduced into the system at any time.
- Beyond errors, insiders have multiple means of changing election results. Often a single insider can change results alone, with the help of outsiders, or through intimidation by outsiders.
- A voting machine can be entirely accurate, yet its results or the overall total can be changed independently of the voting machine. Unless the results are audited end-to-end, or at each step of the process, the result cannot be legitimately trusted.
What about Connecticut?
- We have post-election audits, but they are not conducted in a manner that gives justified confidence. Errors in machine results have been detected, yet most differences between machine results and manual audits have been accepted as human counting errors without investigation. This seems like common sense, since usually when results are re-checked the human was wrong the first time – yet that common sense is at least as risky and unjustified as the unjustified trust in medical artificial intelligence directives in the Newshour story above.
- Connecticut is considering legislating Machine Audits, based on procedures to be approved by the Secretary of the State. Common sense supports a method demonstrated by UConn and the Secretary of the State’s office and touted in a paper presented at a conference – unjustified common sense. There is no scientific justification for the method demonstrated, and worse, every reason to believe that it would be subject to unjustified official trust in computers and mesmerization. Professor Alex Shvartsman of UConn has agreed that the procedure is insufficient to provide public verification.
Fortunately, there is a very effective solution available. We have proposed a sound method of Machine Assisted Audits based on proven scientific methods. Used effectively, Machine Assisted Audits could yield more accurate, trusted audits at less cost and stress to local election officials. If machine audits become law, we will work to insist that effective, transparent, and publicly verifiable procedures are employed. (Still, we would much prefer a law that mandated sufficient requirements now, one that could not be weakened by a future Secretary of the State.) <read more in our comments on the bill before the Connecticut General Assembly>
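The proposal above does not spell out the underlying method, but risk-limiting audits are one well-established scientific approach to checking machine counts. As a rough, non-authoritative sketch, the published BRAVO approximation (Lindeman, Stark, and Yates) estimates the average number of ballots a ballot-polling audit must examine to confirm a reported outcome at a given risk limit; the function name and parameters here are illustrative, not part of any Connecticut procedure:

```python
import math

def bravo_asn(p_winner: float, risk_limit: float) -> int:
    """Approximate average sample size for a BRAVO ballot-polling
    risk-limiting audit of a two-candidate contest.

    p_winner   -- the winner's reported share of valid votes (> 0.5)
    risk_limit -- the maximum chance the audit confirms a wrong outcome
    """
    # Log-likelihood-ratio step for each ballot drawn:
    zw = math.log(2 * p_winner)        # a ballot for the reported winner
    zl = math.log(2 * (1 - p_winner))  # a ballot for the loser (negative)
    numerator = math.log(1 / risk_limit) + zw / 2
    denominator = p_winner * zw + (1 - p_winner) * zl
    return math.ceil(numerator / denominator)
```

For a reported 55%-to-45% result at a 5% risk limit this works out to several hundred ballots on average, and wider margins need far fewer – which is why such audits can cost less than full recounts. Real audit procedures add details this sketch omits (escalation rules, multi-candidate contests, ballot-level comparison variants).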