Like several towns in the most recent post-election audit, Hamden found unexplained discrepancies. Unlike most towns, the media in Hamden takes note. The Hamden Chronicle has the story <read>
Not by a large number, though Esposito considered any deviation to be problematic. They estimated no more than 3 percent as of Thursday, Nov. 20. That was within the range of standard human error according to Esposito.
“We’re looking at three to four votes out of 2,000 so far,” said Esposito.
We wonder where the 3% figure for standard human error comes from. We also note that four votes out of 2,000 would represent 0.2% of the votes, and perhaps a 0.4% margin difference in a 2,000-vote race.
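As a quick check of the arithmetic, here is a sketch of our own, using only the figures quoted above:

```python
# Our own illustration of the figures quoted above.
discrepant_votes = 4
total_votes = 2000

error_rate = discrepant_votes / total_votes
print(f"{error_rate:.1%}")      # 0.2% of the votes

# If each discrepant vote shifted from one candidate to the other,
# the margin between them could move by twice the error rate.
margin_swing = 2 * error_rate
print(f"{margin_swing:.1%}")    # 0.4% potential margin difference
```

Both figures are an order of magnitude below the 3% cited as “standard human error.”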
Our belief is that people can easily make errors; however, with reasonable procedures and supervision, teams of people can count accurately. Machines can count accurately or inaccurately, but they ultimately cannot judge voters’ intent.
The Cross-Endorsed Counting Challenge:
The biggest problem they faced was cross-endorsed candidates. Rosa DeLauro, Joseph Crisco and Martin Looney are all Democratic candidates, all of whom won their reelection bids. All three were also cross-endorsed by the grassroots Working Families party. Most voters who selected them did so as Democratic candidates. A few chose them on the Working Families line. What confounded the issue was a number of “unknown” votes for the candidates, where the voter filled in the bubble for both party endorsements, effectively voting twice for the same person for the same position. On election night those ballots were marked as unknowns and counted once for the candidate. It still, however, contributed to throwing the audit numbers off.
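To make that classification concrete, here is a minimal sketch of the rule the article describes: one candidate on two party lines, with a dual-bubble ballot credited once, on an “unknown” line. The ballot data and function names are our own hypothetical illustration, not the tabulator’s actual logic.

```python
from collections import Counter

def classify_vote(lines_marked):
    """lines_marked: the set of party lines bubbled for one
    cross-endorsed candidate on one ballot."""
    if not lines_marked:
        return None                       # no vote for this candidate
    if len(lines_marked) == 1:
        return next(iter(lines_marked))   # credit the single line chosen
    return "UNK"      # both lines marked: one vote, party unknown

# Hypothetical ballots for a candidate on the Dem and WF lines:
ballots = [{"Dem"}, {"WF"}, {"Dem", "WF"}, set()]

tally = Counter()
for marks in ballots:
    line = classify_vote(marks)
    if line is not None:
        tally[line] += 1

print(tally)  # Counter({'Dem': 1, 'WF': 1, 'UNK': 1}) -- three votes, not four
```

A hand-count team that credits the dual-bubble ballot on both lines, or rejects it as an overvote, will never balance against a machine that recorded it once as unknown.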
Like many towns, Hamden election officials apparently had problems understanding and counting votes for cross-endorsed candidates. In our experience observing audits and reviewing feedback from Audit Coalition observers, we note that the problem with counting cross-endorsed candidates starts with a lack of understanding on the part of registrars of how cross-endorsed candidate ballots are counted; continues with a failure to train and supervise counters in counting such ballots correctly; results in surprise and confusion with numbers that do not balance at the end of the day; and ends with many registrars suggesting an end to cross-endorsements, or that dual votes for the same candidate be classified as overvotes.
We believe that cross-endorsements are good for democracy. We would not want Connecticut to change from its current status as a “Voter Intent State.” Our solution would be better training for registrars and other election officials, detailed counting procedures, and higher standards for audits, including enforcement of existing procedures and audits independent of the Secretary of the State and local election officials.
In looking at the Hamden Audit Report collected at the audit by a Coalition observer, we note that many of the errors in cross-endorsed candidates cancel out. We question whether the current audits accurately assess the impact of incorrectly filled-in votes, yet we note that most errors in the report represent possible incorrectly filled-in ballots. <Audit Report>
Upon completion of the audit, [Secretary of the State, Susan] Bysiewicz’s office will review the materials across the state and look for issues of voting fraud or mishandling of ballots and voting equipment. The audit itself is procedural and is not an indication of any implied wrongdoing.
Rae and Esposito themselves were uncertain of how the results of the Hamden audit would be used as this was the first time they had to perform the task.
In past elections we have been critical of the audit law, audit procedures, the conduct of the post-election audits, and follow-up. In reviewing the November town audit reports we have seen so far, we find that Hamden’s results are not atypical. We also note that several towns report exact agreement between machines and people, while several others have demonstrated a lack of ability to follow the most basic procedures to fill in the reports accurately and completely. While some show discrepancies in ballot counts between officials and machines, others fail to report the number of ballots.
Agreed on all the areas necessary for response to this situation. Part of the education is simple: read the audit protocol, flawed as it is. Many registrars are part-time and think they “don’t have time” or “already know how to do it because we have done X.” Untold grief could be saved by reading the instructions and asking questions about any misunderstood part (show me a town that understands how to identify, count, and tally anomalously marked ballots). Not doing so ultimately costs the ROV, the town, and SOTS more money and work.
I watched an audit this fall in which, within the first few minutes, it became clear that only one of the counters (and none of the officials) had any idea of the purpose of the audit, and none of them had read the instructions AT ALL. They were trying to run a recount, and when the lone worker who knew the drill (how, I don’t know) asked “what are you doing about discrepancies?” and tried to say that the procedure was to somehow separate them out and count them, both ROVs told her that it was really about looking at discrepancies and determining voter intent. Not exactly. (It has been clear from the beginning that this part of the instructions is horrific, but IMHO no successful action, in terms of results produced, has been taken by SOTS to change the instructions.)
First of all, SOTS did not anticipate and prepare town ROVs (many of whom were doing their first audit) for a completely obvious situation in regard to the audits: the new presence, in a major election, of cross-endorsement voting, and the possible presence of what normally would be considered “overvotes” but which should have been counted by the machine as UNK (unknown party) but for the correct candidate. I don’t think many states do cross-endorsements, and therefore I would think it’s an area where programming errors could occur. Thus, an accurate count is necessary to determine whether or not the machines “functioned properly.” Even though SOTS has been sent, and says it has reviewed, many counting protocols from other states (states which have been counting paper ballots for decades), SOTS has not adopted and sent around a detailed methodology. When asked how to count, SOTS seems typically to advise them to use a hash-mark method, a method which, if done with less than impeccable care and routine, I believe is more prone to error than the other common method called “sort and stack,” where ballots are sorted into piles by candidate for each race, the totals tallied, and then re-sorted for the next race. (Different errors can occur with this method, mostly having to do with lack of oversight on one or both of the steps.) A sketch contrasting the two methods follows.
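For illustration only (the ballots and race names are hypothetical, and neither function is an official SOTS procedure):

```python
from collections import Counter, defaultdict

ballots = [
    {"US House": "DeLauro", "State Senate": "Looney"},
    {"US House": "DeLauro", "State Senate": "Other"},
    {"US House": "Other",   "State Senate": "Looney"},
]

def hash_mark_count(ballots, race):
    """Read each ballot once, making a 'hash mark' per vote.
    A misread or skipped mark silently corrupts the total."""
    tally = Counter()
    for ballot in ballots:
        tally[ballot[race]] += 1
    return tally

def sort_and_stack_count(ballots, race):
    """Sort ballots into piles by candidate, then count each pile.
    A mis-sorted ballot sits visibly in the wrong pile, and each
    pile can be recounted independently before moving on."""
    piles = defaultdict(list)
    for ballot in ballots:
        piles[ballot[race]].append(ballot)
    return {candidate: len(pile) for candidate, pile in piles.items()}

print(hash_mark_count(ballots, "US House"))       # Counter({'DeLauro': 2, 'Other': 1})
print(sort_and_stack_count(ballots, "US House"))  # {'DeLauro': 2, 'Other': 1}
# For the next race, the same ballots are re-sorted into new piles:
print(sort_and_stack_count(ballots, "State Senate"))
```

The point of sort-and-stack is not speed but observability: each step leaves a physical state (the piles) that a second team member can verify.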
Next, has any town faced a penalty for not adhering to audit procedures? If so, it’s a well-kept secret. I believe what happens is that Ted Bromley gives them a phone call and tries to figure out why there is not a match.
Which brings us to another issue with the audits: for a discrepancy to mean anything, a legitimate mismatch must be discovered. With irregular counting methods, SOTS is going to spend money (at a time when its budget was just cut by $1.5 million) chasing after missing ballots/votes and second-guessing the way the audit turned out. Instead of posting a lifeguard at the bottom of Niagara Falls, how about some warning buoys before the precipice?
In some cases, SOTS or the town may have to recount — spending more money. Why?
First, because the initial instructions were so poor that, as far as I can see, anyone coming out with a proper audit almost certainly has a leg up of some kind (for example, they helped to write the instructions and so understand them). Even doing four audits, as Greenwich has done, does not appear to be enough in and of itself to result predictably in an informed, comprehending ROV.
Second, the focus on “finding a match” is misguided, but based on actions, it appears to be the default focus of both SOTS and the ROVs. The true goal of the audit should be to produce a rock-solid count, matched or not, that ROVs will stake their reputation on, and about which they feel 100% confident. Where there’s no match, there may be programming errors, procedural errors (ballots left in the auxiliary bin of the machine, e.g.), or security issues (ballots still in the trunk of the car; issues of fraud). Audit escalation (research into why the discrepancy occurred) and reconvening the audit at a future date to review counting (as is provided in the instructions) are both ways to further examine the situation.
However, CT has neither codified nor written into its procedures:
1) before-the-fact standards and “tripwires” for how much of a discrepancy, and of what kind, represents an issue (see the sketch after this list). For example, discrepancies may occur with more/fewer checked-in voters than ballots cast, more/fewer votes than ballots on the most prominent race, or more/fewer votes than the machine tally shows. UCONN could identify and recommend, from a technical standpoint, which situations might benefit from examination of the machines and memory cards or other aspects of the election.
2) the series of procedures and actions that would be used to address poorly conducted counts, including UCONN examination of machines/memory cards, SEEC investigation, or SOTS intervention. (Audit questions that would be diagnostic of a falsely produced “match” could be a helpful improvement, but I believe that protocol and its implementation are pretty fatally flawed.)
In terms of appreciating the importance of accurately discovering issues with the mechanical limits and shortcomings of the machines, programming discrepancies, or fraud: I have seen no town (among those whose audits I have observed) in which the town wanted to do anything other than “get a match.” The computer count is perceived to be 100% accurate, and the hand counts are believed universally (in my limited experience) to be inaccurate.
3) Because escalation (stepping up the investigation in response to discrepancies between the election count and audit results) is not codified or even in writing, it is not a transparent, observable process. Discrepancies should NOT be examined in the dead of night and then hidden from the public behind sanitized, “final” audit results. This is what happens now: one particularly severe discrepancy I have observed, involving 100% of the ballots for 25 districts being taken out of their bags prior to the audit, presorted by party for counting, and then delivered for audit in unmarked, insecure recycled cardboard boxes, is undetectable when I read the UCONN/SOTS report later. I believe “escalation” at this point is a phone call from SOTS employee Ted Bromley, and that in the past, discrepancies have sometimes simply been “resolved” by calling the ROV on the phone.
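As a sketch of what item 1’s before-the-fact tripwires might look like (the field names and the 0.5% escalation threshold are hypothetical, not anything CT or UCONN has adopted):

```python
def tripwires(checked_in, ballots_cast, top_race_votes,
              machine_tally, hand_tally):
    """Flag, before anyone argues about a 'match', the kinds of
    discrepancies named in item 1 above."""
    flags = []
    if checked_in != ballots_cast:
        flags.append(f"check-in list vs. ballots cast: off by "
                     f"{checked_in - ballots_cast}")
    if top_race_votes > ballots_cast:
        flags.append("more votes than ballots in the most prominent race")
    diff = abs(machine_tally - hand_tally)
    if diff > 0.005 * ballots_cast:   # hypothetical 0.5% threshold
        flags.append(f"machine vs. hand tally: off by {diff}; "
                     "refer machine and memory card for examination")
    return flags

print(tripwires(checked_in=2000, ballots_cast=1998, top_race_votes=1995,
                machine_tally=1402, hand_tally=1398))
# ['check-in list vs. ballots cast: off by 2']
```

With published thresholds like these, escalation would be a mechanical, observable consequence of the numbers rather than a private phone call.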
Don’t forget — the ROV is responsible for the original count, the audit, and the resolution of discrepancies. There is no independent outsider to come in and make the audit in the end a net learning opportunity for the ROV. (It is not codified as such, but a continuous improvement model would be better than what we have now).
If there is a discrepancy, among the possibilities are that the machine maintenance was done improperly; that one of the parts that easily fails is in the process of failing; that there was a programming error or malfeasance; that the ballots were insecure and were interfered with somehow; or that the paperwork, security, or methodology of the audit or election count were problematic.
There is not much of a “teaching moment” when there are no eyes on the ground to witness the logistical and other conditions that exist in the town that conducted the audit.
To further explain how matching works to discredit the quality of the hand count, consider what typically happens (in a good percentage of cases; I can’t say most because I haven’t seen that many): ROVs often dismiss their counters before they have completed the tally, especially in a long, involved count. It’s a very bad idea. It is then, after the counters are dismissed, that final tallying occurs and one or more discrepancies surface. Lacking the personnel to speed up the operation of recounting the ballots (if vote-aggregation errors are not the source of the problems), the confirmation of base audit totals devolves into an effort that begins and ends with trying to “back out” errors: that is, locating just enough problems in partial quantities of ballots to “match.”
By the way, sometimes the very methodologies used to tally the numbers are a potential source of the problem: counting silently in the counter’s head (how do you observe that and confirm accuracy?), paperless calculators where the totals are input and not observed by a team member, spreadsheets copied from previous elections or audits (with possibly incorrect formulas left in place but hidden from the novice spreadsheet user’s view).
Regarding backing out errors: you cannot take an improperly executed vote count, the totals of which are in question, and then do only a partial recount to resolve it, but nothing in the SOTS methodology clarifies this fact. Why? Because “resolve” in this case REALLY means, to SOTS, “match the count.” Don’t forget that our audit, by SOTS’ own admission, was originally a public relations exercise to convince voters the machines were highly accurate counting devices.
Officials believe they have “resolved” something when they back out enough discrepancies to match the count, and they may have, but it is also possible that they have really only looked just far enough to get a number that will let them go home. The quality of the count may still be untrustworthy, and other, different errors may lurk in the unreviewed ballots/votes. Thus the audit, though “matched,” is really unreliable for the purpose of examining machine function.
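A toy illustration of that failure mode (the stack counts and error pattern are invented for the example): reviewers who stop the moment the totals agree can leave offsetting errors sitting in the stacks they never reached.

```python
machine_total = 1000   # machine tally for one candidate (hypothetical)
hand_total = 1002      # first hand-count result (hypothetical)

# Errors actually present in the hand count, one per stack of ballots:
# +1 means a vote over-credited, -1 under-credited. Net error is +2,
# which is why the hand count reads 1002.
errors_by_stack = [+1, +1, -1, +1]

reviewed = 0
for err in errors_by_stack:
    hand_total -= err          # "back out" the error found in this stack
    reviewed += 1
    if hand_total == machine_total:
        break                  # totals agree: officials stop and go home

print(f"Matched after reviewing {reviewed} of {len(errors_by_stack)} stacks")
print(f"Errors never reviewed: {errors_by_stack[reviewed:]}")
# Matched after reviewing 2 of 4 stacks
# Errors never reviewed: [-1, 1]
```

The matched total conceals two unreviewed, offsetting errors, so it says nothing about whether the machine read those ballots correctly.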
Another bit of folly regarding the audits involves the potential for SEEC enforcement. SEEC’s Joan Andrews has confirmed that SEEC does not believe it has the statutory authority to enforce SOTS regulations, and many of the machine-related rules are now regulations, not statutes. If problems arise that either are not covered in the statutes, or arise under a regulation that does not reference a statute, SEEC believes its hands are tied. Although SOTS says it is in favor of SEEC being fully empowered, the legislature last year did not take action on the request from SOTS and SEEC to clarify SEEC’s powers.