Bysiewicz: “Optical scanners were remarkably accurate”

Remarkable? We do NOT agree that phoning election officials and getting them to agree that they counted inaccurately provides much confidence in the audit, least of all proof that the machines counted accurately. Nor does disregarding incomplete reports create credibility.

Press Release:  BYSIEWICZ RELEASES FINAL REPORTS ON INDEPENDENT AUDIT OF NOVEMBER 2009 MUNICIPAL ELECTION RESULTS AND MEMORY CARDS <read>

“My office entered into this historic partnership with the University of Connecticut VoTeR Center so that we could receive an independent, unbiased accounting of Connecticut’s optical scan voting machines,” said Bysiewicz. “The results of these three studies confirm that numbers tallied by the optical scanners were remarkably accurate on Election Day November 3, 2009. Voters should feel confident that their votes were secure and accurately counted.”…

As part of UConn’s report, a total of 776 records of races were reviewed by the VoTeR Center following the local audit process. Of that sample, 57.6% or 447 records were complete and contained no obvious audit errors. Of those, only 36 or 8% showed a discrepancy between machine counts and hand audits of between one and three votes, with the largest single discrepancy being three votes. Officials from the Secretary of the State’s office investigated another 299 records of audits where larger discrepancies were originally shown that were later determined to be caused by human error during the hand-count auditing process.

Unlike Secretary Bysiewicz: We do NOT agree that phoning election officials and getting them to agree that they counted inaccurately provides much confidence in the audit, least of all proof that the machines counted accurately.  Nor does disregarding incomplete reports create credibility.

See our comments on the UConn Report:

We have several concerns with these investigations:

  1. All counting and review of ballots should be transparent and open to public observation.  Both this year and last year we have asked that such counting be open and publicly announced in advance.
  2. Simply accepting the word of election officials that they counted inaccurately is hardly reliable, scientific, or likely to instill trust in the integrity of elections.  How do we know how accurate the machines are without a complete audit?  Any error or fraud would likely result in a count difference, and would very likely be [or could have been] dismissed.
  3. Even if, in every case, officials are correct that they did not count accurately, it cannot be assumed that the associated machines counted accurately.
  4. Simply ignoring the initial results in the analysis of the data provides a simple formula to cover up, or fail to recognize, error and fraud in the future.

Sort of like Major League Baseball doing a random drug test, and then calling the team managers and having them agree that they must have botched the tests that were positive for drugs.

We also question whether the audit would pass muster as “Independent,” since all the counting is supervised by the same officials responsible for the conduct of the election in the first place.  Only the statistical analysis, performed by UConn, might be considered independent.

We will find it remarkable if anyone disagrees with our conclusions.

Nov 09 Election Audit Reports – Part 2 – Inadequate Counting, Reporting, and Transparency Continue

“The main conclusion of this analysis is that the hand counting remains an error prone activity. In order to enable a more precise analysis, it is recommended that the hand counting precision is substantially improved in future audits. The completeness of the audit reports also need to be addressed…Submitting incomplete audit returns has little value for the auditing process.”

Late last week the University of Connecticut (UConn) VoTeR Center posted three reports from the November election on its web site <Pre-Election Memory Card Tests>, <Post-Election Memory Card Tests>, and <Post-Election Audit Report>.  In Part 1 we discussed the memory card tests and in Part 2 we discuss the Post-Election Audit Report.

Highlights from the official report:

The VoTeR Center’s initial review of audit reports prepared by the towns revealed a number of returns with unexplained differences between hand and machine counts and also revealed discrepancies in cases of cross-party endorsed candidates (i.e., candidates whose names appear twice on the ballot because they are endorsed by two parties). As a result, the SOTS Office performed additional information-gathering and investigation and, in some cases, conducted independent hand-counting of ballots. …Further information gathering was conducted by the SOTS Office to identify the cause of the moderately large discrepancies, and to identify the cause of discrepancies for cross-party endorsed candidates…

This report presents the results in three parts: (i) the analysis of the original audit records that did not involve cross-party endorsed candidates, (ii) the analysis of the audit records for cross-party endorsed candidates, and (iii) the analysis of the records that were revised based on the SOTS Office follow up. The analysis does not include 6 records (0.8%) that were found to be incomplete. …

The main conclusion in this report is that for all cases where non-trivial discrepancies were originally reported, it was determined that hand counting errors or vote misallocation were the causes. No discrepancies in these cases were reported to be attributable to incorrect machine tabulation. For the original data where no follow up investigation was performed, the discrepancies were small; in particular, the average reported discrepancy is much lower than the number of the votes that were determined to be questionable.

Further on in the report is another conclusion:

The main conclusion of this analysis is that the hand counting remains an error prone activity. In order to enable a more precise analysis, it is recommended that the hand counting precision is substantially improved in future audits. The completeness of the audit reports also need to be addressed. For example, in two of the towns when the second hand count was performed it was determined that the auditors did not count a batch of 25 ballots in one case and the absentee ballots in the second. This initially resulted in apparently unexplained discrepancies. Submitting incomplete audit returns has little value for the auditing process.

We note that the details of the investigations to determine the accuracy of human and machine counting include some counting of ballots and some telephone conversations with election officials:

The first follow up was performed to address substantial number of discrepancies in some precincts (discrepancies over 30 votes). All those unusual discrepancies were concentrated in four towns. As a result in those towns a second hand count of the actual ballots was performed by the SOTS Office personnel…

We now discuss a batch of records containing 218 (28.1% of 776) records where originally the reported discrepancies were under 30 (these do not include cross-party endorsed candidates). In this case the SOTS Office personnel contacted each registrar of voters and questioned their hand count audit procedures. In all instances, the registrars of voters were able to attribute the discrepancies to hand counting errors. Thus no discrepancies (zero) are reported for these districts. Given the fact that no discrepancies were reported for those records we do not present a detailed analysis.

We have several concerns with these investigations:

  1. All counting and review of ballots should be transparent and open to public observation.  Both this year and last year we have asked that such counting be open and publicly announced in advance.
  2. Simply accepting the word of election officials that they counted inaccurately is hardly reliable, scientific, or likely to instill trust in the integrity of elections.  How do we know how accurate the machines are without a complete audit?  Any error or fraud would likely result in a count difference, and would very likely be [or could have been] dismissed.
  3. Even if, in every case, officials are correct that they did not count accurately, it cannot be assumed that the associated machines counted accurately.
  4. Simply ignoring the initial results in the analysis of the data provides a simple formula to cover up, or fail to recognize, error and fraud in the future.

As we have said before we do not question the integrity of any individual, yet closed counting of ballots leaves an opening for fraud and error to go undetected and defeats the purpose and integrity of the audit.

We also note that in several cases officials continued to fail to perform the audit as required by law, or provided incomplete reports.

On the other hand we note that only 6 records (0.8% of 776) were found to be incomplete. The statistical analysis does not include these records. While some problematic records are clearly due to human error (e.g., errors in addition), in other cases it appears that auditors either did not follow the audit instructions precisely, or found the instructions to be unclear. However, this is a substantial improvement relative to the November 2007 and November 2008 elections, where we reported correspondingly 18% and 3.2% of the records that were unusable.

Improvement or not, our solution would be to require the towns involved to correct their errors, comply with the law, and perhaps be subject to a penalty.  Not pursuing such measures provides a clear formula for covering up errors and fraud.

Finally, since only “good” records were fully analyzed, we question the value of some of the reported statistics based only on those results. We do agree with the report’s recommendations:

The main conclusion of this analysis is that the hand counting remains an error prone activity. In order to enable a more precise analysis, it is recommended that the hand counting precision is substantially improved in future audits. The completeness of the audit reports also need to be addressed. For example, in two of the towns when the second hand count was performed it was determined that the auditors did not count a batch of 25 ballots in one case and the absentee ballots in the second. This initially resulted in apparently unexplained discrepancies. Submitting incomplete audit returns has little value for the auditing process.

For the cross party endorsement, it is important for the auditors to perform hand counting of the votes that precisely documents for which party endorsement the votes were cast, and to note all cases where more than one bubble was marked for the same candidate. The auditors should be better trained to follow the correct process of hand count audit…

We also believe that our reporting of the analysis, and the analysis itself needs to be improved. A major change planned for future analysis is to assess the impact of the perceived discrepancies on the election outcomes (in addition to analyzing individual audit return records). This is going to be exceedingly important for the cases where a race may be very close, but where the difference between candidates is over 0.5% (thus not triggering an automatic recount)[*]

* CTVotersCount Note: Connecticut has an automatic ‘recanvass’, triggered at a difference of less than 20 votes or 0.5%, up to a maximum difference of 2000 votes.
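The thresholds in the note above can be sketched in a few lines of code. This is only an illustration of the rule as stated here, not the statutory text, and the exact interaction of the limits is our reading:

```python
def recanvass_triggered(margin_votes: int, total_votes_cast: int) -> bool:
    """Sketch of Connecticut's automatic recanvass trigger, per the note above:
    a difference of less than 20 votes, or less than 0.5% of votes cast,
    up to a maximum difference of 2000 votes."""
    if margin_votes < 20:
        return True
    return margin_votes < 0.005 * total_votes_cast and margin_votes <= 2000

# A 1,500-vote margin in a 400,000-vote race is under 0.5% and under the cap.
print(recanvass_triggered(1500, 400000))  # True
# A 2,500-vote margin exceeds the 2,000-vote cap even if under 0.5%.
print(recanvass_triggered(2500, 600000))  # False
```

This also illustrates the report’s point: a race can be very close in percentage terms yet fall outside these limits and never trigger a recanvass.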

In January, the Connecticut Citizen Election Audit Coalition Report analyzed the November 2009 Post-Election Audit data and the observations of citizen volunteers:

In this report, we conclude that the November post-election audits still do not inspire confidence because of the continued lack of

  • standards for determining need for further investigation of discrepancies,
  • detailed guidance for counting procedures, and
  • consistency, reliability, and transparency in the conduct of the audit.

Compared with previous reports of November post-election audits:

  • The bulk of our general observations and concerns remain.
  • The accuracy of counting has improved. There was a significant reduction in the number of extreme discrepancies reported. However, there remains a need for much more improvement.
  • There was a significant improvement in counting cross-endorsed candidate votes
  • The number of incomplete reports from municipalities has significantly decreased.

We find no reason to attribute all errors to either humans or machines.

There is no reason to modify the Coalition’s conclusion based on the official report. Many of the same concerns and conclusions we discussed last year still apply.  See last year’s post for more details; here is a summary:

  • The investigations prove that Election Officials in many Connecticut municipalities are not yet able to count votes accurately
  • The audit and the audit report are incomplete
  • Even with all the investigations and adjustments we have many unexplained discrepancies [Unless we accept the belief of officials that they counted inaccurately, and in all those cases the machine counted accurately]
  • The Chain-of-Custody is critical to credibility
  • Either “questionable ballot” classification is inaccurate in many towns or we have a “system problem”
  • Accuracy and the appearance of objectivity are important
  • Timeliness is important
  • The problem is not that there were machine problems; we have no evidence there were any. The problem is that if there are, or ever were, and all discrepancies are dismissed as human counting errors, we are unlikely to find them
  • We stand by our recommendations and the recommendations of other groups
  • The current Audit Process in Connecticut demonstrates the need for audits to be Independent and focused on election integrity, not just machine certification reliability

As we said last year;

We recognize and appreciate that everyone works hard on these programs, performing the audits, and creating these reports, including the Registrars, Secretary of the State’s staff, and UConn.  We also welcome Secretary Bysiewicz’s commitment to solve the problems identified.  Yet, we have serious concerns with the credibility of the audits as conducted, and with their value in providing confidence to the public in the election process.

Nov 09 Election Audit Reports – Part 1 – Problems Continue and Some Good News

We should all applaud the unique memory card testing program, yet we must also act aggressively to close the gaps it continues to expose…The good news is that UConn has identified a likely cause of the “junk” data cards. Perhaps a solution is near.

Late last week the University of Connecticut (UConn) VoTeR Center posted three reports from the November election on its web site <Pre-Election Memory Card Tests>, <Post-Election Memory Card Tests>, and <Post-Election Audit Report>.  In Part 1 we will discuss the memory card tests and in Part 2 the Post-Election Audit Report.

As we said last year:  We should all applaud the unique memory card testing program, yet we must also act aggressively to close the gaps it continues to expose.

We note the following from this year’s reports:

  • An increase in the percentage of memory cards received for the pre-election test:
[pre-election 2009]  The VoTeR Center received in total 491 memory cards from 481 districts before the elections. This document reports on the findings obtained during the audit. The 491 cards represent over 80.6% of all districts, thus the audit is broad enough to draw meaningful conclusions.

[pre-election 2008] the VoTeR Center received and examined 620 memory cards [about 74% of districts] as of November 3, 2008. These cards correspond to 620 distinct districts in Connecticut. About 2/3 of these memory cards were randomly chosen by the VoTeR Center personnel during the visits to LHS and before the cards were packed and shipped to the towns. Another 1/3 of the memory cards came from the towns directly, where the cards were randomly chosen for preelection audit (this procedure applied to the town for which the cards were not selected at LHS).

  • And a significant drop in the percentage of memory cards received for the post-election test:
[post-election 2009] The VoTeR Center received in total 120 memory cards from 49 districts [approximately 8.0% of all districts] after the elections. The cards were received during the period from December 12, 2009 to February 12, 2010. Among the received cards, 49 were used in the elections,

[post-election 2008] The VoTeR Center received in total 462 memory cards from a number of districts after the elections… Among these cards, 279 were used in the elections… The 279 cards represent over 30% of all districts,

As we understand it, the Secretary of the State’s Office asks all towns to send in memory cards for each district; they are not randomly selected.  This means that we cannot be sure the percentages of “junk” data or procedural lapses reported actually represent a reliable measure of all memory cards and official actions, yet it seems reasonable to conclude that:

  • “Junk” data continues at an unacceptable rate:
[pre-election] The audit identified forty two (42) cards, or 9%, that contained “junk” data; these cards are unreadable by the tabulators, and easily detected as such. This is a high percentage of faulty/unusable cards. We note that this is consistent with the percentage reported for the pre-election audit of November 2008 elections. The percentage is lower than detected in the post-election audit for the August 2008 primary (15%), but higher than detected in the pre-election audit for the August 2008 primary (5%), post-election audit for the February 2008…

[post-election] Concerning the remaining cards, 14 (12% of the total number of cards) were found to contain junk data, that is, they were unreadable, which is easily detected by the tabulators; had a card contained junk data at the time of the election…

So the problem of “junk” data continues, at a rate likely toward the middle of past testing results.  As we have said before, 5%, 9%, 15%, even 1% is a huge failure rate for relatively simple technology such as memory cards.
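To put those failure rates in perspective, here is some simple arithmetic. The figure of roughly 600 districts is taken from the audit drawing discussed elsewhere on this site, and the assumption of one card per district is ours:

```python
# Approximate number of unusable memory cards statewide at each observed
# "junk" data rate, assuming roughly 600 districts with one card each.
districts = 600
for rate in (0.01, 0.05, 0.09, 0.15):
    print(f"{rate:.0%} failure rate -> about {round(districts * rate)} unusable cards")
```

Even the best rate observed so far would leave dozens of districts with cards that could not be read on Election Day.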

  • Very good news on the “Junk” data cards:

We have determined that weak batteries are the primary cause of junk data on cards; a separate report will document this in more detail. It is recommended that batteries are replaced before each election.

It seems that UConn has identified a likely cause of the “junk” data cards.  Perhaps a solution is near.  We look forward to reading that separate report.

  • Officials continue to fail to follow procedures at a significant rate
[pre-election] The audit identified twenty-three (23) cards where the audit log indicates card duplication events. Card duplication is not authorized per SOTS Office instructions. Otherwise the cards were properly programmed for elections…There are 76 cards (15%) that were properly programmed, but were found in unexpected states or contained unexpected timing of events. This does not necessarily present an immediate security concern, however the findings indicate that the established procedures are not strictly followed in some cases.

[post-election] 14 contained junk data
2 were not programmed (formatted, but blank)
3 were involved in duplication
4 were non-standard cards (32KB instead of 128KB) [LHS, not election official, error]
4 were programmed for different elections

The main concern with such failures to follow procedures is that they suggest other procedures are frequently not being followed; each failure also represents a possible lapse in security and election integrity.

Comments from our post on last year’s report still apply:

  • A non-random partial post-election audit of memory cards is useful, but it is insufficient
  • How many more tests, reports, and elections will it take before the junk data problem is significantly reduced? [Thanks to UConn, based on the 2009 report, we may have an answer soon]
  • Almost every failure to follow procedures is an opportunity to cause problems, cover up errors, or cover up fraud. [including not sending in cards for testing]. We can only hope that the Registrars of Voters will join in the commitment to meet a much higher standard.

For more details behind these comments please read our post on last year’s report.

Municipal Primary Post-Election Audit Drawing

There were 33 districts in the recent primary, so 4 or 10% were selected for audit. We also selected one alternate. Given the number of districts in Hartford, New Haven, and West Haven it is not surprising that they were selected.

Today, I observed and participated in the post-election audit random drawing at the Secretary of the State’s Office. There were 33 districts in the recent primary, so 4 or 10% were selected for audit.  We also selected one alternate. Given the number of districts in Hartford, New Haven, and West Haven it is not surprising that they were selected.

Here is the official Press Release.

Nov 09 Election Observation Report – Improvement, Yet Still Unsatisfactory

The Coalition noted significant differences between results reported by optical scanners and the hand count of ballots by election officials across Connecticut. Compared to previous audits, the Coalition noted small incremental improvements in the attention to detail, following procedures, and in the chain-of-custody.

In this report, we conclude that the November post-election audits still do not inspire confidence. We find no reason to attribute all errors to either humans or machines.

Press Release, Full Report etc: <click>

Summary, from the Press Release and Report:

Coalition Finds Unsatisfactory Improvement
In Election Audits Across The State

Citizen observation and analysis show the need for more attention to detail by officials, improvement in counting methods, and ballot chain-of-custody

The Coalition noted significant differences between results reported by optical scanners and the hand count of ballots by election officials across Connecticut. Compared to previous audits, the Coalition noted small incremental improvements in the attention to detail, following procedures, and in the chain-of-custody.

Coalition spokesperson Luther Weeks noted, “We acknowledge some improvement, yet there is still a long way to go to provide confidence in our election system that the voters of Connecticut deserve.”

From the report:

In this report, we conclude that the November post-election audits still do not inspire confidence because of the continued lack of

  • standards for determining need for further investigation of discrepancies,
  • detailed guidance for counting procedures, and
  • consistency, reliability, and transparency in the conduct of the audit.

We find no reason to attribute all errors to either humans or machines.

Cheryl Dunson, League of Women Voters of Connecticut’s Vice President of Public Issues, stated, “We continue to support our past recommendations to the Secretary of the State and the Legislature for improvement in the post-election audit laws, counting procedures, and chain-of-custody.”

Tom Swan, Executive Director, Connecticut Citizen Action Group, said, “Among our greatest concerns are the discrepancies between machine counts and hand-counts reported to the Secretary of the State by municipalities. When differences are dismissed as human counting errors, it is unlikely that an audit would identify an election error or fraud should that occur.”

Cheri Quickmire, Executive Director, Connecticut Common Cause said “There needs to be training and accountability.  Election officials need to be familiar with the procedures, follow the procedures, and the procedures must be enforceable.”

Press Release, Full Report etc: <click>

West Hartford: Mayor Supports Audits – Would Like State To Pay

“The audit is being done for the best of reasons to protect… to make sure democracy is being done accurately.”

Local Online News Video report on West Hartford post-election audit counting <video>

Featured in the video is West Hartford Mayor, Scott Slifka.

The audit is being done for the best of reasons to protect… to make sure democracy is being done accurately.

Like CTVotersCount, the Mayor believes that the State should pay for the audits:

Disappointing that we have been asked to pay for it.  We have requested of the Secretary of the State that her office pay for it.

We believe that the 10% of districts chosen to be counted are checking the system for the whole State, not just their local communities. The audits are a small price to pay to assure voters that our system is protected from errors and fraud, and to deter both.  The audits cost about 10% of the cost of the paper ballots printed for each election.  Last year’s Presidential post-election audit cost about $72,000 statewide.  West Hartford audited three districts out of the sixty chosen statewide for $2,300.

Of course, even though the Secretary of the State supports the audits, they are mandated by the legislature.

Update: Looking at West Hartford’s 2010 budget, the entire budget for the Registrar of Voters is $259,662, and for the town, $212,571,688.

Reporting Error In Westport News Story

This report from Westport deserves attention because it is actual reporting by a reporter at the audit, it makes one significant error, and it has been linked from some national voting news sites.

Story in Westport News: Audit ensures election accuracy <read>

In the last couple of weeks there have been perhaps a dozen  local news stories covering the post-election audits around the state.  For the most part we have not posted or commented on them here as they are usually reports prior to the audits and authored by the local Registrars covering the basic facts of the audit and the town’s selection.  Yet this report from Westport deserves attention because it is actual reporting by a reporter at the audit, it makes one significant error, and it has been linked from some national voting news sites.  (As some readers may not be aware, CTVotersCount is a member of the Connecticut Citizen Election Audit Coalition where I also serve as Executive Director.)

From the Westport News:

There were 12 people who had a hand in the audit, including the town’s Democratic and Republican registrars of voters, Nita Cohen and Judy Raines. Some people had the task of counting the ballots. Others had to watch them count. Two monitors hired by the state had to watch people count and also watch the people watching them count.

In fact:

  • Rather than state officials, the two observers present were unpaid volunteer observers from the Coalition, which itself is entirely an unpaid volunteer operation.
  • Checking with our observers and the Westport Registrars’ Office, we understand that accurate information was given to the reporter and that the registrar was misquoted in the story.
  • With perhaps 130 audit observations to date, our observers have never reported the presence of an observer from the State.  The State, as far as we know, does not observe the local audits and bases all of  its reports on data supplied from the registrars and investigations not open to the public:

The next step will be to forward the report to the University of Connecticut for analysis of the accuracy of the tabulators. After their analysis is written, it’s then sent back to the Secretary of State. Finally, it’s passed along to the State Elections Enforcement Commission.

Our observers also report precious few sightings of the media at audits, so we appreciate the Westport News for sending a reporter.

Based on past observations, Westport election officials deserve applause for their consistent performance in the organization of their audits. Past observation reports show that many towns fall short in conducting audits in a way that provides data sufficient to judge the accuracy of election results.

Registrars in Westport, like those in many towns, are concerned with the costs of the audit:

All of this was required by the state. Most of it was paid for by the town. The total cost came to $1,388…In an interview with the Westport News, Cohen noted just how important everyone’s role is in an audit and other aspects of elections, despite the tediousness and “labor intensive” nature of the process.

We believe the audits are a small, critical, incremental investment. The statewide cost of audits represents less than 10% of the cost of printing the paper ballots. We estimate $72,000 in Nov 2008 for local audit activities vs. more than $750,000 for ballots in that election.  Both of these costs are small when compared to the total costs of conducting elections, not to mention the risk to democracy if the voters’ intentions are not consistently realized in the official election results.  In Federal elections billions of dollars and thousands of lives are at stake.  In local elections millions of dollars, eminent domain issues, education, and quality of life are at stake.

Post-Election Audit Random Drawing

Today, three members of CTVotersCount drew 10% (60 districts) of 600 districts in the November election for the post-election audits.
Many towns ask why they are chosen so often.

Today, three members of CTVotersCount drew 10% (60 districts) of 600 districts in the November election for the post-election audits.  There were thirty-five towns, comprising well over 100 districts, that were therefore exempted from the audit.  We chose the 60 from the remaining districts.

Many towns ask why they are chosen so often.  In Connecticut we audit 10% of the districts after each election and primary — a town with around 10 districts should expect to be chosen frequently to participate in the audit — if a town has 20+ districts it should expect to have a district chosen almost every time, and usually more than one of its districts.  In other states, such as Minnesota, each county must audit some of its districts each time.  Some of those counties are smaller than many Connecticut towns.
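The arithmetic of the drawing also explains why large towns are picked so often. The actual drawing is done by hand at the Secretary of the State’s Office; the sketch below is only an illustration of the selection rate (the district names, seed, and round-up rule are our assumptions, not part of the official procedure):

```python
import math
import random

def draw_audit_districts(districts, fraction=0.10, seed=None):
    """Randomly select the given fraction of districts for a post-election
    audit, rounding up so small elections still yield at least one district."""
    rng = random.Random(seed)
    count = max(1, math.ceil(fraction * len(districts)))
    return rng.sample(districts, count)

# 33 districts in the recent primary -> 4 (about 10%) selected.
primary_districts = [f"District {i}" for i in range(1, 34)]
selected = draw_audit_districts(primary_districts, seed=42)
print(len(selected))  # 4
```

At a flat 10% rate, a town with 20+ of the state’s roughly 600 districts has better-than-even odds of at least one selection in every drawing, which matches what the large towns experience.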

Here is the Secretary of the State’s Press Release with the list of towns and a photo from the drawing: <Press Release>

“We had a very smooth Municipal Election Day last Tuesday, but as I have said many times, we in Connecticut don’t just take the machines’ word for it — we audit the results of every election,” said Secretary Bysiewicz. “We want to make sure that as voters come to the polls and cast ballots in Connecticut they have continued confidence that their votes were recorded accurately. That is why the independent audits are so vital”…

“Auditing election results isn’t just a good idea, it’s absolutely essential in order to guarantee the integrity of our elections,” said Secretary Bysiewicz.

Statisticians, Political Scientists, Election Officials, and Advocates Recommendations to NIST

“We strongly recommend that the next version of the VVSG support auditing election outcomes by facilitating small-batch reporting in standardized electronic reporting formats, and usable voter-verifiable cast vote records.”

Last weekend I participated in a working meeting in Alexandria, VA to design pragmatic post-election audits.  One result was a letter to the National Institute of Standards and Technology (NIST) making suggestions for the Voluntary Voting System Guidelines, which they are in the process of updating.  I am one of two participants and endorsers from Connecticut <Letter>

Overview

Two key goals of vote tabulation audits are

-To verify that the election outcomes implied by the reported vote totals are correct, and
-To provide data for process improvement: specifically, to identify and quantify various causes of discrepancies between voter intentions and the originally reported vote totals.

Difficulty in obtaining subtotals of the machine tallies to compare with manually-derived totals from small batches of ballots is a major problem. Efficient vote tabulation audits require – in addition to software-independent audit trails – timely, comprehensive, detailed, standardized, machine-readable subtotals of the votes as recorded by the vote tabulation systems. For greatest efficiency, individual ballot interpretations should be available to support emerging methods that audit at the ballot level (that is, batches of size 1) without breaching confidentiality.

Future VVSGs should contain audit-related requirements for all voting systems, designed in consultation with experts in election auditing, to ensure that the next generation of voting systems facilitate election audits.

Key areas for standards include:

-Usability of the paper record
-Comprehensive reporting of all important data elements
-Small-batch or individual ballot reporting capability
-Machine-readable, standard election result reporting formats, with support for standardized identification of contests and candidates, that facilitate aggregation for electoral contests spanning multiple jurisdictions
-Machine-readable, standard audit result reporting formats, including audit units selected and discrepancies found

Voting systems should make it easy to create detailed reports with subtotals by contest, by ballot batch, by precinct, or by scanner or tabulation machine.
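To make the recommendation concrete, here is a sketch of what one machine-readable small-batch record might look like. All field names, identifiers, and values here are hypothetical, invented for illustration; the letter does not specify a format, and the VVSG requirements would be designed by election auditing experts:

```python
import json

# A hypothetical small-batch result record of the kind the letter recommends:
# subtotals reported per batch, per tabulator, in a standard machine-readable
# format that auditors can compare against hand counts of the same batch.
batch_report = {
    "jurisdiction": "Example Town",       # hypothetical
    "precinct": "001",
    "tabulator_id": "SCANNER-17",         # hypothetical identifier
    "batch_id": "2009-11-03-B004",
    "ballots_in_batch": 25,
    "contests": [
        {
            "contest_id": "mayor",
            "subtotals": {"Candidate A": 14, "Candidate B": 10},
            "undervotes": 1,
            "overvotes": 0,
        }
    ],
}

print(json.dumps(batch_report, indent=2))
```

With records like this, an audit can compare a hand count of a single 25-ballot batch directly against the machine’s subtotal for that batch, instead of reconciling against district-wide totals.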

One common, standardized data format is needed for reporting audit results, as well as initial election results. Implementation details are outside the scope of this letter; election auditing experts should participate in specifying these requirements.

In summary, we strongly recommend that the next version of the VVSG support auditing election outcomes by facilitating small-batch reporting in standardized electronic reporting formats, and usable voter-verifiable cast vote records.

On a personal note, it was great to meet in person several political scientists, officials, and advocates whom I’ve known only from frequent email and conference call presentations; to see again those I have met previously; and to meet several additional scientists, advocates, and officials dedicated to election integrity.

Timely Reminders from Secretary Bysiewicz and CTVotersCount

Referendums and questions are exempt from the Connecticut post-election audit law. However, they are not exempt from the risks of error and fraud.

In a press release, Secretary of the State Susan Bysiewicz reminds voters of the importance of questions on the November municipal ballots:

“Towns throughout Connecticut are facing some crucial decisions on schools, local budgets, road repair and other issues, so it is imperative that voters make their voices heard next Tuesday,” said Secretary Bysiewicz.  “Local elections are enormously important for determining the future direction of all of our communities.”

CTVotersCount reminds voters and the legislature that:

Referendums and questions are exempt from the Connecticut post-election audit law.  However, they are not exempt from the risks of error and fraud. The post-election audit law needs to be strengthened in several ways, including making all critical ballots and contests subject to selection for audit; and all ballots and contests are critical.

For more information: FAQ: Why Would Anyone Steal A Referendum?