What do [Connecticut] voters think?

A new Caltech/MIT Voting Technology Project Report provides insight into the opinions of voters on several voting reform issues. We comment on Connecticut specific results and editorialize on voting integrity implications of the survey. We recommend the survey and commentary be contemplated by activists, legislators, and future Secretaries of the State.

Voter Opinions about Election Reform: Do They Support Making Voting More Convenient? <read>

The thrust of the survey is a state-by-state assessment of the level of support for several voting reforms. We find the National summary interesting and somewhat surprising:

Overall Support for Election Reform
Require [Govt Photo] ID 75.6%
Make Election Day a holiday 57.5%
Auto-register all citizens to vote 48.3%
Election Day Registration 43.7%
Election Day to Weekend 41.8%
Absentee voting over Internet 30.1%
Vote by Mail 14.7%

We will have more to say about the Connecticut Results and provide CTVotersCount Commentary after the conclusions from the report:

Conclusions

Our analysis of the American voting public’s support for the many potential election reforms provides a variety of important insights into the potential direction of innovations in the electoral process in the near future. First, we found that some other reforms have mixed support. These include attitudes toward automatic voter registration, Election Day voter registration, and moving Election Day to a weekend. These reforms do not have majority support among all voters in the United States but there are some states where these reforms do have majority support and could be implemented. Second, we found that Internet voting and voting-by-mail did not receive a great deal of support from American voters. There was no state where Internet voting was supported by a majority of voters and there were no states that do not already have expanded vote by mail (Washington and Oregon) where expanded vote by mail had majority support. Finally, we found that a majority of Americans support two reforms — requiring showing photo identification (overwhelming support) and making Election Day a holiday (bare majority support). These two reforms have strong support nationally and a majority of support in most of the states. Americans, in general, are more interested in the one reform that would promote security, requiring photo identification, than any of the convenience voting reforms that would improve the accessibility to the voting process.

Our findings are indicative of where the public stands today, with what they know about these election reforms today. These results do not mean that election reforms with substantial support from voters are inevitable, that reforms without substantial support will never be enacted, or that voters actually have strong or well-formed opinions about the potential ramifications of reform. Still, the patterns we discover here have implications for current politics and for the likelihood of election reform in future years.

Partisanship, for instance, is strongly associated with support for and opposition to virtually every reform proposal. To a large degree, these popular reform attitudes tend to map onto the attitudes of legislators, both at the national and state levels, and as with most attitudes in legislatures these days, the partisan divisions are likely stronger among legislators than among their electoral supporters. Although there are exceptions, Democratic lawmakers tend to be the advocates of most of the reforms we explore in this paper, and that support tends to be mirrored, in a muted fashion, among the electorate. (The exceptions are requiring photo identification and Internet voting.)

Younger voters tend to support the reforms studied here, except all-mail voting and moving Election Day to a weekend. What we cannot judge is whether this is a cross-sectional or a cohort effect. That is, we cannot tell whether younger voters are more likely to support reforms because young people are inherently prone to support making it easier to vote, or because they have lived more of their lives surrounded by easy conveniences and electronic appliances. If the latter, and if reforms tend to be more likely when voters support them, then it may be a matter of time before support for some of these reforms, such as voter identification and making Election Day a holiday, become irresistible. If the former, then there are no obvious future trends favoring or opposing reform.

Finally, the findings here provide an interesting insight into how the adoption of weakly supported (or even strongly opposed) reforms may eventually win over voters. Note that respondents were overwhelmingly opposed to vote-by-mail, except in Oregon and Washington — one state that has long had the practice, and the other which has recently transitioned to it. Unfortunately, we do not have evidence of attitudes toward vote-by-mail in these two states prior to its adoption, but it is hard to believe that residents in Oregon and Washington were wildly out of step with voters in other states, even though they may have supported it more than average. For all Oregon and most Washington voters, voting by mail is “the way it’s done,” and voters there by-and-large support it like voters in no other state. And in general, now that we have benchmarked all states according to their voters’ attitudes toward electoral reform, it will be possible in the future to answer causal questions concerning public attitudes toward electoral practices. Are states whose citizens most support particular electoral reforms more likely to enact them? Do voters in states that adopt reform become more accepting of those reforms after they have been adopted and put into place?

Here are some other items in the report that we found particularly interesting:

The slow pace of election reform in national and state legislatures is no doubt due to multiple causes, including the low salience of election reform in the face of other governing crises, the inertia of elected officials who have succeeded under current electoral rules, economic factors, and uncertainties about the political consequences and political costs of each reform.

The factor we focus on in this article is public opinion. Based on data derived from a unique national survey, we show that a major hurdle many election reforms face is public opinion. Only one prominent reform proposal, requiring photo identification, is supported overwhelmingly nationwide. Other reforms—reforms that are justified based on convenience— at best divide the public, and are generally opposed by them…

There were generally unsurprising party and demographic differences in voter preferences. What was surprising was that, for the most part, the differences were marginal, with voters generally agreeing across political, age, racial, educational, and income lines. For the details, see table 3 on page 29 of the <report .pdf>

Connecticut Results

Connecticut tracked very closely with the National averages:

National Connecticut
Require [Govt Photo] ID 75.6% 72%
Make Election Day a holiday 57.5% 57%
Auto-register all citizens to vote 48.3% 44%
Election Day Registration 43.7% 43%
Election Day to Weekend 41.8% 44%
Absentee voting over Internet 30.1% 31%
Vote by Mail 14.7% 12%

We looked at several other states near Connecticut and around the Nation. In general, those states varied more from the National averages than Connecticut did.

CTVotersCount Commentary

The primary focus of CTVotersCount is on voting integrity. We also consider total costs and the implications voting reforms would have for the flourishing of our democracy. Through those filters we comment:

  • We are not ready to celebrate the lack of public support for reforms that we are conditionally against(*), such as vote by mail and internet voting. Nor are we ready to give up on reforms that we are conditionally for(*), such as election day registration and automatic registration. As the report points out, voters well educated on these items might change their conclusions. As we have pointed out, fast food is not good for us, but despite lots of evidence and education it remains popular. When it comes to voting reforms we see little education and usually a lack of evidence or balance available to the public.
  • We caution against recommending or opposing a reform based on public perception reported in a single survey, or in several surveys, providing simple reform descriptions. However, public support and perception are important factors worthy of consideration. This report should give pause to legislators and Secretaries of State who believe there is a strong degree of public support for some of these reforms.
  • We have repeatedly pointed to surveys, some a generation old, supporting complex reforms such as the national popular vote and instant runoff voting. We wonder what the result would have been if these reforms had been included in this survey. Yet, it is risky to decide complex issues based on simple surveys – we suspect most surveys of the public would support cutting taxes and cutting the deficit, with a majority also supporting almost any list of proposals to maintain and increase spending on specific items.
  • CTVotersCount has not taken a position on voter id. It is clear from the survey that voter id is supported by a significant majority of voters, and it is also a relatively simple reform to understand. Yet, caution is still prudent – it has implications for ballot access that may be complex and less generally understood.
  • Optimistically, we note, as the survey did, that the voter id preference may well indicate that the public is more concerned with and supportive of reforms associated with voting integrity, while significantly less concerned with increasing the convenience of voting. Perhaps this is our bias celebrating.
  • We wonder how the survey would have come out if voters were asked about requiring a paper ballot, an independent post-election audit, transparent close-election recounts, the preservation of the anonymous/secret ballot, public campaign financing, corporate/lobbyist contributions, or stronger National minimum standards in these areas. What would the public do first? Where would voters be willing to make expenditures and investments?

Update 8/19/2010 More Research: How Polling Places [and early voting] Can Affect Your Vote <read>

Their first finding was hardly a shocker: While distance to the polling place did influence the likelihood of voting, the impact was much greater for households in which no one owned a car. But the researchers were surprised by a seemingly counterintuitive statistic: Moving the location of a polling place actually increased voter turnout…

A follow-up laboratory experiment confirmed their theory that the voters had been “primed” with the idea of schooling. Participants shown images of a school were more likely to support increased education funding than those who had seen photos of a church. In contrast, those who viewed the house of worship were more likely to support an initiative to limit stem-cell research — a favorite issue of the religious right.

This same dynamic was documented in a study published earlier this year in the journal Political Psychology. Abraham Rutchick of California State University, Northridge, found that during a 2006 election in South Carolina, a proposed constitutional amendment prohibiting gay marriage was supported by 83 percent of voters who cast their ballots in churches, as opposed to 81.5 percent of those who voted elsewhere...

“There are good reasons to adopt early voting,” he and his colleagues concluded in the journal Political Science & Politics. “Ballot counting is more accurate, it can save administrative costs and headaches and voters express a high level of satisfaction with the system. If a jurisdiction adopts early voting in the hopes of boosting turnout, however, it is likely to be disappointed. We find that early voting reforms have, at best, a modest effect on turnout.”

Priscilla Southwell of the University of Oregon, Eugene, came to a similar conclusion in a 2009 issue of the Social Science Journal. She reports that the effect of voting by mail in primary and general elections is “positive but fairly minimal.” However, the format apparently increases voter participation “in low-stimulus special elections where the context is a single candidate race, or when a single or a few ballot measures are involved.”..

Update 9/10/2010: MD: Little interest shown in early voting <read>

Despite spending millions of dollars on early voting this year, it appears that only about 2 percent of Marylanders will take advantage of the new option before the primary election…

Local election officials say early voting has been a success, but has caused a few problems, primarily with staffing and budgets.

Like in the other districts, Baltimore city Election Director Armstead B.C. Jones Sr. said his employees worked Saturday and on Labor Day to staff early-voting centers and the local election office. He said employees have been putting in 12-hour days during early voting, and are being paid overtime and holiday wages.

“It’s really tough on us,” Jones said. “On Election Day it’s bad enough. It’s just spreading everyone real thin, but the job is getting done.”

As of Wednesday, 5,604 of the city’s 319,342 eligible voters had voted early at the polls, or 1.75 percent, according to the state Board of Elections.

Jones expects to spend about $1 million on early voting this month and before the Nov. 2 general election.
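As a sanity check on the turnout figure in the article quoted above, the percentage follows directly from the two counts it reports:

```python
# Early-voting turnout in Baltimore city, using the counts quoted above
early_voters = 5_604    # early votes cast as of Wednesday, per the article
eligible = 319_342      # eligible voters in the city, per the article

turnout_pct = early_voters / eligible * 100
print(f"{turnout_pct:.2f}%")  # → 1.75%, matching the state Board of Elections figure
```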

(*) When we say we are “Conditionally Against” a proposition, we mean that nobody has proposed a realistic safe way to accomplish the proposition. We remain open to the possibility that a means may be found that would pass the scrutiny of the majority of computer scientists, security experts, election officials, and voting integrity advocates.

When we say we are “Conditionally For” a proposition, we mean that other states have safe implementations of the proposition or computer scientists, security experts, election officials, and voting integrity advocates have recommended a safe solution. We caution that a particular implementation or law may not meet a reasonable standard of safety.

Nov 09 Election Audit Reports – Part 2 – Inadequate Counting, Reporting, and Transparency Continue

“The main conclusion of this analysis is that the hand counting remains an error prone activity. In order to enable a more precise analysis, it is recommended that the hand counting precision is substantially improved in future audits. The completeness of the audit reports also need to be addressed…Submitting incomplete audit returns has little value for the auditing process.”

Late last week the University of Connecticut (UConn) VoTeR Center posted three reports from the November election on its web site <Pre-Election Memory Card Tests>, <Post-Election Memory Card Tests>, and <Post-Election Audit Report>.  In Part 1 we discussed the memory card tests and in Part 2 we discuss the Post-Election Audit Report.

Highlights from the official report:

The VoTeR Center’s initial review of audit reports prepared by the towns revealed a number of returns with unexplained differences between hand and machine counts and also revealed discrepancies in cases of cross-party endorsed candidates (i.e., candidates whose names appear twice on the ballot because they are endorsed by two parties). As a result, the SOTS Office performed additional information-gathering and investigation and, in some cases, conducted independent hand-counting of ballots. …Further information gathering was conducted by the SOTS Office to identify the cause of the moderately large discrepancies, and to identify the cause of discrepancies for cross-party endorsed candidates…

This report presents the results in three parts: (i) the analysis of the original audit records that did not involve cross-party endorsed candidates, (ii) the analysis of the audit records for cross-party endorsed candidates, and (iii) the analysis of the records that were revised based on the SOTS Office follow up. The analysis does not include 6 records (0.8%) that were found to be incomplete. ..

The main conclusion in this report is that for all cases where non-trivial discrepancies were originally reported, it was determined that hand counting errors or vote misallocation were the causes. No discrepancies in these cases were reported to be attributable to incorrect machine tabulation. For the original data where no follow up investigation was performed, the discrepancies were small; in particular, the average reported discrepancy is much lower than the number of the votes that were determined to be questionable.

Further on in the report is another conclusion:

The main conclusion of this analysis is that the hand counting remains an error prone activity. In order to enable a more precise analysis, it is recommended that the hand counting precision is substantially improved in future audits. The completeness of the audit reports also need to be addressed. For example, in two of the towns when the second hand count was performed it was determined that the auditors did not count a batch of 25 ballots in one case and the absentee ballots in the second. This initially resulted in apparently unexplained discrepancies. Submitting incomplete audit returns has little value for the auditing process.

We note that the details of the investigations to determine the accuracy of human and machine counting include some counting of ballots and some telephone conversations with election officials:

The first follow up was performed to address a substantial number of discrepancies in some precincts (discrepancies over 30 votes). All those unusual discrepancies were concentrated in four towns. As a result, in those towns a second hand count of the actual ballots was performed by the SOTS Office personnel…

We now discuss a batch of records containing 218 (28.1% of 776) records where originally the reported discrepancies were under 30 (these do not include cross-party endorsed candidates). In this case the SOTS Office personnel contacted each registrar of voters and questioned their hand count audit procedures. In all instances, the registrars of voters were able to attribute the discrepancies to hand counting errors. Thus no discrepancies (zero) are reported for these districts. Given the fact that no discrepancies were reported for those records we do not present a detailed analysis.

We have several concerns with these investigations:

  1. All counting and review of ballots should be transparent and open to public observation.  Both this year and last year we have asked that such counting be open and publicly announced in advance.
  2. Simply accepting the word of election officials that they counted inaccurately is hardly reliable, scientific, or likely to instill trust in the integrity of elections. How do we know how accurate the machines are? Without a complete audit, any error or fraud would likely result in a count difference that would be [or could have been] dismissed as a hand-counting error.
  3. Even if, in every case, officials are correct that they did not count accurately, it cannot be assumed that the associated machines counted accurately.
  4. Simply ignoring the initial results in the analysis of the data provides a simple formula to cover up, or fail to recognize, error and fraud in the future.

As we have said before we do not question the integrity of any individual, yet closed counting of ballots leaves an opening for fraud and error to go undetected and defeats the purpose and integrity of the audit.

We also note that in several cases officials continued to fail to perform the audit as required by law or provided incomplete reports.

On the other hand we note that only 6 records (0.8% of 776) were found to be incomplete. The statistical analysis does not include these records. While some problematic records are clearly due to human error (e.g., errors in addition), in other cases it appears that auditors either did not follow the audit instructions precisely, or found the instructions to be unclear. However, this is a substantial improvement relative to the November 2007 and November 2008 elections, where we reported correspondingly 18% and 3.2% of the records that were unusable.

Improvement or not, our solution would be to require the towns involved to correct their errors, comply with the law, and perhaps be subject to a penalty. Not pursuing such remedies provides a clear formula for covering up errors and fraud.

Finally, since only “good” records were fully analyzed, we question the value of some of the reported statistics based only on those results. We do agree with the report’s recommendations:

The main conclusion of this analysis is that the hand counting remains an error prone activity. In order to enable a more precise analysis, it is recommended that the hand counting precision is substantially improved in future audits. The completeness of the audit reports also need to be addressed. For example, in two of the towns when the second hand count was performed it was determined that the auditors did not count a batch of 25 ballots in one case and the absentee ballots in the second. This initially resulted in apparently unexplained discrepancies. Submitting incomplete audit returns has little value for the auditing process.

For the cross party endorsement, it is important for the auditors to perform hand counting of the votes that precisely documents for which party endorsement the votes were cast, and to note all cases where more than one bubble was marked for the same candidate. The auditors should be better trained to follow the correct process of hand count audit…

We also believe that our reporting of the analysis, and the analysis itself needs to be improved. A major change planned for future analysis is to assess the impact of the perceived discrepancies on the election outcomes (in addition to analyzing individual audit return records). This is going to be exceedingly important for the cases where a race may be very close, but where the difference between candidates is over 0.5% (thus not triggering an automatic recount)[*]

* CTVotersCount Note: Connecticut has an automatic ‘recanvass’, triggered at a difference of less than 20 votes or 0.5%, up to a maximum difference of 2000 votes.
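As a sketch of how we read that trigger (the function name and the exact form of the thresholds below are our own illustration, not statutory text):

```python
def recanvass_required(margin: int, total_votes: int) -> bool:
    """Sketch of Connecticut's automatic recanvass trigger, as we read it:
    a margin under 20 votes always triggers a recanvass; a margin under
    0.5% of total votes triggers one only up to a 2,000-vote maximum."""
    if margin < 20:
        return True
    return margin < 0.005 * total_votes and margin <= 2000

# Hypothetical race totals:
print(recanvass_required(15, 100_000))     # under 20 votes -> True
print(recanvass_required(400, 100_000))    # under 0.5% and under 2,000 -> True
print(recanvass_required(2500, 1_000_000)) # under 0.5% but over 2,000 -> False
```

The 2,000-vote cap is why the report’s planned analysis above matters: a race can be close in percentage terms yet never trigger an automatic recanvass.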

In January, the Connecticut Citizen Election Audit Coalition Report analyzed the November 2009 Post-Election Audit data and the observations of citizen volunteers:

In this report, we conclude that the November post-election audits still do not inspire confidence because of the continued lack of

  • standards for determining need for further investigation of discrepancies,
  • detailed guidance for counting procedures, and
  • consistency, reliability, and transparency in the conduct of the audit.

Compared with previous reports of November post-election audits:

  • The bulk of our general observations and concerns remain.
  • The accuracy of counting has improved. There was a significant reduction in the number of extreme discrepancies reported. However, there remains a need for much more improvement.
  • There was a significant improvement in counting cross-endorsed candidate votes.
  • The number of incomplete reports from municipalities has significantly decreased.

We find no reason to attribute all errors to either humans or machines.

There is no reason to modify the Coalition’s conclusion based on the official report. Many of the same concerns and conclusions we discussed last year still apply. See last year’s post for more details; here is a summary:

  • The investigations prove that Election Officials in many Connecticut municipalities are not yet able to count votes accurately
  • The audit and the audit report are incomplete
  • Even with all the investigations and adjustments we have many unexplained discrepancies [Unless we accept the belief of officials that they counted inaccurately, and in all those cases the machine counted accurately]
  • The Chain-of-Custody is critical to credibility
  • Either “questionable ballot” classification is inaccurate in many towns or we have a “system problem”
  • Accuracy and the appearance of objectivity are important
  • Timeliness is important
  • The problem is not that there were machine problems. We have no evidence there were any. The problem is that if there are, or ever were, dismissing all errors as human counting errors makes us unlikely to find them
  • We stand by our recommendations and the recommendations of other groups
  • The current Audit Process in Connecticut demonstrates the need for audits to be Independent and focused on election integrity, not just machine certification reliability

As we said last year:

We recognize and appreciate that everyone works hard on these programs, performing the audits and creating these reports, including the Registrars, Secretary of the State’s staff, and UConn. We also welcome Secretary Bysiewicz’s commitment to solve the problems identified. Yet, we have serious concerns with the credibility of the audits as conducted, and with their value in providing the public confidence in the election process.

Nov 09 Election Audit Reports – Part 1 – Problems Continue and Some Good News

We should all applaud the unique memory card testing program, yet we must also act aggressively to close the gaps it continues to expose…The good news is that UConn has identified a likely cause of the “junk” data cards. Perhaps a solution is near.

Late last week the University of Connecticut (UConn) VoTeR Center posted three reports from the November election on its web site <Pre-Election Memory Card Tests>, <Post-Election Memory Card Tests>, and <Post-Election Audit Report>.  In Part 1 we will discuss the memory card tests and in Part 2 the Post-Election Audit Report.

As we said last year:  We should all applaud the unique memory card testing program, yet we must also act aggressively to close the gaps it continues to expose.

We note the following from this year’s reports:

  • An increase in the percentage of memory cards tested in the pre-election test:
[pre-election 2009]  The VoTeR Center received in total 491 memory cards from 481 districts before the elections. This document reports on the findings obtained during the audit. The 491 cards represent over 80.6% of all districts, thus the audit is broad enough to draw meaningful conclusions.

[pre-election 2008] the VoTeR Center received and examined 620 memory cards [about 74% of districts] as of November 3, 2008. These cards correspond to 620 distinct districts in Connecticut. About 2/3 of these memory cards were randomly chosen by the VoTeR Center personnel during the visits to LHS and before the cards were packed and shipped to the towns. Another 1/3 of the memory cards came from the towns directly, where the cards were randomly chosen for preelection audit (this procedure applied to the town for which the cards were not selected at LHS).

  • And a significant drop in the percentage of memory cards tested in the post-election test:
[post-election 2009] The VoTeR Center received in total 120 memory cards from 49 districts [approximately 8.0% of all districts] after the elections. The cards were received during the period from December 12, 2009 to February 12, 2010. Among the received cards, 49 were used in the elections,

[post-election 2008] The VoTeR Center received in total 462 memory cards from a number of districts after the elections… Among these cards, 279 were used in the elections… The 279 cards represent over 30% of all districts,

As we understand it, the Secretary of the State’s Office asks all towns to send in memory cards for each district; they are not randomly selected. This means that we cannot be sure the percentages of “junk” data or procedural lapses reported actually represent a reliable measure of all memory cards and official actions, yet it seems reasonable to conclude that:

  • “Junk” data continues at an unacceptable rate:
[pre-election] The audit identified forty two (42) cards, or 9%, that contained “junk” data; these cards are unreadable by the tabulators, and easily detected as such. This is a high percentage of faulty/unusable cards. We note that this is consistent with the percentage reported for the pre-election audit of November 2008 elections. The percentage is lower than detected in the post-election audit for the August 2008 primary (15%), but higher than detected in the pre-election audit for the August 2008 primary (5%), post-election audit for the February 2008…

[post-election] Concerning the remaining cards, 14 (12% of the total number of cards) were found to contain junk data, that is, they were unreadable, which is easily detected by the tabulators; had a card contained junk data at the time of the election,…

So the problem of “junk” data continues, at a rate near the middle of past testing results. As we have said before, 5%, 9%, 15%, or even 1% is a huge failure rate for relatively simple technology such as memory cards.
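As a quick check, the quoted rates follow from the card counts in the UConn report excerpts above:

```python
# "Junk" card rates implied by the counts quoted from the UConn reports
pre_junk, pre_total = 42, 491     # pre-election: 42 junk cards of 491 received
post_junk, post_total = 14, 120   # post-election: 14 junk cards of 120 received

print(round(pre_junk / pre_total * 100))    # → 9, matching the reported 9%
print(round(post_junk / post_total * 100))  # → 12, matching the reported 12%
```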

  • Very good news on the “Junk” data cards:

We have determined that weak batteries are the primary cause of junk data on cards; a separate report will document this in more detail. It is recommended that batteries are replaced before each election.

It seems that UConn has identified a likely cause of the “junk” data cards.  Perhaps a solution is near.  We look forward to reading that separate report.

  • Officials continue to fail to follow procedures at a significant rate:
[pre-election] The audit identified twenty-three (23) cards where the audit log indicates card duplication events. Card duplication is not authorized per SOTS Office instructions. Otherwise the cards were properly programmed for elections…There are 76 cards (15%) that were properly programmed, but were found in unexpected states or contained unexpected timing of events. This does not necessarily present an immediate security concern, however the findings indicate that the established procedures are not strictly followed in some cases.

[post-election] 14 contained junk data
2 were not programmed (formatted, but blank)
3 were involved in duplication
4 were non-standard cards (32KB instead of 128KB) [LHS, not election official, error]
4 were programmed for different elections

The main concern with such failures is that they are symptomatic of other procedures frequently not being followed; in addition, each failure represents a possible lapse in security and election integrity.

Comments from our post on last year’s report still apply:

  • A non-random partial post-election audit of memory cards is useful, but it is insufficient
  • How many more tests, reports, and elections will it take before the junk data problem is significantly reduced? [Thanks to UConn, based on the 2009 report, we may have an answer soon]
  • Almost every failure to follow procedures is an opportunity to cause problems, cover up errors, or cover up fraud. [including not sending in cards for testing]. We can only hope that the Registrars of Voters will join in the commitment to meet a much higher standard.

For more details behind these comments please read our post on last year’s report.

Nov 09 Election Observation Report – Improvement, Yet Still Unsatisfactory

The Coalition noted significant differences between results reported by optical scanners and the hand count of ballots by election officials across Connecticut. Compared to previous audits, the Coalition noted small incremental improvements in the attention to detail, following procedures, and in the chain-of-custody.

In this report, we conclude that the November post-election audits still do not inspire confidence. We find no reason to attribute all errors to either humans or machines.

Press Release, Full Report etc: <click>

Summary, from the Press Release and Report:

Coalition Finds Unsatisfactory Improvement
In Election Audits Across The State

Citizen observation and analysis show the need for more attention to detail by officials, improvement in counting methods, and ballot chain-of-custody

The Coalition noted significant differences between results reported by optical scanners and the hand count of ballots by election officials across Connecticut. Compared to previous audits, the Coalition noted small incremental improvements in the attention to detail, following procedures, and in the chain-of-custody.

Coalition spokesperson Luther Weeks noted, “We acknowledge some improvement, yet there is still a long way to go to provide confidence in our election system that the voters of Connecticut deserve.”

From the report:

In this report, we conclude that the November post-election audits still do not inspire confidence because of the continued lack of

  • standards for determining need for further investigation of discrepancies,
  • detailed guidance for counting procedures, and
  • consistency, reliability, and transparency in the conduct of the audit.

We find no reason to attribute all errors to either humans or machines.

Cheryl Dunson, League of Women Voters of Connecticut’s Vice President of Public Issues, stated, “We continue to support our past recommendations to the Secretary of the State and the Legislature for improvement in the post-election audit laws, counting procedures, and chain-of-custody.”

Tom Swan, Executive Director, Connecticut Citizen Action Group, said, “Among our greatest concerns are the discrepancies between machine counts and hand-counts reported to the Secretary of the State by municipalities. When differences are dismissed as human counting errors, it is unlikely that an audit would identify an election error or fraud should that occur.”

Cheri Quickmire, Executive Director, Connecticut Common Cause, said, “There needs to be training and accountability. Election officials need to be familiar with the procedures, follow the procedures, and the procedures must be enforceable.”

Press Release, Full Report etc: <click>

EVT/WOTE Conference, Montreal

Monday and Tuesday, the EVT/WOTE Conference was held in Montreal. This tends to be a highly technical conference on potential voting technologies, security and vulnerabilities in current technology, and related projects. For me it is a mixture of new information relevant to voting, interesting technical articles, a time to reflect, and a chance to connect with others working for voting integrity.

8/19/2009:  Back from a week away from the Internet, I’ve updated the links below and highly recommend the sensation of the conference, the debut of “Plaudits for Audits”:

<video and lyrics>

Friends, raise your joyful plaudits to post-election audits
Where we count some votes by hand to check the work of the machines.
It might sound esoteric, or tiresomely numeric
But democracy’s at stake, so let’s make sure those counts are clean.

******

Monday and Tuesday, the EVT/WOTE Conference was held in Montreal. This tends to be a highly technical conference on potential voting technologies, security and vulnerabilities in current technology, and related projects. For me it is a mixture of new information relevant to voting, interesting technical articles, a time to reflect, and a chance to connect with others working for voting integrity.

Joe Hall and Ben Adida have provided photos and summaries. You can see me in the white striped shirt in the front row in the 1st picture from Joe. <Joe Day 1> <Joe Day 2> Ben covers a bit more of the details in his posts: <Day 1 AM> <Day 1 PM> <Day 2>

All the papers are now available from the USENIX site <here>

There were several highlights for me, several of which will provide fodder for more extensive posts in the near future:

  • Larry Norden highlighted the issues associated with voter registration systems and the potential for improvement in this area. Our current systems are expensive, error prone, and disenfranchising. Surprisingly, voter registration accounts for more than half the cost of election administration – more than all the other costs combined: equipment, training, ballot printing, auditing, and election day activities.
  • One theme was the potential for cryptography to provide secure and auditable elections without paper records.  Here I was surprised at all the activity and claimed potential.  I am open but not yet convinced.  Proponents claim it can be done, others point out challenges still to be addressed.  There would be a lot of reading for me to follow all the details of the proposed processes to begin to understand the potential risks and value.  Perhaps in a few years we will have an agreed upon alternative to paper ballots.
  • Representative Rush Holt made the case for his bill. He is one of four or five scientists in Congress (he has a Ph.D. in astrophysics). Several years ago Congress eliminated its Office of Technology Assessment, which analyzed technology independently for the Congress. He pointed out that the OTA might have helped avoid some of the problems with HAVA and with biofuels. It says to me that most members of Congress are clueless that they are clueless about the need for scientific analysis.
  • Much has been made of a paper on the potential to hack voting machines using return-oriented programming <read>. While technically significant, to me it is just another confirmation that it is almost impossible to trust software – we must assume that any system can be hacked.
  • Last on the agenda, but not least, a paper by Joe Hall and several others reviewing “Risk Limiting” audits <read>. As an attendee I have had access to this paper for several weeks and have been party to several lively discussions of its implications and conclusions. It successfully challenges the assumptions behind previous papers on risk-limiting audits and calls for much more rigorous statistical methods of analysis if audits are to claim exact levels of statistical confidence in election integrity. I will have much more to say on this in the future. Suffice it for now to say that we are on much more solid ground proposing post-election audit laws that select ballots or districts based on specific fixed or tiered audit percentages than laws specifying statistical confidence levels – unless and until statisticians can agree on how to compute confidence levels and on realistic expectations for ballot counting.
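To see why the statistical fine print matters, here is a minimal sketch of the arithmetic behind sample-based audits. The numbers are hypothetical (loosely inspired by Connecticut's 833 districts, not taken from the paper): it computes the chance that a uniform random sample of districts catches at least one corrupted district.

```python
from math import comb

def detection_probability(total_districts: int, corrupted: int, audited: int) -> float:
    """P(a uniform random sample of `audited` districts contains at
    least one of `corrupted` bad districts) -- basic hypergeometric math."""
    clean = total_districts - corrupted
    if audited > clean:
        return 1.0  # sample too large to miss every bad district
    p_miss = comb(clean, audited) / comb(total_districts, audited)
    return 1.0 - p_miss

# Hypothetical: 833 districts, a 10% audit (83 districts), 5 corrupted
p = detection_probability(833, 5, 83)  # roughly 0.41
```

With these made-up numbers, a 10% audit misses a five-district problem more often than it catches it, which illustrates why claimed confidence levels depend so heavily on the assumptions the paper challenges.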

A Better Alternative To Election Day Registration?

An alternative is available that would eclipse Election Day Registration (EDR), while fixing our inaccurate voter registration databases and saving government money.

Update: New York Times Editorial endorses

Update: 07/24/2009:  New York Times Editorial endorses  <read>

Bolder action is needed to impose a higher standard on the states. Senator Charles Schumer, the Democrat of New York who is chairman of the Senate Rules and Administration Committee, is at work on a national voter registration modernization bill. To be effective, it should follow the lead of nations that are far more serious than the United States about getting eligible voters on the rolls — and have the registration rates to prove it.

******

An alternative is available that would eclipse Election Day Registration (EDR), while fixing our inaccurate voter registration databases and saving government money.

A National Journal Article, Looking Abroad For Answers On Voter Registration, summarizes the case and points to recent reports covering the possibilities of making the government responsible that all voters are registered. <read>

As lawmakers on Capitol Hill mull the best way to overhaul the voter registration system, advocacy groups that endorse fixes are pointing overseas for answers.

Unlike the United States, which puts the onus for registering entirely on the voter, many Western democracies put government officials in charge of adding voters to the rolls, according to a recent study by New York University School of Law’s Brennan Center for Justice. These include Australia, Belgium, Canada, Germany, Peru and Sweden.

It seems that there are advantages that both progressives and conservatives would find in this alternative:

A recent PIRG report makes the case that the nation’s current, paper-based registration system is not only inefficient and error-riddled, but burdens election administrators with excessive costs. The survey of 100 counties, conducted by PIRG’s Education Fund, found that election officials spent more than $33 million in the 2008 election on simply populating and correcting the voter rolls.

Voting rights advocates argue that registration snafus were the No. 1 problem plaguing last year’s election. Certainly registration controversies dominated the headlines, with progressive activists complaining that eligible voters were blocked from registering and voting, and conservatives arguing that ACORN, the Association of Community Organizations for Reform Now, was fraudulently registering ineligible voters.

Putting registration in the hands of government officials would solve both problems, said Wendy Weiser, director of the Brennan Center’s voting rights and elections project. The group’s recent analysis of voter registration systems around the world points to data sharing — capturing voter information from government records used for other purposes — as the most promising model internationally.

Nov 08 Election Audit Reports – Part 2 – Counting Not Extremely Accurate

We recognize and appreciate that everyone works hard on these programs, performing the audits, and creating these reports, including the Registrars, Secretary of the State’s staff, and UConn. We also welcome Secretary Bysiewicz’s commitment to solve the problems identified. Yet, we have serious concerns with the credibility of the audits as conducted and their value in providing confidence to the public in the election process.

Introduction

This week the University of Connecticut (UConn) VoTeR Center released reports on post-election audits and memory card testing for the November 2008 election. These reports were announced by a press release from the Secretary of the State, Susan Bysiewicz. <Press Release> <Post-Election Memory Card Report> <Post-Election Audit Report>.  Yesterday, in Part 1, we covered the Memory Card Report.   Today, in Part 2 we highlight and comment on the Post-Election Audit Report.

We recognize and appreciate that everyone works hard on these programs, performing the audits, and creating these reports, including the Registrars, Secretary of the State’s staff, and UConn. We also welcome Secretary Bysiewicz’s commitment to solve the problems identified. Yet, we have serious concerns with the credibility of the audits as conducted and their value in providing confidence to the public in the election process.

Summary: Three Reports On The November Post-Election Audits

From the Secretary of the State’s Press Release, December 12th, 2008, headlined:  BYSIEWICZ: RESULTS OF POST ELECTION AUDIT SHOW ACCURATE ELECTION DAY MACHINE COUNTS:

Secretary of the State Susan Bysiewicz announced today that post election audits conducted in 10% of all voting precincts in Connecticut have shown extremely accurate machine counts on Election Day November 4, 2008. An initial review of the audit results has been completed by the Office of the Secretary of State, and the results will now undergo a complete, independent analysis by the University of Connecticut’s Voting Technology Research Center under the direction of Dr. Alexander Shvartsman.

“We set a record in Connecticut on November 4th with 1.64 million people casting ballots and Election Day went remarkably smoothly,” said Secretary Bysiewicz. “The results of this audit indicate, once again, that the optical scan voting system is secure and extremely accurate. Connecticut voters can be confident in the integrity of our elections and that their votes were counted correctly. Still, I’m not asking anyone to simply take my word for it: that’s why these post-election procedures are so important. We want to shine the light on the electoral process, before and after all votes are cast…

While the audits did uncover accurate machine counts on Election Day, there were discrepancies in isolated cases involving the hand-count audits for some ballots marked with votes for major party candidates who were cross endorsed by minor parties.

From the Connecticut Citizen Election Audit Coalition Report, January 28, 2009 (I was the lead author of this report):

In this report, we conclude, based on our observations and analysis of audit reports submitted to the Secretary of the State that the November post-election audits still do not inspire confidence because of the continued lack of

* standards,
* detailed guidance for counting procedures, and
* consistency, reliability, and transparency in the conduct of the audit.

We also note continuing failures to follow audit and chain-of-custody procedures.

Among our greatest concerns are the discrepancies between machine counts and hand-counts reported to the Secretary of the State by several municipalities. In many cases, these discrepancies are not thoroughly and reasonably explained. We believe that the ad-hoc counting procedures used by many municipalities were not sufficient to count ballots accurately and efficiently.

  • The Coalition report highlighted many discrepancies in votes and ballot counts that exceeded questionable ballot counts and that did not involve cross-endorsed candidates <read>

From the UConn Post-Election Audit Report, May 12 2009:

The VoTeR Center’s initial review of audit reports prepared by the towns revealed a number of returns with unacceptably high unexplained differences between hand and machine counts…As a result the [Secretary of the State’s] Office performed additional information-gathering and investigation and, in some cases, conducted independent hand counting of ballots…

The main conclusion in this report is that for all cases where non-trivial discrepancies were originally reported, it was determined that hand counting errors or vote misallocation were the causes.  No discrepancies in these cases were reported to be attributable to machine tabulation.  For the original data where no follow up investigation was performed, the discrepancies were small, in particular the average reported discrepancy is lower than the number of votes that were determined to be questionable…

The main conclusion of this analysis is that the hand counting remains an error prone activity. In order to enable a more precise analysis it is recommended that the hand counting precision is substantially improved in future audits. The completeness of the audit reports also needs to be addressed.

This analysis does not include 42 records (3.2% of 1311 [candidate race counts]) that were found to be incomplete, unusable or obviously incorrect. This is an improvement relative to the November 2007 elections.

The Secretary of the State and her Office are rightfully proud of proposing the audit to the Legislature in 2007. Based on the municipal reports from November, we asked for public follow-up investigations of the initial audit results, at least starting with the largest and most blatant discrepancies and incomplete forms. We appreciate that follow-up of the largest discrepancies was initiated. Yet, we are disappointed that the investigations were not open to public observation and that all incomplete forms were not investigated.

Our comments and concerns:

  • The investigations prove that Election Officials in many Connecticut municipalities are not yet able to count votes accurately. As we have noted, we appreciate that the largest discrepancies were investigated; we asked for that as a minimum. Yet without reliable counting, initially or via follow-up, we find no reason to agree that the audits prove the machines in Connecticut were “extremely accurate”. Reports in several other states show that officials and machines can count quite accurately. Some audits and hand recounts occasionally show that initial election-night results produced by people and machines are inaccurate because of human and machine errors — that is what is supposed to happen, exactly what audits are designed to detect. (We plan a detailed post discussing counting accuracy and reasonable expectations in the near term)
  • The audit and the audit report are incomplete. The report “does not include 42 records (3.2% of 1311) that were found to be incomplete, unusable, or obviously incorrect”. When we overlook obvious errors, then in future elections creating obvious errors becomes another route to avoiding detection of errors or fraud: if the counts don’t match, just don’t report the result.
  • Even with all the investigations and adjustments, we have many unexplained discrepancies. The largest discrepancies were investigated, leaving 98 cases of discrepancies greater than 4 and 34 cases greater than 9; put another way, 68 with discrepancies over 2%, including 31 with discrepancies over 5% of the vote (Table 1 and Table 2 of the UConn report). We are reminded of Ohio in the same November 2008 election, where a discrepancy of 5 ballots was a matter of serious national concern. With effective initial counting, there should be a small number of counts requiring investigation.
  • The Chain-of-Custody is critical to credibility. Even some of the originally reported data that closely matched machine totals lacks credibility, based on lapses in the chain-of-custody of ballots prior to the initial municipal counting. In several cases the ballots were not resealed after the initial audit counts, and are thus less than fully credible. Once again, we do not have any reason to suspect errors or fraud; we simply point out the failures to follow procedures, the holes in credibility, and the openings for covering up errors and fraud in elections.
  • The entire audit process should be open to observation. We do not doubt the hard work and integrity of the Secretary of the State’s election staff of seven, several of whom recounted ballots and performed research in a number of towns. If the recounting and field research had been open to the public, we might be in a position to vouch for the integrity of the process.  We, not the public, were informed of the initial “site visits” but our request that they be open to the public, or at least open to us, was to no avail.  If any critical part of the audit is performed out of public view, it leaves questions for the public and opens up another avenue for fraud or for errors to be covered up.
  • Either “questionable ballot” classification is inaccurate in many towns or we have a “system problem”. Based on our analysis of the audit results and our observations during 18 post-election audits, we find that election officials classified far more votes as questionable than necessary. In most cases only a very small number, 1 or 2 votes per candidate, are filled out poorly enough not to be counted by the machine, yet several towns classified large numbers as questionable. This is a problem because it opens a hole for real problems to go undetected: when 10% of ballots are incorrectly classified as questionable, a 10% undercount for a candidate could go unrecognized.
    Conversely, if the officials are reasonably classifying questionable ballots, then we have a system that 5%, 10%, or 25% of voters in several towns are actually unable to use properly. That would be a serious “system problem” that needs to be addressed – by better systems, not by smarter voters. (If we actually have such a “system problem” then we have a very poor ballot layout, much worse than other states, much worse than the legendary “butterfly ballot” in Florida 2000.)
  • Accuracy and the appearance of objectivity are important. We disagree with the Secretary of the State’s initial assessment on December 12, when all the data from the municipalities was available but no UConn report was. Secretary Bysiewicz said the audits “have shown extremely accurate machine counts”. While that may actually be the case, the accuracy of all scanners in the audit cannot be proven based on the data available now, and even less so based on the data available in December.
  • Timeliness is important. We have reports and follow-up long after the election: the election was November 4th, the audit was complete in early December, the Presidential electors were certified on December 14th, the initial “site visits” began on January 19th and were complete by January 23rd, the data from the initial “site visits” was sent to UConn on February 18th, and further follow-up data on April 3, 2009. The report is dated May 12, 2009 — over six months after the election. The longer the delay, the colder the trail of evidence, the more opportunity for cover-up, and the less valuable the data.
    When the Presidential electors were certified on December 14th, there were huge obvious discrepancies.  What if a race for President, a U.S. Representative, or State Legislator was close? Would there have been a swifter response even though the machines were declared “extremely accurate” at that point?

Our bottom line:

  • The problem is not that there were machine problems; we have no evidence there were any. The problem is that if there are or ever were, dismissing all errors as human counting errors makes us unlikely to find them. In this audit, the worst discrepancies were investigated, which showed that many discrepancies in these audits were human errors — not surprising, based on observation of inadequate counting. However, a sample does not prove all discrepancies are human errors.
  • We stand by our recommendations and the recommendations of other groups for what is required to have effective, credible audits.  CTVotersCount Petition 2009, Coalition Recommendations, Principles and Best Practices for Post-Election Audits, and The League Of Women Voters Report on Election Audits.
  • The current Audit Process in Connecticut demonstrates the need for audits to be independent and focused on election integrity, not just machine reliability. Even with hard work and high integrity, the appearance of integrity is questionable when the Election Officials and the State’s Chief Election Official are responsible for both the audit and the election. We know of no other area of business or government where something labeled an “audit” is this far from independent. If the purpose of the audit were just to check that the machines can work as certified, and not to uncover and rectify instances of error and fraud, then improved audit execution and tweaks of the current law might suffice. Yet, for the audits to provide election integrity we need a credible audit that is completed in time to adjust election results, that includes an audit of the entire process, and that subjects all ballots and machines to selection for audit. The current law has too many exemptions, which all represent opportunities for error or fraud to go undetected.

Secretary Bysiewicz supported independent audits in 2008 when they failed to pass the Connecticut Legislature.  She reiterated her commitment to independent audits and having a completely open process in a letter to CTVotersCount petition signers earlier this year, sent on January 23rd:

We supported the creation of an Independent Audit Board last legislative session as the next step in improving the administration of audits in our state. We also supported a bill that would require one hundred percent (100%) testing of memory cards prior to their use. Of course, we will continue to support that type of refinement to our current process.

Our audit law requires that audits be conducted in public. I have long been a strong advocate of openness and transparency in government. We will advocate for the types of improvements suggested in the petition (e.g. that all audit activities should occur in public)...

We will work very hard to maintain the gains we have made against any effort to pull away from these basic security measures, like audits, on the ground that we face financial challenges in the state and local level.  Like you, I strongly believe that such measures are critical to maintaining public confidence in our electoral process and constitute a small price to pay for ensuring that our elections function properly.

Nov 08 Election Audit Reports – Part 1 – Bad Cards, Procedural Lapses Continue

This week the University of Connecticut VoTeR Center released reports on post-election audits and memory card testing for the November 2008 election. Today we will highlight and comment on the Memory Card Report.

We should all applaud the unique memory card testing program, yet we must also act aggressively to close the gaps it continues to expose.

Introduction

This week the University of Connecticut (UConn) VoTeR Center released reports on post-election audits and memory card testing for the November 2008 election. These reports were announced by a press release from the Secretary of the State, Susan Bysiewicz. <Press Release> <Post-Election Memory Card Report> <Post-Election Audit Report>. Today we will highlight and comment on the Memory Card Report.  In Part 2 we will highlight and comment on the Post-Election Audit Report.

We should all applaud the unique memory card testing program, yet we must also act aggressively to close the gaps it continues to expose.

Summary

From the press release:

“My office entered into this historic partnership with the University of Connecticut VoTeR Center so that we could receive an independent, unbiased accounting of Connecticut’s optical scan voting machines,” said Bysiewicz. “The results of these two studies confirm that numbers tallied by the optical scanners were remarkably accurate on Election Day November 4, 2008. Voters should feel confident that their votes were secure and accurately counted.”

From the Post-Election Memory Card Audit Report:

In summary: (1) all cards used in the election were properly programmed, (2) cards with junk data continue to be a problem, and additional analysis is in progress to determine the cause, (3) a number of cards show that the pre-election procedures are not followed uniformly and that cards continue to be duplicated; we recommend a stronger policy statement on handling the cards before and during the election and disallowing memory card duplication.

The Secretary of the State, her Office, and UConn are rightfully proud of initiating the audit in 2008 and instituting the unique memory card testing program. We recognize and appreciate that everyone works hard on these programs, performing the audits, and creating these reports, including the Registrars, Secretary of the State’s staff, and UConn. We also welcome Secretary Bysiewicz’s commitment to solve the problems identified:

From the Press Release:

“Overall, I’m pleased that our first pre- and post-testing procedures with UConn demonstrate the security of our office’s chain of custody practices with election officials,” said Bysiewicz. “However, the percentage of unreadable cards is still too high and we await UConn’s forthcoming investigation into possible causes and recommended solutions for guidance on this issue. In the interim we will provide additional training to local election officials to make sure regulations concerning the handling and security of memory cards used by the optical scanners are uniformly followed throughout the State of Connecticut.”

From the Secretary of the State’s May 14th Newsletter:

Moving forward, my office will continue to improve the training we give to Registrars of Voters and local election officials to reduce any further errors in counting. We will also start new training within weeks to improve the security of memory cards used by the optical scanners to record votes on Election Day.

Our comments and concerns:

  • This is not a random audit of memory cards. We continue to applaud this unique memory card testing program, yet it covers a registrar-selected set of memory cards – not exhaustive, not a random sample. Only 297 cards used in the election were tested, out of 833 districts. This opens a huge hole for covering up errors and fraud – just don’t send in your card. It also biases any statistics, one way or another, based on which cards tend to be sent to UConn.
  • A 9% memory card failure rate is bad enough, but is it the actual rate? 9% of all cards sent to UConn had memory problems. These were all classified as cards not used in the election; if so, the rate among those cards would be 41/142, or 29%. We wonder if many bad cards are found in the process of testing and not used in the election, and thus not counted in the audit. Or could some of these cards have worked in the election and failed subsequently? Bottom line: we don’t know the actual failure rate since we don’t have a random sample. The 9% is within the range of previous UConn pre- and post-election tests <all UConn Reports> <Our Past Commentary>
  • There is a serious failure of officials to follow procedures. A rate of 34%, or 144 failures to follow procedures on 421 non-junk-data cards, is a serious, pervasive problem (52 not set for election, 20 results print aborted, 2 set for election with zero counters, 41 duplication events, 29 zero totals printed before the date of the election). In some cases multiple problems may have occurred on the same card, so the number of districts detected as failing to follow procedures is likely a bit less than 34%. (This paragraph has corrected numbers; in an earlier version we had double counted some of the errors.) Once again, this is not a random sample, yet it is a totally unacceptable level of not following procedures. The report correctly suggests these should be addressed through better training and instruction to municipalities. If procedures are necessary, then when they are not followed there is an opportunity for problems to occur. This should cause everyone to wonder to what extent other unaudited election procedures are regularly not followed. Most procedures are in place because they are intended to prevent election day problems, errors, and fraud. In fact, this memory card finding is very consistent with the Audit Coalition reports, which have consistently shown a significant level of failures to follow procedures, for instance the chain-of-custody failures described in the most recent Coalition report <read>
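The sampling bias concern in the first bullet can be made concrete with a small simulation. The numbers below are entirely hypothetical (not from the UConn report): if registrars, even innocently, tend not to send in cards that misbehaved, the observed failure rate understates the true one.

```python
import random

def observed_failure_rate(n_cards, true_bad_rate, withhold_prob_bad,
                          trials=500, seed=7):
    """Monte Carlo sketch: each district's card is bad with probability
    `true_bad_rate`; bad cards are withheld (never sent to the testing lab)
    with probability `withhold_prob_bad`. Returns the average failure
    rate seen among the cards that actually arrive."""
    rng = random.Random(seed)
    rates = []
    for _ in range(trials):
        sent_bad = sent_total = 0
        for _ in range(n_cards):
            bad = rng.random() < true_bad_rate
            if bad and rng.random() < withhold_prob_bad:
                continue  # card quietly kept back
            sent_total += 1
            sent_bad += bad
        rates.append(sent_bad / sent_total)
    return sum(rates) / len(rates)

honest = observed_failure_rate(300, 0.15, 0.0)  # close to the true 15%
biased = observed_failure_rate(300, 0.15, 0.5)  # noticeably lower
```

The point is not the particular percentages but the direction of the bias: a self-selected sample cannot tell us the true failure rate, which is exactly why we cannot trust the 9% figure.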

Our bottom line:

  • A non-random partial post-election audit of memory cards is useful, but it is insufficient. A more rigorous sampling process would yield more accurate information and, just as importantly, it would eliminate the existing opportunity for errors or fraud to be covered up by not sending the cards for testing.  Last year we proposed and the GAE Committee passed 100% pre-election independent testing of memory cards.  We stand by that recommendation to protect the cards from front-end insider fraud and to make it less likely that election officials have to deal with junk data cards.  Post-election random testing or 100% testing of memory cards is also advisable.
  • How many more tests, reports, and elections will it take before the junk data problem is significantly reduced? Ridiculous, unacceptable, and unconscionable come to mind to describe the junk data problem. A rate of 5%, 9%, 20%, or even 1% is way out of line for electronic equipment. Why do we stand for it? What about all the other states that use this exact same technology? Why are they putting up with it?
  • Almost every failure to follow procedures is an opportunity to cause problems, cover up errors, or cover up fraud. Perhaps it is easier to understand human failure to follow procedures exactly, every time. Once again, no matter whether the failure to follow procedures is 5%, 10%, 20%, or 40% in handling memory cards, it points to a likely much higher rate of failure to follow all procedures. How can we have confidence in elections with such a lack of ability or attention to following procedures, many of which are performed outside of public view and outside the purview of audits to discover? We can only hope that the Registrars of Voters will join in the commitment to meet a much higher standard.

************

Related story 5/28:  Diebold memory card problems in Florida — a different model, this time it is high speed wireless cards <read>

A New Approach To Voter Registration?

How about a simple, reliable system similar to the rest of the world?

The United States is one of the few industrialized democracies that place the onus of registration on the voter. In other democracies, the government facilitates voting by taking upon itself the responsibility to build voter rolls of all eligible citizens. Even in the United States, voter-initiated registration did not exist until the late nineteenth century. It was instituted then in many states with the intention of suppressing unpopular voters, especially former slaves and new European immigrants, and it continues to disenfranchise many Americans to this day.

The Brennan Center for Justice has produced a “Policy Summary”, Universal Voter Registration, <Summary> <.pdf>

Too often, when it comes to our election system, policymaking has devolved into partisan wrangling or become bogged down in arcane technicalities.

Today we have the opportunity for a major breakthrough for effective democracy. The 2008 election saw a record number of new voters. New election technology and the implementation of a recent federal law in the states make it possible to overcome the challenges with our voter registration system—the single greatest cause of voting problems in the United States. We can now truly modernize the voter registration process by upgrading to a system of universal voter registration—a system where all eligible citizens are able to vote because the government has taken the steps to make it possible for them to be on the voter rolls, permanently. Citizens must take responsibility to vote, but government should do its part by clearing away obstacles to their full participation. The current voter registration system—which is governed by a dizzying array of rules and is susceptible to error and manipulation—is the largest source of such obstacles…

The next Congress can substantially speed up the process by:

  • Establishing a national mandate for universal voter registration within each state;
  • Providing federal funds for states taking steps toward universal voter registration;
  • Requiring “permanent voter registration” systems, so that once voters are registered, they will stay on the rolls when they move; and
  • Requiring fail-safe procedures, so that eligible voters whose names do not appear on the voter rolls or whose information is not up to date can correct the rolls and vote on the same day.

An attractive idea that claims the magical combination of increasing participation in democracy, fairness, and integrity, while saving everyone money and reducing frustration for officials and the public. We will put this in our “Conditionally For” column, because we would have to see the details of any proposal and the comments and evaluation of computer scientists, security experts, election officials, and activists.

League Of Women Voters: Report On Election Auditing

Includes many of the same recommendations in the Principles and Best Practices for Post-Election Audits and expands the scope to include auditing of the whole election process: <web intro> <Report .pdf>

This report consists of four key parts: Recommended Guidelines for Election Audits, Criteria for an Election Auditing Law, Glossary of Election Audits Terminology, and Election Audits Resources. These sections are intended to be used together in their entirety.