
If you have a process for recounting the votes in an election, but the costs are so astronomical that it is implausible it would ever be used, can you really say that you have one?

There were 21,761 ballots cast in Palo Alto in the recently completed election. The Santa Clara County Registrar of Voters (SCC ROV) estimated a manual recount would cost $177,000, or $8.13 per ballot processed. Simply running the ballots through the scanners again was estimated to cost $10,000 ($0.46/ballot). (foot#1) (foot#2) With the exception of a few ballots cast using the touchscreen system (primarily for voters with special needs), ballots are counted with an optical scanning system that is commonly characterized (Web research) as having an “inherent error rate” of 0.5% (more in the Appendix). For this recent election, this works out to over 100 errors. In the SCC ROV’s audit of 1% of the precincts county-wide, they found a much lower rate: less than one discrepancy per precinct. Palo Alto has 41 precincts of widely varying sizes, and this averages out to an error rate under 0.2%, and fewer than 40 votes. While this inherent error rate is smaller than the margins in both the Palo Alto City Council and School Board contests, it is larger than the margins in several nearby elections.
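For those who want to check the arithmetic, here is a quick sketch using only the figures cited above (no new data):

```python
# Quick check of the figures cited above (SCC ROV estimates plus the
# Web-reported 0.5% "inherent error rate").
ballots = 21_761

print(f"Manual recount: ${177_000 / ballots:.2f}/ballot")    # ~$8.13
print(f"Re-scan:        ${10_000 / ballots:.2f}/ballot")     # ~$0.46

print(f"Errors implied by 0.5% rate: {ballots * 0.005:.0f}")  # ~109

# Audit-derived bound: fewer than 1 discrepancy per precinct x 41 precincts
print(f"Audit-implied rate: < {41 / ballots:.2%}")            # < 0.19%
```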

Confidence in election results is crucial for democracy, and is undermined if a substantial portion of the electorate comes to believe that the declared winner wasn’t the actual winner; consider, for example, what happened in 2000 in Florida in the Bush-Gore Presidential election.(foot#3)

—-Example of concern—-

Note: I am using this example because it is one for which I have the data and an analysis that will probably register with most readers. The anomaly is centered on a single day’s count, but I haven’t been able to tease out of the data either a satisfying explanation of why it happened or anything that points to a significant error in the count. It is this ambiguity that makes this a useful (motivating?) example.

I became aware of the impracticality of a recount as part of the recent election for City Council. There was a small margin between Cory Wolbach and Lydia Kou for the fifth seat, and that margin dramatically changed on Friday, November 7. The ballots being counted on that day were a significant portion of the Vote-by-Mail ballots that had been received on Election Day, either through the mail or handed in at precincts and other drop-off locations. Some of this category of ballots had already been counted on previous days, and more were yet to be counted (mine wasn’t counted until November 8 or 9).(foot#4) At the beginning of that day’s work, the SCC ROV had already counted 69% of the Palo Alto ballots (14,942 of 21,761), and they counted 19% more (4043) that day.(foot#5) While the ballots counted were not uniformly spread over the precincts, the distribution was broad enough that it seemed unlikely that particular sections of the city that favored particular candidates were over-represented to the degree that would account for the shift. So one would expect that day’s count to roughly follow the established percentages for the candidates.

Instead, this day saw a sharp surge of votes for Wolbach, who got 12% more votes than the count extrapolated from the established percentages. Combined with Kou receiving 4% fewer than extrapolated, there was a shift from Kou leading by 38 votes to Wolbach leading by 201. This 239-vote change in the margin corresponded to 6% of the ballots counted that day (with an average of 3.55 votes per ballot). The charts above show just how much variability there was on this day. Recognize that the ballots counted on the previous and subsequent days were in this same category of ballot, and had much less variability (at a level that I find unsurprising).
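For those interested in the mechanics, here is a minimal sketch of the extrapolation, assuming the simple model that a day’s count should mirror the shares already established. The per-candidate counts below are illustrative placeholders chosen to reproduce a 12% surge; the actual tallies are in the workbook linked below:

```python
def day_deviation(prior_votes, prior_total, day_votes, day_total):
    """Percent deviation of a candidate's actual day count from the
    count extrapolated from their already-established share."""
    expected = (prior_votes / prior_total) * day_total
    return 100 * (day_votes - expected) / expected

# Placeholder per-candidate counts, for illustration only.
dev = day_deviation(prior_votes=5000, prior_total=14_942,
                    day_votes=1515, day_total=4043)
print(f"{dev:+.1f}%")   # ~+12%
```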

If you want to further explore the data, the Excel workbook (.XLSX, opens in Excel Online) that generated these charts is available (it is a big file, 700KB).
Note: Please don’t ask me for interpretation of this data: I don’t have the data from previous elections nor the expertise to comment on non-obvious aspects. My role was to do enough analysis to see if it warranted, and could get, attention from someone with that expertise.

—-Exploring the Options—-
So, was this variability simply unusual, or indicative of a problem in the vote count? What sort of problem might it be? What ways are there to identify and localize such problems? How much would it cost to confirm/refute?

I was part of a group from the Kou campaign that met with the Assistant Registrar of Voters to try to get answers to these questions. He was quite generous with his time (this was only shortly after the crunch of ballot counting), and he gave good explanations and was responsive to our questions. However, the answers themselves were disappointing.

The basic problem was that the smallest legally permissible unit that can be recounted is the ballots from a specified precinct. Suppose you suspect that there was a problem with one of the counting machines during one particular day. Because there can’t legally be a recount of just those ballots, the system isn’t set up to track the ballots in a way that would make this logistically possible (the lack of tracking goes well beyond what is needed to preserve the secret ballot). As the ballots move through the steps of the verification and counting process, they can be split up and combined into different groupings. Because the technology allows portions of multiple precincts to be combined in a single batch for counting, you see the count for each precinct fragmented over multiple days. With small fragments, it becomes very hard to distinguish normal variation from anomalies. (The SCC ROV tries to have fewer than 1000 registered voters in a precinct; with the turnout in the recent election being 59%, a day on which 20% of a precinct’s ballots were counted involves fewer than 120 ballots.) One member of our group used the analogy of a spreadsheet where the only way to double-check the data down one column was to use the totals from across the rows.
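A rough way to see why such small fragments are so noisy: if each ballot in a roughly 118-ballot fragment is treated as an independent draw (a deliberate simplification), ordinary sampling variation alone is large relative to the shifts one is looking for:

```python
import math

def one_sigma_swing(n_ballots, share=0.5):
    """One-standard-deviation swing, in percentage points, of a
    candidate's share in a fragment of n ballots (binomial model)."""
    sd_votes = math.sqrt(n_ballots * share * (1 - share))
    return 100 * sd_votes / n_ballots

# ~1000 registered voters x 59% turnout x 20% counted that day
fragment = round(1000 * 0.59 * 0.20)                        # 118 ballots
print(fragment, f"{one_sigma_swing(fragment):.1f} points")  # ~4.6
```

So a candidate’s share in such a fragment routinely wanders by 4-5 percentage points from pure chance, swamping the kind of anomaly one would hope to spot.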

Once we learned the constraints on what could be recounted, the costs of a recount made the remaining questions moot.

Aside: I was surprised that the SCC ROV didn’t use a deck of PowerPoint slides for our meeting. I would have expected that they would have gotten an initial deck from the vendor when they purchased the system in 2003, and that deck would have evolved over the years. For example, I would have expected it to be needed for presentations to others considering a recount, to the County Supervisors and others involved in oversight, to local students, and to academics researching the integrity of the various voting systems. However, he said that most people asking about the process want to see a demo.

—-Conclusion/Musings—-
We need to have a balloting system that provides for viable recounts. It is unacceptable to have a situation where people have strong suspicions about the validity of the count (and the results), but can’t resolve those suspicions because of the astronomical costs: $177,000 to recount an election where a candidate spent $25-35,000 on the campaign itself. I fully expect commenters below to describe various systems for doing a better count/recount, for example, the Trachtenberg Election Verification System (TEVS); however, I don’t know enough to comment on them.

One thing people need to recognize is that all vote counting systems have an inherent error rate, and that includes when humans are counting ballots (humans spot errors that machines make, but tend to make far more of their own). With the exception of systems that ask the voter to confirm their choices before recording the votes, there is the question of what is and isn’t a vote (Is that thin faint line intended as a vote, or is it just a stray mark?). This can vary not only between runs on mechanical systems, but also between counts by different people.

It is impossible to eliminate a margin of error in the results; it can only be reduced. Most people I know find this unacceptable: they expect the election process to produce definite winners. And a big part of me wants to agree with that expectation, even though I know it is impractical. However, there already are elections where the results are a tie, that is, two candidates get exactly the same number of votes. These ties are resolved by a coin toss or by cutting cards or… Maybe we need to extend the notion of a tie to include results that fall within the margin of error, and accept that the distribution of “errors” within the counting process is the tie-breaker, that is, an (implicit) substitute for the explicit coin toss.
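As a thought experiment only (this is not anything in the Election Code), such an extended notion of a tie might look like the following; the 0.5% figure is the inherent-error rate cited earlier:

```python
def classify_result(margin_votes, total_votes, error_rate=0.005):
    """Treat any margin within the counting system's inherent error
    band as a statistical tie (a thought experiment, not current law)."""
    if margin_votes > error_rate * total_votes:
        return "decisive"
    # Within the error band, the noise in the count has in effect
    # already tossed the coin: the recorded leader takes the seat.
    return "statistical tie"

# ~77,000 votes recorded in the Council contest (21,761 ballots x 3.55)
print(classify_result(margin_votes=239, total_votes=77_000))  # statistical tie
```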

—-Appendix: Ignorance is bliss, or It is not an error until someone spots it—-
“God, grant me the serenity to accept the things I cannot change,
The courage to change the things I can,
And the wisdom to know the difference.” (The Serenity Prayer)
This applies not just to individuals, but also to organizational dynamics (“Corporations are people too”, as Mitt Romney put it).

I was repeatedly reminded of this during my years working in computer security, especially in a start-up circa 2000. Various IT managers confessed that although they were all too aware that their companies were not just vulnerable but actively under attack, they would not buy various products (including the start-up’s). Getting better alerts and details about individual instances of attacks was at best useless if they didn’t have the budget to do anything about it, and they told us that they had repeatedly been unsuccessful in getting that budget. Consequently, having such improved capabilities would simply have raised the visibility of the problem (and potential liability), and generated demands from the higher-ups (aka pointy-haired bosses) that they “Do something” or “Do more with less” or… As one bluntly put it: “If I install this product, I get fired this year. If I don’t, it will probably be several years before we get hit hard, and hopefully by then I will have already moved on” (paraphrase).

From the descriptions given by the SCC ROV, they seem to be caught “between a rock and a hard place” (although they gave no hint of realizing this). On one side is the California Election Code, which doesn’t seem to have kept up with changes in voting practices. First, it was created in a time when virtually everyone voted in person at a precinct and Absentee/Vote-by-Mail ballots were few. The Code has been patched to allow broader Vote-by-Mail, but doesn’t seem to have been re-thought to take into account all the changes in how ballots are processed when the vast majority of ballots are Vote-By-Mail.

Second, the California Election Code updates don’t seem to have taken into account the “inherent error rate” of the technologies used to count votes, despite this having been a prominent concern since the 2000 Florida count/recount. Some other jurisdictions have automatic recounts when the vote margin is small enough to fall into this range. (foot#6) There is a proposal by Assemblyman Kevin Mullin (D-South San Francisco) to have an automatic recount when the vote margin is less than 0.1% (Aside: the margin between Wolbach and Kou was 0.18% of the votes recorded).

—-Appendix: Process Errors—-
The ROV’s mandatory 1% audit is intended to catch mistakes in handling the ballots, such as a batch of ballots being misplaced or otherwise not counted. A recurrent situation across the country is that the elections staff has failed to update the written procedures after discovering a problem in the process because “We will remember”, but that corporate memory gets lost due to retirement, resignation, illness… For example, in another county, the election staff discovered that their software discarded the results from the first precinct counted, but couldn’t get the software corrected (for a reason unknown to me). So their remedy was to prepare a dummy precinct (one ballot with zero votes) and feed it first into the system. This worked perfectly until there was a staff change…

There was special concern about this sort of situation in this election here in Santa Clara County because the SCC ROV’s head of information technology (Joseph Le) resigned abruptly just before the election (Note: there were many different stories swirling around about this resignation). The SCC ROV requested that the California Secretary of State (responsible for elections) audit the results, but was refused.

—-Appendix: Mechanical Errors—-
The SCC ROV uses the Sequoia Voting Systems’ model 400C optical scanning system. My web research found a multitude of citations of this category of system as having an “inherent error rate” of 0.5%, but I was unable to find an adequate definition of that term, much less how the value was derived. I long ago learned not to assume that I knew what someone else means by “error rate”, much less “inherent”. For example, consider a system that processes 1000 ballots and fails to count 10 votes (false negatives) while counting 5 votes that weren’t there (false positives). The error rate is 1.5% when computed on a per-ballot comparison, but only 0.5% when it is just the difference between the system’s total and the correct total (because the false positives offset half the false negatives). Similarly, if a ballot gets mishandled, is that counted as one error, or as one error for each vote that could have been cast on that ballot?
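The two accountings in that example can be made concrete with a few lines of arithmetic:

```python
def error_rates(ballots, false_negatives, false_positives):
    """Two different 'error rates' from the same underlying mistakes:
    per-comparison counts every mistake; net lets them cancel."""
    per_comparison = (false_negatives + false_positives) / ballots
    net = abs(false_negatives - false_positives) / ballots
    return per_comparison, net

per_comp, net = error_rates(ballots=1000, false_negatives=10,
                            false_positives=5)
print(f"per-comparison: {per_comp:.1%}")   # 1.5%
print(f"net difference: {net:.1%}")        # 0.5%
```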

Then there is the question of what should count as a vote. The long-established convention is that it is what a human can determine as “voter intent” (this was a major factor in the controversy surrounding the 2000 Florida recount). An example from the SCC ROV: the voter uses a specialty ink containing sparkles, and those sparkles reflect enough light back to the scanner that it doesn’t see the mark, though the human eye easily sees it. Another example: the voter uses an ink that gets smudged, not giving enough contrast in the target zone for the scanner to see it as a mark.

Another potential source of error is material on the ballots smudging either the scanning plate or the ballot itself. This happened to the SCC ROV in 2010 when the printer for the ballots used an inappropriate ink: the smudging occurred only under the pressures encountered in mechanical processing, and thus had escaped detection during manual examination and use. Darker smudges can create false positives, while fainter smudges can create false negatives (by reducing the contrast below the threshold needed to register as a vote). For smudges on the scanner plate, the SCC ROV follows the equipment vendor’s recommendations for how often to clean the scanning plate (and other maintenance), but given the history of the voting machine companies, I would be highly skeptical about the diligence they put into creating those recommendations. Recognize that it would likely be hard for an individual ROV to test the recommendations for an auto-feed scanner: the passage of the ballots themselves does a certain amount of cleaning of the scanning plate, and thus transient smudges that affect results may go unrecognized.

Although the SCC ROV does a range of quality control checks on the operation of their system, several checks that seemed obvious to me were not being done. For example, active monitoring of the over-vote rate (and to a lesser extent, the under-vote rate). (foot#7) Too large a variation could serve as an alert to check the system for malfunctions or other problems (a sketch of such monitoring follows below). I was surprised by this because this type of active monitoring was already common back in the 1980s and early 1990s when I was working on decision aids for manufacturing and quality assurance engineers. The SCC ROV’s system was designed in the 1990s and purchased by them in 2003.

Recognize that the SCC ROV is constrained in its choice of system to ones that have been approved by the California Secretary of State. My understanding is that the ability of an individual ROV to customize the software is largely precluded by the vendor’s license and by the extensive testing needed to obtain approval (although reports of vendors installing unapproved patches, including just before elections, abound from around the country).
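Here is a simplified sketch of the kind of batch-level monitoring I have in mind. It assumes per-batch over-vote counts were available, which, per footnote 7, they currently are not; the data is made up:

```python
import statistics

def flag_anomalous_batches(overvotes, ballots, z_threshold=2.5):
    """Flag batches whose over-vote rate is an outlier relative to
    the other batches (a basic process-control style check)."""
    rates = [o / b for o, b in zip(overvotes, ballots)]
    mean, sd = statistics.mean(rates), statistics.stdev(rates)
    return [i for i, r in enumerate(rates)
            if sd > 0 and abs(r - mean) / sd > z_threshold]

# Hypothetical data: batch 6 has a suspiciously high over-vote rate.
overvotes = [4, 5, 3, 6, 5, 4, 40, 5, 6, 4]
ballots = [900] * 10
print(flag_anomalous_batches(overvotes, ballots))   # [6]
```

A real implementation would use a robust (median-based) statistic so the outlier doesn’t inflate its own yardstick, but this conveys the idea.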

Another interesting issue is how persistent the errors (false readings) are. For example, one would expect the problem reading ink containing sparkles to happen consistently: on repeated runs through the same scanner and on runs through other scanners of the same model. However, problems from smudges on the scanner plate are unlikely to persist between runs or across scanners. This is an important distinction for someone requesting a recount: if most of the likely errors are transient, you might go with re-running the ballots through the scanners. Although this is still expensive, it is not the astronomical expense of a manual recount needed to find the persistent errors.

—-Appendix: The mandatory 1% audit—-
After the vote count has been completed, the ROV performs an audit to certify the election. This is commonly called the 1% audit because it involves a manual recount of the ballots in a randomly selected 1% of the precincts. This partial recount is intended to detect a variety of errors (“logic and accuracy test”). To detect problems such as misconfigured software that assigns one candidate’s votes to another, all contests that weren’t covered by the initial precincts selected are checked by a random selection of a precinct for that contest.
Aside: The Palo Alto precinct selected was 2118, which is east of Middlefield Road between E. Meadow and Charleston.

This audit also checks the processes involved in handling the ballots all the way from the precincts through the counting process.
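A toy sketch of the selection logic as I understand it (the precinct IDs and contest coverage below are made up for illustration):

```python
import math
import random

def select_audit_precincts(precinct_contests):
    """Pick 1% of precincts at random, then add a random precinct for
    each contest not already covered (my reading of the procedure)."""
    precincts = list(precinct_contests)
    n = max(1, math.ceil(0.01 * len(precincts)))
    chosen = set(random.sample(precincts, n))
    for contest in set().union(*precinct_contests.values()):
        if not any(contest in precinct_contests[p] for p in chosen):
            chosen.add(random.choice(
                [p for p, cs in precinct_contests.items() if contest in cs]))
    return chosen

# Made-up example: three precincts, three contests.
print(select_audit_precincts({
    "2118": {"Council", "School Board"},
    "2108": {"Council", "Measure B"},
    "2101": {"Council"},
}))
```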

—-Appendix: Miscellaneous info—-
About 2-3% of the ballots returned were challenged, most for having been delivered in the mail after the deadline (there is a move in the state legislature to change the rule on Vote-By-Mail ballots to use the postmark date instead of requiring receipt by the ROV by 8pm on Election Day).

If the ROV receives a Vote-by-Mail ballot without the required signature far enough before Election Day, they return it to the voter for resubmission. However, such ballots received on Election Day have to be disqualified, because the Election Code specifies receipt of the completed ballot, that is, with a valid signature.

Questions about the signatures on the ballot envelope are resolved by giving the voter the benefit of the doubt.

For provisional ballots, there is a 10-20% rejection rate, for example, because the voter is registered elsewhere or not registered at all. If a voter has moved within the same county without updating their voter registration, the ROV has the capability to count some of the votes on the ballot (for example, state-wide offices and propositions).

—- Footnotes —-

1. Manual recounts for multi-seat elections, such as the 5-seat, 12-candidate City Council contest, are expensive because of the difficulty of manually producing an accurate count. It is slow because of the care needed to count and record each ballot correctly, and because batches must be recounted when errors in the count are detected.

2. The party requesting a recount has to pay estimated costs up-front on a day-by-day basis, but gets billed for actual incurred costs. The SCC ROV’s advice was to use $8.50 per ballot as the fund-raising target for a potential recount.

3. Bush-Gore results: An independent unofficial recount that took years found that Gore would have lost the recount that he had requested, but would have won if he had requested a recount of the whole state (or of a different set of counties/precincts).

4. When was my/your ballot counted? I was using the SCC ROV website’s “Track Vote By Mail Ballot” tool each night to see when my ballot was counted. It showed up as Received and Counted on Sunday, November 9, but was listed as not yet received on the previous evening. The same was true for neighbors who were tracking their ballots. However, the SCC ROV’s spreadsheet showing details of ballots counted shows none for my precinct (2108) on that day. I am assuming that there was simply a delay in updating the website’s Track tool.

5. Ballots Cast overstates the number of people voting in the City Council contest: Some people voted only in the state-wide contests (candidates and propositions), some voted for School Board but not Council… A better estimate might come from the votes cast on the City measures, with Measure B (TOT = hotel tax) receiving the most votes (both Yes and No). At the beginning of November 7, there were 14,008 votes for Measure B (93.75% of 14,942 ballots) and 3815 more votes were counted that day (94.4% of 4043 ballots), with a total of 20,287 votes cast (93.23% of 21,761 ballots).
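The percentages can be checked directly:

```python
# Verifying the Measure B percentages cited above.
print(f"{14_008 / 14_942:.2%}")   # 93.75% before November 7
print(f"{3_815 / 4_043:.2%}")     # 94.36% counted that day
print(f"{20_287 / 21_761:.2%}")   # 93.23% final
```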

6. Margin for automatic recount: The choice of this threshold seems to be based on perceptions (politics, public relations), rather than scientific analysis. This is not surprising given the history of resistance to such analyses by the voting machine vendors.

7. An over-vote is when there are too many votes in a particular contest. For example, in the City Council contest, there were 5 seats open so you could vote for 5. However, if you voted for 6, all your votes in that one particular contest would be discarded (because the counting system wouldn’t be able to decide which votes to keep and which to discard). Note: your votes in all the other contests on the ballot are counted.
An under-vote occurs when the ballot contains fewer votes than allowed, for example, voting for only 4 candidates in a contest for 5 seats. It also applies to when you cast no votes in a particular contest.
The SCC ROV’s system doesn’t track either of these metrics, nor does it flag over-votes for later examination (or even spot checking). Under-voting is so common that it is unclear if even spot-checking would be helpful. During the manual recount portion of the 1% audit, over-votes and under-votes are recorded, but my understanding is that they aren’t used except as a mathematical check during that manual count.
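A minimal sketch of the discard rule as described above:

```python
def contest_votes(marks, seats):
    """Apply the over-vote rule: more marks than seats voids that one
    contest; fewer marks (an under-vote) still counts as marked."""
    if marks > seats:
        return 0, "over-vote: all marks in this contest discarded"
    return marks, "under-vote" if marks < seats else "counted"

# A 6th vote in the 5-seat Council contest voids only that contest;
# votes in all other contests on the ballot are unaffected.
print(contest_votes(marks=6, seats=5))
print(contest_votes(marks=4, seats=5))
```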

—-
The Guidelines for comments on this blog are different from those on Town Square Forums. I am attempting to foster more civility and substantive comments by deleting violations of the guidelines.

I am particularly strict about misrepresenting what others have said (me or other commenters). If I judge your comment as likely to provoke a response of “That is not what was said”, don’t be surprised to have it deleted. My primary goal is to avoid unnecessary and undesirable back-and-forth, but such misrepresentations also indicate that the author is unwilling/unable to participate in a meaningful, respectful conversation on the topic.
