2011-07-28 23:48:35 by chort
Unless you're living under a rock, you're aware of the public outrage over the acquittal of Casey Anthony on the most serious charges against her. As is usually the case when someone widely believed to be guilty is not convicted, there are all kinds of demands for new laws, criticisms of the jurors, and so on. Everyone is so concerned with preventing cases from falling through the cracks that they don't stop to think about how well the system actually works in general, and in particular how rare it is for people to be wrongly convicted (rare, but sadly not impossible). It strikes me that this issue is very similar to one I know a lot about.
For well over 12 years I've had jobs dealing with email; not long after I started, spam arrived on the scene. Ever since then I've had to combat it on personal and professional levels. I've been employed by several anti-spam companies over the years and been on the front lines, explaining to customers how the systems worked, why the engines made certain decisions about particular emails, and in general why a computer isn't perfect (nor are humans).
Naturally, every time an individual gets a spam email that a filter judged legitimate, they're baffled that the system couldn't draw the same conclusion they did about the message. Never mind that any two humans diverge enormously in what they believe to be spam; they declare the system flawed and a failure. That is nothing, however, compared with the reaction when a spam filter blocks a message that a human did want to receive. It's not unheard of for an executive to demand a meeting with their anti-spam vendor for an explanation of why a message was blocked and assurance that it will never happen again. In the business we call the former mis-identification (failing to block a "bad" message) a "false negative" and the latter failure (blocking a "good" message) a "false positive."
What's interesting to me is that anti-spam solutions could block more spam than they do, and they could block fewer legitimate messages than they do, but the aggressiveness of the filters is essentially tuned by the market, with purchasers voting with their dollars. Furthermore, most organizations have gone through 2-3 generations of anti-spam solutions, so they've re-voted at least once, if not twice. At the time of this writing, the ballpark "effectiveness" claimed by most anti-spam solutions is between 99% and 99.99%--i.e. of all the actual spam messages, 99% or more are blocked. The rough "accuracy," or false positive frequency across all emails processed, is generally claimed to be anywhere from 1 in 100,000 to 1 in 1,000,000. What this tells me is that users (and purchasers) are overwhelmingly more tolerant of false negatives than of false positives.
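To make that asymmetry concrete, here's a back-of-the-envelope sketch using the claimed rates above. The traffic volume and the 90% spam fraction are hypothetical numbers I've picked for illustration; only the 99% effectiveness and 1-in-100,000 false positive figures come from the text.

```python
# Back-of-the-envelope illustration of the false negative / false positive
# asymmetry. Volume and spam fraction are hypothetical; the 99% effectiveness
# and 1-in-100,000 false positive rate are the low-end figures cited above.

total_messages = 1_000_000        # emails processed in some period (assumed)
spam_fraction = 0.90              # assume 90% of traffic is spam

spam = int(total_messages * spam_fraction)        # 900,000 spam messages
ham = total_messages - spam                       # 100,000 legitimate messages

effectiveness = 0.99              # 99% of spam blocked
fp_rate = 1 / 100_000             # 1 false positive per 100,000 processed

false_negatives = round(spam * (1 - effectiveness))   # spam that gets through
false_positives = round(total_messages * fp_rate)     # good mail wrongly blocked

print(false_negatives)   # 9000 spam messages delivered
print(false_positives)   # 10 legitimate messages blocked
```

At those rates users accept on the order of nine hundred delivered spam messages for every legitimate message lost, which is exactly the tolerance gap the paragraph above describes.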
Unfortunately, due to the way the US criminal justice system works, and the news cycles of the media that cover it, legal false negatives get a lot more press than false positives. There is an uproar if someone goes free when popular opinion holds that they should have been convicted, yet there's very little outrage when someone is imprisoned for a crime they did not commit.
There might not be much we can do about the way media hype cycles work, but we can remember the lessons of spam when we consider whether new laws ought to be passed in an attempt to punish every person who ever commits a bad deed. We need to think very hard about how such laws could be misused, accidentally or intentionally, to punish people who haven't caused anyone serious harm. Don't get caught in the emotional trap of trying to prevent or punish every bad act. Realize that we're better off when a few bad people go free, because it means many innocent people also stay free.
People who think the DHS and TSA are a good idea, I'm looking at you.