You can't go a week anymore without hearing news of yet another massive computer-security breach.
Sometimes it's a business-specific hacking: “Everyone who used a credit or debit card to pay at a given restaurant, retailer or service provider in the past year or so is at risk.”
Sometimes it's a bank-specific hacking: “All clients of this particular financial institution are at risk.”
Then there are the healthcare hackings: “Beware if you sought treatment from any hospital, physician or medical center in this network.”
And the government hackings: “This state's licensed drivers are at risk after hackers broke into the DMV database.” “That state's income taxpayers are at risk.”
Car insurance and security companies have started putting out lists of which makes and models of automobiles are at the greatest risk of hackers hijacking their vital control systems. And as people equip their homes with Internet-connected “smart” devices, they risk hackers taking control of everything from their baby monitors to their HVAC systems.
Global risks
In addition to these relatively “localized” computer-security problems, there are also the occasional gigantic security flaws affecting the entire Internet: the “Heartbleed” flaw discovered last April in the open-source OpenSSL encryption library put almost every password-protected online account at risk. The “Shellshock” flaw discovered just last week in Bash – software widely used on UNIX, Linux and Mac OS X systems – can apparently let attackers run commands of their choosing on any vulnerable machine, most notably web servers that hand incoming requests to Bash.
Of course, there are certainly ways you can reduce your vulnerability to some online security flaws – I, for one, pay with cash in lieu of a credit card anytime I can (though as a practical matter, credit cards are mandatory if you want to go on vacation: renting a hotel room or a car is impossible without one). This cash-only policy is partly to avoid the temptation of spending/charging more than I should, but mainly so I needn't bother getting a new credit card every week after my last one gets compromised in the database hacking du jour.
But unless you completely drop out of modern mainstream society, staying out of all hackable databases simply isn't possible: if you hold a job, pay taxes, have a driver's license or visit a doctor, your personal information is at risk.
Can't patch the holes?
And if you're old enough to have adult, or at least teenage, memories of life before the Internet, you might occasionally grow frustrated enough to wonder: how did we reach the point where pretty much our entire system of business, finance and government is reliant on this constantly insecure network? Can't we patch these security holes and fix the Internet?
Jose Pagliery, writing about “the cybercrime economy” for CNN, suggests the answer to that last question is “no.” Or maybe it's even the wrong question: these massive security flaws aren't a sign that the Internet is broken so much as an indication that the Internet is being used for purposes it was never intended to serve. As Pagliery put it:
The Internet was never meant for this. We use the Internet for banking, business, education and national defense. These things require privacy and the assurance that you are actually who you say you are.
The Internet, as it was designed, offers neither. When the World Wide Web was built 25 years ago, it existed as a channel for physicists to pass research back and forth. It was a small, closed community. The scientists at Stanford trusted the researchers at the University of California, Los Angeles.
In other words: the whole point of the Internet was originally to make it easier to share information – remember the “information superhighway”? – whereas modern “online security” is all about preventing the unauthorized sharing (read: “theft”) of information.
You can make it easier to share something, or you can make that something harder to steal – but try accomplishing both tasks at once, with the same tool, and you've got a problem. And that, in a nutshell, is what's wrong with “Internet security.”
Pagliery notes:
In 2014, it's still standard to send Internet communication in plain text. Anyone could tap into a connection and observe what you're saying. Engineers developed HTTPS nearly 20 years ago to protect conversations by encrypting them -- but major email providers and social media sites are only now enabling this. And sites like Instagram and Reddit still don't use it by default.
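To make that quote concrete, here's a minimal sketch in Python (standard library only; example.com is just a stand-in host) that fetches the same page over plain HTTP and over HTTPS. The code looks nearly identical either way, which is part of the problem – the privacy difference is invisible to the user:

```python
import urllib.request

# Plain HTTP: the request and response cross the network as readable text,
# so anyone on the path (shared Wi-Fi, an ISP, a tapped router) can see them.
plain = urllib.request.urlopen("http://example.com/")

# HTTPS: the same conversation wrapped in TLS encryption. A passive
# eavesdropper can tell which server you contacted, but not what was said.
secure = urllib.request.urlopen("https://example.com/")

print(plain.status, secure.status)  # both succeed; only one was private
```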
Not everyone favors privacy
One problem Pagliery does not mention: in a post-Edward Snowden world, where it's common knowledge that the NSA engages in warrantless monitoring of pretty much all American electronic communications, some members of the American government actively oppose certain forms of Internet security.
When Apple bragged last month about the secure encryption it uses on its iPhone 6, for example, FBI director James Comey said he was “very concerned” about what he considers “companies marketing something expressly to allow people to place themselves beyond the law.”
Yet even assuming encrypted communications win out over law enforcement's desire to read communications at will, that alone wouldn't be enough to make the Internet “secure,” thanks to problems not just in the software itself, but in the very culture that creates it.
Pagliery said: “Software is a hodgepodge of flawed Lego blocks. The big, ugly secret in the world of computer science is that developers don't check their apps closely enough for bugs.”
In the fast-paced world of computer and Internet technology, where anything more than a few years old is most likely obsolete, even professional developers – those getting paid for their efforts – don't have time to properly vet their software. Pagliery notes that the problem's even worse with open-source software (like Linux) made and maintained mostly by unpaid volunteers:
Sometimes, that flawed code becomes widespread. Most of the world relies on open-source software that's built to be shared and maintained by volunteers and used by everyone -- startups, banks, even governments.
There's an illusion of safety. The thinking goes: So many engineers see the code, they're bound to find bugs. Therefore, open-source software is safe, even if no one is directly responsible for reviewing it.
Nope. Last week's Shellshock bug is the perfect example of that flawed thinking. Bash, a program so popular it's been placed on millions of machines worldwide, was found to have a fatal flaw that's more than 20 years old.
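For the record, the canonical Shellshock check really was a one-liner: set an environment variable that looks like a Bash function definition, then smuggle an extra command in after it. Here's a minimal sketch of that check driven from Python (the variable name x is arbitrary, and this assumes a bash binary lives in /bin or /usr/bin):

```python
import subprocess

# Shellshock (CVE-2014-6271): vulnerable versions of Bash kept parsing, and
# executing, whatever followed a function definition in an environment variable.
result = subprocess.run(
    ["bash", "-c", "echo test"],
    env={"x": "() { :;}; echo vulnerable", "PATH": "/bin:/usr/bin"},
    capture_output=True,
    text=True,
)

# A patched Bash prints only "test"; a vulnerable one also prints "vulnerable",
# meaning an attacker-controlled environment variable just executed a command.
print(result.stdout)
```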
So what do we do? We live in a modern society where our allegedly confidential and secure data is stored in and shared by an inherently insecure system – yet abandoning the Internet clearly isn't a feasible option (and few would want to try it, anyway). How do we get out of the corner we've painted ourselves into?
Maybe we can't. Pagliery ended his piece with a quote from Scott Hanselman, a programmer and former college professor living in Oregon, who made this analogy: “It's not Toyota having a recall. It's like tires as a concept have been recalled and someone says, 'Holy crap, tires?! We've been using tires for years!' It's that level of bad.”
You can't go a week anymore without hearing news of yet another massive computer-security breach....