Carry On

Bruce Schneier

Description

Up-to-the-minute observations from a world-famous security expert.

Bruce Schneier is known worldwide as the foremost authority and commentator on every security issue from cyber-terrorism to airport surveillance. This groundbreaking book features more than 160 commentaries on recent events, including the Boston Marathon bombing, the NSA's ubiquitous surveillance programs, Chinese cyber-attacks, the privacy of cloud computing, and how to hack the Papal election. Timely as an Internet news report and always insightful, Schneier explains, debunks, and draws lessons from current events that are valuable for security experts and ordinary citizens alike.

* Bruce Schneier's worldwide reputation as a security guru has earned him more than 250,000 loyal blog and newsletter readers
* This anthology offers Schneier's observations on some of the most timely security issues of our day, including the Boston Marathon bombing, the NSA's Internet surveillance, ongoing aviation security issues, and Chinese cyber-attacks
* It features the author's unique take on issues involving crime, terrorism, spying, privacy, voting, security policy and law, travel security, the psychology and economics of security, and much more
* Previous Schneier books have sold over 500,000 copies

Carry On: Sound Advice from Schneier on Security is packed with information and ideas that are of interest to anyone living in today's insecure world.




Table of Contents

Chapter 1: The Business and Economics of Security

Consolidation: Plague or Progress

Prediction: RSA Conference Will Shrink Like a Punctured Balloon

How to Sell Security

Why Do We Accept Signatures by Fax?

The Pros and Cons of LifeLock

The Problem Is Information Insecurity

Security ROI: Fact or Fiction?

Social Networking Risks

Do You Know Where Your Data Are?

Be Careful When You Come to Put Your Trust in the Clouds

Is Perfect Access Control Possible?

News Media Strategies for Survival for Journalists

Security and Function Creep

Weighing the Risk of Hiring Hackers

Should Enterprises Give In to IT Consumerization at the Expense of Security?

The Vulnerabilities Market and the Future of Security

So You Want to Be a Security Expert

When It Comes to Security, We're Back to Feudalism

You Have No Control Over Security on the Feudal Internet

Chapter 2: Crime, Terrorism, Spying, and War

America's Dilemma: Close Security Holes, or Exploit Them Ourselves

Are Photographers Really a Threat?

CCTV Doesn't Keep Us Safe, Yet the Cameras Are Everywhere

Chinese Cyberattacks: Myth or Menace?

How a Classic Man-in-the-Middle Attack Saved Colombian Hostages

How to Create the Perfect Fake Identity

A Fetishistic Approach to Security Is a Perverse Way to Keep Us Safe

The Seven Habits of Highly Ineffective Terrorists

Why Society Should Pay the True Costs of Security

Why Technology Won't Prevent Identity Theft

Terrorists May Use Google Earth, but Fear Is No Reason to Ban It

Thwarting an Internal Hacker

An Enterprising Criminal Has Spotted a Gap in the Market

We Shouldn't Poison Our Minds with Fear of Bioterrorism

Raising the Cost of Paperwork Errors Will Improve Accuracy

So-Called Cyberattack Was Overblown

Why Framing Your Enemies Is Now Virtually Child's Play

Beyond Security Theater

Cold War Encryption Is Unrealistic in Today's Trenches

Profiling Makes Us Less Safe

Fixing Intelligence Failures

Spy Cameras Won't Make Us Safer

Scanners, Sensors Are Wrong Way to Secure the Subway

Preventing Terrorist Attacks in Crowded Areas

Where Are All the Terrorist Attacks?

Worst-Case Thinking Makes Us Nuts, Not Safe

Threat of “Cyberwar” Has Been Hugely Hyped

Cyberwar and the Future of Cyber Conflict

Why Terror Alert Codes Never Made Sense

Debate Club: An International Cyberwar Treaty Is the Only Way to Stem the Threat

Overreaction and Overly Specific Reactions to Rare Risks

Militarizing Cyberspace Will Do More Harm Than Good

Rhetoric of Cyber War Breeds Fear—and More Cyber War

The Boston Marathon Bombing: Keep Calm and Carry On

Why FBI and CIA Didn't Connect the Dots

The FBI's New Wiretapping Plan Is Great News for Criminals

US Offensive Cyberwar Policy

Chapter 3: Human Aspects of Security

Secret Questions Blow a Hole in Security

When You Lose a Piece of Kit, the Real Loss Is the Data It Contains

The Kindness of Strangers

Blaming the User Is Easy—But It's Better to Bypass Them Altogether

The Value of Self-Enforcing Protocols

Reputation Is Everything in IT Security

When to Change Passwords

The Big Idea: Bruce Schneier

High-Tech Cheats in a World of Trust

Detecting Cheaters

Lance Armstrong and the Prisoner's Dilemma of Doping in Professional Sports

Trust and Society

How Secure Is the Papal Election?

The Court of Public Opinion

On Security Awareness Training

Our New Regimes of Trust

Chapter 4: Privacy and Surveillance

The Myth of the “Transparent Society”

Our Data, Ourselves

The Future of Ephemeral Conversation

How to Prevent Digital Snooping

Architecture of Privacy

Privacy in the Age of Persistence

Should We Have an Expectation of Online Privacy?

Offhand but On Record

Google's and Facebook's Privacy Illusion

The Internet: Anonymous Forever

A Taxonomy of Social Networking Data

The Difficulty of Surveillance Crowdsourcing

The Internet Is a Surveillance State

Surveillance and the Internet of Things

Government Secrets and the Need for Whistleblowers

Before Prosecuting, Investigate the Government

Chapter 5: Psychology of Security

The Security Mindset

The Difference between Feeling and Reality in Security

How the Human Brain Buys Security

Does Risk Management Make Sense?

How the Great Conficker Panic Hacked into Human Credulity

How Science Fiction Writers Can Help, or Hurt, Homeland Security

Privacy Salience and Social Networking Sites

Security, Group Size, and the Human Brain

People Understand Risks—But Do Security Staff Understand People?

Nature's Fears Extend to Online Behavior

Chapter 6: Security and Technology

The Ethics of Vulnerability Research

I've Seen the Future, and It Has a Kill Switch

Software Makers Should Take Responsibility

Lesson from the DNS Bug: Patching Isn't Enough

Why Being Open about Security Makes Us All Safer in the Long Run

Boston Court's Meddling with “Full Disclosure” Is Unwelcome

Quantum Cryptography: As Awesome as It Is Pointless

Passwords Are Not Broken, but How We Choose Them Sure Is

America's Next Top Hash Function Begins

Tigers Use Scent, Birds Use Calls—Biometrics Are Just Animal Instinct

The Secret Question Is: Why Do IT Systems Use Insecure Passwords?

The Pros and Cons of Password Masking

Technology Shouldn't Give Big Brother a Head Start

Lockpicking and the Internet

The Battle Is On against Facebook and Co. to Regain Control of Our Files

The Difficulty of Un-Authentication

Is Antivirus Dead?

Virus and Protocol Scares Happen Every Day—but Don't Let Them Worry You

The Failure of Cryptography to Secure Modern Networks

The Story behind the Stuxnet Virus

The Dangers of a Software Monoculture

How Changing Technology Affects Security

The Importance of Security Engineering

Technologies of Surveillance

When Technology Overtakes Security

Chapter 7: Travel and Security

Crossing Borders with Laptops and PDAs

The TSA's Useless Photo ID Rules

The Two Classes of Airport Contraband

Fixing Airport Security

Laptop Security while Crossing Borders

Breaching the Secure Area in Airports

Stop the Panic on Air Security

A Waste of Money and Time

Why the TSA Can't Back Down

The Trouble with Airport Profiling

Chapter 8: Security, Policy, Liberty, and Law

Memo to Next President: How to Get Cybersecurity Right

CRB Checking

State Data Breach Notification Laws: Have They Helped?

How to Ensure Police Database Accuracy

How Perverse Incentives Drive Bad Security Decisions

It's Time to Drop the “Expectation of Privacy” Test

Who Should Be in Charge of Cybersecurity?

Coordinate, but Distribute Responsibility

“Zero Tolerance” Really Means Zero Discretion

US Enables Chinese Hacking of Google

Should the Government Stop Outsourcing Code Development?

Punishing Security Breaches

Three Reasons to Kill the Internet Kill Switch Idea

Web Snooping Is a Dangerous Move

The Plan to Quarantine Infected Computers

Close the Washington Monument

Whitelisting and Blacklisting

Securing Medical Research: A Cybersecurity Point of View

Fear Pays the Bills, but Accounts Must Be Settled

Power and the Internet

Danger Lurks in Growing New Internet Nationalism

IT for Oppression

The Public/Private Surveillance Partnership

Transparency and Accountability Don't Hurt Security—They're Crucial to It

It's Smart Politics to Exaggerate Terrorist Threats

References

Introduction

Chapter 1

The Business and Economics of Security

Consolidation: Plague or Progress

Originally published in Information Security, March 2008

This essay appeared as the second half of a point/counterpoint with Marcus Ranum.

We know what we don't like about buying consolidated product suites: one great product and a bunch of mediocre ones. And we know what we don't like about buying best-of-breed: multiple vendors, multiple interfaces, and multiple products that don't work well together. The security industry has gone back and forth between the two, as a new generation of IT security professionals rediscovers the downsides of each solution.

The real problem is that neither solution really works, and we continually fool ourselves into believing whatever we don't have is better than what we have at the time. And the real solution is to buy results, not products.

Honestly, no one wants to buy IT security. People want to buy whatever they want—connectivity, a Web presence, email, networked applications, whatever—and they want it to be secure. That they're forced to spend money on IT security is an artifact of the youth of the computer industry. And sooner or later the need to buy security will disappear.

It will disappear because IT vendors are starting to realize they have to provide security as part of whatever they're selling. It will disappear because organizations are starting to buy services instead of products, and demanding security as part of those services. It will disappear because the security industry will disappear as a consumer category, and will instead market to the IT industry.

The critical driver here is outsourcing. Outsourcing is the ultimate consolidator, because the customer no longer cares about the details. If I buy my network services from a large IT infrastructure company, I don't care if it secures things by installing the hot new intrusion prevention systems, by configuring the routers and servers so as to obviate the need for network-based security, or if it uses magic security dust given to it by elven kings. I just want a contract that specifies a level and quality of service, and my vendor can figure it out.

IT is infrastructure. Infrastructure is always outsourced. And the details of how the infrastructure works are left to the companies that provide it.

This is the future of IT, and when that happens we're going to start to see a type of consolidation we haven't seen before. Instead of large security companies gobbling up small security companies, both large and small security companies will be gobbled up by non-security companies. It's already starting to happen. In 2006, IBM bought ISS. The same year BT bought my company, Counterpane, and last year it bought INS. These aren't large security companies buying small security companies; these are non-security companies buying large and small security companies.

If I were Symantec and McAfee, I would be preparing myself for a buyer.

This is good consolidation. Instead of having to choose between a single product suite that isn't very good or a best-of-breed set of products that don't work well together, we can ignore the issue completely. We can just find an infrastructure provider that will figure it out and make it work—who cares how?

Prediction: RSA Conference Will Shrink Like a Punctured Balloon

Originally published in Wired News, April 17, 2008

Last week was the RSA Conference, easily the largest information security conference in the world. More than 17,000 people descended on San Francisco's Moscone Center to hear some of the more than 250 talks, attend I-didn't-try-to-count parties, and try to evade over 350 exhibitors vying to sell them stuff.

Talk to the exhibitors, though, and the most common complaint is that the attendees aren't buying.

It's not the quality of the wares. The show floor is filled with new security products, new technologies, and new ideas. Many of these are products that will make the attendees' companies more secure in all sorts of different ways. The problem is that most of the people attending the RSA Conference can't understand what the products do or why they should buy them. So they don't.

I spoke with one person whose trip was paid for by a smallish security firm. He was one of the company's first customers, and the company was proud to parade him in front of the press. I asked him whether he walked through the show floor, looking at the company's competitors to see if there was any benefit to switching.

“I can't figure out what any of those companies do,” he replied.

I believe him. The booths are filled with broad product claims, meaningless security platitudes and unintelligible marketing literature. You could walk into a booth, listen to a five-minute sales pitch by a marketing type, and still not know what the company does. Even seasoned security professionals are confused.

Commerce requires a meeting of the minds between buyer and seller, and it's just not happening. The sellers can't explain what they're selling to the buyers, and the buyers don't buy because they don't understand what the sellers are selling. There's a mismatch between the two; they're so far apart that they're barely speaking the same language.

This is a bad thing in the near term—some good companies will go bankrupt and some good security technologies won't get deployed—but it's a good thing in the long run. It demonstrates that the computer industry is maturing: IT is getting complicated and subtle, and users are starting to treat it like infrastructure.

For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure—power, water, cleaning service, tax preparation—customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

No one wants to buy security. They want to buy something truly useful—database management systems, Web 2.0 collaboration tools, a company-wide network—and they want it to be secure. They don't want to have to become IT security experts. They don't want to have to go to the RSA Conference. This is the future of IT security.

You can see it in the large IT outsourcing contracts that companies are signing—not security outsourcing contracts, but more general IT contracts that include security. You can see it in the current wave of industry consolidation: not large security companies buying small security companies, but non-security companies buying security companies. And you can see it in the new popularity of software as a service: Customers want solutions; who cares about the details?

Imagine if the inventor of antilock brakes—or any automobile safety or security feature—had to sell them directly to the consumer. It would be an uphill battle convincing the average driver that he needed to buy them; maybe that technology would have succeeded and maybe it wouldn't. But that's not what happens. Antilock brakes, airbags and that annoying sensor that beeps when you're backing up too close to another object are sold to automobile companies, and those companies bundle them together into cars that are sold to consumers. This doesn't mean that automobile safety isn't important, and often these new features are touted by the car manufacturers.

The RSA Conference won't die, of course. Security is too important for that. There will still be new technologies, new products and new startups. But it will become inward-facing, slowly turning into an industry conference. It'll be security companies selling to the companies who sell to corporate and home users—and will no longer be a 17,000-person user conference.

How to Sell Security

Originally published in CIO, May 26, 2008

It's a truism in sales that it's easier to sell someone something he wants than a defense against something he wants to avoid. People are reluctant to buy insurance, or home security devices, or computer security anything. It's not that they don't ever buy these things, but it's an uphill struggle.

The reason is psychological. And it's the same dynamic when it's a security vendor trying to sell its products or services, a CIO trying to convince senior management to invest in security or a security officer trying to implement a security policy with her company's employees.

It's also true that the better you understand your buyer, the better you can sell.

Why People Are Willing to Take Risks

First, a bit about Prospect Theory, the underlying theory behind the newly popular field of behavioral economics. Prospect Theory was developed by Daniel Kahneman and Amos Tversky in 1979 (Kahneman went on to win a Nobel Prize for this and other similar work) to explain how people make trade-offs that involve risk. Before this work, economists had a model of “economic man,” a rational being who makes trade-offs based on some logical calculation. Kahneman and Tversky showed that real people are far more subtle and ornery.

Here's an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500 and a 50 percent chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500 and a 50 percent chance of losing $1,000.

These two trade-offs are very similar, and traditional economics predicts that whether you're contemplating a gain or a loss doesn't make a difference: People make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn't affect the mathematics and therefore shouldn't affect the results. This is traditional economics, and it's called Utility Theory.

But Kahneman's and Tversky's experiments contradicted Utility Theory. When faced with a gain, about 85 percent of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70 percent chose the risky larger loss over the sure smaller loss.

This experiment, repeated again and again by many researchers across ages, genders, cultures, and even species, yielded the same result, and it rocked economics. Directly contradicting the traditional idea of “economic man,” Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or “A bird in the hand is worth two in the bush.” And two, a sure loss is worse than a chance at a greater loss, or “Run away and live to fight another day.” Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50 percent chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.
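
The arithmetic behind this bias is easy to see with a prospect-theory-style value function. The sketch below replays the $500-versus-$1,000 experiment; the curvature and loss-aversion parameters (0.88 and 2.25) are the commonly cited Kahneman-Tversky estimates, used here purely for illustration rather than figures from this essay.

```python
# A minimal sketch of how a prospect-theory value function flips the
# $500-versus-$1,000 choice depending on whether it is framed as a gain
# or a loss. The parameters are the commonly cited Kahneman-Tversky
# estimates (curvature 0.88, loss aversion 2.25), used only for illustration.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of an outcome x: positive x is a gain, negative x a loss."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# Gain framing: a sure $500 versus a 50 percent chance at $1,000.
sure_gain = value(500)
risky_gain = 0.5 * value(1000) + 0.5 * value(0)
print(f"gain frame: sure {sure_gain:.0f} vs gamble {risky_gain:.0f}")
# The sure thing scores higher, so most people take it (risk-averse for gains).

# Loss framing: a sure loss of $500 versus a 50 percent chance of losing $1,000.
sure_loss = value(-500)
risky_loss = 0.5 * value(-1000) + 0.5 * value(0)
print(f"loss frame: sure {sure_loss:.0f} vs gamble {risky_loss:.0f}")
# The gamble is less bad, so most people take the chance (risk-seeking for losses).
```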

This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the “Asian Disease Experiment” for an almost surreal example. Describing the same policy choice in different ways—either as “200 lives saved out of 600” or “400 lives lost out of 600”—yields wildly different risk reactions.

Evolutionarily, the bias makes sense. It's a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there's a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor's edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad. Both can result in death, so the best option is to risk everything for the chance at no loss at all.

How to Sell Security

How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It's a choice between a small sure loss—the cost of the security product—and a large risky loss: for example, the results of an attack on one's network. Of course there's a lot more to the sale. The buyer has to be convinced that the product works, and he has to understand the threats against him and the risk that something bad will happen. But all things being equal, buyers would rather take the chance that the attack won't happen than suffer the sure loss that comes from purchasing the security product.

Security sellers know this, even if they don't understand why, and are continually trying to frame their products in terms of positive results. That's why you see slogans with the basic message, “We take care of security so you can focus on your business,” or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.

One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they're willing to do almost anything to make that feeling go away; lots of other psychological research supports that. Any burglar alarm salesman will tell you that people buy only after they've been robbed, or after one of their neighbors has been robbed. And the fears stoked by 9/11, and the politics surrounding 9/11, have fueled an entire industry devoted to counterterrorism. When emotion takes over like that, people are much less likely to think rationally.

Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service. Your car comes with safety and security features built in; they're not sold separately. Same with your house. And it should be the same with computers and networks. Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for. Security shouldn't be a separate policy for employees to follow but part of overall IT policy.

Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.

Why Do We Accept Signatures by Fax?

Originally published in Wired News, May 29, 2008

Aren't fax signatures the weirdest thing? It's trivial to cut and paste—with real scissors and glue—anyone's signature onto a document so that it'll look real when faxed. There is so little security in fax signatures that it's mind-boggling that anyone accepts them.

Yet people do, all the time. I've signed book contracts, credit card authorizations, nondisclosure agreements and all sorts of financial documents—all by fax. I even have a scanned file of my signature on my computer, so I can virtually cut and paste it into documents and fax them directly from my computer without ever having to print them out. What in the world is going on here?

And, more importantly, why are fax signatures still being used after years of experience? Why aren't there many stories of signatures forged through the use of fax machines?

The answer comes from looking at fax signatures not as an isolated security measure, but in the context of the larger system. Fax signatures work because signed faxes exist within a broader communications context.

In a 2003 paper, Economics, Psychology, and Sociology of Security, Professor Andrew Odlyzko looks at fax signatures and concludes:

Although fax signatures have become widespread, their usage is restricted. They are not used for final contracts of substantial value, such as home purchases. That means that the insecurity of fax communications is not easy to exploit for large gain. Additional protection against abuse of fax insecurity is provided by the context in which faxes are used. There are records of phone calls that carry the faxes, paper trails inside enterprises and so on. Furthermore, unexpected large financial transfers trigger scrutiny. As a result, successful frauds are not easy to carry out by purely technical means.

He's right. Thinking back, there really aren't ways in which a criminal could use a forged document sent by fax to defraud me. I suppose an unscrupulous consulting client could forge my signature on a non-disclosure agreement and then sue me, but that hardly seems worth the effort. And if my broker received a fax document from me authorizing a money transfer to a Nigerian bank account, he would certainly call me before completing it.

Credit card signatures aren't verified in person, either—and I can already buy things over the phone with a credit card—so there are no new risks there, and Visa knows how to monitor transactions for fraud. Lots of companies accept purchase orders via fax, even for large amounts of stuff, but there's a physical audit trail, and the goods are shipped to a physical address—probably one the seller has shipped to before. Signatures are kind of a business lubricant: mostly, they help move things along smoothly.

Except when they don't.

On October 30, 2004, Tristian Wilson was released from a Memphis jail on the authority of a forged fax message. It wasn't even a particularly good forgery. It wasn't on the standard letterhead of the West Memphis Police Department. The name of the policeman who signed the fax was misspelled. And the time stamp on the top of the fax clearly showed that it was sent from a local McDonald's.

The success of this hack has nothing to do with the fact that it was sent over by fax. It worked because the jail had lousy verification procedures. They didn't notice any discrepancies in the fax. They didn't notice the phone number from which the fax was sent. They didn't call and verify that it was official. The jail was accustomed to getting release orders via fax, and just acted on this one without thinking. Would it have been any different had the forged release form been sent by mail or courier?

Yes, fax signatures always exist in context, but sometimes they are the linchpin within that context. If you can mimic enough of the context, or if those on the receiving end become complacent, you can get away with mischief.

Arguably, this is part of the security process. Signatures themselves are poorly defined. Sometimes a document is valid even if not signed: A person with both hands in a cast can still buy a house. Sometimes a document is invalid even if signed: The signer might be drunk, or have a gun pointed at his head. Or he might be a minor. Sometimes a valid signature isn't enough; in the United States there is an entire infrastructure of “notary publics” who officially witness signed documents. When I started filing my tax returns electronically, I had to sign a document stating that I wouldn't be signing my income tax documents. And banks don't even bother verifying signatures on checks less than $30,000; it's cheaper to deal with fraud after the fact than prevent it.

Over the course of centuries, business and legal systems have slowly sorted out what types of additional controls are required around signatures, and in which circumstances.

Those same systems will be able to sort out fax signatures, too, but it'll be slow. And that's where there will be potential problems. Already fax is a declining technology. In a few years it'll be largely obsolete, replaced by PDFs sent over e-mail and other forms of electronic documentation. In the past, we've had time to figure out how to deal with new technologies. Now, by the time we institutionalize these measures, the technologies are likely to be obsolete.

What that means is people are likely to treat fax signatures—or whatever replaces them—exactly the same way as paper signatures. And sometimes that assumption will get them into trouble.

But it won't cause social havoc. Wilson's story is remarkable mostly because it's so exceptional. And even he was rearrested at his home less than a week later. Fax signatures may be new, but fake signatures have always been a possibility. Our legal and business systems need to deal with the underlying problem—false authentication—rather than focus on the technology of the moment. Systems need to defend themselves against the possibility of fake signatures, regardless of how they arrive.

The Pros and Cons of LifeLock

Originally published in Wired News, June 12, 2008

LifeLock, one of the companies that offers identity-theft protection in the United States, has been taking quite a beating recently. They're being sued by credit bureaus, competitors and lawyers in several states that are launching class action lawsuits. And the stories in the media… it's like a piranha feeding frenzy.

There are also a lot of errors and misconceptions. With its aggressive advertising campaign and a CEO who publishes his Social Security number and dares people to steal his identity—Todd Davis, 457-55-5462—LifeLock is a company that's easy to hate. But the company's story has some interesting security lessons, and it's worth understanding in some detail.

In December 2003, as part of the Fair and Accurate Credit Transactions Act, or FACTA, credit bureaus were forced to allow you to put a fraud alert on your credit reports, requiring lenders to verify your identity before issuing a credit card in your name. This alert is temporary, and expires after 90 days. Several companies have sprung up—LifeLock, Debix, LoudSiren, TrustedID—that automatically renew these alerts and effectively make them permanent.

This service pisses off the credit bureaus and their financial customers. The reason lenders don't routinely verify your identity before issuing you credit is that it takes time, costs money and is one more hurdle between you and another credit card. (Buy, buy, buy—it's the American way.) So in the eyes of credit bureaus, LifeLock's customers are inferior goods; selling their data isn't as valuable. LifeLock also opts its customers out of pre-approved credit card offers, further making them less valuable in the eyes of credit bureaus.

And so began a smear campaign on the part of the credit bureaus. You can read their points of view in this New York Times article, written by a reporter who didn't do much more than regurgitate their talking points. And the class action lawsuits have piled on, accusing LifeLock of deceptive business practices, fraudulent advertising and so on. The biggest smear is that LifeLock didn't even protect Todd Davis, and that his identity was allegedly stolen.

It wasn't. Someone in Texas used Davis's SSN to get a $500 advance against his paycheck. It worked because the loan operation didn't check with any of the credit bureaus before approving the loan—perfectly reasonable for an amount this small. The payday-loan operation called Davis to collect, and LifeLock cleared up the problem. His credit report remains spotless.

The Experian credit bureau's lawsuit basically claims that fraud alerts are only for people who have been victims of identity theft. This seems spurious; the text of the law states that anyone “who asserts a good faith suspicion that the consumer has been or is about to become a victim of fraud or related crime” can request a fraud alert. It seems to me that includes anybody who has ever received one of those notices about their financial details being lost or stolen, which is everybody.

As to deceptive business practices and fraudulent advertising—those just seem like class action lawyers piling on. LifeLock's aggressive fear-based marketing doesn't seem any worse than a lot of other similar advertising campaigns. My guess is that the class action lawsuits won't go anywhere.

In reality, forcing lenders to verify identity before issuing credit is exactly the sort of thing we need to do to fight identity theft. Basically, there are two ways to deal with identity theft: Make personal information harder to steal, and make stolen personal information harder to use. We all know the former doesn't work, so that leaves the latter. If Congress wanted to solve the problem for real, one of the things it would do is make fraud alerts permanent for everybody. But the credit industry's lobbyists would never allow that.

LifeLock does a bunch of other clever things. They monitor the national address database, and alert you if your address changes. They look for your credit and debit card numbers on hacker and criminal websites and such, and assist you in getting a new number if they see it. They have a million-dollar service guarantee—for complicated legal reasons, they can't call it insurance—to help you recover if your identity is ever stolen.

But even with all of this, I am not a LifeLock customer. At $120 a year, it's just not worth it. You wouldn't know it from the press attention, but dealing with identity theft has become easier and more routine. Sure, it's a pervasive problem. The Federal Trade Commission reported that 8.3 million Americans were identity-theft victims in 2005. But that includes things like someone stealing your credit card and using it, something that rarely costs you any money and that LifeLock doesn't protect against. New account fraud is much less common, affecting 1.8 million Americans per year, or 0.8 percent of the adult population. The FTC hasn't published detailed numbers for 2006 or 2007, but the rate seems to be declining.

New account fraud is also not very damaging. The median amount of fraud the thief commits is $1,350, but you're not liable for that. Some spectacularly horrible identity-theft stories notwithstanding, the financial industry is pretty good at quickly cleaning up the mess. The victim's median out-of-pocket cost for new account fraud is only $40, plus ten hours of grief to clean up the problem. Even assuming your time is worth $100 an hour, LifeLock isn't worth more than $8 a year.
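
That $8 figure follows directly from the numbers above. A minimal sketch of the arithmetic, using the essay's own assumption of $100 an hour:

```python
# Back-of-the-envelope expected annual loss from new account fraud,
# using the figures quoted above.
fraud_rate = 0.008        # 0.8% of adults hit by new account fraud per year
out_of_pocket = 40        # median direct cost to the victim, in dollars
hours_of_grief = 10       # time spent cleaning up the problem
value_of_time = 100       # dollars per hour, a generous assumption

expected_annual_loss = fraud_rate * (out_of_pocket + hours_of_grief * value_of_time)
print(f"expected annual loss: ${expected_annual_loss:.2f}")  # about $8.32
```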

And it's hard to get any data on how effective LifeLock really is. They've been in business three years and have about a million customers, but most of them have joined up in the last year. They've paid out on their service guarantee 113 times, but a lot of those were for things that happened before their customers became customers. (It was easier to pay than argue, I assume.) But they don't know how often the fraud alerts actually catch an identity thief in the act. My guess is that it's less than the 0.8 percent fraud rate above.

LifeLock's business model is based more on the fear of identity theft than the actual risk.

It's pretty ironic of the credit bureaus to attack LifeLock on its marketing practices, since they know all about profiting from the fear of identity theft. FACTA also forced the credit bureaus to give Americans a free credit report once a year upon request. Through deceptive marketing techniques, they've turned this requirement into a multimillion-dollar business.

Get LifeLock if you want, or one of its competitors if you prefer. But remember that you can do most of what these companies do yourself. You can put a fraud alert on your own account, but you have to remember to renew it every three months. You can also put a credit freeze on your account, which is more work for the average consumer but more effective if you're a privacy wonk—and the rules differ by state. And maybe someday Congress will do the right thing and put LifeLock out of business by forcing lenders to verify identity every time they issue credit in someone's name.

The Problem Is Information Insecurity

Originally published in Security Watch, August 10, 2008

Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.

The problem is that all the money we spend isn't fixing the problem. We're paying, but we still end up with insecurities.

The problem is insecure software. It's bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.

And that's the problem. We're not paying to improve the security of the underlying software. We're paying to deal with the problem rather than to fix it.

The only way to fix this problem is for vendors to fix their software, and they won't do it until it's in their financial best interests to do so.

Today, the costs of insecure software aren't borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that's borne by people other than those making the decision.

There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is a way to make it in those organizations' best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.

Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.

Clearly, this isn't all or nothing. There are many parties involved in a typical software attack. There's the company that sold the software with the vulnerability in the first place. There's the person who wrote the attack tool. There's the attacker himself, who used the tool to break into a network. There's the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn't fall on the shoulders of the software vendor, just as 100% shouldn't fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

We will always pay for security. If software vendors have liability costs, they'll pass those on to us. It might not be cheaper than what we're paying today. But as long as we're going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they're entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security isn't a technological problem. It's an economics problem. And the way to improve information technology is to fix the economics problem. Do that, and everything else will follow.

Security ROI: Fact or Fiction?

Originally published in CSO Magazine, September 2, 2008

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It's become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

It's a good idea in theory, but it's mostly bunk in practice.

Before I get into the details, there's one point I have to make. “ROI” as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.

But as anyone who has lived through a company's vicious end-of-year budget-slashing exercises knows, when you're trying to make your numbers, cutting costs is the same as increasing revenues. So while security can't produce ROI, loss prevention most certainly affects a company's bottom line.

And a company should implement only security countermeasures that affect its bottom line positively. It shouldn't spend more on a security problem than the problem is worth. Conversely, it shouldn't ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.

The classic methodology is called annualized loss expectancy (ALE), and it's straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you're wasting money. Spend less than that, and you're also wasting money.

Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent—to 6 percent a year—then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it's worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn't.
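
Expressed as a few lines of arithmetic, the robbery example reduces to the sketch below. It is illustrative only, using the made-up figures above rather than real actuarial data.

```python
# Annualized loss expectancy (ALE) for the robbery example above.
incident_cost = 10_000       # cost of one robbery, in dollars
annual_probability = 0.10    # 10% chance of being robbed in a year

ale = incident_cost * annual_probability   # $1,000: the most you should spend per year
print(f"baseline ALE: ${ale:,.0f}")

# A countermeasure is worth at most the reduction in ALE it buys.
for risk_reduction, price in [(0.40, 400), (0.80, 800), (0.50, 300), (0.50, 700)]:
    worth = ale * risk_reduction
    verdict = "worth it" if price <= worth else "not worth it"
    print(f"cuts risk by {risk_reduction:.0%} and costs ${price}: "
          f"worth up to ${worth:.0f}, so {verdict}")
```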

The Data Imperative

The key to making this work is good data; the term of art is “actuarial tail.” If you're doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store's neighborhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you're having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night—assuming that the closed store won't get robbed as well. And then you can decide whether to install one.

Cybersecurity is considerably harder, because there just isn't enough good data. There aren't good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures—or specific configurations of countermeasures—mitigate those risks. We don't even have data on incident costs.

One problem is that the threat moves too quickly. The characteristics of the things we're trying to prevent change so quickly that we can't accumulate data fast enough. By the time we get some data, there's a new threat model for which we don't have enough data. So we can't create ALE models.

But there's another problem, and it's that the math quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost—reputational costs, loss of customers, etc.—of having your company's name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.

So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can't argue, since we're just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.

It gets worse when you deal with even more rare and expensive events. Imagine you're in charge of terrorism mitigation at a chlorine plant. What's the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions—and any answer is really just a guess—you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.
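
The swing is easy to reproduce. A minimal sketch, plugging in the guesses from the paragraph above:

```python
# How ALE swings with the guesses for a rare, catastrophic event
# (the chlorine-plant example). Every pairing is arguably defensible,
# yet the "justified" annual spend ranges from $10 to $100,000.
costs = [100e6, 1e9, 10e9]                          # $100 million, $1 billion, $10 billion
odds = [1 / 100_000, 1 / 1_000_000, 1 / 10_000_000]

for cost in costs:
    for p in odds:
        print(f"cost ${cost:,.0f}, odds 1 in {round(1 / p):,}: ALE = ${cost * p:,.0f}")
```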

Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by—and I'm making this up—30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has “killed” 620 people per year—930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
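
The waiting-time arithmetic works out in a few lines. A quick sketch, using only the figures in the paragraph above:

```python
# Airport waiting-time arithmetic from the paragraph above.
boardings = 760e6               # US passenger boardings in 2007
extra_wait_hours = 0.5          # 30 minutes per passenger (the made-up figure)

total_hours = boardings * extra_wait_hours
years_waited = total_hours / (24 * 365.25)          # about 43,000 years
lives_total = years_waited / 70                     # about 620 "lives" per year of boardings
lives_awake = years_waited / (70 * 16 / 24)         # about 930, counting only waking hours

print(f"{years_waited:,.0f} years of waiting, "
      f"roughly {lives_total:.0f} lives (total time) "
      f"or {lives_awake:.0f} lives (awake time only)")
```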

Caveat Emptor

This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They've jiggered the numbers so that they do.

This doesn't mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Don't even show the vendor your improvements; it won't consider any changes that make its product or service less cost-effective to be an “improvement.” And use those results as a general guide, along with risk management and compliance analyses, when you're deciding what security products and services to buy.

Social Networking Risks

Originally published in Information Security, February 2009

This essay appeared as the first half of a point-counterpoint with Marcus Ranum.

Are employees blogging corporate secrets? It's not an unreasonable fear, actually. People have always talked about work to their friends. It's human nature for people to talk about what's going on in their lives, and work is a lot of most people's lives. Historically, organizations generally didn't care very much. The conversations were intimate and ephemeral, so the risk was small. Unless you worked for the military with actual national secrets, no one worried about it very much.

What has changed is the nature of how we interact with our friends. We talk about our lives on our blogs, on social networking sites such as Facebook and Twitter, and on message boards pertaining to the work we're doing. What was once intimate and ephemeral is now available to the whole world, indexed by Google, and archived for posterity. A good open-source intelligence gatherer can learn a lot about what a company is doing by monitoring its employees' online activities. It's no wonder some organizations are nervous.

So yes, organizations should be concerned about employees leaking corporate secrets on social networking sites. And, as much as I hate to admit it, disciplinary action against employees who reveal too much in public is probably in order. But actually policing employees is almost certainly more expensive and more trouble than it's worth. And when an organization catches an employee being a bit too chatty about work details, it should be as forgiving as possible.

That's because this sort of openness is the future of work, and the organizations that get used to it or—even better—embrace it, are going to do better in the long run than organizations that futilely try to fight it.

The Internet is the greatest generation gap since rock and roll, and what we're seeing here is one particular skirmish across that gap. The younger generation, used to spending a lot of its life in public, clashes with an older generation in charge of a corporate culture that presumes a greater degree of discretion and greater level of control.

There are two things that are always true about generation gaps. The first is that the elder generation is always right about the problems that will result from whatever new/different/bad thing the younger generation is doing. And the second is that the younger generation is always right that whatever they're doing will become the new normal. These things have to be true; the older generation understands the problems better, but they're the ones who fade away and die.

Living an increasingly public life on social networking sites is the new normal. More corporate—and government—transparency is becoming the new normal. CEOs who blog aren't yet the new normal, but will be eventually. And then what will corporate secrecy look like? Organizations will still have secrets, of course, but they will be more public and more open about what they're doing and what they're thinking of doing. It'll be different than it is now, but it most likely won't be any worse.

Today isn't that day yet, which is why it's still proper for organizations to worry about loose fingers uploading corporate secrets. But the sooner an organization can adapt to this new normal and figure out how to be successful within it, the better it will survive these transitions. In the near term, it will be more likely to attract the next-generation talent it needs to figure out how to thrive. In the long term…well, we don't know what it will mean yet.

Same with blocking those sites; yes, they're enormous time-wasters. But if an organization has a problem with employee productivity, they're not going to solve it by censoring Internet access. Focus on the actual problem, and don't waste time on the particulars of how the problem manifests itself.

Do You Know Where Your Data Are?

Originally published in the Wall Street Journal, April 28, 2009

Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.

There's a basic consumer protection principle at work here, and it's the concept of “unfair and deceptive” trade practices. Basically, a company shouldn't be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren't generally available, claim features that don't exist, and so on.

Buried in RealAge's 2,400-word privacy policy is this disclosure: “If you elect to say yes to becoming a free RealAge Member, we will periodically send you free newsletters and e-mails that directly promote the use of our site(s) or the purchase of our products or services and may contain, in whole or in part, advertisements for third parties which relate to marketed products of selected RealAge partners.”

They maintain that when you join the website, you consent to receiving pharmaceutical company spam. But since that isn't spelled out, it's not really informed consent. That's deceptive.

Cloud computing is another technology where users entrust their data to service providers. Salesforce.com, Gmail, and Google Docs are examples; your data isn't on your computer—it's out in the “cloud” somewhere—and you access it from your web browser. Cloud computing has significant benefits for customers and huge profit potential for providers. It's one of the fastest growing IT market segments—69% of Americans now use some sort of cloud computing services—but the business is rife with shady, if not outright deceptive, advertising.

Take Google, for example. Last month, the Electronic Privacy Information Center (I'm on its board of directors) filed a complaint with the Federal Trade Commission concerning Google's cloud computing services. On its website, Google repeatedly assures customers that their data is secure and private, while published vulnerabilities demonstrate that it is not. Google's not foolish, though; its Terms of Service explicitly disavow any warranty or any liability for harm that might result from Google's negligence, recklessness, malevolent intent, or even purposeful disregard of existing legal obligations to protect the privacy and security of user data. EPIC claims that's deceptive.

Facebook isn't much better. Its plainly written (and not legally binding) Statement of Principles contains an admirable set of goals, but its denser and more legalistic Statement of Rights and Responsibilities undermines a lot of it. One research group that studies these documents called it “democracy theater”: Facebook wants the appearance of involving users in governance, without the messiness of actually having to do so. Deceptive.

These issues are not identical. RealAge is hiding what it does with your data. Google is trying to both assure you that your data is safe and duck any responsibility when it's not. Facebook wants to market a democracy but run a dictatorship. But they all involve trying to deceive the customer.

Cloud computing services like Google Docs, and social networking sites like RealAge and Facebook, bring with them significant privacy and security risks over and above traditional computing models. Unlike data on my own computer, which I can protect to whatever level I believe prudent, I have no control over any of these sites, nor any real knowledge of how these companies protect my privacy and security. I have to trust them.

This may be fine—the advantages might very well outweigh the risks—but users often can't weigh the trade-offs because these companies are going out of their way to hide the risks.

Of course, companies don't want people to make informed decisions about where to leave their personal data. RealAge wouldn't get 27 million members if its webpage clearly stated “you are signing up to receive e-mails containing advertising from pharmaceutical companies,” and Google Docs wouldn't get five million users if its webpage said “We'll take some steps to protect your privacy, but you can't blame us if something goes wrong.”

And of course, trust isn't black and white. If, for example, Amazon tried to use customer credit card info to buy itself office supplies, we'd all agree that that was wrong. If it used customer names to solicit new business from their friends, most of us would consider this wrong. When it uses buying history to try to sell customers new books, many of us appreciate the targeted marketing. Similarly, no one expects Google's security to be perfect. But if it didn't fix known vulnerabilities, most of us would consider that a problem.

This is why understanding is so important. For markets to work, consumers need to be able to make informed buying decisions. They need to understand both the costs and benefits of the products and services they buy. Allowing sellers to manipulate the market by lying outright about their products, or even by hiding vital information about them, breaks capitalism—and that's why the government has to step in to ensure markets work smoothly.

Last month, Mary K. Engle, Acting Deputy Director of the FTC's Bureau of Consumer Protection, said: “a company's marketing materials must be consistent with the nature of the product being offered. It's not enough to disclose the information only in the fine print of a lengthy online user agreement.” She was speaking about Digital Rights Management and, specifically, an incident where Sony used a music copy protection scheme without disclosing that it secretly installed software on customers' computers. DRM is different from cloud computing or even online surveys and quizzes, but the principle is the same.

Engle again: “if your advertising giveth and your EULA [license agreement] taketh away, don't be surprised if the FTC comes calling.” That's the right response from government.

Be Careful When You Come to Put Your Trust in the Clouds

Originally published in the Guardian, June 4, 2009

This year's overhyped IT concept is cloud computing. Also called software as a service (SaaS), cloud computing is when you run software over the Internet and access it via a browser. The salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

But, hype aside, cloud computing is nothing new. It's the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It's what Hotmail and Gmail have been doing all these years, and it's what social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs do. Any IT outsourcing—network infrastructure, security monitoring, remote hosting—is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they're still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

But what about security? Isn't it more dangerous to have your email on Hotmail's servers, your spreadsheets on Google's, your personal conversations on Facebook's, and your company's sales prospects on salesforce.com's? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors—and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We've spent decades dealing with worms and rootkits that target software vulnerabilities. We've worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

SaaS moves the trust boundary out one step further—you now have to also trust your software service vendors—but it doesn't fundamentally change anything. It's just another vendor you need to trust.

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if the vendors you have to trust aren't as trustworthy as you'd like. With any outsourcing model, whether it be cloud computing or something else, you can't. You have to trust your outsourcer completely. You not only have to trust the outsourcer's security, but also its reliability, its availability, and its business continuity.

You don't want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don't want the company you're using to be sold to your direct competitor. You don't want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren't as drastic.

There are two different types of cloud computing customers. The first pays only a nominal fee for these services, or uses them for free in exchange for ads: e.g., Gmail and Facebook. These customers have no leverage with their outsourcers. You can lose everything. Companies like Google and Amazon won't spend a lot of time caring. The second type of customer pays considerably for these services: to salesforce.com, MessageLabs, managed network companies, and so on. These customers have more leverage, provided they write their service contracts correctly. Still, nothing is guaranteed.

Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we'll get this right, but you don't want to be a casualty along the way.

Is Perfect Access Control Possible?

Originally published in Information Security, September 2009

This essay appeared as the second half of a point/counterpoint with Marcus Ranum.

Access control is difficult in an organizational setting. On one hand, every employee needs enough access to do his job. On the other hand, every time you give an employee more access, there's more risk: he could abuse that access, or lose information he has access to, or be socially engineered into giving that access to a malfeasant. So a smart, risk-conscious organization will give each employee the exact level of access he needs to do his job, and no more.

Over the years, there's been a lot of work put into role-based access control. But despite the large number of academic papers and high-profile security products, most organizations don't implement it—at all—with the predictable security problems as a result.
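
To make the idea concrete, here is a minimal sketch of role-based access control in Python. The roles, users, and permission names are hypothetical; a real deployment would draw them from directory groups and audited policy rather than hard-coded tables.

# A minimal, hypothetical sketch of role-based access control (RBAC).
# Users map to roles, roles map to permissions, and an action is
# allowed only if one of the user's roles grants it.

ROLE_PERMISSIONS = {
    "claims_clerk": {"read_record", "update_record"},
    "auditor": {"read_record", "read_audit_log"},
}

USER_ROLES = {
    "alice": {"claims_clerk"},
    "bob": {"auditor"},
}

def is_allowed(user, permission):
    """Grant an action only if one of the user's roles includes it."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

# Least privilege in practice: the clerk can update records but cannot
# read audit logs; the auditor can read records but not change them.
assert is_allowed("alice", "update_record")
assert not is_allowed("alice", "read_audit_log")
assert not is_allowed("bob", "update_record")

The point of the structure is that access decisions are made against roles, so granting or revoking a job function is a single policy change rather than a scattering of per-user exceptions.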

Regularly we read stories of employees abusing their database access-control privileges for personal reasons: medical records, tax records, passport records, police records. NSA eavesdroppers spy on their wives and girlfriends. Departing employees take corporate secrets.

A spectacular access control failure occurred in the UK in 2007. An employee of Her Majesty's Revenue & Customs had to send a couple of thousand sample records from a database on all children in the country to the National Audit Office. But it was easier for him to copy the entire database of 25 million people onto a couple of discs and put them in the mail than it was to select out just the records needed. Unfortunately, the discs got lost in the mail, and the story was a huge embarrassment for the government.