
November 27, 2011

#01

"When you get right down to it, most security is based on the honor system."

Here’s a scene straight from television. Two characters are at a computer terminal when suddenly intrusion warnings flare up.

"No way—I’m getting hacked! They’ve already burned through the NCIS public firewall."

"Well, isolate the node and dump ‘em on the public side of the router."

"I’m trying, it’s moving too fast!"

Now both characters are typing on the same computer, trying to outpace the brilliant hacker on the other end.

They blabber technobabble at each other, the writers’ way of conveying the characters’ innate mastery of the entire system as they unleash a formidable arsenal of countermeasures.

The hacker outmatches them at every turn.

"It’s not possible, this is DoD level-nine encryption. It would take months to…”

"I can’t stop him, do something McGee."

"I’ve never seen code like this!"

The screen goes black. The characters, despite their obvious talents, have been dwarfed by the formidable force on the other end of the connection.

It makes for exciting television, but the truth is even scarier.

In the real world, the majority of hackers don’t brilliantly blast through the defenses of slightly less brilliant computer whizzes.

They don’t win by being faster or smarter or better funded.

They win by waiting for smart people to do stupid things.

Modern systems — especially ones as complex as a computer network in a large organization — are not the work of a single brilliant designer. They’re collections of hundreds of smaller systems, each so complex that a person could fill eighty hours a week for the next fifty years maintaining any one of them.

A great system administrator might understand every single component of the system, but he’ll never be able to catch every security flaw. Even in an entirely open-source world, doing so would require the equivalent effort of perfectly proofreading an entire library full of books for typos and factual inconsistencies.

This is where the hacker has the advantage. He has access to the same books, and all he has to do is identify one short-sighted paragraph before the system administrator does. He doesn’t even have to find it himself; some other hacker can do the reading and share the result.

Sure, sure, the system administrator has help. All the other system administrators are looking over the same books. Each one can easily proofread one chapter in a timely manner, and report their findings to everyone else. Hell, they can all double-check an extra chapter to make sure nobody misses anything.

Every time someone says, “Hm, this doesn’t look quite right,” a race starts. System administrators must patch their systems before someone takes advantage of the vulnerability.

If the person who recognizes the issue isn’t a good citizen and decides to keep the information to himself, it could be weeks before someone else finds it. It might only come to light by working backwards from an attack to determine the root cause. Then again, it might never come to light, depending on how busy the administrator is or how easy it is to pin the blame on a convenient scapegoat.

No matter how much bang we can get from the off-the-shelf components of our system, we’re going to need people to cobble together the bits and pieces of those tools to make them handle the tasks specific to our business.

It all comes down to custom data at some point or another. Every organization of considerable size needs custom applications to handle specific business tasks, which usually boils down to saving and spitting out data at the appropriate times. In the software world, these systems are called CRUD systems, since that’s what they do — Create, Read, Update and Delete.
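
If that sounds abstract, here is roughly the entire core of such a system, sketched in Python with sqlite. To be clear, this is a minimal illustration rather than anyone’s production code, and the exhibits table and its fields are invented:

    import sqlite3

    # A minimal CRUD layer over one table. Most line-of-business
    # applications are some dressed-up version of these four calls.
    conn = sqlite3.connect("exhibits.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS exhibits (id INTEGER PRIMARY KEY, description TEXT)"
    )

    def create(description):
        cur = conn.execute("INSERT INTO exhibits (description) VALUES (?)", (description,))
        conn.commit()
        return cur.lastrowid

    def read(exhibit_id):
        row = conn.execute(
            "SELECT description FROM exhibits WHERE id = ?", (exhibit_id,)
        ).fetchone()
        return row[0] if row else None

    def update(exhibit_id, description):
        conn.execute(
            "UPDATE exhibits SET description = ? WHERE id = ?", (description, exhibit_id)
        )
        conn.commit()

    def delete(exhibit_id):
        conn.execute("DELETE FROM exhibits WHERE id = ?", (exhibit_id,))
        conn.commit()

Note the ? placeholders, by the way. Interpolating user input straight into the SQL string is exactly the kind of stupid thing that smart people do.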

Given how routine this is, you would think that every programmer would enter the field with the ability to create a stable, secure CRUD system off the top of their head while drunk. You might assume that this is a core part of their training.

Not so. The average programmer is trained not as an operator of simple input-output machines but as a Computer Scientist. And Computer Scientists are very clever. They’ve spent a lot of time and effort figuring out how to successfully design complex applications.

They also have to justify the forty hours a week they sit at their desks. It simply wouldn’t be feasible for a team of four programmers to finish a six-month project in two weeks. The entire management schedule would be thrown off.

So we have people who specialize in complex systems, with a vested interest in taking a large amount of time to create something, faced with a simple problem. Clearly, the goal is to fit the problem into the system and time available, which means taking the simple problem and making it more complex.

And now we have all sorts of nooks and crannies for oversights to hide, and we’re again stuck with the problem that even this small component of the system has become too complex for a single person to comprehend.

Add into the mix hundreds or thousands of employees of the organization. These employees don’t come into work each day with the goal of making the system more secure. They have their own work to get done.

Security is disregarded.

Now here’s a scene from a more realistic episode of a crime show. Investigators are working on a big case when they find out that a large portion of the evidence against the ring of criminals in question has mysteriously disappeared.

Our hero, the brilliant computer whizkid who knows everything about everything going on in the computers of the organization, finally cracks the case and tells his boss.

"I’ve been looking over these logs. I think something’s wrong."

"What?"

"Someone’s been hijacking evidence shipments in order to keep them from being used against certain criminals."

"How?"

"Well, I spent a few days looking into it, and it turns out that someone added a bit of malicious code into an exhibit description."

"Wait, wait, XSS?"

"Yes."

"Why are we letting people post arbitrary code to exhibit descriptions?"

"Well, the clerks writing the descriptions asked for it, so they could include tables of extra data and other images and things. It’s an internal application, so it wasn’t a huge security risk."

"So it’s an inside job?!"

"No, the hacker got in by guessing one of the clerks’ passwords."

"That’s not possible, we have a password security policy in place. Brute forcing that kind of thing would take hundreds of thousands of attempts."

"Well, normally, yes, but the clerk was using the same password for her email, and that server got hacked, and a file with all the usernames and plaintext passwords got dumped."

"But everyone knows user passwords need to be stored as one-way hashes so that getting the plaintext back out of them is impossible."

"They were hashed, but only md5! The hackers just used a rainbow table to do the lookups."

"How did the hacker even know to try the combination on our systems?”

"She used her work email."

"Even if he knew the username and password, he couldn’t have accessed the system. You have to be on the local network to access that system."

"Yes, well, I’m not positive on this, but one of the employees might have had too many devices and brought in a wireless hub from home, which wasn’t secured. The range of the hub reached across the street."

"Why didn’t anyone remove that device?!"

"We tried to, but the department director didn’t see anything wrong with it and didn’t want to jeopardize productivity. After all, everything is secured by username and password. It was his call."

"Ok, ok, so a hacker got into the system as a clerk and posted a fake evidence item. Big deal, clerks don’t have access to the shipment data."

"Right, but service agents do."

"So?"

"The hacker called up the main office pretending to be a lab tech and asked about the evidence item with the malicious code attached to it."

"Ok, but those are low-level employees, they can only read shipment data, they can’t write to it."

"Then he escalated the issue until supervisors were involved."

"So?"

"Supervisors have full permissions in the system to arbitrarily create shipment orders. The supervisor looked up the item in the system, and the malicious code created a new shipment order in the background."

"Wait, why do supervisors have the power to arbitrarily create shipment orders?”

"Well, because sometimes legitimate shipments to labs get totally fucked up and they need to cancel them and then reroute them by recreating the shipment."

"So why didn’t the programmers just create some sort of edit mechanism to modify a shipment?"

"Because we contracted out this system and the maintenance request would have taken too much time. One of the interns took initiative and made something that worked, but he could only access permissions configurations, not the application code, since that’s proprietary."

"Why didn’t anyone stop him?!"

"Why would they? Everyone was very happy with the results. We hired him over that. He’s a team lead now."

"Isn’t there a paper trail on this? Didn’t someone have to sign to let this evidence leave the warehouse?"

"Sure, but on paper everything’s legit."

"How do we stop him?"

"What?"

"How do we stop the hacker from destroying this evidence?"

"Well, this all happened weeks ago, so I’m pretty sure there’s nothing we can do. I only found out because some of the investigators were asking about missing items."

"So what now?"

"I’m writing up a postmortem. Once management reads it, maybe they’ll implement new policies."

"We already have policies on most of these things!”

"Maybe people will follow the policies."

That’s it. No valiant showdown between a small number of larger-than-life geniuses. The battle was lost six months ago, against human fallibility.

tl;dr I’m not sure people would watch my TV show.