Wednesday, February 28, 2007

MoAB Madness 1: Entomology

January 2007 was home to a project known as the "Month of Apple Bugs", aka MoAB. This is the first article in MoAB Madness, a multi-part series about the project. I had intended to run some of these articles in January, during MoAB, rather than in the weeks following it. It was supposed to be a MoABA (Month of Apple Bugs Articles). But, note to self, I've discovered that it's a good idea to start writing something in January if you intend to post it in January. Since I'm not exactly on schedule, I think that this is going to have to be a QoABA (Quarter of Apple Bugs Articles).

At any rate, MoAB received a fair amount of press, albeit mostly in the tech community. So what am I hoping to add to this, especially at this late date? Well, I think that a lot of the coverage didn't cater to the audience that I hope to reach, namely the educated layperson. Many of the computer literate yet non-technical people I've spoken to hadn't heard about the project, or didn't understand what it was about. So, what I hope to do in MoAB Madness is provide both a big picture view of the computer security landscape, as well as a non-technical discussion of the bugs themselves.

We'll start MoAB Madness with this installment, which will talk about what the project is, and start to give some background on the principles at play. Part 2 will talk about the people who look for bugs, and how they disclose them once they find them.

Additional articles will look at the bugs themselves. Part 3 will give a brief overview (in non-technical terms) of each of the bugs revealed in January. Parts 4, 5, and 6 (and possibly more, if I need them) will look at some of the bugs in greater detail. From a technical perspective, many of the bugs can be grouped together. So, each of these articles will examine one class of bugs and, in layperson's terms, discuss what makes them tick. Each of these classes is a textbook example of an extraordinarily common programming mistake, and I hope to use the MoAB bugs to explain some Computer Science principles.

Finally, the last article will look at the lessons learned from the project.

It should be noted that I'm not going to link to the actual MoAB website. Call me paranoid1, but I never visited the MoAB site in a normal web browser2, and I don't want my readers to either. My rationale was that I didn't want to use software on my Mac to visit a website dedicated to (flamboyantly) pointing out security problems in Mac software. As it turns out, my gut instinct was right: on Day 29 the MoAB entry contained an image that locked up Safari. Anyway, if you really, really want to visit the actual site, it's a simple Google search away.

--

So what was the Month of Apple Bugs? I'll give a brief synopsis here, and leave most of the commentary and editorializing to future articles.

Back in December, before the project got off the ground, we had a fair bit of info from the MoAB website:

This initiative aims to serve as an effort to improve Mac OS X, uncovering and finding security flaws in different Apple software and third-party applications designed for this operating system. A positive side-effect, probably, will be a more concerned (security-wise) user-base and better practices from the management side of Apple. Also, we want to develop and provide tools and documented techniques to aid security research in this platform. If nothing else, we had fun working on it and hope people out there will enjoy the results. (LMH and Kevin Finisterre, 2006).

So what's the initiative they're referring to? It's pretty simple, really. Every day in January (that's 31 days, for those without access to a calendar), the MoAB project was going to release details about a previously undisclosed bug in Apple software. Well, it wasn't all going to be Apple software. From their FAQ:

Are Apple products the only one target of this initiative?

Not at all, but they are the main focus. We'll be looking over popular OS X applications as well.

Why were they doing this? Well, they tell us that it isn't out of malice:

Is this an attack, revenge, conspiracy or some kind of evil plot against Apple and the users of Apple products?

Not at all, some of us use OS X on a daily basis. Getting problems solved makes that use a bit more safe each day, for everyone else. Flaws exist, with and without people disclosing them. If we wanted to make business out of this we would be selling the issues and the proper exploit for each one. Thus, business-wise, we are wasting a good cake with this project (although software by Apple isn't really of interest in these terms, except iTunes and other high-profile applications).

A tiny bit of editorializing: I will grant them that if they were really "out for evil" they would have been selling the information about the security bugs to the highest bidder. However, their actions were still somewhat irresponsible. We will talk about this in detail in Part 2.

At the beginning of the month, developer (and former Apple employee) Landon Fuller launched a "Month of Apple Bugs Fixes" project where he hoped to provide unofficial patches to fix the previous day's bug. After a few days he set up a Google Group to coordinate his efforts with other volunteers. On Day 6 the MoAB organizers contacted Landon (note that this link contains a link to a MoAB page, which I don't recommend following) proposing that they give him early access to the bugs in order to expedite his repairs. After some deliberation, he declined due to a possible perception of a conflict of interest. In the end, this group did indeed provide unofficial patches to many of the bugs.

Finally, it should be noted that Apple never made any formal comment on the MoAB project. I'll have a lot more to say about that in the final article of MoAB Madness. To date, they have released two security updates (Security Update 2007-001 and Security Update 2007-002) that give credit to the MoAB project for discovering the bugs, and fix a total of five bugs.

--

As mentioned in the introduction, before we get to the bugs themselves, we're going to talk about some basic principles of computer security. But before we do that, we need to figure out exactly what we mean when we say "computer bug".

That one's easy, right? Ask someone on the street, and you'll likely get an answer along the lines of "a bug is when that stupid computer doesn't do what I told it to". Or maybe "a bug is a computer glitch". But, to a Computer Scientist, neither of those answers is quite correct.

In order to understand what a bug actually is, we need to look at what I've termed Deber's First Law of Computer Science:

Deber's First Law of Computer Science

Computers do exactly what you tell them. No more, no less.

At a glance, that might seem at odds with our intuitive definition of a bug. After all, a bug is when something you didn't intend to happen actually does, right? Well, the answer lies in the meaning of you. In most cases, you is the developer who wrote the computer software that you (the user) are using. In other words, the computer is doing exactly what the programmers told it to do; the problem is that the programmers screwed up.

It's important to recognize that all computer software has bugs3. Period. Full stop. Death, taxes, and bugs in software. Software is simply too complicated for us imperfect humans to write correctly; modern software can contain tens of millions of lines of computer code. But, more to the point, computers are literal entities, while we humans are not. We know how to interpret the world around us and extract meaning when the information (or instructions) we're given is fuzzy or unclear. Computers do not. Take a household example:

Apply to hair, lather, rinse, repeat.

A person reading those instructions knows that the shampoo manufacturer intended that you use two applications of their product. A computer reading those instructions would keep applying the shampoo until the bottle ran out, and then crash since there was nothing left to "apply to hair". Furthermore, the instructions don't specify all sorts of details that we humans automatically interpret, but would need to be explicitly stated for a computer. Do you have to wet your hair first? How wet? What temperature water? How much shampoo? How do you "lather"? How long do you lather for? How do you know when you've "rinsed" enough? And on and on and on. If a programmer doesn't specify each of these things correctly, the program might crash.
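
To see just how literal computers really are, here's a toy sketch, written in Python (a programming language), of the shampoo instructions as a computer would carry them out. Everything here, from the function names to the size of the bottle, is invented purely for illustration:

    def lather():
        pass  # How long? How vigorously? The instructions don't say.

    def rinse():
        pass  # How much water? What temperature? Also unspecified.

    shampoo_remaining = 500  # millilitres left in the bottle

    # "Apply to hair, lather, rinse, repeat", read with perfect literalness:
    while True:                   # "repeat": no stopping condition was given
        if shampoo_remaining <= 0:
            raise RuntimeError("nothing left to apply to hair")  # crash!
        shampoo_remaining -= 10   # apply to hair
        lather()
        rinse()

Run it and the loop dutifully repeats until the bottle is empty, at which point the program gives up with an error, exactly as described above.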

Finally, it should be noted that working on software is a Sisyphean task. Every time you add a new feature, you add new bugs. In fact, sometimes when you fix one bug, you add several new ones.

--

Some bugs are minor and insignificant (maybe some text is displayed 1 mm too far to the right), and some are major and potentially disastrous (maybe all of your files get deleted). It's some of the bugs in this second category that we're concerned with today. Or, more specifically, it's one type of major bug that we're concerned with today: the security bug. Security bugs are bugs that, in plain English, let nefarious evildoers do bad things to your computer. Best case, the bug can simply let them crash your computer and cause you to lose whatever you were doing at the moment. Worst case, the bug can let them take full control of your computer, and do anything from stealing your credit card numbers to deleting all of your files.

Terminology-wise, we say that a security bug causes a security flaw, also known as a vulnerability, or a security hole. Computer code crafted to take advantage of these vulnerabilities is called, unsurprisingly, an exploit.

We can further divide security bugs into three categories, depending on the amount of user interaction necessary to enable an attacker to exploit the security hole.

--

The least severe are those that require what I like to call active user interaction. In other words, an evildoer sends you an application of some sort (or gets you to download an application of his choosing), and you have to do something explicit (like double clicking on it) in order for the bug to be exploited. The key here is that the file in question is of a type that is known to be potentially risky (e.g., an application, not a picture or music file).

Some would argue that these types of security bugs aren't an issue at all, since once an adversary convinces you to run a program he sent you, all bets are off. Remember Deber's First Law of Computer Science? If you are running an application, your computer is doing exactly what someone else (the developer of the application) told it to. Is it a bug if the things that this developer told your computer to do are evil? Consider an example where an evildoer sends you an email message saying "Please delete all of your files immediately!". Would it be considered a security flaw in your email program if you followed these instructions and deleted all of your files? Think of it this way: any time you run an application, you are placing your trust in the fact that the authors of the application did not have malicious intent. Most of the time that's true, but when it's not, you can be in real trouble.

In some cases, I feel that bugs in this category are actually security problems (particularly those involving "privilege escalation", as we will see later on in the series), but in many cases they are not. Call them what you want, though: these types of security problems are the most common in the wild. Most of the "email viruses" (classics like ILoveYou and MyDoom) are in this category.

--

The second type of security flaw is the kind that requires passive user interaction. These are flaws where the user still needs to do something, but that something is an apparently innocuous action. In most cases, that action is visiting a web page. In other words, bugs in this class can cause something bad to happen to your computer if you go to an evildoer's webpage (or a webpage an evildoer has hacked and taken control of). Other paths of exploitation are viewing email messages (note that we're talking about viewing the email message, not opening any attachments) or opening "harmless" files such as Microsoft Word or Excel documents. Examples include the WMF image exploit and the recent spate of vulnerabilities in Microsoft Office products.

The astute reader might notice an ambiguous gray area between the aforementioned opening of a "harmless" file, and the opening of a "known risky" file mentioned in the first category. And that astute reader would be quite correct. The division isn't always clear, especially when security flaws are discovered in previously "harmless" formats, such as Microsoft Office documents. So yes, the division between these two categories can be hazy. But the point is that some actions (e.g., opening an application) clearly fall into the first category, while some (e.g., viewing a webpage) fall into the second.

That very same astute reader might also point out that they've been told that viewing a "cute picture" or a "funny video" that they get via email is a risky activity that might get them infected with a virus (an example was the Anna Kournikova virus that promised pictures of the tennis star). And doesn't that contradict what I've just said about "harmless" files? Well, not really. Almost every case of a virus appearing in a "cute picture" is actually a case of an application masquerading as a picture. In other words, the user might think that they are opening a picture, but they are actually opening an application. There is a security flaw in play here, but it's not a vulnerability in the way the image file is displayed. Instead, it's a poor (one might even say stupid) design decision in the Operating System that allows an application to masquerade as a "harmless" picture.
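
To make the masquerade concrete, here's a small Python sketch. The file names and the list of "risky" types are invented for illustration, but the double-extension trick is roughly the one the Anna Kournikova virus used:

    # Two email attachments: one genuine picture, one impostor.
    attachments = ["vacation.jpg", "AnnaKournikova.jpg.vbs"]

    RISKY_TYPES = {"exe", "vbs", "scr", "bat"}  # application-like extensions

    for name in attachments:
        true_type = name.rsplit(".", 1)[-1].lower()  # the final, real extension
        if true_type in RISKY_TYPES:
            print(name, "-> an application masquerading as a picture")
        else:
            print(name, "-> an actual picture of type", true_type)

If the Operating System hides that final extension, the user sees only "AnnaKournikova.jpg" and assumes they're opening a harmless picture.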

--

The final category is the worst. Those are bugs that require no user interaction. In other words, an evildoer can wreak havoc on your computer simply by virtue of it being turned on (and connected to the Internet). These are obviously the worst, since a user can behave perfectly and still be attacked. Evil programs that exploit these types of bugs are often called worms, and can spread themselves without user interaction (once a computer is infected, it automatically seeks out other computers to infect, thus perpetuating the infection). An example is the vulnerability exploited by the Code Red worm a few years back.

--

There is one final criterion that we have to consider, which is the idea of a default configuration. The word "default" here carries a relatively new meaning coined by the Computer Science community, not the more common "due to the exclusion of other candidates"4. In Computer Science parlance, it means the "standard way" or the "preset setting". So, a default configuration is the configuration that ships with the software from the manufacturer, and the one that will remain in place if a user doesn't explicitly change it. For example, the default configuration in many Word Processors is to use a 12 point Times New Roman font.

When it comes to computers, many users (especially less technical ones) will never change those default settings. So, a vulnerability that exists in a default configuration is far, far more significant than one that exists in a configuration that a user has to explicitly set up.
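
For the programmers in the audience, here's a minimal Python sketch of the idea; the setting names are invented for illustration:

    # The values that ship with the software, in force unless changed.
    DEFAULTS = {"font": "Times New Roman", "font_size": 12, "remote_login": False}

    def effective_config(user_changes=None):
        config = dict(DEFAULTS)            # start from what ships in the box
        config.update(user_changes or {})  # most users never supply changes...
        return config                      # ...so most users run pure defaults

    print(effective_config())                   # what the typical user runs
    print(effective_config({"font_size": 14}))  # a user who tweaked one setting

A security flaw in one of those DEFAULTS entries therefore affects essentially everybody, not just the adventurous few who dig through the preferences.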

--

With that, we're at the end of the probably-too-lengthy Part One of MoAB Madness. Part Two will talk about the people who look for security bugs, and the methods they use to disclose them once a bug is found.




  1. You're paranoid. 
  2. I used a web browser called lynx, a text-only browser that runs on the command line. 
  3. The etymology of the term "bug" isn't particularly clear. (Do I get bonus points for using both entomology and etymology in the same article?) It certainly predates WWII, although at the time it referred to problems with electronic gear (e.g., radio equipment), rather than computers (since computers didn't really exist yet). A famous story (and one that is often incorrectly cited as being the genesis of the word) relates to Grace Hopper, one of the early greats of Computer Science (and one of the very few women in the field in that era). While troubleshooting a hardware problem, she determined that it was caused by a moth that had gotten into one of the machine's relays and met an untimely demise. The log book entry read "First actual case of bug being found", with the moth itself taped to the page. That page is now in the Smithsonian. More info about the history of the word can be found here, if you're interested. 
  4. Scientist: [resigned] Well, Homer, I guess you're the winner by default.

    Homer: Default? Woo hoo! The two sweetest words in the English language: de-fault! De-fault! De-fault!

    [assistant clubs him]

    Scientist: Where'd you get that, anyway?

    Assistant: Sent away.5

     

  5. Episode 1F13 Deep Space Homer 

Friday, February 2, 2007

An Interesting Footnote

I've changed the style of footnotes that I'm using in this blog. I've also undertaken a bit of revisionist history, and updated the footnote formatting (but not the content) in all of the previous blog posts. I've also taken the opportunity to clean up a bit of other formatting.

Each footnote number in the main text is now a link to the footnote at the end of the document1. (Actually, I suppose that makes them endnotes, but in a single-page format (such as the web), the difference is tiny2.) Each footnote contains an arrow, ↩, which links back to the place in the article where the footnote came from.5

At least in theory it's an arrow. The arrow isn't a picture; it's actually a Unicode symbol (number 8617, to be exact). I'll write an article some time on what that means, but for now just think of Unicode as a way to describe characters that are more complicated than your run-of-the-mill "a" or "$". Most importantly, it allows for the characters necessary for non-English languages (e.g., Cyrillic characters, or those characters found in any number of Asian languages). However, it also provides all sorts of other cool stuff, ranging from musical notes to math symbols6. Now, this is all well and good assuming that your web browser is smart enough to display Unicode characters. If it's not, the arrow may appear as a box (which is the generic "I don't know how to display this character" character). That shouldn't happen on the Mac, nor with Firefox on either Windows or Linux. It may, however, occur with some versions of Internet Explorer on some versions of Windows. If you're using IE, might I take this opportunity to suggest trying out Firefox, a third-party web browser that is far more secure (and capable) than IE.
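
For the curious, here's what that arrow looks like from a programmer's point of view, in a short Python snippet (Unicode itself doesn't care which programming language you use):

    arrow = "\u21a9"   # LEFTWARDS ARROW WITH HOOK, Unicode number 0x21A9
    print(arrow)       # prints: ↩ (if your terminal can display Unicode)
    print(ord(arrow))  # prints: 8617, the same number written in decimal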

Anyway, this style of footnote is directly based on that of Daring Fireball. A discussion of these footnotes can be found in this article.

On a related note, I've started to use Markdown to prepare the text of this blog. It's a system that makes it easier to write the text of an article while still allowing easy access to features necessary for web publishing (like links). If you like using a text editor (as opposed to something like a Word Processor), it's worth a look. I've also hacked together a small script to expedite footnote processing, since Markdown does not provide "native" footnote support. This omission is particularly odd given that Markdown is written by John Gruber of Daring Fireball fame, whose footnote style is the direct inspiration for my script. When I get the code cleaned up a bit, I'll send it along to John.
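
To give a flavour of what such a script does, here's an illustrative Python sketch. To be clear, this is not my actual script, and the {{n}} marker syntax is invented for the example:

    import re

    def link_footnotes(html):
        """Convert {{n}} markers into Daring Fireball-style footnote links."""
        def as_link(match):
            n = match.group(1)
            return f'<sup id="fnref{n}"><a href="#fn{n}">{n}</a></sup>'
        return re.sub(r"\{\{(\d+)\}\}", as_link, html)

    print(link_footnotes("Call me paranoid{{1}}, but..."))
    # Call me paranoid<sup id="fnref1"><a href="#fn1">1</a></sup>, but...

    # A companion rule (omitted here) would wrap the footnote bodies in a
    # list, appending an &#8617; arrow linking back to the fnref anchor.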




  1. See, like this. 
  2. And blurry, just like the inside of a cataract.3
  3. Episode 2F08 Fear of Flying 4
  4. Yes, this quote is a bit of a stretch. But you try working a Simpsons reference into a post about footnotes! 
  5. See, like this. 
  6. In fact, Unicode provides room for more than a million different characters, so we should be ok even if humanity invents a couple of hundred new languages.