What's Special About Programming?
This is chapter 3 of the iRi BlogBook Programming Wisdom.

Previous: What Is Programming?
Next: Nothing is Free

The first step to attaining wisdom is to understand why special "programming wisdom" is needed in the first place.

It's easy to get the idea that software is easy to create, because the idea is partially true. Computers get more powerful every year, and we trade in on that power to make programming easier. Every year brings more and better libraries. Software is very easy to change, and relatively easy to test compared to an equivalently-complex real-world object. J. Random User can write an Excel macro with a reasonable amount of effort that saves him a lot of time, and beginning programmers get excited about the amazing things they can do just by assembling existing libraries and frameworks, which makes everything seem easy.

This rosy picture is brought to you by confirmation bias, paying attention only to the uniquely easy characteristics and ignoring the things that make it uniquely challenging. Poke past the surface and you find a strange, complicated, chaotic beast. Learning to tame the power requires a lot of experience and wisdom.

blog entry, published Apr 09, 2007

Programming is Uniquely Difficult

Engineers of other disciplines often take offense at the claim that software is uniquely difficult, and they do have a point. As Fred Brooks pointed out in the hyper-classic The Mythical Man-Month, one reason software is hard is precisely because software is so uniquely easy.

We fundamentally build on top of components whose reliability is literally in the 99.999999999% range and beyond; a slow 2GHz CPU that "merely" failed once every trillion operations would still fail, on average, about every eight minutes at full load, which would be considered highly unreliable in a server room. Physical engineers would kill for this sort of reliability in their products. Or for an equivalent to our ability to re-use libraries. Or for how easily we can test most of our functionality, with the ability to replicate the tests 100% accurately. Or for any number of other very nice things we get in the software domain. Our job is far easier in some ways than any discipline concerned with the physical world, where nothing ever works 100%.
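
The arithmetic behind that claim is easy to check; here is a quick back-of-the-envelope sketch, assuming purely for illustration that the CPU retires one operation per clock cycle:

```python
# Back-of-the-envelope check of the failure-rate claim above.
# Assumption (for illustration only): one operation retired per clock cycle.
ops_per_second = 2e9      # a "slow" 2 GHz CPU running at full load
ops_per_failure = 1e12    # "merely" one failure per trillion operations

seconds_to_failure = ops_per_failure / ops_per_second
print(f"mean time to failure: {seconds_to_failure / 60:.1f} minutes")  # ~8.3 minutes
```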

Every library, every new computer, every new programming paradigm, and every other such new thing is designed to make programming easier. Some significant fraction of these things actually do make programming easier, though it can be surprisingly difficult to figure out exactly which. And with every task made easier, we face a choice: We can do the task in less time and then be done, or we can do the task in less time, then take on some other task.

Almost without fail, we choose the latter. This is not unique to software by any means; humans have been making this choice for centuries across a wide variety of fields. What is unique is how this interacts with the unique reliability of software: we can, and therefore do, create huge stacks of software for various purposes. The amount of software involved in running even the simplest of static web sites is well beyond what one human could fully understand. (Full understanding here means the ability to describe the reason for absolutely every design decision, and the ability to then make informed decisions about changes in such a way that minimal adverse effects occur. By this standard, it's likely nobody fully understands even something like the Linux kernel; even the more central people in kernel development have sometimes made incorrect decisions about core components.)

It's because it's easy to build and build on top of software that the "simple" task of web development requires understanding at least three languages (one a full-fledged programming language), and then at least two more languages (another full-fledged programming language and SQL for the database) on the server, and the server code itself may be even more complicated than that. It's because it's easy to build and build on top of software that this language count is going up, not down, in the future. It's because of this that Windows development tends to get more complicated over time, as more and more abstractions and layers are created.

If these layers could perfectly seal off the layers below them, this wouldn't be so bad, because what really matters is the set of knowledge you have to have in order to do useful work. If the abstractions were perfect, you'd only need to understand the top layer, and that is much simpler than having to understand the whole stack. Unfortunately, since all abstractions leak, the result is increasing complexity over time.
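
To make "leak" concrete, here is a tiny, hypothetical illustration (the function and file name are invented for this example): even a one-line convenience wrapper forces its callers to know about the filesystem, permissions, text encodings, and data formats underneath it.

```python
# Hypothetical illustration of a leaky abstraction: load_config() is meant to
# hide where settings come from, but the layers below it leak straight through.
import json

def load_config(path):
    """Return the configuration as a dict -- the 'simple' top-layer view."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

try:
    settings = load_config("app.json")
except FileNotFoundError:       # the filesystem layer showing through
    settings = {}
except PermissionError:         # the OS security layer showing through
    settings = {}
except (UnicodeDecodeError, json.JSONDecodeError):  # encoding and format layers
    settings = {}
```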

We make these trades for good reasons. I would not trade my software stack for a Commodore 64. I look forward to the next iteration as much as the next programmer. But modern software development is complicated beyond belief, beyond comprehension. Where once a Renaissance Man might know "all there is to know" about the whole of science, today you are an above-average developer if you can stay fully competent in more than one language of the many tens of languages that are viable for doing large projects... and that's just the mainstream general-purpose language count. Go beyond the mainstream or into specialized languages and the count goes into the high hundreds or low thousands.

Ironically, it is exactly the unique ease of software development that ends up making it uniquely complicated.

blog entry, published Apr 09, 2007

The Standard Construction Metaphor

The construction metaphor is often applied to software, and I've seen several skillful criticisms of the metaphor. But if you really want to see how stupid the metaphor is, reverse it.

If we built buildings the way we write software, we wouldn't even call a contractor; we'd go down to our local hardware store and pick up a copy of Microsoft House. We'd poke the parameters into Microsoft House, push a button, and our house would be templated within seconds. Rather than mucking about with blueprints and plans, we'd walk through an actual physical house and customize it in real time, because there's nothing we're going to ask for that Microsoft hasn't already heard of and incorporated into Microsoft House. Building a house has been done.

Microsoft House costs $59.95 and is certified to comply with the building and housing code in every jurisdiction in North America. You could also use GnuHouse, which is Free and has a few more features but a bit less style, since nobody could afford to hire the best designers.

Anyone who has so much as built a shed in the back yard knows this is not how construction works; construction is nothing like software engineering. In software, the difficulty of doing what has been done before is nearly zero, and as a result we don't spend much time on that. If you have to draw an analogy with something, "engineering" in software is more like "research" in any other field: you can't know exactly how long something will take, even if you have a good idea about where you're going and how to get there, because at any moment something new and surprising may jump out at you and change everything... and I'm not even considering the possibility of "changing requirements" when I say that.

blog entry, published Apr 11, 2007

Software is Uniquely Complicated

In 2007, with a well-loaded Linux desktop installation, my /usr/bin is 257 megabytes, with debugging off and dynamically-linked libraries not contributing to that count. My particular copy of the Linux kernel 2.6.19 with certain Gentoo patches has 202,381,268 bytes of C code alone. If I'm computing this correctly, at a constant 100 words per minute (5 chars/word), that's 281 24-hour days just to re-type the C code in the kernel.
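
For what it's worth, that figure is easy to reproduce from the numbers above; a quick sketch:

```python
# Reproducing the "281 days of typing" figure quoted above.
kernel_c_bytes = 202_381_268   # bytes of C code in the quoted kernel tree
chars_per_word = 5
words_per_minute = 100

minutes = kernel_c_bytes / chars_per_word / words_per_minute
days = minutes / 60 / 24
print(f"{days:.0f} 24-hour days of non-stop typing")  # ~281 days
```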

One of the projects I was able to work on during my schooling years was a relatively obscure Learning Content Management System with over a decade of history behind it. At the moment, that project contains roughly 3,000 files in its CVS repository and nearly 300,000 lines of Perl code in just under 9 megabytes, and it is still growing. One rule of thumb says that multiplying by five converts a line count from Perl to something like Java, which would make it 1.5 million lines of code. And this is just the project-specific code; it is layered on top of also-complex tools like the Perl implementation, the Apache webserver, the Linux kernel, and numerous other libraries and frameworks of all shapes and sizes. Some of these things, like the library used to support internationalization, are tiny. Others, like the Linux kernel or the Apache webserver, dwarf this single project.

No matter how you slice it, software has a lot of moving parts, but there's no obvious way to compare source-code complexity to mechanical complexity, so a straight part-count comparison would probably be disingenuous. I'd assert that even relatively simple pieces of software have more parts in them than even relatively complex machines like modern automobiles (minus their software), but I have no way to prove this.

There is a qualitative distinction we can draw between the physical world and the world of software, though: the way the parts of a program interact qualitatively differs from the way the parts of a real-world device interact. A real-world device's connectivity between parts is limited by physical three-dimensional space; with rare exceptions, parts that are interacting must be physically next to each other. In software, any part can interact with any other part at any time. It's as if they are all right next to each other, a physically untenable situation, the equivalent of zero-dimensional space. (There are some exceptions to this in software, like process boundaries, but those boundaries are often crossed as well.) Software can also include as a critical component arbitrary pieces from any place in the world, thanks to network communications; the Internet, considered as a machine, is the size of the planet. The software stack required to run a Google search on the client side is already complex (web browser, OS, graphics driver, text services, graphics renderers, and more), but add in the need for Google's server system to be functioning correctly, with its own mind-boggling complexity, and you start to see why it's a miracle software ever works at all.
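
Here is a tiny, hypothetical sketch of that "zero-dimensional" connectivity: two functions that never mention each other, yet interact freely through state that neither of them owns. All names are invented for illustration.

```python
# Sketch of "any part can interact with any other part": two functions written
# independently still influence each other through shared mutable state.

account_cache = {}   # module-level state, reachable from anywhere in the process

def record_payment(user_id, amount):
    # "Component A": updates the cache as a side effect.
    account_cache[user_id] = account_cache.get(user_id, 0) + amount

def send_reminder(user_id):
    # "Component B": written elsewhere, later, by someone else -- yet its
    # behavior silently depends on whatever A (or anything else) left behind.
    balance = account_cache.get(user_id, 0)
    return "all good" if balance >= 0 else "payment overdue"

record_payment("u1", -25)    # a change made "far away"...
print(send_reminder("u1"))   # ...alters this call's result: payment overdue
```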

It's tempting to dismiss this as hyperbole, but the effects routinely manifest in real life. I've experienced many errors at the highest layer ultimately being traceable back to a bug in something several layers deeper, sometimes all the way down to the kernel. Even the simplest act, like typing a character into a text box in a web browser, will in a fraction of a second call tens or hundreds of software "pieces" into action, starting at the operating system handling the keypress, up through the userspace, into the windowing system, through the layers the window system may have in place to modify the keyboard (such as code for handling international layouts), passing through the code to decide which window gets the keypress, into the program's instance of the widget library it uses, which routes the keypress into an event loop with correct source annotations, into the application code, which then itself creates a Javascript-layer event which may then be hooked into by arbitrary Javascript on the web page which can proceed to do any number of other things that may trigger another cascade that is equally complicated, or perhaps even more complicated (like loading a new image from the network). And that's the simplified version of the simple process of handling a keypress. Everywhere you look in software, you get these sorts of interactions routinely, and a single subtle flaw at any layer can have odd effects at any time.
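
To give that cascade a concrete shape, here is a deliberately toy sketch; every layer name below is made up, and a real stack is vastly longer and messier:

```python
# Toy sketch of layered event dispatch: a keypress passes through a stack of
# handlers, any one of which can transform it or kick off further work.
layers = [
    ("os_driver",      lambda ev: ev),
    ("intl_layout",    lambda ev: {**ev, "char": ev["char"].upper()}),  # illustrative remapping
    ("window_manager", lambda ev: ev),
    ("widget_toolkit", lambda ev: {**ev, "target": "text_box_1"}),
    ("page_script",    lambda ev: ev),  # arbitrary page code runs here
]

event = {"char": "a"}
for name, handler in layers:
    event = handler(event)
    print(f"{name}: {event}")
```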

Fortunately, we can also harness some characteristics of software to reduce the complication that any one person needs to worry about at any one point in time to a reasonable level, or it really would be impossible to write a program that can be counted on to work. Again, it is the unique ease of software that makes all this complexity possible; part of the reason other fields don't deal with the kind of complexity software deals with is that they lack the reliability of basic components, the testability, and the other nice aspects of the software domain. They are forced to keep it simple by the nature of the physical world. The only thing stopping medical doctors from dealing with equal or greater complexity is that they can't see into biological processes as well as we can see into software processes, so they are forced to work with a simplified model of the human body. As we continue to master the physical world, physical engineers and biologists will begin to experience this complexity too. Programmers may be blazing a trail, and it may be unique today, but someday everybody will get to deal with the complexity of software.

(Mu-hu-ha-ha-ha!)

blog entry, published Apr 16, 2007

Software is Uniquely Chaotic

Here I refer to the mathematical notion of chaos, which for our purposes I will simplify to: "A chaotic system is one in which small changes in the initial conditions can cause large and unpredictable changes in the system's iterated state."

Every clause of the definition is important. In particular, people often leave out the "unpredictable" part of the definition of chaos, but you do not have chaos if everything is predictable. If you are at the top of a very round, smooth hill with a heavy ball, the final destination of the ball is predictably determined by the initial direction you give it when you drop it. This is what physicists would call "unstable", but it is not chaotic.

Every computer science curriculum worth anything will talk about the fundamental limits of computing, such as the halting problem in all of its guises. One of the most important things to carry away from that seemingly-academic discussion, even if you have no interest in pursuing further academics, is that unpredictability is fundamental to the building blocks of software. Once you start using Turing machines, you have entered the realm of chaos. A single bit changed in the data or the program can have arbitrarily large effects, and in the general case you cannot predict what those effects will be, not even in theory.

Software is almost the canonical embodiment of mathematical chaos. You can control and limit the chaos to some degree, but there is a boundary beyond which you fundamentally may not pass, and the reality of this boundary is so thoroughly embedded in the way we program that it is almost invisible. (The people who can best help you see this boundary are those who study ways to prove correctness in programs. They push the frontier a little further back with great effort and cleverness, and for this I salute them, but they will never be able to completely remove the chaos.) Per my earlier discussion about the lack of spatial separation, the full state space of a system is inevitably, incomprehensibly large, leaving a lot of "room" for chaos to manifest, more than we could ever hope to examine. ("Room" grows exponentially in the size of the computer's memory.) This makes the system more unpredictable in practice, even if in theory the full behavior of the program could be understood. And because software is discrete, even the smallest change, a single bit, can cause arbitrarily large changes in the evolution of a program's state space.
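
If you want to see this sensitivity in miniature, the classic logistic map makes a convenient stand-in for iterated program state. This sketch is an analogy rather than software chaos itself: two starting values differing by roughly one part in a trillion end up bearing no resemblance to each other.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic map
# x -> r*x*(1-x), iterated as a stand-in for a program's evolving state.
def trajectory(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = trajectory(0.123456789)
b = trajectory(0.123456789 + 1e-12)  # a "single bit" worth of difference
print(a, b, abs(a - b))              # after 50 steps the two states have fully diverged
```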

Many other engineering disciplines certainly encounter chaos, though most try to minimize it, because unpredictable systems are generally less desirable than predictable ones. Even the disciplines that embrace chaos try to contain and minimize it; studying chaos can help you build a better muffin-batter mixer, but you wouldn't design the entire bread factory's machinery to behave mathematically chaotically. (If you did wish to invoke chaos for some reason, you'd do it with software managing the system; the machines themselves would still be designed to function non-chaotically.)

It can be very valuable for a computer programmer to take some time to study some of the characteristics of chaotic systems; I don't think a truly mathematical approach is necessary, so an informal overview and some time with a fractal-generation program should suffice. Things like "attractors" have obvious correlations to how software functions. You'll get enough practical experience once you start coding on your own, once you know what you're looking for.

blog entry, published Apr 24, 2007

[Cheap] Good Practice is Unusually Hard to Create

Because software exists as an amorphous collection of numbers, and is mostly concerned with the manipulation of other amorphous numbers, when it fails, it is on average not as big of a problem as when other engineering artifacts fail. Software generally can't kill someone. (To the extent that it can, more care needs to be taken.) Thus, given a choice between a program that occasionally sort of eats your data but mostly works for $50, or a solid program that never ever eats your data but costs X*$50, people will generally take the former. Even if it's a bad idea. Even if the program will end up eating more than (X-1)*$50 worth of data. I'm not saying it's rational, I'm just saying that's how people are. The more expensive, higher quality program often won't even get made because nobody will buy it.

How many of you out there in the audience have complained about Microsoft's OS products? How many of you have even seriously considered spending many thousands of dollars more on robust UNIX-based systems? A few hands, yes, but not many. (Note that the quoted price includes some estimated training costs and such.) How many of you would actually shell out $2000 for a hypothetical version of Windows that never crashed, but didn't actually have any more features than your current Windows OS? Not many, I see. What about during the Windows 3.1 days, back when Windows itself crashed more often? Ah, that's a few more, but most of you are still picking the cheap-but-crashy software. Don't lie, I can see it in your spending patterns.

Here lies the core problem with finding good practice for software engineering. We can adapt the same basic processes used in other engineering disciplines. We have the examples from NASA and select other applications to show that software can be created with extremely high reliability. However, in the "real world" people simply aren't willing to spend the money necessary to create software with these heavyweight good practices, because thanks to the previously mentioned unique aspects of software (the number of interacting parts, mathematical chaos), this sort of software is extremely expensive. People want cheaper software. This is perfectly rational; often the thing that costs $X and does 90% of what you need is honestly the better choice than the thing that costs $100*X and does everything you need perfectly; it all comes down to a complicated and situation-dependent set of calculations for each choice.

The other problem is that it's not necessarily clear what the best practice actually is after all. Non-software developers will often be seen accusing software developers as a whole of not caring about process, but the truth is almost the exact opposite: Software engineering as a whole is nearly obsessed with process. From the Agile Methodology proponents, to those pushing UML, to any number of management methodologies ranging from the heavy to the light and everything in between, everything has been tried at one point or another. Metrics? Tried 'em, from the simple ("lines of code") to the obscure and mathematical ("cyclomatic complexity"). None of them are worthwhile. Testing methodologies all fail in the face of exponential state space. Design methodologies have experienced some ups and downs, but still there's nothing like a "one true answer". It's not that software engineers haven't tried to produce good process, it's that it's really hard to create a good process that meets all the constraints placed on us by customers.
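
As a small illustration of why the simple metrics disappoint, consider "lines of code": the two hypothetical functions below do identical work yet differ several-fold by that measure.

```python
# Why raw line counts mislead: identical behavior, very different "size".
def clamp_terse(x, lo, hi):
    return max(lo, min(x, hi))

def clamp_verbose(x, lo, hi):
    # Same behavior, many more lines -- and by a pure LOC metric, "more output".
    if x < lo:
        result = lo
    elif x > hi:
        result = hi
    else:
        result = x
    return result

assert clamp_terse(5, 0, 3) == clamp_verbose(5, 0, 3) == 3
```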

Research into better methodologies is an ongoing process. Progress is slow due to the near impossibility of doing true scientific research on the topic, but some progress is being made. It's actually an amazing accomplishment for a 2007 program to have the same number of apparent bugs as a 1987 program; the same number of apparent bugs is spread out over a much larger code base, which implies that code bases are in fact improving in quality. This quality improvement happens as we improve our libraries, as we improve our methodologies slowly but surely, and as we tune our tools and libraries for these improved methodologies.

"Cheap, good, soon - pick two." In engineering terms, we are in fact learning how to make things cheaply and well, just as critics want, but it's at the cost of "soon". It's an extremely hard problem, so it's taking a long time. There's a long way yet to go. The way people want software to be all of "cheap, good, and soon" isn't really unique, but the degree which software is affected by these pressures is unusal... and as far as I can tell, the sanctimonious pronouncements about how we should do our job "better" from non-programmers do seem to be unique.

(One note: Throughout this section, when I talk about the costs of software, I am mostly talking about production costs, not actually the cost to the user. Thus, "free" software is not an issue here, because there is no such thing as software that is free to produce. "Free" or "open source" software simply pays for production costs in ways other than directly charging users; the mechanisms of such production are way out of scope of this book.)

blog entry, published May 21, 2007

Programming Is Not Uniquely Unique

I want to be clear about my purpose here. My point is not to claim that the uniqueness of programming is itself unique. Every interesting field is unique in its own special way, and for each field it helps to understand why it is unique if you wish to truly excel; otherwise you may bring inappropriate concepts in from other domains, or export inappropriate programming concepts to other domains. I say that programming has several unique aspects and that these aspects are worth thinking about, but this does not mean that programming is somehow privileged.

In fact, that would be a very bad attitude to have since the very purpose of a professional programmer is to serve somebody else, and service workers don't succeed with a holier-than-thou attitude.

This chapter is intended both to combat the perception I have seen that programming is somehow equivalent to some other task, prompting bad suggestions and in the worst cases bad decisions, and to explicitly call out the things that are special about programming to encourage people to think clearly about them. None of this takes away from the specialness or uniqueness of anything else.

There is a delicate balance to be had here. There are powerful underlying similarities shared by many disciplines, but everything is also unique. Ideal skill development can only be had when both truths are correctly balanced, when you learn how to correctly leverage your past experiences while at the same time seeing how the new task is different.

(This work is of course about programming because a programmer is what I am. I am not qualified to write Baseball Wisdom or Accounting Wisdom, presuming I'm even qualified to write Programming Wisdom. In a way, nobody ever really is, but it's better that somebody try.)

blog entry, published May 23, 2007


Next Chapter: Nothing is Free