It's 2015. Why do we still write insecure software?

I've read a lot of programming blogs, and if you're reading this, you probably have too. So let me tell you up-front this is not your usual security rant that boils down to "just try harder!" Let's talk about smart, experienced programmers who are trying to write secure code, even if they are not security "experts" per se. This is an important set of people, because there is more security-related software in the world to write than can be written by security experts.

In a perfect world, setting that as the target audience would conclude this essay. As your browser's scrollbar shows in the full view, this essay continues on for quite a while. Alas, decades of experience and a trained, reasonably high intelligence are not sufficient to write secure software in the current coding environment.

That's also about the highest level of qualification that can feasibly be brought to bear at any reasonable scale, so in practice that's equivalent to saying it's impossible to write secure software in the current coding environment.

Let's talk about why it's so hard. My thesis is simple:

We write insecure software because our coding environment makes it easier to write insecure software than secure software.

But exploring what it fully means can lead to some surprising places. Please join me on a journey as I try to show you why that is not trivially true, but in fact, profoundly true. We do not occasionally pick up insecure tools, like a broken encryption routine or a misused web framework; we are fish swimming in an ocean of insecurity, oblivious to how steeped in it we are.

First, I must establish that...

Writing Secure Programs Is Hard

We all have a limited cognitive budget to work with. No matter how much we like to pose as invincible rock stars or ninjas of mythic power, we are not. No matter how experienced we become, doing a task in three tokens instead of a hundred spends less of our cognitive budget. It is not "laziness" to want to spend as little effort as possible doing a task, it is a recognition of the fact that resources are always limited.

So it is a reasonable thing to do to examine coding environments for how much cognitive effort they require to write secure code. It is reasonable to compare environments to each other on this measure. It is reasonable to discard arguments based on the idea that it has nothing to do with the tools and the only thing that matters is the skill of the programmer.

(Sidebar: It is a common misconception in the programming world for some reason that the adage "It's a poor craftsman that blames his tools." means that someone blaming their tools proves they are a poor craftsman. The true meaning of the phrase is that it is a poor craftsman that has bad tools in the first place. A skilled craftsman does not use a dull saw without complaint. They get or make a sharp one, without complaint. A craftsman who uses poor tools without complaint is even poorer than the one who is at least complaining!)

Here we encounter the problem that it becomes difficult to objectively measure the cognitive costs that a language imposes, but I'd suggest that in practice, "token count" is close enough, and to the extent that is not true, it is increasingly true as time goes on. (Unfortunately, justifying that would be another essay entirely. Hypershort version: Common languages in use are squeezing out their accidental complexities, in the Fred Brooks sense. So let me emphasize again: close enough. Not perfect.) So to a first approximation, as I talk about "ease" and "work" below, you can imagine I'm talking about "token count".

It's A Small World-Writeable Variable After All

Let's warm up with a non-controversial example of how the Right Thing being cognitively expensive results in developers doing the easy wrong thing: Global variables. As I've said before,

All programmers will verbally agree they are bad, yet mysteriously, despite the fact that nobody ever writes one, they continue to show up in code bases and libraries. (I blame evil code fairies.)

Now I'm going to explain why they keep showing up.

(Spoiler: It's not code fairies. They're only responsible for mixed tabs and spaces, and the occasional insulting comment.)

I think it's because almost all languages make creating a global variable a one-line affair, and the rest still make it easier than any right thing.

Essay structure at this point says I should show you an example of how easy it is to set a global variable, but in most languages it's so freaking trivial that it's almost an embarrassment to show. But let it not be said my formal education has gone to waste, so in slavish compliance to formula, here it is:

variable = "value";

Javascript can use that directly. In Python the semi-colon is extraneous but it will work. With just two or three more tokens I suspect I get the majority of imperative languages. And even in the languages that require more, like Java, it's still as easy as doing anything else, as you're going to be babbling about classes no matter what you do.

Meanwhile, doing the "right" thing with a global value is significantly harder. The value has to be collected and passed around, with varying degrees of difficulty, and changing it later (say, the type of what is passed around) can touch hundreds of lines of code, which is evidence that this is not a simple problem. Nobody "wants" to use a global variable, but if you make it even slightly more convenient than the local "right answer", whatever that may be for a language, programmers are going to use them. Even Haskellers can succumb to the lure of unsafePerformIO $ newIORef 0.

By contrast, in many languages doing the right thing is beyond the scope of a blog post, since it's full of tradeoffs, and certainly involves a lot of tokens. But since essay structure calls for it here anyhow, here's one I did for Go. See how I'm almost giddy at how easy it is to do it right, and yet, see how much more than one line that inevitably is. (For reference, a global in Go is:

var Global = "value"

placed at the top level. One var token added, and the name has to start with uppercase. Still far easier than any "environment" usage.)
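
To make that token-count contrast concrete, here is a minimal sketch of one common "right thing" in Go: constructing the value once and passing it explicitly. The names here (Config, greet) are invented for illustration, and real programs have it worse, since the value usually has to be plumbed through many more layers.

package main

import "fmt"

// Config is a stand-in for the value we refused to make global; it has to be
// constructed somewhere and then handed to everything that needs it.
type Config struct {
    Greeting string
}

func greet(cfg Config, name string) string {
    return cfg.Greeting + ", " + name
}

func main() {
    cfg := Config{Greeting: "Hello"} // collected once...
    fmt.Println(greet(cfg, "world")) // ...and passed around ever after
}

Even in this toy, the explicit version costs a type, an extra parameter, and a construction site, against one line for the global.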

So let me take this opportunity to try to justify my "token count" metric based on this example. When we measure how easy a language makes something in this manner, we can factor out the question of "developer competence". We can justly criticize a developer for making a wrong choice, and we can criticize the environment for making the wrong choice appealing. Everybody loses!

The advantage of starting with this topic is that few people will argue in favor of global variables, and those few are mostly agreeing with me that they are easier than doing it "right"; the remainder I dismiss with a magisterial wave of my hand. So let me take a moment to restate my main thesis in terms of this non-controversial example before we get to the interesting stuff: The reason we still see so many global variables despite how wrong we all know they are really is nothing more and nothing less than the fact that they are easier than doing it right. At the moment that probably still sounds trivially obvious, but buckle up, everything up to this point was just making sure that we all are looking at the same thing and agree that we are staring at a rabbit hole.

Now put on your blue dress and white apron, it's time to jump in.

Or... uh... just jump in as you are. It's the Internet. I can't see you.

Moving On To Security

First, what do I mean by "security"? Well, the best general-purpose security definition is something like "a system which is more expensive to break than the value of what it protects", but that's too broad for my purposes today. In the software arena, there's a subset where we can reasonably talk about a nearly-binary case: Does the software permit somebody to do something which the owner of the system does not wish to permit them to do? Can I hack it by hacking the querystring? If I click the right sequence of buttons, does it decide I'm an administrator? Can I send in a buffer overflow and execute it?

This is a pragmatically-valid shortcut when discussing software because usually once a code vulnerability is known, exploiting it is so easy that it is virtually guaranteed to cost less than whatever it is protecting. Counterexamples exist, such as the BEAST attack which may indeed be more expensive than anything being protected, but they are dominated by the sheer quantity of vulnerabilities that are cheap and easy. (If this ever ceases to be a valid shortcut, software security will have come a long way indeed!) And on the flip side, if the software is "secure", no feasible amount of computation less than the sum total of what is available in the universe should be able to brute force its way in, reducing the problem to an entirely different one about physical and human security. (For the purposes of this definition, a mere brute-force DOS is not a security vulnerability for the software itself; any software can be overloaded.) So we can for convenience reasonably speak of software being "secure" or "insecure", since in practice few things manage to land on the narrow band in between those two.

So let's talk about how software on the "secure" side comes to be. But that's too large a task on its own, so let's start with something simple, like, oh, the single-character token +. Hard to imagine what's much simpler than that. Hard to imagine what could go wrong, right?

Addition

So. Pop quiz, what's one hundred plus one hundred?

Obviously, if I'm asking in this context, that's a trap. It is, of course, negative 56.

    var a = int8(100)
    var b = int8(100)
    fmt.Println("100 + 100 =", a + b)

I do not object to the fact that real computers have limits and preferred number sizes. I do not object to the existence of languages that are unwilling to sacrifice the efficiency necessary to have a Python-like experience where numbers transparently upgrade to BigInts if necessary, and I certainly don't object to type systems that don't take kindly to such things. I do not object to the fact that such numbers will be intrinsically limited to some range and we need to do "something" on overflow.

What I object to is how easy that was. The language does not even blink before yielding nonsense.

This is, of course, no big deal in practice... yes, trap again, it totally has been. Let me underline that... has been, not "could be". Integer overflow has been the foundation of countless security vulnerabilities: UDP packets that trigger arbitrary code execution (I'd guess in the kernel context, based on that description). Arbitrary binary code execution in a browser via Javascript. Arbitrary code execution in Quicktime from malicious 3GP files. And these are just from the top 5 hits for integer overflow arbitrary code. I could go on for a very. Long. Time.

Whether you want an exception or an error or some sort of Either/Option type or whatever depends on your language, but something ought to twitch here. Maybe the CPU should do it better, but where I think the comments on that article sort of wander into the weeds is that regardless, the high-level languages ought to do something about this.

How are you supposed to build secure software in a world where 100 + 100 = -56 is a perfectly sensible statement? Heck, how are you supposed to build bug-free software? We shouldn't have to spend time shoring up the foundation. Especially since in general we do not in fact take that time and just operate with a rotten foundation.

Programmers can easily spend years programming in that world. Our minds can be bent and contorted until that seems perfectly reasonable. Any ten-year-old could tell you that's stupid, though, and the ten-year-old is in this case perfectly correct... and the security vulnerabilities prove this.

(Spare me the claim that they're all operating in an algebraic ring on purpose, by the way. Humans are terrible at this sort of thing; we remember the exceptions and forget the normal case, and can mis-compare the two by orders of magnitude. Go run grep -ro '\+' * | wc -l over your code base and tell me that's all in a ring, with whatever the right set of files for * is. Seriously, tell me that with a straight face. Even if you're writing heavy-duty crypto, the actual algorithms are often quite small compared to the support code.)

The people writing the UDP network code in the Windows kernel are not stupid. The people writing Javascript engines for Mozilla are not stupid. The people writing codec code for Quicktime are not stupid.

Yet they all made a serious error, with very significant security implications. Why do we so freely use a + operator that in so many languages can bite so deep? Well, it's one token, and it all but begs you to ignore thinking about overflow. Now compare that to the token count of doing it correctly: in a language that lacks any ability to catch an overflow, how would you even do safe addition? Well, it'll vary from language to language. Some might have more or less clever ways of doing it, but here's Go, again because I like the fact I can just link to a runnable example you can play with (I mean, yes, I like it OK, but that's a real killer advantage for a blog post):

var ErrOverflow = errors.New("Addition overflows")

func SafeAddInt8(a int8, b int8) (int8, error) {
    if a > 0 && b > 0 {
        if a+b <= 0 {
            return 0, ErrOverflow
        }
        return a + b, nil
    } else if a < 0 && b < 0 {
        if a+b >= 0 {
            return 0, ErrOverflow
        }
        return a + b, nil
    }
    return a + b, nil
}

Gosh, I can hear your mouse cursors frantically highlighting that so you can immediately copy that into your code base and start search & replacing all instances of + right away.

Say, what does it look like to use that anyhow?

sum, err := SafeAddInt8(a, b)
if err == ErrOverflow {
    fmt.Println(a, "+", b, "= overflow")
} else {
    fmt.Println(a, "+", b, "=", sum)
}

Every time you use SafeAddInt8, you need to worry about the error case. Every. Time.

But, why is that? It's actually not SafeAddInt8's fault that that is true. The truth comes from +. So it's more accurate to say that every time you use +, you need to worry about the error case. Every. Time.

Before you rev up your email client or your comment box to brag about how easy Haskell makes it or something, first, quick question: Do you use it? No credit for how easy your language makes it if you're not doing it! If you use Int instead of Integer and are using machine ints, do you actually take this much care? Or does the seductive ease of Int -> Int -> Int win over Int -> Int -> Maybe Int in practice? It doesn't matter how easy it may be in theory if you don't use it... and it still isn't as easy as +!

Of course, a language could throw an exception... but then, most don't, so again, no credit. C# lets you turn it on with a single keyword and a block... of course, you don't use it, do you? And it defaults off, of course. (Yes, reasons; CPU support, performance, etc. I'd suggest we ought to have a much higher priority on fixing those reasons, and making performance an optional temporary downgrade of safety rather than running without safety all the time.)

In the vast majority of runtimes, integer + is a tool with unexpected sharp spiky bits that can become catastrophic in a security context, as is the way with things that have sharp spiky bits. It has in the real world been responsible for numerous security vulnerabilities and unknowable orders of magnitude more just-plain-bugs. Not everything is as vulnerable as C, but that doesn't mean it's just peachy in those other languages; even being able to generate an ArrayIdxOutOfBounds exception may be a low-grade DOS, and it's not hard to imagine real-world programs where one could still bypass some sort of program-level security restrictions (accessing database ids that shouldn't be accessible, etc). The error doesn't have to produce a memory-access error to be a security error.

So let me pound this point once again: It is not the existence of the standard + that I object to, or the existence of its behaviors in the abstract. Sometimes the current behaviors are even useful, though more rarely than you might think. My objection is that it is a bad default. We should not have to ask for safe addition; we should have to ask for the current version.

But...

integer + is the friendly plus. It has a far worse brother.

String Concaten<script>alert(document.cookie)</script>ation

In many languages, the string + will, of course, take two plain-text string values and return a new plain-text string value consisting of the two in sequence. There are a variety of variations on the theme, some differences depending on whether your strings are mutable or not, and some conveniences for doing lots of concatenations at once, but that's what it boils down to.

The only problem with that is that we programmers are almost never dealing with "plain text".

"What?", you may ask, in rhetorically-convenient-for-me horror. "I deal with plain text all the time!"

No, you don't. In this case, I'm taking a hardline view of "plain text" as text that has no additional structure, beyond words, spaces, and newlines. Not "text-like", not "containing mostly text", but literally containing text and nothing else, because as we'll see, this definition is highly relevant from a security perspective.

And to a first approximation, you never deal with plain text.

Markdown? Markdown is not plain text. Markdown is formatted text that happens to serialize in a conveniently-editable format, but it is not plain text.

HTML? Goodness gracious no. HTML isn't even remotely plain text. HTML is Turing complete once we accept the <script> tag.

Emails? The payload may nearly be true plaintext (ignoring encoding for now), though one still needs to be careful of accidentally emitting the "end email" marker, but emails themselves are full of structure, especially once you accept MIME.

"Plain text" configuration files? Can't say I've ever seen a configuration file that doesn't place any restrictions on what values go where. Configuration files almost by definition must have some structure imposed on them.

Programming language code? Actually highly structured, no matter how "textual" it may appear.

Comma separated values? While this specifies a family of formats rather than a single format, all of them involve some sort of rules about what values can go where and how to escape them.

And so on. It's fairly rare for a programmer to be dealing with true plain text, as I've defined it. What we call "plain text" formats usually have at least one "structure layer" that consists of some form of control codes, which may happen to be textual themselves, and some "textual" layer that contains the "text payload", but into which one must be careful not to emit the structural indicators accidentally. For example, in HTML, if you wish to embed a less-than character into the text itself, you must use &lt;. If you simply put a < directly into the text layer accidentally, you are likely to get undesirable results, and even if you luck out, it is by depending on the browser to correctly guess, which is already putting you in dangerous security territory even before we know exactly how that can be abused.

HTML itself provides a very complicated example, where there are many different layers floating around: There's the "plain text", the markup tags, the attributes, the attribute values, the scripting language and its arbitrarily-deep nesting of various different control structures, the interaction between the scripting and the HTML, comments, and so on. All of this adds up to a format where the equivalent of the following code:

print("<p>" + someValue + "</p>)

is almost certainly a bad idea, regardless of what format is being output. (With suitable modifications for the specific format at hand, of course.)

It's tempting to want to jump straight to explaining "script injection" but if you're still reading you probably already know what that is anyhow, and I think it's educational to slow down and consider this from a simpler, non-hostile perspective before we lose sight of this tree in the forest. I've already explained this elsewhere in detail, but the short version is that the code clearly intends to wrap someValue in a paragraph, but without checking whether or not someValue contains some of the "structural" characters of the target format leads to the ability to "cross levels" in the format and accidentally introduce structural tokens where you expected to have "plain text". (To the extent you feel I'm handwaving here, follow that link.)
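
To make that concrete, here is a small runnable Go illustration of the same level-crossing problem; it uses the standard library's html.EscapeString as the escaping step, and the comment value is invented:

package main

import (
    "fmt"
    "html"
)

func main() {
    comment := "I <3 this & that" // "plain text" that isn't plain in HTML

    // Naive concatenation: the < and & leak into the structural layer of the HTML.
    fmt.Println("<p>" + comment + "</p>")

    // Escaped: the payload stays payload no matter what characters it contains.
    fmt.Println("<p>" + html.EscapeString(comment) + "</p>")
}

Note that even the fixed version relies on the programmer remembering to call the escaping function by hand, which is exactly the kind of extra work that tends not to happen.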

Here is why I took such a hard line on my "plain text" definition above; it is rare for us to be dealing with a format that truly lacks any sort of "structural character". Be it less-than, various forms of whitespace, double quotes, more complicated markers like <script>, or even just Unicode issues like homograph attacks or fun with right-to-left markers, there's almost always some way of altering the structure in a manner the programmer may not have intended, and often some way to parlay that into some sort of security attack.

In a Unicode world, even plain text is not plain text!

Once we do resume considering this as a security matter and not just a correctness matter, it's not hard to imagine scenarios where even just corrupting a CSV file by placing commas somewhere could produce unexpected security scenarios where input ends up more trusted than it should be or something.
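
Here is a hedged sketch of that CSV scenario in Go, using the standard library's encoding/csv; the field values and columns are invented, and error handling is elided for brevity:

package main

import (
    "encoding/csv"
    "fmt"
    "os"
)

func main() {
    name := `Smith, "Bobby"` // attacker-influenced text containing CSV structure

    // Naive concatenation: the embedded comma silently creates an extra column.
    fmt.Println("id,name,role")
    fmt.Println("42," + name + ",user")

    // A real CSV encoder quotes the payload so it stays a single field.
    w := csv.NewWriter(os.Stdout)
    w.Write([]string{"id", "name", "role"})
    w.Write([]string{"42", name, "user"})
    w.Flush()
}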

String concatenation encourages this error by making it so the easiest thing to do with two strings is to simply concatenate them, regardless of whether or not that is a safe thing to do. As I've said before, once buffer overflows retire as the biggest extant security hazard, string concatenation failures will take over as the new number one, in all of its guises as cross-site scripting, SQL injection, header injection, and just about every other security vulnerability with the word "injection" in it. This problem is pervasive, and not going anywhere.

What's the solution, then?

A Sketch of a Solution

Whenever concatenating two strings, you should always be required in some manner to explain what the encoding of the strings actually is. That is, if you want to assemble some HTML, rather than saying

print("&ltaa href='" + url + "'>" + linktext + "</a>")

you should only be able to do something like

print(interp("<a href='%attrval;'>%cdata;</a>", url, linktext))

When I say the "only" way, I can't possibly mean that, can I?

Well... yes, actually I do. The best way to make the right thing easier than the wrong thing is to truly make the right thing easier. But sometimes, when going up against the competition of a one-token built-in that does the wrong thing, the only way to make the right thing easier is to eliminate the wrong thing. There's just no competing with +, no matter how I gussy up the syntax... or, I suppose, gussy down the syntax... for string interpolation.

If one were writing a language from scratch with these ideas, we could nearly get it down to a single interpolation token. Since once we tab out of this window and resume writing code we're probably going to use real languages rather than ones that exist only in my fevered imagination, you're still most likely looking at a function/method call to correctly put strings together.
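
For a sense of what that function/method call can look like in a language shipping today, Go's html/template does contextual autoescaping: the engine works out from the template itself which escaping each value needs. This is just an illustrative sketch; the template and values are invented:

package main

import (
    "html/template"
    "os"
)

func main() {
    // The engine knows {{.URL}} lands inside an href attribute and {{.Text}}
    // lands in HTML text, and escapes each value according to where it lands.
    t := template.Must(template.New("link").Parse(
        `<a href="{{.URL}}">{{.Text}}</a>` + "\n"))
    data := map[string]string{
        "URL":  `https://example.com/?q="><script>alert(1)</script>`,
        "Text": "click <here>",
    }
    if err := t.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
}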

What about the need for raw concatenation? Well, it's undeniable that sometimes you really need to smash two or more things together into one thing. So in the interests of not merely being critical, let me propose a concrete, positive way of approaching this problem in a safer manner. Suppose instead we had to write this to concatenate two strings directly:

interp("%RAW;%RAW;", first, second)

The idea of the all-caps RAW here is that while you can't prevent someone from incorrectly doing a:

print(interp("<a href='%RAW;'>%RAW;</a>"))

you can at least make it jump out during even a cursory code review that something bad is happening.

In fact, take a moment to stare at that until it does jump out at you, if it did not right away, because this highlights an important point. In the final analysis, there isn't a single bit of functionality that we have in our hands today that we can actually throw away. There are times and places where that is actually the correct code, because for other reasons we may be sure that the strings being interpolated in are already escaped correctly. We can not write code in which every string concatenation must always be escaped in some manner. But what we can do is expose the danger of smashing strings together in a way that is extremely easy to review, instead of buried in a sea of usages of +. We give "grep" targets to reviewers. We allow people to write tools to verify that perhaps the %RAW; usage is properly commented or something.
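
To keep this section from being pure handwaving before we get to the real code later, here is a deliberately tiny Go sketch of the notation. This is not the strinterp library discussed below; the encoder set, names, and panic-on-error behavior are all invented purely for illustration:

package main

import (
    "fmt"
    "html"
    "regexp"
)

// encoders maps the name in a %name; verb to an escaping function. RAW does no
// escaping at all, which is exactly why it has to be spelled out in the format.
var encoders = map[string]func(string) string{
    "cdata":   html.EscapeString,
    "attrval": html.EscapeString, // simplified; a real attribute encoder differs
    "RAW":     func(s string) string { return s },
}

var verb = regexp.MustCompile(`%([A-Za-z]+);`)

// interp substitutes each %name; verb with the next argument, run through the
// named encoder. Unknown encoders and missing arguments fail loudly.
func interp(format string, args ...string) string {
    i := 0
    return verb.ReplaceAllStringFunc(format, func(m string) string {
        name := verb.FindStringSubmatch(m)[1]
        enc, ok := encoders[name]
        if !ok || i >= len(args) {
            panic(fmt.Sprintf("interp: unknown verb %q or missing argument", m))
        }
        out := enc(args[i])
        i++
        return out
    })
}

func main() {
    url := "https://example.com/?q=<evil>"
    linktext := "a < b"
    fmt.Println(interp("<a href='%attrval;'>%cdata;</a>", url, linktext))
}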

For more points, we can create a syntax that allows for easy piping of escaping methods together due to the number of places where such things can be useful, for instance:

print(intrep("<a href='javascript:x.innerHTML=
        "%cdata|cdata;"'>))

In this case the pipe indicates that the inner content being interpolated needs to be double-escaped using HTML CDATA encoding.

Even setting aside security matters, this can be a useful thing to have called out. One of the evergreen "new programmer" projects is to create yet another configuration file templating system. This is simpler and safer than trying to write an HTML templating system, yet as the template use cases grow, it becomes inevitable that a user is going to template something in a way that makes it invalid. If the template author is writing

print(fieldName + ": " + fieldValue + "\n")

there's no opportunity to stop and wonder if there's a problem. Heck, a lot of you reading this now probably wouldn't blink if you encountered this in a code base; it's taken me years to develop a near-instinctive revulsion to that line of code and it's still too easy for me to miss in a pile of dozens of other concatenations. But if you had to write:

print(interp("%RAW;: %RAW\n", fieldName, fieldValue))

there's a place in the documentation to talk about RAW and why it's dangerous. Sure, a novice programmer will blow right past it, but the question is, would this accelerate the rate at which the novice learns to write safer code? I think we can see that it can when we compare Perl's database library usage, where from pretty much day one the entire community sang as one about the importance of using ? to indicate parameters in your SQL query, to the PHP world, where it was very easy to find tutorials just concatenating queries together. We can't necessarily prevent all errors, but we can contribute to a culture that will more rapidly fix them.
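
The same contrast shows up in Go's database/sql, where the placeholder form costs barely more than the concatenation it replaces. (The exact placeholder syntax depends on the driver, and the lookupUser function and users table here are invented for illustration.)

package queries

import "database/sql"

// lookupUser is a fragment: it assumes a *sql.DB opened elsewhere and an
// invented users table, and shows the two styles side by side.
func lookupUser(db *sql.DB, name string) (*sql.Rows, error) {
    // Dangerous: name becomes part of the SQL's structural layer.
    //   return db.Query("SELECT id FROM users WHERE name = '" + name + "'")

    // Safer: the placeholder keeps name in the payload layer; the driver and
    // the database handle it, for roughly one token more than the broken form.
    return db.Query("SELECT id FROM users WHERE name = ?", name)
}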

I have focused this section on providing an answer that will be more appealing to many programmers than some of the harsher answers. The real key is requiring programmers to somehow specify how to encode strings when concatenating them together, and to not provide a silent default of simple concatenation. This is not the only possible way. Strong typing enables other options, for instance see the BlazeHtml tutorial, which makes it very difficult to screw up HTML. My suggested approach requires less work per format you extend it into, but also provides fewer guarantees. There are many possible viable ways of forcing programmers to be more specific with many combinations of tradeoffs.

Any manifestation of this idea is better than raw string concatenation, though.

Now, one pet peeve of mine is blog posts exactly like, well, this section here, that handwave some glorious API that will solve all your problems, but don't provide any actual code. In the interests of avoiding my own pet peeve, I have provided a manifestation of this idea in Go, which I called strinterp. It, uh, feature crept away from me a bit, and what started out as "String Interpolator" turned into a "Stream Interpolator". Even though this is really intended only as a proof of concept, I couldn't stand to publish this to github without the jerf-standard 100% test coverage, golint, and adequate documentation, so have a look if you're interested. Pull requests welcome! It also shows how this all manifests in code, and may let you get your hands dirty with playing with the idea.

The Long Zoom Out

One of my favorite recurring motifs in cinema is the zoom out from some scene on the surface, up through the clouds, and into space. Metaphorically, I'm going to do that now. I've covered two very low-level, very local, very small examples of how a single token in our programs begs us to indulge in insecure coding practices. I could go on for a very long time with example after example of this sort, but I think now is the time to do the big zoom out here. What I want you to take away from this blog post is not that I particularly hate +, but that it happened to be a convenient exemplar of a pattern so pervasive, one must strain to find things that are not examples of this problem!

How many of us work in environments where the default presumption is that errors should be simply discarded, unless you add tokens to catch them? Where executing something in the command shell is, by default, unchecked? I was recently writing some code in Perl that was going to manipulate some security-sensitive code (yes, I phrased that correctly), and the first thing I had to do was go implement all the most basic Perl bits of functionality to scream and die when something went wrong instead of blithely advancing on, maybe with some obscure $! variable set or something. I don't ever want to open a file, have the open fail, and life just sort of go on. I do not want it to be a problem if even once I forget to append the magic or die invocation to the open command, to say nothing of the fact that I'd really rather default to something more sophisticated than that anyhow and I really don't want to be typing that from scratch each time, either. And it's not just one thing, like an old "open" or something; it's systemic that the runtime tries to just keep going when it should long since have stopped.

How many of us operate in environments that provide essentially no string escaping support whatsoever, encouraging you to do business with just strings and some standard functionality that entirely ignores the fact that almost every time you are dealing with strings you also have some structure to deal with? In fact, have you ever seen a language ship with a default string library that incorporated something like what I sketched above in such a way as to suggest that dealing with not-plain text is the default world we really live in, rather than focusing on the plain text we're not actually ever quite manipulating?

Security code must always pay attention to the authorization context it is operating under, or, more colloquially, must always know on whose behalf it is running. Wherever persistent data is touched, there should always be a corresponding authorization check, preferably as close to that touching as possible. How many of us use languages in which functions require an authorization context to run at all?

Well, for that one, actually, all of us, sort of... all modern operating systems have some concept of permissions. Of course, your application's permissions scheme is probably far more granular than that will permit, since OSes are generally concerned simply with reading and writing of bytes, and are certainly not capable of, say, validating the correctness of a credit card number and the corresponding authorization to use it. So, side bar, how many of your applications even correctly use OS services for authorizations?

But beyond that, how many of us have every last access to data properly shielded at the lowest possible level, guaranteeing that permissions will be checked? Very few of us. Those who do have often had to create the abstractions themselves, since data access libraries tend not to ship with any sort of permission hook system.
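
For a rough idea of what such a home-grown abstraction can look like in miniature, here is a hedged Go sketch; every name in it (AuthContext, DeleteInvoice, and so on) is invented for illustration:

package main

import (
    "errors"
    "fmt"
)

// AuthContext carries "on whose behalf is this running"; the point of the
// pattern is that data-touching functions cannot even be called without one.
type AuthContext struct {
    UserID  string
    IsAdmin bool
}

var ErrForbidden = errors.New("forbidden")

// DeleteInvoice puts the authorization check right next to the data it guards.
func DeleteInvoice(auth AuthContext, invoiceOwner string, invoiceID int) error {
    if !auth.IsAdmin && auth.UserID != invoiceOwner {
        return ErrForbidden
    }
    fmt.Println("deleted invoice", invoiceID) // stand-in for the real data layer
    return nil
}

func main() {
    alice := AuthContext{UserID: "alice"}
    fmt.Println(DeleteInvoice(alice, "bob", 42))  // forbidden
    fmt.Println(DeleteInvoice(alice, "alice", 7)) // <nil>: allowed
}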

There's a bare handful of libraries out there that do provide this out of the box... kudos! However, please only email me flame mail disagreeing with my premise here if A: you use one of these libraries; B: you use the security functionality correctly and pervasively; C: the library did not default out-of-the-box to full access to everything for everybody; and D: you are confident that all other programmers are also using libraries like this properly. I suspect nobody can get past C, although who knows; certainly nobody well-connected to reality can get past D.

What about...

"What about capabilities-based security, strongly-typed string concatenation, database permissions, SELinux, dependent types mumble mumble etc. etc. etc.?"

First, to the extent that these solve your problems, great! Use them. Those are concrete solutions, and I'm proposing a heuristic for analyzing and comparing them. My claim is that concrete code is often quite bad on this heuristic and that we as a community need to promote this heuristic better. By no means am I claiming to have just discovered the idea of security or something; there's tons of options out there... if, of course, one takes the time to use them.

But I would also observe that many of these sorts of things actually have a slightly different focus than what I'm suggesting here. Many of them focus on making bad things impossible. Now, at the limit where you successfully make 100% of the bad things impossible, by definition in that system the right thing is easier than the impossible wrong thing. However, that says nothing about the difficulty of the "right thing" in absolute terms, and many of these systems have produced situations where the "right thing" is so impossibly complicated that it verges on unusable (SELinux is notorious for that). A great deal of the remainder require a "big-bang" change to a code base and how a programmer programs, which explains why they have not penetrated down to the "general programmer" very well.

Focusing on making the right thing easier than the wrong thing may be the same at the theoretical limit, but in the practical world we occupy, I think it produces more concrete actions that people can take today, and by its nature it tends to focus on, well, making right things easier in general. If following this approach incrementally occasionally sticks us in some local optima and we'll have to make bold moves to explore different solutions like dependent types, well, wouldn't you at least rather in the meantime that we get stuck for a while in said local optima rather than where we are now?

What To Do About It

So let me close with exactly that: Is there anything you can change about your approach to programming that will let you start improving your life?

First, and most important, acknowledge to yourself that this is an issue. Acknowledge that there are worse and better coding environments. Acknowledge that, statistically, you're probably working in a "worse" one. Acknowledge that blame for errors can be shared between the developers and the coding environment.

There's still a lot of people who persist in blaming the users of tools for hurting themselves, no matter how many razor blades are glued to the tools' handles. The idea that environments are blameless in their affordances and all errors are 100% the programmer's fault needs to be stomped out, because that idea prevents fixing the environments.

In fact, let me say that again, because this is important: The idea that all errors are 100% the programmer's fault needs to be killed stone dead, so we can get to work improving our tools. It's sophomoric, a pose calculated to look sophisticated but in reality worse than the "simpler" view that the tools are indeed different in quality and that the differences in quality really matter.

And let me emphasize one more time that this is by far the most important step, to realize there really is a problem here, and there really are things you can do about it.

Second, at the micro scale, find where your tools have sharp pointy bits and start fixing them.

Yeah, yeah, it sounds obvious when I say it that way, but you need to do it for it to happen. Are you doing it? Because I'm looking out there in the world, and on average, even a reader of this essay is not doing this, let alone the general community.

Do you use some form of a "system" command that will happily ignore errors? Create a local wrapper that forces them to be handled somehow, ban direct use of the built-in, and convert all usages of the command to the safer variant. Do you use an AJAX wrapper that does not require some form of error handler? Wrap it with one that does, and ban direct use of the AJAX invoker that happily throws errors on the floor by default. And so on.
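
As one concrete, hedged Go example of that first suggestion (the wrapper name runChecked is invented; adapt it to whatever your codebase's "system" call actually is):

package main

import (
    "fmt"
    "os/exec"
)

// runChecked is the local wrapper: ban direct use of exec in review, and make
// this version the only sanctioned way to shell out, so a failure at least
// comes back as an explicit error the caller has to deal with.
func runChecked(name string, args ...string) (string, error) {
    out, err := exec.Command(name, args...).CombinedOutput()
    if err != nil {
        return "", fmt.Errorf("running %s %v failed: %v (output: %q)", name, args, err, out)
    }
    return string(out), nil
}

func main() {
    out, err := runChecked("ls", "/no/such/directory")
    if err != nil {
        fmt.Println("caught:", err)
        return
    }
    fmt.Println(out)
}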

Consistently apply this on any non-trivial code base, and I guarantee you will find and fix bugs. Use it on a sufficiently large code base and you will probably find and fix bugs that programmers had been unable to locate previously, as the error occurs buried under numerous layers of errors being discarded in various ways obvious and subtle.

This can be done one function at a time, and with just a bit of cleverness, can be done incrementally over time. No big bang conversion required.

Third, for those of you who are up to the point where you may be expected to write APIs for other programmers, use this to examine the APIs and higher-order constructs you create. Do your APIs encourage safe usage? Do they make correct things easier than wrong things? Or do they have baroque and easy-to-screw-up initializations? Or default to letting errors pass without comment? Or are so hard to use that they encourage users to do simpler-but-flawed things? It isn't enough merely to make it possible to use the API to do the thing it was designed to do; that's merely the minimum bar, not success. The right thing needs to be the easiest thing. Even if, as a last resort, one must deliberately break an easy-but-wrong option that simply can not be otherwise fixed.

I'd also observe that if you do these things, much of the need for the solutions in the previous section will diminish. Not "be eliminated", but diminish. And it will be easier for them to be designed to be "easier to use correctly than incorrectly" if they aren't expected to solve everything from top to bottom.

As an optional bonus fourth, google some of those terms I mentioned in the previous section and start learning about them.

We can't fix the world we live in. But one step at a time, if you learn to see the sea of insecurity we swim in, you can take concrete strokes towards dry land. And if developers collectively start analyzing libraries and systems this way, one library and one language at a time, we might just be able to start making our world an incrementally better place to program.