Why Go Getting Generics Will Not Change Idiomatic Go

posted Aug 03, 2020
in Programming, Golang, Haskell

After years of overly-acrimonious online discussion, Go is continuing down the chute towards getting generics. I'm already seeing the inevitable "functional" libraries (plural) coming out. I'm going to explain why none of these libraries are going to materially impact what the community considers good style.

They may see some use. I may even use some of them myself on small scales. Some iconoclasts will program with them extensively, and their code will work, in the sense that it will do what it was designed to do regardless of how the code reads. But they are not going to change what generally-good style is considered to be.


For context, I like Haskell. I don't get to do a lot of work in it, but I know it reasonably well; I could take a job in it if one came up. I'm not saying what I'm about to say out of ignorance of how functional programming works or how to build non-trivial programs out of those tools. If anything, it is out of an abundance of respect.

For this post, "functional programming" is the modern strain exemplified by Haskell: strong typing, pervasive immutability, and enough other differences that you end up with different optimal idioms than in "imperative" languages. This is as opposed to an earlier definition centered on functions being first-class citizens and closures being available. (That is no longer a very useful definition, because it has simply won. The only remaining languages that are not "functional" by that standard are that way only because they are legacy, and most of them have added those features if it was even remotely possible, leaving behind only pure C. Go is already "functional" by that standard, as is nearly everything else.)

So, that said: Functional constructs are highly overrated outside of functional programming languages.

They are not of zero value. I use them myself in various appropriate places in non-Go languages. But they are overrated.

Why do I consider them overrated out of the functional programming context?

In FP languages, the touchstones of what people consider FP are building blocks, the beginning of the utility of the functional paradigm, not the end.

Consider the map, filter, and reduce operations.

Rather than the sine qua non of functional programming, these are three examples of the general functional-programming pattern of extracting recursion schemes into functions (or typeclasses/interfaces, depending on which language you identify with more). Defining these recursion schemes and then building on top of those schemes is arguably the core idiom of functional programming.

Within that context, the utility of map, filter, and reduce is not in the functions they directly perform, but in their ability to compose with other recursion schemes. If you look at the source code for libraries like parsec (a parser combinator library) or pipes (a compositional component-oriented pipeline library), you'll find they hardly use map/filter/reduce directly at all, instead defining their own schemes.

In fact, if you take the strictest possible definition of map/filter/reduce, in that they operate only on lists/arrays, they aren't used that much at all in Haskell. Their generalizations are used all over the place. But many functional programming libraries for imperative languages will only offer the very specialized versions, because for any of several reasons, the programming language in question can't offer the generalized version.

To analyze their cost/benefits in another environment requires starting with a fresh sheet of paper, because when ripped out of their FP home, they aren't even remotely the same functions, but new things demanding new analysis. Just because you define a function called M-A-P in your favorite language does not mean you've got Haskell's map now.

Consider the costs of a functional programming style in Go, post-generics.

That is a whackload of costs. Does this style carry enough benefits to make up for it in Go post-generics? Some iconoclasts here and there will disagree, but I submit the community as a whole is quite inevitably going to come down with no.

After all, what are the benefits here? "I can write code to operate on arrays without overflows?" map is a solution to this problem; it is not the solution to this problem. The vast majority of loops on arrays in Go use range, and can't overflow. I am not in the market for a language construct to solve the handful of times I've written loops more directly, and over half of those loops are basically

for i := 0; i < numberOfWorkers; i++ {
     go runWorker()
}

anyhow. Much of the remaining long tail consists of cases where I'm doing something so odd it probably doesn't fit the map/reduce paradigm anyhow.

What else? "Now my Go code kinda looks more like the code that is idiomatic in another language"? That's a disadvantage, not an advantage.

It's not faster. It's not going to be fewer tokens (you'll never compete with the non-functional solutions once you account for all the function boilerplate you'll be adding). It's not easier to refactor. It's not going to give access to advanced features like STM. It isn't a superior solution for isolating pure code from IO code (there are better ways to do that in current Go). It's not like you're going from totally unsafe C code to some totally safe solution; at most you're going from an in-practice 99% safe solution to an in-practice 99.4% safe solution. That's a fairly steep price to pay for that level of improvement.

Programming functionally isn't just about having particular functions around. By a similar line of logic, it isn't about having monads or an Either/Option type or a Maybe type. It's about using those things as building blocks in solutions to problems. But non-functional languages range from simply lacking the abstractions necessary to put those together into large structures, to making it extremely syntactically inconvenient and trivial to accidentally violate the constraints this style of programming needs to make sense. (Go, for what it is worth, is in the first category. Even post-generics it lacks too many things.) Functional programming is not intrinsically good, better than the alternatives no matter where you implement it; it is a contingent good, dependent on very specific preconditions and a support structure that non-functional languages are missing. The same can be said of any paradigm.

Ripping functional programming out of its contingent environment and jamming it into others is not a sign of a good programmer. It is the sign of a programmer who does not understand functional programming. It is exactly the same as a programmer who comes to Haskell and immediately starts asking questions about how to write object oriented programming in it.

In Go specifically, generics are not going to cause some sort of functional programming renaissance in the community. There are still far too many disadvantages to the style in Go post-generics, and the advantages too tiny to make up for them.

Generics in Go will give us data structures, tools to manipulate channels, maps, and slices in various generic ways beyond what we have now, possibly type-safe iterators, and a handful of other useful things. I suspect there are going to be some changes to the web frameworks, some moves on the numeric computation front, and some other similar things. I've got a few little places where I can avoid a bit of casting. But overall, what is "idiomatic" isn't going to change much.

I wrote this in terms of map/filter/reduce as the easiest ways to see what I'm talking about, but for similar reasons, I'm not expecting much traction from the inevitable "Option" libraries, or other similar things. Replacing

res, err := Something()
if err != nil {
    // handle error
}

with

res := SomethingOption()
if err := res.Error(); err != nil {
    // handle error
}

isn't a win. Haskell has the structures so that

do
    x <- somethingOption arg1
    y <- otherOption x arg1
    z <- moreOption y
    return $ z + 10

is handling errors and short-circuiting, despite appearing not to. If post-generics Go could even express that properly via some method on Result (and since methods cannot introduce new type parameters, it really can't), it would be a nightmare of syntax by comparison to Haskell, as well as have performance issues. It's not going to be preferable to the current mechanisms for handling errors.

This post is about Go. Some of these arguments may partially apply to other languages, but in exactly the same way that analyses based on FP languages have to be redone when you switch to a different context, these points would have to be redone for any other language as well. Having something like Javascript's (x => x + 1) changes the syntactic convenience; a dozen other syntax differences can have their own effects. Having an optimizer that runs longer than Go's and is more capable of inlining functions or noticing it can do a fusion optimization can change the performance analysis. None of what I'm saying here is a criticism of functional programming in an appropriate environment (I quite like Haskell), none of it changes one tiny bit what idiomatic Rust code should look like, and so on.

This post is not a universal claim about the utility of functional programming constructs embedded in imperative programming languages... with the possible exception of my claim that they're overrated when the real lesson of FP is about generalized recursion scheme support rather than a particular set of blessed functions; but being overrated doesn't mean something isn't sometimes useful even so. When I'm in Python, I use more itertools. When I'm in Javascript, I use futures stuff. When I'm in C#, I use LINQ. But they're just occasionally useful tools; you're not "programming functionally" just because you have a couple of maps sprinkled into your imperative language. You're just refactoring your imperative code with some ideas inspired by FP.

This isn't even a claim that this sort of thing will never be useful in Go. I expect to use some maps every so often, for instance. There will be some small changes here and there.

What I am saying is simply that generic support isn't going to radically change what constitutes idiomatic Go code.

PS: I've posted things like this before. Inevitably, someone will write a Map that mitigates one of my objections or something. But the goal isn't to produce a particular solution that mitigates exactly one of my objections; the goal is to produce something that is so much better than the current alternatives, in practice, to get the entire community to agree en masse that you've got a better way to do it, across the entire cost/benefits analysis.

I submit that while there are probably half-a-dozen different ways of writing a Map or Result in Go that each mitigate one particular objection or another, no matter how you write it, it'll have major weaknesses.

When I say that, I'm thinking beyond mere tweaks to what is above; consider defunctionalizing map/filter/reduce. You would then have to write an interpreter to execute the defunctionalized data structure. This interpreter could be smart enough to avoid the issues I wrote about above with multiple array instantiation by fusing the functions internally.

But while this takes a bite out of the performance problems (you still can't fully solve them in Go, because closures blind the inliner compared to simply writing the loops, and the payloads of these functions often aren't large enough to push the interpreter overhead below the noise threshold), it's worse on every other dimension. Pardon me if I don't implement it to show you; I think I've done enough work for this post. Goodness help you if you try to make the interpreter extensible; working with that will look more like "implementing a language runtime" than "writing a couple of for loops".

If you think you've got better, show me a real function, like, let's say, path.Base. Show me how it's not just a bit better, but worth switching for. Fair warning: I will be looking at performance.

PPS: This discussion is not complete without a link to the inner-platform effect; all of this amounts to reimplementing functionality the language already provides. Part of why I keep saying the alternatives need to be a lot better is to make up for the disadvantages of any inner platform.

 
