AI and Programming Language Communities
This is a lengthy post explaining my reasoning for certain decisions we’re making in the Go subreddit. It is posted primarily to share with other moderators, of all sorts of content, as we all struggle through the implications of AI. If anyone else gets any benefit from this, that’s a bonus. For anyone uninterested in that, you know where the back button is.
When I agreed to be a moderator on the Go subreddit a couple of years back, I didn’t expect to be on the front lines of dealing with AI’s impact on human online communities. My brain doesn’t do chronologies very well, so I don’t recall if it was entirely before ChatGPT was a thing, but if it was, it was back when its programming abilities were laughable. There was a time not so long ago when you could ask ChatGPT to code something and see some promise there, but at the same time it tended to have difficulty putting out so much as a single line of syntactically-correct code, let alone code that did what you wanted.
On Reddit-Style Communities
I’ve had a low-key hobby of studying the impact of technological choices in the implementation of online communities for a while now. That is, for instance, the effects of various moderation techniques, the presence of a “karma” system and the details of what gets surfaced and hidden by them, the details of threading systems (as opposed to, say, one long unstructured log of replies under a blog post), and so on.
Despite the technological differences and their impact on the Go subreddit, it’s pretty clear that Go, and most other programming-language-specific subreddits I’ve encountered, are analogous to the comp.lang.* Usenet hierarchy. I spent quite a lot of time in comp.lang.python back in the day and learned a lot from it, and one of the reasons I agreed to moderate the Go subreddit is to “pay it forward” for the people who helped me back then.
These sorts of communities have a major structural problem, which is that there are far more people who want or need help, and are willing to ask for it, than there are people willing to give it.
That is, if we say that people asking questions are asking for attention and those answering are providing it, there is a major attention imbalance in the community.
I hate to emulate marketers in treating “attention” as a fungible scalar value, but when thinking at the scale of an entire community, the concept fits.
Managing Attention
A successful community requires that this attention be kept in balance.
One of the ways of balancing this back in the pure-human era was a social contract that if you wanted help, you needed to put some work into the question. Whether formally or otherwise, the community reserved the right to limit their effort in answering to the amount of work you put in.
Over the years an ethos developed about the right way to ask technical questions. This raises the bar on asking questions and, in theory, brings attention requested and attention provided back into balance.
It’s a beautiful theory. In practice, these sorts of communities are still inundated with people who are seeking answers to questions, somehow find the community, and don’t learn about this social contract before firing away with their question, at a relatively low effort cost to themselves.
Hence, the contract needs a certain amount of enforcement.
Failing to balance the attention causes a first-order and a second-order problem.
The first-order problem is that this consumes the answering time of the already relatively few people willing to provide these free answers.
The second-order problem is that it also tends to piss these people off, reducing the total attention pool available.
I think our higher intellectual processes can grasp the idea that there are many distinct people posting online, but our lower brains tend to treat all external online interactions with people not in our Dunbar circle¹ as one amalgamated “The Internet Other”.
When “The Internet Other” brings dozens of stupid questions a day, ignoring all social signals to stop, our lower brain just wants to send electric shocks over the internet to the offending “Other”. The fact that this is an irrational conception doesn’t change the way it feels. The fact that the elephant rider knows he is riding an elephant does nothing on its own to tame the elephant.
Of course, since that is not actually an option, most providers just burn out and leave.
If the utility of the community is to be retained for anyone, this mismatch in attention consumption versus attention provision must be addressed.
Prior Art Solutions
As one of the solutions to this problem, Usenet developed the FAQ, the Frequently Asked Questions post. This was a post voluntarily maintained by some number of community members, which collected the most frequently asked questions together with high-quality answers and was posted periodically into the group. It was then considered OK to reply to someone asking one of these questions with just “Read the FAQ, please”, hopefully with some sort of usable link to it, and the rest of the community would indeed consider the question answered.
Over the past few months I’ve worked on the closest Reddit equivalent, an FAQ for the Go subreddit. It’s not exactly the same, because the structures of Reddit are different from Usenet’s, and the idea needs to be updated to account for that, but it comes from the same spirit.
It is unfortunate that having your post removed from Reddit feels a bit like a slap in the face, because in most objective senses this is ideal for everyone, including the person posting the question. Instead of responses trickling in over the course of hours, they get an answer right now, and the answers to a lot of other questions they are likely to have besides. I wish this were not the case, but I don’t know what to do about it beyond trying to explain that concept in the FAQ itself, even though I know it mostly won’t work. It’s the best I’ve got at the moment.
AI in June 2025
Right, so the title of this post promised a discussion of the impact of AI on the community, and I’ve hardly talked about it yet. The reason for that is I wanted to at least try to convey a baseline of how I’m thinking about the situation in the Go subreddit so the rest of the conversation makes sense.
I also want to avoid the easy categorization of simply being reactionary against AI. While I won’t deny my personal experiences with it have been somewhat less glowing than others’, and I’m more cautious than some, if not most, I’m also not interested in standing athwart progress yelling “stop!”
Moreover, I don’t see that as the role of a Reddit moderator. One might analogize a Reddit moderator to a gardener, but in the case of a programming subreddit, it’s not their job to plant the garden, or to manage it, just to tend it. The “leadership” of the garden is handled by the gestalt of the community as a whole, and the community is free to express their various opinions about and share their various experiences with AI in programming without my opinion or experiences overshadowing anybody else’s.
So, no, my problem with AI isn’t simply that people are using it and they need to get off my lawn while I yell at this cloud.
AI Breaks The Attention Balance
The problem I’m having with AI is that the already barely-functional balance between attention providers and consumers has been broken in a new and exciting way.
AI is reducing the effort required to consume attention in a programming subreddit without (at least as the situation stands now) providing any improvement to the provision of attention. Consequently, frustrations have been bubbling over among the attention providers. I want to reiterate that you can’t just tell them to “be more patient” or anything like that. The cats won’t be herded; they will just leave. The frustrations must be taken seriously, and if they all leave, everyone loses.
The long-term problem is that, arguably, AIs can’t provide any corresponding benefit to the people providing replies. The point of a programming subreddit is to provide access to humans. No subreddit could ever be a good way to provide access to AIs, after all. If you want that, you just go get it. You don’t need a human community to mediate that for you.
We also can’t just declare the concept of programming language subreddits, and by extension any similar sort of human communication site, obsolete. Even if in 10 years they’re just for trading tips about how to drive AIs, humans are still going to benefit from networking with each other. Human communication sites only become irrelevant if we get to the point where you can basically just wish for software, even massive bits of software that make modern-day browsers look small, and the AIs do it all, and they’re so good at it that using them isn’t even a “skill”. There’s no point trying to predict what the world looks like at that point.
What’s the payload of all this? Despite the cognitively-appealing framing of “we’re reacting to AI”, what I’m actually reacting to is the increased attention imbalance between the consumers and the providers. AI is the reason for that imbalance, but the imbalance is the problem.
We can’t do much about making it easier to provide attention, so, even though it is in some sense unfortunate, we have no choice but to raise the bar on the effort required to ask for attention. The nature of this will change over time, so if you’re reading this in 2027 it may be entirely out of date. The balancing between consuming and providing attention is an ongoing concern, not a static problem that we can solve once and for all.
The Actions
Therefore, the action we are taking is to raise the bar on making a post, in an attempt to restore the attention balance. To do that, we’re addressing the biggest signals of a low-effort, high-attention post that we see today. These signals can and will shift over time.
These requirements will be considered holistically. Not having one of them may not be a problem for a post, but the more a post misses, the more likely it is to be removed.
State Post’s Purpose Clearly
I’d bet this has been a simmering problem in a lot of programming subreddits: when a project is posted, it is often unclear what the intent is. Is it intended for review? Is it being offered as a production-quality library, ready to go? Is it a project in its early phases seeking testers and collaborators?
Posters will be encouraged to state clearly what the goals of their project are.
State Amount of AI Used Clearly
It is not an automatic disqualification to use AI in code; that would definitely be trying to yell “STOP” at progress. However, this feeds into the clear statement of intent, in that reviewing AI code is largely a waste of everyone’s time.
We also reserve the right to remove projects that are just “vibe coded” and have no human contribution to them. This is not a statement of AI’s badness, but a reaction to its success… the fact that you wrote yet another caching library is no longer an achievement of note, when anyone can do it now.
State Goals Versus Results Clearly
This is not specifically related to AI, but it is annoying to see a two-day-old project with three commits in it describe itself as a “reliable, fast, feature-rich, minimalist, robust and scalable database”. Those may be your goals, but they are not your results. Projects will be required to substantiate their descriptions.
While LLMs did not create this problem, I strongly suspect (but can’t quite prove) that LLMs are not making it better, as LLMs are used to write project README.mds and they pick up the same bad habits humans used to have.
Be Concise
LLMs are fantastic at taking a little bit of text and expanding it into a lot of text, without necessarily adding any value. We are encouraging people to either post what is more-or-less the prompt, suitably modified for posting, or, if they must use LLMs, to prompt them to be concise. Posts should be used for highlighting the things that make a project special, or the most important features in a new release; they should not simply have an LLM-generated README dumped into them.
Moving Forward
This is probably not going to be the last time we make rule changes. This is just what’s good for today.
But what I will be keeping my eye on is that balance of attention between consumers and providers, because it is that balance that gives the community health.
¹ There is some legitimate debate about the size of the Dunbar number, and I have no strong opinion on it. I merely observe that whatever it is, it is a great deal smaller than the number of people we interact with one way or another.