Agentic Development and why Go is an excellent choice
- Go's simplicity limits the number of options, making life easier for agents by shrinking the space of possible solutions.
- Code written in 2012 is still valid, which keeps the code AI models were trained on relevant and reliable.
- For agent-assisted development, prioritize simplicity.
If you're building systems with AI agents (or planning to), the choice of technology stack makes all the difference. Anthropic proposed a definition that I found quite interesting:
What are agents?
The term "agent" can be defined in several ways. Some customers see it as a fully autonomous system that operates independently for long periods and uses different tools to perform complex tasks. Others use the word to describe more prescriptive implementations that follow predefined workflows. At Anthropic, we consider all these variations as agentic systems, but we make a crucial architectural distinction between workflows and agents:
- Workflows are systems where LLMs and tools are orchestrated through predefined paths.
- Agents are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they perform tasks.
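To make the distinction concrete, here is a minimal Go sketch; the LLM and Tool types and the RunWorkflow/RunAgent functions are hypothetical stand-ins for illustration, not part of any real SDK:
// Hypothetical types, for illustration only.
type LLM interface {
	// NextAction asks the model what to do next: the name of a tool
	// to call, or "done" when it considers the task finished.
	NextAction(history []string) string
}

type Tool func(input string) string

// Workflow: the orchestration path is predefined in code.
func RunWorkflow(summarize, translate Tool, input string) string {
	summary := summarize(input)
	return translate(summary)
}

// Agent: the model dynamically decides which tool to call, and when to stop.
func RunAgent(model LLM, tools map[string]Tool, input string) []string {
	history := []string{input}
	for {
		action := model.NextAction(history)
		if action == "done" {
			return history
		}
		tool, ok := tools[action]
		if !ok {
			// Unknown tool requested: stop rather than loop forever in this sketch.
			return history
		}
		history = append(history, tool(history[len(history)-1]))
	}
}
The workflow hard-codes the path; the agent's control flow is driven entirely by what the model returns.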
When developing software with LLM assistance, the question isn't which language/stack has more features, but which offers the clearest and most predictable path for AI agents to operate effectively.
Go is intentionally "boring" (and that's perfect for AI)
"Choose Boring Technology" isn't just a slogan, it's a strategic philosophy. When you're building agentic systems, you're already spending your "innovation tokens" on AI, LLMs, and complex orchestration. The last thing you want is to fight with the programming language.
I've previously explored in detail how Go favors minimalism through its philosophy of radical simplicity, eliminating unnecessary abstractions and complexity.
Let's look at an example:
// ProcessRequest validates the request, calls the provider, and wraps any error with context.
func ProcessRequest(ctx context.Context, req Request) (*Response, error) {
	if err := validateRequest(req); err != nil {
		return nil, fmt.Errorf("invalid request: %w", err)
	}

	result, err := callProvider(ctx, req.input)
	if err != nil {
		return nil, fmt.Errorf("provider call failed: %w", err)
	}

	return &Response{Data: result}, nil
}
Code written for older versions of the language still compiles and runs without problems, because Go has no mysterious metaprogramming, no experimental syntax, and no implicit magic.
"Exciting" languages come with operational costs that make LLMs' work much harder:
- Fragmented documentation
- Unstable ecosystem
- Complex debugging
- Steep learning curve
Rob Pike, one of Go's creators, described the language as being suitable for developers who aren't equipped to handle a complex language. The phrase may be controversial, but it's revealing. Now replace "developers" with "AI agents".
Go's simplicity is a virtue for AI code generation:
- One way to do things: for loops are the only repetition structure, and formatting is standardized with gofmt (see the sketch after this list).
- Concise and clear syntax: No inheritance, complex constructors, annotations, or exceptions.
- Explicit error handling: The if err != nil pattern forces agents to handle errors locally.
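A quick illustration of the first point: there is no while, do-while, or forEach to choose from, so every loop an agent generates uses the same keyword.
// Counter loop
sum := 0
for i := 0; i < 3; i++ {
	sum += i
}

// "while"-style loop: same keyword, only a condition
n := 0
for n < 3 {
	n++
}

// Iterating over a collection
var names []string
for _, item := range []string{"a", "b", "c"} {
	names = append(names, item)
}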
The option paralysis problem
For an LLM, a smaller grammar and less "syntactic sugar" mean a lower probability of generating bizarre (the well-known hallucinations), inefficient, or simply incorrect code:
# Python offers multiple ways to do the same thing
# An agent can choose any of them, not always the best

# Option 1: Traditional list
users = []
for user in get_users():
    if user.is_active:
        users.append(user.name)

# Option 2: List comprehension
users = [user.name for user in get_users() if user.is_active]

# Option 3: Filter + map
users = list(map(lambda u: u.name, filter(lambda u: u.is_active, get_users())))

# Option 4: With walrus operator
users = [name for user in get_users() if user.is_active and (name := user.name)]
Go eliminates these unnecessary decisions:
// In Go, there's essentially one idiomatic way to do this
var users []string
for _, user := range getUsers() {
	if user.IsActive {
		users = append(users, user.Name)
	}
}
Stability over time
One of the biggest challenges for LLMs is "knowledge deterioration". The model is trained with data from a specific point in time. In rapidly evolving ecosystems like JavaScript, a tutorial or Stack Overflow answer from two years ago might be completely obsolete.
Go has millions of public projects on GitHub that were used to train AI models. Crucially:
- Code written for Go 1.1 still works on Go 1.22 (see the snippet below)
- In JavaScript or Python, frameworks and patterns change drastically every few years
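As a small illustration, the complete program below uses nothing beyond what Go 1.0 shipped in 2012 (goroutines, channels, fmt) and still compiles and runs unchanged on a current toolchain:
package main

import "fmt"

func main() {
	// Plain Go 1.0-era concurrency: a goroutine sending on an unbuffered channel.
	ch := make(chan string)
	go func() {
		ch <- "written like it's 2012, still compiles today"
	}()
	fmt.Println(<-ch)
}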
This stability means that the vast body of Go code available on the internet remains relevant for much longer. Result:
- Agents generate code with a higher probability of being correct
- Learned patterns remain valid
- Fewer errors from using obsolete APIs
- Greater confidence in generated code
Conclusion
Go's culture of avoiding breaking changes directly benefits the quality of AI-generated code. While other languages compete on elegance or expressiveness, Go stands out for its simplicity and predictability, and that's why I would bet on it when building software with AI assistance today.