
From LLM to action: how to build an AI agent with Go and GigaChat

Driven by curiosity, a developer built an AI agent in Go using LangChainGo, tools, prompt chains, and integration with the Russian GigaChat model.

Source: Habr AI. Collage: Hamidun News.

A developer was inspired by a talk about AI agents and decided to try building one in Go. It wasn't easy, but it worked — now he shares his experience and explains how to make a Go application not just generate text, but think and take action.

Architecture: How an Agent Chooses Actions

An AI agent differs from a standard LLM in that it's not a text generator but a decision-making system. When a user gives it a task, the agent analyzes it, selects an appropriate tool (search, computation, an API call), executes it, gets the result, and decides what to do next. The chain is: receive task → LLM chooses a step → execute tool → update context → repeat until the task is solved. The challenge lies in describing the tools correctly in the prompt so the LLM understands when to use which one. An inaccurate description means the agent picks the wrong tool and goes down the wrong path.
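To make the loop concrete, here is a minimal sketch of it in Go. The callLLM helper, the Tool interface, and the USE/FINAL reply format are hypothetical placeholders for illustration, not part of any library:

```go
// Package agent sketches the decision loop described above. callLLM, the Tool
// interface and the "USE"/"FINAL" reply format are placeholders, not a real API.
package agent

import (
	"context"
	"fmt"
	"strings"
)

// Tool is anything the agent may execute on the LLM's behalf.
type Tool interface {
	Name() string
	Call(ctx context.Context, input string) (string, error)
}

// callLLM stands in for a real model call (OpenAI, GigaChat, ...).
func callLLM(ctx context.Context, prompt string) (string, error) {
	return "FINAL: stub answer", nil
}

// runAgent loops: ask the LLM for the next step, run the chosen tool, feed the
// result back into the context, and stop once the LLM answers "FINAL:".
func runAgent(ctx context.Context, task string, tools map[string]Tool) (string, error) {
	history := "Task: " + task
	for step := 0; step < 10; step++ { // hard cap so a confused agent cannot loop forever
		decision, err := callLLM(ctx, history) // e.g. "USE search: go agents" or "FINAL: ..."
		if err != nil {
			return "", err
		}
		if answer, done := strings.CutPrefix(decision, "FINAL:"); done {
			return strings.TrimSpace(answer), nil
		}
		name, input, _ := strings.Cut(strings.TrimPrefix(decision, "USE "), ":")
		tool, ok := tools[strings.TrimSpace(name)]
		if !ok {
			history += "\nUnknown tool: " + name // feed the mistake back so the model can correct itself
			continue
		}
		result, err := tool.Call(ctx, strings.TrimSpace(input))
		if err != nil {
			result = "tool error: " + err.Error()
		}
		history += fmt.Sprintf("\n%s(%q) -> %s", name, input, result) // update context, repeat
	}
	return "", fmt.Errorf("agent did not finish within 10 steps")
}
```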

LangChainGo: A Framework for Go Developers

The developer chose LangChainGo, a Go implementation of LangChain, a framework for building LLM applications. It takes the best of the Python version but runs natively in Go, which matters for production systems. LangChainGo provides:

  • Request chains (Chains) — templates for sequences of steps with LLMs
  • Tools — functions that an agent can call (search, computation)
  • Memory — conversation context between user requests
  • MCP (Model Context Protocol) — a standard for connecting external tools
  • Support for various LLMs — from OpenAI to local models

The main advantage: you can write production code in Go without switching to Python, which means faster deployment, fewer dependencies, and better performance.
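As an illustration of how those pieces fit together, here is a short sketch using langchaingo's agents, chains, and tools packages. Package paths and signatures follow recent tmc/langchaingo releases and may differ in other versions; the OpenAI backend here is only a stand-in for whichever model you actually wire up:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/agents"
	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/tools"
)

func main() {
	ctx := context.Background()

	// Any llms.Model works here; the OpenAI client is simply the best-documented backend.
	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	// Tools the agent is allowed to call; Calculator ships with langchaingo.
	agentTools := []tools.Tool{tools.Calculator{}}

	// A ReAct-style agent that picks a tool based on its Description() text.
	executor, err := agents.Initialize(llm, agentTools, agents.ZeroShotReactDescription)
	if err != nil {
		log.Fatal(err)
	}

	// The executor is itself a chain: run it with the user's task.
	answer, err := chains.Run(ctx, executor, "What is 13 * 17?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(answer)
}
```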

Integration with GigaChat and MCP

Instead of the OpenAI API, the author decided to use GigaChat, a Russian LLM from Sber. It's more convenient for working with Cyrillic text and local data, and it also meets compliance requirements. Connecting GigaChat through LangChainGo wasn't straightforward: the documentation only had examples for OpenAI, so he had to write a custom adapter that lets LangChainGo talk to GigaChat: the right request format, token handling, and error translation.
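The article doesn't include the adapter code, but its general shape follows from langchaingo's llms.Model interface: implement GenerateContent (and the older Call), map the messages onto the provider's request format, and map the response back. The sketch below is hypothetical: the Complete transport function hides GigaChat's actual endpoints, OAuth flow, and payload shape, none of which are taken from the original article.

```go
// Package gigachatadapter shows the shape of a custom langchaingo backend.
// The message flattening and the Complete transport are assumptions; GigaChat's
// real request format, OAuth handling and error mapping would live behind them.
package gigachatadapter

import (
	"context"
	"strings"

	"github.com/tmc/langchaingo/llms"
)

// LLM implements llms.Model on top of a caller-supplied transport function,
// which keeps the HTTP and token-refresh details out of this sketch.
type LLM struct {
	Complete func(ctx context.Context, prompt string) (string, error)
}

var _ llms.Model = (*LLM)(nil) // compile-time check that the interface is satisfied

func (g *LLM) GenerateContent(ctx context.Context, messages []llms.MessageContent,
	options ...llms.CallOption) (*llms.ContentResponse, error) {
	// Flatten the structured messages into one prompt; a real adapter would map
	// roles and parts onto GigaChat's own chat request instead.
	var b strings.Builder
	for _, m := range messages {
		for _, part := range m.Parts {
			if t, ok := part.(llms.TextContent); ok {
				b.WriteString(string(m.Role) + ": " + t.Text + "\n")
			}
		}
	}
	text, err := g.Complete(ctx, b.String())
	if err != nil {
		return nil, err // a real adapter also translates provider error codes here
	}
	return &llms.ContentResponse{
		Choices: []*llms.ContentChoice{{Content: text}},
	}, nil
}

// Call is the older single-prompt entry point the interface still requires.
func (g *LLM) Call(ctx context.Context, prompt string, options ...llms.CallOption) (string, error) {
	return g.Complete(ctx, prompt)
}
```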

At first glance, MCP seemed like a complex protocol, but it turned out to be a simple idea: a tool is a function that receives a JSON request and returns a JSON response.
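That idea fits in a few lines of Go. The request and response shapes below are invented for illustration, not taken from the MCP specification:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WeatherRequest and WeatherResponse are illustrative shapes: an MCP-style tool
// only needs to accept JSON and hand JSON back.
type WeatherRequest struct {
	City string `json:"city"`
}

type WeatherResponse struct {
	City    string  `json:"city"`
	TempC   float64 `json:"temp_c"`
	Summary string  `json:"summary"`
}

// getWeather is the whole "tool": JSON request in, JSON response out.
func getWeather(raw json.RawMessage) (json.RawMessage, error) {
	var req WeatherRequest
	if err := json.Unmarshal(raw, &req); err != nil {
		return nil, fmt.Errorf("bad request: %w", err)
	}
	// A real tool would call an API here; this one returns canned data.
	return json.Marshal(WeatherResponse{City: req.City, TempC: -3, Summary: "light snow"})
}

func main() {
	out, err := getWeather(json.RawMessage(`{"city":"Moscow"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"city":"Moscow","temp_c":-3,"summary":"light snow"}
}
```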

"At first, it seemed like the Go ecosystem for AI was thinner than Python.

In reality, it's just more compact — and that's a plus."

Challenges and Lessons

The first problem: the request chain breaks in non-obvious ways. If the agent chooses the wrong tool, the context gets muddled and the agent can't find its way back onto the right path. He had to add prompt validation and fallback logic: if the first tool doesn't work, try another.
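A fallback like that can be expressed as a small helper around langchaingo's tools.Tool interface (the interface is real; the helper itself is a sketch, not the author's code):

```go
package agent

import (
	"context"
	"fmt"

	"github.com/tmc/langchaingo/tools"
)

// tryWithFallback runs the preferred tool and, if it errors, each fallback in
// order, so a single wrong or failing tool doesn't derail the whole chain.
func tryWithFallback(ctx context.Context, input string, primary tools.Tool, fallbacks ...tools.Tool) (string, error) {
	out, err := primary.Call(ctx, input)
	if err == nil {
		return out, nil
	}
	for _, fb := range fallbacks {
		if out, ferr := fb.Call(ctx, input); ferr == nil {
			return out, nil
		}
	}
	return "", fmt.Errorf("all tools failed; primary error: %w", err)
}
```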

Second: debugging. When an LLM reasons its way to a wrong step, it's hard to understand why. He had to log every step of the chain, every tool choice, every result. Without that, you're chasing errors blind.
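One common way to get that visibility is a logging decorator around every tool. The sketch assumes langchaingo's tools.Tool interface; the wrapper itself is illustrative:

```go
package agent

import (
	"context"
	"log"
	"time"

	"github.com/tmc/langchaingo/tools"
)

// loggedTool wraps any tool and records what the agent asked for and what came back.
type loggedTool struct {
	tools.Tool
}

func (l loggedTool) Call(ctx context.Context, input string) (string, error) {
	start := time.Now()
	out, err := l.Tool.Call(ctx, input)
	log.Printf("tool=%s input=%q output=%q err=%v took=%s",
		l.Name(), input, out, err, time.Since(start))
	return out, err
}

// wrapAll decorates a whole tool set before it is handed to the agent.
func wrapAll(ts []tools.Tool) []tools.Tool {
	wrapped := make([]tools.Tool, len(ts))
	for i, t := range ts {
		wrapped[i] = loggedTool{t}
	}
	return wrapped
}
```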

Third: integrating custom tools. MCP looks good on paper, but plugging in an internal API or a complex tool still takes extra work. He had to write a wrapper that turns the raw result into something the LLM can understand.
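The wrapper pattern itself is simple: call the internal service, then flatten its structured response into short text the model can reason about. The types below are made up for illustration; the tool satisfies the same Name/Description/Call shape that langchaingo expects:

```go
package agent

import (
	"context"
	"fmt"
)

// Order and OrdersAPI stand in for some internal service behind the tool.
type Order struct {
	ID     string
	Status string
	Total  float64
}

type OrdersAPI interface {
	LookupOrder(ctx context.Context, id string) (Order, error)
}

// orderTool exposes the internal API to the agent: structured response in,
// plain text the LLM can actually read out.
type orderTool struct {
	api OrdersAPI
}

func (o orderTool) Name() string { return "order_lookup" }

func (o orderTool) Description() string {
	return "Looks up an order by its ID and reports its status and total."
}

func (o orderTool) Call(ctx context.Context, input string) (string, error) {
	order, err := o.api.LookupOrder(ctx, input)
	if err != nil {
		// Return the failure as text so the agent can explain it instead of crashing.
		return "order lookup failed: " + err.Error(), nil
	}
	return fmt.Sprintf("Order %s is %s, total %.2f RUB", order.ID, order.Status, order.Total), nil
}
```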

Why This Matters

For Go developers, this means AI agents are no longer a Python-only domain. LangChainGo makes it possible to embed an intelligent agent into a production Go service: fast systems that don't just generate text but actually solve problems.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.