How to Build MCP Servers in Go | Model Context Protocol Tutorial
Step-by-step guide to building custom MCP servers in Go. Learn tool registration, resource handling, and deployment for AI-powered developer tools.
thelacanians
Why Go for MCP Servers
We have written about the MCP revolution before: how the Model Context Protocol gives AI models structured access to your tools and services through a standard interface. Our early MCP servers were built in TypeScript, and that remains a great choice for many use cases.
But as we built more ambitious tools — vecgrep for semantic code search, tinyvault for secret management, noted for knowledge bases — we kept reaching for Go. The reasons are practical, not ideological.
Single binary distribution. An MCP server written in Go compiles to one binary. No runtime dependencies, no node_modules, no version conflicts. Users download the binary and point their AI client at it. That is the entire setup process.
Low resource overhead. MCP servers are long-running processes. A Go binary serving MCP requests uses 10-15 MB of memory at idle. The equivalent Node.js process starts at 50+ MB. When you are running four or five MCP servers alongside your editor and AI client, this adds up.
Concurrency built in. MCP servers frequently need to perform I/O — reading files, querying databases, calling APIs. Go’s goroutines and channels handle concurrent tool calls naturally, without callback chains or async/await gymnastics.
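To make that concrete, here is a minimal MCP-free sketch of the fan-out shape a handler often needs — `fetchAll` and its `lookup` callback are hypothetical names for this example, standing in for real file reads or API calls:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAll runs one lookup per source on its own goroutine and gathers
// the results over a buffered channel. A tool handler doing several
// independent I/O operations per invocation follows the same shape.
func fetchAll(sources []string, lookup func(string) string) []string {
	results := make(chan string, len(sources))
	var wg sync.WaitGroup
	for _, src := range sources {
		wg.Add(1)
		go func(src string) {
			defer wg.Done()
			results <- lookup(src)
		}(src)
	}
	wg.Wait()
	close(results)

	out := make([]string, 0, len(sources))
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	out := fetchAll([]string{"api", "db", "fs"}, func(s string) string {
		return s + ": ok" // stand-in for real I/O
	})
	fmt.Println(len(out), "results")
}
```

No callbacks, no promise chains — the goroutines block on their I/O and the channel collects whatever finishes, in whatever order.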
Fast startup. MCP clients typically launch servers on demand. A Go binary starts in milliseconds. A Node.js server needs to parse and compile JavaScript before responding to its first request.
If you are building an MCP server that will be distributed as a standalone tool, Go is hard to beat.
Project Setup
Start with a standard Go module. The most widely adopted library for building MCP servers in Go is mcp-go, and it is what we use throughout this guide.
mkdir my-mcp-server && cd my-mcp-server
go mod init github.com/yourorg/my-mcp-server
go get github.com/mark3labs/mcp-go
Here is the minimal project structure we use for our MCP servers:
my-mcp-server/
├── main.go # Entry point, server setup
├── tools/ # Tool implementations
│ ├── search.go
│ └── analyze.go
├── resources/ # Resource providers
│ └── config.go
└── internal/ # Shared utilities
└── format.go
The Minimal Server
Every MCP server starts the same way: create a server instance, register capabilities, and connect to a transport. Here is the smallest functional server:
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"

	// context and os/exec are used by the tool handlers defined later
	// in this file.
	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	s := server.NewMCPServer(
		"my-tool",
		"1.0.0",
		server.WithToolCapabilities(true),
		server.WithResourceCapabilities(true, false),
	)

	registerTools(s)

	if err := server.ServeStdio(s); err != nil {
		fmt.Fprintf(os.Stderr, "server error: %v\n", err)
		os.Exit(1)
	}
}
That is the entire skeleton. The ServeStdio function handles the MCP protocol over stdin/stdout, which is the standard transport for local MCP servers. The AI client launches your binary as a subprocess and communicates through these streams.
Registering Tools
Tools are the primary way an AI model interacts with your server. Each tool has a name, a description, a JSON Schema for its input, and a handler function.
func registerTools(s *server.MCPServer) {
	searchTool := mcp.NewTool("search_codebase",
		mcp.WithDescription("Search the codebase for files matching a pattern or content query"),
		mcp.WithString("query",
			mcp.Required(),
			mcp.Description("Search query: a file pattern or content to find"),
		),
		mcp.WithString("scope",
			mcp.Description("Directory scope to limit the search"),
		),
		mcp.WithNumber("limit",
			mcp.Description("Maximum number of results to return"),
		),
	)

	s.AddTool(searchTool, handleSearch)
}
The handler function receives the parsed arguments and returns structured content. This is where your actual logic lives:
func handleSearch(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
	query, _ := req.Params.Arguments["query"].(string)
	scope, _ := req.Params.Arguments["scope"].(string)
	if query == "" {
		return mcp.NewToolResultError("query parameter is required"), nil
	}
	if scope == "" {
		scope = "."
	}

	// JSON numbers decode as float64; fall back to a default when the
	// optional limit argument is missing.
	limit := 5
	if n, ok := req.Params.Arguments["limit"].(float64); ok && n > 0 {
		limit = int(n)
	}

	// Use ripgrep for fast content search
	cmd := exec.CommandContext(ctx, "rg",
		"--json", "--max-count", fmt.Sprint(limit), query, scope,
	)
	output, err := cmd.Output()
	if err != nil {
		// rg exits non-zero when nothing matches; report that as a
		// normal result the model can act on, not a failure.
		return mcp.NewToolResultText(
			fmt.Sprintf("No results found for query: %s", query),
		), nil
	}
	return mcp.NewToolResultText(string(output)), nil
}
Notice the error handling pattern. We do not return Go errors for “no results found” — that is a normal outcome, not a failure. We return a ToolResult with a helpful message so the AI model can adjust its approach. Reserve actual errors for situations where the tool genuinely cannot function.
Design Principles for Tools
After building the MCP tools we ship as part of our AI development services, we have learned a few things:
Keep tools focused. A tool called search_codebase that takes a query string is better than a generic execute tool that takes a command. The AI model reads the tool name and description to decide when to use it. Specific names lead to better tool selection.
Return structured data. JSON is almost always the right format. The AI model can parse structured data and present it however is appropriate for the conversation. Avoid formatting results for human consumption inside the tool.
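In practice that means one `json.Marshal` call on a result struct — the `SearchResult` shape below is made up for this example:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SearchResult is a hypothetical result shape. The point is to hand the
// model machine-readable JSON and let it choose the presentation.
type SearchResult struct {
	Path    string `json:"path"`
	Line    int    `json:"line"`
	Snippet string `json:"snippet"`
}

// formatResults serializes results as compact JSON for a tool response.
func formatResults(results []SearchResult) (string, error) {
	b, err := json.Marshal(results)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	out, _ := formatResults([]SearchResult{
		{Path: "main.go", Line: 3, Snippet: "func hello() {}"},
	})
	fmt.Println(out)
}
```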
Use the context. The context.Context parameter carries deadlines and cancellation signals from the MCP client. Respect them. Long-running operations should check ctx.Done() periodically.
Adding Resources
Resources give the AI model read access to data without requiring a tool call. They are ideal for configuration, documentation, and reference material that the model might need during a conversation. Register them from main the same way you register tools.
func registerResources(s *server.MCPServer) {
	s.AddResource(
		mcp.NewResource(
			"config://project",
			"Project Configuration",
			mcp.WithResourceDescription("Current project settings and metadata"),
			mcp.WithMIMEType("application/json"),
		),
		handleProjectConfig,
	)
}

func handleProjectConfig(
	ctx context.Context,
	req mcp.ReadResourceRequest,
) ([]mcp.ResourceContents, error) {
	config, err := os.ReadFile("project.json")
	if err != nil {
		return nil, fmt.Errorf("failed to read project config: %w", err)
	}
	return []mcp.ResourceContents{
		mcp.TextResourceContents{
			URI:      "config://project",
			MIMEType: "application/json",
			Text:     string(config),
		},
	}, nil
}
The AI model can read resources at any point during the conversation to gather context. This is how our noted knowledge base works — it exposes project documentation as MCP resources so the model can look up architectural decisions, API specifications, and prior context without the user needing to paste anything.
Testing Strategies
MCP servers are straightforward to test because tools are just functions. Test them directly without spinning up the full protocol layer.
func TestHandleSearch(t *testing.T) {
	// handleSearch shells out to ripgrep; skip when it is not installed.
	if _, err := exec.LookPath("rg"); err != nil {
		t.Skip("rg not on PATH")
	}

	// Create a temp directory with known content
	dir := t.TempDir()
	if err := os.WriteFile(
		filepath.Join(dir, "main.go"),
		[]byte("package main\nfunc hello() {}"),
		0644,
	); err != nil {
		t.Fatal(err)
	}

	req := mcp.CallToolRequest{}
	req.Params.Arguments = map[string]any{
		"query": "hello",
		"scope": dir,
	}

	result, err := handleSearch(context.Background(), req)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if result.IsError {
		t.Fatalf("tool returned error: %v", result.Content)
	}

	// Verify the result contains our test file
	text := result.Content[0].(mcp.TextContent).Text
	if !strings.Contains(text, "main.go") {
		t.Errorf("expected result to contain main.go, got: %s", text)
	}
}
For integration tests, you can use the MCP client library to connect to your server programmatically:
func TestServerIntegration(t *testing.T) {
	s := server.NewMCPServer("test", "0.1.0",
		server.WithToolCapabilities(true),
	)
	registerTools(s)

	// Create an in-process client for testing
	testClient := server.NewTestClient(s)

	// List tools and verify registration
	tools, err := testClient.ListTools(context.Background(), mcp.ListToolsRequest{})
	if err != nil {
		t.Fatalf("failed to list tools: %v", err)
	}
	if len(tools.Tools) != 1 {
		t.Fatalf("expected 1 tool, got %d", len(tools.Tools))
	}
	if tools.Tools[0].Name != "search_codebase" {
		t.Errorf("expected tool name search_codebase, got %s", tools.Tools[0].Name)
	}
}
This two-tier testing approach — unit tests for handlers, integration tests for protocol behavior — catches issues at both the logic and transport layers.
Deployment
Single Binary
The simplest deployment is a compiled binary. Cross-compile for every platform your users need:
GOOS=linux GOARCH=amd64 go build -o my-mcp-server-linux-amd64
GOOS=darwin GOARCH=arm64 go build -o my-mcp-server-darwin-arm64
GOOS=windows GOARCH=amd64 go build -o my-mcp-server-windows-amd64.exe
Distribute via GitHub releases, Homebrew, or a simple download link. Users configure their AI client to launch the binary:
{
  "mcpServers": {
    "my-tool": {
      "command": "/usr/local/bin/my-mcp-server",
      "args": ["--project", "/path/to/project"]
    }
  }
}
Docker
For MCP servers that require external dependencies (databases, system libraries), Docker works well:
FROM golang:1.23-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /mcp-server .
FROM alpine:3.19
RUN apk add --no-cache ca-certificates
COPY --from=builder /mcp-server /usr/local/bin/mcp-server
ENTRYPOINT ["/usr/local/bin/mcp-server"]
The multi-stage build keeps the final image small. Our production MCP server images are typically under 20 MB.
Remote Servers with SSE
For shared or team-wide MCP servers, you can use the Server-Sent Events transport instead of stdio. This lets multiple clients connect to a single server instance over HTTP:
func main() {
	s := server.NewMCPServer("shared-tool", "1.0.0",
		server.WithToolCapabilities(true),
	)
	registerTools(s)

	sseServer := server.NewSSEServer(s, server.WithBaseURL("http://localhost:8080"))

	fmt.Println("MCP server listening on :8080")
	if err := sseServer.Start(":8080"); err != nil {
		fmt.Fprintf(os.Stderr, "server error: %v\n", err)
		os.Exit(1)
	}
}
This is how we run our internal shared MCP servers — one instance serves the entire team, backed by a shared database and common configuration.
Lessons from Production
We have been building and running MCP servers in Go for over a year now. Here is what we have learned:
Start with stdio, graduate to SSE. The stdio transport is simpler to develop and debug. Build your server with stdio first, get the tools right, then add SSE if you need multi-client support.
Version your tool schemas. When you change a tool’s parameters, older clients may send requests with the old shape. Handle missing fields gracefully rather than crashing.
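One way to stay tolerant of old request shapes — `argString` is a hypothetical helper for this sketch, not an mcp-go API — is to read every optional field through an accessor that supplies a default:

```go
package main

import "fmt"

// argString reads an optional string argument from a raw arguments map,
// falling back to a default when the field is missing or has the wrong
// type — e.g. when an older client still sends the previous schema.
func argString(args map[string]any, key, def string) string {
	if v, ok := args[key].(string); ok && v != "" {
		return v
	}
	return def
}

func main() {
	// A request from an older client that predates the "scope" field.
	old := map[string]any{"query": "hello"}
	fmt.Println(argString(old, "scope", "."))
}
```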
Log to stderr. Stdout is the MCP transport channel. Any stray fmt.Println will corrupt the protocol stream and crash the connection. Use log.SetOutput(os.Stderr) at the top of your main function.
Measure tool latency. AI models have timeout expectations. If your tool takes more than a few seconds, the user experience degrades. Add instrumentation early and optimize the slow paths.
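A lightweight starting point — the `timed` wrapper below is illustrative, not an mcp-go feature — is to wrap each handler body and log its duration to stderr:

```go
package main

import (
	"log"
	"os"
	"time"
)

// timed runs a tool handler body, logs its duration and outcome to the
// standard logger, and passes the error through unchanged.
func timed(name string, fn func() error) error {
	start := time.Now()
	err := fn()
	log.Printf("tool=%s duration=%s err=%v", name, time.Since(start), err)
	return err
}

func main() {
	log.SetOutput(os.Stderr) // keep stdout clean for the MCP transport
	_ = timed("search_codebase", func() error {
		time.Sleep(10 * time.Millisecond) // stand-in for real work
		return nil
	})
}
```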
Go’s combination of fast compilation, minimal runtime overhead, and straightforward concurrency makes it an excellent fit for MCP servers that need to be distributed, performant, and reliable. The tools we have built — from tinyvault to vecgrep — prove out this approach daily.
If you are exploring MCP for your own development workflow, check out our open-source tools or reach out about our AI-native development services. We are always interested in what people are building with the protocol.