Category: Programming

  • Escape Analysis in Go: How the Compiler Decides Where Your Variables Live

    When Go developers talk about performance, conversations often turn to allocation and garbage collection. But underneath those topics lies a subtle, powerful piece of compiler analysis that shapes how your program allocates memory: escape analysis.

    It’s the mechanism that decides whether your variables are stored on the stack — fast, cheap, and automatically reclaimed — or on the heap, where they incur the cost of garbage collection. Understanding escape analysis helps you write Go code that’s both clear and efficient, without micro-optimizing blindly.

    What Is Escape Analysis?

    In simple terms, escape analysis is a process the Go compiler uses to determine the lifetime and visibility of variables.

    If the compiler can prove that a variable never escapes the function where it’s defined — meaning no other part of the program can access it after the function returns — it can safely allocate that variable on the stack.

    If not, the variable “escapes” to the heap, ensuring it lives long enough to be used elsewhere but at a higher performance cost.

    A Simple Example

    Let’s look at how Go decides where to place a variable.

    type Resp struct{ Status string }

    func a() *Resp {
        s := Resp{Status: "OK"}
        return &s
    }

    At first glance, s looks like a local variable. But since its address is returned, s must survive after a() returns. The compiler detects that and allocates it on the heap.

    We can verify this using the compiler flag:

    go build -gcflags="-m" main.go

    Output:

    ./main.go:3:6: moved to heap: s

    Now consider a variant:

    func b() {
        s := Resp{Status: "OK"}
        fmt.Println(s.Status)
    }

    Here, s doesn’t escape — it’s only used within the function. The compiler can safely put it on the stack:

    ./main.go:3:6: s does not escape

    Why Escape Analysis Matters for Performance

    Escape analysis directly affects allocation patterns, garbage collector load, and ultimately, latency.

    1. Fewer Heap Allocations

    Fewer escapes mean fewer heap allocations — less GC work, smaller memory footprint, and reduced pauses.

    2. Predictable Performance

    Stack allocation is deterministic. Heap allocation involves runtime bookkeeping and garbage collection cycles.

    3. Inlining and Optimizations

    Escape analysis interacts closely with other compiler optimizations like function inlining. Sometimes, inlining can expose more information to the compiler, allowing it to keep variables on the stack.
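
    To make the effect measurable, Go's benchmarking tooling reports allocations per operation. The sketch below is a minimal, self-contained example rather than code from this article: the resp type and the function names are invented for illustration, with one constructor that returns a pointer (the value escapes) and one that returns a copy (it can stay on the stack).

    // escape_bench_test.go (illustrative sketch only)
    package escape

    import "testing"

    type resp struct{ Status string }

    // newRespPtr returns a pointer to a local value, so the value escapes to the heap.
    func newRespPtr() *resp {
        r := resp{Status: "OK"}
        return &r
    }

    // newRespVal returns a copy, so the value can stay on the stack.
    func newRespVal() resp {
        return resp{Status: "OK"}
    }

    // Package-level sinks keep the compiler from discarding the results.
    var (
        sinkPtr *resp
        sinkVal resp
    )

    func BenchmarkHeap(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sinkPtr = newRespPtr() // expect 1 alloc/op under -benchmem
        }
    }

    func BenchmarkStack(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sinkVal = newRespVal() // expect 0 allocs/op
        }
    }

    Running go test -bench . -benchmem should show the difference directly in the allocs/op column, which is the escape analysis decision made visible.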

  • How Radix Trees Power High-Performance Web Server Routers

    When we think of web performance, our minds often jump to caching layers, CDNs, or optimized databases. Yet, one of the most overlooked contributors to web speed lies at the heart of every web framework — the router.

    Each time a request arrives, the router decides which piece of code should handle it. This decision must be made fast, thousands of times per second, often across hundreds or thousands of possible routes. To meet this demand, high-performance web servers increasingly rely on a clever data structure: the radix tree.

    What Is a Radix Tree?

    A radix tree, also known as a Patricia trie or compact prefix tree, is a space-optimized structure designed for prefix matching. It’s a close cousin of the traditional trie, but instead of storing one character per edge, a radix tree compresses chains of single-child nodes into multi-character strings.

    This makes it particularly well-suited for hierarchical data such as URL paths, file systems, and IP addresses, all of which share common prefixes.

    Example: Routes in a Web Server

    Consider the following web routes:

    /users
    /users/:id
    /users/:id/settings
    /articles
    /articles/:year/:month

    A radix tree representation would look like this:

    (root)
     ├── "users"
     │     ├── ""
     │     └── "/:id"
     │          └── "/settings"
     └── "articles"
           └── "/:year"
                 └── "/:month"

    Instead of scanning all routes linearly, the tree lets the router match a request path such as /users/42/settings by walking only the edges whose prefixes match, in O(k) time, where k is the length of the path.

    How Radix Trees Are Used in Web Routers

    In a modern web server, the router’s job is to map URL patterns to handler functions. When a request arrives, the router must find the correct handler as quickly as possible.

    Using a radix tree, the router:

    1. Stores routes as compressed prefixes, minimizing redundancy.
    2. Matches requests by walking the tree from root to leaf, comparing chunks of the path.
    3. Supports dynamic parameters (like :id or :slug) by treating them as special wildcard edges.
    4. Selects the best matching route (the longest prefix match).

    This structure is particularly efficient because most routes share prefixes — for example, /api/users and /api/posts both begin with /api.
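
    To make this concrete, here is a minimal, illustrative sketch of such a route tree in Go. It is not the implementation of any particular framework, and to keep it short it compresses the tree per path segment rather than per character; the node layout, the :param convention, and the addRoute/lookup names are assumptions made for this example.

    package main

    import (
        "fmt"
        "strings"
    )

    // handler stands in for whatever a real router would dispatch to.
    type handler func(params map[string]string)

    type node struct {
        segment  string           // literal path segment, e.g. "users", or ":id"
        children map[string]*node // static children keyed by segment
        param    *node            // at most one parameter child per node
        h        handler          // non-nil if a route terminates at this node
    }

    func newNode(seg string) *node {
        return &node{segment: seg, children: map[string]*node{}}
    }

    // addRoute registers a pattern such as "/users/:id/settings".
    func (n *node) addRoute(pattern string, h handler) {
        cur := n
        for _, seg := range strings.Split(strings.Trim(pattern, "/"), "/") {
            if seg == "" {
                continue
            }
            if strings.HasPrefix(seg, ":") {
                if cur.param == nil {
                    cur.param = newNode(seg)
                }
                cur = cur.param
                continue
            }
            child, ok := cur.children[seg]
            if !ok {
                child = newNode(seg)
                cur.children[seg] = child
            }
            cur = child
        }
        cur.h = h
    }

    // lookup walks the tree one segment at a time, preferring static children
    // and falling back to a parameter edge, capturing parameter values as it goes.
    func (n *node) lookup(path string) (handler, map[string]string) {
        cur := n
        params := map[string]string{}
        for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
            if seg == "" {
                continue
            }
            if child, ok := cur.children[seg]; ok {
                cur = child
                continue
            }
            if cur.param != nil {
                params[strings.TrimPrefix(cur.param.segment, ":")] = seg
                cur = cur.param
                continue
            }
            return nil, nil // no route matches this path
        }
        return cur.h, params
    }

    func main() {
        root := newNode("")
        root.addRoute("/users", func(map[string]string) { fmt.Println("list users") })
        root.addRoute("/users/:id/settings", func(p map[string]string) {
            fmt.Println("settings for user", p["id"])
        })

        if h, p := root.lookup("/users/42/settings"); h != nil {
            h(p) // prints: settings for user 42
        }
    }

    Production routers such as httprouter go further, compressing single-child chains into multi-character edges and resolving conflicts between static and parameter routes, but the core walk over shared prefixes from root to leaf is the same idea.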