Basic Building Blocks

Building powerful AI applications is like building with blocks. LangGraphGo provides a set of lean yet powerful core primitives that let you flexibly orchestrate complex logic. This guide takes a deep dive into the basic building blocks: nodes, edges, and conditional routing.

1. Nodes: Execution Units

Background & Functionality

In any workflow system, "what to do" is the most fundamental question. Nodes are the execution units in LangGraphGo, responsible for specific computational tasks. Whether it's calling a Large Language Model (LLM), querying a database, executing code, or simply formatting data, it's all done within nodes.

Nodes are designed to be functional and, as far as possible, stateless: they receive the current state as input and return a state update as output. This design makes nodes easy to test, reuse, and parallelize.

Implementation Principle

Under the hood, a node is defined as a function that accepts context.Context and input state, and returns output state and an error. The LangGraphGo runtime is responsible for scheduling these functions, handling concurrency, and managing state passing.

Code Showcase: Integrating LLM

The most common node type is calling an LLM. LangGraphGo integrates seamlessly with langchaingo, making it very simple to use various LLMs within nodes.

import (
    "context"
    "fmt"

    "github.com/tmc/langchaingo/llms"
    "github.com/tmc/langchaingo/llms/openai"
)

// callLLM is a node function that calls the LLM
func callLLM(ctx context.Context, state interface{}) (interface{}, error) {
    // 1. Extract the message history from the state
    messages, ok := state.([]llms.MessageContent)
    if !ok {
        return nil, fmt.Errorf("unexpected state type %T", state)
    }

    // 2. Initialize the LLM (in practice, initialize once outside graph construction)
    model, err := openai.New()
    if err != nil {
        return nil, err
    }

    // 3. Call the LLM to generate a response
    response, err := model.GenerateContent(ctx, messages)
    if err != nil {
        return nil, err
    }

    // 4. Append the AI reply and return the updated message list as the new state
    return append(messages, llms.TextParts(llms.ChatMessageTypeAI, response.Choices[0].Content)), nil
}

// Add node to graph
g.AddNode("chatbot", callLLM)

2. Edges: Control Flow

Background & Functionality

After defining "what to do", we need to define "in what order". Edges define the connections between nodes, i.e., the control flow. The simplest kind is the normal edge, which represents deterministic sequential execution.

By combining simple edges, you can build complex directed acyclic graphs (DAGs) or even cyclic graphs; cycles are crucial for implementing an Agent's reasoning loop (Think-Act-Observe).

Implementation Principle

The graph data structure maintains an adjacency list. When a node finishes execution, the runtime looks up the outgoing edges of that node and adds downstream nodes to the execution queue. If a node has multiple outgoing edges, downstream nodes will be scheduled in parallel.
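
As a rough, self-contained sketch of this principle (not the library's actual scheduler), an adjacency list plus a work queue is enough to walk a graph, fanning out when a node has several outgoing edges; all names here are illustrative:

```go
package main

import "fmt"

const END = "END" // sentinel terminal node, mirroring graph.END

// walk executes a graph sketch: run each node, then enqueue its
// downstream nodes from the adjacency list, until only END remains.
func walk(edges map[string][]string, start string) []string {
	queue := []string{start}
	var order []string
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		if node == END {
			continue // END is virtual: nothing to execute
		}
		order = append(order, node)           // "execute" the node
		queue = append(queue, edges[node]...) // schedule downstream nodes
	}
	return order
}

func main() {
	// Adjacency list: node -> downstream nodes.
	// "a" has two outgoing edges, so "b" and "c" both run after it
	// (the real runtime would schedule them in parallel; this sketch serializes).
	edges := map[string][]string{
		"a": {"b", "c"},
		"b": {END},
		"c": {END},
	}
	fmt.Println(walk(edges, "a")) // [a b c]
}
```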

Code Showcase

// Define linear flow: A -> B -> C -> END
g.AddEdge("node_a", "node_b")
g.AddEdge("node_b", "node_c")
g.AddEdge("node_c", graph.END) // END is a special virtual node representing graph execution end

3. Conditional Edges: Dynamic Routing

Background & Functionality

Real-world logic is often not linear. Agents need to dynamically decide what to do next based on LLM output, tool execution results, or external inputs. For example, if the LLM decides to call a tool, jump to the tool node; if the LLM decides to reply directly, end.

Conditional edges introduce routing logic. They allow checking state at runtime and choosing the next node based on the check result.

Implementation Principle

Conditional edges are associated with a routing function. When the source node finishes execution, the runtime calls this routing function. The routing function returns the name of the target node (string). The runtime dynamically hands over control to the corresponding node based on this return value.

Code Showcase: Tool Call Routing

Here is routing logic for the classic ReAct pattern: check whether the LLM has requested a tool call.

// Routing function: decide whether to execute tools or end
func routeToolOrEnd(ctx context.Context, state interface{}) string {
    messages := state.([]llms.MessageContent)
    lastMessage := messages[len(messages)-1]
    
    // Check whether the last message contains a tool call
    // (in langchaingo, tool calls appear as llms.ToolCall parts)
    for _, part := range lastMessage.Parts {
        if _, ok := part.(llms.ToolCall); ok {
            return "tools" // Jump to the tool execution node
        }
    }
    return graph.END // Otherwise end
}

// Add a conditional edge:
// after "llm_node", routeToolOrEnd's return value decides where to go next
g.AddConditionalEdge("llm_node", routeToolOrEnd)

// Alternatively, pass an explicit path mapping, which helps with
// visualization and validation (the router's return values map to node names):
g.AddConditionalEdge("llm_node", routeToolOrEnd, map[string]string{
    "tools":   "tools",
    graph.END: graph.END,
})
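
Putting the pieces together, the conditional edge plus a normal edge from "tools" back to "llm_node" forms the ReAct loop. The self-contained sketch below simulates that control flow with a local message type; the `msg` struct, `route`, and `run` are illustrative stand-ins, not the library's types:

```go
package main

import "fmt"

const END = "END"

// msg is an illustrative stand-in for a chat message.
type msg struct {
	toolCalls int // number of tool calls requested in this turn
}

// route mirrors routeToolOrEnd: go to "tools" if the last
// message requests a tool call, otherwise end.
func route(last msg) string {
	if last.toolCalls > 0 {
		return "tools"
	}
	return END
}

// run simulates the graph: a conditional edge out of "llm_node"
// and a normal edge from "tools" back to "llm_node".
func run(turns []msg) []string {
	node, i := "llm_node", 0
	var trace []string
	for node != END {
		trace = append(trace, node)
		switch node {
		case "llm_node":
			node = route(turns[i]) // conditional edge
			i++
		case "tools":
			node = "llm_node" // normal edge closing the loop
		}
	}
	return trace
}

func main() {
	// Scripted LLM turns: two tool-calling turns, then a direct reply.
	fmt.Println(run([]msg{{1}, {1}, {0}})) // [llm_node tools llm_node tools llm_node]
}
```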