LangGraph Basics
LangGraph is a framework for building stateful AI agents as directed graphs. If you're an Angular developer building AI-powered applications, this page teaches you how LangGraph agents work and why streamResource() is the natural bridge between your frontend and your agent backend.
Graphs give you explicit control over agent behavior. Instead of a black-box prompt-and-pray approach, you define exactly how your agent reasons, when it calls tools, and where it pauses for human input. Every step is visible, testable, and debuggable.
The Core Concepts
A LangGraph agent has three building blocks:
Nodes — Functions That Do Work
A node is a Python function that receives the current state, does something, and returns updated state. Every node has the same signature:
```python
def my_node(state: State, config: RunnableConfig) -> dict:
    # Read from state
    messages = state["messages"]
    # Do work (call LLM, query DB, invoke tool)
    response = llm.invoke(messages)
    # Return state updates (merged into existing state)
    return {"messages": [response]}
```

Nodes don't replace state — they return updates that get merged into the existing state. For lists like messages, LangGraph uses reducers (like operator.add) to accumulate entries instead of overwriting.
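To see what a reducer does, here is a minimal sketch (plain Python, no LangGraph required, helper names hypothetical) of how a node's partial update is merged into state, with operator.add accumulating the messages list while other keys overwrite:

```python
from operator import add

# Reducers per state key: messages accumulates, everything else overwrites.
reducers = {"messages": add}

def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update into the current state."""
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            merged[key] = reducers[key](merged.get(key, []), value)
        else:
            merged[key] = value
    return merged

state = {"messages": [{"role": "user", "content": "hi"}]}
state = apply_update(state, {"messages": [{"role": "assistant", "content": "hello"}]})
print(len(state["messages"]))  # 2: entries accumulated, not overwritten
```

This is why a node can safely return just `{"messages": [response]}` without losing history.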
Edges — Connections Between Nodes
Edges define the execution flow. There are two types:
Normal edges — always route to the next node:
```python
builder.add_edge(START, "call_model")  # START → call_model
builder.add_edge("call_model", END)    # call_model → END
```

Conditional edges — route based on state:
```python
def should_continue(state: State) -> str:
    last_msg = state["messages"][-1]
    if last_msg.tool_calls:
        return "tools"  # Agent wants to use a tool
    return END          # Agent is done, return response

builder.add_conditional_edges("call_model", should_continue)
```

State — The Shared Memory
All nodes read from and write to a shared state object. You define its shape as a Python TypedDict:
```python
from typing_extensions import TypedDict, Annotated
from operator import add

class State(TypedDict):
    messages: Annotated[list, add]  # Accumulates messages
    plan: list[str]                 # Agent's current plan
    results: dict                   # Tool results
```

This state is exactly what streamResource() exposes to your Angular app through Signals.
Building Your First Agent
Here's the simplest possible agent — a chat model that takes messages and responds:
```python
from langgraph.graph import END, START, MessagesState, StateGraph
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-5-mini")

def call_model(state: MessagesState) -> dict:
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Build the graph: START → call_model → END
builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()
```

Agent Patterns
The power of LangGraph is in the patterns you can build. Each pattern maps to specific streamResource() signals.
Pattern 1: ReAct Agent (Tool Calling)
The agent reasons, decides to call a tool, observes the result, and loops until it has an answer.
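Conceptually the loop is simple: call the model, execute any requested tools, feed the results back, and repeat until the model stops asking for tools. A plain-Python sketch of that loop with a stubbed model and tool (no LangGraph required, all names hypothetical):

```python
def stub_model(messages):
    """Pretend LLM: requests a search once, then answers."""
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "assistant", "content": "",
                "tool_calls": [{"name": "search_docs", "args": {"query": "langgraph"}}]}
    return {"role": "assistant", "content": "LangGraph builds stateful agents.",
            "tool_calls": []}

def stub_tool(name, args):
    """Pretend tool: returns a canned search result."""
    return f"results for {args['query']}"

def react_loop(messages):
    while True:
        reply = stub_model(messages)
        messages.append(reply)
        if not reply["tool_calls"]:
            return reply["content"]  # No tool requested: we're done
        for call in reply["tool_calls"]:
            # Execute the tool and feed the observation back to the model
            messages.append({"role": "tool", "content": stub_tool(call["name"], call["args"])})

answer = react_loop([{"role": "user", "content": "What is LangGraph?"}])
print(answer)  # LangGraph builds stateful agents.
```

The LangGraph version below expresses the same loop as a graph: a conditional edge decides between "tools" and END, and a normal edge loops back from tools to the model.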
```python
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

@tool
def search_docs(query: str) -> str:
    """Search the knowledge base."""
    return vector_store.similarity_search(query)

tools = [search_docs]

def call_model(state: State) -> dict:
    response = llm.bind_tools(tools).invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: State) -> str:
    if state["messages"][-1].tool_calls:
        return "tools"
    return END

builder = StateGraph(State)
builder.add_node("model", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "model")
builder.add_conditional_edges("model", should_continue)
builder.add_edge("tools", "model")  # Loop back after tool execution
graph = builder.compile()
```

Angular connection: Track tool execution in real-time:
```typescript
const agent = streamResource<AgentState>({
  assistantId: 'react_agent',
});

// Watch tools execute
const activeTools = computed(() => agent.toolProgress());
const completedTools = computed(() => agent.toolCalls());
```

Pattern 2: Human-in-the-Loop (Approval)
The agent proposes an action and pauses. Your Angular UI shows an approval dialog. The user decides, and the agent resumes.
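The pause/resume mechanics can be sketched with a plain Python generator (a toy stand-in for LangGraph's checkpoint-based interrupts, all names hypothetical): the agent yields a proposed action and suspends, the caller plays the role of the Angular approval UI, and the decision is sent back in to resume execution.

```python
def approval_agent(body):
    """Yield the proposed action, pause, then receive the human decision."""
    decision = yield {"action": "send_email", "body": body}
    if decision["approved"]:
        return "Email sent."
    return "Cancelled."

agent = approval_agent("Quarterly report attached.")
pending = next(agent)  # Agent pauses with a pending action for the UI
try:
    agent.send({"approved": True})  # User clicked approve: resume the agent
except StopIteration as done:
    result = done.value
print(result)  # Email sent.
```

The real mechanism differs (the graph is checkpointed and can resume in a different process, days later), but the control flow your UI sees is the same: a pending payload, then a resume value.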
```python
from langgraph.types import interrupt

def propose_action(state: State) -> dict:
    action = llm.invoke(state["messages"])
    # Pause execution — Angular will show approval UI.
    # interrupt() suspends the graph and, on resume, returns the resume payload.
    decision = interrupt({
        "action": "send_email",
        "to": "client@example.com",
        "body": action.content,
    })
    return {"approved": decision["approved"], "pending_action": action.content}

def execute_action(state: State) -> dict:
    # Only runs after human approves
    send_email(state["pending_action"])
    return {"messages": [{"role": "assistant", "content": "Email sent."}]}
```

Angular connection: The interrupt surfaces automatically:
```typescript
const agent = streamResource<AgentState>({
  assistantId: 'approval_agent',
});

// Show approval UI when agent pauses
const pendingAction = computed(() => agent.interrupt());

// In your component: user clicks approve → resume the agent
approve() {
  agent.submit(null, { resume: { approved: true } });
}
```

Pattern 3: Multi-Agent Orchestration
A supervisor agent delegates work to specialist sub-agents. Each sub-agent is its own graph.
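The delegation itself is just routing on state: the supervisor writes a name into state, and execution jumps to the sub-agent registered under that name. A minimal dispatch sketch with stub sub-agents (plain Python, all names hypothetical):

```python
def researcher(state):
    """Stub sub-agent: pretends to gather sources."""
    return {**state, "findings": "three relevant papers"}

def analyst(state):
    """Stub sub-agent: pretends to analyze them."""
    return {**state, "analysis": "costs are trending down"}

# Registry of sub-agents, keyed by the route name the supervisor emits
subagents = {"researcher": researcher, "analyst": analyst}

def run(state):
    # The supervisor's routing decision is read from state,
    # mirroring the lambda in add_conditional_edges below
    route = state["next_agent"]
    return subagents[route](state)

result = run({"messages": [], "next_agent": "researcher"})
print(result["findings"])  # three relevant papers
```

In LangGraph, the registry is the set of added nodes and the dispatch is the conditional edge, as the code below shows.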
```python
def supervisor(state: State) -> dict:
    # Assumes llm has a routing tool bound so tool_calls is populated
    routing = llm.invoke([
        {"role": "system", "content": "Route to: researcher, analyst, or writer"},
        *state["messages"]
    ])
    # tool_calls entries are dicts, so index with keys
    return {"next_agent": routing.tool_calls[0]["args"]["agent"]}

builder = StateGraph(State)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher_subgraph)
builder.add_node("analyst", analyst_subgraph)
builder.add_node("writer", writer_subgraph)
builder.add_conditional_edges("supervisor", lambda s: s["next_agent"])
```

Angular connection: Track each sub-agent independently:
```typescript
const orchestrator = streamResource<OrchestratorState>({
  assistantId: 'orchestrator',
  subagentToolNames: ['researcher', 'analyst', 'writer'],
});

// See all active sub-agents
const workers = computed(() => orchestrator.activeSubagents());
const workerCount = computed(() => workers().length);
```

Pattern 4: Persistent Conversations
Thread-based persistence means conversations survive page refreshes, browser restarts, and even server deployments.
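The idea behind checkpointing can be sketched with an in-memory store keyed by thread_id (a toy stand-in for PostgresSaver, function names hypothetical): each invocation loads the thread's prior history, appends the new turn, and saves the result.

```python
# Toy checkpointer: thread_id → conversation history
checkpoints: dict[str, list] = {}

def invoke(thread_id: str, user_message: str) -> list:
    """Load the thread's history, append a turn, persist, and return it."""
    history = checkpoints.get(thread_id, [])
    history = history + [
        {"role": "user", "content": user_message},
        {"role": "assistant", "content": f"echo: {user_message}"},
    ]
    checkpoints[thread_id] = history  # persist the new checkpoint
    return history

invoke("user_123_session", "hello")
history = invoke("user_123_session", "still there?")  # same thread: history restored
print(len(history))  # 4
```

Swap the dict for Postgres and the echo for a compiled graph, and you have the production setup below: any process holding the same thread_id resumes the same conversation.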
```python
from langgraph.checkpoint.postgres import PostgresSaver

with PostgresSaver.from_conn_string(DATABASE_URL) as checkpointer:
    graph = builder.compile(checkpointer=checkpointer)

    # Each thread_id is a persistent conversation
    result = graph.invoke(
        {"messages": [user_message]},
        config={"configurable": {"thread_id": "user_123_session"}}
    )
```

Angular connection: Thread persistence is built into streamResource:
```typescript
const chat = streamResource<ChatState>({
  assistantId: 'chat_agent',
  threadId: signal(localStorage.getItem('threadId')),
  onThreadId: (id) => localStorage.setItem('threadId', id),
});

// User returns tomorrow — same thread, full history restored.
// No extra code needed — streamResource handles it.
```

How streamResource() Bridges the Gap
Here's why streamResource() is the natural Angular companion for LangGraph:
1. Your component calls submit({ messages: [userMsg] }) to send user input.
2. streamResource() passes the input to the transport layer.
3. The transport sends an HTTP POST to LangGraph Platform and opens an SSE connection.
4. The LangGraph server executes graph nodes, calls tools, and streams SSE events back.
5. The client layer parses SSE chunks into BehaviorSubjects.
6. toSignal() converts the BehaviorSubjects to Angular Signals.
7. Your templates re-render automatically via OnPush change detection.
You don't configure SSE, parse events, manage WebSocket connections, or handle reconnection. streamResource() does all of that. You call submit() and read Signals — that's the entire API surface for your Angular code.
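For intuition about what that layer absorbs: SSE is just newline-delimited text, where each event is a block of field: value lines ended by a blank line. A toy parser (illustrative only, not the actual streamResource internals) shows the shape of the work:

```python
import json

# Two token chunks as they would arrive over an SSE connection
raw = (
    "event: messages\n"
    'data: {"content": "Lang"}\n'
    "\n"
    "event: messages\n"
    'data: {"content": "Graph"}\n'
    "\n"
)

def parse_sse(stream: str) -> list[dict]:
    """Split a raw SSE stream into events with decoded JSON payloads."""
    events = []
    for block in stream.strip().split("\n\n"):
        event = {}
        for line in block.splitlines():
            field, _, value = line.partition(": ")
            event[field] = value
        event["data"] = json.loads(event["data"])
        events.append(event)
    return events

chunks = [e["data"]["content"] for e in parse_sse(raw)]
print("".join(chunks))  # LangGraph
```

In practice the real stream is messier (reconnects, partial chunks, multiple event types), which is exactly why pushing it behind submit() and Signals is worthwhile.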
Graph API vs Functional API
LangGraph offers two ways to define agents:
Graph API (recommended for most cases):
```python
builder = StateGraph(State)
builder.add_node("model", call_model)
builder.add_edge(START, "model")
graph = builder.compile()
```

Functional API (for simpler workflows):
```python
from langgraph.func import entrypoint, task

@task
async def call_model(messages):
    return await llm.ainvoke(messages)

@entrypoint()
async def agent(messages):
    response = await call_model(messages)
    return response
```

Both APIs produce the same output and work identically with streamResource(). Choose the Graph API when you need conditional routing, subgraphs, or interrupts. Choose the Functional API for simple, linear workflows.
What's Next
Deep dive into the planning, tool-calling, and execution lifecycle
Stream token-by-token responses with multiple stream modes
Build human-in-the-loop approval flows
Compose multi-agent systems with orchestrators
Thread-based conversation persistence
How Signals power streamResource's reactive model