Introduction

StreamResource brings full parity with React's useStream() hook to Angular 20+. Build streaming AI applications with Angular Signals, connect to LangGraph agents, and ship production-ready frontends for your AI products.

What you'll learn

This guide walks you through the complete workflow: build a LangGraph agent in Python, run it locally, connect it to an Angular app with streamResource(), and deploy to production.

What is streamResource()?

streamResource() is an Angular function that creates a reactive, streaming connection to a LangGraph agent. It returns an object whose properties are Angular Signals — meaning your templates update automatically as the agent streams responses, token by token.

const chat = streamResource<{ messages: BaseMessage[] }>({
  assistantId: 'chat_agent',
});
 
// Every property is a Signal — reactive, synchronous, no subscriptions
chat.messages()     // Signal<BaseMessage[]>
chat.status()       // Signal<'idle' | 'loading' | 'resolved' | 'error'>
chat.error()        // Signal<Error | null>
chat.interrupt()    // Signal<Interrupt | undefined>
chat.history()      // Signal<ThreadState[]>

No RxJS. No manual subscriptions. No async pipes. Just Signals that work with Angular's OnPush change detection out of the box.
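To build intuition for why no subscriptions are needed, here is a toy, non-Angular model of a pull-based signal. This is an illustration only — Angular's real signals additionally track dependencies, memoize computed values, and notify change detection:

```typescript
// Toy model of a pull-based signal: a getter function with a .set() method.
type WritableSignal<T> = (() => T) & { set(value: T): void };

function signal<T>(initial: T): WritableSignal<T> {
  let value = initial;
  const read = (() => value) as WritableSignal<T>;
  read.set = (v) => { value = v; };
  return read;
}

// A computed is just another getter that derives its value on each read.
function computed<T>(fn: () => T): () => T {
  return fn;
}

const count = signal(0);
const double = computed(() => count() * 2);

count.set(21);
// Reading double() now returns 42 -- no subscription was ever created;
// consumers simply pull the latest value when they read.
```

Because reads are synchronous pulls, a template that calls `chat.messages()` always sees the latest streamed state the next time change detection runs.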

The Architecture

Watch a full conversation turn flow through the stack — from user input to rendered response:

(Interactive demo: streamResource() live architecture flow — a chat interface at localhost:4200 shown side-by-side with a developer console.)

Build Your Agent

LangGraph agents are Python programs defined as directed graphs. Here's a minimal chat agent using the example from this repository:

# examples/chat-agent/src/chat_agent/agent.py
from langchain_core.messages import SystemMessage
from langchain_core.runnables import RunnableConfig
from langgraph.graph import END, START, MessagesState, StateGraph
from langchain_openai import ChatOpenAI
 
llm = ChatOpenAI(model="gpt-5-mini")
 
def call_model(state: MessagesState, config: RunnableConfig) -> dict:
    """Invoke the LLM with the current message history."""
    system_prompt = config.get("configurable", {}).get(
        "system_prompt", "You are a helpful assistant."
    )
    messages = [SystemMessage(content=system_prompt)] + state["messages"]
    response = llm.invoke(messages)
    return {"messages": [response]}
 
# Build the graph: START -> call_model -> END
builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
 
graph = builder.compile()
What's happening here?

MessagesState manages a list of messages. The call_model node takes the current messages, adds a system prompt, and calls the LLM. The graph runs this single node and returns the response. LangGraph handles streaming, checkpointing, and thread management automatically.
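The key idea is that `call_model` returns only the *new* message, and the state's reducer appends it to the existing history. Here is that reducer semantics sketched in TypeScript (a conceptual sketch — LangGraph's actual `add_messages` reducer also handles message IDs, deduplication, and partial updates):

```typescript
// Conceptual sketch of MessagesState's append semantics:
// a node returns a partial update, and the reducer merges it into
// state by concatenating new messages onto the existing list.
type Message = { role: string; content: string };
type State = { messages: Message[] };

function applyUpdate(state: State, update: Partial<State>): State {
  return { messages: [...state.messages, ...(update.messages ?? [])] };
}

let state: State = { messages: [{ role: 'user', content: 'Hi' }] };

// call_model returns only the new assistant message...
state = applyUpdate(state, {
  messages: [{ role: 'assistant', content: 'Hello!' }],
});
// ...and the reducer appends it, so the full history is preserved.
```

This is why the Python node can `return {"messages": [response]}` without ever reading or rewriting the full history itself.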

Run Your Agent Locally

1. Install the LangGraph CLI
pip install -U "langgraph-cli[inmem]"
2. Set up your environment

Create a .env file with your API keys:

OPENAI_API_KEY=sk-...
LANGSMITH_API_KEY=lsv2_...
3. Start the dev server
cd examples/chat-agent
langgraph dev

Your agent is now running at http://localhost:2024. You can test it in LangGraph Studio at https://smith.langchain.com/studio/.
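The dev server streams run output as server-sent events. You normally never touch this layer — streamResource() handles it — but a minimal parser for one SSE frame shows what is on the wire (a sketch of the standard SSE format, not the library's internals; the `messages/partial` event name is an assumption for illustration):

```typescript
// Parse a single server-sent-event frame into its fields.
// An SSE frame is a block of "field: value" lines; frames are
// separated from each other by a blank line.
function parseSSEFrame(frame: string): { event?: string; data?: string } {
  const out: { event?: string; data?: string } = {};
  const dataLines: string[] = [];
  for (const line of frame.split('\n')) {
    if (line.startsWith('event:')) out.event = line.slice(6).trim();
    else if (line.startsWith('data:')) dataLines.push(line.slice(5).trimStart());
  }
  // Multiple data: lines in one frame are joined with newlines.
  if (dataLines.length) out.data = dataLines.join('\n');
  return out;
}

const frame = 'event: messages/partial\ndata: {"content": "Hel"}';
const parsed = parseSSEFrame(frame);
// parsed.event is the event type, parsed.data the JSON payload chunk.
```

Each frame carries one incremental update; the client accumulates them into the message list you read via `chat.messages()`.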

Connect with Angular

Now connect your Angular app to the running agent using streamResource().

1. Install the package
npm install @cacheplane/stream-resource
2. Configure the provider
// app.config.ts
import { ApplicationConfig } from '@angular/core';
import { provideStreamResource } from '@cacheplane/stream-resource';
 
export const appConfig: ApplicationConfig = {
  providers: [
    provideStreamResource({
      apiUrl: 'http://localhost:2024',
    }),
  ],
};
3. Build your chat component
// chat.component.ts
import { ChangeDetectionStrategy, Component, computed, signal } from '@angular/core';
import { streamResource } from '@cacheplane/stream-resource';
import type { BaseMessage } from '@langchain/core/messages';
 
@Component({
  selector: 'app-chat',
  templateUrl: './chat.component.html',
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class ChatComponent {
  input = signal('');
 
  // Create the streaming resource — this is the core API
  chat = streamResource<{ messages: BaseMessage[] }>({
    assistantId: 'chat_agent',
    threadId: signal(localStorage.getItem('threadId')),
    onThreadId: (id) => localStorage.setItem('threadId', id),
  });
 
  // Derived signals — compose with computed()
  isStreaming = computed(() => this.chat.status() === 'loading');
  messageCount = computed(() => this.chat.messages().length);
 
  send() {
    const msg = this.input();
    if (!msg.trim()) return;
    this.chat.submit({
      messages: [{ role: 'user', content: msg }],
    });
    this.input.set('');
  }
}
4. Run your Angular app
ng serve

Open http://localhost:4200 and start chatting with your agent. Messages stream in in real time as the LLM generates them.

Key Concepts

Everything streamResource() gives you out of the box: streaming messages, status and error signals, interrupts, thread history, and thread persistence.

Deploy to Production

When you're ready to go live, deploy your agent to LangGraph Cloud and point your Angular app to the deployment URL.

1. Push your agent to GitHub

Your agent code (the Python project with langgraph.json) needs to be in a GitHub repository. Make sure your langgraph.json references the correct graph entry point.

git init && git add . && git commit -m "initial agent"
gh repo create my-agent --public --source=. --push
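For reference, the langgraph.json for the example agent above might look like this (a sketch assuming the examples/chat-agent layout shown earlier; `dependencies`, `graphs`, and `env` are the standard fields, and the graph key must match the `assistantId` used in the Angular app):

```json
{
  "dependencies": ["."],
  "graphs": {
    "chat_agent": "./src/chat_agent/agent.py:graph"
  },
  "env": ".env"
}
```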
2. Deploy via LangSmith

Go to LangSmith Deployments and click + New Deployment. Connect your GitHub account, select your repository, and deploy. The first deployment takes about 15 minutes.

You'll receive a deployment URL like https://my-agent-abc123.langsmith.dev.

3. Update your Angular config

Point apiUrl to your deployment URL and set up environment-based configuration:

// environment.ts
export const environment = {
  langgraphUrl: 'http://localhost:2024', // dev
};
 
// environment.prod.ts
export const environment = {
  langgraphUrl: 'https://my-agent-abc123.langsmith.dev', // prod
};
 
// app.config.ts
provideStreamResource({
  apiUrl: environment.langgraphUrl,
})
4. Deploy your Angular app

Deploy your Angular frontend to any hosting platform — Vercel, Netlify, AWS, or your own infrastructure. Since streamResource() is a stateless client, your frontend has no server-side state requirements.

ng build --configuration production
# Deploy dist/ to your hosting platform
Stateless architecture

Your Angular app is a stateless client. All agent state — threads, checkpoints, memory — lives on LangGraph Platform. This means you can deploy your frontend anywhere (CDN, edge, SSR) without state management concerns. Scale your frontend independently of your agent infrastructure.

What's Next