Run a Local Server

Status: ACTIVE (pulled from docs.langchain.com)
Source: https://docs.langchain.com/oss/python/langgraph/local-server
Timestamp: 2026-05-11

Run a LangGraph application locally for development and testing.

Steps

1. Install the LangGraph CLI

pip install -U "langgraph-cli[inmem]"

2. Create a LangGraph App

langgraph new path/to/your/app --template new-langgraph-project-python

3. Install Dependencies

cd path/to/your/app
pip install -e .

4. Create a .env File

LANGSMITH_API_KEY=lsv2...

Optionally, set LANGSMITH_TRACING=false to disable tracing so no data leaves your machine.
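The server reads these values from the environment at startup. As a sanity check on the file format, here is a minimal sketch of how simple KEY=value lines in a .env file map to environment variables (a hand-rolled parser for illustration only; libraries such as python-dotenv handle quoting and edge cases more robustly):

```python
import os
import tempfile

def load_env(path: str) -> dict:
    """Parse simple KEY=value lines from a .env file, skipping blanks and # comments."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Illustrative .env contents; the API key below is a placeholder, not a real key.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("LANGSMITH_API_KEY=lsv2-placeholder\nLANGSMITH_TRACING=false\n")
    path = f.name

env = load_env(path)
os.environ.update(env)  # make the values visible to the current process
print(env["LANGSMITH_TRACING"])
```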

5. Launch Agent Server

langgraph dev

Output:

- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- 📚 API Docs: http://127.0.0.1:2024/docs

This in-memory server is for development. For production, use LangSmith Deployment.

6. Test the API

Python SDK (async):

import asyncio

from langgraph_sdk import get_client

client = get_client(url="http://localhost:2024")

async def main():
    async for chunk in client.runs.stream(
        None,  # threadless run
        "agent",  # name of the assistant defined in langgraph.json
        input={"messages": [{"role": "human", "content": "What is LangGraph?"}]},
    ):
        print(chunk.data)

asyncio.run(main())

Python SDK (sync):

from langgraph_sdk import get_sync_client

client = get_sync_client(url="http://localhost:2024")
for chunk in client.runs.stream(
    None,  # threadless run
    "agent",  # name of the assistant defined in langgraph.json
    input={"messages": [{"role": "human", "content": "What is LangGraph?"}]},
    stream_mode="messages-tuple",
):
    print(chunk.data)

REST API:

curl -s --request POST \
    --url "http://localhost:2024/runs/stream" \
    --header 'Content-Type: application/json' \
    --data '{
        "assistant_id": "agent",
        "input": {"messages": [{"role": "human", "content": "What is LangGraph?"}]},
        "stream_mode": "messages-tuple"
    }'
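The stream endpoint responds with Server-Sent Events (`event:` and `data:` lines separated by blank lines). Below is a minimal sketch of splitting such a body into (event, data) pairs, run here against a hard-coded sample string rather than a live server; the sample payloads are hypothetical, not actual server output:

```python
import json

def parse_sse(text: str):
    """Split an SSE body into (event, data) pairs; data is parsed as JSON when possible."""
    events = []
    event, data_lines = None, []
    for line in text.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif not line:  # a blank line terminates one event
            if event or data_lines:
                raw = "\n".join(data_lines)
                try:
                    data = json.loads(raw)
                except json.JSONDecodeError:
                    data = raw
                events.append((event, data))
            event, data_lines = None, []
    return events

# Hypothetical sample resembling a streamed run:
sample = (
    "event: metadata\n"
    'data: {"run_id": "123"}\n'
    "\n"
    "event: messages\n"
    'data: [{"content": "LangGraph is ..."}, {}]\n'
    "\n"
)
for event, data in parse_sse(sample):
    print(event, data)
```

The Python SDK's stream methods shown above do this bookkeeping for you; parsing by hand is only needed when calling the REST API directly.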