Metadata-Version: 2.4
Name: langgraph-sdk
Version: 0.3.0
Summary: SDK for interacting with LangGraph API
Project-URL: Source, https://github.com/langchain-ai/langgraph/tree/main/libs/sdk-py
Project-URL: Twitter, https://x.com/LangChainAI
Project-URL: Slack, https://www.langchain.com/join-community
Project-URL: Reddit, https://www.reddit.com/r/LangChain/
License-Expression: MIT
License-File: LICENSE
Requires-Python: >=3.10
Requires-Dist: httpx>=0.25.2
Requires-Dist: orjson>=3.10.1
Description-Content-Type: text/markdown
# LangGraph Python SDK

This repository contains the Python SDK for interacting with the LangSmith Deployment REST API.
## Quick Start

To get started with the Python SDK, [install the package](https://pypi.org/project/langgraph-sdk/):

```bash
pip install -U langgraph-sdk
```
You will need a running LangGraph API server. If you're running a server locally using `langgraph-cli`, the SDK will automatically point at `http://localhost:8123`; otherwise, you will need to specify the server URL when creating a client.
```python
from langgraph_sdk import get_client

# If you're using a remote server, initialize the client with `get_client(url=REMOTE_URL)`
client = get_client()

# List all assistants
assistants = await client.assistants.search()

# We auto-create an assistant for each graph you register in config.
agent = assistants[0]

# Start a new thread
thread = await client.threads.create()

# Start a streaming run
input = {"messages": [{"role": "human", "content": "what's the weather in la"}]}
async for chunk in client.runs.stream(thread["thread_id"], agent["assistant_id"], input=input):
    print(chunk)
```