- Metadata-Version: 2.3
- Name: openai
- Version: 2.11.0
- Summary: The official Python library for the openai API
- Project-URL: Homepage, https://github.com/openai/openai-python
- Project-URL: Repository, https://github.com/openai/openai-python
- Author-email: OpenAI <support@openai.com>
- License: Apache-2.0
- Classifier: Intended Audience :: Developers
- Classifier: License :: OSI Approved :: Apache Software License
- Classifier: Operating System :: MacOS
- Classifier: Operating System :: Microsoft :: Windows
- Classifier: Operating System :: OS Independent
- Classifier: Operating System :: POSIX
- Classifier: Operating System :: POSIX :: Linux
- Classifier: Programming Language :: Python :: 3.9
- Classifier: Programming Language :: Python :: 3.10
- Classifier: Programming Language :: Python :: 3.11
- Classifier: Programming Language :: Python :: 3.12
- Classifier: Programming Language :: Python :: 3.13
- Classifier: Programming Language :: Python :: 3.14
- Classifier: Topic :: Software Development :: Libraries :: Python Modules
- Classifier: Typing :: Typed
- Requires-Python: >=3.9
- Requires-Dist: anyio<5,>=3.5.0
- Requires-Dist: distro<2,>=1.7.0
- Requires-Dist: httpx<1,>=0.23.0
- Requires-Dist: jiter<1,>=0.10.0
- Requires-Dist: pydantic<3,>=1.9.0
- Requires-Dist: sniffio
- Requires-Dist: tqdm>4
- Requires-Dist: typing-extensions<5,>=4.11
- Provides-Extra: aiohttp
- Requires-Dist: aiohttp; extra == 'aiohttp'
- Requires-Dist: httpx-aiohttp>=0.1.9; extra == 'aiohttp'
- Provides-Extra: datalib
- Requires-Dist: numpy>=1; extra == 'datalib'
- Requires-Dist: pandas-stubs>=1.1.0.11; extra == 'datalib'
- Requires-Dist: pandas>=1.2.3; extra == 'datalib'
- Provides-Extra: realtime
- Requires-Dist: websockets<16,>=13; extra == 'realtime'
- Provides-Extra: voice-helpers
- Requires-Dist: numpy>=2.0.2; extra == 'voice-helpers'
- Requires-Dist: sounddevice>=0.5.1; extra == 'voice-helpers'
- Description-Content-Type: text/markdown
# OpenAI Python API library

<!-- prettier-ignore -->
[![PyPI version](https://img.shields.io/pypi/v/openai.svg)](https://pypi.org/project/openai/)
- The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.9+
- application. The library includes type definitions for all request params and response fields,
- and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
- It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
- ## Documentation
- The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs/api-reference). The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).
- ## Installation
```sh
# install from PyPI
pip install openai
```
- ## Usage
- The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).
- The primary API for interacting with OpenAI models is the [Responses API](https://platform.openai.com/docs/api-reference/responses). You can generate text from the model with the code below.
```python
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

response = client.responses.create(
    model="gpt-4o",
    instructions="You are a coding assistant that talks like a pirate.",
    input="How do I check if a Python object is an instance of a class?",
)

print(response.output_text)
```
- The previous standard (supported indefinitely) for generating text is the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use that API to generate text from the model with the code below.
```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "developer", "content": "Talk like a pirate."},
        {
            "role": "user",
            "content": "How do I check if a Python object is an instance of a class?",
        },
    ],
)

print(completion.choices[0].message.content)
```
- While you can provide an `api_key` keyword argument,
- we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
- to add `OPENAI_API_KEY="My API Key"` to your `.env` file
- so that your API key is not stored in source control.
- [Get an API key here](https://platform.openai.com/settings/organization/api-keys).
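For illustration, the core of what `python-dotenv` does can be sketched in a few lines of standard-library Python (a simplified sketch only; the real library also handles quoting rules, `export` prefixes, and variable interpolation):

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Minimal sketch of load_dotenv: read KEY=VALUE lines into
    os.environ without overwriting variables that are already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

After the environment variable is set this way, `OpenAI()` picks up `OPENAI_API_KEY` with no `api_key` argument needed.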
- ### Vision
- With an image URL:
```python
prompt = "What is in this image?"
img_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"{img_url}"},
            ],
        }
    ],
)
```
- With the image as a base64 encoded string:
```python
import base64
from openai import OpenAI

client = OpenAI()

prompt = "What is in this image?"
with open("path/to/image.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read()).decode("utf-8")

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"data:image/png;base64,{b64_image}"},
            ],
        }
    ],
)
```
- ## Async usage
- Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:
```python
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)


async def main() -> None:
    response = await client.responses.create(
        model="gpt-4o", input="Explain disestablishmentarianism to a smart five year old."
    )
    print(response.output_text)


asyncio.run(main())
```
- Functionality between the synchronous and asynchronous clients is otherwise identical.
- ### With aiohttp
- By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
- You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install openai[aiohttp]
```
- Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from openai import DefaultAioHttpClient
from openai import AsyncOpenAI


async def main() -> None:
    async with AsyncOpenAI(
        api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        chat_completion = await client.chat.completions.create(
            messages=[
                {
                    "role": "user",
                    "content": "Say this is a test",
                }
            ],
            model="gpt-4o",
        )


asyncio.run(main())
```
- ## Streaming responses
We provide support for streaming responses using Server-Sent Events (SSE).
```python
from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="gpt-4o",
    input="Write a one-sentence bedtime story about a unicorn.",
    stream=True,
)

for event in stream:
    print(event)
```
- The async client uses the exact same interface.
```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main():
    stream = await client.responses.create(
        model="gpt-4o",
        input="Write a one-sentence bedtime story about a unicorn.",
        stream=True,
    )
    async for event in stream:
        print(event)


asyncio.run(main())
```
- ## Realtime API
- The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a WebSocket connection.
- Under the hood the SDK uses the [`websockets`](https://websockets.readthedocs.io/en/stable/) library to manage connections.
- The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found [here](https://platform.openai.com/docs/api-reference/realtime-client-events) and a guide can be found [here](https://platform.openai.com/docs/guides/realtime).
- Basic text based example:
```py
import asyncio
from openai import AsyncOpenAI


async def main():
    client = AsyncOpenAI()

    async with client.realtime.connect(model="gpt-realtime") as connection:
        await connection.session.update(
            session={"type": "realtime", "output_modalities": ["text"]}
        )

        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello!"}],
            }
        )
        await connection.response.create()

        async for event in connection:
            if event.type == "response.output_text.delta":
                print(event.delta, flush=True, end="")
            elif event.type == "response.output_text.done":
                print()
            elif event.type == "response.done":
                break


asyncio.run(main())
```
However, the real magic of the Realtime API is handling audio inputs and outputs; see this [TUI script](https://github.com/openai/openai-python/blob/main/examples/realtime/push_to_talk_app.py) for a fully fledged example.
- ### Realtime error handling
- Whenever an error occurs, the Realtime API will send an [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle it yourself, as _no errors are raised directly_ by the SDK when an `error` event comes in.
```py
client = AsyncOpenAI()

async with client.realtime.connect(model="gpt-realtime") as connection:
    ...
    async for event in connection:
        if event.type == 'error':
            print(event.error.type)
            print(event.error.code)
            print(event.error.event_id)
            print(event.error.message)
```
- ## Using types
- Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- - Serializing back into JSON, `model.to_json()`
- - Converting to a dictionary, `model.to_dict()`
- Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
- ## Pagination
- List methods in the OpenAI API are paginated.
- This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```
- Or, asynchronously:
```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)


asyncio.run(main())
```
- Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```
- Or just work directly with the returned data:
```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)

print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
```
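Under the hood, cursor pagination like this amounts to feeding the last page's `after` cursor into the next request until no cursor remains. A rough sketch of that loop, with a stubbed `fetch_page` standing in for the API call (the stub's name and shape are illustrative, not part of the SDK):

```python
from typing import Optional, List, Tuple


def fetch_page(after: Optional[str] = None, limit: int = 2) -> Tuple[List[str], Optional[str]]:
    """Stub for a paginated list call: returns (items, next_cursor)."""
    data = ["job-1", "job-2", "job-3", "job-4", "job-5"]
    start = data.index(after) + 1 if after else 0
    items = data[start : start + limit]
    next_cursor = items[-1] if start + limit < len(data) else None
    return items, next_cursor


def list_all() -> List[str]:
    items, cursor = fetch_page()
    all_items = list(items)
    while cursor is not None:  # keep requesting until no `after` cursor remains
        items, cursor = fetch_page(after=cursor)
        all_items.extend(items)
    return all_items
```

The auto-paginating iterators shown earlier do this bookkeeping for you.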
- ## Nested params
- Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How much ?",
        }
    ],
    model="gpt-4o",
    response_format={"type": "json_object"},
)
```
- ## File uploads
Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.
```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

client.files.create(
    file=Path("input.jsonl"),
    purpose="fine-tune",
)
```
- The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
- ## Webhook Verification
- Verifying webhook signatures is _optional but encouraged_.
- For more information about webhooks, see [the API docs](https://platform.openai.com/docs/guides/webhooks).
- ### Parsing webhook payloads
- For most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap()`, which parses a webhook request and verifies that it was sent by OpenAI. This method will raise an error if the signature is invalid.
- Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `.unwrap()` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.
```python
from openai import OpenAI
from flask import Flask, request

app = Flask(__name__)
client = OpenAI()  # OPENAI_WEBHOOK_SECRET environment variable is used by default


@app.route("/webhook", methods=["POST"])
def webhook():
    request_body = request.get_data(as_text=True)

    try:
        event = client.webhooks.unwrap(request_body, request.headers)

        if event.type == "response.completed":
            print("Response completed:", event.data)
        elif event.type == "response.failed":
            print("Response failed:", event.data)
        else:
            print("Unhandled event type:", event.type)

        return "ok"
    except Exception as e:
        print("Invalid signature:", e)
        return "Invalid signature", 400


if __name__ == "__main__":
    app.run(port=8000)
```
- ### Verifying webhook payloads directly
- In some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verify_signature()` to _only verify_ the signature of a webhook request. Like `.unwrap()`, this method will raise an error if the signature is invalid.
- Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.
```python
import json
from openai import OpenAI
from flask import Flask, request

app = Flask(__name__)
client = OpenAI()  # OPENAI_WEBHOOK_SECRET environment variable is used by default


@app.route("/webhook", methods=["POST"])
def webhook():
    request_body = request.get_data(as_text=True)

    try:
        client.webhooks.verify_signature(request_body, request.headers)
        # Parse the body after verification
        event = json.loads(request_body)
        print("Verified event:", event)

        return "ok"
    except Exception as e:
        print("Invalid signature:", e)
        return "Invalid signature", 400


if __name__ == "__main__":
    app.run(port=8000)
```
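Conceptually, signature verification boils down to recomputing an HMAC over the raw request bytes and comparing it in constant time, which is why the unparsed body is required. A generic standard-library sketch of that idea (the exact header names and signing scheme OpenAI uses are handled for you by `verify_signature`; this helper is illustrative only):

```python
import hmac
import hashlib


def compute_signature(secret: bytes, raw_body: bytes) -> str:
    # HMAC-SHA256 over the exact bytes received; re-serializing the body
    # (e.g. json.loads then json.dumps) could change them and break the match.
    return hmac.new(secret, raw_body, hashlib.sha256).hexdigest()


def signatures_match(expected: str, received: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, received)
```

Note how even an insignificant whitespace change to the JSON body produces a different signature, which is why you must verify before parsing.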
- ## Handling errors
- When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.
- When the API returns a non-success status code (that is, 4xx or 5xx
- response), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.
- All errors inherit from `openai.APIError`.
```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-4o",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
- Error codes are as follows:
- | Status Code | Error Type |
- | ----------- | -------------------------- |
- | 400 | `BadRequestError` |
- | 401 | `AuthenticationError` |
- | 403 | `PermissionDeniedError` |
- | 404 | `NotFoundError` |
- | 422 | `UnprocessableEntityError` |
- | 429 | `RateLimitError` |
- | >=500 | `InternalServerError` |
- | N/A | `APIConnectionError` |
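The table above can be expressed as a simple lookup, e.g. when logging which exception class to expect for a given status (class names are kept as strings here so the sketch has no dependency on the SDK):

```python
def error_type_for_status(status_code: int) -> str:
    """Map an HTTP status code to the openai exception name per the table above."""
    exact = {
        400: "BadRequestError",
        401: "AuthenticationError",
        403: "PermissionDeniedError",
        404: "NotFoundError",
        422: "UnprocessableEntityError",
        429: "RateLimitError",
    }
    if status_code in exact:
        return exact[status_code]
    if status_code >= 500:
        return "InternalServerError"
    return "APIStatusError"  # fallback: the common base for other non-success statuses
```

In practice you catch the exception classes directly, as in the example above; the lookup just restates the table.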
- ## Request IDs
- > For more information on debugging requests, see [these docs](https://platform.openai.com/docs/api-reference/debugging-requests)
- All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.
```python
response = await client.responses.create(
    model="gpt-4o-mini",
    input="Say 'this is a test'.",
)
print(response._request_id)  # req_123
```
- Note that unlike other properties that use an `_` prefix, the `_request_id` property
- _is_ public. Unless documented otherwise, _all_ other `_` prefix properties,
- methods and modules are _private_.
- > [!IMPORTANT]
- > If you need to access request IDs for failed requests you must catch the `APIStatusError` exception
```python
import openai

try:
    completion = await client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4"
    )
except openai.APIStatusError as exc:
    print(exc.request_id)  # req_123
    raise exc
```
- ## Retries
- Certain errors are automatically retried 2 times by default, with a short exponential backoff.
- Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
- 429 Rate Limit, and >=500 Internal errors are all retried by default.
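"Short exponential backoff" means the delay roughly doubles with each attempt, usually capped and with random jitter so concurrent clients don't retry in lockstep. A generic sketch of that schedule (the constants here are illustrative, not the SDK's actual values):

```python
import random


def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    """Delay in seconds before retry number `attempt` (0-indexed):
    doubles each attempt, is capped at `cap`, plus up to 25% random jitter."""
    delay = min(cap, base * (2 ** attempt))
    return delay * (1 + random.random() * 0.25)
```

The SDK applies a schedule like this automatically; you only choose how many attempts to allow.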
- You can use the `max_retries` option to configure or disable retry settings:
```python
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in JavaScript?",
        }
    ],
    model="gpt-4o",
)
```
- ## Timeouts
- By default requests time out after 10 minutes. You can configure this with a `timeout` option,
- which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-4o",
)
```
- On timeout, an `APITimeoutError` is thrown.
- Note that requests that time out are [retried twice by default](https://github.com/openai/openai-python/tree/main/#retries).
- ## Advanced
- ### Logging
- We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
- You can enable logging by setting the environment variable `OPENAI_LOG` to `info`.
```shell
$ export OPENAI_LOG=info
```
- Or to `debug` for more verbose logging.
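Because the standard `logging` module is used, you can also configure it programmatically. This sketch assumes the SDK logs under the `"openai"` logger name (matching the package name); the `OPENAI_LOG` variable remains the documented switch:

```python
import logging

# Route log records to stderr with timestamps and logger names.
logging.basicConfig(format="%(asctime)s %(name)s %(levelname)s %(message)s")

# Assumption: the SDK's logger is named after the package.
logging.getLogger("openai").setLevel(logging.DEBUG)
```

This lets you direct the SDK's logs through whatever handlers the rest of your application already uses.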
- ### How to tell whether `None` means `null` or missing
- In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
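The same distinction exists at the raw JSON level, which is what `.model_fields_set` surfaces. With plain `json.loads` output it looks like this:

```python
import json

payload_missing = json.loads("{}")
payload_null = json.loads('{"my_field": null}')

# Both read back as None via .get(), so .get() alone cannot tell them apart:
assert payload_missing.get("my_field") is None
assert payload_null.get("my_field") is None

# But only one actually contains the key:
assert "my_field" not in payload_missing  # key absent entirely
assert "my_field" in payload_null  # key present, value null
```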
- ### Accessing raw response data (e.g. headers)
- The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.with_raw_response.create(
    messages=[{
        "role": "user",
        "content": "Say this is a test",
    }],
    model="gpt-4o",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
```
- These methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class as we're changing it slightly in the next major version.
- For the sync client this will mostly be the same with the exception
- of `content` & `text` will be methods instead of properties. In the
- async client, all methods will be async.
- A migration script will be provided & the migration in general should
- be smooth.
- #### `.with_streaming_response`
- The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
- To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
- As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.
```python
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
- The context manager is required so that the response will reliably be closed.
- ### Making custom/undocumented requests
- This library is typed for convenient access to the documented API.
- If you need to access undocumented endpoints, params, or response properties, the library can still be used.
- #### Undocumented endpoints
- To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
- http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))
```
- #### Undocumented request params
- If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
- options.
- #### Undocumented response properties
- To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
- can also get all the extra fields on the Pydantic model as a dict with
- [`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
- ### Configuring the HTTP client
- You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- - Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- - Custom [transports](https://www.python-httpx.org/advanced/transports/)
- - Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083/v1",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
- You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
- ### Managing HTTP resources
- By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from openai import OpenAI

with OpenAI() as client:
    # make requests here
    ...

# HTTP client is now closed
```
- ## Microsoft Azure OpenAI
- To use this library with [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview), use the `AzureOpenAI`
- class instead of the `OpenAI` class.
- > [!IMPORTANT]
- > The Azure API shape differs from the core API shape which means that the static types for responses / params
- > won't always be correct.
```py
from openai import AzureOpenAI

# gets the API Key from environment variable AZURE_OPENAI_API_KEY
client = AzureOpenAI(
    # https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning
    api_version="2023-07-01-preview",
    # https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
    azure_endpoint="https://example-endpoint.openai.azure.com",
)

completion = client.chat.completions.create(
    model="deployment-name",  # e.g. gpt-35-instant
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.to_json())
```
- In addition to the options provided in the base `OpenAI` client, the following options are provided:
- - `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)
- - `azure_deployment`
- - `api_version` (or the `OPENAI_API_VERSION` environment variable)
- - `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)
- - `azure_ad_token_provider`
- An example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found [here](https://github.com/openai/openai-python/blob/main/examples/azure_ad.py).
- ## Versioning
- This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
- 1. Changes that only affect static types, without breaking runtime behavior.
- 2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
- 3. Changes that we do not expect to impact the vast majority of users in practice.
- We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
- We are keen for your feedback; please open an [issue](https://www.github.com/openai/openai-python/issues) with questions, bugs, or suggestions.
- ### Determining the installed version
- If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
- You can determine the version that is being used at runtime with:
```py
import openai
print(openai.__version__)
```
- ## Requirements
- Python 3.9 or higher.
- ## Contributing
See [the contributing documentation](https://github.com/openai/openai-python/tree/main/CONTRIBUTING.md).