Metadata-Version: 2.3
Name: openai
Version: 2.11.0
Summary: The official Python library for the openai API
Project-URL: Homepage, https://github.com/openai/openai-python
Project-URL: Repository, https://github.com/openai/openai-python
Author-email: OpenAI <support@openai.com>
License: Apache-2.0
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: OS Independent
Classifier: Operating System :: POSIX
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.9
Requires-Dist: anyio<5,>=3.5.0
Requires-Dist: distro<2,>=1.7.0
Requires-Dist: httpx<1,>=0.23.0
Requires-Dist: jiter<1,>=0.10.0
Requires-Dist: pydantic<3,>=1.9.0
Requires-Dist: sniffio
Requires-Dist: tqdm>4
Requires-Dist: typing-extensions<5,>=4.11
Provides-Extra: aiohttp
Requires-Dist: aiohttp; extra == 'aiohttp'
Requires-Dist: httpx-aiohttp>=0.1.9; extra == 'aiohttp'
Provides-Extra: datalib
Requires-Dist: numpy>=1; extra == 'datalib'
Requires-Dist: pandas-stubs>=1.1.0.11; extra == 'datalib'
Requires-Dist: pandas>=1.2.3; extra == 'datalib'
Provides-Extra: realtime
Requires-Dist: websockets<16,>=13; extra == 'realtime'
Provides-Extra: voice-helpers
Requires-Dist: numpy>=2.0.2; extra == 'voice-helpers'
Requires-Dist: sounddevice>=0.5.1; extra == 'voice-helpers'
Description-Content-Type: text/markdown

# OpenAI Python API library

<!-- prettier-ignore -->
[![PyPI version](https://img.shields.io/pypi/v/openai.svg?label=pypi%20(stable))](https://pypi.org/project/openai/)

The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).

It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).

## Documentation

The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs/api-reference). The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).

## Installation

```sh
# install from PyPI
pip install openai
```

## Usage

The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).

The primary API for interacting with OpenAI models is the [Responses API](https://platform.openai.com/docs/api-reference/responses). You can generate text from the model with the code below.

```python
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

response = client.responses.create(
    model="gpt-4o",
    instructions="You are a coding assistant that talks like a pirate.",
    input="How do I check if a Python object is an instance of a class?",
)

print(response.output_text)
```

The previous standard (supported indefinitely) for generating text is the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use that API to generate text from the model with the code below.

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "developer", "content": "Talk like a pirate."},
        {
            "role": "user",
            "content": "How do I check if a Python object is an instance of a class?",
        },
    ],
)

print(completion.choices[0].message.content)
```

While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `OPENAI_API_KEY="My API Key"` to your `.env` file
so that your API key is not stored in source control.
[Get an API key here](https://platform.openai.com/settings/organization/api-keys).

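For example, a minimal sketch using python-dotenv (this assumes a `.env` file containing `OPENAI_API_KEY=...` in the working directory):

```python
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # loads variables from .env into the process environment

client = OpenAI()  # picks up OPENAI_API_KEY from the environment automatically
```
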
### Vision

With an image URL:

```python
prompt = "What is in this image?"
img_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"{img_url}"},
            ],
        }
    ],
)
```

With the image as a base64 encoded string:

```python
import base64
from openai import OpenAI

client = OpenAI()

prompt = "What is in this image?"
with open("path/to/image.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read()).decode("utf-8")

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"data:image/png;base64,{b64_image}"},
            ],
        }
    ],
)
```

## Async usage

Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:

```python
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)


async def main() -> None:
    response = await client.responses.create(
        model="gpt-4o", input="Explain disestablishmentarianism to a smart five year old."
    )
    print(response.output_text)


asyncio.run(main())
```

Functionality between the synchronous and asynchronous clients is otherwise identical.

### With aiohttp

By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.

You can enable this by installing `aiohttp`:

```sh
# install from PyPI
pip install openai[aiohttp]
```

Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:

```python
import os
import asyncio
from openai import DefaultAioHttpClient
from openai import AsyncOpenAI


async def main() -> None:
    async with AsyncOpenAI(
        api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        chat_completion = await client.chat.completions.create(
            messages=[
                {
                    "role": "user",
                    "content": "Say this is a test",
                }
            ],
            model="gpt-4o",
        )


asyncio.run(main())
```

## Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

```python
from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="gpt-4o",
    input="Write a one-sentence bedtime story about a unicorn.",
    stream=True,
)

for event in stream:
    print(event)
```

The async client uses the exact same interface.

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main():
    stream = await client.responses.create(
        model="gpt-4o",
        input="Write a one-sentence bedtime story about a unicorn.",
        stream=True,
    )
    async for event in stream:
        print(event)


asyncio.run(main())
```

## Realtime API

The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a WebSocket connection.

Under the hood the SDK uses the [`websockets`](https://websockets.readthedocs.io/en/stable/) library to manage connections.

The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found [here](https://platform.openai.com/docs/api-reference/realtime-client-events) and a guide can be found [here](https://platform.openai.com/docs/guides/realtime).

Basic text based example:

```py
import asyncio
from openai import AsyncOpenAI


async def main():
    client = AsyncOpenAI()

    async with client.realtime.connect(model="gpt-realtime") as connection:
        await connection.session.update(
            session={"type": "realtime", "output_modalities": ["text"]}
        )

        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello!"}],
            }
        )
        await connection.response.create()

        async for event in connection:
            if event.type == "response.output_text.delta":
                print(event.delta, flush=True, end="")
            elif event.type == "response.output_text.done":
                print()
            elif event.type == "response.done":
                break


asyncio.run(main())
```

However, the real magic of the Realtime API is handling audio inputs and outputs; see this example [TUI script](https://github.com/openai/openai-python/blob/main/examples/realtime/push_to_talk_app.py) for a fully fledged example.

### Realtime error handling

Whenever an error occurs, the Realtime API will send an [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle it yourself, as _no errors are raised directly_ by the SDK when an `error` event comes in.

```py
client = AsyncOpenAI()

async with client.realtime.connect(model="gpt-realtime") as connection:
    ...
    async for event in connection:
        if event.type == "error":
            print(event.error.type)
            print(event.error.code)
            print(event.error.event_id)
            print(event.error.message)
```

## Using types

Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:

- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.

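A minimal sketch of those helpers (this assumes `response` is the result of an earlier `client.responses.create(...)` call):

```python
print(response.to_json())  # serialize the Pydantic model back into a JSON string
print(response.to_dict())  # convert the model into a plain dictionary
```
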
## Pagination

List methods in the OpenAI API are paginated.

This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```

Or, asynchronously:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)


asyncio.run(main())
```

Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```

Or just work directly with the returned data:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)

print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
```

## Nested params

Nested parameters are dictionaries, typed using `TypedDict`, for example:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    input=[
        {
            "role": "user",
            "content": "How much?",
        }
    ],
    model="gpt-4o",
    text={"format": {"type": "json_object"}},
)
```

## File uploads

Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

client.files.create(
    file=Path("input.jsonl"),
    purpose="fine-tune",
)
```

The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.

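A minimal sketch of the async equivalent (the `.jsonl` path is a placeholder):

```python
import asyncio
from pathlib import Path
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    # File contents are read asynchronously when a PathLike is passed
    await client.files.create(
        file=Path("input.jsonl"),
        purpose="fine-tune",
    )


asyncio.run(main())
```
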
## Webhook Verification

Verifying webhook signatures is _optional but encouraged_.

For more information about webhooks, see [the API docs](https://platform.openai.com/docs/guides/webhooks).

### Parsing webhook payloads

For most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap()`, which parses a webhook request and verifies that it was sent by OpenAI. This method will raise an error if the signature is invalid.

Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `.unwrap()` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.

```python
from openai import OpenAI
from flask import Flask, request

app = Flask(__name__)
client = OpenAI()  # OPENAI_WEBHOOK_SECRET environment variable is used by default


@app.route("/webhook", methods=["POST"])
def webhook():
    request_body = request.get_data(as_text=True)

    try:
        event = client.webhooks.unwrap(request_body, request.headers)

        if event.type == "response.completed":
            print("Response completed:", event.data)
        elif event.type == "response.failed":
            print("Response failed:", event.data)
        else:
            print("Unhandled event type:", event.type)

        return "ok"
    except Exception as e:
        print("Invalid signature:", e)
        return "Invalid signature", 400


if __name__ == "__main__":
    app.run(port=8000)
```

### Verifying webhook payloads directly

In some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verify_signature()` to _only verify_ the signature of a webhook request. Like `.unwrap()`, this method will raise an error if the signature is invalid.

Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.

```python
import json
from openai import OpenAI
from flask import Flask, request

app = Flask(__name__)
client = OpenAI()  # OPENAI_WEBHOOK_SECRET environment variable is used by default


@app.route("/webhook", methods=["POST"])
def webhook():
    request_body = request.get_data(as_text=True)

    try:
        client.webhooks.verify_signature(request_body, request.headers)

        # Parse the body after verification
        event = json.loads(request_body)
        print("Verified event:", event)

        return "ok"
    except Exception as e:
        print("Invalid signature:", e)
        return "Invalid signature", 400


if __name__ == "__main__":
    app.run(port=8000)
```

## Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.

When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.

All errors inherit from `openai.APIError`.

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-4o",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```

Error codes are as follows:

| Status Code | Error Type                 |
| ----------- | -------------------------- |
| 400         | `BadRequestError`          |
| 401         | `AuthenticationError`      |
| 403         | `PermissionDeniedError`    |
| 404         | `NotFoundError`            |
| 422         | `UnprocessableEntityError` |
| 429         | `RateLimitError`           |
| >=500       | `InternalServerError`      |
| N/A         | `APIConnectionError`       |

## Request IDs

> For more information on debugging requests, see [these docs](https://platform.openai.com/docs/api-reference/debugging-requests)

All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.

```python
response = await client.responses.create(
    model="gpt-4o-mini",
    input="Say 'this is a test'.",
)
print(response._request_id)  # req_123
```

Note that unlike other properties that use an `_` prefix, the `_request_id` property
_is_ public. Unless documented otherwise, _all_ other `_` prefix properties,
methods and modules are _private_.

> [!IMPORTANT]
> If you need to access request IDs for failed requests you must catch the `APIStatusError` exception

```python
import openai

try:
    completion = await client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4"
    )
except openai.APIStatusError as exc:
    print(exc.request_id)  # req_123
    raise exc
```

## Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the `max_retries` option to configure or disable retry settings:

```python
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in JavaScript?",
        }
    ],
    model="gpt-4o",
)
```

## Timeouts

By default requests time out after 10 minutes. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:

```python
import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-4o",
)
```

On timeout, an `APITimeoutError` is thrown.

Note that requests that time out are [retried twice by default](https://github.com/openai/openai-python/tree/main/#retries).

## Advanced

### Logging

We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.

You can enable logging by setting the environment variable `OPENAI_LOG` to `info`.

```shell
$ export OPENAI_LOG=info
```

Or to `debug` for more verbose logging.

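Because the SDK uses the standard `logging` module, you can also configure it directly from Python. A minimal sketch, assuming the SDK's logger follows the package name (`openai`); the handler setup here is illustrative, not part of the SDK:

```python
import logging

# Route log records to stderr with a basic format
logging.basicConfig(level=logging.INFO)

# Turn up verbosity for the SDK's logger only
logging.getLogger("openai").setLevel(logging.DEBUG)
```
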
### How to tell whether `None` means `null` or missing

In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:

```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```

### Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,

```py
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.with_raw_response.create(
    messages=[{
        "role": "user",
        "content": "Say this is a test",
    }],
    model="gpt-4o",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
```

These methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class as we're changing it slightly in the next major version.

For the sync client this will mostly be the same, with the exception
that `content` & `text` will be methods instead of properties. In the
async client, all methods will be async.

A migration script will be provided & the migration in general should
be smooth.

#### `.with_streaming_response`

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.

As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.

```python
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```

The context manager is required so that the response will reliably be closed.

### Making custom/undocumented requests

This library is typed for convenient access to the documented API.

If you need to access undocumented endpoints, params, or response properties, the library can still be used.

#### Undocumented endpoints

To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
HTTP verbs. Options on the client will be respected (such as retries) when making this request.

```py
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))
```

#### Undocumented request params

If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.

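A minimal sketch (the param and header names here are hypothetical, purely for illustration):

```py
response = client.responses.create(
    model="gpt-4o",
    input="Say this is a test",
    extra_headers={"x-my-header": "value"},  # hypothetical header
    extra_query={"my_query_param": "value"},  # hypothetical query param
    extra_body={"my_undocumented_param": True},  # hypothetical body param
)
```
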
#### Undocumented response properties

To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).

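For example (`unknown_prop` is a hypothetical field that is not part of the documented schema):

```py
print(response.unknown_prop)  # access a single undocumented field by attribute
print(response.model_extra)   # all undocumented fields, as a dict
```
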
### Configuring the HTTP client

You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:

- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality

```python
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083/v1",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```

You can also customize the client on a per-request basis by using `with_options()`:

```python
client.with_options(http_client=DefaultHttpxClient(...))
```

### Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.

```py
from openai import OpenAI

with OpenAI() as client:
    # make requests here
    ...

# HTTP client is now closed
```

## Microsoft Azure OpenAI

To use this library with [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview), use the `AzureOpenAI`
class instead of the `OpenAI` class.

> [!IMPORTANT]
> The Azure API shape differs from the core API shape which means that the static types for responses / params
> won't always be correct.

```py
from openai import AzureOpenAI

# gets the API Key from environment variable AZURE_OPENAI_API_KEY
client = AzureOpenAI(
    # https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning
    api_version="2023-07-01-preview",
    # https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
    azure_endpoint="https://example-endpoint.openai.azure.com",
)

completion = client.chat.completions.create(
    model="deployment-name",  # e.g. gpt-35-instant
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.to_json())
```

In addition to the options provided in the base `OpenAI` client, the following options are provided:

- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)
- `azure_deployment`
- `api_version` (or the `OPENAI_API_VERSION` environment variable)
- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)
- `azure_ad_token_provider`

An example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found [here](https://github.com/openai/openai-python/blob/main/examples/azure_ad.py).

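A minimal sketch of that pattern, assuming the [`azure-identity`](https://pypi.org/project/azure-identity/) package is installed (the endpoint and API version below are placeholders):

```py
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Fetches short-lived bearer tokens for the Cognitive Services scope on demand
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://example-endpoint.openai.azure.com",
    azure_ad_token_provider=token_provider,
)
```
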
## Versioning

This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:

1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an [issue](https://www.github.com/openai/openai-python/issues) with questions, bugs, or suggestions.

### Determining the installed version

If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.

You can determine the version that is being used at runtime with:

```py
import openai

print(openai.__version__)
```

## Requirements

Python 3.9 or higher.

## Contributing

See [the contributing documentation](https://github.com/openai/openai-python/tree/main/CONTRIBUTING.md).