@@ -349,13 +349,13 @@ Additionally, PaddleX provides three other deployment methods, detailed as follo

🚀 <b>High-Performance Inference</b>: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.md).

-☁️ <b>Service-Oriented Deployment</b>: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.en.md).
+☁️ <b>Serving</b>: Serving is a common deployment strategy in real-world production environments. By encapsulating inference functions into services, clients can access these services via network requests to obtain inference results. PaddleX supports various solutions for serving pipelines. For detailed pipeline serving procedures, please refer to the [PaddleX Pipeline Serving Guide](../../../pipeline_deploy/serving.md).

-Below are the API references and multi-language service invocation examples:
+Below are the API reference and multi-language service invocation examples for the basic serving solution:

<details><summary>API Reference</summary>

-<p>For main operations provided by the service:</p>
+<p>For primary operations provided by the service:</p>
<ul>
<li>The HTTP request method is POST.</li>
<li>The request body and the response body are both JSON data (JSON objects).</li>
@@ -421,7 +421,7 @@ Below are the API references and multi-language service invocation examples:
</tr>
</tbody>
</table>
-<p>Main operations provided by the service:</p>
+<p>Primary operations provided by the service:</p>
<ul>
<li><b><code>infer</code></b></li>
</ul>
@@ -441,41 +441,44 @@ Below are the API references and multi-language service invocation examples:
</thead>
<tbody>
<tr>
-<td><code>image</code></td>
+<td><code>file</code></td>
<td><code>string</code></td>
-<td>The URL of an image file accessible by the service or the Base64 encoded result of the image file content.</td>
+<td>The URL of an image file or PDF file accessible by the server, or the Base64 encoded result of the content of the above-mentioned file types. For PDF files with more than 10 pages, only the content of the first 10 pages will be used.</td>
<td>Yes</td>
</tr>
<tr>
-<td><code>inferenceParams</code></td>
-<td><code>object</code></td>
-<td>Inference parameters.</td>
+<td><code>fileType</code></td>
+<td><code>integer</code></td>
+<td>File type. <code>0</code> indicates a PDF file, and <code>1</code> indicates an image file. If this property is not present in the request body, the file type will be inferred based on the URL.</td>
<td>No</td>
</tr>
</tbody>
</table>
-<p>Properties of <code>inferenceParams</code>:</p>
+<ul>
+<li>When the request is processed successfully, the <code>result</code> of the response body has the following properties:</li>
+</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
-<th>Required</th>
</tr>
</thead>
<tbody>
<tr>
-<td><code>maxLongSide</code></td>
-<td><code>integer</code></td>
-<td>During inference, if the length of the longer side of the input image for the text detection model is greater than <code>maxLongSide</code>, the image will be scaled so that the length of the longer side equals <code>maxLongSide</code>.</td>
-<td>No</td>
+<td><code>tableRecResults</code></td>
+<td><code>array</code></td>
+<td>Table recognition results. The array length is 1 (for image input) or the smaller of the number of document pages and 10 (for PDF input). For PDF input, each element in the array represents the processing result of one page of the PDF file.</td>
+</tr>
+<tr>
+<td><code>dataInfo</code></td>
+<td><code>object</code></td>
+<td>Information about the input data.</td>
</tr>
</tbody>
</table>
-<ul>
-<li>When the request is processed successfully, the <code>result</code> of the response body has the following properties:</li>
-</ul>
+<p>Each element in <code>tableRecResults</code> is an <code>object</code> with the following properties:</p>
<table>
<thead>
<tr>
@@ -535,397 +538,28 @@ Below are the API references and multi-language service invocation examples:
import requests

API_URL = "http://localhost:8080/table-recognition"
-image_path = "./demo.jpg"
-ocr_image_path = "./ocr.jpg"
-layout_image_path = "./layout.jpg"
+file_path = "./demo.jpg"

-with open(image_path, "rb") as file:
-    image_bytes = file.read()
-    image_data = base64.b64encode(image_bytes).decode("ascii")
+with open(file_path, "rb") as file:
+    file_bytes = file.read()
+    file_data = base64.b64encode(file_bytes).decode("ascii")

-payload = {"image": image_data}
+payload = {"file": file_data, "fileType": 1}

response = requests.post(API_URL, json=payload)

assert response.status_code == 200
result = response.json()["result"]
-with open(ocr_image_path, "wb") as file:
-    file.write(base64.b64decode(result["ocrImage"]))
-print(f"Output image saved at {ocr_image_path}")
-with open(layout_image_path, "wb") as file:
-    file.write(base64.b64decode(result["layoutImage"]))
-print(f"Output image saved at {layout_image_path}")
-print("\nDetected tables:")
-print(result["tables"])
-</code></pre></details>
-
-<details><summary>C++</summary>
-
-<pre><code class="language-cpp">#include <iostream>
-#include "cpp-httplib/httplib.h" // https://github.com/Huiyicc/cpp-httplib
-#include "nlohmann/json.hpp" // https://github.com/nlohmann/json
-#include "base64.hpp" // https://github.com/tobiaslocker/base64
-
-int main() {
-    httplib::Client client("localhost:8080");
-    const std::string imagePath = "./demo.jpg";
-    const std::string ocrImagePath = "./ocr.jpg";
-    const std::string layoutImagePath = "./layout.jpg";
-
-    httplib::Headers headers = {
-        {"Content-Type", "application/json"}
-    };
-
-    std::ifstream file(imagePath, std::ios::binary | std::ios::ate);
-    std::streamsize size = file.tellg();
-    file.seekg(0, std::ios::beg);
-
-    std::vector<char> buffer(size);
-    if (!file.read(buffer.data(), size)) {
-        std::cerr << "Error reading file." << std::endl;
-        return 1;
-    }
-    std::string bufferStr(reinterpret_cast<const char*>(buffer.data()), buffer.size());
-    std::string encodedImage = base64::to_base64(bufferStr);
-
-    nlohmann::json jsonObj;
-    jsonObj["image"] = encodedImage;
-    std::string body = jsonObj.dump();
-
-    auto response = client.Post("/table-recognition", headers, body, "application/json");
-
-    if (response && response->status == 200) {
-        nlohmann::json jsonResponse = nlohmann::json::parse(response->body);
-        auto result = jsonResponse["result"];
-
-        encodedImage = result["ocrImage"];
-        std::string decoded_string = base64::from_base64(encodedImage);
-        std::vector<unsigned char> decodedOcrImage(decoded_string.begin(), decoded_string.end());
-        std::ofstream outputOcrFile(ocrImagePath, std::ios::binary | std::ios::out);
-        if (outputOcrFile.is_open()) {
-            outputOcrFile.write(reinterpret_cast<char*>(decodedOcrImage.data()), decodedOcrImage.size());
-            outputOcrFile.close();
-            std::cout << "Output image saved at " << ocrImagePath << std::endl;
-        } else {
-            std::cerr << "Unable to open file for writing: " << ocrImagePath << std::endl;
-        }
-
-        encodedImage = result["layoutImage"];
-        decodedString = base64::from_base64(encodedImage);
-        std::vector<unsigned char> decodedLayoutImage(decodedString.begin(), decodedString.end());
-        std::ofstream outputLayoutFile(layoutImagePath, std::ios::binary | std::ios::out);
-        if (outputLayoutFile.is_open()) {
-            outputLayoutFile.write(reinterpret_cast<char*>(decodedLayoutImage.data()), decodedlayoutImage.size());
-            outputLayoutFile.close();
-            std::cout << "Output image saved at " << layoutImagePath << std::endl;
-        } else {
-            std::cerr << "Unable to open file for writing: " << layoutImagePath << std::endl;
-        }
-
-        auto tables = result["tables"];
-        std::cout << "\nDetected tables:" << std::endl;
-        for (const auto& table : tables) {
-            std::cout << table << std::endl;
-        }
-    } else {
-        std::cout << "Failed to send HTTP request." << std::endl;
-        return 1;
-    }
-
-    return 0;
-}
-</code></pre></details>
-
-<details><summary>Java</summary>
-
-<pre><code class="language-java">import okhttp3.*;
-import com.fasterxml.jackson.databind.ObjectMapper;
-import com.fasterxml.jackson.databind.JsonNode;
-import com.fasterxml.jackson.databind.node.ObjectNode;
-
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.util.Base64;
-
-public class Main {
-    public static void main(String[] args) throws IOException {
-        String API_URL = "http://localhost:8080/table-recognition";
-        String imagePath = "./demo.jpg";
-        String ocrImagePath = "./ocr.jpg";
-        String layoutImagePath = "./layout.jpg";
-
-        File file = new File(imagePath);
-        byte[] fileContent = java.nio.file.Files.readAllBytes(file.toPath());
-        String imageData = Base64.getEncoder().encodeToString(fileContent);
-
-        ObjectMapper objectMapper = new ObjectMapper();
-        ObjectNode params = objectMapper.createObjectNode();
-        params.put("image", imageData);
-
-        OkHttpClient client = new OkHttpClient();
-        MediaType JSON = MediaType.Companion.get("application/json; charset=utf-8");
-        RequestBody body = RequestBody.Companion.create(params.toString(), JSON);
-        Request request = new Request.Builder()
-                .url(API_URL)
-                .post(body)
-                .build();
-
-        try (Response response = client.newCall(request).execute()) {
-            if (response.isSuccessful()) {
-                String responseBody = response.body().string();
-                JsonNode resultNode = objectMapper.readTree(responseBody);
-                JsonNode result = resultNode.get("result");
-                String ocrBase64Image = result.get("ocrImage").asText();
-                String layoutBase64Image = result.get("layoutImage").asText();
-                JsonNode tables = result.get("tables");
-
-                byte[] imageBytes = Base64.getDecoder().decode(ocrBase64Image);
-                try (FileOutputStream fos = new FileOutputStream(ocrImagePath)) {
-                    fos.write(imageBytes);
-                }
-                System.out.println("Output image saved at " + ocrBase64Image);
-
-                imageBytes = Base64.getDecoder().decode(layoutBase64Image);
-                try (FileOutputStream fos = new FileOutputStream(layoutImagePath)) {
-                    fos.write(imageBytes);
-                }
-                System.out.println("Output image saved at " + layoutImagePath);
-
-                System.out.println("\nDetected tables: " + tables.toString());
-            } else {
-                System.err.println("Request failed with code: " + response.code());
-            }
-        }
-    }
-}
-</code></pre></details>
-
-<details><summary>Go</summary>
-
-<pre><code class="language-go">package main
-
-import (
-    "bytes"
-    "encoding/base64"
-    "encoding/json"
-    "fmt"
-    "io/ioutil"
-    "net/http"
-)
-
-func main() {
-    API_URL := "http://localhost:8080/table-recognition"
-    imagePath := "./demo.jpg"
-    ocrImagePath := "./ocr.jpg"
-    layoutImagePath := "./layout.jpg"
-
-    imageBytes, err := ioutil.ReadFile(imagePath)
-    if err != nil {
-        fmt.Println("Error reading image file:", err)
-        return
-    }
-    imageData := base64.StdEncoding.EncodeToString(imageBytes)
-
-    payload := map[string]string{"image": imageData}
-    payloadBytes, err := json.Marshal(payload)
-    if err != nil {
-        fmt.Println("Error marshaling payload:", err)
-        return
-    }
-
-    client := &http.Client{}
-    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
-    if err != nil {
-        fmt.Println("Error creating request:", err)
-        return
-    }
-
-    res, err := client.Do(req)
-    if err != nil {
-        fmt.Println("Error sending request:", err)
-        return
-    }
-    defer res.Body.Close()
-
-    body, err := ioutil.ReadAll(res.Body)
-    if err != nil {
-        fmt.Println("Error reading response body:", err)
-        return
-    }
-    type Response struct {
-        Result struct {
-            OcrImage    string                   `json:"ocrImage"`
-            LayoutImage string                   `json:"layoutImage"`
-            Tables      []map[string]interface{} `json:"tables"`
-        } `json:"result"`
-    }
-    var respData Response
-    err = json.Unmarshal([]byte(string(body)), &respData)
-    if err != nil {
-        fmt.Println("Error unmarshaling response body:", err)
-        return
-    }
-
-    ocrImageData, err := base64.StdEncoding.DecodeString(respData.Result.OcrImage)
-    if err != nil {
-        fmt.Println("Error decoding base64 image data:", err)
-        return
-    }
-    err = ioutil.WriteFile(ocrImagePath, ocrImageData, 0644)
-    if err != nil {
-        fmt.Println("Error writing image to file:", err)
-        return
-    }
-    fmt.Printf("Image saved at %s.jpg\n", ocrImagePath)
-
-    layoutImageData, err := base64.StdEncoding.DecodeString(respData.Result.LayoutImage)
-    if err != nil {
-        fmt.Println("Error decoding base64 image data:", err)
-        return
-    }
-    err = ioutil.WriteFile(layoutImagePath, layoutImageData, 0644)
-    if err != nil {
-        fmt.Println("Error writing image to file:", err)
-        return
-    }
-    fmt.Printf("Image saved at %s.jpg\n", layoutImagePath)
-
-    fmt.Println("\nDetected tables:")
-    for _, table := range respData.Result.Tables {
-        fmt.Println(table)
-    }
-}
-</code></pre></details>
-
-<details><summary>C#</summary>
-
-<pre><code class="language-csharp">using System;
-using System.IO;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json.Linq;
-
-class Program
-{
-    static readonly string API_URL = "http://localhost:8080/table-recognition";
-    static readonly string imagePath = "./demo.jpg";
-    static readonly string ocrImagePath = "./ocr.jpg";
-    static readonly string layoutImagePath = "./layout.jpg";
-
-    static async Task Main(string[] args)
-    {
-        var httpClient = new HttpClient();
-
-        byte[] imageBytes = File.ReadAllBytes(imagePath);
-        string image_data = Convert.ToBase64String(imageBytes);
-
-        var payload = new JObject{ { "image", image_data } };
-        var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");
-
-        HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
-        response.EnsureSuccessStatusCode();
-
-        string responseBody = await response.Content.ReadAsStringAsync();
-        JObject jsonResponse = JObject.Parse(responseBody);
-
-        string ocrBase64Image = jsonResponse["result"]["ocrImage"].ToString();
-        byte[] ocrImageBytes = Convert.FromBase64String(ocrBase64Image);
-        File.WriteAllBytes(ocrImagePath, ocrImageBytes);
-        Console.WriteLine($"Output image saved at {ocrImagePath}");
-
-        string layoutBase64Image = jsonResponse["result"]["layoutImage"].ToString();
-        byte[] layoutImageBytes = Convert.FromBase64String(layoutBase64Image);
-        File.WriteAllBytes(layoutImagePath, layoutImageBytes);
-        Console.WriteLine($"Output image saved at {layoutImagePath}");
-
-        Console.WriteLine("\nDetected tables:");
-        Console.WriteLine(jsonResponse["result"]["tables"].ToString());
-    }
-}
-</code></pre></details>
-
-<details><summary>Node.js</summary>
-
-<pre><code class="language-js">const axios = require('axios');
-const fs = require('fs');
-
-const API_URL = 'http://localhost:8080/table-recognition'
-const imagePath = './demo.jpg'
-const ocrImagePath = "./ocr.jpg";
-const layoutImagePath = "./layout.jpg";
-
-let config = {
-    method: 'POST',
-    maxBodyLength: Infinity,
-    url: API_URL,
-    data: JSON.stringify({
-        'image': encodeImageToBase64(imagePath)
-    })
-};
-
-function encodeImageToBase64(filePath) {
-    const bitmap = fs.readFileSync(filePath);
-    return Buffer.from(bitmap).toString('base64');
-}
-
-axios.request(config)
-.then((response) => {
-    const result = response.data["result"];
-
-    const imageBuffer = Buffer.from(result["ocrImage"], 'base64');
-    fs.writeFile(ocrImagePath, imageBuffer, (err) => {
-        if (err) throw err;
-        console.log(`Output image saved at ${ocrImagePath}`);
-    });
-
-    imageBuffer = Buffer.from(result["layoutImage"], 'base64');
-    fs.writeFile(layoutImagePath, imageBuffer, (err) => {
-        if (err) throw err;
-        console.log(`Output image saved at ${layoutImagePath}`);
-    });
-
-    console.log("\nDetected tables:");
-    console.log(result["tables"]);
-})
-.catch((error) => {
-    console.log(error);
-});
-</code></pre></details>
-
-<details><summary>PHP</summary>
-
-<pre><code class="language-php"><?php
-
-$API_URL = "http://localhost:8080/table-recognition";
-$image_path = "./demo.jpg";
-$ocr_image_path = "./ocr.jpg";
-$layout_image_path = "./layout.jpg";
-
-$image_data = base64_encode(file_get_contents($image_path));
-$payload = array("image" => $image_data);
-
-$ch = curl_init($API_URL);
-curl_setopt($ch, CURLOPT_POST, true);
-curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
-curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
-curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
-$response = curl_exec($ch);
-curl_close($ch);
-
-$result = json_decode($response, true)["result"];
-file_put_contents($ocr_image_path, base64_decode($result["ocrImage"]));
-echo "Output image saved at " . $ocr_image_path . "\n";
-
-file_put_contents($layout_image_path, base64_decode($result["layoutImage"]));
-echo "Output image saved at " . $layout_image_path . "\n";
-
-echo "\nDetected tables:\n";
-print_r($result["tables"]);
-
-?>
+for i, res in enumerate(result["tableRecResults"]):
+    print("Detected tables:")
+    print(res["tables"])
+    layout_img_path = f"layout_{i}.jpg"
+    with open(layout_img_path, "wb") as f:
+        f.write(base64.b64decode(res["layoutImage"]))
+    ocr_img_path = f"ocr_{i}.jpg"
+    with open(ocr_img_path, "wb") as f:
+        f.write(base64.b64decode(res["ocrImage"]))
+    print(f"Output images saved at {layout_img_path} and {ocr_img_path}")
</code></pre></details>
</details>
<br/>
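The request-schema change above replaces the old `image` property with `file` plus an optional `fileType` (`0` for PDF, `1` for image). A minimal sketch of building the new request body, separated into a helper so the encoding logic can be reused for both file types; the `build_payload` function name is a hypothetical illustration, not part of the PaddleX API:

```python
import base64

def build_payload(file_bytes: bytes, file_type: int) -> dict:
    """Build the JSON request body for the updated serving schema.

    "file" carries the Base64-encoded file content; "fileType" is
    0 (PDF) or 1 (image), per the table above.
    """
    if file_type not in (0, 1):
        raise ValueError("fileType must be 0 (PDF) or 1 (image)")
    return {
        "file": base64.b64encode(file_bytes).decode("ascii"),
        "fileType": file_type,
    }

# Example: a PDF payload (the bytes here are a stand-in, not a real PDF).
payload = build_payload(b"%PDF-1.4 ...", 0)
# The payload would then be posted as in the Python example above, e.g.:
#   requests.post("http://localhost:8080/table-recognition", json=payload)
```

For PDF input, remember that `result["tableRecResults"]` then holds one element per processed page (capped at 10), so clients should iterate rather than read a single result.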