Browse source

fix attribute recognition docs (#2444)

zhangyubo0722 1 year ago
Parent commit 93c4635cf4

+ 22 - 30
docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.en.md

@@ -76,7 +76,7 @@ The pre-trained model pipelines provided by PaddleX can quickly demonstrate thei
 Not supported yet.
 
 ### 2.2 Local Experience
-Before using the pedestrian attribute recognition pipeline locally, ensure you have completed the installation of the PaddleX wheel package following the [PaddleX Local Installation Tutorial](../../../installation/installation.md).
+Before using the pedestrian attribute recognition pipeline locally, ensure you have completed the installation of the PaddleX wheel package following the [PaddleX Local Installation Tutorial](../../../installation/installation.en.md).
 
 #### 2.2.1 Command Line Experience
 You can quickly experience the pedestrian attribute recognition pipeline with a single command. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pedestrian_attribute_002.jpg) and replace `--input` with the local path for prediction.
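For reference, a minimal invocation of this pipeline looks like the following sketch (the exact command and the full set of supported flags are shown in the tutorial itself):

```bash
# Run the pedestrian attribute recognition pipeline on the demo image, using GPU 0
paddlex --pipeline pedestrian_attribute_recognition --input pedestrian_attribute_002.jpg --device gpu:0
```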
@@ -236,9 +236,9 @@ for res in output:
     res.save_to_json("./output/")  # Save the structured output of the prediction
 ```
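For context, the `for res in output:` loop above is the tail of a script along these lines — a minimal sketch, assuming the `create_pipeline`/`predict` interface and the result helpers (`print`, `save_to_img`, `save_to_json`) that this tutorial uses elsewhere:

```python
from paddlex import create_pipeline

# Build the pipeline by name; a config path or device string can also be passed here
pipeline = create_pipeline(pipeline="pedestrian_attribute_recognition")

# predict() accepts a local path or URL and yields one result per input image
output = pipeline.predict("pedestrian_attribute_002.jpg")
for res in output:
    res.print()                    # Print the structured prediction
    res.save_to_img("./output/")   # Save the visualized result image
    res.save_to_json("./output/")  # Save the structured output of the prediction
```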
 ## 3. Development Integration/Deployment
-If the face recognition pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
+If the pedestrian attribute recognition pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
 
-If you need to directly apply the face recognition pipeline in your Python project, you can refer to the example code in [2.2.2 Python Script Integration](#222-python-script-integration).
+If you need to directly apply the pedestrian attribute recognition pipeline in your Python project, you can refer to the example code in [2.2.2 Python Script Integration](#222-python-script-integration).
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
@@ -439,45 +439,37 @@ print(result["pedestrians"])
 You can choose an appropriate method to deploy your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
 
-## 4. Custom Development
-If the default model weights provided by the Face Recognition Pipeline do not meet your expectations in terms of accuracy or speed for your specific scenario, you can try to further <b>fine-tune</b> the existing models using <b>your own domain-specific or application-specific data</b> to enhance the recognition performance of the pipeline in your scenario.
-
 ### 4.1 Model Fine-tuning
-Since the Face Recognition Pipeline consists of two modules (face detection and face recognition), the suboptimal performance of the pipeline may stem from either module.
-
-You can analyze images with poor recognition results. If you find that many faces are not detected during the analysis, it may indicate deficiencies in the face detection model. In this case, you need to refer to the [Custom Development](../../../module_usage/tutorials/cv_modules/face_detection.en.md#IV.-Custom-Development) section in the [Face Detection Module Development Tutorial](../../../module_usage/tutorials/cv_modules/face_detection.en.md) and use your private dataset to fine-tune the face detection model. If matching errors occur in detected faces, it suggests that the face feature model needs further improvement. You should refer to the [Custom Development](../../../module_usage/tutorials/cv_modules/face_feature.en.md#IV.-Custom-Development) section in the [Face Feature Module Development Tutorial](../../../module_usage/tutorials/cv_modules/face_feature.en.md) to fine-tune the face feature model.
+Since the pedestrian attribute recognition pipeline includes both a pedestrian attribute recognition module and a pedestrian detection module, suboptimal pipeline performance may stem from either module.
+You can analyze images with poor recognition results. If the analysis reveals that many main targets go undetected, the pedestrian detection model may be deficient. In this case, refer to the [Secondary Development](../../../module_usage/tutorials/cv_modules/human_detection.en.md#secondary-development) section in the [Human Detection Module Development Tutorial](../../../module_usage/tutorials/cv_modules/human_detection.en.md) and use your private dataset to fine-tune the pedestrian detection model. If the attributes of detected targets are recognized incorrectly, refer to the [Secondary Development](../../../module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.en.md#secondary-development) section in the [Pedestrian Attribute Recognition Module Development Tutorial](../../../module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.en.md) and use your private dataset to fine-tune the pedestrian attribute recognition model.
 
 ### 4.2 Model Application
-After completing fine-tuning training with your private dataset, you will obtain local model weight files.
-
-To use the fine-tuned model weights, you only need to modify the pipeline configuration file by replacing the local paths of the fine-tuned model weights with the corresponding paths in the pipeline configuration file:
+After fine-tuning with your private dataset, you will obtain local model weight files.
 
-```bash
+To use the fine-tuned model weights, simply modify the pipeline configuration file, replacing the path to the default model weights with the local path to the fine-tuned model weights:
 
+```
 ......
 Pipeline:
-  device: "gpu:0"
-  det_model: "BlazeFace"        # Can be modified to the local path of the fine-tuned face detection model
-  rec_model: "MobileFaceNet"    # Can be modified to the local path of the fine-tuned face recognition model
-  det_batch_size: 1
-  rec_batch_size: 1
-  device: gpu
+  det_model: PP-YOLOE-L_human
+  cls_model: PP-LCNet_x1_0_pedestrian_attribute  # Can be modified to the local path of the fine-tuned model
+  device: "gpu"
+  batch_size: 1
 ......
 ```
-Subsequently, refer to the command-line method or Python script method in [2.2 Local Experience](#22-Local-Experience) to load the modified pipeline configuration file.
-Note: Currently, setting separate `batch_size` for face detection and face recognition models is not supported.
+Subsequently, refer to the command-line method or Python script method in [2.2 Local Experience](#22-local-experience) to load the modified pipeline configuration file.
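For example, assuming the modified configuration was saved to a hypothetical path such as `./my_path/pedestrian_attribute_recognition.yaml`, the command-line call might look like:

```bash
# Point --pipeline at the edited config file instead of the pipeline name (path is illustrative)
paddlex --pipeline ./my_path/pedestrian_attribute_recognition.yaml --input pedestrian_attribute_002.jpg --device gpu:0
```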
 
 ## 5. Multi-hardware Support
-PaddleX supports various mainstream hardware devices such as NVIDIA GPUs, Kunlun XPU, Ascend NPU, and Cambricon MLU. <b>Simply modifying the `--device` parameter</b> allows seamless switching between different hardware.
+PaddleX supports various mainstream hardware devices such as NVIDIA GPUs, Kunlun XPU, Ascend NPU, and Cambricon MLU. <b>Simply modifying the `--device` parameter</b> allows seamless switching between different hardware.
 
-For example, when running the face recognition pipeline using Python and changing the running device from an NVIDIA GPU to an Ascend NPU, you only need to modify the `device` in the script to `npu`:
+For example, if you use an NVIDIA GPU for inference with the pedestrian attribute recognition pipeline, the command is:
 
-```python
-from paddlex import create_pipeline
+```bash
+paddlex --pipeline pedestrian_attribute_recognition --input pedestrian_attribute_002.jpg --device gpu:0
+```
+At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` to `npu:0`:
 
-pipeline = create_pipeline(
-    pipeline="face_recognition",
-    device="npu:0" # gpu:0 --> npu:0
-)
+```bash
+paddlex --pipeline pedestrian_attribute_recognition --input pedestrian_attribute_002.jpg --device npu:0
 ```
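The same switch applies when using the Python API — a minimal sketch, assuming `create_pipeline` accepts the same device strings as the CLI:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="pedestrian_attribute_recognition",
    device="npu:0"  # gpu:0 --> npu:0
)
```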
-If you want to use the face recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../other_devices_support/multi_devices_use_guide.en.md).
+If you want to use the pedestrian attribute recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../other_devices_support/multi_devices_use_guide.en.md).

+ 2 - 1
docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.md

@@ -456,7 +456,8 @@ print(result["pedestrians"])
 ```
 ......
 Pipeline:
-  model: PP-LCNet_x1_0  #可修改为微调后模型的本地路径
+  det_model: PP-YOLOE-L_human
+  cls_model: PP-LCNet_x1_0_pedestrian_attribute  #可修改为微调后模型的本地路径
   device: "gpu"
   batch_size: 1
 ......

+ 115 - 360
docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.en.md

@@ -234,9 +234,9 @@ for res in output:
 ```
 
 ## 3. Development Integration/Deployment
-If the face recognition pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
+If the vehicle attribute recognition pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
 
-If you need to directly apply the face recognition pipeline in your Python project, you can refer to the example code in [2.2.2 Python Script Integration](#222-python-script-integration).
+If you need to directly apply the vehicle attribute recognition pipeline in your Python project, you can refer to the example code in [2.2.2 Python Script Integration](#222-python-script-integration).
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
@@ -250,40 +250,40 @@ Below are the API reference and multi-language service invocation examples:
 
 <p>For all operations provided by the service:</p>
 <ul>
-<li>The response body and the request body of POST requests are both JSON data (JSON objects).</li>
-<li>When the request is successfully processed, the response status code is <code>200</code>, and the attributes of the response body are as follows:</li>
+<li>Both the response body and the request body for POST requests are JSON data (JSON objects).</li>
+<li>When the request is processed successfully, the response status code is <code>200</code>, and the response body properties are as follows:</li>
 </ul>
 <table>
 <thead>
 <tr>
 <th>Name</th>
 <th>Type</th>
-<th>Meaning</th>
+<th>Description</th>
 </tr>
 </thead>
 <tbody>
 <tr>
 <td><code>errorCode</code></td>
 <td><code>integer</code></td>
-<td>Error code. Fixed to <code>0</code>.</td>
+<td>Error code. Fixed as <code>0</code>.</td>
 </tr>
 <tr>
 <td><code>errorMsg</code></td>
 <td><code>string</code></td>
-<td>Error description. Fixed to <code>"Success"</code>.</td>
+<td>Error description. Fixed as <code>"Success"</code>.</td>
 </tr>
 </tbody>
 </table>
-<p>The response body may also have a <code>result</code> attribute of type <code>object</code>, which stores the operation result information.</p>
+<p>The response body may also have a <code>result</code> property of type <code>object</code>, which stores the operation result information.</p>
 <ul>
-<li>When the request is not successfully processed, the attributes of the response body are as follows:</li>
+<li>When the request is not processed successfully, the response body properties are as follows:</li>
 </ul>
 <table>
 <thead>
 <tr>
 <th>Name</th>
 <th>Type</th>
-<th>Meaning</th>
+<th>Description</th>
 </tr>
 </thead>
 <tbody>
@@ -299,21 +299,21 @@ Below are the API reference and multi-language service invocation examples:
 </tr>
 </tbody>
 </table>
-<p>The operations provided by the service are as follows:</p>
+<p>Operations provided by the service are as follows:</p>
 <ul>
 <li><b><code>infer</code></b></li>
 </ul>
-<p>Obtain OCR results for an image.</p>
-<p><code>POST /ocr</code></p>
+<p>Get vehicle attribute recognition results.</p>
+<p><code>POST /vehicle-attribute-recognition</code></p>
 <ul>
-<li>The attributes of the request body are as follows:</li>
+<li>The request body properties are as follows:</li>
 </ul>
 <table>
 <thead>
 <tr>
 <th>Name</th>
 <th>Type</th>
-<th>Meaning</th>
+<th>Description</th>
 <th>Required</th>
 </tr>
 </thead>
@@ -321,18 +321,88 @@ Below are the API reference and multi-language service invocation examples:
 <tr>
 <td><code>image</code></td>
 <td><code>string</code></td>
-<td>The URL of an accessible image file or the Base64 encoded result of the image file content.</td>
+<td>The URL of an image file accessible by the service or the Base64 encoded result of the image file content.</td>
 <td>Yes</td>
 </tr>
+</tbody>
+</table>
+<ul>
+<li>When the request is processed successfully, the <code>result</code> of the response body has the following properties:</li>
+</ul>
+<table>
+<thead>
+<tr>
+<th>Name</th>
+<th>Type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>vehicles</code></td>
+<td><code>array</code></td>
+<td>Information about the vehicle's location and attributes.</td>
+</tr>
+<tr>
+<td><code>image</code></td>
+<td><code>string</code></td>
+<td>The vehicle attribute recognition result image. The image is in JPEG format and encoded using Base64.</td>
+</tr>
+</tbody>
+</table>
+<p>Each element in <code>vehicles</code> is an <code>object</code> with the following properties:</p>
+<table>
+<thead>
+<tr>
+<th>Name</th>
+<th>Type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>bbox</code></td>
+<td><code>array</code></td>
+<td>The location of the vehicle. The elements in the array are the x-coordinate of the top-left corner, the y-coordinate of the top-left corner, the x-coordinate of the bottom-right corner, and the y-coordinate of the bottom-right corner of the bounding box, respectively.</td>
+</tr>
 <tr>
-<td><code>inferenceParams</code></td>
-<td><code>object</code></td>
-<td>Inference parameters.</td>
-<td>No</td>
+<td><code>attributes</code></td>
+<td><code>array</code></td>
+<td>The vehicle attributes.</td>
+</tr>
+<tr>
+<td><code>score</code></td>
+<td><code>number</code></td>
+<td>The detection score.</td>
 </tr>
 </tbody>
 </table>
-<p>The attributes of```markdown</p>
+<p>Each element in <code>attributes</code> is an <code>object</code> with the following properties:</p>
+<table>
+<thead>
+<tr>
+<th>Name</th>
+<th>Type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>label</code></td>
+<td><code>string</code></td>
+<td>The label of the attribute.</td>
+</tr>
+<tr>
+<td><code>score</code></td>
+<td><code>number</code></td>
+<td>The classification score.</td>
+</tr>
+</tbody>
+</table>
+</details>
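Putting the fields above together, the `result` of a successful response might look like the following (all values are illustrative):

```json
{
  "vehicles": [
    {
      "bbox": [152.0, 98.0, 1268.0, 720.0],
      "attributes": [
        {"label": "red", "score": 0.96},
        {"label": "sedan", "score": 0.91}
      ],
      "score": 0.92
    }
  ],
  "image": "xxxxxx"
}
```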
+
+<details><summary>Multi-Language Service Invocation Examples</summary>
+
 <details>
 <summary>Python</summary>
 
@@ -340,337 +410,27 @@ Below are the API reference and multi-language service invocation examples:
 <pre><code class="language-python">import base64
 import requests
 
-API_URL = "http://localhost:8080/ocr" # Service URL
+API_URL = "http://localhost:8080/vehicle-attribute-recognition"
 image_path = "./demo.jpg"
 output_image_path = "./out.jpg"

-# Encode the local image to Base64
 with open(image_path, "rb") as file:
     image_bytes = file.read()
     image_data = base64.b64encode(image_bytes).decode("ascii")

-payload = {"image": image_data}  # Base64 encoded file content or image URL
+payload = {"image": image_data}

-# Call the API
 response = requests.post(API_URL, json=payload)

-# Process the response data
 assert response.status_code == 200
 result = response.json()["result"]
 with open(output_image_path, "wb") as file:
     file.write(base64.b64decode(result["image"]))
 print(f"Output image saved at {output_image_path}")
-print("\nDetected texts:")
-print(result["texts"])
+print("\nDetected vehicles:")
+print(result["vehicles"])
 </code></pre></details>
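The same request can also be issued directly from the shell — a sketch assuming a local service on port 8080 and standard `base64`/`curl` tooling:

```bash
# Base64-encode the image and POST it as JSON (paths and host are illustrative)
IMAGE_B64=$(base64 < ./demo.jpg | tr -d '\n')
curl -s -X POST "http://localhost:8080/vehicle-attribute-recognition" \
  -H "Content-Type: application/json" \
  -d "{\"image\": \"${IMAGE_B64}\"}"
```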
-
-<details><summary>C++</summary>
-
-<pre><code class="language-cpp">#include &lt;iostream&gt;
-#include &quot;cpp-httplib/httplib.h&quot; // https://github.com/Huiyicc/cpp-httplib
-#include &quot;nlohmann/json.hpp&quot; // https://github.com/nlohmann/json
-#include &quot;base64.hpp&quot; // https://github.com/tobiaslocker/base64
-
-int main() {
-    httplib::Client client(&quot;localhost:8080&quot;);
-    const std::string imagePath = &quot;./demo.jpg&quot;;
-    const std::string outputImagePath = &quot;./out.jpg&quot;;
-
-    httplib::Headers headers = {
-        {&quot;Content-Type&quot;, &quot;application/json&quot;}
-    };
-
-    // Encode the local image to Base64
-    std::ifstream file(imagePath, std::ios::binary | std::ios::ate);
-    std::streamsize size = file.tellg();
-    file.seekg(0, std::ios::beg);
-
-    std::vector&lt;char&gt; buffer(size);
-    if (!file.read(buffer.data(), size)) {
-        std::cerr &lt;&lt; &quot;Error reading file.&quot; &lt;&lt; std::endl;
-        return 1;
-    }
-    std::string bufferStr(reinterpret_cast&lt;const char*&gt;(buffer.data()), buffer.size());
-    std::string encodedImage = base64::to_base64(bufferStr);
-
-    nlohmann::json jsonObj;
-    jsonObj[&quot;image&quot;] = encodedImage;
-    std::string body = jsonObj.dump();
-
-    // Call the API
-    auto response = client.Post(&quot;/ocr&quot;, headers, body, &quot;application/json&quot;);
-    // Process the response data
-    if (response &amp;&amp; response-&gt;status == 200) {
-        nlohmann::json jsonResponse = nlohmann::json::parse(response-&gt;body);
-        auto result = jsonResponse[&quot;result&quot;];
-
-        encodedImage = result[&quot;image&quot;];
-        std::string decodedString = base64::from_base64(encodedImage);
-        std::vector&lt;unsigned char&gt; decodedImage(decodedString.begin(), decodedString.end());
-        std::ofstream outputImage(outputImagePath, std::ios::binary | std::ios::out);
-        if (outputImage.is_open()) {
-            outputImage.write(reinterpret_cast&lt;char*&gt;(decodedImage.data()), decodedImage.size());
-            outputImage.close();
-            std::cout &lt;&lt; &quot;Output image saved at &quot; &lt;&lt; outputImagePath &lt;&lt; std::endl;
-        } else {
-            std::cerr &lt;&lt; &quot;Unable to open file for writing: &quot; &lt;&lt; outputImagePath &lt;&lt; std::endl;
-        }
-
-        auto texts = result[&quot;texts&quot;];
-        std::cout &lt;&lt; &quot;\nDetected texts:&quot; &lt;&lt; std::endl;
-        for (const auto&amp; text : texts) {
-            std::cout &lt;&lt; text &lt;&lt; std::endl;
-        }
-    } else {
-        std::cout &lt;&lt; &quot;Failed to send HTTP request.&quot; &lt;&lt; std::endl;
-        return 1;
-    }
-
-    return 0;
-}
-
-</code></pre></details>
-``````markdown
-# Tutorial on Artificial Intelligence and Computer Vision
-
-This tutorial, intended for numerous developers, covers the basics and applications of AI and Computer Vision.
-
-<details><summary>Java</summary>
-
-<pre><code class="language-java">import okhttp3.*;
-import com.fasterxml.jackson.databind.ObjectMapper;
-import com.fasterxml.jackson.databind.JsonNode;
-import com.fasterxml.jackson.databind.node.ObjectNode;
-
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.util.Base64;
-
-public class Main {
-    public static void main(String[] args) throws IOException {
-        String API_URL = "http://localhost:8080/ocr"; // Service URL
-        String imagePath = "./demo.jpg"; // Local image path
-        String outputImagePath = "./out.jpg"; // Output image path
-
-        // Encode the local image to Base64
-        File file = new File(imagePath);
-        byte[] fileContent = java.nio.file.Files.readAllBytes(file.toPath());
-        String imageData = Base64.getEncoder().encodeToString(fileContent);
-
-        ObjectMapper objectMapper = new ObjectMapper();
-        ObjectNode params = objectMapper.createObjectNode();
-        params.put("image", imageData); // Base64-encoded file content or image URL
-
-        // Create an OkHttpClient instance
-        OkHttpClient client = new OkHttpClient();
-        MediaType JSON = MediaType.get("application/json; charset=utf-8");
-        RequestBody body = RequestBody.create(params.toString(), JSON);
-        Request request = new Request.Builder()
-                .url(API_URL)
-                .post(body)
-                .build();
-
-        // Call the API and process the response
-        try (Response response = client.newCall(request).execute()) {
-            if (response.isSuccessful()) {
-                String responseBody = response.body().string();
-                JsonNode resultNode = objectMapper.readTree(responseBody);
-                JsonNode result = resultNode.get("result");
-                String base64Image = result.get("image").asText();
-                JsonNode texts = result.get("texts");
-
-                byte[] imageBytes = Base64.getDecoder().decode(base64Image);
-                try (FileOutputStream fos = new FileOutputStream(outputImagePath)) {
-                    fos.write(imageBytes);
-                }
-                System.out.println("Output image saved at " + outputImagePath);
-                System.out.println("\nDetected texts: " + texts.toString());
-            } else {
-                System.err.println("Request failed with code: " + response.code());
-            }
-        }
-    }
-}
-</code></pre></details>
-
-<details><summary>Go</summary>
-
-<pre><code class="language-go">package main
-
-import (
-    "bytes"
-    "encoding/base64"
-    "encoding/json"
-    "fmt"
-    "io/ioutil"
-    "net/http"
-)
-
-func main() {
-    API_URL := "http://localhost:8080/ocr"
-    imagePath := "./demo.jpg"
-    outputImagePath := "./out.jpg"
-
-    // Encode the local image to Base64
-    imageBytes, err := ioutil.ReadFile(imagePath)
-    if err != nil {
-        fmt.Println("Error reading image file:", err)
-        return
-    }
-    imageData := base64.StdEncoding.EncodeToString(imageBytes)
-
-    payload := map[string]string{"image": imageData} // Base64-encoded file content or image URL
-    payloadBytes, err := json.Marshal(payload)
-    if err != nil {
-        fmt.Println("Error marshaling payload:", err)
-        return
-    }
-
-    // Call the API
-    client := &http.Client{}
-    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
-    if err != nil {
-        fmt.Println("Error creating request:", err)
-        return
-    }
-
-    res, err := client.Do(req)
-    if err != nil {
-        fmt.Println("Error sending request:", err)
-        return
-    }
-    defer res.Body.Close()
-
-    // Process the response
-    body, err := ioutil.ReadAll(res.Body)
-    if err != nil {
-        fmt.Println("Error reading response body:", err)
-        return
-    }
-        return
-    }```markdown
-# An English Tutorial on Artificial Intelligence and Computer Vision
-
-This tutorial document is intended for numerous developers and covers content related to artificial intelligence and computer vision.
-
-<details>
-<summary>C#</summary>
-
-```csharp
-using System;
-using System.IO;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json.Linq;
-
-class Program
-{
-static readonly string API_URL = "http://localhost:8080/ocr";
-static readonly string imagePath = "./demo.jpg";
-static readonly string outputImagePath = "./out.jpg";
-
-static async Task Main(string[] args)
-{
-var httpClient = new HttpClient();
-
-// Encode the local image to Base64
-byte[] imageBytes = File.ReadAllBytes(imagePath);
-string image_data = Convert.ToBase64String(imageBytes);
-
-var payload = new JObject{ { "image", image_data } }; // Base64 encoded file content or image URL
-var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");
-
-// Call the API
-HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
-response.EnsureSuccessStatusCode();
-
-// Process the API response
-string responseBody = await response.Content.ReadAsStringAsync();
-JObject jsonResponse = JObject.Parse(responseBody);
-
-string base64Image = jsonResponse["result"]["image"].ToString();
-byte[] outputImageBytes = Convert.FromBase64String(base64Image);
-
-File.WriteAllBytes(outputImagePath, outputImageBytes);
-Console.WriteLine($"Output image saved at {outputImagePath}");
-Console.WriteLine("\nDetected texts:");
-Console.WriteLine(jsonResponse["result"]["texts"].ToString());
-}
-}
-</code></pre></details>
-
-<details><summary>Node.js</summary>
-
-<pre><code class="language-js">const axios = require('axios');
-const fs = require('fs');
-
-const API_URL = 'http://localhost:8080/ocr';
-const imagePath = './demo.jpg';
-const outputImagePath = "./out.jpg";
-
-let config = {
-   method: 'POST',
-   maxBodyLength: Infinity,
-   url: API_URL,
-   data: JSON.stringify({
-    'image': encodeImageToBase64(imagePath)  // Base64 encoded file content or image URL
-  })
-};
-
-// Encode the local image to Base64
-function encodeImageToBase64(filePath) {
-  const bitmap = fs.readFileSync(filePath);
-  return Buffer.from(bitmap).toString('base64');
-}
-
-// Call the API
-axios.request(config)
-.then((response) => {
-    // Process the API response
-    const result = response.data["result"];
-    const imageBuffer = Buffer.from(result["image"], 'base64');
-    fs.writeFile(outputImagePath, imageBuffer, (err) => {
-      if (err) throw err;
-      console.log(`Output image saved at ${outputImagePath}`);
-    });
-    console.log("\nDetected texts:");
-    console.log(result["texts"]);
-})
-.catch((error) => {
-  console.log(error);
-});
-</code></pre></details>
-
-<details>
-<summary>PHP</summary>
-
-```php
-<?php
-
-$API_URL = "http://localhost:8080/ocr"; // Service URL
-$image_path = "./demo.jpg";
-$output_image_path = "./out.jpg";
-
-// Encode the local image to Base64
-$image_data = base64_encode(file_get_contents($image_path));
-$payload = array("image" => $image_data); // Base64 encoded file content or image URL
-
-// Call the API
-$ch = curl_init($API_URL);
-curl_setopt($ch, CURLOPT_POST, true);
-curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
-curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
-$response = curl_exec($ch);
-curl_close($ch);
-
-// Process the API response
-$result = json_decode($response, true)["result"];
-file_put_contents($output
-```
-
-<details>
-<details>
+</details>
 <br/>
 
 📱 <b>Edge Deployment</b>: Edge deployment is a method where computing and data processing functions are placed on the user's device itself, allowing the device to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.en.md).
@@ -678,44 +438,39 @@ You can choose an appropriate method to deploy your model pipeline based on your
 
 
 ## 4. Custom Development
-If the default model weights provided by the Face Recognition Pipeline do not meet your expectations in terms of accuracy or speed for your specific scenario, you can try to further <b>fine-tune</b> the existing models using <b>your own domain-specific or application-specific data</b> to enhance the recognition performance of the pipeline in your scenario.
+If the default model weights provided by the vehicle attribute recognition pipeline do not meet your expectations in terms of accuracy or speed for your specific scenario, you can try to further <b>fine-tune</b> the existing models using <b>your own data from specific domains or application scenarios</b> to enhance the recognition performance of the vehicle attribute recognition pipeline in your context.
 
 ### 4.1 Model Fine-tuning
-Since the Face Recognition Pipeline consists of two modules (face detection and face recognition), the suboptimal performance of the pipeline may stem from either module.
-
-You can analyze images with poor recognition results. If you find that many faces are not detected during the analysis, it may indicate deficiencies in the face detection model. In this case, you need to refer to the [Custom Development](../../../module_usage/tutorials/cv_modules/face_detection.en.md#IV.-Custom-Development) section in the [Face Detection Module Development Tutorial](../../../module_usage/tutorials/cv_modules/face_detection.en.md) and use your private dataset to fine-tune the face detection model. If matching errors occur in detected faces, it suggests that the face feature model needs further improvement. You should refer to the [Custom Development](../../../module_usage/tutorials/cv_modules/face_feature.en.md#IV.-Custom-Development) section in the [Face Feature Module Development Tutorial](../../../module_usage/tutorials/cv_modules/face_feature.en.md) to fine-tune the face feature model.
+Since the vehicle attribute recognition pipeline includes both a vehicle attribute recognition module and a vehicle detection module, the suboptimal performance of the pipeline may stem from either module.
+You can analyze images with poor recognition results. If the analysis reveals that many main targets go undetected, the vehicle detection model may be deficient. In this case, refer to the [Custom Development](../../../module_usage/tutorials/cv_modules/vehicle_detection.en.md#4-custom-development) section in the [Vehicle Detection Module Development Tutorial](../../../module_usage/tutorials/cv_modules/vehicle_detection.en.md) and use your private dataset to fine-tune the vehicle detection model. If the attributes of detected targets are recognized incorrectly, refer to the [Custom Development](../../../module_usage/tutorials/cv_modules/vehicle_attribute_recognition.en.md#4-custom-development) section in the [Vehicle Attribute Recognition Module Development Tutorial](../../../module_usage/tutorials/cv_modules/vehicle_attribute_recognition.en.md) and use your private dataset to fine-tune the vehicle attribute recognition model.
 
 ### 4.2 Model Application
-After completing fine-tuning training with your private dataset, you will obtain local model weight files.
+After fine-tuning with your private dataset, you will obtain local model weight files.
 
-To use the fine-tuned model weights, you only need to modify the pipeline configuration file by replacing the local paths of the fine-tuned model weights with the corresponding paths in the pipeline configuration file:
-
-```bash
+To use the fine-tuned model weights, you only need to modify the pipeline configuration file by replacing the path to the default model weights with the local path to the fine-tuned model weights:
 
+```
 ......
 Pipeline:
-  device: "gpu:0"
-  det_model: "BlazeFace"        # Can be modified to the local path of the fine-tuned face detection model
-  rec_model: "MobileFaceNet"    # Can be modified to the local path of the fine-tuned face recognition model
-  det_batch_size: 1
-  rec_batch_size: 1
-  device: gpu
+  det_model: PP-YOLOE-L_vehicle
+  cls_model: PP-LCNet_x1_0_vehicle_attribute   # Can be modified to the local path of the fine-tuned model
+  device: "gpu"
+  batch_size: 1
 ......
 ```
-Subsequently, refer to the command-line method or Python script method in [2.2 Local Experience](#22-Local-Experience) to load the modified pipeline configuration file.
-Note: Currently, setting separate `batch_size` for face detection and face recognition models is not supported.
+Subsequently, refer to the command-line or Python script methods in [2.2 Local Experience](#22-local-experience) to load the modified pipeline configuration file.
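For example, with the modified configuration saved to a hypothetical `./my_path/vehicle_attribute_recognition.yaml`:

```bash
# Point --pipeline at the edited config file instead of the pipeline name (path is illustrative)
paddlex --pipeline ./my_path/vehicle_attribute_recognition.yaml --input vehicle_attribute_002.jpg --device gpu:0
```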
 
 ## 5. Multi-hardware Support
 PaddleX supports various mainstream hardware devices such as NVIDIA GPUs, Kunlun XPU, Ascend NPU, and Cambricon MLU. <b>Simply modifying the `--device` parameter</b> allows seamless switching between different hardware.
 
-For example, when running the face recognition pipeline using Python and changing the running device from an NVIDIA GPU to an Ascend NPU, you only need to modify the `device` in the script to `npu`:
+For example, if you use an NVIDIA GPU for inference with the vehicle attribute recognition pipeline, the command is:
 
-```python
-from paddlex import create_pipeline
+```bash
+paddlex --pipeline vehicle_attribute_recognition --input vehicle_attribute_002.jpg --device gpu:0
+```
+At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` to `npu:0`:
 
-pipeline = create_pipeline(
-    pipeline="face_recognition",
-    device="npu:0" # gpu:0 --> npu:0
-)
+```bash
+paddlex --pipeline vehicle_attribute_recognition --input vehicle_attribute_002.jpg --device npu:0
 ```
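As with the pedestrian pipeline, the device switch carries over to the Python API — a minimal sketch, assuming the `device` parameter of `create_pipeline`:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="vehicle_attribute_recognition",
    device="npu:0"  # gpu:0 --> npu:0
)
```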
-If you want to use the face recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../other_devices_support/multi_devices_use_guide.en.md).
+If you want to use the vehicle attribute recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../other_devices_support/multi_devices_use_guide.en.md).

+ 35 - 415
docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.md

@@ -302,8 +302,8 @@ for res in output:
 <ul>
 <li><b><code>infer</code></b></li>
 </ul>
-<p>获取图像OCR结果。</p>
-<p><code>POST /ocr</code></p>
+<p>获取车辆属性识别结果。</p>
+<p><code>POST /vehicle-attribute-recognition</code></p>
 <ul>
 <li>请求体的属性如下:</li>
 </ul>
@@ -323,36 +323,33 @@ for res in output:
 <td>服务可访问的图像文件的URL或图像文件内容的Base64编码结果。</td>
 <td>是</td>
 </tr>
-<tr>
-<td><code>inferenceParams</code></td>
-<td><code>object</code></td>
-<td>推理参数。</td>
-<td>否</td>
-</tr>
 </tbody>
 </table>
-<p><code>inferenceParams</code>的属性如下:</p>
+<ul>
+<li>请求处理成功时,响应体的<code>result</code>具有如下属性:</li>
+</ul>
 <table>
 <thead>
 <tr>
 <th>名称</th>
 <th>类型</th>
 <th>含义</th>
-<th>是否必填</th>
 </tr>
 </thead>
 <tbody>
 <tr>
-<td><code>maxLongSide</code></td>
-<td><code>integer</code></td>
-<td>推理时,若文本检测模型的输入图像较长边的长度大于<code>maxLongSide</code>,则将对图像进行缩放,使其较长边的长度等于<code>maxLongSide</code>。</td>
-<td>否</td>
+<td><code>vehicles</code></td>
+<td><code>array</code></td>
+<td>车辆的位置及属性等信息。</td>
+</tr>
+<tr>
+<td><code>image</code></td>
+<td><code>string</code></td>
+<td>车辆属性识别结果图。图像为JPEG格式,使用Base64编码。</td>
 </tr>
 </tbody>
 </table>
-<ul>
-<li>请求处理成功时,响应体的<code>result</code>具有如下属性:</li>
-</ul>
+<p><code>vehicles</code>中的每个元素为一个<code>object</code>,具有如下属性:</p>
 <table>
 <thead>
 <tr>
@@ -363,18 +360,23 @@ for res in output:
 </thead>
 <tbody>
 <tr>
-<td><code>texts</code></td>
+<td><code>bbox</code></td>
 <td><code>array</code></td>
-<td>文本位置、内容和得分。</td>
+<td>车辆位置。数组中元素依次为边界框左上角x坐标、左上角y坐标、右下角x坐标以及右下角y坐标。</td>
 </tr>
 <tr>
-<td><code>image</code></td>
-<td><code>string</code></td>
-<td>OCR结果图,其中标注检测到的文本位置。图像为JPEG格式,使用Base64编码。</td>
+<td><code>attributes</code></td>
+<td><code>array</code></td>
+<td>车辆属性。</td>
+</tr>
+<tr>
+<td><code>score</code></td>
+<td><code>number</code></td>
+<td>检测得分。</td>
 </tr>
 </tbody>
 </table>
-<p><code>texts</code>中的每个元素为一个<code>object</code>,具有如下属性:</p>
+<p><code>attributes</code>中的每个元素为一个<code>object</code>,具有如下属性:</p>
 <table>
 <thead>
 <tr>
@@ -385,73 +387,18 @@ for res in output:
 </thead>
 <tbody>
 <tr>
-<td><code>poly</code></td>
-<td><code>array</code></td>
-<td>文本位置。数组中元素依次为包围文本的多边形的顶点坐标。</td>
-</tr>
-<tr>
-<td><code>text</code></td>
+<td><code>label</code></td>
 <td><code>string</code></td>
-<td>文本内容。</td>
+<td>属性标签。</td>
 </tr>
 <tr>
 <td><code>score</code></td>
 <td><code>number</code></td>
-<td>文本识别得分。</td>
+<td>分类得分。</td>
 </tr>
 </tbody>
 </table>
-<p><code>result</code>示例如下:</p>
-<pre><code class="language-json">{
-"texts": [
-{
-"poly": [
-[
-444,
-244
-],
-[
-705,
-244
-],
-[
-705,
-311
-],
-[
-444,
-311
-]
-],
-"text": "北京南站",
-"score": 0.9
-},
-{
-"poly": [
-[
-992,
-248
-],
-[
-1263,
-251
-],
-[
-1263,
-318
-],
-[
-992,
-315
-]
-],
-"text": "天津站",
-"score": 0.5
-}
-],
-"image": "xxxxxx"
-}
-</code></pre></details>
+</details>
 
 <details><summary>多语言调用服务示例</summary>
 
@@ -462,7 +409,7 @@ for res in output:
 <pre><code class="language-python">import base64
 import requests
 
-API_URL = "http://localhost:8080/ocr" # 服务URL
+API_URL = "http://localhost:8080/vehicle-attribute-recognition" # 服务URL
 image_path = "./demo.jpg"
 output_image_path = "./out.jpg"

@@ -482,336 +429,8 @@ result = response.json()["result"]
 with open(output_image_path, "wb") as file:
     file.write(base64.b64decode(result["image"]))
 print(f"Output image saved at {output_image_path}")
-print("\nDetected texts:")
-print(result["texts"])
-</code></pre></details>
-
-<details><summary>C++</summary>
-
-<pre><code class="language-cpp">#include &lt;iostream&gt;
-#include &quot;cpp-httplib/httplib.h&quot; // https://github.com/Huiyicc/cpp-httplib
-#include &quot;nlohmann/json.hpp&quot; // https://github.com/nlohmann/json
-#include &quot;base64.hpp&quot; // https://github.com/tobiaslocker/base64
-
-int main() {
-    httplib::Client client(&quot;localhost:8080&quot;);
-    const std::string imagePath = &quot;./demo.jpg&quot;;
-    const std::string outputImagePath = &quot;./out.jpg&quot;;
-
-    httplib::Headers headers = {
-        {&quot;Content-Type&quot;, &quot;application/json&quot;}
-    };
-
-    // 对本地图像进行Base64编码
-    std::ifstream file(imagePath, std::ios::binary | std::ios::ate);
-    std::streamsize size = file.tellg();
-    file.seekg(0, std::ios::beg);
-
-    std::vector&lt;char&gt; buffer(size);
-    if (!file.read(buffer.data(), size)) {
-        std::cerr &lt;&lt; &quot;Error reading file.&quot; &lt;&lt; std::endl;
-        return 1;
-    }
-    std::string bufferStr(reinterpret_cast&lt;const char*&gt;(buffer.data()), buffer.size());
-    std::string encodedImage = base64::to_base64(bufferStr);
-
-    nlohmann::json jsonObj;
-    jsonObj[&quot;image&quot;] = encodedImage;
-    std::string body = jsonObj.dump();
-
-    // 调用API
-    auto response = client.Post(&quot;/ocr&quot;, headers, body, &quot;application/json&quot;);
-    // 处理接口返回数据
-    if (response &amp;&amp; response-&gt;status == 200) {
-        nlohmann::json jsonResponse = nlohmann::json::parse(response-&gt;body);
-        auto result = jsonResponse[&quot;result&quot;];
-
-        encodedImage = result[&quot;image&quot;];
-        std::string decodedString = base64::from_base64(encodedImage);
-        std::vector&lt;unsigned char&gt; decodedImage(decodedString.begin(), decodedString.end());
-        std::ofstream outputImage(outPutImagePath, std::ios::binary | std::ios::out);
-        if (outputImage.is_open()) {
-            outputImage.write(reinterpret_cast&lt;char*&gt;(decodedImage.data()), decodedImage.size());
-            outputImage.close();
-            std::cout &lt;&lt; &quot;Output image saved at &quot; &lt;&lt; outPutImagePath &lt;&lt; std::endl;
-        } else {
-            std::cerr &lt;&lt; &quot;Unable to open file for writing: &quot; &lt;&lt; outPutImagePath &lt;&lt; std::endl;
-        }
-
-        auto texts = result[&quot;texts&quot;];
-        std::cout &lt;&lt; &quot;\nDetected texts:&quot; &lt;&lt; std::endl;
-        for (const auto&amp; text : texts) {
-            std::cout &lt;&lt; text &lt;&lt; std::endl;
-        }
-    } else {
-        std::cout &lt;&lt; &quot;Failed to send HTTP request.&quot; &lt;&lt; std::endl;
-        return 1;
-    }
-
-    return 0;
-}
-</code></pre></details>
-
-<details><summary>Java</summary>
-
-<pre><code class="language-java">import okhttp3.*;
-import com.fasterxml.jackson.databind.ObjectMapper;
-import com.fasterxml.jackson.databind.JsonNode;
-import com.fasterxml.jackson.databind.node.ObjectNode;
-
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.util.Base64;
-
-public class Main {
-    public static void main(String[] args) throws IOException {
-        String API_URL = "http://localhost:8080/ocr"; // 服务URL
-        String imagePath = "./demo.jpg"; // 本地图像
-        String outputImagePath = "./out.jpg"; // 输出图像
-
-        // 对本地图像进行Base64编码
-        File file = new File(imagePath);
-        byte[] fileContent = java.nio.file.Files.readAllBytes(file.toPath());
-        String imageData = Base64.getEncoder().encodeToString(fileContent);
-
-        ObjectMapper objectMapper = new ObjectMapper();
-        ObjectNode params = objectMapper.createObjectNode();
-        params.put("image", imageData); // Base64编码的文件内容或者图像URL
-
-        // 创建 OkHttpClient 实例
-        OkHttpClient client = new OkHttpClient();
-        MediaType JSON = MediaType.Companion.get("application/json; charset=utf-8");
-        RequestBody body = RequestBody.Companion.create(params.toString(), JSON);
-        Request request = new Request.Builder()
-                .url(API_URL)
-                .post(body)
-                .build();
-
-        // 调用API并处理接口返回数据
-        try (Response response = client.newCall(request).execute()) {
-            if (response.isSuccessful()) {
-                String responseBody = response.body().string();
-                JsonNode resultNode = objectMapper.readTree(responseBody);
-                JsonNode result = resultNode.get("result");
-                String base64Image = result.get("image").asText();
-                JsonNode texts = result.get("texts");
-
-                byte[] imageBytes = Base64.getDecoder().decode(base64Image);
-                try (FileOutputStream fos = new FileOutputStream(outputImagePath)) {
-                    fos.write(imageBytes);
-                }
-                System.out.println("Output image saved at " + outputImagePath);
-                System.out.println("\nDetected texts: " + texts.toString());
-            } else {
-                System.err.println("Request failed with code: " + response.code());
-            }
-        }
-    }
-}
-</code></pre></details>
-
-<details><summary>Go</summary>
-
-<pre><code class="language-go">package main
-
-import (
-    "bytes"
-    "encoding/base64"
-    "encoding/json"
-    "fmt"
-    "io/ioutil"
-    "net/http"
-)
-
-func main() {
-    API_URL := "http://localhost:8080/ocr"
-    imagePath := "./demo.jpg"
-    outputImagePath := "./out.jpg"
-
-    // 对本地图像进行Base64编码
-    imageBytes, err := ioutil.ReadFile(imagePath)
-    if err != nil {
-        fmt.Println("Error reading image file:", err)
-        return
-    }
-    imageData := base64.StdEncoding.EncodeToString(imageBytes)
-
-    payload := map[string]string{"image": imageData} // Base64编码的文件内容或者图像URL
-    payloadBytes, err := json.Marshal(payload)
-    if err != nil {
-        fmt.Println("Error marshaling payload:", err)
-        return
-    }
-
-    // 调用API
-    client := &http.Client{}
-    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
-    if err != nil {
-        fmt.Println("Error creating request:", err)
-        return
-    }
-
-    res, err := client.Do(req)
-    if err != nil {
-        fmt.Println("Error sending request:", err)
-        return
-    }
-    defer res.Body.Close()
-
-    // 处理接口返回数据
-    body, err := ioutil.ReadAll(res.Body)
-    if err != nil {
-        fmt.Println("Error reading response body:", err)
-        return
-    }
-    type Response struct {
-        Result struct {
-            Image      string   `json:"image"`
-            Texts []map[string]interface{} `json:"texts"`
-        } `json:"result"`
-    }
-    var respData Response
-    err = json.Unmarshal([]byte(string(body)), &respData)
-    if err != nil {
-        fmt.Println("Error unmarshaling response body:", err)
-        return
-    }
-
-    outputImageData, err := base64.StdEncoding.DecodeString(respData.Result.Image)
-    if err != nil {
-        fmt.Println("Error decoding base64 image data:", err)
-        return
-    }
-    err = ioutil.WriteFile(outputImagePath, outputImageData, 0644)
-    if err != nil {
-        fmt.Println("Error writing image to file:", err)
-        return
-    }
-    fmt.Printf("Image saved at %s.jpg\n", outputImagePath)
-    fmt.Println("\nDetected texts:")
-    for _, text := range respData.Result.Texts {
-        fmt.Println(text)
-    }
-}
-</code></pre></details>
-
-<details><summary>C#</summary>
-
-<pre><code class="language-csharp">using System;
-using System.IO;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json.Linq;
-
-class Program
-{
-    static readonly string API_URL = "http://localhost:8080/ocr";
-    static readonly string imagePath = "./demo.jpg";
-    static readonly string outputImagePath = "./out.jpg";
-
-    static async Task Main(string[] args)
-    {
-        var httpClient = new HttpClient();
-
-        // 对本地图像进行Base64编码
-        byte[] imageBytes = File.ReadAllBytes(imagePath);
-        string image_data = Convert.ToBase64String(imageBytes);
-
-        var payload = new JObject{ { "image", image_data } }; // Base64编码的文件内容或者图像URL
-        var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");
-
-        // 调用API
-        HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
-        response.EnsureSuccessStatusCode();
-
-        // 处理接口返回数据
-        string responseBody = await response.Content.ReadAsStringAsync();
-        JObject jsonResponse = JObject.Parse(responseBody);
-
-        string base64Image = jsonResponse["result"]["image"].ToString();
-        byte[] outputImageBytes = Convert.FromBase64String(base64Image);
-
-        File.WriteAllBytes(outputImagePath, outputImageBytes);
-        Console.WriteLine($"Output image saved at {outputImagePath}");
-        Console.WriteLine("\nDetected texts:");
-        Console.WriteLine(jsonResponse["result"]["texts"].ToString());
-    }
-}
-</code></pre></details>
-
-<details><summary>Node.js</summary>
-
-<pre><code class="language-js">const axios = require('axios');
-const fs = require('fs');
-
-const API_URL = 'http://localhost:8080/ocr'
-const imagePath = './demo.jpg'
-const outputImagePath = "./out.jpg";
-
-let config = {
-   method: 'POST',
-   maxBodyLength: Infinity,
-   url: API_URL,
-   data: JSON.stringify({
-    'image': encodeImageToBase64(imagePath)  // Base64编码的文件内容或者图像URL
-  })
-};
-
-// 对本地图像进行Base64编码
-function encodeImageToBase64(filePath) {
-  const bitmap = fs.readFileSync(filePath);
-  return Buffer.from(bitmap).toString('base64');
-}
-
-// 调用API
-axios.request(config)
-.then((response) => {
-    // 处理接口返回数据
-    const result = response.data["result"];
-    const imageBuffer = Buffer.from(result["image"], 'base64');
-    fs.writeFile(outputImagePath, imageBuffer, (err) => {
-      if (err) throw err;
-      console.log(`Output image saved at ${outputImagePath}`);
-    });
-    console.log("\nDetected texts:");
-    console.log(result["texts"]);
-})
-.catch((error) => {
-  console.log(error);
-});
-</code></pre></details>
-
-<details><summary>PHP</summary>
-
-<pre><code class="language-php">&lt;?php
-
-$API_URL = &quot;http://localhost:8080/ocr&quot;; // 服务URL
-$image_path = &quot;./demo.jpg&quot;;
-$output_image_path = &quot;./out.jpg&quot;;
-
-// 对本地图像进行Base64编码
-$image_data = base64_encode(file_get_contents($image_path));
-$payload = array(&quot;image&quot; =&gt; $image_data); // Base64编码的文件内容或者图像URL
-
-// 调用API
-$ch = curl_init($API_URL);
-curl_setopt($ch, CURLOPT_POST, true);
-curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
-curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
-$response = curl_exec($ch);
-curl_close($ch);
-
-// 处理接口返回数据
-$result = json_decode($response, true)[&quot;result&quot;];
-file_put_contents($output_image_path, base64_decode($result[&quot;image&quot;]));
-echo &quot;Output image saved at &quot; . $output_image_path . &quot;\n&quot;;
-echo &quot;\nDetected texts:\n&quot;;
-print_r($result[&quot;texts&quot;]);
-
-?&gt;
+print(&quot;\nDetected vehicles:&quot;)
+print(result[&quot;vehicles&quot;])
 </code></pre></details>
 </details>
 <br/>
@@ -824,7 +443,7 @@ print_r($result["texts"]);
 
 ### 4.1 模型微调
 由于车辆属性识别产线包含车辆属性识别模块和车辆检测模块,如果模型产线的效果不及预期可能来自于其中任何一个模块。
-您可以对识别效果差的图片进行分析,如果在分析过程中发现有较多的主体目标未被检测出来,那么可能是车辆检测模型存在不足那么您需要参考[车辆检测模块开发教程](../../../module_usage/tutorials/cv_modules/human_detection.md)中的[二次开发](../../../module_usage/tutorials/cv_modules/human_detection.md#四二次开发)章节,使用您的私有数据集对车辆检测模型进行微调;如果检测出来的主体属性识别错误,那么您需要参考[车辆属性识别模块开发教程](../../../module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md)中的[二次开发](../../../module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md#四二次开发)章节,使用您的私有数据集对车辆属性识别模型进行微调。
+您可以对识别效果差的图片进行分析,如果在分析过程中发现有较多的主体目标未被检测出来,那么可能是车辆检测模型存在不足,您需要参考[车辆检测模块开发教程](../../../module_usage/tutorials/cv_modules/vehicle_detection.md)中的[二次开发](../../../module_usage/tutorials/cv_modules/vehicle_detection.md#四二次开发)章节,使用您的私有数据集对车辆检测模型进行微调;如果检测出来的主体属性识别错误,那么您需要参考[车辆属性识别模块开发教程](../../../module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md)中的[二次开发](../../../module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md#四二次开发)章节,使用您的私有数据集对车辆属性识别模型进行微调。
 
 ### 4.2 模型应用
 当您使用私有数据集完成微调训练后,可获得本地模型权重文件。
@@ -834,7 +453,8 @@ print_r($result["texts"]);
 ```
 ......
 Pipeline:
-  model: PP-LCNet_x1_0  #可修改为微调后模型的本地路径
+  det_model: PP-YOLOE-L_vehicle
+  cls_model: PP-LCNet_x1_0_vehicle_attribute  #可修改为微调后模型的本地路径
   device: "gpu"
   batch_size: 1
 ......