Speech recognition is an advanced tool that automatically converts spoken language into the corresponding text or commands. This technology plays an important role in fields such as intelligent customer service, voice assistants, and meeting transcription. Multilingual speech recognition supports automatic language detection and recognition across multiple languages.
Multilingual Speech Recognition Models (Optional):
| Model | Model Download Link | Training Data | Model Size | Word Error Rate | Introduction |
|---|---|---|---|---|---|
| whisper_large | whisper_large | 680kh | 5.8G | 2.7 (Librispeech) | Whisper is a multilingual automatic speech recognition model developed by OpenAI, known for its high precision and robustness. It features an end-to-end architecture and can handle noisy audio environments, making it suitable for applications such as voice assistants and real-time subtitles. |
| whisper_medium | whisper_medium | 680kh | 2.9G | - | |
| whisper_small | whisper_small | 680kh | 923M | - | |
| whisper_base | whisper_base | 680kh | 277M | - | |
| whisper_tiny | whisper_tiny | 680kh | 145M | - | |
The multilingual_speech_recognition pipeline object is instantiated through create_pipeline(). The specific parameter descriptions are as follows:
| Parameter | Parameter Description | Parameter Type | Default Value |
|---|---|---|---|
| pipeline | The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX. | str | None |
| device | The inference device for the pipeline. It supports specifying the specific card number of the GPU, such as "gpu:0", the specific card number of other hardware, such as "npu:0", and the CPU, such as "cpu". | str | gpu:0 |
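For example, a minimal instantiation might look like the following sketch (a configuration file path can be passed in place of the pipeline name):

```python
from paddlex import create_pipeline

# Instantiate by pipeline name; a configuration file path also works,
# e.g. create_pipeline(pipeline="./multilingual_speech_recognition.yaml")
pipeline = create_pipeline(pipeline="multilingual_speech_recognition", device="gpu:0")
```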
The predict() method of the multilingual_speech_recognition pipeline object is called to perform inference and prediction. This method returns a generator. Below are the parameters of the predict() method and their descriptions:
| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| input | Data to be predicted | str | - | None |
| device | The inference device for the pipeline | str\|None | - | None |
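Because predict() returns a generator, results are obtained by iterating over it. A minimal sketch, assuming a local audio file zh.wav:

```python
output = pipeline.predict(input="zh.wav")
for res in output:  # each res is one prediction result
    res.print()
```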
| Method | Description | Parameter | Parameter Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| print() | Print the result to the terminal | format_json | bool | Whether to format the output content using JSON indentation | True |
| | | indent | int | Specify the indentation level to beautify the output JSON data, making it more readable. Effective only when format_json is True | 4 |
| | | ensure_ascii | bool | Control whether to escape non-ASCII characters to Unicode. When set to True, all non-ASCII characters will be escaped; False will retain the original characters. Effective only when format_json is True | False |
| save_to_json() | Save the result as a JSON file | save_path | str | Path to save the file. When it is a directory, the saved file name is consistent with the input file type naming | None |
| | | indent | int | Specify the indentation level to beautify the output JSON data, making it more readable. Effective only when format_json is True | 4 |
| | | ensure_ascii | bool | Control whether to escape non-ASCII characters to Unicode. When set to True, all non-ASCII characters will be escaped; False will retain the original characters. Effective only when format_json is True | False |
| Attribute | Attribute Description |
|---|---|
| json | Get the predicted result in json format |
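To illustrate the options above, the following sketch saves the result with explicit formatting parameters and reads the json attribute (the output directory is an assumption):

```python
output = pipeline.predict(input="zh.wav")
for res in output:
    # ensure_ascii=False keeps non-ASCII (e.g. Chinese) characters readable
    res.print(format_json=True, indent=4, ensure_ascii=False)
    # save_path is a directory, so the saved file name follows the input file name
    res.save_to_json(save_path="./output/")
    data = res.json  # the prediction result in json format
```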
If you have obtained the configuration file for the multilingual_speech_recognition pipeline, you can directly modify the parameters in the configuration file and load it for prediction. Additionally, CLI prediction also supports passing in a configuration file; simply specify the path of the configuration file with --pipeline.
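For example, a CLI invocation that loads a modified configuration file might look like this (the file path is a placeholder):

```bash
paddlex --pipeline ./my_path/multilingual_speech_recognition.yaml --input zh.wav --device gpu:0
```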
Below are multi-language examples of invoking the multilingual speech recognition service:

```python
import requests

API_URL = "http://localhost:8080/multilingual-speech-recognition"  # Service URL
audio_path = "./zh.wav"

payload = {"audio": audio_path}  # URL or server-accessible path of the audio file

# Call API
response = requests.post(API_URL, json=payload)

# Process API response
assert response.status_code == 200
result = response.json()["result"]
print("Recognized text:", result["text"])
print("Detected language:", result["language"])
for segment in result["segments"]:
    print(f'{segment["start"]} - {segment["end"]}: {segment["text"]}')
```
```cpp
#include <iostream>
#include <string>
#include "cpp-httplib/httplib.h" // https://github.com/Huiyicc/cpp-httplib
#include "nlohmann/json.hpp" // https://github.com/nlohmann/json

int main() {
    httplib::Client client("localhost:8080");

    // URL or server-accessible path of the audio file
    nlohmann::json jsonObj;
    jsonObj["audio"] = "./zh.wav";
    std::string body = jsonObj.dump();

    // Call API
    auto response = client.Post("/multilingual-speech-recognition", body, "application/json");

    // Process API response
    if (response && response->status == 200) {
        nlohmann::json jsonResponse = nlohmann::json::parse(response->body);
        auto result = jsonResponse["result"];
        std::cout << "Recognized text: " << result["text"].get<std::string>() << std::endl;
        std::cout << "Detected language: " << result["language"].get<std::string>() << std::endl;
        for (const auto& segment : result["segments"]) {
            std::cout << segment["start"].get<double>() << " - "
                      << segment["end"].get<double>() << ": "
                      << segment["text"].get<std::string>() << std::endl;
        }
    } else {
        std::cerr << "Failed to send HTTP request." << std::endl;
        return 1;
    }
    return 0;
}
```
```java
import okhttp3.*;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.IOException;

public class Main {
    public static void main(String[] args) throws IOException {
        String API_URL = "http://localhost:8080/multilingual-speech-recognition"; // Service URL
        String audioPath = "./zh.wav"; // URL or server-accessible path of the audio file

        ObjectMapper objectMapper = new ObjectMapper();
        ObjectNode params = objectMapper.createObjectNode();
        params.put("audio", audioPath);

        // Create OkHttpClient instance
        OkHttpClient client = new OkHttpClient();
        MediaType JSON = MediaType.Companion.get("application/json; charset=utf-8");
        RequestBody body = RequestBody.Companion.create(params.toString(), JSON);
        Request request = new Request.Builder()
                .url(API_URL)
                .post(body)
                .build();

        // Call API and process API response
        try (Response response = client.newCall(request).execute()) {
            if (response.isSuccessful()) {
                String responseBody = response.body().string();
                JsonNode result = objectMapper.readTree(responseBody).get("result");
                System.out.println("Recognized text: " + result.get("text").asText());
                System.out.println("Detected language: " + result.get("language").asText());
                System.out.println("Segments: " + result.get("segments").toString());
            } else {
                System.err.println("Request failed with code: " + response.code());
            }
        }
    }
}
```
```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    API_URL := "http://localhost:8080/multilingual-speech-recognition"
    audioPath := "./zh.wav" // URL or server-accessible path of the audio file

    payload := map[string]string{"audio": audioPath}
    payloadBytes, err := json.Marshal(payload)
    if err != nil {
        fmt.Println("Error marshaling payload:", err)
        return
    }

    // Call the API
    client := &http.Client{}
    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
    if err != nil {
        fmt.Println("Error creating request:", err)
        return
    }
    req.Header.Set("Content-Type", "application/json")

    res, err := client.Do(req)
    if err != nil {
        fmt.Println("Error sending request:", err)
        return
    }
    defer res.Body.Close()

    // Handle the API response
    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
        fmt.Println("Error reading response body:", err)
        return
    }
    type Segment struct {
        Start float64 `json:"start"`
        End   float64 `json:"end"`
        Text  string  `json:"text"`
    }
    type Response struct {
        Result struct {
            Text     string    `json:"text"`
            Segments []Segment `json:"segments"`
            Language string    `json:"language"`
        } `json:"result"`
    }
    var respData Response
    if err := json.Unmarshal(body, &respData); err != nil {
        fmt.Println("Error unmarshaling response body:", err)
        return
    }
    fmt.Println("Recognized text:", respData.Result.Text)
    fmt.Println("Detected language:", respData.Result.Language)
    for _, segment := range respData.Result.Segments {
        fmt.Printf("%.2f - %.2f: %s\n", segment.Start, segment.End, segment.Text)
    }
}
```
```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class Program
{
    static readonly string API_URL = "http://localhost:8080/multilingual-speech-recognition";
    static readonly string audioPath = "./zh.wav"; // URL or server-accessible path of the audio file

    static async Task Main(string[] args)
    {
        var httpClient = new HttpClient();

        var payload = new JObject { { "audio", audioPath } };
        var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");

        // Call the API
        HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
        response.EnsureSuccessStatusCode();

        // Handle the API response
        string responseBody = await response.Content.ReadAsStringAsync();
        JObject jsonResponse = JObject.Parse(responseBody);
        JToken result = jsonResponse["result"];
        Console.WriteLine($"Recognized text: {result["text"]}");
        Console.WriteLine($"Detected language: {result["language"]}");
        Console.WriteLine($"Segments: {result["segments"]}");
    }
}
```
```javascript
const axios = require('axios');

const API_URL = 'http://localhost:8080/multilingual-speech-recognition';
const audioPath = './zh.wav'; // URL or server-accessible path of the audio file

const config = {
    method: 'POST',
    url: API_URL,
    data: { 'audio': audioPath }
};

// Call the API
axios.request(config)
    .then((response) => {
        // Process the API response
        const result = response.data['result'];
        console.log(`Recognized text: ${result['text']}`);
        console.log(`Detected language: ${result['language']}`);
        console.log('Segments:');
        console.log(result['segments']);
    })
    .catch((error) => {
        console.log(error);
    });
```
```php
<?php
$API_URL = "http://localhost:8080/multilingual-speech-recognition"; // Service URL
$audio_path = "./zh.wav"; // URL or server-accessible path of the audio file

$payload = array("audio" => $audio_path);

// Call the API
$ch = curl_init($API_URL);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Content-Type: application/json"));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// Process the API response
$result = json_decode($response, true)["result"];
echo "Recognized text: " . $result["text"] . "\n";
echo "Detected language: " . $result["language"] . "\n";
echo "Segments:\n";
print_r($result["segments"]);
?>
```
For the main operations provided by the service:

- The HTTP request method is POST.
- Both the request body and the response body are JSON data (JSON objects).
- When the request is processed successfully, the response status code is 200, and the properties of the response body are as follows:

| Name | Type | Meaning |
|---|---|---|
| logId | string | The UUID of the request. |
| errorCode | integer | Error code. Fixed as 0. |
| errorMsg | string | Error message. Fixed as "Success". |
| result | object | The result of the operation. |
- When the request is not processed successfully, the properties of the response body are as follows:

| Name | Type | Meaning |
|---|---|---|
| logId | string | The UUID of the request. |
| errorCode | integer | Error code. Same as the response status code. |
| errorMsg | string | Error message. |
The main operations provided by the service are as follows:
infer: Perform multilingual speech recognition on audio.

POST /multilingual-speech-recognition

The properties of the request body are as follows:

| Name | Type | Meaning | Required |
|---|---|---|---|
| audio | string | The URL or path of the audio file accessible by the server. | Yes |
When the request is processed successfully, the result of the response body has the following properties:

| Name | Type | Meaning |
|---|---|---|
| text | string | The text result of speech recognition. |
| segments | array | The result text with timestamps. |
| language | string | The recognized language. |
Each element in segments is an object with the following properties:
| Name | Type | Meaning |
|---|---|---|
| id | integer | The ID of the audio segment. |
| seek | integer | The pointer of the audio segment. |
| start | number | The start time of the audio segment. |
| end | number | The end time of the audio segment. |
| text | string | The recognized text of the audio segment. |
| tokens | array | The token IDs of the audio segment. |
| temperature | number | The sampling temperature used during decoding. |
| avgLogProb | number | The average log probability. |
| compressionRatio | number | The compression ratio. |
| noSpeechProb | number | The probability of no speech. |
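To make the schema concrete, a successful response body might look like the following sketch (all values are illustrative placeholders, not real output):

```json
{
  "logId": "d2f05a3e-....",
  "errorCode": 0,
  "errorMsg": "Success",
  "result": {
    "text": "...full recognized text...",
    "segments": [
      {
        "id": 0,
        "seek": 0,
        "start": 0.0,
        "end": 2.0,
        "text": "...text of the first segment...",
        "tokens": [50364, 1546, 6254],
        "temperature": 0,
        "avgLogProb": -0.25,
        "compressionRatio": 1.4,
        "noSpeechProb": 0.01
      }
    ],
    "language": "zh"
  }
}
```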
Example of Multilingual Service Invocation
Python
```python
import requests

API_URL = "http://localhost:8080/multilingual-speech-recognition"  # Service URL
audio_path = "./zh.wav"

payload = {"audio": audio_path}

# Invoke API
response = requests.post(API_URL, json=payload)

# Process API response
assert response.status_code == 200
result = response.json()["result"]
print(result)
```
📱 Edge Deployment: Edge deployment is a method that places computational and data processing capabilities directly on user devices, allowing them to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed procedures, please refer to the PaddleX Edge Deployment Guide. You can choose the appropriate deployment method based on your needs to integrate the model into your pipeline and proceed with subsequent AI application integration.
If the default model weights provided by the multilingual speech recognition pipeline are not satisfactory in terms of accuracy or speed for your specific scenario, you can attempt to fine-tune the existing model using your own domain-specific or application-specific data to improve the recognition performance of the pipeline in your scenario.
Since the multilingual speech recognition pipeline only includes a speech recognition module, if the performance of the pipeline is not up to expectations, you can analyze the audio samples with poor recognition and refer to the corresponding fine-tuning tutorial links in the table below for model fine-tuning.
| Scenario | Fine-Tuning Module | Fine-Tuning Reference Link |
|---|---|---|
| Inaccurate speech recognition | Multilingual Speech Recognition Module | Link |
After completing fine-tuning with your private dataset, you will obtain a local model weights file.
To use the fine-tuned model weights, simply replace the model path in the pipeline configuration file with the path to your fine-tuned weights, then load the modified configuration file for prediction:
```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/multilingual_speech_recognition.yaml")
output = pipeline.predict(input="zh.wav")
for res in output:
    res.print()
    res.save_to_json("./output/")
```
Subsequently, refer to the command-line method or Python script method in the local experience to load the modified pipeline configuration file.
PaddleX supports a variety of mainstream hardware devices, including NVIDIA GPU, Kunlunxin XPU, Ascend NPU, and Cambricon MLU. Simply modify the --device parameter to seamlessly switch between different hardware devices.
For example, if you use an Ascend NPU for speech recognition in the pipeline, the Python command used is as follows (a minimal sketch based on the earlier usage, with only the device changed):
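```python
from paddlex import create_pipeline

# Switch the inference device to an Ascend NPU
pipeline = create_pipeline(
    pipeline="multilingual_speech_recognition",
    device="npu:0"
)
output = pipeline.predict(input="zh.wav")
for res in output:
    res.print()
    res.save_to_json("./output/")
```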