Commit 859896c

[Other] add code and docs for ppclas examples (#1312)

* add code and docs for ppclas examples
* fix doc
* add code for printing results
* add ppcls demo and docs
* modify example according to refined c api
* modify example code and docs for ppcls and ppdet
* modify example code and docs for ppcls and ppdet
* update ppdet demo
* fix demo codes
* fix doc
* release resource when failed
* fix
* fix name
* fix name

1 parent 7c4e0d7 commit 859896c

14 files changed: +926 −138 lines

csharp/fastdeploy/vision/detection/ppdet/model.cs: +60 −60 (large diff not rendered by default)
@@ -0,0 +1,13 @@
PROJECT(infer_demo C)
CMAKE_MINIMUM_REQUIRED(VERSION 3.10)

# Path of the downloaded and extracted FastDeploy library
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.c)
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
@@ -0,0 +1,183 @@
English | [简体中文](README_CN.md)
# PaddleClas C Deployment Example

This directory provides `infer.c`, an example of quickly deploying PaddleClas models on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. Download the precompiled deployment library and sample code matching your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

Taking ResNet50_vd inference on Linux as an example, execute the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the ResNet50_vd model file and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
tar -xvf ResNet50_vd_infer.tgz
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg

# CPU inference
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 0
# GPU inference
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 1
```

The above commands work on Linux or MacOS. For SDK usage on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
## PaddleClas C Interface

### RuntimeOption

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.


```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```

> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.
> * **gpu_id**(int): GPU id
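The two calls above can be combined into a small helper that picks the device at run time, mirroring the demo's third command-line argument. This is a minimal sketch using only the functions documented above; the header name `fastdeploy_capi/vision.h` is an assumption and should be checked against the SDK you downloaded.

```c
// Sketch: choose CPU or GPU inference with the documented RuntimeOption calls.
// Assumption: the FastDeploy C API is exposed via "fastdeploy_capi/vision.h".
#include "fastdeploy_capi/vision.h"

FD_C_RuntimeOptionWrapper* configure_runtime(int use_gpu, int gpu_id) {
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  if (use_gpu) {
    FD_C_RuntimeOptionWrapperUseGpu(option, gpu_id);  // inference on the given GPU
  } else {
    FD_C_RuntimeOptionWrapperUseCpu(option);          // inference on CPU
  }
  return option;  // pass this to FD_C_CreatePaddleClasModelWrapper
}
```

The returned pointer is what the model-creation function below expects as its `runtime_option` argument.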
### Model

```c
FD_C_PaddleClasModelWrapper* FD_C_CreatePaddleClasModelWrapper(
    const char* model_file, const char* params_file, const char* config_file,
    FD_C_RuntimeOptionWrapper* runtime_option,
    const FD_C_ModelFormat model_format)
```

> Create a PaddleClas model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **config_file**(const char*): Configuration file path, i.e. the deployment yaml file exported by PaddleClas.
> * **runtime_option**(FD_C_RuntimeOptionWrapper*): Backend inference configuration. If left unset, the default configuration is used.
> * **model_format**(FD_C_ModelFormat): Model format. Paddle format by default
>
> **Return**
> * **fd_c_ppclas_wrapper**(FD_C_PaddleClasModelWrapper*): Pointer to manipulate PaddleClas object.
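A model-creation call might look like the sketch below. The file names (`inference.pdmodel`, `inference.pdiparams`, `inference_cls.yaml`) are the usual PaddleClas export layout and are assumptions here, as is the enum value `FD_C_ModelFormat_PADDLE` and the null return on failure; verify them against the SDK headers.

```c
// Sketch: create a PaddleClas model from the ResNet50_vd_infer directory
// downloaded in the quick-start commands above.
// Assumptions: standard PaddleClas export file names, the enum value
// FD_C_ModelFormat_PADDLE, and a null return on failure.
#include <stdio.h>
#include "fastdeploy_capi/vision.h"  // header name is an assumption

FD_C_PaddleClasModelWrapper* create_model(FD_C_RuntimeOptionWrapper* option) {
  FD_C_PaddleClasModelWrapper* model = FD_C_CreatePaddleClasModelWrapper(
      "ResNet50_vd_infer/inference.pdmodel",    // model file
      "ResNet50_vd_infer/inference.pdiparams",  // parameter file
      "ResNet50_vd_infer/inference_cls.yaml",   // deployment yaml from PaddleClas
      option, FD_C_ModelFormat_PADDLE);
  if (model == NULL) {
    fprintf(stderr, "Failed to create PaddleClas model.\n");
  }
  return model;
}
```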
#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to cv::Mat object which holds the image.


```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): save path
> * **img**(FD_C_Mat): pointer to cv::Mat object
>
> **Return**
>
> * **result**(FD_C_Bool): bool to indicate success or failure
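A round trip through the two I/O helpers can be sketched as follows. The assumption that a failed read yields a null `FD_C_Mat` handle is not stated on this page.

```c
// Sketch: read the test image from disk and write a copy back out,
// using only FD_C_Imread / FD_C_Imwrite as documented above.
// Assumption: FD_C_Imread returns a null handle when the read fails.
#include <stdio.h>
#include "fastdeploy_capi/vision.h"  // header name is an assumption

int main(void) {
  FD_C_Mat img = FD_C_Imread("ILSVRC2012_val_00000010.jpeg");
  if (img == NULL) {
    fprintf(stderr, "failed to read image\n");
    return 1;
  }
  FD_C_Bool ok = FD_C_Imwrite("copy.jpg", img);
  printf(ok ? "saved copy.jpg\n" : "write failed\n");
  return 0;
}
```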
#### Prediction

```c
FD_C_Bool FD_C_PaddleClasModelWrapperPredict(
    __fd_take FD_C_PaddleClasModelWrapper* fd_c_ppclas_wrapper, FD_C_Mat img,
    FD_C_ClassifyResult* fd_c_ppclas_result)
```

> Predict an image, and generate the classification result.
>
> **Params**
> * **fd_c_ppclas_wrapper**(FD_C_PaddleClasModelWrapper*): pointer to manipulate PaddleClas object
> * **img**(FD_C_Mat): pointer to cv::Mat object, which can be obtained by the FD_C_Imread interface
> * **fd_c_ppclas_result**(FD_C_ClassifyResult*): The classification result, including label_id and the corresponding confidence. Refer to [Visual Model Prediction Results](../../../../../docs/api/vision_results/) for the description of ClassifyResult
#### Result

```c
FD_C_ClassifyResultWrapper* FD_C_CreateClassifyResultWrapperFromData(
    FD_C_ClassifyResult* fd_c_classify_result)
```

> Create a pointer to an FD_C_ClassifyResultWrapper structure, which contains the `fastdeploy::vision::ClassifyResult` object in C++. With this pointer, you can call methods of the C++ ClassifyResult object through the C API.
>
> **Params**
> * **fd_c_classify_result**(FD_C_ClassifyResult*): pointer to FD_C_ClassifyResult structure
>
> **Return**
> * **fd_c_classify_result_wrapper**(FD_C_ClassifyResultWrapper*): pointer to FD_C_ClassifyResultWrapper structure


```c
char* FD_C_ClassifyResultWrapperStr(
    FD_C_ClassifyResultWrapper* fd_c_classify_result_wrapper);
```

> Call the Str() method of the `fastdeploy::vision::ClassifyResult` object contained in the FD_C_ClassifyResultWrapper structure, and return a string describing the information in the result.
>
> **Params**
> * **fd_c_classify_result_wrapper**(FD_C_ClassifyResultWrapper*): pointer to FD_C_ClassifyResultWrapper structure
>
> **Return**
> * **str**(char*): a string describing the information in the result
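Putting the documented calls together, an end-to-end inference might be sketched as below. Error handling and resource release are elided; the model/params/config file names, the header name, the enum value `FD_C_ModelFormat_PADDLE`, and stack allocation of `FD_C_ClassifyResult` are all assumptions, not confirmed by this page.

```c
// End-to-end sketch: configure the runtime, load the model, predict one
// image, and print the textual result. All file names and the enum value
// FD_C_ModelFormat_PADDLE are assumptions based on the quick-start above.
#include <stdio.h>
#include "fastdeploy_capi/vision.h"  // header name is an assumption

int main(void) {
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapperUseCpu(option);  // CPU inference

  FD_C_PaddleClasModelWrapper* model = FD_C_CreatePaddleClasModelWrapper(
      "ResNet50_vd_infer/inference.pdmodel",
      "ResNet50_vd_infer/inference.pdiparams",
      "ResNet50_vd_infer/inference_cls.yaml",
      option, FD_C_ModelFormat_PADDLE);

  FD_C_Mat img = FD_C_Imread("ILSVRC2012_val_00000010.jpeg");

  FD_C_ClassifyResult result;  // assumption: caller-provided storage is allowed
  if (!FD_C_PaddleClasModelWrapperPredict(model, img, &result)) {
    fprintf(stderr, "prediction failed\n");
    return 1;
  }

  // Wrap the raw result so the C++ Str() method can be reached from C.
  FD_C_ClassifyResultWrapper* wrapper =
      FD_C_CreateClassifyResultWrapperFromData(&result);
  printf("%s\n", FD_C_ClassifyResultWrapperStr(wrapper));
  return 0;
}
```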
- [Model Description](../../)
- [Python Deployment](../python)
- [Visual Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
@@ -0,0 +1,189 @@
[English](README.md) | 简体中文
# PaddleClas C Deployment Example

This directory provides `infer_xxx.c` examples that call the C API to quickly deploy PaddleClas series models on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code matching your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking ResNet50_vd inference on Linux as an example, execute the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the ResNet50_vd model file and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
tar -xvf ResNet50_vd_infer.tgz
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg

# CPU inference
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 0
# GPU inference
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 1
```

The above commands work only on Linux or MacOS. For SDK usage on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, initialize the deployment environment before deploying as described in:
- [How to deploy on Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
## PaddleClas C API Interface

### Configuration

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption configuration object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): pointer to the RuntimeOption object


```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```

> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): pointer to the RuntimeOption object
> * **gpu_id**(int): GPU card number
### Model

```c
FD_C_PaddleClasModelWrapper* FD_C_CreatePaddleClasModelWrapper(
    const char* model_file, const char* params_file, const char* config_file,
    FD_C_RuntimeOptionWrapper* runtime_option,
    const FD_C_ModelFormat model_format)
```

> Create a PaddleClas model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): model file path
> * **params_file**(const char*): parameter file path
> * **config_file**(const char*): configuration file path, i.e. the deployment yaml file exported by PaddleClas
> * **runtime_option**(FD_C_RuntimeOptionWrapper*): pointer to the RuntimeOption, i.e. the backend inference configuration
> * **model_format**(FD_C_ModelFormat): model format
>
> **Return**
> * **fd_c_ppclas_wrapper**(FD_C_PaddleClasModelWrapper*): pointer to the PaddleClas model object
#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image file path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to the cv::Mat holding the image data.


```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): path to save the image
> * **img**(FD_C_Mat): pointer to the image data
>
> **Return**
>
> * **result**(FD_C_Bool): indicates whether the operation succeeded
#### Predict function

```c
FD_C_Bool FD_C_PaddleClasModelWrapperPredict(
    __fd_take FD_C_PaddleClasModelWrapper* fd_c_ppclas_wrapper, FD_C_Mat img,
    FD_C_ClassifyResult* fd_c_ppclas_result)
```

> Model prediction interface: takes an input image and generates the classification result.
>
> **Params**
> * **fd_c_ppclas_wrapper**(FD_C_PaddleClasModelWrapper*): pointer to the PaddleClas model
> * **img**(FD_C_Mat): pointer to the input image as a cv::Mat, which can be obtained by calling FD_C_Imread
> * **fd_c_ppclas_result**(FD_C_ClassifyResult*): the classification result, including label_id and the corresponding confidence. Refer to [Visual Model Prediction Results](../../../../../docs/api/vision_results/) for the description of ClassifyResult
#### Predict result

```c
FD_C_ClassifyResultWrapper* FD_C_CreateClassifyResultWrapperFromData(
    FD_C_ClassifyResult* fd_c_classify_result)
```

> Create a pointer to an FD_C_ClassifyResultWrapper object. FD_C_ClassifyResultWrapper contains the C++ `fastdeploy::vision::ClassifyResult` object; through this pointer, the corresponding C++ methods can be accessed and called from the C API.
>
> **Params**
> * **fd_c_classify_result**(FD_C_ClassifyResult*): pointer to the FD_C_ClassifyResult object
>
> **Return**
> * **fd_c_classify_result_wrapper**(FD_C_ClassifyResultWrapper*): pointer to the FD_C_ClassifyResultWrapper


```c
char* FD_C_ClassifyResultWrapperStr(
    FD_C_ClassifyResultWrapper* fd_c_classify_result_wrapper);
```

> Call the Str() method of the `fastdeploy::vision::ClassifyResult` object contained in the FD_C_ClassifyResultWrapper, and return a string describing the information in the result.
>
> **Params**
> * **fd_c_classify_result_wrapper**(FD_C_ClassifyResultWrapper*): pointer to the FD_C_ClassifyResultWrapper object
>
> **Return**
> * **str**(char*): a string describing the result data
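The commit message mentions releasing resources on failure. A hypothetical cleanup sketch is shown below; the `FD_C_Destroy*` function names follow the SDK's apparent naming convention but are assumptions not documented on this page, so check the actual `fastdeploy_capi` headers before relying on them.

```c
// Hypothetical cleanup sketch. The destroy functions below
// (FD_C_DestroyClassifyResultWrapper, FD_C_DestroyMat,
//  FD_C_DestroyPaddleClasModelWrapper, FD_C_DestroyRuntimeOptionWrapper)
// are ASSUMED names, not documented on this page.
#include "fastdeploy_capi/vision.h"  // header name is an assumption

void release_all(FD_C_RuntimeOptionWrapper* option,
                 FD_C_PaddleClasModelWrapper* model,
                 FD_C_Mat img,
                 FD_C_ClassifyResultWrapper* result_wrapper) {
  // Release in reverse order of creation; skip any handle whose creation
  // step failed and left it null.
  if (result_wrapper) FD_C_DestroyClassifyResultWrapper(result_wrapper);
  if (img) FD_C_DestroyMat(img);
  if (model) FD_C_DestroyPaddleClasModelWrapper(model);
  if (option) FD_C_DestroyRuntimeOptionWrapper(option);
}
```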
- [Model Description](../../)
- [Python Deployment](../python)
- [Visual Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
