Commit 6dad271 ("vis")
1 parent 46e009a

File tree: 6 files changed (+65, −29 lines)

docs/Intra-node/extensible_backend.mdx (+19)

````diff
@@ -95,6 +95,25 @@ model(input)
 assert(input["result"] == b"123")
 ```
 
+Or you can:
+```python
+tp.utils.cpp_extension.load_filter(
+    name = 'Skip',
+    sources='status forward(dict data){return status::Skip;}',
+    sources_header="")
+
+tp.utils.cpp_extension.load_backend(
+    name = 'identity',
+    sources='void forward(dict data){(*data)["result"] = (*data)["data"];}',
+    sources_header="")
+model = tp.pipe({"backend":'identity'})
+input = {"data":2}
+model(input)
+assert input["result"] == 2
+```
+
 ## Binding with Python
 When using Python as the front-end language, the back-end is called from Python and the results are returned to Python, requiring type conversion.
 ### From Python Types to Any {#py2any}
````
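The C++ snippets added above follow a simple contract: a backend's `forward` mutates a shared dict in place, and a filter's `forward` returns a status that steers the pipeline. As a plain-Python sketch of that contract (illustration only — `Status`, `skip_filter`, and `identity_backend` are hypothetical names for this sketch, not the torchpipe API, which compiles the real backends from C++ source):

```python
from enum import Enum


class Status(Enum):
    """Sketch of the filter statuses; torchpipe's real status enum lives in C++."""
    Run = 0
    Skip = 1


def skip_filter(data: dict) -> Status:
    # Counterpart of: status forward(dict data){return status::Skip;}
    return Status.Skip


def identity_backend(data: dict) -> None:
    # Counterpart of: void forward(dict data){(*data)["result"] = (*data)["data"];}
    data["result"] = data["data"]


input = {"data": 2}
identity_backend(input)
assert input["result"] == 2
assert skip_filter(input) is Status.Skip
```

The in-place dict mutation mirrors how the compiled backend writes `"result"` back into the same request object that the caller passed in.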

docs/installation.mdx (+4, −4)

````diff
@@ -145,12 +145,12 @@ For more examples, see [Showcase](./showcase/showcase.mdx).
 ## Customizing Dockerfile {#selfdocker}
 
 Refer to the [example Dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.1.base). After downloading [TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path) in advance, you can compile the corresponding base image.
-```
-# put TensorRT-9.1.0.4.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/
+```bash
+# put TensorRT-9.*.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/
 
-# docker build --network=host -f docker/trt9.1.base -t torchpipe:base_trt-9.1 .
+# docker build --network=host -f docker/trt9.base -t torchpipe:base_trt-9 .
 
-# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9.1 /bin/bash
+# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9 /bin/bash
 
 ```
 Base images compiled in this way have smaller sizes than NGC PyTorch images. Please note that `_GLIBCXX_USE_CXX11_ABI==0`.
````

docs/tools/vis.mdx (+11, −11)

````diff
@@ -4,25 +4,25 @@ title: Configuration Visualizing
 type: explainer
 ---
 
+(From v0.4.0)
 We provide a simple web-based visualization feature for configuration files.
 ## Environment Setup
 
 ```bash
-apt-get update
-apt install graphviz
-pip install pydot gradio
+pip install gradio
 ```
 
 ## Usage {#parameter}
 
-`torchpipe.utils.vis [-h] [--port PORT] [--save] toml`
 
-:::tip Parameters
-- **--save** - Whether to save the graph as an SVG image. The image will be saved in the current directory with a different file extension than the TOML file.
-:::
+```python
+import torchpipe as tp
 
+a=tp.parse_toml("examples/ppocr/ocr.toml")
 
-## Example
-```bash
-python -m torchpipe.utils.vis your.toml # --port 2211
-```
+tp.utils.Visual(a).launch()
+```
````

i18n/zh/docusaurus-plugin-content-docs/current/Intra-node/extensible_backend.mdx (+20)

````diff
@@ -99,6 +99,26 @@ model(input)
 assert(input["result"] == b"123")
 ```
 
+
+Or you can:
+```python
+tp.utils.cpp_extension.load_filter(
+    name = 'Skip',
+    sources='status forward(dict data){return status::Skip;}',
+    sources_header="")
+
+tp.utils.cpp_extension.load_backend(
+    name = 'identity',
+    sources='void forward(dict data){(*data)["result"] = (*data)["data"];}',
+    sources_header="")
+model = tp.pipe({"backend":'identity'})
+input = {"data":2}
+model(input)
+assert input["result"] == 2
+```
+
 ## Binding with Python
 When using Python as the front-end language, the back-end is called from Python and the results are returned to Python, requiring type conversion.
 ### From Python types to any {#py2any}
````

i18n/zh/docusaurus-plugin-content-docs/current/installation.mdx (+4, −4)

````diff
@@ -133,13 +133,13 @@ print(input["result"].shape) # if this fails, the key will not exist, even if it was present in the input
 
 ## Customizing Dockerfile {#selfdocker}
 
-Refer to the [example Dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.1.base). After downloading [TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path) in advance, you can build the corresponding base image.
+Refer to the [example Dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.base). After downloading [TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path) in advance, you can build the corresponding base image.
 ```bash
-# put TensorRT-9.1.0.4.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/
+# put TensorRT-9.*.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/
 
-# docker build --network=host -f docker/trt9.1.base -t torchpipe:base_trt-9.1 .
+# docker build --network=host -f docker/trt9.base -t torchpipe:base_trt-9 .
 
-# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9.1 /bin/bash
+# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9 /bin/bash
 
 ```
 Base images built this way are smaller than NGC PyTorch images. Note that `_GLIBCXX_USE_CXX11_ABI==0`.
````

i18n/zh/docusaurus-plugin-content-docs/current/tools/vis.mdx (+7, −10)

````diff
@@ -3,26 +3,23 @@ id: vis
 title: Configuration file visualization
 type: explainer
 ---
+Effective from version 0.4.4
 
 For configuration files, we provide a simple web-based visualization feature.
 
 ## Environment Setup
 ```bash
-apt-get update
-apt install graphviz
-pip install pydot gradio
+pip install gradio
 ```
 
 ## Usage {#parameter}
 
-`torchpipe.utils.vis [-h] [--port PORT] [--save] toml`
 
-:::tip Parameters
-- **--save** - Whether to save the graph as an SVG image. The image is saved in the current directory, named after the toml file (with a different extension).
-:::
+```python
+import torchpipe as tp
 
+a=tp.parse_toml("examples/ppocr/ocr.toml")
 
-## Example
-```bash
-python -m torchpipe.utils.vis your.toml # --port 2211
+tp.utils.Visual(a).launch()
 ```
````
