
Commit ac1e16e

Update fonts, add some new English text
1 parent 0e2f2f2 commit ac1e16e

19 files changed, +683 −444 lines

docs/contribution.md (+16 −16)

@@ -1,25 +1,25 @@
 ---
 id: contribution
-title: 贡献指南
+title: Contribution Guidelines
 type: explainer
 ---

-# 贡献指南
-欢迎大家为 Torchpipe 项目做出贡献。我们重视所有形式的贡献,包括但不限于:
+# Contribution Guidelines
+We welcome everyone to contribute to the Torchpipe project. We value all forms of contributions, including but not limited to:

-- 现有补丁的代码 review
-- 文档和使用示例
-- 社区参与论坛和 issue
-- 代码的可读性和开发手册
-- 为代码添加注释来提高可读性
-- 贡献文档来解释内部的设计选择
-- 测试用例,使代码库更加稳健
-- 推广项目的教程、博文和讲座
+- Code review for existing patches
+- Documentation and usage examples
+- Community participation in forums and issue tracking
+- Code readability and development guidelines
+- Adding comments to the code to improve readability
+- Contributing documentation to explain internal design choices
+- Test cases to make the codebase more robust
+- Promoting the project through tutorials, blog posts, and presentations

-以下是对项目各方面的贡献指南:
+Here are the contribution guidelines for the various aspects of the project:

-- [修改代码](contribution_guide/modify_the_code.md)
-- [修改文档](contribution_guide/modify_the_doc.md)
-- [权利说明](contribution_guide/statement.md)
-- [交流提问](contribution_guide/communicate.md)
+- [Modifying the code](contribution_guide/modify_the_code.md)
+- [Modifying the documentation](contribution_guide/modify_the_doc.md)
+- [Statement of rights](contribution_guide/statement.md)
+- [Communicating and asking questions](contribution_guide/communicate.md)
docs/contribution_guide/modify_the_code.md (+16 −17)

@@ -1,33 +1,32 @@
 ---
 id: modify_the_code
-title: 修改代码
+title: Code Modification
 type: reference
 ---

-# 修改代码
+
+# Code Modification

-代码位于 https://g.hz.netease.com/deploy/torchpipe/-/tree/develop
+To modify the code, please follow these steps:
+- Submit a merge request to the develop branch.
+- For major changes, please discuss them in an issue beforehand.

-修改代码方式:
-- 直接提交commit至develop分支
-- 提交 merge request 至 develop 分支
+To ensure the stability of the server, we have certain requirements for C++:
+- All code must be exception-safe.
+- Manual program exits are not allowed; exceptions should be thrown instead.

-为了保证服务端的稳定性,我们对c++有所要求:
-- 所有的代码需要是异常安全的
-- 不允许人为退出程序,而是以抛出异常代替
+For complex modifications, please consider testing:
+- Prepare the runtime environment by referring to [Torchpipe installation](../installation).
+- Run the existing tests:

-对于复杂的修改,请考虑进行测试:
-
-- 准备运行环境,请参考[torchpipe的安装](../installation.mdx)
-- 运行已有测试:
 ```bash
 cd test
 pip install -r requirements.txt
 pytest .
 ```

-需要时请考虑补充[python测试](https://g.hz.netease.com/deploy/torchpipe/-/tree/develop/test)
+If necessary, please consider supplementing with [Python tests](https://g.hz.netease.com/deploy/torchpipe/-/tree/develop/test).

-:::note 代码格式(optinal)
-请配置格式化插件以便[.clang-format](https://g.hz.netease.com/deploy/torchpipe/-/blob/develop/.clang-format)生效。
-:::
+:::note Code Formatting (optional)
+Please configure a formatting plugin so that [.clang-format](https://g.hz.netease.com/deploy/torchpipe/-/blob/develop/.clang-format) takes effect.
+:::
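The hunk above asks contributors to run the existing pytest suite and, when needed, add Python tests of their own. As a rough illustration only (not taken from the repository), a new test file under `test/` might look like the sketch below; the `Identity` backend name and the dict-in/dict-out calling convention are assumptions drawn from torchpipe's public examples, so adjust them to the actual API.

```python
# test_identity_sketch.py -- hypothetical example, not part of the repo.
import pytest

torchpipe = pytest.importorskip("torchpipe")  # skip cleanly if torchpipe is absent


def test_identity_roundtrip():
    # "Identity" is assumed to echo its input into the result key.
    model = torchpipe.pipe({"backend": "Identity"})

    request = {"data": b"hello"}
    model(request)  # results are assumed to be written back into the input dict

    assert request.get("result") == b"hello"


def test_bad_config_raises():
    # Per the C++ requirements above, failures should surface as Python
    # exceptions instead of terminating the process.
    with pytest.raises(Exception):
        torchpipe.pipe({"backend": "NoSuchBackend"})
```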
docs/contribution_guide/modify_the_doc.md (+4 −5)

@@ -1,11 +1,10 @@
 ---
 id: modify_the_doc
-title: 修改文档
+title: Documentation Modification
 type: reference
 ---

-# 修改文档
+The documentation is located at https://github.com/torchpipe/torchpipe.github.io.

-文档位于https://g.hz.netease.com/deploy/torchpipe-docs
-- 对于简单的修改,请直接提交commit至master分支
-- 否则,提交merge request
+To modify the documentation, please:
+- Submit a merge request to the main branch.

docs/introduction.md (+2 −2)

@@ -13,7 +13,7 @@ There are some industry practices, such as [triton inference server](https://git

 One common complaint from users of the Triton Inference Server is that in a system with multiple intertwined nodes, a lot of business logic needs to be completed on the client side and then called through RPC to the server, which can be cumbersome. For performance reasons, unconventional methods such as shared memory, ensemble, and [BLS](https://github.com/triton-inference-server/python_backend#business-logic-scripting) must be considered.

-To address this issue, TorchPipe provides a thread-safe function interface for the PyTorch frontend and a fine-grained backend extension for users, by delving into PyTorch's C++ calculation backend and CUDA stream management, as well as modeling domain-specific languages for multiple nodes.
+To address these issues, TorchPipe provides a thread-safe function interface for the PyTorch frontend and a fine-grained backend extension for users, by delving into PyTorch's C++ calculation backend and CUDA stream management, as well as modeling domain-specific languages for multiple nodes.


 ![jpg](.././static/images/EngineFlow-light-english.png)
@@ -22,7 +22,7 @@ To address this issue, TorchPipe provides a thread-safe function interface for t
 **Features of the torchpipe framework:**
 - Achieves near-optimal performance (peak throughput/TP99) from a business perspective, reducing widespread negative optimization and performance loss between nodes.
 - With a fine-grained generic backend, it is easy to expand hardware and weaken the difficulty of hardware vendor ecosystem migration.
-- Simple and high-performance modeling, including complex business systems such as multi-model fusion. Typical industrial scenarios include AI systems A and B with up to 10 model nodes in smart cities, and OCR systems that involve subgraph independent scheduling, bucket scheduling, and intelligent batch grouping for extreme optimization.
+- Simple and high-performance modeling, including complex business systems such as multi-model fusion. Typical industrial scenarios include AI systems with up to 10 model nodes in smart cities, and OCR systems that involve subgraph independent scheduling, bucket scheduling, and intelligent batch grouping for extreme optimization.
 - Maximizes the elimination of performance loss caused by Python runtime, GIL, heterogeneous hardware, virtualization, and multi-process.

 Unlike many other service-oriented frameworks, we decouple the system from RPC and focus on concurrent safety and pipeline scheduling of C++ and Python interfaces.
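The introduction above contrasts an in-process, thread-safe function interface with client-side RPC orchestration. The sketch below only illustrates that calling pattern: `pipeline` is a placeholder for whatever thread-safe forward callable the framework exposes, not torchpipe's actual API.

```python
# Conceptual sketch: many client threads call one in-process, thread-safe
# callable directly, with no RPC round trip. `pipeline` is a placeholder.
from concurrent.futures import ThreadPoolExecutor


def pipeline(request: dict) -> dict:
    # Stand-in for a thread-safe forward call into a shared pipeline object.
    return {"result": request["data"].upper()}


def handle(i: int) -> dict:
    # Each "client" shares the same object instead of serializing a request
    # and sending it to a separate server process.
    return pipeline({"data": f"request-{i}"})


with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle, range(32)))

assert all(r["result"].startswith("REQUEST-") for r in results)
```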

docs/preliminaries/rpc.md (−1)

@@ -4,7 +4,6 @@ title: Performance indicators for services
 type: explainer
 ---

-Performance indicators for services

 When evaluating the performance of a service, there are several key indicators to consider. These indicators can help us understand the performance of the service in terms of latency, throughput, error rate, and other aspects. Here are some commonly used key performance indicators:
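The rpc.md page introduces latency, throughput, and error rate as the key indicators. As a quick, self-contained illustration (the numbers are invented, and TP99 here simply means the 99th-percentile latency), these indicators can be computed from per-request records like so:

```python
# Compute common service indicators from per-request records.
import statistics

# (latency in seconds, succeeded?) for each request in a 10-second window
records = [(0.012, True), (0.015, True), (0.150, False), (0.011, True)] * 250
window_seconds = 10.0

latencies = sorted(lat for lat, _ in records)
tp99 = statistics.quantiles(latencies, n=100)[98]   # 99th-percentile latency
mean_latency = statistics.fmean(latencies)          # average latency
throughput = len(records) / window_seconds          # requests per second (QPS)
error_rate = sum(1 for _, ok in records if not ok) / len(records)

print(f"TP99: {tp99 * 1000:.1f} ms, mean: {mean_latency * 1000:.1f} ms")
print(f"throughput: {throughput:.0f} QPS, error rate: {error_rate:.1%}")
```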
