Update and fix docs (namespaces, consistency) (#6084)
Summary:
Pull Request resolved: #6084
Audit all instances of `\bexec_aten::` and `\btorch::` under `docs/`, updating where appropriate.
The only remaining `torch::` instances are for kernels, which I didn't get a chance to migrate before v0.4.0.
Also:
- Update the LLM Manual code to be consistent between the doc and main.cpp
- Fix some LLM Manual issues: point to the latest release, and refer to "main.cpp" instead of "main.h"
Reviewed By: mergennachin, Gasoonjia, Olivia-liu
Differential Revision: D64152344
fbshipit-source-id: 2f6582429d5e3ef285b728350f937247996bb454
docs/source/bundled-io.md (+15 -15)
````diff
@@ -201,51 +201,51 @@ This stage mainly focuses on executing the model with the bundled inputs and and
 ### Get ExecuTorch Program Pointer from `BundledProgram` Buffer
 
 We need the pointer to ExecuTorch program to do the execution. To unify the process of loading and executing `BundledProgram` and Program flatbuffer, we create an API:
 ...
         "get_program_data() failed with status 0x%" PRIx32,
         status);
 ```
 
 ### Load Bundled Input to Method
 
-To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `torch::executor::bundled_program::LoadBundledInput`:
+To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `executorch::bundled_program::load_bundled_input`:
 ...
-We call `torch::executor::bundled_program::VerifyResultWithBundledExpectedOutput` to verify the method's output with bundled expected outputs. Here's the details of this API:
+We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Here's the details of this API:
````
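For readers following the rename, here is a rough sketch of how the two renamed APIs fit together at runtime. This is my illustration, not part of the diff; the header path, the `SerializedBundledProgram` pointer type, and the exact signatures are assumptions based on the v0.4-era layout and should be checked against your release.

```cpp
// Hedged sketch (not from the commit): run one bundled test set end to end.
// Assumes the post-rename devtools/bundled_program header layout;
// SerializedBundledProgram is assumed to be the opaque type the header
// uses for the in-memory bundled-program buffer.
#include <executorch/devtools/bundled_program/bundled_program.h>
#include <executorch/runtime/executor/method.h>

using ::executorch::runtime::Error;
using ::executorch::runtime::Method;

Error run_bundled_testset(
    Method& method,
    ::executorch::bundled_program::SerializedBundledProgram* bundled,
    size_t testset_idx) {
  // Load the bundled inputs for test set `testset_idx` into the method.
  Error status = ::executorch::bundled_program::load_bundled_input(
      method, bundled, testset_idx);
  if (status != Error::Ok) {
    return status;
  }
  // Execute the method on those inputs.
  status = method.execute();
  if (status != Error::Ok) {
    return status;
  }
  // Compare the method's outputs against the bundled expected outputs.
  return ::executorch::bundled_program::verify_method_outputs(
      method, bundled, testset_idx);
}
```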
docs/source/concepts.md (+4 -4)
````diff
@@ -26,7 +26,7 @@ The goal of ATen dialect is to capture users’ programs as faithfully as possib
 ## ATen mode
 
-ATen mode uses the ATen implementation of Tensor (`at::Tensor`) and related types, such as `ScalarType`, from the PyTorch core. This is in contrast to portable mode, which uses ExecuTorch’s smaller implementation of tensor (`torch::executor::Tensor`) and related types, such as `torch::executor::ScalarType`.
+ATen mode uses the ATen implementation of Tensor (`at::Tensor`) and related types, such as `ScalarType`, from the PyTorch core. This is in contrast to ETensor mode, which uses ExecuTorch’s smaller implementation of tensor (`executorch::runtime::etensor::Tensor`) and related types, such as `executorch::runtime::etensor::ScalarType`.
 - ATen kernels that rely on the full `at::Tensor` API are usable in this configuration.
 - ATen kernels tend to do dynamic memory allocation and often have extra flexibility (and thus overhead) to handle cases not needed by mobile/embedded clients. e.g., CUDA support, sparse tensor support, and dtype promotion.
 - Note: ATen mode is currently a WIP.
````
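A small illustration of what the mode split means for user code (mine, not part of the diff). I'm assuming the `executorch::aten` alias namespace provided by the runtime's `exec_aten.h` header, which resolves to `at::` types in ATen mode and to `executorch::runtime::etensor::` types otherwise; older code spells the same alias `exec_aten::`.

```cpp
// Hedged sketch: code written against the alias namespace compiles in
// either mode. Header path and alias name are assumptions; check them
// against your ExecuTorch version.
#include <executorch/runtime/core/exec_aten/exec_aten.h>

// In ATen mode, executorch::aten::Tensor is at::Tensor; in ETensor mode it
// is executorch::runtime::etensor::Tensor. Both expose const_data_ptr().
float first_element(const executorch::aten::Tensor& t) {
  return t.const_data_ptr<float>()[0];
}
```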
````diff
@@ -244,10 +244,10 @@ Kernels that support a subset of tensor dtypes and/or dim orders.
 Parts of a model may be delegated to run on an optimized backend. The partitioner splits the graph into the appropriate sub-networks and tags them for delegation.
 
-## Portable mode (lean mode)
+## ETensor mode
 
-Portable mode uses ExecuTorch’s smaller implementation of tensor (`torch::executor::Tensor`) along with related types (`torch::executor::ScalarType`, etc.). This is in contrast to ATen mode, which uses the ATen implementation of Tensor (`at::Tensor`) and related types (`ScalarType`, etc.)
-
-`torch::executor::Tensor`, also known as ETensor, is a source-compatible subset of `at::Tensor`. Code written against ETensor can build against `at::Tensor`.
+ETensor mode uses ExecuTorch’s smaller implementation of tensor (`executorch::runtime::etensor::Tensor`) along with related types (`executorch::runtime::etensor::ScalarType`, etc.). This is in contrast to ATen mode, which uses the ATen implementation of Tensor (`at::Tensor`) and related types (`ScalarType`, etc.)
+
+`executorch::runtime::etensor::Tensor`, also known as ETensor, is a source-compatible subset of `at::Tensor`. Code written against ETensor can build against `at::Tensor`.
 - ETensor does not own or allocate memory on its own. To support dynamic shapes, kernels can allocate Tensor data using the MemoryAllocator provided by the client.
````
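Since ETensor never owns memory, constructing one means pointing it at client-owned buffers. A minimal sketch of what that looks like (my illustration, not from the diff; the `TensorImpl` constructor shape and header paths are assumptions drawn from the portable_type headers and may differ across versions):

```cpp
// Hedged sketch: an ETensor is a non-owning view over buffers the client
// provides; nothing here is allocated or freed by the tensor itself. In a
// kernel, `buffer` would typically come from the client's MemoryAllocator.
#include <executorch/runtime/core/portable_type/tensor.h>
#include <executorch/runtime/core/portable_type/tensor_impl.h>

using ::executorch::runtime::etensor::ScalarType;
using ::executorch::runtime::etensor::Tensor;
using ::executorch::runtime::etensor::TensorImpl;

// Client-owned storage and metadata; all must outlive the tensor view.
float buffer[4] = {1.0f, 2.0f, 3.0f, 4.0f};
TensorImpl::SizesType sizes[2] = {2, 2};
TensorImpl::DimOrderType dim_order[2] = {0, 1};
TensorImpl::StridesType strides[2] = {2, 1};

// TensorImpl records dtype, rank, sizes, and pointers; it copies nothing.
TensorImpl impl(ScalarType::Float, /*dim=*/2, sizes, buffer, dim_order, strides);
Tensor t(&impl);  // a 2x2 float view over `buffer`
```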