Commit 5512fe0

dbort authored and facebook-github-bot committed

Update and fix docs (namespaces, consistency) (#6084)

Summary:
Pull Request resolved: #6084

Audit all instances of `\bexec_aten::` and `\btorch::` under `docs/`, updating where appropriate. The only remaining `torch::` instances are for kernels, which I didn't get a chance to migrate before v0.4.0.

Also:
- Update the LLM Manual code to be consistent between the doc and main.cpp
- Fix some LLM Manual issues: point to the latest release, and "main.cpp" instead of "main.h"

Reviewed By: mergennachin, Gasoonjia, Olivia-liu

Differential Revision: D64152344

fbshipit-source-id: 2f6582429d5e3ef285b728350f937247996bb454
1 parent: 4b3ffc4

10 files changed: +167 −160 lines

docs/source/Doxyfile (+2 −1)

@@ -943,7 +943,8 @@ WARN_LOGFILE =
 # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
 # Note: If this tag is empty the current directory is searched.
 
-INPUT = ../runtime/executor/memory_manager.h \
+INPUT = ../devtools/bundled_program/bundled_program.h \
+        ../runtime/executor/memory_manager.h \
         ../runtime/executor/method.h \
         ../runtime/executor/method_meta.h \
         ../runtime/executor/program.h \

docs/source/build-run-coreml.md (+2 −3)

@@ -147,11 +147,10 @@ libsqlite3.tbd
 
 7. Update the code to load the program from the Application's bundle.
 ``` objective-c
-using namespace torch::executor;
-
 NSURL *model_url = [NBundle.mainBundle URLForResource:@"mv3_coreml_all" extension:@"pte"];
 
-Result<util::FileDataLoader> loader = util::FileDataLoader::from(model_url.path.UTF8String);
+Result<executorch::extension::FileDataLoader> loader =
+    executorch::extension::FileDataLoader::from(model_url.path.UTF8String);
 ```
 
 8. Use [Xcode](https://developer.apple.com/documentation/xcode/building-and-running-an-app#Build-run-and-debug-your-app) to deploy the application on the device.

docs/source/bundled-io.md (+15 −15)

@@ -201,51 +201,51 @@ This stage mainly focuses on executing the model with the bundled inputs and and
 ### Get ExecuTorch Program Pointer from `BundledProgram` Buffer
 We need the pointer to ExecuTorch program to do the execution. To unify the process of loading and executing `BundledProgram` and Program flatbuffer, we create an API:
 
-:::{dropdown} `GetProgramData`
+:::{dropdown} `get_program_data`
 
 ```{eval-rst}
-.. doxygenfunction:: torch::executor::bundled_program::GetProgramData
+.. doxygenfunction:: ::executorch::bundled_program::get_program_data
 ```
 :::
 
-Here's an example of how to use the `GetProgramData` API:
+Here's an example of how to use the `get_program_data` API:
 ```c++
 // Assume that the user has read the contents of the file into file_data using
 // whatever method works best for their application. The file could contain
 // either BundledProgram data or Program data.
 void* file_data = ...;
 size_t file_data_len = ...;
 
-// If file_data contains a BundledProgram, GetProgramData() will return a
+// If file_data contains a BundledProgram, get_program_data() will return a
 // pointer to the Program data embedded inside it. Otherwise it will return
 // file_data, which already pointed to Program data.
 const void* program_ptr;
 size_t program_len;
-status = torch::executor::bundled_program::GetProgramData(
+status = executorch::bundled_program::get_program_data(
     file_data, file_data_len, &program_ptr, &program_len);
 ET_CHECK_MSG(
     status == Error::Ok,
-    "GetProgramData() failed with status 0x%" PRIx32,
+    "get_program_data() failed with status 0x%" PRIx32,
     status);
 ```
 
 ### Load Bundled Input to Method
-To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `torch::executor::bundled_program::LoadBundledInput`:
+To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `executorch::bundled_program::load_bundled_input`:
 
-:::{dropdown} `LoadBundledInput`
+:::{dropdown} `load_bundled_input`
 
 ```{eval-rst}
-.. doxygenfunction:: torch::executor::bundled_program::LoadBundledInput
+.. doxygenfunction:: ::executorch::bundled_program::load_bundled_input
 ```
 :::
 
 ### Verify the Method's Output.
-We call `torch::executor::bundled_program::VerifyResultWithBundledExpectedOutput` to verify the method's output with bundled expected outputs. Here's the details of this API:
+We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Here's the details of this API:
 
-:::{dropdown} `VerifyResultWithBundledExpectedOutput`
+:::{dropdown} `verify_method_outputs`
 
 ```{eval-rst}
-.. doxygenfunction:: torch::executor::bundled_program::VerifyResultWithBundledExpectedOutput
+.. doxygenfunction:: ::executorch::bundled_program::verify_method_outputs
 ```
 :::
 
@@ -266,13 +266,13 @@ ET_CHECK_MSG(
     method.error());
 
 // Load testset_idx-th input in the buffer to plan
-status = torch::executor::bundled_program::LoadBundledInput(
+status = executorch::bundled_program::load_bundled_input(
     *method,
     program_data.bundled_program_data(),
     FLAGS_testset_idx);
 ET_CHECK_MSG(
     status == Error::Ok,
-    "LoadBundledInput failed with status 0x%" PRIx32,
+    "load_bundled_input failed with status 0x%" PRIx32,
     status);
 
 // Execute the plan
@@ -283,7 +283,7 @@ ET_CHECK_MSG(
     status);
 
 // Verify the result.
-status = torch::executor::bundled_program::VerifyResultWithBundledExpectedOutput(
+status = executorch::bundled_program::verify_method_outputs(
     *method,
     program_data.bundled_program_data(),
     FLAGS_testset_idx,

docs/source/concepts.md (+4 −4)

@@ -26,7 +26,7 @@ The goal of ATen dialect is to capture users’ programs as faithfully as possib
 
 ## ATen mode
 
-ATen mode uses the ATen implementation of Tensor (`at::Tensor`) and related types, such as `ScalarType`, from the PyTorch core. This is in contrast to portable mode, which uses ExecuTorch’s smaller implementation of tensor (`torch::executor::Tensor`) and related types, such as `torch::executor::ScalarType`.
+ATen mode uses the ATen implementation of Tensor (`at::Tensor`) and related types, such as `ScalarType`, from the PyTorch core. This is in contrast to ETensor mode, which uses ExecuTorch’s smaller implementation of tensor (`executorch::runtime::etensor::Tensor`) and related types, such as `executorch::runtime::etensor::ScalarType`.
 - ATen kernels that rely on the full `at::Tensor` API are usable in this configuration.
 - ATen kernels tend to do dynamic memory allocation and often have extra flexibility (and thus overhead) to handle cases not needed by mobile/embedded clients. e.g., CUDA support, sparse tensor support, and dtype promotion.
 - Note: ATen mode is currently a WIP.
@@ -244,10 +244,10 @@ Kernels that support a subset of tensor dtypes and/or dim orders.
 
 Parts of a model may be delegated to run on an optimized backend. The partitioner splits the graph into the appropriate sub-networks and tags them for delegation.
 
-## Portable mode (lean mode)
+## ETensor mode
 
-Portable mode uses ExecuTorch’s smaller implementation of tensor (`torch::executor::Tensor`) along with related types (`torch::executor::ScalarType`, etc.). This is in contrast to ATen mode, which uses the ATen implementation of Tensor (`at::Tensor`) and related types (`ScalarType`, etc.)
-- `torch::executor::Tensor`, also known as ETensor, is a source-compatible subset of `at::Tensor`. Code written against ETensor can build against `at::Tensor`.
+ETensor mode uses ExecuTorch’s smaller implementation of tensor (`executorch::runtime::etensor::Tensor`) along with related types (`executorch::runtime::etensor::ScalarType`, etc.). This is in contrast to ATen mode, which uses the ATen implementation of Tensor (`at::Tensor`) and related types (`ScalarType`, etc.)
+- `executorch::runtime::etensor::Tensor`, also known as ETensor, is a source-compatible subset of `at::Tensor`. Code written against ETensor can build against `at::Tensor`.
 - ETensor does not own or allocate memory on its own. To support dynamic shapes, kernels can allocate Tensor data using the MemoryAllocator provided by the client.
 
 ## Portable kernels

docs/source/etdump.md (+1 −1)

@@ -15,7 +15,7 @@ Generating an ETDump is a relatively straightforward process. Users can follow t
 2. ***Create*** an Instance of the ETDumpGen class and pass it into the `load_method` call that is invoked in the runtime.
 
 ```C++
-torch::executor::ETDumpGen etdump_gen = torch::executor::ETDumpGen();
+executorch::etdump::ETDumpGen etdump_gen;
 Result<Method> method =
     program->load_method(method_name, &memory_manager, &etdump_gen);
 ```

docs/source/executorch-runtime-api-reference.rst (+8 −8)

@@ -11,25 +11,25 @@ For detailed information on how APIs evolve and the deprecation process, please
 Model Loading and Execution
 ---------------------------
 
-.. doxygenclass:: executorch::runtime::DataLoader
+.. doxygenclass:: executorch::runtime::Program
    :members:
 
-.. doxygenclass:: executorch::runtime::MemoryAllocator
+.. doxygenclass:: executorch::runtime::Method
    :members:
 
-.. doxygenclass:: executorch::runtime::HierarchicalAllocator
+.. doxygenclass:: executorch::runtime::MethodMeta
    :members:
 
-.. doxygenclass:: executorch::runtime::MemoryManager
+.. doxygenclass:: executorch::runtime::DataLoader
    :members:
 
-.. doxygenclass:: executorch::runtime::Program
+.. doxygenclass:: executorch::runtime::MemoryAllocator
    :members:
 
-.. doxygenclass:: executorch::runtime::Method
+.. doxygenclass:: executorch::runtime::HierarchicalAllocator
    :members:
 
-.. doxygenclass:: executorch::runtime::MethodMeta
+.. doxygenclass:: executorch::runtime::MemoryManager
    :members:
 
 Values
@@ -38,5 +38,5 @@ Values
 .. doxygenstruct:: executorch::runtime::EValue
    :members:
 
-.. doxygenclass:: executorch::aten::Tensor
+.. doxygenclass:: executorch::runtime::etensor::Tensor
    :members:

0 commit comments
