Commit 9081ccb

Update docs
1 parent 25eecc4 commit 9081ccb

File tree: 5 files changed, +22 −12 lines changed


.vscode/settings.json (+8)
@@ -12,5 +12,13 @@
     "FSharp.keywordsAutocomplete": false,
     "FSharp.minimizeBackgroundParsing": true,
     "FSharp.resolveNamespaces": false,
+    "cSpell.words": [
+        "BLAS",
+        "Bezout's",
+        "CUDA",
+        "MATLAB",
+        "SIMD",
+        "nVidia"
+    ],
 
 }

Tensor/README.md (+5 −6)
@@ -7,7 +7,7 @@ A *tensor* is an n-dimensional array of an arbitrary data type (for example `sin
 Tensors of data type `'T` are implemented by the [Tensor<'T>](xref:Tensor.Tensor`1) type.
 
 A tensor can be either stored in host memory or in the memory of a GPU computing device.
-Currenty only nVidia cards implementing the [CUDA API](https://developer.nvidia.com/cuda-zone) are supported.
+Currently only nVidia cards implementing the [CUDA API](https://developer.nvidia.com/cuda-zone) are supported.
 The API for host and GPU stored tensors is mostly equal, thus a program can make use of GPU accelerated operations without porting effort.
 
 The tensor library provides functionality similar to [Numpy's Ndarray](http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.html) and [MATLAB arrays](http://www.mathworks.com/help/matlab/matrices-and-arrays.html), including vector-wise operations, reshaping, slicing, broadcasting, masked assignment, reduction operations and BLAS operations.
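Editor's note: the paragraph in the hunk above claims that host and GPU tensors share the same API. For illustration only (this code is not part of the commit), a minimal F# sketch of that usage pattern follows; the names HostTensor.init, CudaTensor.transfer and Tensor.sum are assumptions drawn from the library's published documentation and may differ between releases.

    open Tensor

    // create a 3x4 tensor in host memory, filled from its indices (assumed API, see note above)
    let a = HostTensor.init [3L; 4L] (fun [| i; j |] -> 10.0 * float i + float j)

    // element-wise and reduction operations on the host tensor
    let hostSum = Tensor.sum (sin a + a)

    // transfer to a CUDA GPU; the same functions are then used unchanged
    let aGpu   = CudaTensor.transfer a
    let gpuSum = Tensor.sum (sin aGpu + aGpu)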
@@ -17,12 +17,12 @@ This open source library is written in [F#](http://fsharp.org/) and targets the
 ### Features provided by the core Tensor library
 
 * Core features
-  * n-dimensional arrays (tensors) in host memory or on CUDA GPUs
+  * n-dimensional arrays (tensors) in host memory or on CUDA GPUs
   * element-wise operations (addition, multiplication, absolute value, etc.)
   * basic linear algebra operations (dot product, SVD decomposition, matrix inverse, etc.)
   * reduction operations (sum, product, average, maximum, arg max, etc.)
-  * logic operations (comparision, and, or, etc.)
-  * views, slicing, reshaping, broadcasting (similar to NumPy)
+  * logic operations (comparison, and, or, etc.)
+  * views, slicing, reshaping, broadcasting (similar to NumPy)
   * scatter and gather by indices
   * standard functional operations (map, fold, etc.)
 * Data exchange
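Editor's note: to make the core-feature bullets in the hunk above (element-wise operations, slicing, reductions) concrete, a short hedged F# sketch follows. It is not part of the commit, and helper names such as HostTensor.init and Tensor.sumAxis are assumptions taken from the library's documentation.

    open Tensor

    // 4x3 tensor in host memory, filled from its indices (assumed API, see note above)
    let m = HostTensor.init [4L; 3L] (fun [| i; j |] -> float (3L * i + j))

    let d        = m + m                // element-wise addition
    let r        = sqrt m               // element-wise square root
    let firstCol = m.[*, 0L]            // slicing: a view of the first column
    let rowSums  = Tensor.sumAxis 1 m   // reduction along axis 1
    let total    = Tensor.sum m         // full reduction to a scalar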
@@ -37,7 +37,7 @@ This open source library is written in [F#](http://fsharp.org/) and targets the
 * Matrix algebra (integer, rational)
   * Row echelon form
   * Smith normal form
-  * Kernel, cokernel and (pseudo-)inverse
+  * Kernel, co-kernel and (pseudo-)inverse
 * Matrix decomposition (floating point)
   * Principal component analysis (PCA)
   * ZCA whitening
@@ -59,4 +59,3 @@ The following NuGet packages are available for download.
 ## Documentation
 
 Documentation is provided at <http://www.deepml.net/Tensor>.
-
Tensor/Tensor.Docs/articles/ReleaseNotes.md (+4 −1)
@@ -1,8 +1,11 @@
 # Release notes
 
+* 0.4.11
+  * First release of 0.4 branch on public NuGet.
+
 * 0.4.10
   * Update to CUDA 9.1
-  * CUDA SDK no longer required.
+  * CUDA SDK no longer required.
 
 * 0.4.9
   * Remove type constraints on Tensor.arange and Tensor.linspace.
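Editor's note: the 0.4.9 entry above refers to Tensor.arange and Tensor.linspace. As a purely illustrative sketch (not part of this commit), the calls might look as follows; the HostTensor wrappers and the argument order (start, increment, stop for arange; start, stop, count for linspace) are assumptions based on the library's documentation.

    open Tensor

    // values 0.0, 0.2, 0.4, 0.6, 0.8 (the stop value is exclusive)
    let steps = HostTensor.arange 0.0 0.2 1.0

    // five evenly spaced values from 0.0 to 1.0 inclusive
    let grid = HostTensor.linspace 0.0 1.0 5L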

Tensor/Tensor.Docs/index.md (+2 −2)
@@ -6,7 +6,7 @@ A *tensor* is an n-dimensional array of an arbitrary data type (for example `sin
 Tensors of data type `'T` are implemented by the [Tensor<'T>](xref:Tensor.Tensor`1) type.
 
 A tensor can be either stored in host memory or in the memory of a GPU computing device.
-Currenty only nVidia cards implementing the [CUDA API](https://developer.nvidia.com/cuda-zone) are supported.
+Currently only nVidia cards implementing the [CUDA API](https://developer.nvidia.com/cuda-zone) are supported.
 The API for host and GPU stored tensors is mostly equal, thus a program can make use of GPU accelerated operations without porting effort.
 
 The tensor library provides functionality similar to [Numpy's Ndarray](http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.html) and [MATLAB arrays](http://www.mathworks.com/help/matlab/matrices-and-arrays.html), including vector-wise operations, reshaping, slicing, broadcasting, masked assignment, reduction operations and BLAS operations.
@@ -36,7 +36,7 @@ This open source library is written in [F#](http://fsharp.org/) and targets the
 * Matrix algebra (integer, rational)
   * Row echelon form
   * Smith normal form
-  * Kernel, cokernel and (pseudo-)inverse
+  * Kernel, co-kernel and (pseudo-)inverse
 * Matrix decomposition (floating point)
   * Principal component analysis (PCA)
   * ZCA whitening

Tensor/Tensor.Sample/README.md (+3 −3)
@@ -3,13 +3,13 @@
 [![Build Status](https://travis-ci.org/DeepMLNet/Tensor.Sample.svg?branch=master)](https://travis-ci.org/DeepMLNet/Tensor.Sample)
 [![Build status](https://ci.appveyor.com/api/projects/status/wrun3ku66kl09ki6?svg=true)](https://ci.appveyor.com/project/surban/tensor-sample)
 
-This is an example project that demonstrates the capabilites of the [F# tensor library](http://www.deepml.net/Tensor).
+This is an example project that demonstrates the capabilities of the [F# tensor library](http://www.deepml.net/Tensor).
 
 ## System requirements
 
-* Linux or Microsoft Windows (64-bit)
+* Linux, MacOS or Microsoft Windows (64-bit)
 * [Microsoft .NET Core 2.0 or higher](https://www.microsoft.com/net/learn/get-started/)
-* For GPU acceleration (optional): [nVidia CUDA SDK 8.0](https://developer.nvidia.com/cuda-80-ga2-download-archive)
+* For GPU acceleration (optional): [nVidia CUDA GPU with recent driver](http://www.nvidia.com/Download/index.aspx)
 
 ## Running
 