This demo application belongs to the set of examples for LightningChart JS, a data visualization library for JavaScript.
LightningChart JS is an entirely GPU-accelerated and performance-optimized charting library for presenting massive amounts of data. It offers an easy way of creating sophisticated and interactive charts and adding them to your website or web application.
The demo can be used as an example or a seed project. Local execution requires the following steps:
- Make sure that a relevant version of [Node.js](https://nodejs.org/en/download/) is installed
- Open the project folder in a terminal:

        npm install              # fetches dependencies
        npm start                # builds an application and starts the development server

- The application is available at *http://localhost:8080* in your browser; webpack-dev-server provides hot reload functionality.
This example uses the [Web Audio APIs][web-audio-api] to retrieve the frequency data to display in the heatmap. These APIs make it easy to work with audio files and manipulate them. For spectrogram use, the [AnalyzerNode][analyzer-node] is the most useful part of the API, as it provides the [getByteFrequencyData][getbytefrequencydata] method, which is an implementation of the [Fast Fourier Transform][fft].

The AudioContext contains a method to convert an ArrayBuffer into an [AudioBuffer][audiobuffer].
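As a rough sketch (the file name below is only a placeholder, not necessarily the file the demo uses), the audio file can be fetched as an ArrayBuffer and decoded with `decodeAudioData`:

```js
// Sketch: fetch an audio file and decode it into an AudioBuffer.
// 'audio.ogg' is a hypothetical file name.
const audioCtx = new AudioContext()

fetch('audio.ogg')
    .then((response) => response.arrayBuffer())
    .then((arrayBuffer) => audioCtx.decodeAudioData(arrayBuffer))
    .then((audioBuffer) => {
        // audioBuffer now holds the decoded PCM samples for further processing.
        console.log(audioBuffer.numberOfChannels, audioBuffer.sampleRate)
    })
```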
Now that the audio file is converted into an AudioBuffer, it's possible to start extracting data from it.
To process the full audio buffer as fast as possible, an [OfflineAudioContext][offlineaudiocontext] is used. The OfflineAudioContext doesn't output the data to an audio device; instead, it goes through the audio as fast as possible and outputs an AudioBuffer with the processed data. In this example the processed audio buffer is not used, but the processing is used to calculate the FFT data we need to display the intensities for each frequency in the spectrogram. The audio buffer we have created is used as a [buffer source][createbuffersource] for the OfflineAudioContext.
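A minimal sketch of this step, assuming `audioBuffer` is the decoded AudioBuffer from the previous step:

```js
// Sketch: an OfflineAudioContext sized to match the decoded buffer,
// with the buffer wrapped in a buffer source node.
const offlineCtx = new OfflineAudioContext(
    audioBuffer.numberOfChannels,
    audioBuffer.length,
    audioBuffer.sampleRate,
)
const source = offlineCtx.createBufferSource()
source.buffer = audioBuffer
```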
The buffer source only has a single output, but we want to be able to process each channel separately. To do this, a [ChannelSplitter][createchannelsplitter] is used with the output count matching the source channel count.
This makes it possible to process each channel separately: an AnalyzerNode is created for each channel, and only a single channel is piped to each analyzer.
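A sketch of the splitter and per-channel analyzers, continuing from the `offlineCtx` and `source` variables above (the `fftSize` value is an assumption, not necessarily the value the demo uses):

```js
// Sketch: split the source into its channels and attach an AnalyserNode to each.
const splitter = offlineCtx.createChannelSplitter(audioBuffer.numberOfChannels)
source.connect(splitter)

const analysers = []
for (let ch = 0; ch < audioBuffer.numberOfChannels; ch += 1) {
    const analyser = offlineCtx.createAnalyser()
    analyser.fftSize = 2048 // assumed FFT window size
    splitter.connect(analyser, ch) // pipe only channel `ch` to this analyser
    analysers.push(analyser)
}
```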
A [ScriptProcessorNode][createscriptprocessor] is used to go through the audio buffer in chunks. For each chunk, the FFT data is calculated for each channel and stored in buffers large enough to fit the full data.
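One possible way to wire this up is sketched below; the chunk size, buffer sizing and exact node connections are assumptions rather than the demo's actual implementation:

```js
// Sketch: a ScriptProcessorNode acting as a "tick" for every processed chunk.
// On each tick, the current FFT data of every analyser is copied into a
// pre-allocated per-channel buffer.
const chunkSize = 2048
const bins = analysers[0].frequencyBinCount // fftSize / 2 frequency bins
const chunkCount = Math.ceil(audioBuffer.length / chunkSize)
const channelData = analysers.map(() => new Uint8Array(chunkCount * bins))

const processor = offlineCtx.createScriptProcessor(chunkSize, 1, 1)
let chunkIndex = 0
processor.onaudioprocess = () => {
    if (chunkIndex >= chunkCount) return
    analysers.forEach((analyser, ch) => {
        const fft = new Uint8Array(bins)
        analyser.getByteFrequencyData(fft)
        channelData[ch].set(fft, chunkIndex * bins)
    })
    chunkIndex += 1
}
// The processor has to reach the destination for onaudioprocess to fire
// during the offline render.
analysers.forEach((analyser) => analyser.connect(processor))
processor.connect(offlineCtx.destination)
```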
Last, the [startRendering()][start-rendering] method is called to render out the audio buffer. This is when all of the FFT calculation is done.
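A sketch of this final step, continuing from the variables above:

```js
// Sketch: start the source and run the offline render; when the returned
// promise resolves, every chunk has been processed.
source.start(0)
offlineCtx.startRendering().then(() => {
    // channelData now holds the per-channel frequency intensities
    // that can be mapped onto the spectrogram heatmap.
})
```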