---
title: JavaScript developer reference for Azure Functions
description: Understand how to develop functions by using JavaScript.
ms.assetid: 45dedd78-3ff9-411f-bb4b-16d29a11384c
ms.topic: conceptual
ms.date: 11/18/2021
ms.devlang: javascript
ms.custom: devx-track-js
---
This guide contains detailed information to help you succeed in developing Azure Functions by using JavaScript.
As an Express.js, Node.js, or JavaScript developer who is new to Azure Functions, consider first reading one of the following sets of articles: Getting started, Concepts, or Guided learning.
A JavaScript (Node.js) function is an exported function
that executes when triggered (triggers are configured in function.json). The first argument passed to every function is a context
object, which is used for receiving and sending binding data, logging, and communicating with the runtime.
The required folder structure for a JavaScript project looks like the following. This default can be changed. For more information, see the scriptFile section below.
FunctionsProject
| - MyFirstFunction
| | - index.js
| | - function.json
| - MySecondFunction
| | - index.js
| | - function.json
| - SharedCode
| | - myFirstHelperFunction.js
| | - mySecondHelperFunction.js
| - node_modules
| - host.json
| - package.json
| - extensions.csproj
At the root of the project, there's a shared host.json file that can be used to configure the function app. Each function has a folder with its own code file (.js) and binding configuration file (function.json). The name of the function.json file's parent directory is always the name of your function.
The binding extensions required in version 2.x of the Functions runtime are defined in the extensions.csproj
file, with the actual library files in the bin
folder. When developing locally, you must register binding extensions. When developing functions in the Azure portal, this registration is done for you.
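One common way to register binding extensions when you develop locally is with extension bundles, which are configured in host.json instead of installing extensions explicitly into extensions.csproj. The following is a minimal sketch; the bundle version range shown is an assumption, so adjust it to the range your app requires.
{
    "version": "2.0",
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[2.*, 3.0.0)"
    }
}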
JavaScript functions must be exported via module.exports
(or exports
). Your exported function should be a JavaScript function that executes when triggered.
By default, the Functions runtime looks for your function in index.js
, where index.js
shares the same parent directory as its corresponding function.json
. In the default case, your exported function should be the only export from its file or the export named run
or index
. To configure the file location and export name of your function, read about configuring your function's entry point below.
Your exported function is passed a number of arguments on execution. The first argument it takes is always a context
object.
When using the async function
declaration or plain JavaScript Promises in version 2.x, 3.x, or 4.x of the Functions runtime, you do not need to explicitly call the context.done
callback to signal that your function has completed. Your function completes when the exported async function/Promise completes.
The following example is a simple function that logs that it was triggered and immediately completes execution.
module.exports = async function (context) {
context.log('JavaScript trigger function processed a request.');
};
When exporting an async function, you can also configure an output binding to take the return
value. This is recommended if you only have one output binding.
If your function is synchronous (doesn't return a Promise), you must pass the context
object, as calling context.done
is required for correct use.
// You should include `context`
// Other arguments like `myTrigger` are optional
module.exports = function(context, myTrigger, myInput, myOtherInput) {
// function logic goes here :)
context.done();
};
To assign an output using return
, change the name
property to $return
in function.json
.
{
"type": "http",
"direction": "out",
"name": "$return"
}
In this case, your function should look like the following example:
module.exports = async function (context, req) {
context.log('JavaScript HTTP trigger function processed a request.');
// You can call and await an async method here
return {
body: "Hello, world!"
};
}
In JavaScript, bindings are configured and defined in a function's function.json. Functions interact with bindings in a number of ways.
Inputs are divided into two categories in Azure Functions: trigger input and additional input. Trigger and other input bindings (bindings of direction === "in"
) can be read by a function in the following ways:
-
[Recommended] As parameters passed to your function. They are passed to the function in the same order that they are defined in function.json. The name property defined in function.json does not need to match the name of your parameter, although it should.
module.exports = async function(context, myTrigger, myInput, myOtherInput) { ... };
-
As members of the context.bindings object. Each member is named by the name property defined in function.json.
module.exports = async function(context) {
    context.log("This is myTrigger: " + context.bindings.myTrigger);
    context.log("This is myInput: " + context.bindings.myInput);
    context.log("This is myOtherInput: " + context.bindings.myOtherInput);
};
Outputs (bindings of direction === "out"
) can be written to by a function in a number of ways. In all cases, the name
property of the binding as defined in function.json corresponds to the name of the object member written to in your function.
You can assign data to output bindings in one of the following ways (don't combine these methods):
-
[Recommended for multiple outputs] Returning an object. If you are using an async/Promise returning function, you can return an object with assigned output data. In the example below, the output bindings are named "httpResponse" and "queueOutput" in function.json.
module.exports = async function(context) {
    let retMsg = 'Hello, world!';
    return {
        httpResponse: {
            body: retMsg
        },
        queueOutput: retMsg
    };
};
-
[Recommended for single output] Returning a value directly and using the $return binding name. This only works for async/Promise returning functions. See example in exporting an async function.
-
Assigning values to context.bindings. You can assign values directly to context.bindings:
module.exports = async function(context) {
    let retMsg = 'Hello, world!';
    context.bindings.httpResponse = {
        body: retMsg
    };
    context.bindings.queueOutput = retMsg;
};
To define the data type for an input binding, use the dataType
property in the binding definition. For example, to read the content of an HTTP request in binary format, use the type binary
:
{
"type": "httpTrigger",
"name": "req",
"direction": "in",
"dataType": "binary"
}
Options for dataType
are: binary
, stream
, and string
.
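The following is a minimal sketch of a function that reads the binary request body configured above; with "dataType": "binary", req.body arrives as a Node.js Buffer, and the response shown here is only illustrative.
module.exports = async function (context, req) {
    // req.body is a Buffer because the trigger's dataType is "binary"
    context.log(`Received ${req.body.length} bytes`);
    context.res = {
        body: `Received ${req.body.length} bytes`
    };
};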
The runtime uses a context object to pass data to and from your function. Used to read and set data from bindings and to write to logs, the context object is always the first parameter passed to a function.
module.exports = async function(context){
// function logic goes here
context.log("The function has executed.");
};
The context passed into your function exposes an executionContext
property, which is an object with the following properties:
Property name | Type | Description |
---|---|---|
invocationId | String | Provides a unique identifier for the specific function invocation. |
functionName | String | Provides the name of the running function. |
functionDirectory | String | Provides the function app directory. |
The following example shows how to return the invocationId
.
module.exports = async function (context, req) {
context.res = {
body: context.executionContext.invocationId
};
};
context.bindings
Returns a named object that is used to read or assign binding data. Input and trigger binding data can be accessed by reading properties on context.bindings
. Output binding data can be assigned by adding data to context.bindings.
For example, the following binding definitions in your function.json let you access the contents of a queue from context.bindings.myInput
and assign outputs to a queue using context.bindings.myOutput
.
{
"type":"queue",
"direction":"in",
"name":"myInput"
...
},
{
"type":"queue",
"direction":"out",
"name":"myOutput"
...
}
// myInput contains the input data, which may have properties such as "name"
var author = context.bindings.myInput.name;
// Similarly, you can set your output data
context.bindings.myOutput = {
    some_text: 'hello world',
    a_number: 1
};
In a synchronous function, you can choose to define output binding data using the context.done method instead of the context.bindings object (see below).
context.bindingData
Returns a named object that contains trigger metadata and function invocation data (invocationId
, sys.methodName
, sys.utcNow
, sys.randGuid
). For an example of trigger metadata, see this event hubs example.
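For example, the following sketch logs the invocation data listed above; myQueueItem is a hypothetical queue trigger binding, and the exact trigger metadata available depends on your trigger type.
module.exports = async function (context, myQueueItem) {
    // Invocation data is always available on context.bindingData
    context.log("invocationId: " + context.bindingData.invocationId);
    context.log("sys.methodName: " + context.bindingData.sys.methodName);
    context.log("sys.utcNow: " + context.bindingData.sys.utcNow);
};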
In 2.x, 3.x, and 4.x, the function should be marked as async even if there is no awaited function call inside the function, and the function doesn't need to call context.done to indicate the end of the function.
//you don't need an awaited function call inside to use async
module.exports = async function (context, req) {
context.log("you don't need an awaited function call inside to use async")
};
The context.done method is used by 1.x synchronous functions.
module.exports = function (context, req) {
// 1.x Synchronous code only
// Even though we set myOutput to have:
// -> text: 'hello world', number: 123
context.bindings.myOutput = { text: 'hello world', number: 123 };
// If we pass an object to the done function...
context.done(null, { myOutput: { text: 'hello there, world', noNumber: true }});
// the done method overwrites the myOutput binding to be:
// -> text: 'hello there, world', noNumber: true
}
context.log(message)
Allows you to write to the streaming function logs at the default trace level, with other logging levels available. Trace logging is described in detail in the next section.
In Functions, you use the context.log
methods to write trace output to the logs and the console. When you call context.log()
, your message is written to the logs at the default trace level, which is the info trace level. Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application telemetry and your trace outputs. To learn more, see monitoring Azure Functions.
The following example writes a log at the info trace level, including the invocation ID:
context.log("Something has happened. " + context.invocationId);
All context.log
methods support the same parameter format that's supported by the Node.js util.format method. Consider the following code, which writes function logs by using the default trace level:
context.log('Node.js HTTP trigger function processed a request. RequestUri=' + req.originalUrl);
context.log('Request Headers = ' + JSON.stringify(req.headers));
You can also write the same code in the following format:
context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);
context.log('Request Headers = ', JSON.stringify(req.headers));
Note
Don't use console.log
to write trace outputs. Because output from console.log
is captured at the function app level, it's not tied to a specific function invocation and isn't displayed in a specific function's logs. Also, version 1.x of the Functions runtime doesn't support using console.log
to write to the console.
In addition to the default level, the following logging methods are available that let you write function logs at specific trace levels.
Method | Description |
---|---|
context.log.error(message) | Writes an error-level event to the logs. |
context.log.warn(message) | Writes a warning-level event to the logs. |
context.log.info(message) | Writes to info level logging, or lower. |
context.log.verbose(message) | Writes to verbose level logging. |
The following example writes the same log at the warning trace level, instead of the info level:
context.log.warn("Something has happened. " + context.invocationId);
Because error is the highest trace level, error-level traces are written to the output at all trace levels as long as logging is enabled.
Functions lets you define the threshold trace level for writing to the logs or the console. The specific threshold settings depend on your version of the Functions runtime.
To set the threshold for traces written to the logs, use the logging.logLevel
property in the host.json file. This JSON object lets you define a default threshold for all functions in your function app, plus you can define specific thresholds for individual functions. To learn more, see How to configure monitoring for Azure Functions.
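For example, a host.json logging section might look like the following sketch, where the default threshold is Information and a hypothetical function named MyFunction logs only errors; treat the exact values as assumptions to adapt to your app.
{
    "logging": {
        "logLevel": {
            "default": "Information",
            "Function.MyFunction": "Error"
        }
    }
}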
To set the threshold for all traces written to logs and the console, use the tracing.consoleLevel
property in the host.json file. This setting applies to all functions in your function app. The following example sets the trace threshold to enable verbose logging:
{
"tracing": {
"consoleLevel": "verbose"
}
}
Values of consoleLevel correspond to the names of the context.log
methods. To disable all trace logging to the console, set consoleLevel to off. For more information, see host.json v1.x reference.
By default, Functions writes output as traces to Application Insights. For more control, you can instead use the Application Insights Node.js SDK to send custom telemetry data to your Application Insights instance. The following example shows this for version 2.x and later of the Functions runtime:
const appInsights = require("applicationinsights");
appInsights.setup();
const client = appInsights.defaultClient;
module.exports = async function (context, req) {
context.log('JavaScript HTTP trigger function processed a request.');
// Use this with 'tagOverrides' to correlate custom telemetry to the parent function invocation.
var operationIdOverride = {"ai.operation.id":context.traceContext.traceparent};
client.trackEvent({name: "my custom event", tagOverrides:operationIdOverride, properties: {customProperty2: "custom property value"}});
client.trackException({exception: new Error("handled exceptions can be logged with this method"), tagOverrides:operationIdOverride});
client.trackMetric({name: "custom metric", value: 3, tagOverrides:operationIdOverride});
client.trackTrace({message: "trace message", tagOverrides:operationIdOverride});
client.trackDependency({target:"http://dbname", name:"select customers proc", data:"SELECT * FROM Customers", duration:231, resultCode:0, success: true, dependencyTypeName: "ZSQL", tagOverrides:operationIdOverride});
client.trackRequest({name:"GET /customers", url:"http://myserver/customers", duration:309, resultCode:200, success:true, tagOverrides:operationIdOverride});
};
The equivalent pattern for version 1.x synchronous functions uses context.operationId for correlation and calls context.done():
const appInsights = require("applicationinsights");
appInsights.setup();
const client = appInsights.defaultClient;
module.exports = function (context, req) {
context.log('JavaScript HTTP trigger function processed a request.');
// Use this with 'tagOverrides' to correlate custom telemetry to the parent function invocation.
var operationIdOverride = {"ai.operation.id":context.operationId};
client.trackEvent({name: "my custom event", tagOverrides:operationIdOverride, properties: {customProperty2: "custom property value"}});
client.trackException({exception: new Error("handled exceptions can be logged with this method"), tagOverrides:operationIdOverride});
client.trackMetric({name: "custom metric", value: 3, tagOverrides:operationIdOverride});
client.trackTrace({message: "trace message", tagOverrides:operationIdOverride});
client.trackDependency({target:"http://dbname", name:"select customers proc", data:"SELECT * FROM Customers", duration:231, resultCode:0, success: true, dependencyTypeName: "ZSQL", tagOverrides:operationIdOverride});
client.trackRequest({name:"GET /customers", url:"http://myserver/customers", duration:309, resultCode:200, success:true, tagOverrides:operationIdOverride});
context.done();
};
The tagOverrides
parameter sets the operation_Id
to the function's invocation ID. This setting enables you to correlate all of the automatically generated and custom telemetry for a given function invocation.
HTTP and webhook triggers and HTTP output bindings use request and response objects to represent the HTTP messaging.
The context.req
(request) object has the following properties:
Property | Description |
---|---|
body | An object that contains the body of the request. |
headers | An object that contains the request headers. |
method | The HTTP method of the request. |
originalUrl | The URL of the request. |
params | An object that contains the routing parameters of the request. |
query | An object that contains the query parameters. |
rawBody | The body of the message as a string. |
The context.res
(response) object has the following properties:
Property | Description |
---|---|
body | An object that contains the body of the response. |
headers | An object that contains the response headers. |
isRaw | Indicates that formatting is skipped for the response. |
status | The HTTP status code of the response. |
cookies | An array of HTTP cookie objects that are set in the response. An HTTP cookie object has a name, value, and other cookie properties, such as maxAge or sameSite. |
When you work with HTTP triggers, you can access the HTTP request and response objects in a number of ways:
-
From req and res properties on the context object. In this way, you can use the conventional pattern to access HTTP data from the context object, instead of having to use the full context.bindings.name pattern. The following example shows how to access the req and res objects on the context:
// You can access your HTTP request off the context ...
if(context.req.body.emoji === ':pizza:') context.log('Yay!');
// and also set your HTTP response
context.res = { status: 202, body: 'You successfully ordered more coffee!' };
-
From the named input and output bindings. In this way, the HTTP trigger and bindings work the same as any other binding. The following example sets the response object by using a named response binding:
{
    "type": "http",
    "direction": "out",
    "name": "response"
}
context.bindings.response = { status: 201, body: "Insert succeeded." };
-
[Response only] By calling context.res.send(body?: any). An HTTP response is created with input body as the response body. context.done() is implicitly called.
-
[Response only] By returning the response. A special binding name of $return allows you to assign the function's return value to the output binding. The following HTTP output binding defines a $return output parameter:
{
    "type": "http",
    "direction": "out",
    "name": "$return"
}
In a 2.x+ function, you can return the response object directly:
return { status: 201, body: "Insert succeeded." };
In a 1.x sync function, return the response object using the second argument of context.done():
// Define a valid response object.
res = { status: 201, body: "Insert succeeded." };
context.done(null, res);
Note that request and response keys are in lowercase.
By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Node.js as needed. Functions uses built-in (not user configurable) thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. For more information, see How the Consumption and Premium plans work.
This scaling behavior is sufficient for many Node.js applications. For CPU-bound applications, you can improve performance further by using multiple language worker processes.
By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the FUNCTIONS_WORKER_PROCESS_COUNT application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers. This makes it less likely that a CPU-intensive function blocks other functions from running.
The FUNCTIONS_WORKER_PROCESS_COUNT setting applies to each host that Functions creates when scaling out your application to meet demand.
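For example, the following Azure CLI command sets four worker processes per host; the value of 4 is only an illustration.
az functionapp config appsettings set --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>" --settings FUNCTIONS_WORKER_PROCESS_COUNT=4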
The following table shows current supported Node.js versions for each major version of the Functions runtime, by operating system:
Functions version | Node version (Windows) | Node version (Linux) |
---|---|---|
4.x (recommended) | ~16 (preview), ~14 (recommended) | node\|14 (recommended), node\|16 (preview) |
3.x | ~14, ~12, ~10 | node\|14, node\|12, node\|10 |
2.x | ~12, ~10, ~8 | node\|10, node\|8 |
1.x | 6.11.2 (locked by the runtime) | n/a |
You can see the current version that the runtime is using by logging process.version
from any function.
For Windows function apps, target the version in Azure by setting the WEBSITE_NODE_DEFAULT_VERSION
app setting to a supported LTS version, such as ~14
.
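For example, the following Azure CLI command is a sketch of setting the Windows app setting to Node.js 14; use whichever supported version you need.
az functionapp config appsettings set --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>" --settings WEBSITE_NODE_DEFAULT_VERSION=~14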
For Linux function apps, run the following Azure CLI command to update the Node version.
az functionapp config set --linux-fx-version "node|14" --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>"
To learn more about Azure Functions runtime support policy, please refer to this article.
To use community libraries in your JavaScript code, as shown in the example below, ensure that all dependencies are installed on your function app in Azure.
// Import the underscore.js library
const _ = require('underscore');
module.exports = async function(context) {
// Using our imported underscore.js library
const matched_names = _
.where(context.bindings.myInput.names, {first: 'Carla'});
}
Note
You should define a package.json
file at the root of your Function App. Defining the file lets all functions in the app share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by adding a package.json
file in the folder of a specific function.
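A minimal root-level package.json for the underscore.js example above might look like the following sketch; the app name and version numbers shown are placeholders.
{
    "name": "my-function-app",
    "version": "1.0.0",
    "dependencies": {
        "underscore": "^1.13.1"
    }
}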
When you deploy function apps from source control, any package.json file present in your repo triggers an npm install in its folder during deployment. But when deploying via the portal or CLI, you have to manually install the packages.
There are two ways to install packages on your function app. The first is to deploy your project with its dependencies included:
-
Install all requisite packages locally by running
npm install
. -
Deploy your code, and ensure that the
node_modules
folder is included in the deployment.
The second way is to install packages by using the Kudu console:
-
Go to https://<function_app_name>.scm.azurewebsites.net.
-
Click Debug Console > CMD.
-
Go to
D:\home\site\wwwroot
, and then drag your package.json file to the wwwroot folder at the top half of the page.
You can upload files to your function app in other ways also. For more information, see How to update function app files. -
After the package.json file is uploaded, run the
npm install
command in the Kudu remote execution console.
This action downloads the packages indicated in the package.json file and restarts the function app.
Add your own environment variables to a function app, in both your local and cloud environments, such as operational secrets (connection strings, keys, and endpoints) or environmental settings (such as profiling variables). Access these settings using process.env
in your function code.
When running locally, your functions project includes a local.settings.json
file, where you store your environment variables in the Values
object.
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "",
"FUNCTIONS_WORKER_RUNTIME": "node",
"translatorTextEndPoint": "https://api.cognitive.microsofttranslator.com/",
"translatorTextKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"languageWorkers__node__arguments": "--prof"
}
}
When running in Azure, the function app lets you set and use Application settings, such as service connection strings, and exposes these settings as environment variables during execution.
Access application settings as environment variables using process.env
, as shown here in the second and third calls to context.log()
where we log the AzureWebJobsStorage
and WEBSITE_SITE_NAME
environment variables:
module.exports = async function (context, myTimer) {
context.log("AzureWebJobsStorage: " + process.env["AzureWebJobsStorage"]);
context.log("WEBSITE_SITE_NAME: " + process.env["WEBSITE_SITE_NAME"]);
};
Note
ECMAScript modules are currently a preview feature in Node.js 14 and 16 in Azure Functions.
ECMAScript modules (ES modules) are the new official standard module system for Node.js. So far, the code samples in this article use the CommonJS syntax. When running Azure Functions in Node.js 14 or higher, you can choose to write your functions using ES modules syntax.
To use ES modules in a function, change its filename to use a .mjs
extension. The following index.mjs file example is an HTTP triggered function that uses ES modules syntax to import the uuid
library and return a value.
import { v4 as uuidv4 } from 'uuid';
export default async function (context, req) {
context.res.body = uuidv4();
};
The function.json
properties scriptFile
and entryPoint
can be used to configure the location and name of your exported function. These properties can be important when your JavaScript is transpiled.
By default, a JavaScript function is executed from index.js
, a file that shares the same parent directory as its corresponding function.json
.
scriptFile
can be used to get a folder structure that looks like the following example:
FunctionApp
| - host.json
| - myNodeFunction
| | - function.json
| - lib
| | - sayHello.js
| - node_modules
| | - ... packages ...
| - package.json
The function.json
for myNodeFunction
should include a scriptFile
property pointing to the file with the exported function to run.
{
"scriptFile": "../lib/sayHello.js",
"bindings": [
...
]
}
In scriptFile
(or index.js
), a function must be exported using module.exports
in order to be found and run. By default, the function that executes when triggered is the only export from that file, the export named run
, or the export named index
.
This can be configured using entryPoint
in function.json
, as in the following example:
{
"entryPoint": "logFoo",
"bindings": [
...
]
}
In Functions v2.x or higher, which supports the this
parameter in user functions, the function code could then be as in the following example:
class MyObj {
constructor() {
this.foo = 1;
};
async logFoo(context) {
context.log("Foo is " + this.foo);
}
}
const myObj = new MyObj();
module.exports = myObj;
In this example, it is important to note that although an object is being exported, there are no guarantees for preserving state between executions.
When started with the --inspect
parameter, a Node.js process listens for a debugging client on the specified port. In Azure Functions 2.x or higher, you can specify arguments to pass into the Node.js process that runs your code by adding the environment variable or App Setting languageWorkers:node:arguments = <args>
.
To debug locally, add "languageWorkers:node:arguments": "--inspect=5858"
under Values
in your local.settings.json file and attach a debugger to port 5858.
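For example, your local.settings.json might look like the following sketch when local debugging is enabled; the other values are carried over from the earlier example.
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "node",
        "languageWorkers:node:arguments": "--inspect=5858"
    }
}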
When debugging using VS Code, the --inspect
parameter is automatically added using the port
value in the project's launch.json file.
In version 1.x, setting languageWorkers:node:arguments
will not work. The debug port can be selected with the --nodeDebugPort
parameter on Azure Functions Core Tools.
Note
You can only configure languageWorkers:node:arguments
when running the function app locally.
When you target version 2.x or higher of the Functions runtime, both Azure Functions for Visual Studio Code and the Azure Functions Core Tools let you create function apps using a template that supports TypeScript function app projects. The template generates package.json
and tsconfig.json
project files that make it easier to transpile, run, and publish JavaScript functions from TypeScript code with these tools.
A generated .funcignore
file is used to indicate which files are excluded when a project is published to Azure.
TypeScript files (.ts) are transpiled into JavaScript files (.js) in the dist
output directory. TypeScript templates use the scriptFile
parameter in function.json
to indicate the location of the corresponding .js file in the dist
folder. The output location is set by the template by using the outDir parameter in the tsconfig.json file. If you change this setting or the name of the folder, the runtime is not able to find the code to run.
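For example, a generated function.json might point to the transpiled output like the following sketch; the HttpTrigger folder name is illustrative and the bindings are elided.
{
    "bindings": [
        ...
    ],
    "scriptFile": "../dist/HttpTrigger/index.js"
}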
The way that you locally develop and deploy from a TypeScript project depends on your development tool.
The Azure Functions for Visual Studio Code extension lets you develop your functions using TypeScript. The Core Tools is a requirement of the Azure Functions extension.
To create a TypeScript function app in Visual Studio Code, choose TypeScript
as your language when you create a function app.
When you press F5 to run the app locally, transpilation is done before the host (func.exe) is initialized.
When you deploy your function app to Azure using the Deploy to function app... button, the Azure Functions extension first generates a production-ready build of JavaScript files from the TypeScript source files.
There are several ways in which a TypeScript project differs from a JavaScript project when using the Core Tools.
To create a TypeScript function app project using Core Tools, you must specify the TypeScript language option when you create your function app. You can do this in one of the following ways:
-
Run the
func init
command, selectnode
as your language stack, and then selecttypescript
. -
Run the
func init --worker-runtime typescript
command.
To run your function app code locally using Core Tools, use the following commands instead of func host start
:
npm install
npm start
The npm start
command is equivalent to the following commands:
npm run build
func extensions install
tsc
func start
Before you use the func azure functionapp publish
command to deploy to Azure, you create a production-ready build of JavaScript files from the TypeScript source files.
The following commands prepare and publish your TypeScript project using Core Tools:
npm run build:production
func azure functionapp publish <APP_NAME>
In this command, replace <APP_NAME>
with the name of your function app.
When you work with JavaScript functions, be aware of the considerations in the following sections.
When you create a function app that uses the App Service plan, we recommend that you select a single-vCPU plan rather than a plan with multiple vCPUs. Today, Functions runs JavaScript functions more efficiently on single-vCPU VMs, and using larger VMs does not produce the expected performance improvements. When necessary, you can manually scale out by adding more single-vCPU VM instances, or you can enable autoscale. For more information, see Scale instance count manually or automatically.
When developing Azure Functions in the serverless hosting model, cold starts are a reality. Cold start refers to the fact that when your function app starts for the first time after a period of inactivity, it takes longer to start up. For JavaScript functions with large dependency trees in particular, cold start can be significant. To speed up the cold start process, run your functions as a package file when possible. Many deployment methods use the run from package model by default, but if you're experiencing large cold starts and are not running this way, this change can offer a significant improvement.
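One way to opt in to this model, if your deployment method doesn't already use it, is the WEBSITE_RUN_FROM_PACKAGE app setting; the following Azure CLI command is a sketch of enabling it.
az functionapp config appsettings set --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>" --settings WEBSITE_RUN_FROM_PACKAGE=1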
When you use a service-specific client in an Azure Functions application, don't create a new client with every function invocation. Instead, create a single, static client in the global scope. For more information, see managing connections in Azure Functions.
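The following is a minimal sketch of this pattern, assuming the @azure/cosmos package and a COSMOS_CONNECTION_STRING application setting; the database and container names are placeholders. The key point is that the client is created once at module load time and reused across invocations.
// Created once when the module loads, then reused by every invocation
const { CosmosClient } = require("@azure/cosmos");
const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);

module.exports = async function (context) {
    // Reuse the shared client instead of constructing a new one here
    const { resources } = await client
        .database("my-database")
        .container("my-container")
        .items.readAll()
        .fetchAll();
    context.log(`Read ${resources.length} items`);
};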
When writing Azure Functions in JavaScript, you should write code using the async
and await
keywords. Writing code using async
and await
instead of callbacks or .then
and .catch
with Promises helps avoid two common problems:
- Throwing uncaught exceptions that crash the Node.js process, potentially affecting the execution of other functions.
- Unexpected behavior, such as missing logs from context.log, caused by asynchronous calls that are not properly awaited.
In the example below, the asynchronous method fs.readFile
is invoked with an error-first callback function as its second parameter. This code causes both of the issues mentioned above. An exception that is not explicitly caught in the correct scope crashes the entire process (issue #1). Calling the 1.x context.done()
outside of the scope of the callback function means that the function invocation may end before the file is read (issue #2). In this example, calling 1.x context.done()
too early results in missing log entries starting with Data from file:
.
// NOT RECOMMENDED PATTERN
const fs = require('fs');
module.exports = function (context) {
fs.readFile('./hello.txt', (err, data) => {
if (err) {
context.log.error('ERROR', err);
// BUG #1: This will result in an uncaught exception that crashes the entire process
throw err;
}
context.log(`Data from file: ${data}`);
// context.done() should be called here
});
// BUG #2: Data is not guaranteed to be read before the Azure Function's invocation ends
context.done();
}
Using the async
and await
keywords helps avoid both of these errors. You should use the Node.js utility function util.promisify
to turn error-first callback-style functions into awaitable functions.
In the example below, any unhandled exceptions thrown during the function execution only fail the individual invocation that raised an exception. The await
keyword means that steps following readFileAsync
only execute after readFile
is complete. With async
and await
, you also don't need to call the context.done()
callback.
// Recommended pattern
const fs = require('fs');
const util = require('util');
const readFileAsync = util.promisify(fs.readFile);
module.exports = async function (context) {
let data;
try {
data = await readFileAsync('./hello.txt');
} catch (err) {
context.log.error('ERROR', err);
// This rethrown exception will be handled by the Functions Runtime and will only fail the individual invocation
throw err;
}
context.log(`Data from file: ${data}`);
}
For more information, see the following resources: