Node.js N-API: Implementation and Performance Comparison

I had been planning to give N-API a try ever since it was released in Node.js 8 and became stable in Node.js 10. I was curious to find out how fast C++ add-ons are compared to JavaScript code, and how complex they are to implement. There are several common reasons to use N-API:

  • Integration with a third-party C/C++ library.
  • Performance optimization of the existing JavaScript module.
  • Deeper integration with the operating system: file system, sockets, ports, and other lower level stuff.

In this article, I would like to cover the first two points. The goal is to integrate JavaScript code with a third-party library and then compare the performance of the JavaScript code with that of the C++ add-on.

To bootstrap add-on development, you can use one of the existing modules. I chose node-addon-api. So, let’s get to the code.

Simple Case: Sorting Arrays

First things first, let’s implement the well-known bubble sort and quicksort algorithms in JavaScript and in C++ via N-API, and compare their performance.

Let’s generate some test data – an array of a given length filled with random integers between 0 and 1000:

const ARRAY_LENGTH = 1000
const targetArray = Array(ARRAY_LENGTH)
 .fill(null)
 .map(() => Math.round(Math.random() * 1000))

We are going to sort this array using the quicksort algorithm implemented in JavaScript and use console.time to measure the execution time.

const defaultComparator = (a, b) => {
 if (a < b) {
   return -1
 }
 if (a > b) {
   return 1
 }
 return 0
}
 
const quickSort = (unsortedArray, comparator = defaultComparator) => {
 const sortedArray = [...unsortedArray]
 const recursiveSort = (start, end) => {
   if (end - start < 1) {
     return
   }
   const pivotValue = sortedArray[end]
   let splitIndex = start
   for (let i = start; i < end; i++) {
     const sort = comparator(sortedArray[i], pivotValue)
    if (sort < 0) {
       if (splitIndex !== i) {
         const temp = sortedArray[splitIndex]
         sortedArray[splitIndex] = sortedArray[i]
         sortedArray[i] = temp
       }
       splitIndex++
     }
   }
   sortedArray[end] = sortedArray[splitIndex]
   sortedArray[splitIndex] = pivotValue
   recursiveSort(start, splitIndex - 1)
   recursiveSort(splitIndex + 1, end)
 }
 recursiveSort(0, unsortedArray.length - 1)
 return sortedArray
}
 
console.time('JS quick sort')
 
const sortedArrayJS = quickSort(targetArray)
 
console.timeEnd('JS quick sort')

For an array length of 1000, the execution time is approximately 1.6 ms.
All right, how about the bubble sort? Here is the code snippet:

const sortedArrayJSBubble = [...targetArray]
console.time('JS bubble sort')
for (let i = 0; i < sortedArrayJSBubble.length - 1; ++i) {
 for (let j = 0; j < sortedArrayJSBubble.length - 1 - i; ++j) {
   if (sortedArrayJSBubble[j] > sortedArrayJSBubble[j + 1]) {
     const temp = sortedArrayJSBubble[j + 1]
     sortedArrayJSBubble[j + 1] = sortedArrayJSBubble[j]
     sortedArrayJSBubble[j] = temp
   }
 }
}
console.timeEnd('JS bubble sort')

The algorithm is straightforward, and its execution time is 4.6 ms.
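Before trusting the timings, it is worth a quick sanity check (not part of the original benchmark) that the hand-written sort produces the same result as the built-in numeric sort:

```javascript
// Sanity check (illustrative, not part of the benchmark): the bubble sort
// above should produce exactly the same output as the built-in numeric sort
const input = Array.from({ length: 1000 }, () => Math.round(Math.random() * 1000))
const reference = [...input].sort((a, b) => a - b)

const bubble = [...input]
for (let i = 0; i < bubble.length - 1; ++i) {
  for (let j = 0; j < bubble.length - 1 - i; ++j) {
    if (bubble[j] > bubble[j + 1]) {
      const temp = bubble[j + 1]
      bubble[j + 1] = bubble[j]
      bubble[j] = temp
    }
  }
}

console.log('bubble sort matches built-in sort:', bubble.every((v, i) => v === reference[i]))
```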

Let’s build the C++ add-on module and compare the results with the JavaScript implementation. We will use the node-gyp build tool. To configure the build, we create a bindings.gyp file in the project root. This file defines the build sources, dependencies, build flags, environment, et cetera. In our case, it looks like this:

{
 'targets': [
   {
     'target_name': 'module',
     'cflags!': [ '-fno-exceptions' ],
     'cflags_cc!': [ '-fno-exceptions' ],
     'sources': [ './src/module.cc' ],
     'include_dirs': [
       '<!@(node -p \'require("node-addon-api").include\')'
     ],
 
     'dependencies': [
       '<!(node -p \'require("node-addon-api").gyp\')'
     ],
     'defines': [ 'NAPI_DISABLE_CPP_EXCEPTIONS' ],
   }
 ]
}

You can find more information about writing .gyp files in the documentation. The next step is to create module.cc in the ./src folder and place the Sort method there:

Napi::Value Sort(const Napi::CallbackInfo &info)
{
 // get the environment in which the method is being run
 Napi::Env env = info.Env();
 if (info.Length() < 2)
 {
   Napi::TypeError::New(env, "Wrong arguments").ThrowAsJavaScriptException();
   return env.Null();
 }
 if (!info[0].IsArray())
 {
   Napi::TypeError::New(env, "Wrong first argument").ThrowAsJavaScriptException();
   return env.Null();
 }
 if (!info[1].IsNumber())
 {
   Napi::TypeError::New(env, "Wrong second argument").ThrowAsJavaScriptException();
   return env.Null();
 }
 // get the method's input params: the array to sort and the sorting type ('quick' or 'bubble')
 const Napi::Array inputArray = info[0].As<Napi::Array>();
 const unsigned int sortType = info[1].As<Napi::Number>().Uint32Value();
 
 // get the input array length and create a buffer of unsigned integers of the same length
 // (std::vector instead of a non-standard variable-length array; requires #include <vector>)
 unsigned int length = inputArray.Length();
 std::vector<unsigned int> array(length);
 unsigned int i;
 
 // the add-on's input params are stored in the Napi::CallbackInfo structure; to work with
 // the input data, it is handy to convert and copy it before calling the sorting algorithm
 for (i = 0; i < length; i++)
 {
   array[i] = inputArray[i].As<Napi::Number>().Uint32Value();
 }
 unsigned int *arrayPointer = array.data();
 
 // passing the array pointer to the sorting method
 switch (sortType)
 {
 case BUBBLE_SORT:
   bubbleSort(arrayPointer, length);
   break;
 case QUICK_SORT:
   quickSort(arrayPointer, length);
   break;
 default:
   break;
 }
  // create an output array
 Napi::Array outputArray = Napi::Array::New(env, length);
 for (i = 0; i < length; i++)
 {
   outputArray[i] = Napi::Number::New(env, uint32_t(array[i]));
 }
 return outputArray;
}

As you can see from the listing, we use some N-API-specific data structures and methods. You can find the full documentation for the wrapper here.

For more details on the code snippet above, please check the comments inlined. I will not describe sort methods as they are common, however, you can find them in this git repository.

To export the Sort method from the module, we use the following method:

Napi::Object Init(Napi::Env env, Napi::Object exports)
{
 exports.Set(Napi::String::New(env, "sort"), Napi::Function::New(env, Sort));
 return exports;
}
NODE_API_MODULE(module, Init)

To build and run our module, we add the following script to the package.json file:

"start": "node-gyp configure build && node --no-warnings index.js"
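The timing calls further below assume the compiled add-on has been loaded and the sort-type constants defined. The require path and constant values here are assumptions for illustration; they depend on your build output and on the definitions in module.cc:

```javascript
// Hypothetical wiring: node-gyp places the compiled binary under ./build/Release
// by default, named after the 'target_name' from bindings.gyp
let addon
try {
  addon = require('./build/Release/module.node')
} catch (err) {
  // fall back to a pure-JS stub so this sketch stays runnable without a build
  addon = { sort: (arr, _type) => [...arr].sort((a, b) => a - b) }
}

const QUICK_SORT = 0 // assumed to match the constants used in module.cc
const BUBBLE_SORT = 1

console.log(addon.sort([3, 1, 2], QUICK_SORT)) // small smoke test
```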

All right. We are now ready to use the module:

console.time('N-API quick sort')
const sortedArrayCquick = addon.sort(targetArray, QUICK_SORT)
console.timeEnd('N-API quick sort')
 
console.time('N-API bubble sort')
const sortedArrayCbubble = addon.sort(targetArray, BUBBLE_SORT)
console.timeEnd('N-API bubble sort')

Executing the above lines gives us the following results: quicksort — 0.310 ms and bubble sort — 0.843 ms.
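Measurements like these can be reproduced with a small loop over several array sizes. This is an illustrative sketch rather than the article's benchmark code; process.hrtime.bigint() gives steadier numbers than a single console.time call:

```javascript
// Sketch of a measurement loop (names are illustrative): time a sort over
// several array sizes and report the average of a few runs per size
const sizes = [10, 100, 1000, 10000]
const runsPerSize = 5

for (const n of sizes) {
  const data = Array.from({ length: n }, () => Math.round(Math.random() * 1000))
  let total = 0n
  for (let run = 0; run < runsPerSize; run++) {
    const start = process.hrtime.bigint()
    const sorted = [...data].sort((a, b) => a - b) // stand-in for quickSort / addon.sort
    total += process.hrtime.bigint() - start
  }
  const avgMs = Number(total / BigInt(runsPerSize)) / 1e6
  console.log(`n=${n}: ${avgMs.toFixed(3)} ms`)
}
```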

I performed several runs of each algorithm with array sizes from 10 to 100,000 to get a better understanding of how they behave in different cases. Check out the results in the table below:

Average time (ms) for n iterations

Here is a chart with all the results:

This chart may look a bit odd, so a few comments are in order. To fit all the results on one chart, we use a logarithmic scale for the Y-axis. For values under 1000 ms, the bar direction is inverted, so a longer bar represents a lower value.

We can see that the pure JavaScript implementation is much slower than the N-API add-on.

Image Processing

Now, let’s take on a more complex computational problem – image processing. Here we already have two implementations of the same library: OpenCV in C++ and opencv.js in JavaScript and WASM.

Let’s start with a simple case: converting a 320 x 320 pixel Baboon JPEG photo to grayscale. Converting it with opencv.js is pretty simple and looks like this:

console.time('opencv.js to B&W')
 
const inJpgDataBW = fs.readFileSync(path.join(__dirname, 'baboon.jpg'))
const rawDataBW = jpeg.decode(inJpgDataBW)
const img = cv.matFromImageData(rawDataBW)
const bwImg = new cv.Mat()
 
cv.cvtColor(img, bwImg, cv.COLOR_BGR2GRAY)
 
const outData = {
 data: bwImg.data,
 width: bwImg.size().width,
 height: bwImg.size().height,
}
const outJpegData = jpeg.encode(outData, 100)
 
fs.writeFileSync(path.join(__dirname, 'baboon-bw-opencv-js.jpg'), outJpegData.data )
console.timeEnd('opencv.js to B&W')
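Under the hood, the conversion itself is just a weighted per-pixel sum. As a point of reference, here is a minimal pure-JS sketch over RGBA pixel data (illustrative code, not from the article's repository, using the standard ITU-R BT.601 luma weights):

```javascript
// Minimal grayscale sketch: convert RGBA pixel data to gray
// using the ITU-R BT.601 luma weights (0.299 R + 0.587 G + 0.114 B)
const toGray = (rgba) => {
  const out = Buffer.alloc(rgba.length)
  for (let i = 0; i < rgba.length; i += 4) {
    const y = Math.round(0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2])
    out[i] = out[i + 1] = out[i + 2] = y // same luma in all three channels
    out[i + 3] = rgba[i + 3] // keep the alpha channel as-is
  }
  return out
}

// a pure red pixel becomes a dark gray one
console.log(toGray(Buffer.from([255, 0, 0, 255])))
```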

This operation took 190.158 ms. Upscaling the 320 x 320 image to 1024 x 1024 pixels took approximately 378.819 ms:

const x = 1024
const y = 1024
 
console.time('opencv.js resize')
const inJpegDataResize = fs.readFileSync(path.join(__dirname, 'baboon.jpg'))
const rawDataResize = jpeg.decode(inJpegDataResize)
 
const imgResize = cv.matFromImageData(rawDataResize)
 
const resizedImg = new cv.Mat()
const newSize = new cv.Size(x, y)
cv.resize(imgResize, resizedImg, newSize)
 
const outDataResize = {
 data: resizedImg.data,
 width: resizedImg.size().width,
 height: resizedImg.size().height,
}
const outJpegDataResize = jpeg.encode(outDataResize, 100)
 
fs.writeFileSync(path.join(__dirname, 'baboon-resize-opencv-js.jpg'), outJpegDataResize.data)
console.timeEnd('opencv.js resize')

Let’s perform the same operations with OpenCV. To work with it on macOS, we can install it with Homebrew:

brew install opencv@2
brew link --force opencv@2

We should then modify bindings.gyp by adding this code to the ‘module’ target:

'libraries': [
  '<!(pkg-config opencv --libs)'
],
'conditions': [
  [
    'OS==\'mac\'', {
      'xcode_settings': {
        'OTHER_CFLAGS': [
          '-mmacosx-version-min=10.7',
          '-std=c++11',
          '-stdlib=libc++',
          '<!(pkg-config opencv --cflags)'
        ],
        'GCC_ENABLE_CPP_RTTI': 'YES',
        'GCC_ENABLE_CPP_EXCEPTIONS': 'YES'
      }
    }
  ]
]

The configuration of the build environment for OpenCV, and for many other C/C++ libraries, depends on the operating system, so the ‘conditions’ section is used to define OS-specific parameters. The next step is to add a new method to module.cc.

The implementation of the conversion is pretty similar to what we had in JavaScript: we get the input and output file paths as strings and call toGrayScale.

void toGrayScale(std::string inPath, std::string outPath)
{
 cv::Mat image, gray_image;
 image = cv::imread(inPath, 1);
 cv::cvtColor(image, gray_image, CV_BGR2GRAY);
 cv::imwrite(outPath, gray_image);
}

This function makes all the OpenCV calls: it reads the file, converts it, and saves the result.
To make it work, you need to export the function from module.cc:

exports.Set(
  Napi::String::New(env, "toGrayScale"),
  Napi::Function::New(env, ToGrayScale)
);

Now, we can make calls from our JS code:

console.time('opencv to B&W')
addon.toGrayScale(path.join(__dirname, 'baboon.jpg'), path.join(__dirname, 'baboon-bw-n-api.jpg'))
console.timeEnd('opencv to B&W')

Implementation of the resize algorithm looks very similar, and you can find it in the git repository.

Grayscale conversion of the same image took 6.246 ms.
Find the results in the table and chart below:

As you can see from the charts, the C++ add-on is clearly faster than the JS implementation.

Conclusion

As we expected, N-API add-ons perform much better than pure JavaScript code. JavaScript is flexible and easy to work with, but C++ code is blazing fast. This is beneficial for complex synchronous algorithms such as data processing, encoding, decoding, et cetera. Adding an N-API C++ add-on to an existing JavaScript code base increases complexity and effort: it involves development, platform-dependent builds, testing, and support of a codebase written in two languages. On the other hand, there are already plenty of libraries ported to Node.js that provide bindings to various existing C/C++ libraries. In that case, the development and build phases have the complexity of “just another npm dependency”. You still need to take care of testing, but presumably you already do.

To summarise, the decision between an N-API add-on and pure JS is a trade-off: you accept extra complexity, implementation work, and support effort in exchange for better performance.

From our experience, writing N-API add-ons with no C/C++ background feels a bit unusual at first. However, it’s not a big deal. Remember the mantra, “If it hurts, do it more often.” :) Using node-addon-api really speeds up development.

We hope the performance comparison in this post motivates you to consider N-API for your project whenever you need a performance boost. You can find the full code from this experiment here. You are welcome to pull or fork it.