Dealing with large amounts of data can be difficult, and writable streams are a great help when working with it. Writable streams are time- and memory-efficient because they process data in small chunks (portions) rather than all at once. You can write large text or media files without worrying about slowdowns or memory leaks. Moreover, we generally use writable streams to write data to a variety of destinations, including files, network sockets, and HTTP responses.
Let's learn how they work in more detail!
Writing with a stream
The write stream is part of the fs module. You can bring it into your project by writing this line:
const fs = require('node:fs');
The next step is to call the fs.createWriteStream method and save the stream it returns into a new constant:
const writeStream = fs.createWriteStream('numbers.txt');
The method takes two parameters: the first is the path of the file you want to write to (numbers.txt in this case), and the second is an optional settings object. Here are some of the settings it supports:
- flags — determines the operations you want to perform on the file ('w' by default)
- encoding — the character encoding ('utf-8' by default)
- mode — permissions to read and write ('0o666' by default, meaning you can read and write to the file)
- autoClose — closes the file once writing is complete (true by default)
- fd — a file descriptor; if you specify it, the first parameter (path) is ignored
When you initialize your stream in the writeStream variable, you have access to its methods. Let's do a small speed test, and see how the stream writes data.
The example implements a for-loop to write the numbers 0 to 99 to a file called numbers.txt, which you specified at the beginning of this section. The console.time method helps measure the time taken to complete the operation.
console.time('check');
for (let i = 0; i < 100; i++) {
  writeStream.write(`${i}\n`);
}
console.timeEnd('check');
Here, the code calls the write method on the writeStream instance. Then it passes i to the write method. After running the code, you will see a numbers.txt file with numbers from 0 to 99.
The result is impressive: it took only about a second to complete.
Why use writable stream
You might already be familiar with sync and async methods of writing files, such as fs.writeFile, fs.writeFileSync, etc. Let's now perform the speed experiment and compare the writable stream with one of these methods.
Here's a code snippet for the fs.writeFile method. It uses the same for-loop from the earlier snippet to write the numbers 0 to 99 to a file called speed-test.txt. Executing the code on one test machine took about 8 seconds, which is 8 times slower than the fs writable stream. Try running it on your machine and compare the results.
Consider this: what if you had 10 000 numbers to write? Or 1 000 000? That's why writable streams are such a game-changer.
console.time('check');
for (let i = 0; i < 100; i++) {
  fs.writeFile('speed-test.txt', `${i}\n`, { flag: 'a' }, (err) => {
    if (err) {
      console.log(err);
    }
  });
}
console.timeEnd('check');
Writable streams are much faster compared to other methods. Moreover, they use less memory while doing the same thing. Writable streams can greatly improve the performance of your apps.
When a writable stream deals with a large file, it doesn't load all the data into memory at once. Instead, it breaks the file into smaller pieces and processes the data piece by piece. Just like in real life, a big problem is easier to solve when it's decomposed into smaller parts. The same holds true when working with streams.
Finish writing
Once you are done writing data, you can use the end or destroy method to finish the stream. This is useful when you don't want any other data to be written to your file in the future.
The end method allows you to include some data (string or buffer). It will be the final part of your file:
writeStream.end('Bye, stream!');
If you run the program, you'll see this string in your numbers.txt file. Now, if you try to write something after calling end, you'll get an error.
The destroy method also finishes the stream, but more abruptly: unlike end, it doesn't flush data that is still buffered, and it won't let you write to the file afterwards.
writeStream.destroy();
Read and write
Oftentimes, you'll need to read from one source and then copy it to a different file. While reading from the file, you may need to modify the chunks in a specific way and then save them in a new format. This means that readable and writable streams almost always go hand in hand. Let's see how you can combine these two methods.
const readStream = fs.createReadStream('users.js');
const writeStream = fs.createWriteStream('copy-users.js');
readStream.on('data', (chunk) => {
writeStream.write(chunk);
});
The code snippet uses both readable and writable streams. The users.js file contains an array of user data. If you want, you can try using any other file, even the numbers.txt file from the previous sections. On the data event, the read stream passes every chunk into the writable stream. As a result, the code creates a copy of the specified file.
Another way to do the same is via pipes. This is a more flexible and convenient way of writing data. Here's how you can rewrite your previous code:
readStream.pipe(writeStream);
You are basically calling the pipe method on the readable stream and passing in the destination (output file) to write data to. Just like household pipes transfer water or gas from one place to another, streaming pipes transfer data.
The result is the same as in the previous example, except that the code is now cleaner and shorter.
Event emitters
There is a sequence of actions that takes place when writing to a file. For example, the program needs to open the stream first, then the stream gets ready to be used, then it writes each chunk one by one, and finally closes the stream if there are no errors. You can listen to the error event too, which makes it easy to handle errors with streams.
The events are intuitive and you can easily understand them by their names. They are open, ready, close, finish, error, drain, pipe, and unpipe.
- open — fires when the stream opens
- ready — the stream is ready to be used
- close — the stream is closed
- finish — fires after stream.end is called and all data has been flushed
- error — catches an error
- drain — the internal buffer is empty and the stream is ready to accept more data
- pipe and unpipe — fire when the stream.pipe and stream.unpipe methods are called on a readable stream
Here's a short example that uses some of these events:
const writeStream = fs.createWriteStream('weather.txt');
writeStream.write("It's sunny today.");
writeStream.end('Yay!');
writeStream.on('open', () => console.log('Stream opened'));
writeStream.on('finish', () => console.log('Stream ended'));
The code snippet creates a writable stream with a specified path to the file, weather.txt. Then it writes to the file and ends the stream. When the open and finish events fire, the program prints a message to the console.
Conclusion
The writable stream of the fs module is a great tool that allows you to work with large files of any format. It works faster and consumes less memory than the standard fs methods for writing files. You can use writable streams on their own or in combination with readable streams through pipes. The module also gives you access to event emitters, so you have better control and understanding of what happens at every step of a writable stream.