Node Cookbook: Second Edition

Transferring your JavaScript skills to server-side programming is simplified with this comprehensive cookbook. Each chapter focuses on a different aspect of Node, featuring recipes supported with lots of illustrations, tips, and hints.

Product type: Paperback
Published: April 2014
Publisher: Packt
ISBN-13: 9781783280438
Length: 378 pages

Author: David Mark Clements
Table of Contents (18 chapters)

Node Cookbook Second Edition
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
1. Making a Web Server
2. Exploring the HTTP Object
3. Working with Data Serialization
4. Interfacing with Databases
5. Employing Streams
6. Going Real Time
7. Accelerating Development with Express
8. Implementing Security, Encryption, and Authentication
9. Integrating Network Paradigms
10. Writing Your Own Node Modules
11. Taking It Live
Index

Introduction


To quote Dominic Tarr, the Streams API is Node's "best and most misunderstood idea". Throughout this book, recipes often touch on the Streams API. Streams are fundamental to the Node platform and are utilized in many of the core modules.

A stream is essentially an object with formalized methods and functionality, geared towards receiving, sending, and processing data in small pieces called chunks. The type of stream (readable, writable, or both) determines which methods and functionality it exposes. A stream that is both readable and writable is known as a duplex stream.

There are many advantages to streams over the more traditional buffering method, whereby all data is read into memory before processing begins. The primary one is lower memory use: once a chunk has been processed and sent somewhere else, to a client for instance, we no longer need that data and can simply discard it, allowing it to be garbage collected.

In addition, we can deliver this first chunk (and the subsequent chunks...
