The evolution of ECMAScript
Nowadays, browser vendors release new features in short iterations, and users receive updates quite often, which helps developers take advantage of bleeding-edge Web technologies. ES2015 is already standardized, and the major browsers have started implementing it. Learning the new syntax and taking advantage of it will not only increase our productivity as developers but will also prepare us for the near future, when all browsers will support it fully. This makes it essential to start using the latest syntax now.
Some projects' requirements may force us to support older browsers that do not support any ES2015 features. In this case, we can write ECMAScript 5 directly, which has different syntax but can express equivalent semantics to ES2015. A better approach, however, is to take advantage of transpilation: using a transpiler in our build process lets us write ES2015 and translate it into a target version of the language that the browsers support.
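As a rough sketch of what this looks like in practice (the exact output depends on the transpiler, such as Babel or the TypeScript compiler, and its configuration), a small piece of ES2015 and a roughly equivalent ES5 translation might look as follows:

```ts
// ES2015 source: a constant binding, an arrow function, and a template literal.
const greet = name => `Hello, ${name}!`;

// Roughly equivalent ES5 output a transpiler might emit (shown here in the
// same snippet for brevity; in practice it lives in the compiled output file).
var greetES5 = function (name) {
  return 'Hello, ' + name + '!';
};
```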
Angular has been around since 2009. Back then, the frontend of most websites was powered by ECMAScript 3, the last major release of ECMAScript before ECMAScript 5. This automatically meant that the language used for the framework's implementation was ECMAScript 3. Taking advantage of the new version of the language would require porting the entirety of AngularJS to ES2015.
From the very beginning, Angular 2 took the current state of the Web into account by bringing the latest syntax into the framework. Although the new Angular is written in a superset of ES2016 (TypeScript, which we're going to take a look at in Chapter 3, TypeScript Crash Course), it allows developers to use the language of their preference. We can use ES2015, or, if we prefer not to have any intermediate preprocessing of our code and want to simplify the build process, we can even use ECMAScript 5. Note that if we use JavaScript for our Angular applications, we cannot use Ahead-of-Time (AoT) compilation. You can find more about AoT compilation in Chapter 8, Tooling and Development Experience.
Web Components
The first public draft of Web Components was published on May 22, 2012, about three years after the release of AngularJS. As mentioned, the Web Components standard allows us to create custom elements and attach behavior to them. This sounds familiar; we've already used a similar concept when developing the user interface of AngularJS applications. Web Components sound like an alternative to AngularJS directives; however, they have a more intuitive API and built-in browser support. They introduce a few other benefits as well, such as better encapsulation, which is very important, for example, for handling CSS-style collisions.
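As a minimal sketch of what the standard offers (the element name used here is invented for the example, and we assume a browser with native Custom Elements and Shadow DOM support), defining a custom element with encapsulated styles looks roughly like this:

```ts
// A minimal custom element with encapsulated markup and styles (Shadow DOM).
class UserCard extends HTMLElement {
  connectedCallback() {
    // Styles inside the shadow root do not leak out, and page styles do not leak in.
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>h2 { color: steelblue; }</style>
      <h2>${this.getAttribute('name')}</h2>
    `;
  }
}

// Register the element so that <user-card name="Ada"></user-card> works in markup.
customElements.define('user-card', UserCard);
```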
A possible strategy for adding Web Components support to AngularJS would have been to change the directives implementation and introduce the primitives of the new standard into the DOM compiler. As Angular developers, we know how powerful and complex the directives API is. It includes a lot of properties, such as postLink, preLink, compile, restrict, scope, controller, and much more, and, of course, our favorite, transclude. Once approved as a standard, Web Components will be implemented at a much lower level in the browsers, which brings plenty of benefits, such as better performance and a native API.
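To recall how involved that API is, here is a hypothetical AngularJS directive definition (the directive name and bindings are made up for illustration, and we assume AngularJS itself is loaded on the page):

```ts
declare const angular: any; // provided globally by the AngularJS script

// A made-up AngularJS directive touching several of the API's properties.
angular.module('app', []).directive('userBadge', function () {
  return {
    restrict: 'E',            // usable as an element: <user-badge>
    transclude: true,         // project the element's original content
    scope: { name: '@' },     // isolate scope bound to the "name" attribute
    template: '<div>{{name}}<div ng-transclude></div></div>',
    controller: function ($scope) { /* behavior shared with child directives */ },
    compile: function (tElement, tAttrs) {
      return {
        pre: function preLink(scope, element, attrs) { /* runs before children are linked */ },
        post: function postLink(scope, element, attrs) { /* runs after children are linked */ }
      };
    }
  };
});
```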
During the implementation of Web Components, many web specialists ran into the same problems the Angular team did while developing the directives API, and they came up with similar ideas. The good design decisions behind Web Components include the content element, which deals with the infamous transclusion problem in AngularJS. Since both the directives API and Web Components solve similar problems in different ways, keeping the directives API on top of Web Components would have been redundant and would have added unnecessary complexity. That's why the Angular core team decided to start from scratch, building a framework that is compatible with Web Components and takes full advantage of the new standard. Web Components involve new features, some of which are not yet implemented by all browsers. In case our application runs in a browser that does not support any of these features natively, Angular emulates them. An example of this is the content element, emulated with the ng-content directive.
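As a small illustration of content projection (the component name and selector here are invented for the example), an Angular component using ng-content looks roughly like this:

```ts
import { Component } from '@angular/core';

// A made-up panel component that projects the content placed between its tags.
@Component({
  selector: 'app-panel',
  template: `
    <div class="panel">
      <!-- The consumer's markup is rendered here. -->
      <ng-content></ng-content>
    </div>
  `
})
export class PanelComponent {}

// Usage in another component's template:
// <app-panel><p>This paragraph is projected into the panel.</p></app-panel>
```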
Web Workers
JavaScript is known for its event loop. Usually, JavaScript programs are executed in a single thread, and different events are scheduled by being pushed into a queue and processed sequentially, in the order of their arrival. However, this computational strategy is not effective when one of the scheduled events requires a lot of computation time. In such cases, handling the event will block the main thread, and no other events will be handled until the time-consuming computation completes and execution passes to the next event in the queue. A simple example of this is a mouse click that triggers an event whose callback performs some audio processing using the HTML5 audio API. If the processed audio track is large and the algorithm running over it is heavy, this will affect the user's experience by freezing the UI until the execution completes.
The Web Workers API was introduced in order to prevent such pitfalls. It allows execution of heavy computations inside the context of a different thread, which leaves the main thread of execution free, capable of handling user input and rendering the user interface.
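A minimal sketch of the API, assuming a separate worker.js file served next to the page and a hypothetical heavyAudioProcessing function, might look like this:

```ts
// main.ts: delegate the heavy computation to a worker thread.
const worker = new Worker('worker.js');

worker.onmessage = (event: MessageEvent) => {
  // The main thread stays responsive; we only receive the final result here.
  console.log('Result from worker:', event.data);
};

worker.postMessage({ samples: [/* large array of audio samples */] });

// worker.js: runs in an isolated context, without access to the DOM:
//
//   self.onmessage = (event) => {
//     const result = heavyAudioProcessing(event.data.samples); // hypothetical
//     self.postMessage(result);
//   };
```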
How can we take advantage of this in Angular? In order to answer this question, let's think about how things work in AngularJS. What if we have an enterprise application that processes a huge amount of data which needs to be rendered on the screen using data binding? For each binding, the framework will create a new watcher. Once the digest loop runs, it will loop over all the watchers, execute the expressions associated with them, and compare the returned results with the results obtained in the previous iteration. We have a few slowdowns here:
- Iterating over a large number of watchers.
- Evaluating each expression in its context.
- Copying the returned result.
- Comparing the current result of the expression's evaluation with the previous one.
All these steps can be quite slow, depending on the size of the input. If the digest loop involves heavy computations, why not move it to a Web Worker? We could run the digest loop inside the worker, get the changed bindings, and then apply them to the DOM.
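To make these steps concrete, here is a heavily simplified, hypothetical sketch of a digest loop; it is not AngularJS's actual implementation (which, among other things, supports deep equality checks and limits the number of iterations):

```ts
interface Watcher {
  watchFn: () => unknown;                                   // evaluates the watched expression
  listenerFn: (newVal: unknown, oldVal: unknown) => void;   // reacts to changes
  last?: unknown;                                           // result from the previous iteration
}

// A simplified digest: keep looping until no watcher reports a change.
function digest(watchers: Watcher[]): void {
  let dirty: boolean;
  do {
    dirty = false;
    for (const watcher of watchers) {
      const newValue = watcher.watchFn();        // evaluate the expression
      if (newValue !== watcher.last) {           // compare with the previous result
        watcher.listenerFn(newValue, watcher.last);
        watcher.last = newValue;                 // copy the returned result
        dirty = true;                            // a change may affect other watchers
      }
    }
  } while (dirty);
}
```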
There were community experiments that aimed at running the digest loop inside a Web Worker; however, their integration into the framework wasn't trivial. One of the main reasons behind the lack of satisfying results was the framework's coupling with the DOM. Often, inside watcher callbacks, AngularJS directly manipulates the DOM, which makes it impossible to move the watchers into a Web Worker, since Web Workers are executed in an isolated context without access to the DOM. Furthermore, in AngularJS we may have implicit or explicit dependencies between the different watchers, which require multiple iterations of the digest loop in order to reach stable results. Combining the last two points, it is quite hard to achieve practical results in calculating the changes in a thread other than the main thread of execution.
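As a made-up illustration of that coupling (assuming $scope comes from an AngularJS controller), a watcher listener like the following touches the DOM directly and therefore could not run inside a worker:

```ts
// Hypothetical AngularJS watcher whose listener manipulates the DOM directly.
$scope.$watch('items.length', function (newLength: number) {
  // document is not available inside a Web Worker, so this callback
  // cannot be moved off the main thread.
  document.getElementById('counter')!.textContent = String(newLength);
});
```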
Fixing this in AngularJS would introduce a great deal of complexity into the internal implementation; the framework simply was not built with this in mind. Since Web Workers were introduced before the Angular 2 design process started, the core team took them into consideration from the beginning.