Build Fast, Performant, and Intuitive Web Applications
by Tejas Kumar
Copyright 2023 Tejas Kumar. All rights reserved.
Printed in the United States of America.
Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles ( http://oreilly.com ). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.
See http://oreilly.com/catalog/errata.csp?isbn=9781098138714 for release details.
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Fluent React, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc.
The views expressed in this work are those of the author and do not represent the publisher’s views. While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
978-1-098-13865-3
[FILL IN]
Let’s start with a disclaimer: React was made to be used by all. In fact, you could go through life never having read this book and continue to use React without problems! This book dives much deeper into React for those of us who are curious about its internals, advanced patterns, and best practices. It lends itself better to knowing React deeply than to learning how to use React. There are plenty of other books written with the intent to teach folks how to use React as end users. In contrast, this book will help you know React at the level of a library/framework author instead of an end user. In keeping with that theme, let’s go on a deep dive together, starting at the top with the higher-level, entry-level topics. We’ll start with the basics of React, and then dive deeper and deeper into the details of how React works.
In this chapter, we’ll talk about why React exists, how it works, and what problems it solves. We’ll cover its initial inspiration and design, and follow it from its humble beginnings at Facebook to the prevalent solution that it is today. This chapter is a bit of a meta chapter, but it’s important to understand the context of React before we dive into the details. The next chapter is a complement to this one, but takes a look at React’s usage with the server as a starting point instead of client devices. Without digressing too far, let’s dive in here.
The answer in one word is: updates. In the early days of the web, we had a lot of static pages. We’d fill out forms, hit submit, and load an entirely new page. This was fine for a while, but then we started to want more. We wanted to be able to see things update instantly without having to wait for a new page to be rendered and loaded. We wanted the web and its pages to feel snappier and more “instant”. The problem was that these instant updates were pretty hard to do at scale, for a number of reasons.
These were some of the large problems for companies building web apps at the time: they had to figure out how to make their apps feel snappy and instant, but also scale to millions of users and work reliably and safely. For example, let’s consider a button click: when a user clicks a button, we want to update the user interface to reflect that the button has been clicked. We’d need to consider at least four different “states” the user interface can be in:

- The default state, before the user has clicked anything
- The updated state, after a successful click
- The pending state, while an update is still in flight
- The failed state, when an update doesn’t go through
Once we have these states, we need to figure out how to update the user interface to reflect them. Oftentimes, updating the user interface would require steps like the following:

- Finding the button in the document, likely using document.querySelector or document.getElementById
- Adding event listeners to the button to listen for clicks
- Updating the button’s state and text content in response to those events

This is a simple example, but it’s a good one to start with. Let’s say we have a button that says “Like” and when a user clicks it, we want to update the button to say “Liked”. How do we do this? To start with, we’d have an HTML element:
<button type="button">Like</button>
We’d need some way to reference this button with JavaScript, so we’d give it an id attribute:
<button type="button" id="like-button">Like</button>
Great! Now that there’s an id, JavaScript can work with it to make it interactive. We can get a reference to the button using document.getElementById, and then we’ll add an event listener to the button to listen for click events:
const likeButton = document.getElementById("like-button");
likeButton.addEventListener("click", () => {
  // do something
});
Now that we have an event listener, we can do something when the button is clicked. Let’s say we want to update the button to say “Liked” when it’s clicked. We can do this by updating the button’s text content:
const likeButton = document.getElementById("like-button");
likeButton.addEventListener("click", () => {
  likeButton.textContent = "Liked";
});
Great! Now we have a button that says “Like” and when it’s clicked, it says “Liked”. The problem here is that we can’t “unlike” things. Let’s fix that and update the button to say “Like” again if it’s clicked in its “Liked” state. We’d need to add some state to the button to keep track of whether or not it’s been clicked. We can do this by adding a data-liked attribute to the button:
<button type="button" id="like-button" data-liked="false">Like</button>
Now that we have this attribute, we can use it to keep track of whether or not the button has been clicked. We can update the button’s text content based on the value of this attribute:
const likeButton = document.getElementById("like-button");
likeButton.addEventListener("click", () => {
  const liked = likeButton.getAttribute("data-liked") === "true";
  likeButton.setAttribute("data-liked", !liked);
  likeButton.textContent = liked ? "Like" : "Liked";
});
Wait, but we’re just changing the textContent of the button! We’re not actually saving the liked state to a database. Normally to do this, we’d communicate over the network like so:
const likeButton = document.getElementById("like-button");
likeButton.addEventListener("click", () => {
  const liked = likeButton.getAttribute("data-liked") === "true";
  // communicate over the network
  fetch("/like", {
    method: "POST",
    body: JSON.stringify({ liked: !liked }),
  }).then(() => {
    likeButton.setAttribute("data-liked", !liked);
    likeButton.textContent = liked ? "Like" : "Liked";
  });
});
Now we’re communicating over the network, but what if the network request fails? We’d need to update the button’s text content to reflect that the network request failed. We can do this by adding a data-failed attribute to the button:
<button type="button" id="like-button" data-liked="false" data-failed="false">Like</button>
Now we can update the button’s text content based on the value of this attribute:
const likeButton = document.getElementById("like-button");
likeButton.addEventListener("click", () => {
  const liked = likeButton.getAttribute("data-liked") === "true";
  // communicate over the network
  fetch("/like", {
    method: "POST",
    body: JSON.stringify({ liked: !liked }),
  })
    .then(() => {
      likeButton.setAttribute("data-liked", !liked);
      likeButton.textContent = liked ? "Like" : "Liked";
    })
    .catch(() => {
      likeButton.setAttribute("data-failed", "true");
      likeButton.textContent = "Failed";
    });
});
There’s one more case to handle: the process where we’re currently “liking” a thing. That is, the pending state. To model this in code, we’d set yet another attribute on the button for pending state like so:
<button type="button" id="like-button" data-pending="false" data-liked="false" data-failed="false">Like</button>
Now, we can disable the button if a network request is in process so that multiple clicks don’t queue up network requests and lead to odd race conditions and server overload.
const likeButton = document.getElementById("like-button");
likeButton.addEventListener("click", () => {
  const liked = likeButton.getAttribute("data-liked") === "true";
  const isPending = likeButton.getAttribute("data-pending") === "true";

  if (isPending) {
    return; // do nothing
  }

  likeButton.setAttribute("data-pending", "true");
  likeButton.setAttribute("disabled", "disabled");

  // communicate over the network
  fetch("/like", {
    method: "POST",
    body: JSON.stringify({ liked: !liked }),
  })
    .then(() => {
      likeButton.setAttribute("data-liked", !liked);
      likeButton.textContent = liked ? "Like" : "Liked";
    })
    .catch(() => {
      likeButton.setAttribute("data-failed", "true");
      likeButton.textContent = "Failed";
    })
    .finally(() => {
      likeButton.setAttribute("data-pending", "false");
      likeButton.removeAttribute("disabled");
    });
});
Okay, now our button is kind of robust and can handle multiple states. But a few questions still remain:

- Is data-pending really necessary? Can’t we just check whether the button is disabled? Probably not, because a disabled button could be disabled for other reasons, like the user not being logged in or otherwise not having permission to click the button.
- Should we use a single data-state attribute instead of separate data-liked and data-failed attributes? Probably, but then we’d need to add a bunch of logic to handle the different states.
- Should we create this button with JavaScript and add it to the page with document.appendChild? This would make it easier to test and would make the code more self-contained, but then we’d have to keep track of its parent if its parent isn’t document. In fact, we might have to keep track of all the parents on the page.

React helps us solve some of these problems, but not all of them: for example, the question of whether to break up state into separate flags (isPending, hasFailed, etc.) or use a single state variable (like state) is a question that React doesn’t answer for us. It’s a question we have to answer for ourselves. But React does help us solve the problem of scale: creating a lot of buttons that need to be interactive, updating the user interface in response to events in a minimal and efficient way, and doing all of this in a testable, reproducible, and reliable way.
So far, we’ve seen how we can use JavaScript to make a button interactive, but doing this well is a very manual process: we have to find the button in the browser, add an event listener, update the button’s text content, and account for myriad edge cases. This is a lot of work, and it’s not very scalable. What if we had a lot of buttons on the page, all of which needed to be interactive, and we needed to update the user interface in response to events on each of them? Would we use event delegation (relying on event bubbling) and attach a single event listener to the document itself? Or should we attach event listeners to each button?
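To make the delegation question concrete, here is a hedged sketch of how a single delegated listener might work. The data-role attribute and the helper names are my own illustrative assumptions, not part of the earlier markup:

```javascript
// A sketch of the event delegation option: one listener on a shared ancestor
// serves any number of buttons, instead of one listener per button.

// Walk up from the clicked element to the like button it belongs to, if any.
function findLikeButton(target) {
  while (target) {
    if (target.dataset && target.dataset.role === "like-button") {
      return target;
    }
    target = target.parentNode;
  }
  return null;
}

// Flip a button between its "Like" and "Liked" states.
function toggleLike(button) {
  const liked = button.dataset.liked === "true";
  button.dataset.liked = String(!liked);
  button.textContent = liked ? "Like" : "Liked";
}

// In a browser, we'd attach the single delegated listener like so:
// document.addEventListener("click", (event) => {
//   const button = findLikeButton(event.target);
//   if (button) toggleLike(button);
// });
```

Because the routing logic is a plain function, it can be tested without a browser, which hints at the testability themes we explore below.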
This pain of creating reliable and scalable user interfaces was shared by many web companies at the time. It was at this point on the web that we saw the rise of multiple JavaScript-based solutions that aimed to solve this: Backbone, KnockoutJS, AngularJS, and jQuery. Let’s look at these solutions in turn and see how they solved this problem. This will help us understand how React is different from these solutions, and may even be superior to them.
Backbone was one of the first of these solutions, and it was elegantly simple: a library that provided a way to create “models” and “views”. Models were conceptually sources of data, and views were conceptually user interfaces that consumed and rendered that data. Backbone exposed comfortable APIs to work with these models and views, and provided a way to connect them together. It was a powerful and flexible solution for its time, one that scaled well and allowed developers to test their code in isolation.
By way of example, here’s our button example from before but this time using Backbone:
const LikeButton = Backbone.View.extend({
  tagName: "button",
  attributes: {
    type: "button",
  },
  events: {
    click: "onClick",
  },
  initialize() {
    this.model.on("change", this.render, this);
  },
  render() {
    this.$el.text(this.model.get("liked") ? "Liked" : "Like");
    return this;
  },
  onClick() {
    fetch("/like", {
      method: "POST",
      body: JSON.stringify({ liked: !this.model.get("liked") }),
    })
      .then(() => {
        this.model.set("liked", !this.model.get("liked"));
      })
      .catch(() => {
        this.model.set("failed", true);
      })
      .finally(() => {
        this.model.set("pending", false);
      });
  },
});

const likeButton = new LikeButton({
  model: new Backbone.Model({
    liked: false,
  }),
});

document.body.appendChild(likeButton.render().el);
Notice how LikeButton is a subclass of Backbone.View and how it has a render method that returns this? We’d go on to see a similar render method in React, but let’s not get ahead of ourselves. Backbone exposed a chainable API that allowed developers to colocate logic as properties on objects. Comparing this to our previous example, we can see that Backbone has made it far more comfortable to create a button that is interactive and that updates the user interface in response to events. It also does this in a more structured way by grouping logic together. Some might also note that Backbone has made it more approachable to test this button in isolation because we can create a LikeButton instance and then call its render method to test it.
We’d test this component like so:
test("LikeButton", () => {
  const likeButton = new LikeButton({
    model: new Backbone.Model({
      liked: false,
    }),
  });
  expect(likeButton.render().el.textContent).toBe("Like");
});
We can even test the button’s behavior after its state changes, as in the case of a click event like so:
test("LikeButton", () => {
  const likeButton = new LikeButton({
    model: new Backbone.Model({
      liked: false,
    }),
  });
  expect(likeButton.render().el.textContent).toBe("Like");
  likeButton.onClick();
  expect(likeButton.render().el.textContent).toBe("Liked");
});
For this reason, Backbone was a very popular solution at the time. The alternative was to write a lot of imperative code that was hard to test and hard to reason about, with no guarantees that the code would work as expected in a reliable way. Therefore, Backbone was a very welcome solution.
Let’s compare this approach with another popular solution at the time: KnockoutJS. KnockoutJS was a library that provided a way to create “observables” and “bindings”. Observables were conceptually sources of data, and bindings were conceptually user interfaces that consumed and rendered that data: observables were like models, and bindings were like views. KnockoutJS exported APIs to work with these observables and bindings. Let’s look at how we’d implement this button in KnockoutJS. This will help us understand “why React” a little better. Here’s the KnockoutJS version of our button:
function createViewModel({ liked }) {
  const isPending = ko.observable(false);
  const hasFailed = ko.observable(false);
  const onClick = () => {
    isPending(true);
    fetch("/like", {
      method: "POST",
      body: JSON.stringify({ liked: !liked() }),
    })
      .then(() => {
        liked(!liked());
      })
      .catch(() => {
        hasFailed(true);
      })
      .finally(() => {
        isPending(false);
      });
  };
  return {
    isPending,
    hasFailed,
    onClick,
    liked,
  };
}

ko.applyBindings(createViewModel({ liked: ko.observable(false) }));
In KnockoutJS, a “view model” is a JavaScript object that contains keys and values that we bind to various elements in our page using the data-bind attribute. There are no “components” or “templates” in KnockoutJS, just a view model and a way to bind it to the browser.
Our function createViewModel is how we’d create a view model with Knockout. We then use ko.applyBindings to connect the view model to the host environment (the browser). The ko.applyBindings function takes a view model and then finds all the elements in the browser that have a data-bind attribute, which Knockout uses to bind them to the view model.
A button in our browser would be bound to this view model’s properties like so:
<button
  type="button"
  data-bind="click: onClick, text: liked() ? 'Liked' : isPending() ? 'Pending...' : hasFailed() ? 'Failed' : 'Like'"
></button>
We bind the HTML element to the “view model” we created using our createViewModel function, and the site becomes interactive. As you can imagine, explicitly subscribing to changes in observables and then updating the user interface in response to these changes is a lot of work. KnockoutJS was a great library for its time, but it was also a library that required a lot of boilerplate code to get things done.
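To see why this is a lot of work, it helps to look at what an observable is under the hood. Here is a toy version, an illustrative sketch rather than Knockout’s actual implementation: calling it with no arguments reads the value, and calling it with one argument writes the value and notifies every subscriber.

```javascript
// A toy read/write observable in the style of Knockout (not Knockout's code).
function observable(initialValue) {
  let value = initialValue;
  const subscribers = [];
  function accessor(newValue) {
    if (arguments.length === 0) {
      return value; // read: liked()
    }
    value = newValue; // write: liked(true)
    subscribers.forEach((subscriber) => subscriber(value));
  }
  accessor.subscribe = (subscriber) => subscribers.push(subscriber);
  return accessor;
}

const liked = observable(false);
liked.subscribe((value) => console.log(value ? "Liked" : "Like"));
liked(true); // prints "Liked"
```

Every binding in the page is effectively one of these subscriptions, which is where the boilerplate comes from.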
Moreover, view models often grew very large and complex, which led to increasing uncertainty around refactoring and optimizing the code. Eventually, we’d end up with verbose, monolithic view models that were hard to test and hard to reason about. Still, KnockoutJS was a very popular solution and a great library for its time. It was also relatively easy to test in isolation, which was a big plus.
For posterity, here’s how we’d test this button in KnockoutJS:
test("LikeButton", () => {
  const viewModel = createViewModel({ liked: ko.observable(false) });
  expect(viewModel.liked()).toBe(false);
  viewModel.onClick();
  expect(viewModel.liked()).toBe(true);
});
Okay—we’re slowly understanding why React was needed, introduced, and loved. Let’s take a penultimate detour into jQuery to get a full picture of the state of the art at the time and the value that React brought to the table. Here’s our button example from before, but this time using jQuery:
<button type="button" id="like-button">Like</button>
$("#like-button").on("click", function () {
  var $button = $(this); // inside the handler, `this` is the DOM element
  $button.prop("disabled", true);
  fetch("/like", {
    method: "POST",
    body: JSON.stringify({ liked: $button.text() === "Like" }),
  })
    .then(() => {
      $button.text($button.text() === "Like" ? "Liked" : "Like");
    })
    .catch(() => {
      $button.text("Failed");
    })
    .finally(() => {
      $button.prop("disabled", false);
    });
});
From this example, we observe a common pattern that previous libraries for building user interfaces implemented: they bound data to the user interface and used this data binding to update the user interface in place. While Knockout and Backbone bound models to views in one way or another, jQuery was far more active in directly manipulating the user interface itself.
jQuery ran in a heavily “side-effectful” way, constantly interacting with and altering state outside of its own control. This was a pattern that was common at the time, and it was a pattern that was difficult to reason about and test because the world around the code was constantly changing. At some point, we’d have to stop and ask ourselves: “what is the state of the browser right now?”—a question that became increasingly difficult to answer as our codebases grew.
This button with jQuery is hard to test because it’s just an event handler. If we were to write a test, it’d look like this:
test("LikeButton", () => {
  const $button = $("#like-button");
  expect($button.text()).toBe("Like");
  $button.trigger("click");
  expect($button.text()).toBe("Liked");
});
The only problem is that $("#like-button") will have returned an empty jQuery object in the testing environment, because it’s not a real browser and there is no element to select. We’d have to mock out the browser environment to test this code, which is a lot of work. This is a common problem with jQuery: it’s hard to test because it’s hard to isolate, and it depends heavily on the browser environment. Moreover, jQuery shared ownership of the user interface with the browser, which made it difficult to reason about and test: the browser owned the interface, and jQuery was just a guest. This deviation from the “one-way data flow” paradigm was a common problem with libraries at the time.
These factors, combined with jQuery’s large bundle size and performance-intensive DOM manipulation, led the web development community to search for better (faster and lighter) alternatives.
AngularJS was developed by Google in 2010. It was a pioneering JavaScript framework that had a significant impact on the web development landscape. It stood in sharp contrast to the libraries and frameworks we’ve been discussing by incorporating several innovative features, the ripples of which can be seen in subsequent libraries, including React. Through a detailed comparison of AngularJS with these other libraries and a look at its pivotal features, let’s try to understand the path it carved for React.
Two-way data binding was a hallmark feature of AngularJS that greatly simplified the interaction between the user interface (UI) and the underlying data. If the model (the underlying data) changes, the view (the UI) gets updated automatically to reflect the change, and vice versa. This was a stark contrast to libraries like jQuery, where developers had to manually manipulate the DOM to reflect any changes in the data and capture user inputs to update the data.
Let’s consider a simple AngularJS application where two-way data binding plays a crucial role:
<!DOCTYPE html>
<html>
  <head>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.8.2/angular.min.js"></script>
  </head>
  <body ng-app="">
    <p>Name: <input type="text" ng-model="name" /></p>
    <p ng-if="name">Hello, {{name}}!</p>
  </body>
</html>
In this application, the ng-model directive binds the value of the input field to the variable name. As you type into the input field, the model name gets updated, and the view (in this case, the greeting “Hello, {{name}}!”) gets updated in real time.
Dependency Injection (DI) is a design pattern where an object receives its dependencies, instead of creating them. AngularJS incorporated this design pattern at its core, which was not a common feature in other JavaScript libraries at the time. This had a profound impact on the way modules and components were created and managed, promoting a higher degree of modularity and reusability.
Here is an example of how DI works in AngularJS:
var app = angular.module("myApp", []);

app.controller("myController", function ($scope, myService) {
  $scope.greeting = myService.sayHello();
});

app.factory("myService", function () {
  return {
    sayHello: function () {
      return "Hello, World!";
    },
  };
});
In the example above, myService is a service that is injected into the myController controller through DI. The controller does not need to know how to create the service. It just declares the service as a dependency, and AngularJS takes care of creating and injecting it. This simplifies the management of dependencies and enhances the testability and reusability of components.
AngularJS introduced a modular architecture that allowed developers to logically separate their application’s components. Each module could encapsulate a functionality and could be developed, tested, and maintained independently.
var app = angular.module("myApp", [
  "ngRoute",
  "appRoutes",
  "userCtrl",
  "userService",
]);

var userCtrl = angular.module("userCtrl", []);
userCtrl.controller("UserController", function ($scope) {
  $scope.message = "Hello from UserController";
});

var userService = angular.module("userService", []);
userService.factory("User", function ($http) {
  // ...
});
In the example above, the myApp module depends on several other modules: ngRoute, appRoutes, userCtrl, and userService. Each dependent module could be in its own JavaScript file, and could be developed separately from the main myApp module. This concept was significantly different from jQuery and Backbone.js, which didn’t have a concept of a “module” in this sense.
Backbone.js and Knockout.js were two popular libraries used around the time AngularJS was introduced. Both libraries had their strengths, but they lacked some features that were built into AngularJS.
Backbone.js, for example, gave developers more control over their code and was less opinionated than AngularJS. This flexibility was both a strength and a weakness: it allowed for more customization, but also required more boilerplate code. AngularJS, with its two-way data binding and DI, allowed for more structured and easier-to-maintain code. It had more opinions that led to greater developer velocity: something we see with modern frameworks like Next.js, Remix, etc. This is one way AngularJS was far ahead of its time.
Knockout.js introduced developers to two-way data binding, similar to AngularJS. However, it was primarily focused on this feature and lacked some of the other powerful tools that AngularJS provided, such as DI and a modular architecture. AngularJS, being a full-fledged MVC (Model-View-Controller) framework, offered a more comprehensive solution for building SPAs.
AngularJS (1.x) represented a significant leap in web development practices when it was introduced. However, as the landscape of web development continued to evolve rapidly, certain aspects of AngularJS were seen as limitations or weaknesses that contributed to its relative decline. Some of these include:
Performance: AngularJS had performance issues, particularly in large-scale applications with complex data bindings. The digest cycle in AngularJS, a core feature for change detection, could result in slow updates and laggy user interfaces in large applications. The two-way data binding, while innovative and useful in many situations, also contributed to the performance issues.
Complexity: AngularJS introduced a range of novel concepts including directives, controllers, services, dependency injection, factories, and more. While these features made AngularJS powerful, they also made it complex and hard to learn, especially for beginners.
Migration issues to Angular 2+: When Angular 2 was announced, it was not backward compatible with AngularJS 1.x. This meant that developers had to rewrite significant portions of their code to upgrade to Angular 2, which was seen as a big hurdle. The introduction of Angular 2+ essentially split the Angular community and caused confusion.
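The digest-cycle cost mentioned above can be sketched with a toy dirty-checking loop. This is an illustration of the idea, not AngularJS’s actual code: on every digest, every watched expression is re-evaluated and compared against its last value, and the loop repeats until nothing changes, which is why large numbers of bindings got slow.

```javascript
// A toy sketch of AngularJS-style dirty checking.
function createScope() {
  const watchers = [];
  return {
    watch(getValue, onChange) {
      watchers.push({ getValue, onChange, last: undefined });
    },
    digest() {
      let dirty = true;
      while (dirty) {
        dirty = false;
        for (const watcher of watchers) {
          const value = watcher.getValue();
          if (value !== watcher.last) {
            watcher.onChange(value, watcher.last);
            watcher.last = value;
            dirty = true; // a listener may have changed other watched values
          }
        }
      }
    },
  };
}

let userName = "";
const scope = createScope();
scope.watch(
  () => userName,
  (value) => console.log("render:", value)
);
userName = "React";
scope.digest(); // every watcher is re-checked until the model is stable
```

Each digest is linear in the number of watchers, and a single change can trigger several full passes, which is exactly the scaling problem described above.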
It was around this time that React rose to prominence. One of the core ideas that React borrowed from AngularJS was the component-based architecture. Although the implementation is different, the underlying idea is similar: building UIs by composing reusable components.
While AngularJS uses directives and has a more complex API, React introduced JSX and a simpler component model. Yet, without the ground laid by AngularJS in promoting a component-based architecture, some would argue the transition to React’s model might not have been as smooth.
In AngularJS, the two-way data binding model was the industry standard—however, it also had some downsides, such as potential performance issues on large applications. React learned from this and introduced a one-way data flow (then called Flux), giving developers more control over their applications and making it easier to understand how data changes over time.
React also introduced the virtual DOM, which we’ll read about in an upcoming chapter: a concept that improved performance by minimizing direct DOM manipulation. AngularJS, on the other hand, often directly manipulated the DOM, which could lead to performance issues and the kinds of inconsistent-state issues we discussed earlier with jQuery.
That said, AngularJS represented a significant shift in web development practices and we’d be remiss if we didn’t mention that AngularJS not only revolutionized the web development landscape when it was introduced, but also paved the way for the evolution of future frameworks and libraries, React being one of them. Speaking of React, let’s take a look at its history.
Facebook was no exception to the problem of UI complexity and scale. As a result, it created a number of internal solutions complementary to what already existed at the time. Among the first of these was BoltJS: a tool Facebook engineers would say “bolted together” a bunch of things that each of them liked. It was a combination of tools assembled to make updates to Facebook’s web user interface more intuitive.
Around this time, Facebook engineer Jordan Walke had a radical idea that did away with the status quo of the time: instead of carefully mutating existing elements in place, minimal portions of web pages would be entirely replaced with new ones as updates happened. As we’ve seen previously, JavaScript libraries would manage relationships between views (user interfaces) and models (conceptually, sources of data) using a paradigm called “two-way data binding”. This was pretty complicated, and it often proved difficult to keep the views and models in sync because of the way the web worked. Jordan’s idea was to instead use a paradigm called “one-way data flow”. This was a much simpler paradigm, and it made it much easier to keep the views and models in sync. This was the birth of the “Flux” architecture that would go on to be the foundation of React.
The Flux architecture was a simple idea: instead of two-way data binding and having mixed sources of truth, data would flow from the top of a component tree down to its leaves. This was a radical departure from the way we had been building web apps for years, and it was a departure that was met with skepticism. The fact that Facebook was a large company with a lot of resources, a lot of users, and a lot of engineers with opinions made its upward climb a steep one. After much scrutiny, React was an internal success. It was adopted by Facebook, and then by Instagram.
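The core of the Flux idea can be sketched in a few lines. This is an illustrative reduction (the names createStore and likeReducer and the action types are my own, closer to Redux-style Flux than Facebook’s original dispatcher), but it shows the one-way flow: actions go in, a new state comes out, and views re-render from that state.

```javascript
// A minimal sketch of one-way data flow in the spirit of Flux.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (listener) => listeners.push(listener),
    dispatch: (action) => {
      state = reducer(state, action); // data flows one way: action -> new state
      listeners.forEach((listener) => listener(state)); // views re-render from state
    },
  };
}

// All state transitions live in one place, making them easy to trace and test.
function likeReducer(state, action) {
  switch (action.type) {
    case "LIKE_TOGGLED":
      return { ...state, liked: !state.liked };
    case "LIKE_FAILED":
      return { ...state, failed: true };
    default:
      return state;
  }
}

const store = createStore(likeReducer, { liked: false, failed: false });
store.subscribe((state) => {
  // A real view would re-render the button here; state is the single source of truth.
  console.log(state.liked ? "Liked" : "Like");
});
store.dispatch({ type: "LIKE_TOGGLED" }); // prints "Liked"
```

Notice there is no place where the view writes back into the model directly: everything funnels through dispatch, which is the whole point.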
It was then open sourced in 2013 and released to the world where it was met with tremendous amounts of backlash. People heavily criticized React for its use of JSX, accusing Facebook of “putting HTML in JavaScript” and breaking separation of concerns. Facebook became known as the company that “rethinks best practices” and breaks the web. Eventually, after slow and steady adoption by companies like Netflix, Airbnb, and The New York Times, React became the de facto standard for building user interfaces on the web.
There are a number of details that I’ve left out of this story because they fall out of the scope of this book, but I think it’s important to understand the context of React before we dive into the details: specifically the class of technical problems React was created to solve. Should you be more interested in the story of React, there is a full documentary on the history of React that is freely available on YouTube under “React.js: the Documentary” by Honeypot.
Now that we understand the motivation for React and the state of the world around its introduction, let’s explore a little bit how browsers work, and how React fits into this picture.
This book assumes we have a satisfactory understanding of this statement: browsers render web pages. Web pages are HTML documents that are styled by CSS and made interactive with JavaScript. This has worked great for decades and still does, but building modern web applications intended to serve a significant number (think millions) of users with these technologies requires a good amount of abstraction in order to do it safely and reliably, with as little possibility for error as possible.
Let’s consider another example that’s a little more complex than our earlier like button, and explore the problem of performing interactive application updates without React, to understand this a little better. We’ll start with a simple example: a list of items. Let’s say we have a list of items and we want to add a new item to the list. We could do this with an HTML form that looks something like this:
<ul id="list-parent"></ul>
<form id="add-item-form" action="/api/add-item" method="POST">
  <input type="text" id="new-list-item-label" />
  <button type="submit">Add Item</button>
</form>
JavaScript gives us access to DOM APIs, where DOM stands for Document Object Model, and API stands for Application Programming Interface. For the uninitiated, the DOM is an in-memory model of a web page’s document structure: it’s a tree of objects that represents the elements on your page, giving you ways to interact with them via JavaScript. The problem is, the DOM on a given user’s device is like an alien planet: we have no way of knowing what browser they’re using, under what network conditions, or on what operating system (OS). The result? We have to write code that is resilient to all of these factors.
Moreover, application state becomes quite hard to predict when it updates without some type of state-reconciliation mechanism to keep track of things. To continue with our list example, let’s consider some JavaScript code to add a new item to the list:
(function myApp() {
  var listItems = ["I love", "React", "and", "TypeScript"];
  var parentList = document.getElementById("list-parent");
  var addForm = document.getElementById("add-item-form");
  var newListItemLabel = document.getElementById("new-list-item-label");

  addForm.onsubmit = function (event) {
    event.preventDefault();
    listItems.push(newListItemLabel.value);
    renderListItems();
  };

  function renderListItems() {
    for (var i = 0; i < listItems.length; i++) {
      var el = document.createElement("li");
      el.textContent = listItems[i];
      parentList.appendChild(el);
    }
  }

  renderListItems();
})();
This code snippet is written to look as similar as possible to early web applications. Why does it go haywire over time? Mainly because building applications intended to scale this way presents a number of footguns, making them:
Error-prone: addForm’s onsubmit attribute could easily be overwritten by other client-side JavaScript on the page. We could use addEventListener instead, but this raises more questions:
When do we call removeEventListener to clean up, and how do we make sure we actually do?

Unpredictable: Our sources of truth are mixed: we’re holding list items in a JavaScript array, but relying on existing elements in the DOM (like an element with id="list-parent") to complete our app. Because of these interdependencies between JavaScript and HTML, we have a few more things to consider:
What if no element with that id exists on the page? What if it exists but isn’t a ul? Can we append list items (li elements) to other parents?

Our sources of truth are mixed between JavaScript and HTML, thus making the truth unreliable. We’d benefit more from having a single source of truth. Moreover, elements are added to and removed from the DOM by client-side JavaScript all the time. If we rely on the existence of these specific elements, our app has no guarantee of working reliably as the UI keeps updating. Our app in this case is full of “side effects,” where its success or failure depends on some userland concern. React has remedied this by advocating a functional programming-inspired model where side effects are intentionally marked and isolated.
Inefficient: renderListItems renders items on the screen sequentially. Each mutation of the DOM can be computationally expensive, especially where layout shifts and reflows are concerned. Since we’re on an alien planet with unknown computational power, this can be quite unsafe for performance with large lists. Remember: we intend our large-scale web application to be used by millions worldwide, including those with low-power devices from communities across the world without access to the latest and greatest Apple M2 Pro Max processors. What would be more ideal in this scenario, instead of sequentially updating the DOM for each single list item, would be to batch these operations somehow and apply them all to the DOM at the same time. Then again, maybe this isn’t worth doing ourselves, since browsers might eventually batch rapid DOM updates for us automatically?
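The batching idea at the end of this list can be sketched in plain JavaScript. In a real browser we might reach for document.createDocumentFragment() to batch appends; since Node.js has no DOM, the makeParent helper below is a hypothetical stand-in that just counts how many times the parent is mutated:

```javascript
// A stand-in for a DOM parent node that counts mutations
// (a sketch; not a real DOM implementation).
function makeParent() {
  return {
    children: [],
    mutations: 0,
    appendChild(el) {
      this.children.push(el);
      this.mutations++;
    },
  };
}

const labels = ["I love", "React", "and", "TypeScript"];

// Sequential updates: one potentially expensive mutation per item.
const sequentialParent = makeParent();
for (const label of labels) {
  sequentialParent.appendChild({ tag: "li", text: label });
}

// Batched updates: build the subtree off-screen first, then attach it
// to the parent with a single mutation.
const batchedParent = makeParent();
const fragment = labels.map((label) => ({ tag: "li", text: label }));
batchedParent.appendChild(fragment);

console.log(sequentialParent.mutations); // 4
console.log(batchedParent.mutations); // 1
```

Four touches of the parent versus one: with real layout and reflow costs attached to each mutation, the difference matters at scale.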
These are some of the problems that have plagued web developers like you and me for years before React and other abstractions appeared. Packaging code in a way that was maintainable, reusable, and predictable at scale was another problem without much standardized consensus in the industry at the time. Given that Facebook had a front-row seat to these problems at enormous scale, React pioneered a component-based approach to building user interfaces that would solve these problems and more, where each component would be a self-contained unit of code that could be reused and composed with other components to build more complex user interfaces.
Okay, history lesson’s over. Hopefully we now have enough context to begin to understand why React is a thing. Given how easy it was to fall into the pit of unsafe, unpredictable, and inefficient JavaScript code at scale, we needed a solution to steer us more towards a pit of success where we accidentally win. Let’s talk about exactly how React does that.
React provides a declarative abstraction on the DOM. We’ll talk more about how it does this in more detail later in the book, but essentially it provides us a way to write code that expresses what we want to see, while then taking care of how it happens, ensuring our user interface is created and works in a safe, predictable, and efficient manner.
Let’s consider the list app that we created above. In React, we could rewrite it like this:
function MyList() {
  const [items, setItems] = useState(["I love"]);
  return (
    <div>
      <ul>
        {items.map((i) => (
          <li key={i /* keep items unique */}>{i}</li>
        ))}
      </ul>
      <NewItemForm />
    </div>
  );
}
Notice how in the return, we literally write something that looks like HTML: it looks like what we want to see. I want to see a box with a NewItemForm, and a list. Boom. How does it get there? That’s React’s problem. Do we batch list items to add chunks of them at once? Do we add them sequentially, one-by-one? React deals with how this is done, while we merely describe what we want done. In further chapters, we’ll dive into React and explore how exactly it does this.
Do we then depend on class names to reference HTML elements? Do we getElementById in JavaScript? Nope. React creates unique “React elements” for us under the hood that it uses to detect changes and make incremental updates so we don’t need to read class names and other identifiers from user code whose existence we cannot guarantee: our source of truth becomes exclusively JavaScript with React. We can then write code that is safe, predictable, and efficient.
We export our MyList component to React, and React gets it on the screen for us in a way that is safe, predictable, and performant—no questions asked. It does this by using a “virtual DOM”, which is a clone of the real DOM. It then compares the virtual DOM to the real DOM, and makes incremental updates to the real DOM to make it match the virtual DOM. This is how React is able to make updates to the DOM in a safe, predictable, and efficient manner.
The virtual DOM is a programming concept that represents the real DOM but as a more efficient JavaScript object. If this is a little too in the weeds for now, don’t worry: we have an entire chapter dedicated to this soon that breaks things down in a little more detail. For now, it’s just important to know that the virtual DOM allows developers to update the UI without directly manipulating the actual DOM. React uses the virtual DOM to keep track of changes to a component and re-renders the component only when necessary. This approach is faster and more efficient than updating the entire DOM tree every time there is a change.
In React, the virtual DOM is a lightweight representation of the actual DOM tree. It is a plain JavaScript object that describes the structure and properties of the UI elements. React creates and updates the virtual DOM to match the actual DOM tree, and any changes made to the virtual DOM are applied to the actual DOM using a process called reconciliation.
To understand how the virtual DOM works, let’s consider the example of a like button. We will create a React component that displays a like button and the number of likes. When the user clicks the button, the number of likes should increase by one.
Here is the code for our component:
import React, { useState } from "react";

function LikeButton() {
  const [likes, setLikes] = useState(0);

  function handleLike() {
    setLikes(likes + 1);
  }

  return (
    <div>
      <button onClick={handleLike}>Like</button>
      <p>{likes} Likes</p>
    </div>
  );
}

export default LikeButton;
In this code, we have used the useState hook to create a state variable likes, which holds the number of likes. We have also defined a function handleLike that increases the value of likes by one when the button is clicked. Finally, we render the like button and the number of likes using JSX.
Now, let’s take a closer look at how the virtual DOM works in this example.
When the LikeButton component is first rendered, React creates a virtual DOM tree that mirrors the actual DOM tree. The virtual DOM contains a single div element that contains a button element and a p element.
{
  type: 'div',
  props: {},
  children: [
    {
      type: 'button',
      props: { onClick: handleLike },
      children: ['Like']
    },
    {
      type: 'p',
      props: {},
      children: [0, ' Likes']
    }
  ]
}
The children property of the p element contains the value of the likes state variable, which is initially set to zero.
When the user clicks the like button, the handleLike function is called, which updates the likes state variable. React then creates a new virtual DOM tree that reflects the updated state.
{
  type: 'div',
  props: {},
  children: [
    {
      type: 'button',
      props: { onClick: handleLike },
      children: ['Like']
    },
    {
      type: 'p',
      props: {},
      children: [1, ' Likes']
    }
  ]
}
Notice that the virtual DOM tree contains the same elements as before, but the children property of the p element has been updated to reflect the new value of likes. What we’re doing here is drafting a data structure called a “Fiber Node”, which we will go into more detail about in a later chapter. For now, let’s explore reconciliation.
After updating the virtual DOM, React performs a process called reconciliation to update the actual DOM. Reconciliation is the process of comparing the old virtual DOM tree with the new virtual DOM tree and determining which parts of the actual DOM need to be updated. If you’re interested in how exactly this is done, there’s a chapter that goes into a lot of detail about this coming up. For now, let’s consider our like button.
In our example, React compares the old virtual DOM tree with the new virtual DOM tree and finds that the p element has changed. React then updates the actual DOM to reflect the changes made in the virtual DOM tree.
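This comparison can be sketched in plain JavaScript over the virtual DOM objects we drafted earlier. The diff function below is a hypothetical simplification: React’s real reconciler also handles added and removed nodes, keys, props, and changed element types.

```javascript
// A toy diff over plain-object virtual DOM trees: walk both trees in
// lockstep and record which text children changed.
function diff(oldNode, newNode, path = oldNode.type, patches = []) {
  const oldKids = oldNode.children || [];
  const newKids = newNode.children || [];
  oldKids.forEach((oldChild, i) => {
    const newChild = newKids[i];
    if (typeof oldChild === "object") {
      // Element node: recurse, extending the path.
      diff(oldChild, newChild, path + " > " + oldChild.type, patches);
    } else if (oldChild !== newChild) {
      // Text child changed: only this spot needs a real DOM update.
      patches.push({ path, index: i, value: newChild });
    }
  });
  return patches;
}

const before = {
  type: "div",
  children: [
    { type: "button", children: ["Like"] },
    { type: "p", children: [0, " Likes"] },
  ],
};
const after = {
  type: "div",
  children: [
    { type: "button", children: ["Like"] },
    { type: "p", children: [1, " Likes"] },
  ],
};

console.log(diff(before, after));
// [ { path: 'div > p', index: 0, value: 1 } ]
```

The result is a single patch: the button and the div are untouched, and only the p element’s first text child needs to be written back to the real DOM.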
React updates only the necessary parts of the actual DOM to minimize the number of DOM manipulations. This approach is much faster and more efficient than updating the entire DOM tree every time there is a change.
Using the virtual DOM provides several benefits in React. Some of the key benefits are:
The virtual DOM is a critical concept in React that enables efficient rendering of user interfaces. React uses the virtual DOM to abstract the actual DOM tree and perform operations on it, reducing the number of DOM manipulations needed and improving performance. Our example of a like button is an indication of how React creates a virtual DOM tree that mirrors the actual DOM tree and updates it to reflect changes in the UI. In coming chapters, we will explore how React then performs reconciliation to update the actual DOM only where necessary, resulting in faster and more efficient rendering.
The virtual DOM has been a powerful and influential invention for the modern web, with newer libraries like Preact and Inferno adopting it once it was proven in React. We will cover more of the virtual DOM further in the book, but for now, let’s move on to the next section.
React revolutionized the web because it highly encouraged “thinking in components”: that is, breaking your app into smaller pieces, and adding them to a larger tree to compose your application. The component model is a key concept in React, and it’s what makes React so powerful. Let’s talk about why.
Once we create a Button component, we can use it in many places in our app, and if we need to change the style of the button, we can do it in one place and it’s changed everywhere.

Once we create a Button component, we can give it a key prop, and React will be able to keep track of the Button component over time and “know” when to update it, or when to skip updating it and continue making minimal changes to the user interface. Most components have implicit keys, but we can also explicitly provide them if we want to.

If we create a RegisterButton component, we can put the logic for what happens when the button is clicked in the same file as the RegisterButton component, instead of having to jump around to different files to find it. The RegisterButton component would wrap a simpler Button component and would be responsible for handling the logic for what happens when the button is clicked. This is called “composition.”

React’s component model is a fundamental concept that underpins the framework’s popularity and success. This approach to development has numerous benefits, including increased modularity, easier debugging, and more efficient code reuse.
Let’s explore the advantages of React’s component model in greater depth, using code examples to demonstrate its power. We will begin by examining the concept of modularity and how it is enabled by the component model. We will then move on to discuss how the component model facilitates efficient code reuse, and how it helps developers to debug their applications more easily.
Modularity is a critical concept in software development, as it allows developers to break up an application into smaller, more manageable pieces. By doing so, developers can create more maintainable code that is easier to work with and modify over time. Modularity also facilitates code reuse, as developers can create smaller, self-contained pieces of code that can be reused throughout an application or even across multiple applications.
React’s component model is designed to promote modularity by breaking up an application into smaller, reusable components. Each component is a self-contained piece of code that is responsible for rendering a specific part of the user interface. This approach to development is particularly well-suited to building complex applications that require a high degree of reusability and maintainability.
Let’s look at a simple example of how the component model can be used to create a modular user interface. In this example, we will create a simple form that consists of a text input and a submit button. We will use React to create two separate components: one for the input field and another for the button.
import React from "react";

function InputField(props) {
  return <input type="text" name={props.name} />;
}

function SubmitButton(props) {
  return <button type="submit">{props.label}</button>;
}
In this example, we have defined two separate components: InputField and SubmitButton. Each of these components takes in some props (short for “properties”) and returns a piece of JSX that renders the appropriate UI element. By separating these two pieces of functionality into separate components, we have created a more modular, reusable design.
Now let’s look at how we can use these components to create our form:
import React from "react";

function InputField(props) {
  return <input type="text" name={props.name} />;
}

function SubmitButton(props) {
  return <button type="submit">{props.label}</button>;
}

function Form(props) {
  return (
    <form onSubmit={props.onSubmit}>
      <InputField name="email" />
      <SubmitButton label="Submit" />
    </form>
  );
}
In this example, we have created a new component called Form that combines our InputField and SubmitButton components into a single, reusable form component. The Form component takes in a single prop called onSubmit, which is a function that will be called when the form is submitted.
By breaking our user interface down into smaller, reusable components, we have created a more modular and maintainable design. If we need to modify the input field or the submit button in the future, we can do so without affecting the rest of our code.
In the past, this used to be a lot more difficult to achieve, for example in Knockout.js, where you would have to create a custom binding for each component. In React, you can just create a component and use it anywhere in your application.
One of the primary benefits of React’s component model is that it enables efficient code reuse. Because each component is a self-contained piece of code, it can be reused throughout an application or even across multiple applications. This makes it easier for developers to create maintainable, reusable code that can be leveraged in multiple contexts.
Let’s look at a more complex example to see how the component model can facilitate code reuse. In this example, we will create a simple accordion component that can be used to display a list of items that can be expanded or collapsed.
import React, { useState } from "react";

function AccordionItem(props) {
  const [isOpen, setIsOpen] = useState(false);

  const toggleOpen = () => {
    setIsOpen(!isOpen);
  };

  return (
    <div>
      <div onClick={toggleOpen}>{props.title}</div>
      {isOpen && <div>{props.content}</div>}
    </div>
  );
}

function Accordion(props) {
  return (
    <div>
      {props.items.map((item) => (
        <AccordionItem key={item.id} title={item.title} content={item.content} />
      ))}
    </div>
  );
}
In this example, we have defined two separate components: AccordionItem and Accordion. The AccordionItem component is responsible for rendering a single item in the accordion list, while the Accordion component is responsible for rendering the entire list of items.
The AccordionItem component uses React’s useState hook to manage its state. We use the state variable isOpen to keep track of whether the item is currently expanded or collapsed, and we define a toggleOpen function that toggles the value of isOpen when the item is clicked. We then use conditional rendering to display the item’s content only when it is expanded.
The Accordion component takes in an array of items as a prop and uses the map function to render each item as an AccordionItem component. By breaking the accordion down into smaller, reusable components, we have created a design that is easy to modify and extend.
We can now use the Accordion component to display a list of items in our application:
import React from "react";
import { Accordion } from "./components";

// Each item carries a stable id because AccordionItem uses it as a key.
const items = [
  {
    id: 1,
    title: "Item 1",
    content: "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
  },
  {
    id: 2,
    title: "Item 2",
    content: "Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.",
  },
  {
    id: 3,
    title: "Item 3",
    content:
      "Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
  },
];

function App() {
  return (
    <div>
      <Accordion items={items} />
    </div>
  );
}

export default App;
In this example, we have created an array of items and passed it as a prop to the Accordion component. The Accordion component then renders each item as an AccordionItem component, displaying its title and content. Because we have defined these components as reusable, self-contained units of code, we can easily reuse them in other parts of our application or even in other applications altogether.
Another advantage of React’s component model is that it makes debugging easier. Because each component is responsible for rendering a specific part of the user interface, it is easier to isolate and fix bugs when they occur.
Let’s look at a simple example to see how this works. In this example, we will create a simple counter component that displays a number and allows the user to increment or decrement it.
import React, { useState } from "react";

function Counter() {
  const [count, setCount] = useState(0);

  const increment = () => {
    setCount(count + 1);
  };

  const decrement = () => {
    setCount(count - 1);
  };

  return (
    <div>
      <div>{count}</div>
      <button onClick={increment}>+</button>
      <button onClick={decrement}>-</button>
    </div>
  );
}
In this example, we have defined a Counter component that uses React’s useState hook to manage its state. We use the state variable count to keep track of the current count, and we define two functions, increment and decrement, that update the count when the user clicks the corresponding buttons. Let’s say we have a hard requirement that this counter only allows positive numbers.
Now let’s say that we discover a bug in our Counter component: when the user clicks the decrement button and the count reaches zero, the count continues to decrease, resulting in negative numbers. Using the component model, we can isolate the bug to a single component and quickly identify the source of the problem.
We can start by adding some console.log statements to our decrement function to see what is happening:
const decrement = () => {
  console.log("Before decrement: ", count);
  setCount(count - 1);
  console.log("After decrement: ", count);
};
When we run our application and click the decrement button, we can see the output in the console:
Before decrement: 1
After decrement: 1
Before decrement: 0
After decrement: 0
Before decrement: -1
After decrement: -1
We can see that we’re calling setCount with the right values, yet the logged value of count never changes within the same call. This is because setCount doesn’t mutate count immediately: it schedules an update, and the count inside our function is a snapshot of the state from the render in which it was created. The new value only appears on the next render. If React instead applied every update and re-rendered synchronously, a flurry of updates could leave the browser unresponsive and block user input, which is a huge no no for user experience.
To fix this issue, we can modify our decrement function to use the previous state value instead of the current state value:
const decrement = () => {
  setCount((prevCount) => Math.max(0, prevCount - 1));
};

By using the previous state value and clamping it at zero, we ensure the count is always updated from the latest state and never goes negative, satisfying our positive-numbers requirement.
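The difference between reading a captured state value and using a functional update can be simulated in plain JavaScript. The names below (createStore, flush) are hypothetical; this is a minimal sketch of queued, batched updates, not React’s actual implementation:

```javascript
// A minimal simulation of queued state updates: updates are collected
// and applied later in a batch, much like queued setState calls.
function createStore(initial) {
  let state = initial;
  const queue = [];
  return {
    get: () => state,
    set: (update) => queue.push(update),
    flush: () => {
      for (const update of queue) {
        // A function update receives the latest state; a plain value doesn't.
        state = typeof update === "function" ? update(state) : update;
      }
      queue.length = 0;
    },
  };
}

const store = createStore(0);

// Reading the captured value twice: both calls see 0, so one update is lost.
store.set(store.get() + 1);
store.set(store.get() + 1);
store.flush();
console.log(store.get()); // 1

// Functional updates always receive the latest state, so nothing is lost.
store.set((prev) => prev + 1);
store.set((prev) => prev + 1);
store.flush();
console.log(store.get()); // 3
```

The first pair of updates collapses to a single increment because both were computed from the same stale snapshot; the functional pair composes correctly.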
React’s component model is a powerful tool that allows developers to build complex, modular user interfaces with ease. By breaking the user interface down into smaller, reusable components, we can create designs that are easy to modify and extend. The component model also makes debugging easier, as each component is responsible for rendering a specific part of the user interface.
So far, we have explored the advantages of React’s component model and demonstrated how it can be used to build a variety of user interfaces. We have also looked at several examples of how the component model can be used to create reusable, self-contained units of code. There are more reasons why the component model is so revolutionary that we’ll get into throughout the course of this book as we become more and more fluent in React, but for now, let’s wrap up and answer our own question: “why is React a thing?”
React is a thing because it allows developers to build user interfaces with greater ease and reliability by enabling us to declaratively express what we’d like on the screen while React takes care of the how, as it makes incremental updates to the DOM in a safe, predictable, and efficient manner. It also encourages us to think in components, which helps us separate concerns and reuse code more easily. It is battle tested at Facebook, and designed to be used at scale. It’s also open source and free to use.
React also has a vast and active ecosystem, with a wide range of tools, libraries, and resources available to developers. This ecosystem includes tools for testing, debugging, and optimizing React applications, as well as libraries for common tasks such as data management, routing, and state management. Additionally, the React community is highly engaged and supportive, with many online resources, forums, and communities available to help developers learn and grow.
React is platform agnostic, meaning that it can be used to build web applications for a wide range of platforms, including desktop, mobile, and VR. This flexibility makes React an attractive option for developers who need to build applications for multiple platforms, as it allows them to use a single codebase to build applications that run across multiple devices.
Finally, React is backed by Meta, which is one of the largest and most influential technology companies in the world. This backing provides developers with confidence that React is a stable and well-supported technology, with a long-term roadmap for future development. Additionally, Facebook’s backing has helped to promote and popularize React, making it a widely recognized and respected technology in the development community.
To conclude, React’s value proposition is centered around its component-based architecture, declarative programming model, virtual DOM, JSX, extensive ecosystem, platform agnostic nature, and backing by Facebook. Together, these features make React an attractive option for developers who need to build fast, scalable, and maintainable web applications. Whether you’re building a simple website or a complex enterprise application, React can help you achieve your goals more efficiently and effectively than many other technologies. Let’s review.
In this chapter, we covered a brief history of React, its initial value proposition, and how it solves the problems of unsafe, unpredictable, and inefficient user interface updates at scale. We also talked about the component model and why it has been revolutionary for interfaces on the web. Ideally, after this chapter you are more informed about the roots of React and where it comes from, as well as its main strengths and value proposition.
Let’s make sure you’ve fully grasped the topics we covered. Take a moment to answer the following questions:
If you have trouble answering these questions, this chapter may be worth another read. If not, let’s explore the next chapter.
In chapter 2 we will dive a little deeper into this declarative abstraction that allows us to express what we want to see on the screen: the syntax and inner workings of JSX—the language that looks like HTML in JavaScript that got React into a lot of trouble in its early days.
In the previous chapter, we learned about the basics of React and its origin story, comparing it to other popular JavaScript libraries and frameworks of its time. We learned about the true value proposition of React and why it’s a thing. In this chapter, we’ll learn about JSX, which is a syntax extension for JavaScript that allows us to write HTML-like code within our JavaScript code.
JS is JavaScript. Does that mean JSX is JavaScript version 10, like Mac OS X? Is it “JS Xtra”? You might be wondering what the X in JSX stands for. “Ten” and “Xtra” would both be good guesses, but the X actually stands for JavaScript Syntax eXtension. It’s also sometimes called JavaScript XML. Let’s explore this language extension further while also exploring some creative ways we can leverage it to build powerful React applications.
If you’ve been around the web for a while, you might remember the term AJAX or Asynchronous JavaScript and XML from around the 2000s. AJAX was essentially a new way of using existing technologies at the time to create highly interactive web pages that update asynchronously and in-place, instead of the status quo at the time: each interaction would load an entire new page.
Using tools like XMLHttpRequest in the browser, the browser would initiate an asynchronous (that is, non-blocking) request over HTTP (HyperText Transfer Protocol). The response to this request would traditionally be XML. Today, we tend to respond with JSON instead. This is likely one of the reasons why fetch has overtaken XMLHttpRequest, since we don’t even request XML anymore and the name can be misleading.
JSX is a syntax extension for JavaScript that allows developers to write HTML-like code within their JavaScript code. It was originally developed by Facebook to be used with React, but it has since been adopted by other libraries and frameworks as well. JSX is not a separate language, but rather a syntax extension that is transformed into regular JavaScript code by a compiler or transpiler. When JSX code is compiled, it is transformed into plain JavaScript code. More on this later.
JSX syntax looks similar to HTML, but there are some key differences. For example, JSX uses curly braces {} to embed JavaScript expressions within the HTML-like code. Additionally, JSX attributes are written in camelCase (onClick) instead of the lowercase attributes HTML uses (onclick), and while built-in HTML elements are written in lowercase, custom components are capitalized.
I should mention that it’s possible to create React applications without JSX at all, but the code tends to become hard to read, reason about, and maintain. Still, if you want to, you can. Let’s look at a React component expressed with JSX and without.
Here’s an example of a list with JSX (figure 0):
const MyComponent = () => (
  <section id="list">
    <h1>This is my list!</h1>
    <p>Isn't my list amazing? It contains amazing things!</p>
    <ul>
      {amazingThings.map((t) => (
        <li key={t.id}>{t.label}</li>
      ))}
    </ul>
  </section>
);
Here’s an example of a list without JSX:
const MyComponent = () =>
  React.createElement(
    "section",
    { id: "list" },
    React.createElement("h1", {}, "This is my list!"),
    React.createElement("p", {}, "Isn't my list amazing? It contains amazing things!"),
    React.createElement(
      "ul",
      {},
      amazingThings.map((t) => React.createElement("li", { key: t.id }, t.label))
    )
  );
Do you see the difference? You might find the first example with JSX far more readable and maintainable than the latter. The former is JSX, the latter is vanilla JS. Let’s talk about its tradeoffs.
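To make the vanilla version less mysterious, here’s a sketch of what a createElement-style function might return: just a plain object describing the element. This is a hypothetical simplification; React’s actual elements carry additional fields such as key and $$typeof.

```javascript
// A minimal createElement sketch: it returns a plain object tree,
// which is essentially all a "React element" is under the hood.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const el = createElement(
  "section",
  { id: "list" },
  createElement("h1", {}, "This is my list!"),
  createElement("p", {}, "Isn't my list amazing?")
);

console.log(el.type); // "section"
console.log(el.props.id); // "list"
console.log(el.children[1].children[0]); // "Isn't my list amazing?"
```

Whether we write JSX or nested function calls, this kind of object tree is what ultimately gets handed to React.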
There are several benefits to using JSX in web development:
There are also some drawbacks to using JSX:
Despite its drawbacks, JSX has become a popular choice for web developers, particularly those of us working with React. It offers a powerful and flexible way to create components and build user interfaces, and has been embraced by a large and active community. In addition to its use with React, JSX has also been adopted by other libraries and frameworks, including Preact, Inferno, and Vue.js (which supports JSX as an alternative to its template syntax). This shows that JSX has applications beyond just React, and its popularity is likely to continue to grow in the coming years.
Overall, JSX is a powerful and flexible tool that can help us build dynamic and responsive user interfaces. JSX was created with one job: make expressing, presenting, and maintaining the code for React components simple while preserving powerful capabilities such as iteration, computation, and inline execution.
JSX becomes vanilla JavaScript before it makes it to the browser. How does it accomplish this? Let’s take a look under the hood!
How does one make a language extension? How do they work? To answer this question, we need to understand a little bit about programming languages themselves. Specifically, we need to explore how exactly code like this:
const a = 1;
let b = 2;
console.log(a + b);
outputs 3. Understanding this will help us understand JSX better, which will in turn help us understand React deeper, thereby increasing our fluency with React.
The code snippet in the section above is literally just text. How is this interpreted by a computer and then executed? For starters, it’s not a big clever RegExp (Regular Expression) that can identify key words in a text file. I once tried to build a programming language this way and failed miserably, because regular expressions are often hard to get right, harder still to read back and mentally parse, and quite difficult to maintain.
Instead, this code is compiled using a compiler. A compiler is a software tool that translates source code written in a high-level programming language into machine code that can be executed by a computer. The process of compiling involves several steps, including lexical analysis, parsing, semantic analysis, optimization, and code generation. Let’s explore each of these steps in more detail and discuss the role of compilers in the modern software development landscape.
A compiler uses a three-step process (at least in JavaScript anyway) that is in play here. These steps are called tokenization, parsing, and code generation. Let’s look at each of these steps in more detail.
Tokenization is essentially breaking up a string of characters into meaningful tokens. When a tokenizer is stateful and each token contains state about its parents and/or children, a tokenizer is called a lexer. Lexing is basically stateful tokenization.
Lexers have lexer rules that, in common cases, use regular expressions or similar to detect key words in a text string representing a programming language. The lexer then maps these key words to some type of enumerable value. For example,
const becomes 0
let becomes 1
function becomes 2

Once a string is tokenized or lexed, we move on to the next step.
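A toy tokenizer along these lines might look like the following. This is a hypothetical sketch; real JavaScript engines use hand-written lexers with far more rules, not a single regular expression:

```javascript
// Map key words to enumerable values, as described above.
const KEYWORDS = { const: 0, let: 1, function: 2 };

function tokenize(source) {
  // Grab identifiers/keywords, numbers, and single-character punctuation.
  const raw = source.match(/[A-Za-z_$][\w$]*|\d+|[=+;().]/g) || [];
  return raw.map((text) => {
    if (text in KEYWORDS) return { type: "keyword", code: KEYWORDS[text], text };
    if (/^\d/.test(text)) return { type: "number", text };
    if (/^[A-Za-z_$]/.test(text)) return { type: "identifier", text };
    return { type: "punct", text };
  });
}

const tokens = tokenize("const a = 1;");
console.log(tokens.map((t) => t.type));
// [ 'keyword', 'identifier', 'punct', 'number', 'punct' ]
```

Note that each token here is independent; once a tokenizer starts tracking state about surrounding tokens, it crosses into lexer territory as described above.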
Parsing is the process of taking the tokens and converting them into an abstract syntax tree (AST). The syntax tree is a data structure that represents the structure of the code. For example, the code snippet we looked at in the section above would be represented as a syntax tree like this:
{
  type: "Program",
  body: [
    {
      type: "VariableDeclaration",
      declarations: [
        {
          type: "VariableDeclarator",
          id: { type: "Identifier", name: "a" },
          init: { type: "Literal", value: 1, raw: "1" }
        }
      ],
      kind: "const"
    },
    {
      type: "VariableDeclaration",
      declarations: [
        {
          type: "VariableDeclarator",
          id: { type: "Identifier", name: "b" },
          init: { type: "Literal", value: 2, raw: "2" }
        }
      ],
      kind: "let"
    },
    {
      type: "ExpressionStatement",
      expression: {
        type: "CallExpression",
        callee: {
          type: "MemberExpression",
          object: { type: "Identifier", name: "console" },
          property: { type: "Identifier", name: "log" },
          computed: false
        },
        arguments: [
          {
            type: "BinaryExpression",
            left: { type: "Identifier", name: "a" },
            right: { type: "Identifier", name: "b" },
            operator: "+"
          }
        ]
      }
    }
  ]
}
The string, thanks to the parser, becomes effectively a JSON object. As programmers, when we have a data structure like this, we can do some really fun things. Language engines use these data structures to complete the process with the third step,
Code Generation, where the compiler generates machine code from the AST. This involves translating the code in the AST into a series of instructions that can be executed directly by the computer’s processor. The resulting machine code is then executed by the JavaScript engine. Overall, the process of converting an AST into machine code is complex and involves a lot of different steps. However, modern compilers are highly sophisticated and can produce highly optimized code that runs efficiently on a wide range of hardware architectures.
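Before moving on, it’s worth seeing why having the AST as a plain object is so useful: we can traverse it with ordinary JavaScript. Here’s a sketch with a hypothetical `collectDeclaredNames` helper walking a fragment of the tree we just saw:

```javascript
// A fragment of the AST from earlier, as a plain JavaScript object.
const ast = {
  type: "Program",
  body: [
    {
      type: "VariableDeclaration",
      kind: "const",
      declarations: [
        { type: "VariableDeclarator", id: { type: "Identifier", name: "a" } },
      ],
    },
    {
      type: "VariableDeclaration",
      kind: "let",
      declarations: [
        { type: "VariableDeclarator", id: { type: "Identifier", name: "b" } },
      ],
    },
  ],
};

// Walk the program body and collect every declared variable name.
function collectDeclaredNames(program) {
  const names = [];
  for (const node of program.body) {
    if (node.type === "VariableDeclaration") {
      for (const decl of node.declarations) {
        names.push(decl.id.name);
      }
    }
  }
  return names;
}

collectDeclaredNames(ast); // ["a", "b"]
```

This is the same kind of traversal linters, minifiers, and transpilers do — just over much bigger trees with many more node types.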
There are several types of compilers, each with different characteristics and use cases. Some of the most common types of compilers include:
Native Compilers - These are compilers that produce machine code that can be executed directly by the target platform’s processor. Native compilers are typically used to create standalone applications or system-level software.
Cross-Compilers - These are compilers that produce machine code for a different platform than the one on which the compiler is running. Cross-compilers are often used in embedded systems development or when targeting specialized hardware.
Just-In-Time (JIT) Compilers - These are compilers that translate code into machine code at runtime, rather than ahead of time. JIT compilers are commonly used in virtual machines, such as the Java Virtual Machine, and can offer significant performance advantages over traditional interpreters.
Interpreters - These are programs that execute source code directly, without the need for compilation. Interpreters are typically slower than compilers, but offer greater flexibility and ease of use.
To run JavaScript code in browsers, we use a Just-In-Time (JIT) compiler. In a JIT compiler, the source code is first compiled into an intermediate representation, such as bytecode, which is then translated into machine code as needed. This allows the compiler to optimize the code based on runtime information, such as the values of variables and the execution paths taken by the program. JavaScript engines inside web browsers include a JIT compiler. The JIT compiler translates JavaScript code into machine code on the fly, as the code is executed.
Runtimes usually interface with engines to provide more contextual helpers and features for their specific environment. The most popular JavaScript runtime, by far, is the web browser, such as Google Chrome. This runtime gives JavaScript context like the window object and the document object. Another very popular runtime is Node.js, which is a JavaScript runtime that runs on servers. If you’ve worked with both browsers and Node.js before, you may have noticed that Node.js does not have a global window object. This is because it’s a different runtime and, as such, provides different context. Cloudflare created a similar runtime called Workers whose sole responsibility is executing JavaScript on globally distributed edge servers, but we’re digressing. How does this all relate to JavaScript eXtended syntax?
Now that we understand how one would extend JavaScript syntax, how does JSX work? How would we do it? To extend JavaScript syntax, we’d need to either have a different engine that can understand our new syntax, or deal with our new syntax before it reaches the engine. The former is nearly impossible: engines take enormous effort to create and maintain, and because they tend to be so widely used, changes to them can take years or decades to ship. Even if we built one, we’d then have to make sure our “bespoke special engine™” is used everywhere. How would we convince browser vendors and others to switch to our unpopular new thing? This wouldn’t work.
The latter is quicker: let’s explore how we can deal with our new syntax before it reaches the engine. To do this, we’d need to create our own lexer and parser that can understand our extended language: that is, take a text string of code and understand it. Then, instead of generating machine code as is traditional, we can take this syntax tree and instead generate plain old regular vanilla JavaScript that all current engines can understand. This is precisely what Babel in the JavaScript ecosystem does, along with other tools like TypeScript, Traceur, and swc.
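As a toy illustration of “dealing with syntax before it reaches the engine,” here’s a deliberately naive transform for a made-up `double x = 5` declaration that doubles the value at compile time. Real tools like Babel use a full lexer, parser, and code generator rather than a regex, but the input/output relationship is the same — extended syntax in, vanilla JavaScript out:

```javascript
// A naive "transpiler" sketch for an invented `double` declaration.
// Everything here is hypothetical; production tools work on ASTs.
function transpile(source) {
  return source.replace(
    /double\s+(\w+)\s*=\s*(\d+)/g,
    (_match, name, value) => `const ${name} = ${Number(value) * 2}`
  );
}

transpile("double x = 5;");
// "const x = 10;" — plain JavaScript any current engine understands.
```

Code that doesn’t use the extended syntax passes through untouched, which is exactly how JSX files interleave plain JavaScript with extended syntax.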

Because of this, JSX cannot be used directly in the browser. Instead, it requires a “build step” in which a parser that understands the extended syntax runs against it and produces a syntax tree. That tree is then transformed into vanilla JavaScript in a final distributable bundle. This is called “transpilation”: transformed, then compiled code — that is, compilation from one source language to another.
Whew! What a deep dive! Now that we understand how we can build our own extension of JavaScript, let’s look at what we can do with this specific extension JSX.
It all starts with <, which is an unrecognizable character in JavaScript. When a JavaScript engine encounters this, it throws a SyntaxError: Unexpected token '<'. In JSX, this “JSX Pragma” can be transpiled into a function call. The name of the function to call when a parser sees < is configurable, and defaults to the function React.createElement. The signature of this function is expected to be this:
function pragma(tag, props, ...children)
That is, it receives a tag, props, and children as arguments. Here’s how JSX maps to regular JavaScript syntax:
The following JSX code:
<MyComponent prop="value">contents</MyComponent>
Becomes the following JavaScript code:
React.createElement(MyComponent, { prop: "value" }, "contents");
Notice the mapping between the tag (MyComponent), the props (prop="value") and the children ("contents")? This is essentially the role of the JSX pragma: syntax sugar over multiple, recursive function calls. The JSX pragma is effectively an alias: < instead of React.createElement.
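To see the recursion, here’s a sketch of a createElement-style pragma in plain JavaScript. This is an illustration only — React’s actual createElement returns richer element objects — but it shows how nested tags become nested function calls that produce a tree:

```javascript
// A minimal createElement-style pragma: every tag/props/children triple
// becomes a plain object describing one node of the element tree.
function createElement(tag, props, ...children) {
  return { tag, props: props ?? {}, children };
}

// <div id="parent"><span>hello</span></div> would transpile to:
const tree = createElement(
  "div",
  { id: "parent" },
  createElement("span", null, "hello")
);

// The nested call produces a nested object:
// tree.children[0].children[0] === "hello"
```

The inner call runs first, so by the time the outer `createElement` receives its children, they are already plain objects — this is what “multiple, recursive function calls” means in practice.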
One of the most powerful features of JSX is the ability to execute code inside a tree of elements. To iterate over a list as we did under the section titled “Under the Hood”, we can put executable code inside curly brackets like we did with our map earlier in this chapter. If we want to show a sum of two numbers in JSX, we’d do it like this:
const a = 1;
const b = 2;
const MyComponent = () => <Box>Here's an expression: {a + b}</Box>;
This will render Here's an expression: 3, because the stuff inside curly brackets is executed as an expression. Using JSX expressions, we can iterate over lists, and execute a variety of expressions including conditional checks with ternary operations, string replacement, and more.
Here’s another example with a conditional check using a ternary operation:
const a = 1;
const b = 2;
const MyComponent = () => <Box>Is b more than a? {b > a ? "YES" : "NO"}</Box>;
This will render Is b more than a? YES since the comparison is an evaluated expression. For posterity, it’s worth mentioning here that JSX expressions are exactly that: expressions. It is not possible to execute statements inside of a JSX element tree. This will not work:
const MyComponent = () => (
  <Box>
    Here's an expression:
    {
      const a = 1;
      const b = 2;
      if (a > b) {
        3
      }
    }
  </Box>
);
It doesn’t work because statements do not return anything and are considered side effects: they do things without yielding a value. After statements and computations, how would we print a value inline? Notice that in the previous example, we just put the bare number 3 inside the if block. How is our renderer supposed to know we intend to print 3? This is why expressions are evaluated, but statements are not.
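One workaround is worth knowing: a function call is an expression, so wrapping statements in an immediately invoked function expression (IIFE) makes them usable anywhere an expression is required. In plain JavaScript:

```javascript
// Statements don't produce values, but calling a function does. An IIFE
// wraps statements in a function and invokes it immediately, yielding
// whatever the function returns.
const result = (() => {
  const a = 1;
  const b = 2;
  if (a > b) {
    return a;
  }
  return a + b;
})();

// result === 3
```

Inside JSX, the same trick looks like `{(() => { /* statements */ })()}`, though it’s usually cleaner to compute the value in the component body before returning JSX.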
Okay, we’re getting pretty good at this JSX thing. Let’s dive into some patterns we can use with JSX to boost our React fluency.
Software design patterns are commonly used solutions to recurring problems in software development. They provide a way to solve problems that have been encountered and solved by other developers, saving time and effort in the software development process. They are often expressed as templates or guidelines for creating software that can be used in different situations. Software design patterns are typically described using a common vocabulary and notation, which makes them easier to understand and communicate among developers. They can be used to improve the quality, maintainability, and efficiency of software systems.
Software design patterns are important for several reasons:
Software design patterns usually arise naturally over time in response to real-world needs. These patterns solve specific problems that engineers experience, and find their way into an “engineer’s arsenal” of tools to use in different use cases. One pattern is not inherently worse than another; each has its place.
Most patterns help us identify ideal levels of abstraction: how we can write code that ages like fine wine instead of accruing extra state and configuration to the point where it becomes unreadable and/or unmaintainable. This is why a common consideration when picking a design pattern is control: how much of it we give to users vs. how much of it our program handles.
With that, let’s dive in to some popular React patterns, following a rough chronological order of when these patterns emerged.
It’s common to see a React design pattern that is a combination of two components: a presentational component and a container component. The presentational component is the one that renders the UI, and the container component is the one that handles the state of the UI. Consider a counter. This is how a counter would look implementing this pattern:
const PresentationalCounter = (props) => {
  return (
    <section>
      <button onClick={props.increment}>+</button>
      <button onClick={props.decrement}>-</button>
      <button onClick={props.reset}>Reset</button>
      <h1>Current Count: {props.count}</h1>
    </section>
  );
};

const ContainerCounter = () => {
  const [count, setCount] = useState(0);
  const increment = () => setCount(count + 1);
  const decrement = () => setCount(count - 1);
  const reset = () => setCount(0);

  return (
    <PresentationalCounter
      count={count}
      increment={increment}
      decrement={decrement}
      reset={reset}
    />
  );
};
In this example, we’ve got two components: a presentational component (PresentationalCounter) and a container component (ContainerCounter). The presentational component is the one that renders the UI, and the container component is the one that handles the state.
Why is this a thing? This pattern is quite useful because of the principle of single responsibility. Instead of having a component be responsible for how it should look and how it should work, we split these concerns. The result? PresentationalCounter can be passed between other stateful containers and preserve the look we want, while ContainerCounter can be replaced with another stateful container and preserve the functionality we want.
We can also unit test ContainerCounter in isolation and instead visually test (using Storybook or similar) PresentationalCounter in isolation. We can also assign engineers or engineering teams more comfortable with visual work to PresentationalCounter while assigning engineers who prefer data structures and algorithms to ContainerCounter.
We have so many more options because of this decoupled approach. For these reasons, the Presentational/Container component pattern has gained quite a lot of popularity and is still in use today.
From Wikipedia,
In mathematics and computer science, a higher-order function (HOF) is a function that does at least one of the following: takes one or more functions as arguments (i.e. a procedural parameter, which is a parameter of a procedure that is itself a procedure), returns a function as its result.
In the JSX world, an HOC is basically this: a component that takes another component as an argument and returns a new component that is the result of the composition of the two. HOCs are great for shared behavior across components that we’d rather not repeat.
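Before looking at HOCs in React, here’s the plain-JavaScript shape of a higher-order function. `withLogging` is a hypothetical example: it takes a function and returns a new function wrapping it with extra behavior — the same shape an HOC has with components:

```javascript
// A higher-order function: takes a function, returns a wrapped function.
const withLogging = (fn) => (...args) => {
  console.log(`calling with ${JSON.stringify(args)}`);
  return fn(...args);
};

const add = (a, b) => a + b;
const loggedAdd = withLogging(add);

loggedAdd(2, 3); // logs the arguments, then returns 5
```

Swap “function” for “component” and “return value” for “rendered output,” and you have the HOC pattern.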
For example: many web applications need to request data from some data source asynchronously. Loading and error states are often inevitable, but we sometimes forget to account for them in our software. If we manually add loading, data, and error props to our components, the chance we miss a few gets even higher. Let’s consider a basic todo list app:
const App = () => {
  const [data, setData] = useState([]);

  useEffect(() => {
    fetch("https://mytodolist.com/items")
      .then((res) => res.json())
      .then(setData);
  }, []);

  return <BasicTodoList data={data} />;
};
This app has a few problems. We don’t account for loading or error states. Let’s fix this.
const App = () => {
  const [isLoading, setIsLoading] = useState(true);
  const [data, setData] = useState([]);
  const [error, setError] = useState(null);

  useEffect(() => {
    fetch("https://mytodolist.com/items")
      .then((res) => res.json())
      .then((data) => {
        setIsLoading(false);
        setData(data);
      })
      .catch(setError);
  }, []);

  return isLoading ? (
    "Loading..."
  ) : error ? (
    error.message
  ) : (
    <BasicTodoList data={data} />
  );
};
Yikes. This got pretty unruly pretty fast. Moreover, this solves the problem for just one component. Do we need to add these pieces of state (i.e. loading, data, and error) to each component that interacts with a foreign data source? This is a cross-cutting concern, and exactly where higher-order components shine.
Instead of repeating this loading, error, data pattern for each component that talks to a foreign data source asynchronously, we can use a higher-order component factory to deal with these states for us. Let’s consider a withAsync higher-order component factory that remedies this.
const TodoList = withAsync(BasicTodoList);
withAsync will deal with loading and error states, and render any component when data is available. Let’s look at its implementation.
const withAsync = (Component) => (props) => {
  if (props.loading) {
    return "Loading...";
  }

  if (props.error) {
    return props.error.message;
  }

  return (
    <Component
      // Pass through whatever other props we give `Component`.
      {...props}
    />
  );
};
So now, when any Component is passed into withAsync, we get a new component that renders appropriate pieces of information based on its props. This changes our initial component into something more workable:
const TodoList = withAsync(BasicTodoList);

const App = () => {
  const [isLoading, setIsLoading] = useState(true);
  const [data, setData] = useState([]);
  const [error, setError] = useState(null);

  useEffect(() => {
    fetch("https://mytodolist.com/items")
      .then((res) => res.json())
      .then((data) => {
        setIsLoading(false);
        setData(data);
      })
      .catch(setError);
  }, []);

  return <TodoList loading={isLoading} error={error} data={data} />;
};
No more nested ternaries, and the TodoList itself can show appropriate information depending on whether it’s loading, has an error, or has data. Since the withAsync HOC factory deals with this cross-cutting concern, we can wrap any component that talks to an external data source with it and get back a new component that responds to loading and error props. Consider a blog:
const Post = withAsync(BasicPost);
const Comments = withAsync(BasicComments);

const Blog = ({ req }) => {
  const { loading: isPostLoading, error: postLoadError } = usePost(
    req.query.postId
  );
  const { loading: areCommentsLoading, error: commentLoadError } = useComments({
    postId: req.query.postId,
  });

  return (
    <>
      <Post
        id={req.query.postId}
        loading={isPostLoading}
        error={postLoadError}
      />
      <Comments
        postId={req.query.postId}
        loading={areCommentsLoading}
        error={commentLoadError}
      />
    </>
  );
};

export default Blog;
In this example, both Post and Comments use the withAsync HOC pattern which returns a newer version of BasicPost and BasicComments respectively that now respond to loading and error props. The behavior for this cross-cutting concern is centrally managed in withAsync’s implementation, so we account for loading and error states “for free” just by using the HOC pattern here.
Since we’ve talked about JSX expressions above, a common pattern is to have props that are functions that receive component-scoped state as arguments to facilitate code reuse. Here’s a simple example:
<WindowSize
  render={({ width, height }) => (
    <div>
      Your window is {width}x{height}px
    </div>
  )}
/>
Notice how there’s a prop called render that receives a function as a value? This prop even outputs some JSX markup that’s actually rendered. But why? Turns out WindowSize does some magic internally to compute the size of a user’s window, and then calls props.render to return the structure we declare.
Let’s take a look at WindowSize to understand this a bit more:
const WindowSize = (props) => {
  const [size, setSize] = useState({ width: -1, height: -1 });

  useEffect(() => {
    const handleResize = () => {
      setSize({ width: window.innerWidth, height: window.innerHeight });
    };
    window.addEventListener("resize", handleResize);
    return () => window.removeEventListener("resize", handleResize);
  }, []);

  return props.render(size);
};
From this example, what we can see is that WindowSize uses an event listener to store some stuff in state on every resize, but the component itself is headless: it has no opinions about what UI to present. Instead, it yields control to whatever parent is rendering it and calls the render prop it’s supplied: effectively inverting control to its parent for the rendering job.
This helps a component that depends on the window size for rendering receive this information without duplicating the useEffect blocks and keeping our code a little bit more DRY (Don’t Repeat Yourself). This pattern is no longer as popular, and has since been effectively replaced with React Hooks (more on them later in the book).
Since children is a prop, some have preferred to drop the render prop name altogether and instead just use children. This would change the use of WindowSize to look like this:
<WindowSize>
  {({ width, height }) => (
    <div>
      Your window is {width}x{height}px
    </div>
  )}
</WindowSize>
Some React authors prefer this, because it’s truer to the intent of the code: WindowSize in this case looks a bit like a React Context, and whatever we display tends to feel like children that consume this context. Still, React hooks eliminate the need for this pattern altogether, so maybe proceed with caution.
The Control Props pattern is a powerful way to control multiple components dependent on a common piece of state. React’s own documentation is pretty clear about the concept of controlled components, where a component’s state is controlled by React; a component that manages its own state instead is called an uncontrolled component.
Here’s an example of an uncontrolled component:
const Form = () => {
  return (
    <form>
      <input type="text" />
      <button type="submit">Submit</button>
    </form>
  );
};
The input’s value is self-contained by the browser, and React doesn’t have access to it nor controls it. This is an uncontrolled component because React does not manage its state. Let’s control it.
const Form = () => {
  const [value, setValue] = useState("");

  return (
    <form>
      <input
        type="text"
        value={value}
        onChange={(e) => setValue(e.target.value)}
      />
      <button type="submit">Submit</button>
    </form>
  );
};
By binding this component’s value prop to state managed by React, this component is now controlled by React. But you might be wondering why we’re talking about controlled components instead of control props. The control props pattern is close to this: a control prop is a prop on a child component whose value is state managed by a parent component.
Let’s illustrate this with an example:
const ListPage = ({ items }) => {
  const [filter, setFilter] = useState("");

  return (
    <section>
      <HeaderBar>
        <input
          type="checkbox"
          checked={filter.length > 0} // <- control prop to keep this in sync with `filter` state
        />{" "}
        Filtered by term {filter}
        <input
          type="text"
          value={filter} // <- control prop to keep this in sync with `filter` state
          onChange={(e) => setFilter(e.target.value)}
        />
      </HeaderBar>
      <List
        items={items}
        filter={filter} // <- control prop to keep this in sync with `filter` state
      />
    </section>
  );
};
In the above example, we use three control props based on one piece of state to keep our entire component tree in sync with our page’s “filtered” state. Looking at this, we see that the control props pattern helps us manage state and keep it synchronized across our React applications.
We often need to bundle a whole bunch of props together. For example, when creating drag-and-drop user interfaces, there are quite a few props to manage:
onDragStart to tell the browser what to do when a user starts dragging an element.
onDragOver to identify a dropzone.
onDrop to execute some code when an element is dropped on this element.
onDragEnd to tell the browser what to do when an element is done being dragged.

Moreover, data/elements cannot be dropped in other elements by default. To allow an element to be dropped on another, we must prevent the default handling of the element. This is done by calling the event.preventDefault method in the onDragOver handler of each possible drop zone.
Since these props usually go together, and since onDragOver usually defaults to event => { event.preventDefault(); moreStuff(); }, we can collect these props together and reuse them in various components like so:
export const droppableProps = {
  onDragOver: (event) => {
    event.preventDefault();
  },
  onDrop: (event) => {},
};

export const draggableProps = {
  onDragStart: (event) => {},
  onDragEnd: (event) => {},
};
Now, if we have a React component we expect to behave like a dropzone, we can use the prop collection on it like this:
<Dropzone {...droppableProps} />
This is the prop collection pattern, and it makes a number of props reusable. It’s widely used in the accessibility space to include a number of aria-* props on accessible components. One problem that’s still present, though, is that if we write a custom onDragOver prop and override the collection, we lose the event.preventDefault call that we get out of the box using the collection.
This can cause unexpected behavior, removing the ability to drop a component on Dropzone:
<Dropzone
  {...droppableProps}
  onDragOver={() => {
    alert("Dragged!");
  }}
/>
It removes the ability to drop something on Dropzone since we no longer call event.preventDefault on it. Thankfully, we can fix this using prop getters.
Prop getters essentially compose prop collections with custom props and merge them. From the example above, we’d like to preserve the event.preventDefault call in the droppableProps collection’s onDragOver handler, while also adding a custom alert("Dragged!"); call to it. We can do this using prop getters like so.
First, we’ll change droppableProps the collection to a prop getter:
export const getDroppableProps = () => {
  return {
    onDragOver: (event) => {
      event.preventDefault();
    },
    onDrop: (event) => {},
  };
};
At this point, nothing has changed besides where we once exported a prop collection, we now export a function that returns a prop collection. This is a prop getter. Since this is a function, it can receive arguments—like a custom onDragOver. We can compose this custom onDragOver with our default one like so:
const compose = (...functions) => (...args) =>
  functions.forEach((fn) => fn?.(...args));

export const getDroppableProps = ({
  onDragOver: replacementOnDragOver,
  ...replacementProps
} = {}) => {
  // Defaulting the argument to {} keeps the getter callable with no arguments.
  const defaultOnDragOver = (event) => {
    event.preventDefault();
  };

  return {
    onDragOver: compose(replacementOnDragOver, defaultOnDragOver),
    onDrop: (event) => {},
    ...replacementProps,
  };
};
Now, we can use the prop getter like this:
<Dropzone
  {...getDroppableProps({
    onDragOver: () => {
      alert("Dragged!");
    },
  })}
/>
This custom onDragOver will compose into our default onDragOver, and both things will happen: event.preventDefault(), and alert("Dragged!"). This is the prop getter pattern.
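Stripped of JSX, the composition at the heart of the prop getter pattern can be exercised directly. This sketch uses a fake event object (an assumption for illustration, standing in for a real DragEvent) so we can observe that both the default and the replacement handlers run:

```javascript
// Compose calls every function it's given with the same arguments.
const compose = (...functions) => (...args) =>
  functions.forEach((fn) => fn?.(...args));

// A prop getter that merges a caller's onDragOver with the default one.
const getDroppableProps = ({ onDragOver: replacementOnDragOver, ...rest } = {}) => {
  const defaultOnDragOver = (event) => {
    event.preventDefault();
  };
  return {
    onDragOver: compose(replacementOnDragOver, defaultOnDragOver),
    ...rest,
  };
};

// A fake event so we can watch both handlers fire.
let prevented = false;
let alerted = false;
const fakeEvent = { preventDefault: () => { prevented = true; } };

const props = getDroppableProps({ onDragOver: () => { alerted = true; } });
props.onDragOver(fakeEvent);
// Both prevented and alerted are now true: neither behavior was lost.
```

This is the whole trick: the getter owns the defaults, and the caller’s customizations compose with them instead of replacing them.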
Sometimes, we have accordion components like this:
<Accordion
  items={[
    { label: "One", content: "lorem ipsum for more, see https://one.com" },
    { label: "Two", content: "lorem ipsum for more, see https://two.com" },
    { label: "Three", content: "lorem ipsum for more, see https://three.com" },
  ]}
/>
This component is intended to render a list similar to this, except only one item can be open at a given time.
The inner workings of this component would look something like this:
export const Accordion = ({ items }) => {
  const [activeItemIndex, setActiveItemIndex] = useState(0);

  return (
    <ul>
      {items.map((item, index) => (
        <li onClick={() => setActiveItemIndex(index)} key={item.id}>
          <strong>{item.label}</strong>
          {index === activeItemIndex && item.content}
        </li>
      ))}
    </ul>
  );
};
But what if we wanted a custom separator in between items Two and Three? What if we wanted the third link to be red or something? We’d probably resort to some type of hack like this:
<Accordion
  items={[
    { label: "One", content: "lorem ipsum for more, see https://one.com" },
    { label: "Two", content: "lorem ipsum for more, see https://two.com" },
    { label: "---" },
    { label: "Three", content: "lorem ipsum for more, see https://three.com" },
  ]}
/>
But that wouldn’t look the way we want. So we’d probably do more hacks on our current hack:
export const Accordion = ({ items }) => {
  const [activeItemIndex, setActiveItemIndex] = useState(0);

  return (
    <ul>
      {items.map((item, index) =>
        item.label === "---" ? (
          <hr />
        ) : (
          <li onClick={() => setActiveItemIndex(index)} key={item.id}>
            <strong>{item.label}</strong>
            {index === activeItemIndex && item.content}
          </li>
        )
      )}
    </ul>
  );
};
Now is that code we’d be proud of? I’m not sure. This is why we need Compound Components: they allow us to have a grouping of interconnected, distinct components that share state, but are atomically renderable, giving us more control of the element tree.
This accordion, expressed using the compound components pattern, would look like this:
<Accordion>
  <AccordionItem item={{ label: "One" }} />
  <AccordionItem item={{ label: "Two" }} />
  <AccordionItem item={{ label: "Three" }} />
</Accordion>
Let’s explore how this pattern can be implemented in React. First, we’ll start with a context that each part of the accordion can read from:
const AccordionContext = createContext({
  activeItemIndex: 0,
  setActiveItemIndex: () => 0,
});
Then, our Accordion component will just provide context to its children:
export const Accordion = ({ children }) => {
  const [activeItemIndex, setActiveItemIndex] = useState(0);

  return (
    <AccordionContext.Provider value={{ activeItemIndex, setActiveItemIndex }}>
      <ul>{children}</ul>
    </AccordionContext.Provider>
  );
};
Now, let’s create discrete AccordionItem components that consume and respond to this context as well:
export const AccordionItem = ({ item, index }) => {
  // Note we're using the context here, not state!
  const { activeItemIndex, setActiveItemIndex } = useContext(AccordionContext);

  return (
    <li onClick={() => setActiveItemIndex(index)} key={item.id}>
      <strong>{item.label}</strong>
      {index === activeItemIndex && item.content}
    </li>
  );
};
Now that we’ve got multiple parts for our Accordion making it a compound component, our usage of the Accordion goes from this:
<Accordion
  items={[
    { label: "One", content: "lorem ipsum for more, see https://one.com" },
    { label: "Two", content: "lorem ipsum for more, see https://two.com" },
    { label: "Three", content: "lorem ipsum for more, see https://three.com" },
  ]}
/>
to this:
<Accordion>
  {items.map((item, index) => (
    <AccordionItem key={item.id} item={item} index={index} />
  ))}
</Accordion>
The benefit of this is that we have far more control, while each AccordionItem is aware of the larger state of Accordion. So now, if we wanted to include a horizontal line between items Two and Three, we could break out of the map and go more manual if we wanted to:
<Accordion>
  <AccordionItem key={items[0].id} item={items[0]} index={0} />
  <AccordionItem key={items[1].id} item={items[1]} index={1} />
  <hr />
  <AccordionItem key={items[2].id} item={items[2]} index={2} />
</Accordion>
Or, we could do something more hybrid like:
<Accordion>
  {items.slice(0, 2).map((item, index) => (
    <AccordionItem key={item.id} item={item} index={index} />
  ))}
  <hr />
  {items.slice(2).map((item, index) => (
    <AccordionItem key={item.id} item={item} index={index} />
  ))}
</Accordion>
This is the benefit of compound components: they invert control of rendering to the parent, while preserving contextual state awareness among children. The same approach could be used for a tab UI, where tabs are aware of the current tab state while having varying levels of element nesting.
The context module pattern is a powerful pattern at scale where application state can be fairly complex. In essence, it works by exporting utility functions that mutate state by accepting dispatch functions as arguments in a tree-shakeable and lazy-loadable way.
Here’s a contrived example with a counter, coming full circle from our first pattern which was also a counter. Let’s start by creating a context:
import React from "react";

const CounterContext = React.createContext(null);
CounterContext.displayName = "CounterContext";
Now, let’s create a provider so we can share this context with our React app:
export function CounterProvider({ children }) {
  // State is a plain number so the utility functions below can update it.
  const [state, dispatch] = useState(0);

  return (
    <CounterContext.Provider value={{ state, dispatch }}>
      {children}
    </CounterContext.Provider>
  );
}
The final step to implement this pattern is to export a few utility functions that can be used to manipulate state:
export const increment = (dispatch, by = 1) =>
  dispatch((oldValue) => oldValue + by);

export const decrement = (dispatch, by = 1) =>
  dispatch((oldValue) => oldValue - by);

export const reset = (dispatch) => dispatch(0);
Notice how they accept dispatch as their first argument. This makes them pluggable, and able to run in a higher degree of isolation. Now, we can use these functions in our application, and import them on-demand, reducing bundle size and preserving performance.
const PresentationalCounter = (props) => {
  return (
    <section>
      <button onClick={props.increment}>+</button>
      <button onClick={props.decrement}>-</button>
      <button onClick={props.reset}>Reset</button>
      <h1>Current Count: {props.count}</h1>
    </section>
  );
};

const ContainerCounter = () => {
  const { state: count, dispatch } = useContext(CounterContext);

  return (
    <PresentationalCounter
      count={count}
      increment={() => increment(dispatch)}
      decrement={() => decrement(dispatch)}
      reset={() => reset(dispatch)}
    />
  );
};
We can lazy load increment, decrement, and reset using dynamic import in JavaScript, which would mean shipping a smaller amount of first-load JavaScript to our users, and making things faster for them: a UI can render closer to instant, and then immediately become interactive via our context modules shortly after. This is a powerful pattern that can be used to build a large application while shipping a smaller amount of code.
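To see why accepting dispatch as an argument makes these utilities so pluggable, here’s a React-free sketch using a stand-in dispatch with the same calling convention as a useState setter (it accepts either a value or an updater function — the stand-in itself is an assumption for illustration):

```javascript
// A minimal stand-in for a useState-style setter.
let state = 0;
const dispatch = (valueOrUpdater) => {
  state =
    typeof valueOrUpdater === "function" ? valueOrUpdater(state) : valueOrUpdater;
};

// The same utility functions from the context module pattern.
const increment = (dispatch, by = 1) => dispatch((oldValue) => oldValue + by);
const decrement = (dispatch, by = 1) => dispatch((oldValue) => oldValue - by);
const reset = (dispatch) => dispatch(0);

increment(dispatch, 5); // state is now 5
decrement(dispatch);    // state is now 4
reset(dispatch);        // state is now 0
```

Because the utilities hold no state of their own, they can be tested in isolation like this, lazy-loaded on demand, and pointed at any dispatch-compatible setter.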
Okay, we’ve covered a fair amount of ground on the topic of JSX. At this point we should be feeling pretty confident (or even fluent, if you will) about the topic to the point where we can confidently explain aspects of it to people.
Let’s make sure you’ve fully grasped the topics we covered. Take a moment to answer the following questions:
If you have trouble answering these questions, this chapter may be worth another read. If not, let’s explore the next chapter.
Now that we’re pretty fluent with JSX, let’s turn our attention to the next aspect of React and see how we can squeeze the most knowledge out of it to further boost our fluency. Let’s explore the virtual DOM.
In this chapter, we’ll dive deep into the concept of virtual DOM (Document Object Model) and its significance in React. We’ll also explore how React uses the virtual DOM to make web development easier and more efficient.
As web applications become more complex, it becomes increasingly difficult to manage the “real DOM”, which is a complex and error-prone process as we’ll see soon enough. React’s virtual DOM provides an alternative solution to this problem.
Throughout this chapter, we’ll explore the workings of React’s virtual DOM, its advantages over the real DOM, and how it is implemented to make web development easier and more efficient. We’ll also cover how React optimizes performance around the real DOM.
Through a series of code examples and detailed explanations, we’ll understand the virtual DOM’s role in React and how to take advantage of its benefits to create robust and efficient web applications. Let’s get started!
The virtual DOM is a programming concept that allows web developers to create user interfaces in a more efficient and performant way. It does this by creating a virtual representation of the real DOM. The real DOM is a tree-like data structure that represents the HTML elements on a web page.
The virtual DOM is essentially a lightweight copy of the real DOM that is kept in memory. Whenever a change is made to the UI, the virtual DOM is updated first, and then the real DOM is updated to match the changes in the virtual DOM.
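As a sketch, a virtual DOM node can be as simple as a plain object describing a tag, its props, and its children. This shape is illustrative — React’s actual element objects differ in detail — but it shows why “updating” the virtual DOM is cheap: it’s ordinary object work, with no layout or paint involved:

```javascript
// A plain-object representation of <ul class="list"><li>One</li><li>Two</li></ul>
const vdom = {
  type: "ul",
  props: { className: "list" },
  children: [
    { type: "li", props: {}, children: ["One"] },
    { type: "li", props: {}, children: ["Two"] },
  ],
};

// "Updating" it just means building a new object — no browser involved.
const next = {
  ...vdom,
  children: [...vdom.children, { type: "li", props: {}, children: ["Three"] }],
};

// next has three children; the original vdom is untouched.
```

Keeping the old tree untouched is what makes the next step — diffing the old tree against the new one — possible.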
The reason for this is that updating the real DOM can be a slow and expensive process. We’ll cover this in the next section, but the gist of it is every time a change is made to the real DOM, the browser has to recalculate the layout of the page, repaint the screen, and perform other operations that can be time-consuming.
On the other hand, updating the virtual DOM is much faster since it doesn’t involve any changes to the actual page layout. Instead, it is a simple JavaScript object that can be manipulated quickly and efficiently.
When updates are made to the virtual DOM, React uses a diffing algorithm to identify the differences between the old and new versions of the virtual DOM. This algorithm then determines the minimal set of changes required to update the real DOM, and these changes are applied in a batched and optimized way to minimize the performance impact.
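To make this idea concrete, here is a deliberately simplified, hypothetical sketch of a diff over two virtual DOM trees represented as plain JavaScript objects. This is not React’s actual reconciliation algorithm (which also handles keys, component types, props, and much more); it only illustrates the core idea of comparing two trees and emitting a minimal list of changes:

```javascript
// A minimal, hypothetical virtual DOM diff. Nodes are plain objects of the
// shape { type, props, children } or plain strings for text nodes. This is
// NOT React's reconciliation algorithm, just an illustration of "compare
// two trees, emit the smallest list of patches".
function diff(oldNode, newNode, path = "root") {
  const patches = [];

  if (oldNode === undefined) {
    patches.push({ op: "create", path, node: newNode });
  } else if (newNode === undefined) {
    patches.push({ op: "remove", path });
  } else if (typeof oldNode === "string" || typeof newNode === "string") {
    if (oldNode !== newNode) {
      patches.push({ op: "replace", path, node: newNode });
    }
  } else if (oldNode.type !== newNode.type) {
    patches.push({ op: "replace", path, node: newNode });
  } else {
    // Same element type: recurse into the children.
    const length = Math.max(oldNode.children.length, newNode.children.length);
    for (let i = 0; i < length; i++) {
      patches.push(
        ...diff(oldNode.children[i], newNode.children[i], `${path}/${i}`)
      );
    }
  }

  return patches;
}

const oldTree = {
  type: "ul",
  props: {},
  children: [
    { type: "li", props: {}, children: ["Item 1"] },
    { type: "li", props: {}, children: ["Item 2"] },
  ],
};

const newTree = {
  type: "ul",
  props: {},
  children: [
    { type: "li", props: {}, children: ["Item 1"] },
    { type: "li", props: {}, children: ["Item 2 (edited)"] },
    { type: "li", props: {}, children: ["Item 3"] },
  ],
};

const patches = diff(oldTree, newTree);
console.log(patches.length); // 2
```

Even though the new tree differs in two places, only those two places produce patches: the edited text node and the newly created `li`. Everything else is left untouched, which is exactly the property that makes batched DOM updates cheap.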
By using the virtual DOM, React allows us to create more responsive and efficient user interfaces that are faster and more performant than traditional UIs that rely solely on the real DOM. This allows for a better user experience and can also improve the scalability and maintainability of web applications.
A better user experience in the context of the virtual DOM means that the web application feels more responsive and faster to the user. Not only that, but the use of the virtual DOM can also lead to a more consistent and reliable user experience. Since updates are applied in a controlled and optimized manner, there is less chance of errors or inconsistencies in the UI.
Finally, we can think of the virtual DOM as a technique used by React to reduce the performance cost of updating a web page. In this chapter, we will explore the differences between the virtual DOM and the real DOM, the pitfalls of the real DOM, and how the virtual DOM helps in creating better user interfaces. We will also dive into React’s implementation of the virtual DOM and the algorithms it uses for efficient updates.
The Document Object Model, or the “Real” DOM, is the programming interface for web pages. It represents the page so that programs can change the document structure, style, and content. The DOM represents the document as nodes and objects, which can be modified with a scripting language such as JavaScript.
When an HTML page is loaded into a web browser, it is parsed and converted into a tree of nodes and objects, which is the DOM. The DOM is a live representation of the web page, meaning that it is constantly being updated as users interact with the page.
Here is an example of the real DOM for a simple HTML page:
<!DOCTYPE html>
<html>
  <head>
    <title>Example Page</title>
  </head>
  <body>
    <h1>Welcome to my page!</h1>
    <p>This is an example paragraph.</p>
    <ul>
      <li>Item 1</li>
      <li>Item 2</li>
      <li>Item 3</li>
    </ul>
  </body>
</html>
In this example, the real DOM is represented by a tree-like structure that consists of nodes for each HTML element in the page. Here is what the tree structure would look like:
Document
└─ html
   ├─ head
   │  └─ title
   └─ body
      ├─ h1
      ├─ p
      └─ ul
         ├─ li
         ├─ li
         └─ li
Each node in the tree represents an HTML element, and it contains properties and methods that allow it to be manipulated through JavaScript. For example, we can use the document.querySelector() method to retrieve a specific node from the real DOM and modify its contents:
<!DOCTYPE html>
<html>
  <head>
    <title>Example Page</title>
  </head>
  <body>
    <h1 class="heading">Welcome to my page!</h1>
    <p>This is an example paragraph.</p>
    <ul>
      <li>Item 1</li>
      <li>Item 2</li>
      <li>Item 3</li>
    </ul>
    <script>
      // Retrieve the "heading" element from the real DOM
      const heading = document.querySelector(".heading");
      // Change the contents of the element
      heading.innerHTML = "Hello, world!";
    </script>
  </body>
</html>
In this example, we retrieve the h1 element with the class of "heading" using the document.querySelector() method. We then modify the contents of the element by setting its innerHTML property to "Hello, world!". This changes the text displayed on the page from "Welcome to my page!" to "Hello, world!".
That doesn’t seem too complicated, but there are a few things to note here. First, we are using the document.querySelector() method to retrieve the element from the real DOM. This method accepts a CSS selector as an argument and returns the first element that matches the selector. In this case, we are passing in the class selector .heading, which matches the h1 element with the class of "heading".
There’s a bit of a danger here. While document.querySelector is a powerful tool for selecting elements in the real DOM based on CSS selectors, it carries a potential performance cost: on large and complex documents it can be slow, because the method has to start at the top of the document and traverse downward until it finds the desired element, which can be a time-consuming process.
When we call document.querySelector() with a CSS selector, the browser has to search the entire document tree for matching elements. This means that the search can be slow, especially if the document is large and has a complex structure. In addition, the browser has to evaluate the selector itself, which can be a complex process depending on the complexity of the selector.
The performance of document.querySelector can also be impacted by the location of the element we are trying to find in the document tree. If the element is located deep within the tree, the browser has to traverse a large number of nodes before it can find the element, which can be a slow process.
Another limitation of document.querySelector is that it returns only the first element that matches the selector, even if multiple elements match. If we want all matching elements, we have to reach for document.querySelectorAll instead, which must traverse the entire document to collect every match, and this can further slow down our applications.
To mitigate the performance issues of document.querySelector, there are several strategies we can use. One approach is to optimize our selectors to make them more efficient. For example, we can use IDs instead of classes to narrow down the scope of the search, or use more specific selectors to avoid unnecessary node traversal.
Another strategy is to use other methods like document.getElementsByTagName or document.getElementById instead of document.querySelector, depending on our specific use case. These methods are more limited in their scope, but they can be faster and more efficient when we know exactly what type of element we are looking for.
Overall, the performance of document.querySelector can be impacted by a number of factors, including the size and complexity of the document, the location of the element we are trying to find, and the specificity of the selector we are using. By optimizing our selectors and using other methods when appropriate, we can help to mitigate these issues and improve the performance of our web applications.
Another factor that can impact the performance of document.querySelector is the specificity of the selector. More complex and specific selectors require more computation to evaluate and may require the browser to search a larger portion of the document tree.
In contrast, document.getElementById targets a single, specific element directly, so it is generally more efficient. In most cases it is faster than document.querySelector because it is a much simpler and more specific method.
document.getElementById is specifically designed to locate elements based on their unique id attribute. This means that it only needs to search a limited portion of the document tree, since id attributes are unique to a single element. As a result, the browser can find the desired element more quickly and with less computational overhead than with document.querySelector.
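Conceptually, you can picture the browser keeping a direct lookup table from id to element alongside the document tree. Real browser internals are far more sophisticated, but this simplified, hypothetical model shows why an id lookup avoids the traversal that a generic search requires:

```javascript
// A hypothetical model of why getElementById can be faster than a generic
// tree search. Real browsers are far more sophisticated; this only
// contrasts a map lookup with a full tree walk.
function makeNode(tag, id, children = []) {
  return { tag, id, children };
}

// Build an id index by walking the tree once, up front.
function indexById(node, map = new Map()) {
  if (node.id) map.set(node.id, node);
  node.children.forEach((child) => indexById(child, map));
  return map;
}

// A tree walk: what a generic selector search conceptually has to do
// on every call.
function findByIdWalking(node, id) {
  if (node.id === id) return node;
  for (const child of node.children) {
    const found = findByIdWalking(child, id);
    if (found) return found;
  }
  return null;
}

const tree = makeNode("html", null, [
  makeNode("body", null, [
    makeNode("div", "app", [makeNode("p", "greeting")]),
  ]),
]);

const idIndex = indexById(tree);

// Both approaches find the same node, but the map lookup does no
// traversal at all.
console.log(idIndex.get("greeting") === findByIdWalking(tree, "greeting")); // true
```

The walking search pays the traversal cost on every call, while the index pays it once and answers each lookup in constant time; that asymmetry is the intuition behind the performance difference described above.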
On the other hand, document.querySelector is a more general-purpose method that allows us to search for elements using complex CSS selectors. While this flexibility is useful in many cases, it also requires the browser to do more work to evaluate the selector and search the document tree for matching elements.
It’s worth noting that the performance difference between document.getElementById and document.querySelector may be negligible in small documents or when searching for elements in specific areas of the document tree. However, in larger and more complex documents, the difference can become more pronounced.
Overall, if we know the id attribute of the element we want to select, document.getElementById is the faster and more efficient method. However, if we need to search for elements using more complex selectors, document.querySelector is still a powerful and flexible tool that can help us accomplish our goals.
I’m sharing all of these nuanced details because I want you to understand the overall complexity of the DOM: working intelligently with the DOM is no small feat and, with React, we have a choice: do we navigate this minefield ourselves and occasionally step on landmines? Or do we use a tool that helps us navigate the DOM safely—the virtual DOM?
While we’ve discussed some small nuances in how we select elements here, we haven’t had an opportunity to dive deeper into the pitfalls of working with the DOM directly. Let’s do this quickly to fully understand the value that React’s virtual DOM provides.
The real DOM has several pitfalls that can make it difficult to build high-performance web applications. Some of these pitfalls include performance issues, cross-browser compatibility, and complexity.
One of the biggest issues with the real DOM is its performance. Whenever a change is made to the DOM, such as adding or removing an element, or changing the text or attributes of an element, the browser has to recalculate the layout and repaint the affected parts of the page. This can be a slow and resource-intensive process, especially for large and complex web pages.
For example, reading a DOM element’s offsetWidth property may seem like a simple operation, but it can actually trigger a costly recalculation of the layout by the browser. This is because offsetWidth is a computed property that depends on the layout of the element and its ancestors, which means that the browser needs to ensure that the layout information is up-to-date before it can return an accurate value.
Consider the following example, where we have a simple HTML document with a single div element:
<!DOCTYPE html>
<html>
  <head>
    <title>Reading offsetWidth example</title>
    <style>
      #my-div {
        width: 100px;
        height: 100px;
        background-color: red;
      }
    </style>
  </head>
  <body>
    <div id="my-div"></div>
    <script>
      var div = document.getElementById("my-div");
      console.log(div.offsetWidth);
    </script>
  </body>
</html>
When we load this document in a browser and open the developer console, we can see that the offsetWidth property of the div element is logged to the console. However, what we don’t see is the behind-the-scenes work that the browser has to do to compute the value of offsetWidth.
To understand this work, we can use the Performance panel in our developer tools to record a timeline of the browser’s activities as it loads and renders the page. When we do that, we can see that the browser is performing several layout and paint operations as it processes the document. In particular, we can see that there are two layout operations that correspond to the reading of offsetWidth in the script.
Each of these layout operations takes a significant amount of time to complete (in this case, about 2ms), even though they are just reading the value of a property. This is because the browser needs to ensure that the layout information is up-to-date before it can return an accurate value, which requires it to perform a full layout of the document.
In general, we should be careful when reading layout-dependent properties like offsetWidth, because they can cause unexpected performance problems. If we need to read the value of such properties multiple times, we should consider caching the value in a variable to avoid triggering unnecessary layout recalculations. Alternatively, we can use the requestAnimationFrame API to defer the reading of the property until the next animation frame, when the browser has already performed the necessary layout calculations.
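The caching advice above boils down to a simple pattern: perform all your reads first, then all your writes. Here is a self-contained sketch of that pattern. The “elements” are plain stub objects so the example runs anywhere; in a real page they would be DOM nodes, and interleaving offsetWidth reads with style writes would force repeated layout recalculations:

```javascript
// A sketch of the "batch reads, then batch writes" pattern. The elements
// here are plain stub objects standing in for DOM nodes, so the example is
// self-contained. In a real page, each read of offsetWidth after a style
// write would force the browser to recalculate layout (layout thrashing).
function resizeAllBad(elements) {
  // Anti-pattern: read, write, read, write...
  elements.forEach((el) => {
    const width = el.offsetWidth; // read (would force layout after a write)
    el.style.width = `${width / 2}px`; // write
  });
}

function resizeAllGood(elements) {
  // Better: perform every read first, then every write.
  const widths = elements.map((el) => el.offsetWidth); // all reads
  elements.forEach((el, i) => {
    el.style.width = `${widths[i] / 2}px`; // all writes
  });
}

// Stub "elements" standing in for real DOM nodes.
const elements = [
  { offsetWidth: 100, style: {} },
  { offsetWidth: 250, style: {} },
];

resizeAllGood(elements);
console.log(elements.map((el) => el.style.width)); // ["50px", "125px"]

// The bad version produces the same end result here, but on real DOM
// nodes it would trigger a layout recalculation per element.
const bad = [{ offsetWidth: 100, style: {} }];
resizeAllBad(bad);
```

Both functions compute the same widths; the difference only shows up as layout work inside a real browser, which is exactly why this class of bug is easy to miss.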
To understand more about accidental performance issues with the real DOM, let’s take a look at some examples. Consider the following HTML document:
<!DOCTYPE html>
<html>
  <head>
    <title>Example</title>
  </head>
  <body>
    <ul id="list">
      <li>Item 1</li>
      <li>Item 2</li>
      <li>Item 3</li>
    </ul>
  </body>
</html>
Suppose we want to add a new item to the list using JavaScript. We might write the following code:
const list = document.getElementById("list");
const newItem = document.createElement("li");
newItem.textContent = "Item 4";
list.appendChild(newItem);
Notice we’re using getElementById instead of querySelector here because, as we saw earlier, looking up an element by its unique id is simpler and generally faster than evaluating a CSS selector. Let’s keep going.
This code selects the ul element with the ID "list", creates a new li element, sets its text content to "Item 4", and appends it to the list. When we run this code, the browser has to recalculate the layout and repaint the affected parts of the page to display the new item.
This process can be slow and resource-intensive, especially for larger lists. For example, suppose we have a list with 1000 items, and we want to add a new item to the end of the list. We might write the following code:
const list = document.getElementById("list");
const newItem = document.createElement("li");
newItem.textContent = "Item 1001";
list.appendChild(newItem);
When we run this code, the browser has to recalculate the layout and repaint the entire list, even though only one item has been added. This can take a significant amount of time and resources, especially on slower devices or with larger lists.
To further illustrate this issue, consider the following example:
<!DOCTYPE html>
<html>
  <head>
    <title>Example</title>
    <style>
      #list li {
        background-color: #f5f5f5;
      }
      .highlight {
        background-color: yellow;
      }
    </style>
  </head>
  <body>
    <ul id="list">
      <li>Item 1</li>
      <li>Item 2</li>
      <li>Item 3</li>
    </ul>
    <button onclick="highlight()">Highlight Item 2</button>
    <script>
      function highlight() {
        const item = document.querySelector("#list li:nth-child(2)");
        item.classList.add("highlight");
      }
    </script>
  </body>
</html>
In this example, we have a list with three items and a button that highlights the second item when clicked. When the button is clicked, the browser has to recalculate the layout and repaint the entire list, even though only one item has changed. This can cause a noticeable delay or flicker in the UI, which can be frustrating for users.
Overall, the performance issues of the real DOM can be a significant challenge for us, especially when dealing with large and complex web pages. While there are techniques for mitigating these issues, such as optimizing selectors, using event delegation, or using CSS animations, they can be complex and difficult to implement.
As a result, many of us have turned to the virtual DOM as a solution to these issues. The virtual DOM allows us to create UIs that are more efficient and performant by abstracting away the complexities of the real DOM and providing a more lightweight way of representing the UI.
But... is it really necessary to save a few milliseconds? Well, CPU/processing performance is a critical factor that can greatly impact the success of an application. In today’s digital age, where users expect fast and responsive websites, it’s essential for us web developers to prioritize CPU efficiency to ensure that our applications run smoothly and responsively.
Direct DOM manipulation that triggers layout recalculation (called reflows) and repaints can lead to increased CPU usage and processing times, which can cause delays and even crashes for users. This can be particularly problematic for users on low-powered devices, such as smartphones or tablets, which may have limited processing power and memory. In many parts of the world, users may be accessing our web apps on older or less capable devices, which can further compound the problem.
By prioritizing CPU efficiency, we can create applications that are accessible to users on a wide range of devices, regardless of their processing power or memory. This can lead to increased engagement, higher conversion rates, and ultimately, a more successful online presence.
React’s virtual DOM has emerged as a powerful tool for building CPU-efficient web applications; its efficient rendering algorithms help minimize processing times and improve overall performance.
Another issue with the real DOM is cross-browser compatibility. Different browsers implement the DOM API differently, which can lead to inconsistencies and bugs in web applications. This was far more common around the time React was released, however, and is far less common now. Still, this can and did make it difficult for developers to create web applications that work seamlessly across different browsers and platforms.
One of the primary issues with cross-browser compatibility is that certain DOM elements and attributes may not be supported by all browsers. For example, older versions of Internet Explorer do not support the HTML5 <canvas> element, while other browsers may not support certain CSS properties or JavaScript methods. As a result, we must spend additional time and effort implementing workarounds and fallbacks to ensure that our applications function correctly on all target platforms.
Consider this example, where we want to draw a circle on a <canvas> element:
<!-- Example of an HTML5 <canvas> element -->
<canvas id="myCanvas"></canvas>
// Example of checking for support for <canvas> in JavaScript
var canvas = document.getElementById("myCanvas");
if (canvas.getContext) {
  var ctx = canvas.getContext("2d");
  // <canvas> is supported
} else {
  // <canvas> is not supported
}
We perform a feature detection check to see if the browser supports the <canvas> element. If it does, we can draw a circle on the canvas using the arc method. If it doesn’t, we can display a message to the user indicating that their browser does not support the <canvas> element. Dealing with these kinds of things can be a pain, and it’s easy to forget to implement these checks. This can lead to bugs and inconsistencies in web applications, which can be frustrating for users.
Furthermore, different browsers may interpret CSS and JavaScript differently, leading to inconsistencies in layout and behavior. For example, different browsers may apply different default styles to HTML elements, resulting in variations in appearance between browsers. Additionally, JavaScript methods may behave differently or have different performance characteristics on different browsers, leading to varying levels of responsiveness and interactivity.
While not directly related to React, we often have to deal with these issues when using the DOM API. For example, consider the following example, where we want to create a button element with different styles on different browsers:
<!-- Example of an HTML button element with different styles on different browsers -->
<button id="myButton">Click me</button>
/* Example of applying different styles to a button element on different browsers */
#myButton {
  color: blue;
  background-color: white;
  border: 1px solid black;
}

/* Fix for Safari */
@media screen and (-webkit-min-device-pixel-ratio: 0) {
  #myButton {
    padding: 1px 6px;
  }
}

/* Fix for Internet Explorer */
@media screen and (min-width: 0\0) {
  #myButton {
    padding: 2px 5px;
  }
}
To address these issues, we can take several steps to ensure cross-browser compatibility in our web applications. One approach is to use browser detection and feature detection to determine which features and capabilities are available in the user’s browser and adjust the code accordingly.
// Example of detecting the user's browser in JavaScript
if (navigator.userAgent.indexOf("Firefox") !== -1) {
  // Code for Firefox
} else if (navigator.userAgent.indexOf("Chrome") !== -1) {
  // Code for Chrome
} else if (navigator.userAgent.indexOf("Safari") !== -1) {
  // Code for Safari
} else if (navigator.userAgent.indexOf("Trident") !== -1) {
  // Code for Internet Explorer
}
Additionally, we can use polyfills and shims to provide fallback functionality for unsupported features. Polyfills are JavaScript code that emulate the behavior of modern web standards on older browsers, while shims are small pieces of code that enable features not supported by a particular browser.
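The general shape of a polyfill is worth seeing once. Here is a minimal example using String.prototype.includes, a real method that older browsers lacked; the exact method doesn’t matter, only the pattern of feature-detecting first and defining the method only when the browser doesn’t already provide it:

```javascript
// A minimal polyfill, using String.prototype.includes as an example of the
// general shape: feature-detect first, and only define the method if the
// environment doesn't already provide it. On modern engines this branch is
// a no-op, because the native method already exists.
if (!String.prototype.includes) {
  String.prototype.includes = function (search, start) {
    if (typeof start !== "number") {
      start = 0;
    }
    return this.indexOf(search, start) !== -1;
  };
}

// Whether native or polyfilled, the call site looks exactly the same:
console.log("Hello, world!".includes("world")); // true
```

Because the polyfill only fills in a missing method, application code can be written once against the modern API and still run on older browsers.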
The virtual DOM provides a consistent, abstracted interface that behaves the same way across all browsers and platforms, eliminating the need for browser-specific workarounds like polyfills, at least where the DOM is concerned. When writing React code, you will probably still have your fair share of browser quirks and polyfills to deal with around the other APIs your application uses, but because React abstracts away the DOM, we don’t have to worry about that particular set of APIs: instead, we get to lean in to React’s API and focus on the application logic we’re trying to build.
The real DOM can also be complex and difficult to work with. Modifying the DOM directly with JavaScript can be a verbose and error-prone process, especially for complex web applications. This can make it difficult to maintain and update web applications over time.
Imagine we have a web application that displays a list of blog posts. Each blog post is represented as a div element containing a title and some text. We want to add a button to each blog post that allows the user to expand or collapse the text. Here’s how we might implement this with the real DOM:
<div id="blog-posts">
  <div class="blog-post">
    <h2 class="blog-post-title">Post 1</h2>
    <p class="blog-post-text">
      Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus at
      nisi sed enim scelerisque auctor.
    </p>
    <button class="expand-button">Expand</button>
  </div>
  <div class="blog-post">
    <h2 class="blog-post-title">Post 2</h2>
    <p class="blog-post-text">
      Morbi at neque ut metus malesuada tempus. Duis vel volutpat libero,
      id congue mauris. Donec sit amet ornare eros.
    </p>
    <button class="expand-button">Expand</button>
  </div>
  <!-- More blog posts... -->
</div>
const blogPosts = document.querySelectorAll(".blog-post");

blogPosts.forEach((blogPost) => {
  const expandButton = blogPost.querySelector(".expand-button");
  const blogPostText = blogPost.querySelector(".blog-post-text");
  let isExpanded = false;

  expandButton.addEventListener("click", () => {
    if (isExpanded) {
      blogPostText.style.display = "none";
      expandButton.textContent = "Expand";
      isExpanded = false;
    } else {
      blogPostText.style.display = "block";
      expandButton.textContent = "Collapse";
      isExpanded = true;
    }
  });
});
Here, we’re using the querySelectorAll method to select all the blog posts on the page. Then, we’re using the querySelector method to select the expand button and blog post text within each blog post. We’re attaching a click event listener to the expand button that toggles the display of the blog post text and changes the text of the button between “Expand” and “Collapse”.
While this code works, it can be quite verbose and error-prone. We have to manually manipulate the DOM to show or hide the text, and we have to keep track of the state of each blog post’s expansion manually using the isExpanded variable.
Moreover, the state here is shared between the JavaScript and the HTML. This means that if we want to change the initial state of the blog posts, we have to change both the JavaScript and the HTML. If, for some reason, the HTML is out of sync with the JavaScript, we will have problems.
If we wanted to consolidate the blog’s state into just JavaScript, we’d update the HTML to look like this:
<div id="blog-posts"></div>
const blogPostsContainer = document.getElementById("blog-posts");

const blogPostsData = [
  {
    title: "Post 1",
    text: "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus at nisi sed enim scelerisque auctor.",
  },
  {
    title: "Post 2",
    text: "Morbi at neque ut metus malesuada tempus. Duis vel volutpat libero, id congue mauris. Donec sit amet ornare eros.",
  },
  // More blog posts...
];

blogPostsData.forEach((blogPostData) => {
  const blogPost = document.createElement("div");
  blogPost.className = "blog-post";

  const blogPostTitle = document.createElement("h2");
  blogPostTitle.className = "blog-post-title";
  blogPostTitle.textContent = blogPostData.title;
  blogPost.appendChild(blogPostTitle);

  const blogPostText = document.createElement("p");
  blogPostText.className = "blog-post-text";
  blogPostText.textContent = blogPostData.text;
  blogPost.appendChild(blogPostText);

  const expandButton = document.createElement("button");
  expandButton.className = "expand-button";
  expandButton.textContent = "Expand";
  blogPost.appendChild(expandButton);

  blogPostsContainer.appendChild(blogPost);

  let isExpanded = false;
  expandButton.addEventListener("click", () => {
    if (isExpanded) {
      blogPostText.style.display = "none";
      expandButton.textContent = "Expand";
      isExpanded = false;
    } else {
      blogPostText.style.display = "block";
      expandButton.textContent = "Collapse";
      isExpanded = true;
    }
  });
});
In this example, we loop over an array containing a bunch of blog posts and for each object, we create a new div for the blog post, and append it as a child to the container div.
Inside each blog post div, we create the title and text elements using document.createElement, set their classes and text content appropriately, and append them as children to the blog post div. We also create the expand button element, set its class and text content, and append it as a child to the blog post div.
Finally, we add a click event listener to each expand button that toggles the display of the blog post text and changes the button text between “Expand” and “Collapse” accordingly. The state for whether the text is expanded or not is kept in a state variable isExpanded.
Take note that we have not even considered cleaning up the event listeners we’ve attached to the buttons! That’ll be a whole other can of worms!
This example demonstrates how, by keeping all data and state in JavaScript, we can avoid the pitfalls of shared state between HTML and JS. This is actually very close to React’s approach of owning the entire DOM and all its state and protecting us further. Let’s take a look at how we might implement the same functionality using React:
import { useState } from "react";

const blogPostsData = [
  {
    title: "Post 1",
    text: "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus at nisi sed enim scelerisque auctor.",
  },
  {
    title: "Post 2",
    text: "Morbi at neque ut metus malesuada tempus. Duis vel volutpat libero, id congue mauris. Donec sit amet ornare eros.",
  },
  // More blog posts...
];

function BlogPost({ title, text, isExpanded }) {
  return (
    <div className="blog-post" data-title={title}>
      <h2 className="blog-post-title">{title}</h2>
      {isExpanded ? <p className="blog-post-text">{text}</p> : null}
      <button className="expand-button">
        {isExpanded ? "Collapse" : "Expand"}
      </button>
    </div>
  );
}

function BlogPosts() {
  const [expandedPosts, setExpandedPosts] = useState([]);

  const expand = (e) => {
    const target = e.target;
    if (target.className !== "expand-button") {
      return;
    }
    const blogPost = target.parentElement;
    const title = blogPost.dataset.title;
    if (expandedPosts.includes(title)) {
      setExpandedPosts(expandedPosts.filter((t) => t !== title));
    } else {
      setExpandedPosts([...expandedPosts, title]);
    }
  };

  return (
    <div className="blog-posts" onClick={expand}>
      {blogPostsData.map((post) => (
        <BlogPost
          key={post.title}
          isExpanded={expandedPosts.includes(post.title)}
          title={post.title}
          text={post.text}
        />
      ))}
      {/* More blog posts */}
    </div>
  );
}
The React example above is better than dealing with the DOM directly for several reasons.
First, it abstracts away the complexity of directly manipulating the DOM, making it easier for us to focus on the application’s logic instead of the presentation. We only need to write components and define their behavior, and React takes care of updating the necessary parts of the DOM when the state of the component changes.
Second, the React virtual DOM allows for efficient and performant updates to the UI. When a component’s state changes, React compares the new virtual DOM with the previous one and calculates the minimal number of changes needed to update the actual DOM. This means that updates can be made quickly and without the need to recalculate the entire layout of the page.
Third, the use of JSX allows for a more declarative and readable syntax for creating UI components. Instead of creating elements and appending them to the DOM, we can write code that resembles HTML and let React handle the rendering.
Finally, we make use of event delegation to avoid having to add event listeners to each expand button. Instead, we add a single event listener to the parent element and use event bubbling to handle the click event at a higher level: we attach one event listener instead of n event listeners, where n is the number of expand buttons. While we can do this with the real DOM too, React’s JSX brings a certain clarity that naturally leads us to think about these things and write them in an approachable way. This may be subjective, so take it with a grain of salt.
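The delegation idea itself is framework-independent. Here is a small, self-contained sketch that uses stub event objects (plain objects standing in for real DOM events) to show a single parent-level handler filtering clicks that bubbled up from many children:

```javascript
// Event delegation, sketched with stub event objects so it runs anywhere.
// In a real page, `event.target` would be the DOM node the click bubbled
// up from; here we fake it with plain objects.
function createDelegatedHandler(targetClass, handle) {
  return (event) => {
    // Only react to clicks that originated on matching child elements.
    if (event.target.className === targetClass) {
      handle(event.target);
    }
  };
}

const clicked = [];
const onContainerClick = createDelegatedHandler("expand-button", (el) =>
  clicked.push(el.id)
);

// Simulate two bubbled clicks reaching the single container listener:
// one from an expand button, one from an unrelated title element.
onContainerClick({ target: { className: "expand-button", id: "btn-1" } });
onContainerClick({ target: { className: "blog-post-title", id: "title-1" } });

console.log(clicked); // ["btn-1"]
```

One listener on the container handles every button, present and future, which is why delegation scales so well when list items are added and removed dynamically.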
In summary, the complexity of the real DOM can make it difficult to maintain and update web applications over time, especially for complex applications with a lot of interactivity. React’s virtual DOM makes this process much simpler and more intuitive by allowing developers to work with components and by abstracting away the complexities of the real DOM.
We just discussed event handling. Let’s pull on this thread a little more to fully appreciate the complexity of working with the DOM. Imagine we have a list and we want to add an event listener to each new <li> element that we create so that when a user clicks on it, an alert is shown with the text of the item. With the real DOM, this would require even more code and potentially nested loops:
const items = ["apple", "banana", "orange"];
const list = document.createElement("ul");
document.body.appendChild(list);

for (var i = 0; i < items.length; i++) {
  const listItem = document.createElement("li");
  const itemText = document.createTextNode(items[i]);
  listItem.appendChild(itemText);
  list.appendChild(listItem);

  listItem.addEventListener("click", () => {
    alert(items[i]);
  });
}
In this example, we’ve added an event listener to each <li> element that displays the text of the item when clicked. However, because var-declared variables are function-scoped rather than block-scoped, every click handler closes over the same i. By the time any handler runs, i is the length of items instead of the index of the item that was clicked, so items[i] is undefined. This is a common mistake that can be difficult to debug. (Declaring the loop variable with let instead gives each iteration its own binding and avoids the bug, but it’s an easy detail to get wrong.)
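This pitfall, and its fix, can be demonstrated without a browser at all. In this standalone sketch we collect the handlers into arrays instead of attaching them to DOM nodes, so the closure behavior is easy to inspect:

```javascript
// The var/let closure pitfall, demonstrated without a browser. We collect
// the "click handlers" into arrays instead of attaching them to DOM nodes.
const items = ["apple", "banana", "orange"];

// Buggy version: `var` is function-scoped, so every handler shares the
// same `i`, which is 3 once the loop has finished.
const buggyHandlers = [];
for (var i = 0; i < items.length; i++) {
  buggyHandlers.push(() => items[i]);
}

// Fixed version: `let` gives each iteration its own binding of `j`.
const fixedHandlers = [];
for (let j = 0; j < items.length; j++) {
  fixedHandlers.push(() => items[j]);
}

console.log(buggyHandlers[0]()); // undefined (items[3] does not exist)
console.log(fixedHandlers[0]()); // "apple"
```

Every handler in the buggy version reads items[3], no matter which “item” it was created for, while each handler in the fixed version remembers its own index.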
Now, let’s contrast this with how React’s virtual DOM makes this process much simpler. In React, we can create a component for the list that takes in the items as a prop and renders the list items as components:
const items = ["apple", "banana", "orange"];

function ListItem(props) {
  function handleClick() {
    alert(props.item);
  }

  return <li onClick={handleClick}>{props.item}</li>;
}

function List(props) {
  const listItems = props.items.map((item) => (
    <ListItem key={item} item={item} />
  ));

  return <ul>{listItems}</ul>;
}
In this example, we’ve created two components: ListItem and List. ListItem is responsible for rendering a single item and attaching an event listener to it. List takes in the items as a prop, maps over them to create an array of ListItem components, and renders them in an unordered list.
This approach is much simpler and more intuitive than manipulating the real DOM directly. Because we’re working with components, we can encapsulate the logic for each item in its own component and reuse it throughout our application. Additionally, because we’re not manipulating the real DOM directly, we don’t have to worry about inconsistencies between different browsers and platforms.
React, using the virtual DOM, reduces complexity by having us reason about smaller, individual, and more isolated components instead of larger and more complex documents. Though it was eagerly criticized in its early days for breaking separation of concerns, React actually supports separation of concerns this way, through components.
We’ve looked in detail at the real DOM and its pitfalls, as well as seen some of the virtual DOM’s benefits by contrast. Let’s zero in and laser focus on this, exploring more concretely how the virtual DOM works in React.
The virtual DOM is a technique that helps to mitigate the pitfalls of the real DOM. By creating a virtual representation of the DOM in memory, changes can be made to the virtual representation without directly modifying the real DOM. This allows the framework or library to update the real DOM in a more efficient and performant way, minimizing the layout recalculation and repainting the browser has to perform.
The virtual DOM also helps to improve cross-browser compatibility by providing a consistent API that abstracts away the differences between different browser implementations of the real DOM. This makes it easier for developers to create web applications that work seamlessly across different browsers and platforms.
React uses the virtual DOM to build user interfaces. In this section, we will explore how React’s implementation of the virtual DOM works.
In React, user interfaces are represented as a tree of React Elements. React Elements are lightweight representations of a component or HTML element. They are created using the React.createElement function and can be nested to create complex user interfaces.
Here is an example of a React Element:
const element = React.createElement(
  "div",
  { className: "my-class" },
  "Hello, world!"
);
This creates a React Element that represents a <div> element with a className of my-class and the text content of Hello, world!.
From here, we can see the actual created element if we console.log(element). It looks like this:
{
  $$typeof: Symbol(react.element),
  type: "div",
  key: null,
  ref: null,
  props: {
    className: "my-class",
    children: "Hello, world!"
  },
  _owner: null,
  _store: {}
}
This is a representation of a React element. React elements are the smallest building blocks of a React application, and they describe what should appear on the screen. Each element is a plain JavaScript object that describes the component it represents, along with any relevant props or attributes.
The React element shown in the code block is represented as an object with several properties:
$$typeof: This is a symbol used by React to ensure that an object is a valid React element. In this case, it is Symbol(react.element). $$typeof can have other values depending on the type of the element:
Symbol(react.fragment): When the element represents a React fragment.
Symbol(react.portal): When the element represents a React portal.
Symbol(react.profiler): When the element represents a React profiler.
Symbol(react.provider): When the element represents a React context provider.

We’ll cover more of these later in the book.
type: This property represents the type of the component that the element represents. In this case, it is "div", indicating that this is a <div> DOM element. The type property of a React element can be either a string or a function. If it is a string, it represents the HTML tag name, like "div", "span", "button", etc. When it is a function, it represents a custom React component.
Here is an example of an element with a custom component type:
const MyComponent = (props) => {
  return <div>{props.text}</div>;
};

const myElement = <MyComponent text="Hello, world!" />;
In this case, the type property of myElement is MyComponent, which is a function that defines a custom component. The value of myElement as a React element object would be:
{
  $$typeof: Symbol(react.element),
  type: MyComponent,
  key: null,
  ref: null,
  props: {
    text: "Hello, world!"
  },
  _owner: null,
  _store: {}
}
Note that type is set to the MyComponent function, which is the type of the component that the element represents, and props contains the props passed to the component, in this case { text: "Hello, world!" }.
When React encounters an element with a function type, it will invoke that function with the element’s props, and the return value will be used as the element’s children; in this case, that’s a div. This is how custom React components are rendered: React keeps resolving elements deeper and deeper until scalar values are reached, which are then rendered as text nodes, or until null or undefined is reached, in which case nothing is rendered.
Here is an example of an element with a string type:
const myElement = <div>Hello, world!</div>;
In this case, the type property of myElement is "div", which is a string that represents an HTML tag name. When React encounters an element with a string type, it will create a corresponding HTML element with that tag name and render its children within that element.
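To make this recursive resolution concrete, here is a simplified sketch in plain JavaScript. This is not React’s actual implementation: `resolve` and the hand-written element objects are hypothetical stand-ins that mimic the element shape shown earlier.

```javascript
// A simplified sketch (not React's real renderer) of element resolution:
// function types are invoked with their props until only host (string-type)
// elements and scalar children remain.
function resolve(element) {
  // Scalars (strings, numbers) render as text nodes; null/undefined render nothing.
  if (element == null || typeof element !== "object") {
    return element;
  }
  // A function type is a custom component: call it and resolve its output.
  if (typeof element.type === "function") {
    return resolve(element.type(element.props));
  }
  // A string type is a host element: resolve its children recursively.
  const children = [].concat(element.props.children ?? []).map(resolve);
  return { type: element.type, props: { ...element.props, children } };
}

// Hypothetical component and element objects shaped like React's output:
const MyComponent = (props) => ({
  type: "div",
  props: { children: props.text },
});

const resolved = resolve({ type: MyComponent, props: { text: "Hello, world!" } });
console.log(resolved); // → { type: "div", props: { children: ["Hello, world!"] } }
```

The key idea is the recursion: a custom component never reaches the DOM itself; only the host elements and text it eventually resolves to do.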
key: This property is used by React to identify an element among its siblings, most importantly in lists. If an element keeps the same key across renders, React treats it as the same element and updates it in place rather than re-creating it. In this case, the key is null.
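The role of keys can be illustrated with a toy sketch (not React’s real algorithm): children whose key survives across renders are updated in place, while the rest are mounted or unmounted. `diffChildren` is a hypothetical helper for illustration only.

```javascript
// Toy key-based child matching: shared keys → update in place,
// new keys → mount, missing keys → unmount.
function diffChildren(oldChildren, newChildren) {
  const oldByKey = new Map(oldChildren.map((child) => [child.key, child]));
  const ops = [];
  for (const child of newChildren) {
    if (oldByKey.has(child.key)) {
      ops.push({ op: "update", key: child.key }); // same key: reuse this element
      oldByKey.delete(child.key);
    } else {
      ops.push({ op: "mount", key: child.key }); // new key: create a new element
    }
  }
  for (const key of oldByKey.keys()) {
    ops.push({ op: "unmount", key }); // leftover old keys: remove them
  }
  return ops;
}

const before = [{ key: "a" }, { key: "b" }, { key: "c" }];
const after = [{ key: "a" }, { key: "c" }, { key: "d" }];
console.log(diffChildren(before, after));
// → [ { op: "update", key: "a" }, { op: "update", key: "c" },
//     { op: "mount", key: "d" }, { op: "unmount", key: "b" } ]
```

Without stable keys, a reconciler would have to fall back to positional matching, needlessly re-creating elements that merely moved.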
ref: This property is used to create a reference to the underlying DOM node. It is generally used in cases where direct manipulation of the DOM is necessary. In this case, the ref is null.
props: This property is an object that contains all of the attributes and props that were passed to the component. In this case, it has two properties: className and children. className specifies the class name of the element, and children contains the content of the element.
_owner: This property is used internally by React to track the component that created this element. This information is used to determine which component should be responsible for updating the element when its props or state change.
Here is an example that demonstrates how the _owner property is used:
function Parent() {
  return <Child />;
}

function Child() {
  const element = <div>Hello, world!</div>;
  console.log(element._owner); // Parent
  return element;
}
In this example, the Child component creates a React element representing a <div> element with the text "Hello, world!". The _owner property of this element is set to the Parent component, which is the component that created the Child component.
React uses this information to determine which component should be responsible for updating the element when its props or state change. In this case, if the Parent component updates its state or receives new props, React will update the Child component and its associated element.
It’s important to note that the _owner property is an internal implementation detail of React and should not be relied upon in application code.
_store: The _store property of a React element object is an object that is used internally by React to store additional data about the element. The specific properties and values stored in _store are not part of the public API and should not be accessed directly.
Here’s an example of what the _store property might look like:
{
  validation: null,
  key: null,
  originalProps: { className: 'my-class', children: 'Hello, world!' },
  props: { className: 'my-class', children: 'Hello, world!' },
  _self: null,
  _source: { fileName: 'MyComponent.js', lineNumber: 10 },
  _owner: {
    _currentElement: [Circular],
    _debugID: 0,
    stateNode: [MyComponent]
  },
  _isStatic: false,
  _warnedAboutRefsInRender: false
}
As you can see, _store includes various properties such as validation, key, originalProps, props, _self, _source, _owner, _isStatic, and _warnedAboutRefsInRender. These properties are used by React internally to track various aspects of the element’s state and context.
For example, _source is used to track the file name and line number where the element was created, which can be helpful for debugging. _owner is used to track the component that created the element, as discussed earlier. And props and originalProps are used to store the props passed to the component.
Again, it’s important to note that _store is an internal implementation detail of React and should not be accessed directly in application code.
React’s createElement function and the DOM API’s createElement method are similar in that they both create new elements. React.createElement is a function provided by React that creates a new virtual element in memory, whereas document.createElement is a method provided by the DOM API that creates a new element, also in memory, until it is attached to the DOM with parent.appendChild. Both functions take a tag name as their first argument (React.createElement also accepts a component type there), while React.createElement takes additional arguments to specify props and children.
For example, let’s compare how we would create a simple <div> element using both methods:
// Using React's createElement
const divElement = React.createElement(
  "div",
  { className: "my-class" },
  "Hello, World!"
);

// Using the DOM API's createElement
const divElement = document.createElement("div");
divElement.className = "my-class";
divElement.textContent = "Hello, World!";
In both cases, we create a new div element with a class of 'my-class' and some text content. However, the React version returns a virtual element that can be used in a React component, whereas the DOM API version creates a real DOM element that can be added to the document directly, for example with document.body.appendChild.
The virtual DOM in React is similar in concept to the real DOM in that both represent a tree-like structure of elements. When a React component is rendered, React creates a new virtual DOM tree, compares it to the previous virtual DOM tree, and calculates the minimum number of changes needed to update the real DOM to match the new virtual DOM. This is known as the “reconciliation” process.
Here’s an example of how this might work in a React component:
function App() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <h1>Count: {count}</h1>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}
For clarity, this component can also be expressed like so:
function App() {
  const [count, setCount] = React.useState(0);
  return React.createElement(
    "div",
    null,
    React.createElement("h1", null, "Count: ", count),
    React.createElement(
      "button",
      { onClick: () => setCount(count + 1) },
      "Increment"
    )
  );
}
In the createElement calls, the first argument is the name of the HTML tag or React component, the second argument is an object of properties (or null if no properties are needed), and any additional arguments represent child elements.
When the component is first rendered, React creates a virtual DOM tree that looks like this:
div
├─ h1
│   └─ "Count: 0"
└─ button
    └─ "Increment"
When the button is clicked, React creates a new virtual DOM tree that looks like this:
div
├─ h1
│   └─ "Count: 1"
└─ button
    └─ "Increment"
React then calculates that only the text content of the h1 element needs to be updated, and updates only that part of the real DOM.
The use of a virtual DOM in React allows for efficient updates to the real DOM, as well as allowing React to work seamlessly with other libraries that also manipulate the DOM directly.
When a React component’s state or props change, React creates a new tree of React Elements that represents the updated user interface. This new tree is then compared to the previous tree to determine the minimal set of changes required to update the real DOM using a diffing algorithm.
This algorithm compares the new tree of React Elements with the previous tree and identifies the differences between the two. It is a recursive comparison. If a node has changed, React updates the corresponding node in the real DOM. If a node has been added or removed, React adds or removes the corresponding node in the real DOM.
Diffing involves comparing the new tree with the old one node-by-node to find out which parts of the tree have changed.
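The recursive, node-by-node comparison can be sketched in plain JavaScript. This is a simplified mental model, not React’s production diff; the `diff` function and its patch format are hypothetical.

```javascript
// A simplified recursive diff: changed nodes produce "replace" patches,
// added/removed nodes produce "add"/"remove" patches, and matching nodes
// recurse into their children.
function diff(oldNode, newNode, patches = [], path = "root") {
  if (oldNode === undefined) {
    patches.push({ op: "add", path, node: newNode });
  } else if (newNode === undefined) {
    patches.push({ op: "remove", path });
  } else if (typeof oldNode === "string" || typeof newNode === "string") {
    // Text nodes: patch only if the content actually changed.
    if (oldNode !== newNode) patches.push({ op: "replace", path, node: newNode });
  } else if (oldNode.type !== newNode.type) {
    // Different element types: replace the whole subtree.
    patches.push({ op: "replace", path, node: newNode });
  } else {
    // Same type: recurse into children, position by position.
    const length = Math.max(oldNode.children.length, newNode.children.length);
    for (let i = 0; i < length; i++) {
      diff(oldNode.children[i], newNode.children[i], patches, `${path}.${i}`);
    }
  }
  return patches;
}

// The counter example from above: only the h1 text changes.
const previous = {
  type: "div",
  children: [
    { type: "h1", children: ["Count: 0"] },
    { type: "button", children: ["Increment"] },
  ],
};
const next = {
  type: "div",
  children: [
    { type: "h1", children: ["Count: 1"] },
    { type: "button", children: ["Increment"] },
  ],
};
console.log(diff(previous, next));
// → [ { op: "replace", path: "root.0.0", node: "Count: 1" } ]
```

Note that the entire tree is visited but only one patch is produced: the expensive part (touching the real DOM) is limited to the single changed text node.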
React’s diffing algorithm is highly optimized and aims to minimize the number of changes that need to be made to the real DOM. This efficiency allows React to update the real DOM quickly and with minimal changes, which helps to improve the performance of React applications and makes it easier to build complex, dynamic user interfaces.
To further improve performance, React batches updates to the real DOM. This means that multiple updates are combined into a single update, reducing the number of times the real DOM has to be updated.
When React is rendering a component, it doesn’t immediately update the real DOM with every change. Instead, it batches the updates and performs them all at once, reducing the number of times it has to update the real DOM, further improving performance.
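As a rough mental model (this is a toy sketch, not React’s internals), batching can be pictured as a queue of state updates that only enqueue during an event handler, with a single flush and a single render afterward. All names here (`setState`, `flush`, `runEventHandler`) are hypothetical.

```javascript
// Toy batching model: setState calls during a handler only enqueue updates;
// one flush after the handler applies them all and renders once.
let queue = [];
let state = 0;
let renderCount = 0;

function setState(updater) {
  queue.push(updater); // no render here; just enqueue the update
}

function flush() {
  for (const update of queue) state = update(state);
  queue = [];
  renderCount++; // one render for the whole batch
}

function runEventHandler(handler) {
  handler();
  flush(); // the model flushes once, after the handler completes
}

runEventHandler(() => {
  setState((n) => n + 1);
  setState((n) => n + 1);
  setState((n) => n + 1);
});

console.log(state, renderCount); // 3 1 — three updates, a single render
```

Using updater functions (`n => n + 1`) rather than a captured value is what lets all three queued updates compound to 3 in this sketch.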
React uses a technique called “transaction” to batch updates. The concept of a transaction is similar to a database transaction. It groups related operations together and executes them as a single unit of work. React uses a similar approach to batch updates to the real DOM.
For example, suppose we have a component that updates its state multiple times in quick succession:
function Example() {
  const [count, setCount] = useState(0);

  const handleClick = () => {
    setCount(count + 1);
    setCount(count + 1);
    setCount(count + 1);
  };

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={handleClick}>Increment</button>
    </div>
  );
}
In this example, the handleClick function calls setCount three times in quick succession. Without batching, React would update the real DOM three separate times, even though the value of count only changed once. This would be wasteful and slow.
However, because React uses a transaction to batch updates, it only updates the real DOM once, after all three setCount calls have completed. This results in better performance and a smoother user experience.
The transaction API that React uses is not exposed to developers, but we can see its effects in action. To demonstrate, we can modify the previous example to log the current transaction state:
function Example() {
  const [count, setCount] = useState(0);

  const handleClick = () => {
    console.log(
      "before",
      React.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED
        .ReactCurrentOwner.currentDispatcher
    );
    setCount(count + 1);
    setCount(count + 1);
    setCount(count + 1);
    console.log(
      "after",
      React.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED
        .ReactCurrentOwner.currentDispatcher
    );
  };

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={handleClick}>Increment</button>
    </div>
  );
}
In this example, we’re logging the current transaction state before and after the setCount calls. We’re using a non-public API to access the current transaction state, so don’t use this technique in production code.
If we run this example, we’ll see that the before and after logs are the same:
before Object { ... }
after Object { ... }
This indicates that React is batching the updates together into a single transaction.
React’s batching behavior can sometimes be a source of confusion for developers, especially when dealing with event handlers. For example, suppose we have a component that updates its state when a button is clicked:
function Example() {
  const [count, setCount] = useState(0);

  const handleClick = () => {
    setCount(count + 1);
    console.log("count", count);
  };

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={handleClick}>Increment</button>
    </div>
  );
}
In this example, we’re logging the current value of count after the setCount call. However, because React batches the updates, the value of count logged to the console will always be one less than the current value of count.
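This “one less” behavior can be reproduced in plain JavaScript, without React, because it comes down to closures: the handler captures the `count` value from the render in which it was created. The following sketch is a mental model only; `renderExample`, `setCount`, and `logged` are hypothetical stand-ins for a render pass, the state setter, and the console output.

```javascript
// A closure captures `count` at the time the handler is created (like a
// render's state), so logging after the queued update still shows the
// old value.
let logged;

function renderExample(count, setCount) {
  // This closure captures the current `count` for this "render".
  return function handleClick() {
    setCount(count + 1); // queues an update; does not mutate the captured `count`
    logged = count;      // still the value captured at render time
  };
}

let count = 0;
const setCount = (next) => { count = next; }; // applied "later", as React would
const handleClick = renderExample(count, setCount);

handleClick();
console.log(logged, count); // 0 1 — the log trails the committed state by one
```

A fresh `handleClick` created on the next “render” would capture the new value, which is exactly what happens when a React component re-renders.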
When React’s render function is called, it schedules a batch of updates to the real DOM, rather than immediately applying changes to the DOM. This allows React to optimize updates by batching them together and reducing the number of actual changes made to the DOM.
The batched updates work as follows: whenever a state change is made or a prop is updated in a React component, React adds the component to a “dirty” list. This list is essentially a queue of components that need to be updated. React then waits for a brief moment to allow for additional updates to be added to the queue. This is called the “debounce time.”
After the debounce time has passed, React checks the dirty list and begins updating the real DOM. React first generates a new virtual DOM tree, as described earlier, and then compares it to the previous virtual DOM tree to determine what changes need to be made to the real DOM. These changes are then batched together and applied to the real DOM all at once.
This is why the value of count logged to the console is always one less than the current value of count. When the handleClick function is called, it immediately logs the current value of count to the console before actually updating the state of the Example component. The state update happens later, after the debounce time has passed.
Here’s another example to illustrate this batching behavior. Suppose we have a simple React component that updates its state whenever a button is clicked:
function Example() {
  const [count, setCount] = useState(0);

  function handleClick() {
    setCount(count + 1);
  }

  console.log("Render Example component");

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={handleClick}>Increment</button>
    </div>
  );
}
When the handleClick function is called, it updates the state of the Example component by calling the setCount function. This causes the component to be added to the dirty list.
Now, suppose we rapidly click the button multiple times. Each time we click the button, the handleClick function is called and adds the Example component to the dirty list. However, React doesn’t immediately apply these changes to the real DOM. Instead, React waits for a brief moment to allow for additional updates to be added to the dirty list.
Once the debounce time has passed, React checks the dirty list and begins updating the real DOM. In this case, React generates a new virtual DOM tree for the Example component and compares it to the previous virtual DOM tree. React determines that the only change that needs to be made to the real DOM is to update the text of the p element to reflect the new count value.
React then applies this change to the real DOM all at once, resulting in a smoother and more performant user interface. By batching updates together and reducing the number of actual changes made to the DOM, React is able to optimize performance and improve the user experience.
React Fiber is a newer (since React 16) implementation of the React rendering engine. It is a complete rewrite of the previous rendering engine and is designed to be more efficient and flexible.
One of the main benefits of React Fiber is that it allows for incremental updates to the virtual DOM. This means that updates can be broken down into smaller chunks, allowing the browser to process user input and other events without blocking the UI.
We’ll discuss this in detail in the next chapter.
Throughout this chapter, we have explored the differences between the real DOM and the virtual DOM in web development, as well as the advantages of using the latter in React.
We first talked about the real DOM and its limitations, such as slow rendering times and cross-browser compatibility issues, which can make it difficult for developers to create web applications that work seamlessly across different browsers and platforms. To illustrate this, we examined how to create a simple web page using the real DOM APIs, and how these APIs can quickly become unwieldy and difficult to manage as the complexity of the page increases.
Next, we dove into the virtual DOM and how it addresses many of the limitations of the real DOM. We explored how React leverages the virtual DOM to improve performance by minimizing the number of updates needed to the real DOM, which can be expensive in terms of rendering time. We also looked at how React uses trees of elements to compare the virtual DOM with the previous version and calculate the most efficient way to update the real DOM.
To illustrate the benefits of the virtual DOM, we examined how to create the same simple web page using React components. We compared this approach to the real DOM approach and saw how React components were more concise and easier to manage, even as the complexity of the page increased.
We also looked at the differences between React.createElement and document.createElement and how React’s version simplifies the creation of virtual DOM elements. We saw how we could create components using JSX, which provides a syntax similar to HTML, making it easier to reason about the structure of the virtual DOM.
Finally, we explored how React batches updates to the real DOM to improve performance. We used a lot of code examples to illustrate the concepts we discussed. We saw how to create simple web pages using the real DOM and React components, and how the latter approach can make the code more manageable and easier to maintain.
Overall, we have learned about the benefits of using the virtual DOM in web development, and how React leverages this concept to make building web applications easier and more efficient.
In the next chapter, we will dive deep into React reconciliation and its Fiber architecture.
To be truly fluent in React, we need to understand what its functions do. So far, we’ve understood JSX and React.createElement. We’ve also understood the virtual DOM. Let’s explore the practical applications of it in React in this chapter, and understand what ReactDOM.createRoot(element).render does. Specifically, we’ll explore how React builds and interacts with its virtual DOM through a process called reconciliation.
As a quick recap, React’s virtual DOM is a blueprint of our desired UI state. React takes this blueprint and, through a process called reconciliation, makes it a reality in a given host environment, usually a web browser.
Consider the following code snippet:
import { useState } from "react";

const App = () => {
  const [count, setCount] = useState(0);
  return (
    <main>
      <div>
        <h1>Hello, world!</h1>
        <span>Count: {count}</span>
        <button onClick={() => setCount(count + 1)}>Increment</button>
      </div>
    </main>
  );
};
This code snippet contains a declarative description of what we want our UI state to be: a tree of elements. Both our teammates and React can read this and understand we’re trying to create a counter app with an increment button that increments the counter. To understand reconciliation, let’s understand what React does on the inside when faced with a component like this.
First, the JSX becomes a tree of React elements. This is what we saw in the last chapter. The App component is a React element, and so are its children. React elements are immutable and represent the desired state of the UI. They are not the actual UI state. React elements are created by React.createElement or JSX, so this would be transpiled into:
const App = () => {
  const [count, setCount] = useState(0);
  return React.createElement(
    "main",
    null,
    React.createElement(
      "div",
      null,
      React.createElement("h1", null, "Hello, world!"),
      React.createElement("span", null, "Count: ", count),
      React.createElement(
        "button",
        { onClick: () => setCount(count + 1) },
        "Increment"
      )
    )
  );
};
This would give us a tree of created React elements that looks something like this:
{
  type: "main",
  props: {
    children: {
      type: "div",
      props: {
        children: [
          {
            type: "h1",
            props: {
              children: "Hello, world!",
            },
          },
          {
            type: "span",
            props: {
              children: ["Count: ", count],
            },
          },
          {
            type: "button",
            props: {
              onClick: () => setCount(count + 1),
              children: "Increment",
            },
          },
        ],
      },
    },
  },
}
Oh look, we have a virtual DOM. Since this is the first render, this tree is now committed to the browser using minimal calls to imperative DOM APIs. This is what we saw in the last chapter. Now if an update happens, React will create a new tree with updated values. This tree will need to be reconciled with what we’ve already committed to the browser. This is where reconciliation comes in.
Before we understand what modern-day React does under the hood, let’s explore how React used to perform reconciliation before version 16, with a legacy “stack” reconciler. This will help us understand the need for today’s popular fiber reconciler.
Previously, React used a stack data structure for rendering. To make sure we’re on the same page, let’s briefly discuss the stack data structure.
In computer science, a stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. This means that the last element added to the stack will be the first one to be removed. A stack has two fundamental operations—push and pop—that allow elements to be added and removed from the top of the stack, respectively.
A stack can be visualized as a collection of elements that are arranged vertically, with the topmost element being the most recently added one. Here’s an ASCII illustration of a stack with three elements:
-----
| 3 |
|___|
| 2 |
|___|
| 1 |
|___|
In this example, the most recently added element is 3, which is at the top of the stack. The element 1, which was added first, is at the bottom of the stack.
As mentioned earlier, a stack has two fundamental operations—push and pop—that allow elements to be added and removed from the top of the stack, respectively.
The push operation adds an element to the top of the stack. In code, this can be implemented using an array and the push method, like this:
const stack = [];
stack.push(1); // stack is now [1]
stack.push(2); // stack is now [1, 2]
stack.push(3); // stack is now [1, 2, 3]
The pop operation removes the top element from the stack. In code, this can be implemented using an array and the pop method, like this:
const stack = [1, 2, 3];
const top = stack.pop(); // top is now 3, and stack is now [1, 2]
In this example, the pop method removes the top element (3) from the stack and returns it. The stack array now contains the remaining elements (1 and 2).
Stacks are a fundamental data structure used in many different areas of computer science and programming. Here are some common uses of stacks:
In programming languages, functions are executed using a call stack. When a function is called, its arguments and local variables are pushed onto the call stack. When the function returns, its arguments and local variables are popped off the stack.
Here’s an example of a function that uses a stack to keep track of the call stack:
function foo() {
  console.log("foo");
  bar();
}

function bar() {
  console.log("bar");
}

foo();
When this code is executed, the output will be:
foo
bar

This happens because the foo function first logs "foo" and then calls bar, which is pushed onto the top of the call stack. bar logs "bar", and when it returns, it is popped off the stack and control is returned to foo.
Stacks are often used in compilers and interpreters to evaluate expressions. In this context, a stack can be used to keep track of operands and operators in an expression.
Here’s an example of how a stack can be used to evaluate a simple expression:
function evaluate(a, b, operator) {
  switch (operator) {
    case "+": return a + b;
    case "-": return a - b;
    case "*": return a * b;
    case "/": return a / b;
  }
}

const expression = "2 + 3 * 4";
const operands = [];
const operators = [];

for (const token of expression.split(" ")) {
  if (!isNaN(token)) {
    operands.push(parseFloat(token));
  } else if (token === "+" || token === "-") {
    while (operators.length > 0) {
      const op = operators[operators.length - 1];
      if (op === "+" || op === "-" || op === "*" || op === "/") {
        const b = operands.pop();
        const a = operands.pop();
        const result = evaluate(a, b, operators.pop());
        operands.push(result);
      } else {
        break;
      }
    }
    operators.push(token);
  } else if (token === "*" || token === "/") {
    while (operators.length > 0) {
      const op = operators[operators.length - 1];
      if (op === "*" || op === "/") {
        const b = operands.pop();
        const a = operands.pop();
        const result = evaluate(a, b, operators.pop());
        operands.push(result);
      } else {
        break;
      }
    }
    operators.push(token);
  }
}

while (operators.length > 0) {
  const b = operands.pop();
  const a = operands.pop();
  const result = evaluate(a, b, operators.pop());
  operands.push(result);
}

console.log(operands.pop()); // output: 14
In this example, the expression "2 + 3 * 4" is split into tokens, which are then evaluated using a stack. The operands are pushed onto the operands stack, while the operators are pushed onto the operators stack. The evaluate() function is called whenever an operator is encountered, and it pops the top two operands from the operands stack, applies the operator, and pushes the result back onto the stack.
At the end of the evaluation, the final result is popped off the operands stack and printed to the console.
Anyway, without digressing too much: a stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. It has two fundamental operations—push and pop—that allow elements to be added and removed from the top of the stack, respectively. Stacks are commonly used in computer science and programming for a variety of purposes, such as function call stacks, expression evaluation, and undo/redo operations.
When implementing a stack, it’s important to choose an appropriate data structure. In JavaScript, an array is often used as the underlying data structure for a stack because it already provides the necessary operations (push and pop) and has good performance characteristics.
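For instance, a minimal stack abstraction built on top of an array might look like the following. This is a generic sketch for illustration, not code that React itself uses.

```javascript
// A minimal stack backed by a JavaScript array, which already provides
// push/pop with amortized O(1) performance.
class Stack {
  #items = [];

  push(item) {
    this.#items.push(item); // add to the top
  }

  pop() {
    return this.#items.pop(); // remove and return the top (undefined if empty)
  }

  peek() {
    return this.#items[this.#items.length - 1]; // inspect the top without removing
  }

  get size() {
    return this.#items.length;
  }
}

const stack = new Stack();
stack.push(1);
stack.push(2);
stack.push(3);
console.log(stack.pop());  // 3
console.log(stack.peek()); // 2
console.log(stack.size);   // 2
```

Wrapping the array in a class hides the rest of the array API (indexing, splice, and so on), so callers can only use it in LIFO fashion.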
React’s original reconciler was a stack-based algorithm that was used to compare the old and new virtual trees and update the DOM accordingly. While the stack reconciler worked well in simple cases, it presented a number of challenges as applications grew in size and complexity. In this section, we’ll discuss some of the challenges that the stack reconciler presented and how they were addressed with the new fiber reconciler.
Before we dive into the challenges of the stack reconciler, let’s take a quick look at how it worked. The stack reconciler used a depth-first search algorithm to traverse the virtual tree and update the DOM. When a component was updated, React would create a new virtual tree and compare it to the previous tree using a diffing algorithm. The stack reconciler would then traverse the two trees and update the DOM as necessary.
Here’s a simple example of how the stack reconciler works:
Original virtual tree:
    A
   / \
  B   C

Updated virtual tree:
    A
   / \
  D   C

Stack reconciler operations:
- Update A
- Create D
- Append D to A's children
- Remove B
In this example, the stack reconciler updates component A and replaces its child B with the new child D. It then updates component C as usual. Here’s a high-level summary of the problems the stack reconciler presented:
Consider an example wherein you’ve got a list of updates to make:
1. A computationally expensive component renders.
2. The user types into an input.
3. The Button becomes enabled if the input is valid.
4. The Form component holds the state, so it rerenders.

In this case, the stack reconciler would render the updates sequentially without being able to pause or defer work. If the computationally expensive component from step 1 blocks rendering, user input will appear on screen with an observable lag. This leads to a poor user experience. Instead, it would be far more pleasant to be able to recognize the user input from step 2 as a higher-priority update than step 1 and update the screen to reflect the input, deferring the rendering of step 1’s computationally expensive component.
There is a need to be able to:

1. Pause rendering work and resume it later.
2. Assign different priorities to different kinds of updates.
3. Abort or discard rendering work that is no longer needed.
Of course, this is a high-level overview of the problem. There are many other challenges that the stack reconciler presented. Let’s get a bit deeper into the details.
The stack reconciler had poor performance characteristics for large virtual trees. Because it used a depth-first search algorithm, it would traverse the entire virtual tree before updating the DOM. This meant that even small changes to the virtual tree could result in a large number of DOM updates.
Here’s an example of how the stack reconciler could result in many unnecessary DOM updates:
Original virtual tree:
      A
     / \
    B   C
   / \
  D   E

Updated virtual tree:
      A
     / \
    B   C
   /   / \
  D   F   G

Stack reconciler operations:
- Update A
- Remove E
- Create F
- Append F to C's children
- Create G
- Append G to C's children
In this example, the only change to the virtual tree is the removal of node E and the addition of nodes F and G. However, the stack reconciler still traverses the entire virtual tree and updates the DOM accordingly.
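The cost of such a full traversal can be seen with a small sketch in plain JavaScript (not React code): a depth-first walk visits every node in the tree, regardless of how small the change was. The `traverse` helper and the tree shape here are hypothetical.

```javascript
// A full depth-first traversal visits every node, even when only one
// subtree actually changed.
function traverse(node, visit) {
  visit(node);
  for (const child of node.children ?? []) {
    traverse(child, visit);
  }
}

// A small tree mirroring the diagrams above.
const tree = {
  name: "A",
  children: [
    { name: "B", children: [{ name: "D" }] },
    { name: "C", children: [{ name: "F" }, { name: "G" }] },
  ],
};

let visited = 0;
traverse(tree, () => visited++);
console.log(visited); // 6 — all nodes visited, regardless of what changed
```

For a tree with thousands of nodes, paying this visit cost synchronously on every update is exactly the bottleneck the fiber reconciler was designed to avoid.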
The stack reconciler executed synchronously and blocked the main thread, which could result in jank or unresponsive user interfaces. Because the reconciler would traverse the entire virtual tree before updating the DOM, any changes to the virtual tree would be delayed until the reconciler had finished its work.
In JavaScript, all code is executed on a single thread, which is called the main thread. This means that any long-running tasks (such as rendering a large virtual tree) will block the thread and prevent other tasks (such as handling user input) from executing until the long-running task has finished.
In React, the stack reconciler executed synchronously on the main thread, which meant that any updates to the virtual tree would block the thread and prevent other tasks (such as handling user input) from executing until the update had finished. This could result in a janky or unresponsive user interface, especially on slower devices or when dealing with large virtual trees.
Here’s an example of how synchronous execution could affect the performance of a React application:
Original virtual tree:
       A
      / \
     B   C
    / \ / \
   D  E F  G

Updated virtual tree:
       A
      / \
     B   C
    /   / \
   D   F   G
    \
     E'

Stack reconciler operations:
- Update A
- Remove E
- Remove C
- Create F
- Append F to C's children
- Create G
- Append G to C's children
- Create E'
- Append E' to B's children
In this example, we’re updating the virtual tree by removing node E and adding node E’ as a child of node B. However, because the stack reconciler executes synchronously on the main thread, any updates to the virtual tree will block other tasks from executing until the update has finished, like user input. You could very easily feel a delayed response from a UI element, such as a button, while the stack reconciler is updating the virtual tree.
The stack reconciler did not prioritize updates, which meant that less important updates could block more important updates. For example, a low-priority update to a tooltip might block a high-priority update to a text input. Updates to the virtual tree were executed in the order they were received.
In a React application, updates to the virtual tree can have different levels of importance. For example, an update to a form input might be more important than an update to a tooltip, because the user is directly interacting with the input and expects it to be responsive.
In the stack reconciler, updates were executed in the order they were received, which meant that less important updates could block more important updates. For example, if a tooltip update was received before a form input update, the tooltip update would be executed first and could block the form input update.
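A tiny sketch of the problem (illustrative, not React's implementation): a first-in, first-out queue has no notion of priority, so whichever update arrives first runs first.

```javascript
const updateQueue = [];

function scheduleUpdate(name, work) {
  updateQueue.push({ name, work }); // arrival order is the only order
}

function flushSync() {
  const processed = [];
  while (updateQueue.length > 0) {
    const update = updateQueue.shift(); // no reordering by priority
    update.work();
    processed.push(update.name);
  }
  return processed;
}

scheduleUpdate("tooltip", () => {});    // low priority, arrived first
scheduleUpdate("text-input", () => {}); // high priority, arrived second

console.log(flushSync()); // ["tooltip", "text-input"] — the input waits
```

A priority-aware scheduler would instead pick the text-input update first, which is exactly what the fiber reconciler's lane model (covered later) enables.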
Here’s an example of how update prioritization could affect the performance of a React application:
Original virtual tree:
        A
       / \
      B   C
     / \ / \
    D  E F  G

Updated virtual tree:
        A
       / \
      B   C
     /   / \
    D   F   G
     \
      E'

Stack reconciler operations:
- Update A
- Remove E
- Remove C
- Create F
- Append F to C's children
- Create G
- Append G to C's children
- Create E'
- Append E' to B's children
- Update tooltip
In this example, we’re updating the virtual tree by removing node E and adding node E’ as a child of node B. However, before this update can be executed, a tooltip update is received and executed first. This means that the tooltip update blocks the update to the virtual tree, even though it’s less important.
If the tooltip update takes a long time to execute (e.g. because it’s performing an expensive computation), this could result in a noticeable delay or jank in the user interface, especially if the user is interacting with the application during the update.
Another challenge with the stack reconciler was that it did not allow updates to be interrupted or cancelled. This meant that updates that were no longer needed (e.g. because the user had navigated to a different page) would still be executed. In this section, we’ll explore this challenge in more detail and look at how it could affect the performance of a React application.
In a React application, updates to the virtual tree can become unnecessary if the user navigates to a different page or otherwise interacts with the application in a way that renders the update irrelevant. For example, if the user navigates to a different page while a tooltip update is being executed, the tooltip update is no longer necessary and should be cancelled.
In the stack reconciler, updates could not be interrupted or cancelled, which meant that unnecessary updates could still be executed. This could result in unnecessary work being performed on the virtual tree and the DOM, which could negatively impact the performance of the application.
Here’s an example of how unnecessary updates could affect the performance of a React application:
Original virtual tree:
        A
       / \
      B   C
     / \ / \
    D  E F  G

Updated virtual tree:
        A
       / \
      B   C
     /   / \
    D   F   G
     \
      E'

Stack reconciler operations:
- Update A
- Remove E
- Remove C
- Create F
- Append F to C's children
- Create G
- Append G to C's children
- Create E'
- Append E' to B's children
- Update tooltip
In this example, we’re updating the virtual tree by removing node E and adding node E’ as a child of node B. However, before this update can be executed, a tooltip update is received and executed first. If the user navigates to a different page while the tooltip update is being executed, the tooltip update is no longer necessary and should be cancelled. However, because the stack reconciler does not allow updates to be interrupted or cancelled, the tooltip update will still be executed.
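By contrast, interruptible work only needs a cancellation check between units of work. This hypothetical sketch shows the capability the stack reconciler lacked:

```javascript
// Run units of work, checking a cancellation flag between each one.
// Illustrative only: names and shapes here are invented for the sketch.
function runUpdate(units, isCancelled) {
  const applied = [];
  for (const unit of units) {
    if (isCancelled()) break; // abandon work that is no longer needed
    applied.push(unit);
  }
  return applied;
}

let navigatedAway = false;
console.log(runUpdate(["a", "b", "c"], () => navigatedAway));
// ["a", "b", "c"] — nothing was cancelled

navigatedAway = true; // e.g. the user navigated to another page
console.log(runUpdate(["a", "b", "c"], () => navigatedAway)); // []
```

The stack reconciler had no such check: once an update started, it ran to completion whether or not anyone still needed the result.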
In summary, the stack reconciler presented a number of challenges as applications grew in size and complexity. The main challenges were poor performance, synchronous execution, lack of prioritization, and lack of interruptibility. To address these challenges, the React team developed a new reconciler called the fiber reconciler, which is based on a different data structure called a fiber tree. Let’s explore this data structure in the next section.
The fiber reconciler uses a different data structure and takes inspiration from "double buffering," a technique from the world of graphics and games. Let's understand both of these concepts before we move further.
Double buffering is a technique used in computer graphics and video processing to reduce flicker and improve performance. The technique involves creating two buffers (or memory spaces) for storing images or frames, and switching between them at regular intervals to display the final image or video.
Here's how double buffering works in practice:

1. The first buffer (the front buffer) holds the image or frame currently being displayed.
2. While the front buffer is displayed, the next image or frame is drawn to the second buffer (the back buffer), off-screen.
3. Once the back buffer is ready, the two buffers are swapped: the back buffer becomes the new front buffer and is displayed, while the old front buffer becomes the back buffer for the next frame.
By using double buffering, flicker and other visual artifacts can be reduced, since the final image or video is displayed without interruptions or delays.
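The technique can be sketched in a few lines of JavaScript. The buffer objects and frame counter here are invented purely for illustration:

```javascript
let frontBuffer = { frame: 0 }; // what is currently "displayed"
let backBuffer = { frame: 0 };  // where the next frame is drawn

function drawNextFrame() {
  // All drawing happens off-screen, based on the frame being shown
  backBuffer.frame = frontBuffer.frame + 1;
}

function swapBuffers() {
  // One cheap pointer swap makes the whole new frame visible at once,
  // so viewers never see a half-drawn frame.
  [frontBuffer, backBuffer] = [backBuffer, frontBuffer];
}

drawNextFrame();
swapBuffers();
console.log(frontBuffer.frame); // 1
```

The key property is that the expensive work (drawing) and the visible change (the swap) are decoupled: the swap is instantaneous no matter how long the drawing took.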
Double buffering is similar to Fiber reconciliation: the fiber reconciler uses double buffering to update the virtual DOM in a way that is both efficient and user-friendly.
In the fiber reconciler, the virtual DOM is updated asynchronously in a series of small, incremental steps. The fiber reconciler uses a work loop to process updates, and each iteration of the work loop updates a small portion of the virtual DOM.
As each portion of the virtual DOM is updated, the fiber reconciler creates a new Fiber node to represent the updated component. However, instead of immediately updating the real DOM with the new Fiber node, the fiber reconciler stores the Fiber node in a “work in progress tree”.
The work in progress tree is a copy of the real DOM that is used for rendering the updated content. By using a work in progress tree, the fiber reconciler can avoid making unnecessary updates to the real DOM, which can improve performance and reduce flicker.
Once the work loop is complete, the fiber reconciler switches the real DOM with the work in progress tree, using double buffering to ensure that the update is smooth and seamless.
Here’s an example of how double buffering works in React Fiber:
Original virtual DOM:
        A
       / \
      B   C
     / \ / \
    D  E F  G

Updated virtual DOM:
        A
       / \
      B   C
     /   / \
    D   F   G
     \
      E'

Fiber reconciler operations:
- Create Fiber node for A
- Create Fiber node for B
- Create Fiber node for C
- Create Fiber node for D
- Create Fiber node for E
- Update Fiber node for A
- Update Fiber node for B
- Create Fiber node for F
- Append F to C's children
- Create Fiber node for G
- Append G to C's children
- Create Fiber node for E'
- Append E' to B's children
- Switch real DOM with work in progress tree
In this example, the fiber reconciler updates the virtual DOM by creating a new Fiber node for each component and updating the relevant Fiber nodes with the new content. Instead of immediately updating the real DOM with the new Fiber nodes, the fiber reconciler stores the Fiber nodes in a work in progress tree. Once the update is complete, the fiber reconciler switches the real DOM with the work in progress tree.
This approach has several advantages:
Improves performance: The fiber reconciler can update the virtual DOM incrementally, without making unnecessary updates to the real DOM. This can improve performance and reduce the risk of visual artifacts or flicker.
Provides a smooth user experience: The fiber reconciler can update the virtual DOM in a way that is seamless and smooth, without interrupting the user experience.
Enables asynchronous updates: By updating the virtual DOM asynchronously, the fiber reconciler can prioritize updates and ensure that less important updates don’t block more important ones. This can improve the responsiveness of React applications and make them more user-friendly.
There are also some limitations with this approach:
Complexity: This can be complex to implement, particularly in complex React applications with many components and states. This can make it difficult for developers to debug issues and optimize performance.
Requires more memory: Having two trees requires more memory than traditional rendering techniques, since it involves storing two copies of the virtual DOM. This can be a concern for applications with limited memory resources.
Not suitable for all applications: Having two trees may not be suitable for all React applications, particularly those that require real-time updates or low latency. In these cases, other rendering techniques may be more appropriate.
React Fiber draws inspiration from double buffering to update the virtual DOM incrementally and provide a smooth, responsive user experience. While double buffering has several advantages, it also has some limitations, particularly in complex React applications. By understanding the advantages and limitations of double buffering, developers can make informed decisions about how to optimize performance and improve the user experience of their React applications.
The Fiber data structure in React is a key component of the fiber reconciler. The fiber reconciler allows updates to be prioritized and executed asynchronously, which improves the performance and responsiveness of React applications. Let’s explore the Fiber data structure in more detail and look at how it works.
At its core, the Fiber data structure is a lightweight representation of a component and its state in a React application. The Fiber data structure is designed to be mutable and can be updated and rearranged as needed during the reconciliation process.
Each Fiber node contains information about the component it represents, including its props, state, and child components. The Fiber node also contains information about its position in the component tree, as well as metadata that is used by the fiber reconciler to prioritize and execute updates.
Here’s an example of a simple Fiber node:
{
  tag: 3, // 3 = ClassComponent
  type: App,
  key: null,
  ref: null,
  props: {
    name: "Tejas",
    age: 30
  },
  stateNode: AppInstance,
  return: FiberParent,
  child: FiberChild,
  sibling: FiberSibling,
  index: 0,
  // ...
}
In this example, we have a Fiber node that represents a ClassComponent called App. The Fiber node contains information about the component’s:
- tag (3, which maps to ClassComponent)
- type (App)
- props ({name: "Tejas", age: 30})
- stateNode (an instance of the App component)
- position in the component tree (return, child, sibling, and index)

The Fiber data structure works by breaking up the work of updating the virtual DOM into smaller, more manageable chunks that can be executed asynchronously on the main thread. This allows updates to be prioritized and executed based on their importance, which improves the performance and responsiveness of React applications.
Fiber reconciliation involves comparing the previous virtual DOM (the “old” tree) with the current virtual DOM (the “new” tree) and figuring out which nodes need to be updated, added, or removed.
During the reconciliation process, the fiber reconciler creates a Fiber node for each element in the virtual DOM. There is literally a function called createFiberFromTypeAndProps that does this. Of course, another way of saying "type and props" is a React element. As we recall, a React element is this:
{
  type: "div",
  props: {
    className: "container"
  }
}
Type (tag) and props. This function returns a fiber derived from elements. The Fiber node contains information about the component’s props, state, and child components, as well as metadata that is used by the fiber reconciler to prioritize and execute updates.
Once the Fiber nodes have been created, the fiber reconciler uses a “work loop” to update the virtual DOM. The work loop starts at the root Fiber node and works its way down the component tree, marking each Fiber node as “dirty” if it needs to be updated. Once it reaches the end, it walks back up, creating a new DOM tree in memory, detached from the browser, that will eventually be committed (flushed) to the screen.
At each step in the work loop, the fiber reconciler checks the priority of the update and decides whether to continue executing or yield control to the browser. This allows updates to be prioritized based on their importance, and less important updates can be delayed or skipped if necessary.
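The traversal order can be sketched with plain objects that mimic a fiber's child, sibling, and return pointers. This is an illustration of the walk, not React's actual work loop:

```javascript
function createNode(name) {
  return { name, child: null, sibling: null, return: null };
}

// Build a small tree:  A -> (B, C),  C -> (D)
const A = createNode("A");
const B = createNode("B");
const C = createNode("C");
const D = createNode("D");
A.child = B;    B.return = A;
B.sibling = C;  C.return = A;
C.child = D;    D.return = C;

const begun = [];
const completed = [];

function beginWork(node) { begun.push(node.name); }     // walking down
function completeWork(node) { completed.push(node.name); } // walking up

function workLoop(root) {
  let next = root;
  while (next !== null) {
    beginWork(next);
    if (next.child) { // descend first
      next = next.child;
      continue;
    }
    // No child: complete this node, then try a sibling, else walk back up
    let node = next;
    while (node !== null) {
      completeWork(node);
      if (node.sibling) { next = node.sibling; break; }
      node = node.return;
    }
    if (node === null) next = null; // walked past the root: done
  }
}

workLoop(A);
console.log(begun);     // ["A", "B", "C", "D"]
console.log(completed); // ["B", "D", "C", "A"]
```

Note how beginWork runs top-down while completeWork runs bottom-up — the same begin/complete shape the real reconciler uses.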
With the fiber reconciler, two trees are derived from a user-defined tree of JSX elements: one tree containing “current” fibers, and another tree containing “work in progress” fibers. Now that we get what fibers are, let’s look at how they work in trees.
React’s Fiber Reconciliation is made possible by two trees: the “current” tree and the “work in progress” tree. These trees are derived from a user-defined tree of JSX elements. Let’s look at how these trees work.
React’s “work in progress” fiber tree is a key component of the fiber reconciler in React. The work in progress fiber tree is a copy of the real DOM that is used for rendering the updated content. The fiber reconciler uses the work in progress tree to avoid making unnecessary updates to the real DOM, which can improve performance and reduce flicker.
When an update is triggered in a React application, the fiber reconciler creates a new work in progress tree that reflects the updated content. The fiber reconciler then uses a work loop to process the update, creating new Fiber nodes and updating existing ones as needed.
As each portion of the virtual DOM is updated, the fiber reconciler stores the new Fiber nodes in the work in progress tree, rather than immediately updating the real DOM. This allows the fiber reconciler to avoid making unnecessary updates to the real DOM, which can improve performance and reduce flicker.
It can also throw away an existing work in progress tree at any time if a higher priority update comes in because the work in progress tree is never committed to the real DOM until it’s ready. This allows the fiber reconciler to prioritize updates based on their importance, which improves the performance and responsiveness of React applications.
Once the update is complete, the fiber reconciler switches the real DOM with the work in progress tree, to ensure that the update is smooth and seamless.
Here’s an example of how the work in progress tree relates to the current tree:
// Current virtual DOM
<div>
  <h1>Hello, world!</h1>
  <p>Current count: 0</p>
  <button>Increment</button>
</div>

// Work in progress virtual DOM
<div>
  <h1>Hello, world!</h1>
  <p>Current count: 1</p>
  <button>Increment</button>
</div>
In this example, we have a current virtual DOM that contains a heading, a paragraph, and a button. When an update is triggered to increment the count, the fiber reconciler creates a new work in progress virtual DOM that reflects the updated content.
The work in progress virtual DOM contains the same heading and button as the current virtual DOM, but the paragraph has been updated to reflect the new count. Once the update is complete, the fiber reconciler switches the current virtual DOM with the work in progress virtual DOM.
Let’s look at the current tree and its role in Fiber reconciliation.
Fiber reconciliation happens in two phases:

1. The render phase
2. The commit phase
This two-phase approach allows React to do rendering work that can be disposed of any number of times before committing it to the DOM and showing a new state to users: it makes rendering interruptible.
Let’s walk through these phases of reconciliation.
The render phase starts when a state-change event occurs in the current tree. React does the work of making the changes off-screen in the alternate tree by recursively stepping through each fiber and setting flags that signal updates are pending. This happens in a function called beginWork internally in React.

function beginWork(
  current: Fiber | null,
  workInProgress: Fiber,
  renderLanes: Lanes
): Fiber | null;
beginWork is a key function in the fiber reconciler of React, responsible for setting flags on Fiber nodes in the work in progress tree about whether or not they should update. It sets a bunch of flags, and then recursively goes to the next Fiber node doing the same thing until it reaches the bottom of the tree. When it finishes, we start calling completeWork on the Fiber nodes and walk back up.
More on completeWork later. For now, let’s dive into beginWork. Its signature includes the following arguments:
current: A reference to the Fiber node in the current tree that corresponds to the work in progress node being updated. This is used to determine what has changed between the previous version and the new version of the tree, and what needs to be updated in the real DOM. This is never mutated, and is only used for comparison.
workInProgress: The Fiber node being updated in the work in progress tree. This is the node that will be marked as “dirty” if updated and returned by the function.
renderLanes: Render lanes is a new concept in React’s fiber reconciler that replaces the older renderExpirationTime. It’s a bit more complex than the old renderExpirationTime concept, but it allows React to better prioritize updates and make the update process more efficient.
It is essentially a bitmask that represents the lanes at which an update is being processed. Lanes are a way of categorizing updates based on their priority and other factors. When a change is made to a React component, it is assigned a lane based on its priority and other characteristics. The higher the priority of the change, the higher the lane it is assigned to.
The renderLanes value is passed to the beginWork function in order to ensure that updates are processed in the correct order. Updates that are assigned to higher-priority lanes are processed before updates that are assigned to lower-priority lanes. This ensures that high-priority updates, such as updates that affect user interaction or accessibility, are processed as quickly as possible.
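Lane bitmasks can be illustrated with a few lines of JavaScript. The lane constants below are invented for this sketch (React's real constants live in its internals), but the bit trick for finding the highest-priority pending lane — isolating the lowest set bit with lanes & -lanes — mirrors React's actual approach:

```javascript
const NoLanes = 0b0000;
const SyncLane = 0b0001;            // lowest bit = highest priority
const InputContinuousLane = 0b0010;
const DefaultLane = 0b0100;
const IdleLane = 0b1000;            // highest bit = lowest priority

// Merging pending work is a bitwise OR:
let pendingLanes = NoLanes;
pendingLanes |= DefaultLane; // a normal update comes in
pendingLanes |= SyncLane;    // then an urgent one

// Isolate the lowest set bit to find the most urgent pending lane:
function getHighestPriorityLane(lanes) {
  return lanes & -lanes;
}

console.log(getHighestPriorityLane(pendingLanes) === SyncLane); // true
```

Representing priorities as bits makes merging, checking, and clearing batches of updates a matter of cheap bitwise operations rather than sorting a queue.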
In addition to prioritizing updates, renderLanes also helps React to better manage asynchrony. React uses a technique called “time slicing” to break up long-running updates into smaller, more manageable chunks. renderLanes plays a key role in this process, as it allows React to determine which updates should be processed first, and which updates can be deferred until later.
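Time slicing can be sketched as a loop that processes small units of work until a frame budget is used up, then yields. The budget and helpers here are invented for illustration; a real scheduler yields back to the browser's event loop between slices rather than looping synchronously:

```javascript
const FRAME_BUDGET_MS = 5; // hypothetical budget per slice
let deadline = 0;

function shouldYield() {
  return Date.now() >= deadline;
}

function workLoop(units) {
  const done = [];
  let i = 0;
  while (i < units.length) {
    deadline = Date.now() + FRAME_BUDGET_MS;
    // Process units until this slice's budget is exhausted
    while (i < units.length && !shouldYield()) {
      done.push(units[i]()); // one small unit of work
      i++;
    }
    // A real scheduler would return control to the browser here
    // (e.g. via a MessageChannel callback) before starting the next slice.
  }
  return done;
}

console.log(workLoop([() => "a", () => "b", () => "c"])); // ["a", "b", "c"]
```

The important idea is that work is chunked into units with a yield check between them, so a long render never monopolizes the thread for more than one slice at a time.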
Here’s an example of how renderLanes might be used in the fiber reconciler. This is 100% pseudocode and not actual code from React, but is intended to illustrate the concept:
function updateComponent(element: ReactElement) {
  // Create a new Fiber node for the element
  const newFiber = createFiberFromElement(element);

  // Reconcile the new Fiber node with the current tree
  const [rootFiber] = render(newFiber, document.getElementById("root"));

  // Begin the work loop to update the work in progress tree
  let nextUnitOfWork = rootFiber;
  let lanes = NoLanes; // Start with no lanes

  while (nextUnitOfWork !== null) {
    // Perform the next step in the reconciliation process
    nextUnitOfWork = beginWork(nextUnitOfWork, lanes);

    // If there is no next unit of work, we have finished the update
    if (nextUnitOfWork === null) {
      // Commit the updated work in progress tree to the real DOM
      commitRoot(rootFiber);
    }

    // Update the lanes based on the work we just did
    lanes = getNextLanes(rootFiber, lanes);
  }
}
In this example, we have a function called updateComponent that takes an element and updates it in the virtual DOM. The function creates a new Fiber node for the element using the createFiberFromElement function, and reconciles it with the current tree using the render function.
The function then begins the work loop to update the work in progress tree, using the updated beginWork function to perform the next step in the reconciliation process. The lanes variable is initially set to NoLanes, which means that no updates have been assigned to any lanes yet.
After each call to beginWork, the getNextLanes function is called to update the lanes variable based on the work that was just performed. This ensures that updates are processed in the correct order, and that high-priority updates are processed first.
Overall, renderLanes is an important new concept in React’s fiber reconciler that helps to make the update process more efficient and reliable. By allowing React to prioritize updates and manage asynchrony more effectively, renderLanes helps to ensure that React applications are fast and responsive, even as they become more complex and handle larger amounts of data.
In addition to helping with priority and asynchrony management, renderLanes also allows React to better handle “work in progress” updates. As we know, in the fiber reconciler, updates are processed in two phases: the “render” phase and the “commit” phase. During the render phase, updates are performed on the work in progress tree, while during the commit phase, updates are applied to the actual DOM.
renderLanes plays a key role in this two-phase process, as it helps React to determine when to apply updates to the work in progress tree and when to apply updates to the actual DOM. Updates that are assigned to lower-priority lanes are deferred until later, which means that they may not be applied until just before the commit phase.
Here’s an example of how renderLanes might be used to handle work in progress updates:
function updateComponent(element: ReactElement) {
  // Create a new Fiber node for the element
  const newFiber = createFiberFromElement(element);

  // Reconcile the new Fiber node with the current tree
  const [rootFiber] = render(newFiber, document.getElementById("root"));

  // Begin the work loop to update the work in progress tree
  let nextUnitOfWork = rootFiber;
  let lanes = NoLanes; // Start with no lanes

  while (nextUnitOfWork !== null) {
    // Perform the next step in the reconciliation process
    nextUnitOfWork = beginWork(nextUnitOfWork, lanes);

    // If there is no next unit of work, we have finished the update
    if (nextUnitOfWork === null) {
      // Commit the updated work in progress tree to the real DOM
      commitRoot(rootFiber);
    }

    // Update the lanes based on the work we just did
    lanes = getNextLanes(rootFiber, lanes);
  }

  // After the work loop is finished, we need to update the lanes again
  // to handle any deferred updates that may have been created during
  // the render phase.
  lanes = getLanesToRetrySynchronouslyOnError(rootFiber);

  if (lanes !== NoLanes) {
    // Start a new work loop to handle the deferred updates
    nextUnitOfWork = rootFiber;

    while (nextUnitOfWork !== null) {
      nextUnitOfWork = beginWork(nextUnitOfWork, lanes);

      if (nextUnitOfWork === null) {
        commitRoot(rootFiber);
      }

      lanes = getNextLanes(rootFiber, lanes);
    }
  }
}
In this example, the updateComponent function performs a two-phase update on the work in progress tree. During the render phase, updates are processed using the beginWork function, and renderLanes is used to prioritize updates and handle asynchrony.
After the render phase is complete, the getLanesToRetrySynchronouslyOnError function is called to determine if any deferred updates were created during the render phase. If there are deferred updates, the updateComponent function starts a new work loop to handle them, using beginWork and getNextLanes to process the updates and prioritize them based on their lanes.
renderLanes is a key concept in React's fiber reconciler that helps to make the update process more efficient, reliable, and responsive. By allowing React to better prioritize updates, manage asynchrony, and handle work in progress updates, renderLanes plays a crucial role in ensuring that React applications are fast and responsive, even as they become more complex and handle larger amounts of data.
The completeWork function is a critical part of React's fiber reconciler, and it plays a key role in updating the work in progress tree. When called, it applies the updates to the fiber node and constructs a new real DOM tree that represents the updated state of the application in a detached way, fully in memory and invisible to the browser.
If the host environment is a browser, this means doing things like document.createElement or newElement.appendChild. Keep in mind, this tree of elements is not yet attached to the in-browser document: React is just creating the next version of the UI off-screen. Doing this work off-screen makes it interruptable: whatever next state React is computing is not yet painted to the screen, so it can be thrown away in case some higher priority update becomes scheduled. This is the whole point of the Fiber reconciler.
The signature of completeWork is as follows:
function completeWork(
  current: Fiber | null,
  workInProgress: Fiber,
  renderLanes: Lanes
): Fiber | null;
Here, current is a reference to the current fiber node in the tree, workInProgress is the fiber node being worked on, and renderLanes is an integer value that represents the priority level of the update being completed.
It is the same signature as beginWork.
The completeWork function is closely related to the beginWork function. While beginWork is responsible for setting flags about “should update” state on a fiber node, completeWork is responsible for constructing a new tree to be committed to the host environment.
When completeWork reaches the top and has constructed the new DOM tree, we say the render phase is completed. Now, React moves on to the commit phase.
completeWork prepares for the commit phase by returning the next fiber node to be processed. Once all fiber nodes in the work in progress tree have been processed by completeWork, the commit phase can begin. During the commit phase, the new virtual DOM tree is committed to the host environment, and the work in progress tree is replaced with the current tree.

The commit phase is responsible for updating the actual DOM with the changes that were made to the virtual DOM during the render phase. The commit phase is divided into two parts: the mutation phase and the layout phase.
The mutation phase is the first part of the commit phase, and it is responsible for updating the actual DOM with the changes that were made to the virtual DOM. During this phase, React walks through the fiber tree, looking for nodes that have been marked as “dirty” (i.e., nodes that have been updated).
For each dirty node, React calls a special function called commitMutationEffects. This function applies the updates that were made to the node during the render phase to the actual DOM.
Here's a pseudocode example of how commitMutationEffects might be implemented:
function commitMutationEffects(fiber) {
  switch (fiber.tag) {
    case HostComponent: {
      // Update DOM node with new props and/or children
      break;
    }
    case HostText: {
      // Update text content of DOM node
      break;
    }
    case ClassComponent: {
      // Call lifecycle methods like componentDidMount and componentDidUpdate
      break;
    }
    // ... other cases for different types of nodes
  }
}
During the mutation phase, React also calls other special functions, such as commitUnmount and commitDeletion, to remove nodes from the DOM that are no longer needed.
The layout phase is the second part of the commit phase, and it is responsible for calculating the new layout of the updated nodes in the DOM. During this phase, React walks through the fiber tree again, looking for nodes that have been marked as “dirty” (i.e., nodes that have been updated).
For each dirty node, React calls a special function called commitLayoutEffects. This function calculates the new layout of the updated node in the DOM.
Like commitMutationEffects, commitLayoutEffects is also a massive switch statement that calls different functions depending on the type of node being updated.
Once the layout phase is complete, React has successfully updated the actual DOM to reflect the changes that were made to the virtual DOM during the render phase.
By dividing the commit phase into two parts (mutation and layout), React is able to apply updates to the DOM in a more efficient and optimized way. By working in concert with other key functions in the reconciler, such as beginWork and completeWork, the commit phase helps to ensure that React applications are fast, responsive, and reliable, even as they become more complex and handle larger amounts of data.
During the commit phase of React’s reconciliation process, side effects are performed in a specific order, depending on the type of effect. There are several types of effects that can occur during the commit phase, including:
Placement effects: These effects occur when a new component is added to the DOM. For example, if a new button is added to a form, a placement effect will occur to add the button to the DOM.
Update effects: These effects occur when a component is updated with new props or state. For example, if the text of a button changes, an update effect will occur to update the text in the DOM.
Deletion effects: These effects occur when a component is removed from the DOM. For example, if a button is removed from a form, a deletion effect will occur to remove the button from the DOM.
Layout effects: These effects occur before the browser has a chance to paint, and are used to update the layout of the page. Layout effects are managed using the useLayoutEffect hook in functional components and the componentDidUpdate lifecycle method in class components.
In contrast to these commit-phase effects, passive effects are user-defined effects that are scheduled to run after the browser has had a chance to paint. Passive effects are managed using the useEffect hook: effects registered with useEffect are scheduled as passive effects, regardless of their dependency array, rather than as commit-phase effects.
Passive effects are useful for performing actions that are not critical to the initial rendering of the page, such as fetching data from an API or performing analytics tracking. Because passive effects are not performed during the render phase, they do not affect the performance or perceived speed of the user interface.
React maintains a FiberRootNode atop both trees that points to one of the two trees: the current or the workInProgress trees. The FiberRootNode is a key data structure that sits atop both trees, and is responsible for managing the commit phase of the reconciliation process. The FiberRootNode contains a reference to the current tree, as well as a reference to the workInProgress tree, which is used during the render phase.
When updates are made to the virtual DOM, React updates the workInProgress tree, while leaving the current tree unchanged. This allows React to continue rendering and updating the virtual DOM, while also preserving the current state of the application.
When the rendering process is complete, React calls a function called commitRoot, which is responsible for committing the changes made to the workInProgress tree to the actual DOM. commitRoot switches the pointer of the FiberRootNode from the current tree to the workInProgress tree, making the workInProgress tree the new current tree.
From this point on, any future updates are based on the new current tree. This process ensures that the application remains in a consistent state, and that updates are applied correctly and efficiently.
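The swap itself can be sketched with plain objects. The shape of FiberRootNode here is drastically simplified for illustration:

```javascript
// A minimal stand-in for the root node that points at the live tree
const fiberRootNode = { current: null };

// Two trees linked as each other's alternate, as in the fiber reconciler
const currentTree = { count: 0, alternate: null };
const workInProgressTree = { count: 1, alternate: null };
currentTree.alternate = workInProgressTree;
workInProgressTree.alternate = currentTree;

fiberRootNode.current = currentTree;

function commitRoot(root, finishedWork) {
  // Committing is (conceptually) just repointing the root:
  // the work-in-progress tree becomes the new current tree.
  root.current = finishedWork;
}

commitRoot(fiberRootNode, workInProgressTree);
console.log(fiberRootNode.current.count); // 1

// The old current tree sticks around as the alternate, ready to be
// reused as the next work-in-progress tree.
console.log(fiberRootNode.current.alternate.count); // 0
```

This is the double-buffering payoff: the visible switch is a single pointer assignment, no matter how much rendering work preceded it.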
From the user's perspective, all of this appears to happen instantly in the browser. This is the work of reconciliation.
In this chapter, we explored the concept of React Reconciliation and learned about the Fiber Reconciler, which is a highly optimized implementation of the reconciliation algorithm. We also learned about Fibers, which are a new data structure introduced in the Fiber Reconciler that represent a unit of work in the reconciliation process. We also learned about the render phase and the commit phase, which are the two main phases of the reconciliation process. Finally, we learned about the FiberRootNode, which is a key data structure that sits atop both trees, and is responsible for managing the commit phase of the reconciliation process.
Let’s ask ourselves a few questions to test our understanding of the concepts in this chapter:
If we can answer these questions, we should be well on our way to understanding the Fiber Reconciler and the reconciliation process in React.
In the next chapter, we’ll look at common questions in React and explore some advanced patterns. We’ll answer questions around how often to useMemo and when to use React.lazy. We’ll also explore how to use useReducer and useContext to manage state in React applications.
See you there!
Now that we’re more aware of what React does and how it works under the hood, let’s explore its practical applications a little deeper in how we write React applications. In this chapter, we’ll explore the answers to common React questions to boost our fluency like:
Let’s get started by talking about memoization.
React.memo

Memoization is a technique used in computer science to optimize the performance of functions by caching their previously computed results. In simple terms, memoization stores the output of a function based on its inputs so that if the function is called again with the same inputs, it returns the cached result rather than recomputing the output. This can significantly reduce the time and resources needed to execute a function, especially for functions that are computationally expensive or called frequently.
In the context of React, memoization can be applied to functional components using the React.memo() higher-order component. This function returns a new component that only re-renders if its props have changed. By memoizing functional components, we can prevent unnecessary re-renders, which can improve the overall performance of our React application. Memoization is particularly useful when dealing with expensive calculations or when rendering large lists of items.
Consider a function:
let result = null;
const doHardThing = () => {
  if (result) return result;

  // ...do hard stuff

  result = hardStuff;
  return hardStuff;
};
Calling doHardThing once might take a few minutes to do the hard thing, but calling it a second, third, fourth, nth time, doesn’t actually do the hard thing but instead returns the stored result. This is the gist of memoization. React enables us to memoize components using an exported function called memo. It also enables us to memoize values inside of our components using a hook called useMemo. We’ll talk about both in detail further in this chapter, but let’s first understand why React enables these usages.
We already know that React components are functions that are often invoked for reconciliation, as discussed in Chapter 4. Sometimes, reconciliation (that is, invoking a component function) can take a long time due to intense computations. This would slow down our application and present a bad user experience. Memoization is a way to avoid this by storing the results of expensive computations—either UI elements or concrete values—and returning them when the same inputs are passed to the function, or the same props are passed to the component.
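The gist above can be sketched as a generic helper in plain JavaScript. This memoize function is a hypothetical illustration of the caching idea, not React’s implementation:

```javascript
// A minimal, hypothetical memoize helper: results are cached per
// stringified inputs, so repeated calls with the same arguments
// return the stored result instead of recomputing it.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args); // naive cache key; fine for a sketch
    if (cache.has(key)) return cache.get(key);
    const value = fn(...args);
    cache.set(key, value);
    return value;
  };
}

let computations = 0;
const slowSquare = memoize((n) => {
  computations += 1; // counts how often the "hard" work actually runs
  return n * n;
});

slowSquare(4); // computes: computations === 1
slowSquare(4); // cache hit: computations is still 1
```

Note the trade-off visible even in this sketch: the cache trades memory for time, which is exactly the overhead discussion we return to later in this chapter.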
To understand why React.memo is important, let’s consider a common scenario where we have a list of items that need to be rendered in a component. For example, let’s say we have a list of todos that we want to display in a component like this:
function TodoList({ todos }) {
  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
}
If the list of todos is large, and the component is re-rendered frequently, this can cause a performance bottleneck in the application. One way to optimize this component is to memoize it using React.memo():
const MemoizedTodoList = React.memo(function TodoList({ todos }) {
  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
});
By wrapping the TodoList component with React.memo, React will only re-render the component if its props have changed. This means that if the list of todos remains the same, the component will not re-render, and the cached output will be used instead. This can save significant resources and time, especially when the component is complex and the list of todos is large.
In addition to improving performance, React.memo can also make the code more maintainable and easier to reason about. By memoizing components, we can ensure that their outputs remain consistent, which can reduce the risk of introducing bugs in the application.
Let’s consider another example where we have a complex component with multiple nested components that are expensive to render:
function Dashboard({ data }) {
  return (
    <div>
      <h1>Dashboard</h1>
      <UserStats user={data.user} />
      <RecentActivity activity={data.activity} />
      <ImportantMessages messages={data.messages} />
    </div>
  );
}
If the data prop changes frequently, this component can be expensive to render, especially if the nested components are also complex. We can optimize this component using React.memo to memoize each nested component:
const MemoizedUserStats = React.memo(function UserStats({ user }) {
  // ...
});

const MemoizedRecentActivity = React.memo(function RecentActivity({
  activity,
}) {
  // ...
});

const MemoizedImportantMessages = React.memo(function ImportantMessages({
  messages,
}) {
  // ...
});

function Dashboard({ data }) {
  return (
    <div>
      <h1>Dashboard</h1>
      <MemoizedUserStats user={data.user} />
      <MemoizedRecentActivity activity={data.activity} />
      <MemoizedImportantMessages messages={data.messages} />
    </div>
  );
}
By memoizing each nested component, React will only re-render the components that have changed, and the cached outputs will be used for the components that have not changed. This can significantly improve the performance of the Dashboard component and reduce unnecessary re-renders.
Thus, we can see that React.memo is an essential tool for optimizing the performance of functional components in React. By memoizing components, we can significantly reduce the number of unnecessary re-renders and improve the overall performance of the application. Memoization can be particularly useful for components that are expensive to render or have complex logic.
React.memo

Let’s briefly walk through how React.memo works. When an update happens in React, your component is compared with the results of the element returned from its previous render. If these results are different (i.e., if its props change), the reconciler runs an update effect if the element already exists in the host environment (usually the browser DOM), or a placement effect if it doesn’t. If its props are the same, the component still re-renders and the DOM is still updated. Let’s explore this with an example:
export const Avatar = ({ url, name }) => {
  return <img alt={name} src={url} />;
};

export const MemoizedAvatar = React.memo(Avatar);
Above we have 2 components: Avatar and MemoizedAvatar. React.memo tells React to avoid re-rendering this component (i.e, don’t invoke its function) unless and until its props change. Let’s assume a form like this:
const Form = () => {
  const [data, setData] = useState({
    name: "",
    avatarUrl: "",
  });

  return (
    <form>
      <label>
        Your name
        <input
          type="text"
          value={data.name}
          onChange={(e) => setData({ ...data, name: e.target.value })}
        />
      </label>
      <label>
        Your avatar
        <input
          type="file"
          onChange={(e) => {
            /* set url */
          }}
        />
      </label>
      <section>
        Preview
        <Avatar name={data.name} url={data.avatarUrl} />
      </section>
    </form>
  );
};
When entering a name, the form and all its children re-render on every keystroke. Specifically, Avatar re-renders on every keystroke, even though its props are the same and data.avatarUrl doesn’t change. If we swap Avatar with our new MemoizedAvatar, then the component itself doesn’t re-render on every keystroke. Instead, it only re-renders when its props change: i.e., when data.avatarUrl is updated. Now, our application behaves as we’d expect: only what changes is recomputed.
This is what React.memo is good for: avoiding unnecessary re-renders when a component’s props are identical between renders. Since React lets us do this, it raises the question: how much and how often should we memoize? Surely if we memoized every component, our application would be faster overall, no?
Our user profile example is a good usage of component memoization on the Avatar component because the component’s props do not change as often as the component’s state, which changes on every keystroke.
Normally, React handles most performance concerns for us out of the box: the virtual DOM and its efficient reconciliation algorithm exist precisely so that we rarely need to memoize anything manually. It is considered good practice to use memoization in React sparingly, for the following two key reasons.
Everything has a cost. While memoization itself is cheap to implement, React’s memoization primitives come with overhead of their own that may result in our applications doing unnecessary work: importing and then invoking React.memo executes additional logic, consuming stack frames and resources that are wasted if the memoization never pays off.
While React.memo is a useful tool for optimizing the performance of functional components in React, it’s important to understand the potential overhead of using this technique and how to use it effectively. Let’s explore this further with some code examples.
One potential overhead is the additional memory usage required to store the memoized outputs. If the component is large or has many dependencies, this can result in increased memory usage. For example, consider the following memoized component:
const MemoizedComponent = React.memo(({ items, handleClick }) => {
  // perform expensive computation based on items
  const result = items.reduce((total, item) => total + item.value, 0);

  return (
    <div>
      <p>Result: {result}</p>
      <button onClick={handleClick}>Click me</button>
    </div>
  );
});
In this example, the MemoizedComponent performs an expensive computation based on the items prop. While memoization can reduce unnecessary re-renders, it also requires storing the memoized output in memory. This can result in increased memory usage, especially if the component is large or has many dependencies.
Another potential overhead of using React.memo is the additional time required to perform the memoization. When a component is memoized, React needs to check whether the inputs to the component have changed before re-rendering. This can result in additional computation time, especially for complex components or those with many dependencies.
It’s important to remember that memoization may not provide significant performance improvements for all components. In some cases, memoization may not be necessary, especially for small or simple components.
By weighing the benefits and overhead, developers can determine whether memoization is necessary and implement it effectively to optimize the performance of their React applications.
If we have a component tree wherein a component’s props regularly change, memoizing that component may cost more than it saves. Things get even worse if we pass a comparison function to React.memo that runs every time, even though the props are frequently different. The general wisdom on when to memoize a component and when to forego it lies within quantification: if we are unable to show measurable performance improvements from memoization, it’s best to forego it and trust React.
Bugs. Memoization is a powerful tool, but it can also be a double-edged sword. If we memoize a component that depends on enclosing state, we may end up with stale props that are not updated when the enclosing state changes. Trying to understand why a component is not updating can be a frustrating experience, especially if we’re not aware of the memoization.
Here’s an example of how memoization can lead to stale props when a component depends on enclosing state:
function Counter() {
  const [count, setCount] = useState(0);

  const increment = () => {
    setCount(count + 1);
  };

  const memoizedIncrement = useCallback(() => {
    setCount(count + 1);
  }, []);

  console.log("Counter rendered");

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={increment}>Increment</button>
      <MemoizedChild increment={memoizedIncrement} />
    </div>
  );
}

const MemoizedChild = React.memo(({ increment }) => {
  console.log("Child rendered");
  return <button onClick={increment}>Memoized increment</button>;
});

function App() {
  return <Counter />;
}
In this example, we have a Counter component that uses state to track the count value. The Counter component also provides a memoized version of the increment function to a MemoizedChild component, which is also memoized using React.memo. The MemoizedChild component renders a button that calls the memoized increment function when clicked.
While this code may appear to work correctly at first, it actually has a bug: the MemoizedChild component may not update correctly when the count state changes. This is because the memoized increment function closes over the count state, but count is not included in the dependencies array of the useCallback hook. We’ll discuss useCallback in more detail later, but as a result, the memoized increment function will not be recreated when the count state changes, and the MemoizedChild component will receive a stale closure as its prop.
To fix this bug, we can include the count state in the dependencies array of the useCallback hook, like this:
const memoizedIncrement = useCallback(() => {
  setCount(count + 1);
}, [count]);
With this change, the memoized increment function will update whenever the count state changes, and the MemoizedChild component will receive updated props.
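The stale-props bug above is ordinary JavaScript closure behavior; here is a hypothetical plain-JavaScript sketch (no React involved) where each “render” creates a fresh count binding, and a memoizer with an empty dependency list keeps the very first closure forever:

```javascript
// Hypothetical memoizer mimicking useCallback with an empty deps array:
// the first function it is given is cached and never replaced.
function makeUseCallbackSketch() {
  let cached = null;
  return (fn) => {
    if (cached === null) cached = fn; // empty deps: never replace
    return cached;
  };
}

// Hypothetical render function: `count` plays the role of React's
// state snapshot, fresh on every render.
function render(memoizer, count) {
  const increment = () => count + 1; // closes over THIS render's count
  return memoizer(increment);
}

const memoizer = makeUseCallbackSketch();
const firstIncrement = render(memoizer, 0); // first render: count = 0
const laterIncrement = render(memoizer, 5); // later render: count = 5

firstIncrement(); // 1, as expected: closed over count = 0
laterIncrement(); // also 1! still the stale closure from the first render
```

Adding count to the dependency list corresponds, in this sketch, to replacing the cached function whenever count differs from the value it was created with.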
Memoization can be a powerful tool for optimizing the performance of functional components in React, but it’s important to understand its limitations and potential pitfalls, such as stale props. By being mindful of these issues and using memoization effectively, developers can optimize the performance of their React applications and provide a better user experience.
React.memo performs what is called a shallow comparison of the props to determine whether they’ve changed. The problem is that while scalar types can be compared reliably in JavaScript, non-scalar types cannot. Consider the following example:
// Scalar types
"a" === "a"; // string; true
3 === 3; // number; true

// Non-scalar types
[1, 2, 3] === [1, 2, 3]; // array; false
What happens with the array comparison above is that the arrays are compared by reference, not by their contents. While they look the same to us, the left- and right-hand sides of the comparison are two distinct array instances, so the comparison is false. If we instead compare a reference to one array with itself, the comparison succeeds:
const myArray = [1, 2, 3];
myArray === myArray; // true
myArray === [1, 2, 3]; // false
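The shallow comparison React.memo performs by default can be sketched in plain JavaScript like so; shallowEqual is a hypothetical illustration of the idea, not React’s exact source:

```javascript
// Hypothetical sketch of a shallow props comparison: each prop is
// compared with Object.is, so an object or array recreated on every
// render counts as "changed" even when its contents look identical.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}

const myArray = [1, 2, 3];
shallowEqual({ items: myArray }, { items: myArray }); // true: same reference
shallowEqual({ items: [1, 2, 3] }, { items: [1, 2, 3] }); // false: new array each time
```

This is exactly why fresh inline arrays, objects, and functions defeat React.memo: the comparison is one level deep and reference-based.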
This is why it’s generally a good practice to pass stable references to memoized components as props, and not freshly created values. For example, do:
const Parent = () => {
  // Note: for the reference to stay stable across re-renders of Parent,
  // the array would be hoisted out of the component or wrapped in useMemo.
  const myArray = [1, 2, 3];
  return <MemoizedComponent myArray={myArray} />;
};
instead of:
const Parent = () => {
  return <MemoizedComponent myArray={[1, 2, 3]} />;
};
React.memo is also commonly circumvented by another non-scalar type: functions. Consider the following case:
<MemoizedAvatar
  name="Tejas"
  url="https://github.com/tejasq.png"
  onChange={() => save()}
/>
The props name, url, and onChange don’t appear to change or depend on enclosing state; they all look constant. But if we compare the props between renders, we see the following:
"Tejas" === "Tejas"; // <- `name` prop; true
"https://github.com/tejasq.png" === "https://github.com/tejasq.png"; // <- `url` prop; true
(() => save()) === (() => save()); // <- `onChange` prop; false
Once again, this is because functions are compared by reference, and each render creates a brand-new function instance. Remember: as long as any prop differs between renders, our component will not benefit from the memoization. We can combat this in two ways:
By using the useCallback hook inside MemoizedAvatar’s parent:
const Parent = () => {
  const onAvatarChange = useCallback(() => save(), []);

  return (
    <MemoizedAvatar
      name="Tejas"
      url="https://github.com/tejasq.png"
      onChange={onAvatarChange}
    />
  );
};
Now, we can be confident that onAvatarChange will never change unless one of the values in its dependency array (the second argument) changes; and since that array is empty, the reference is stable and our memoization is reliable. This is the recommended way to memoize components that have functions as props. Another way to get around this is as follows.
By passing a comparison function to React.memo: React.memo takes a second argument, a comparison function, which is used to compare the previous props to the next props. If the comparison function returns true, then the component will not re-render—indicating the props are equal. If it returns false, then the component will re-render. We can use this to our advantage to ensure that our component only re-renders when the props we care about change:
const MemoizedAvatar = React.memo(Avatar, (prevProps, nextProps) => {
  return (
    prevProps.name === nextProps.name &&
    prevProps.url === nextProps.url &&
    prevProps.onChange === nextProps.onChange
  );
});
In this scenario, however, nextProps.onChange will contain a new function reference every time the parent re-renders, because a new inline function is created on each render. We can get around this by serializing the function in the comparison function:
const MemoizedAvatar = React.memo(Avatar, (prevProps, nextProps) => {
  return (
    prevProps.name === nextProps.name &&
    prevProps.url === nextProps.url &&
    prevProps.onChange.toString() === nextProps.onChange.toString()
  );
});
Now, comparing strings to strings, we can be confident that onChange will never change unless the actual function implementation represented as a string changes. This is a bit of a hack because there are usually better ways to compare functions than serializing them to strings: like comparing references to functions instead.
We could literally rewrite our hack to compare references to onChange instead of serializing it to a string like so:
const MemoizedAvatar = React.memo(Avatar, (prevProps, nextProps) => {
  return (
    prevProps.name === nextProps.name &&
    prevProps.url === nextProps.url &&
    prevProps.onChange === nextProps.onChange
  );
});
Regardless, we’re doing this to illustrate the point of shallow comparison in a dependency array. Please be aware that the useCallback hook is the recommended way to memoize functions passed as props to components instead of serializing them to strings.
Whew! Great! This now means that our memoized components will never unnecessarily re-render. Right? Wrong! There’s one more thing we need to be aware of.
React uses React.memo as a hint to its reconciler that we don’t want our components to re-render if their props stay the same. These are just hints to React; ultimately, what React does is up to React. To echo back to the beginning of this book, React is intended to be a declarative abstraction of our user interface: we describe what we want, and React figures out how best to do it. React.memo is a part of this.
React.memo does not guarantee consistently avoided re-renders. This is because React may decide to re-render a memoized component for various reasons, such as changes to the component tree or changes to the global state of the application.
To understand why React.memo does not guarantee consistently avoided re-renders, let’s take a look at some code snippets from React’s source code.
First, let’s look at the implementation of React.memo:
function memo(type, compare) {
  return {
    $$typeof: REACT_MEMO_TYPE,
    type,
    compare: compare === undefined ? null : compare,
  };
}
In this implementation, React.memo returns a new object that represents the memoized component. The object has a $$typeof property that identifies it as a memoized component, a type property that references the original component, and a compare property that specifies the comparison function to use for memoization.
Next, let’s look at the implementation of the React reconciler’s memoization algorithm:
function performWorkOnRoot(root, expirationTime, isExpired) {
  // ...
  const updateExpirationTimeBeforeCommit = workInProgress.expirationTime;
  // ...
  if (workInProgress.memoizedProps !== null && !isExpired) {
    // Memoized component, compare new props to memoized props
    if (nextChildren === null || !shouldSetTextContent()) {
      bailoutOnAlreadyFinishedWork(
        workInProgress,
        expirationTime,
        updateExpirationTimeBeforeCommit
      );
      return;
    }
  }
  // ...
  // Render new children
  reconcileChildren(current, workInProgress, nextChildren, expirationTime);
  // ...
}
In this implementation, the React reconciler checks whether a component is memoized by comparing its current props to its previous props. If the props have not changed and the component is not expired, the reconciler will skip rendering the component and reuse the previous output instead.
However, there are cases where the React reconciler may still re-render a memoized component even if its props have not changed. For example, if the component’s parent has been updated or if the global state of the application has changed, the memoized component may need to be re-rendered to reflect these changes. Additionally, if a component is expired, the React reconciler will ignore its memoization and re-render it from scratch.
In React, a component is considered “expired” when its previous render is no longer valid and cannot be reused. This can happen for a variety of reasons, such as changes to the component’s props, state, or context, or changes to the global state of the application.
When a component is expired, React will not reuse its previous output and will instead re-render the component from scratch. This can be a performance hit, especially for complex components, as it requires more computation and can cause unnecessary re-renders.
Thus, React.memo does not guarantee consistently avoided re-renders. Developers should use React.memo effectively and understand its limitations to optimize the performance of their React applications and provide a better user experience.
useMemo

What React.memo does to components, the useMemo hook does to values inside components. Let’s briefly explore useMemo while we’re here. Consider a component:
const People = ({ unsortedPeople }) => {
  const [name, setName] = useState("");
  // note: .sort() sorts in place, mutating the unsortedPeople prop
  const sortedPeople = unsortedPeople.sort((a, b) => b.age - a.age);

  return (
    <div>
      <div>
        Enter your name:{" "}
        <input
          type="text"
          placeholder="Obinna Ekwuno"
          onChange={(e) => setName(e.target.value)}
        />
      </div>
      <h1>Hi, {name}! Here's a list of people sorted by age!</h1>
      <ul>
        {sortedPeople.map((p) => (
          <li key={p.id}>
            {p.name}, age {p.age}
          </li>
        ))}
      </ul>
    </div>
  );
};
This component is going to slow down our application because sorting is comparatively expensive work: a comparison sort performs on the order of n log n comparisons, so sorting a list of 1,000,000 people takes roughly 20 million comparisons. In computer science, this is called O(n log n) time complexity.
What makes things even worse is that this component re-renders whenever its state updates, which is on every keystroke inside the input field for a person’s name. If the name is 5 characters long, we repeat that entire expensive sort five times for no reason. We can avoid this using useMemo.
Let’s rewrite that code snippet a little bit:
const People = ({ unsortedPeople }) => {
  const [name, setName] = useState("");
  const sortedPeople = useMemo(
    // copy before sorting so we don't mutate the unsortedPeople prop
    () => [...unsortedPeople].sort((a, b) => b.age - a.age),
    [unsortedPeople]
  );

  return (
    <div>
      <div>
        Enter your name:{" "}
        <input
          type="text"
          placeholder="Obinna Ekwuno"
          onChange={(e) => setName(e.target.value)}
        />
      </div>
      <h1>Hi, {name}! Here's a list of people sorted by age!</h1>
      <ul>
        {sortedPeople.map((p) => (
          <li key={p.id}>
            {p.name}, age {p.age}
          </li>
        ))}
      </ul>
    </div>
  );
};
There! Much better! We wrapped the computation of sortedPeople in a function passed as the first argument to useMemo. The second argument is an array of dependencies that, if changed, cause the list to be re-sorted. Since the array contains only unsortedPeople, the list is sorted once, and again only when the list of people changes, not whenever someone types in the name input field. This is a great example of using useMemo to avoid unnecessary recomputation.
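The dependency check behind this can be made concrete with a hypothetical plain-JavaScript sketch of useMemo’s caching behavior (not React’s actual source); createUseMemoSketch and sortPeople are invented names for illustration:

```javascript
// Hypothetical sketch of useMemo's dependency check: the factory only
// re-runs when some dependency fails an Object.is comparison against
// the dependencies from the previous "render".
function createUseMemoSketch() {
  let prevDeps = null;
  let prevValue;
  return (factory, deps) => {
    const changed =
      prevDeps === null || deps.some((dep, i) => !Object.is(dep, prevDeps[i]));
    if (changed) {
      prevValue = factory();
      prevDeps = deps;
    }
    return prevValue;
  };
}

const useMemoSketch = createUseMemoSketch();
const people = [{ age: 40 }, { age: 25 }];
let sortCount = 0;

const sortPeople = () => {
  sortCount += 1; // tracks how often the expensive sort really runs
  return [...people].sort((a, b) => b.age - a.age);
};

useMemoSketch(sortPeople, [people]); // first render: sorts the list
useMemoSketch(sortPeople, [people]); // keystroke re-render: cached, no sort
// sortCount is 1: the `people` reference didn't change, so no re-sort
```

Note that the check is by reference, just like React.memo’s shallow comparison: a brand-new array with identical contents would still trigger a re-sort.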
useMemo considered harmful

While it might now be tempting to wrap all variable declarations inside a component with useMemo, this is not a good idea. useMemo is a great tool for memoizing expensive computations, but it adds little value when memoizing cheap scalar values.
In JavaScript, scalar values are primitive values that are not objects. Scalar values include undefined, null, booleans, numbers, and strings. These values are immutable, meaning that they cannot be modified once they are created.
A string is a scalar value, but an array is not. A number is a scalar value, but an object is not. In these cases, useMemo is not necessary because the value is not a reference to another value. It’s a value in and of itself—and in this case, not only is it inexpensive to compute, but we also want to recompute the value every time the component re-renders.
Consider the following example:
const MyComponent = () => {
  const dateOfBirth = "1993-02-19";
  const isAdult =
    new Date().getFullYear() - new Date(dateOfBirth).getFullYear() >= 18;

  if (isAdult) {
    return <h1>You are an adult!</h1>;
  } else {
    return <h1>You are a minor!</h1>;
  }
};
We’re not using useMemo here anywhere, mainly because the component is stateless. It’s a pure function that takes a date of birth and returns some markup. This is good! But what if we have some input that triggers re-renders, like this:
const MyComponent = () => {
  const [birthYear, setBirthYear] = useState(1993);
  const isAdult = new Date().getFullYear() - birthYear >= 18;

  return (
    <div>
      <label>
        Birth year:
        <input
          type="number"
          value={birthYear}
          onChange={(e) => setBirthYear(e.target.value)}
        />
      </label>
      {isAdult ? <h1>You are an adult!</h1> : <h1>You are a minor!</h1>}
    </div>
  );
};
Now, we’re recomputing new Date() on every keystroke. Let’s fix this with useMemo:
const MyComponent = () => {
  const [birthYear, setBirthYear] = useState(1993);
  // `today` must be declared before we use it to compute isAdult
  const today = useMemo(() => new Date(), []);
  const isAdult = today.getFullYear() - birthYear >= 18;

  return (
    <div>
      <label>
        Birth year:
        <input
          type="number"
          value={birthYear}
          onChange={(e) => setBirthYear(e.target.value)}
        />
      </label>
      {isAdult ? <h1>You are an adult!</h1> : <h1>You are a minor!</h1>}
    </div>
  );
};
This is good because today will be a reference to the same Date object every time the component re-renders, assuming the component is always re-rendered on the same day.
There’s a slight edge case if the user’s clock crosses midnight while they’re using this component, but it’s rare enough to ignore for now. In real production code, of course, we’d handle it properly.
This example is meant to facilitate a bigger question: should we wrap isAdult’s value in useMemo? What happens if we do? The answer is that we shouldn’t, because isAdult is a scalar value (a boolean) and it’s not expensive to compute. We do call .getFullYear a couple of times, but we trust the JavaScript engine and the React runtime to handle that for us. It’s a simple assignment with no further computation like sorting, filtering, or mapping.
In this case, we should not use useMemo, because the overhead of useMemo itself (importing it, calling it, passing in the dependencies, and comparing those dependencies on every render to see if the value should be recomputed) is more likely to slow our app down than speed it up. Instead, we assign the value directly and trust React to re-render our component intelligently with its own optimizations.
Our applications now enjoy the performance benefits of faster re-renders even in the face of heavy computations, but can we do more? In the next section, let’s take a look at shrinking the amount of JavaScript our users have to download using code splitting with React’s lazy loading primitives.
As our applications grow, we accumulate a lot of JavaScript. Our users then download these massive JavaScript bundles, sometimes running into double digits of megabytes, only to use a small portion of the code. This is a problem because it slows down our users’ initial load time, and, unless the bundle is cached, it slows down subsequent page loads as well.
Shipping too much JavaScript can be problematic for users with limited internet capabilities, as it can lead to slower page load times, increased data usage, and decreased accessibility. Let’s explore why shipping too much JavaScript is a problem, how it affects users, and what we can do to mitigate these issues.
One of the main problems with shipping too much JavaScript is that it can slow down page load times. JavaScript files are typically larger than other types of web assets, such as HTML and CSS, and require more processing time to execute. This can lead to longer page load times, especially on slower internet connections or older devices.
For example, consider the following code snippet that loads a large JavaScript file on page load:
<!DOCTYPE html>
<html>
  <head>
    <title>My Website</title>
    <script src="https://example.com/large.js"></script>
  </head>
  <body>
    <!-- Page content goes here -->
  </body>
</html>
In this example, the large.js file is loaded in the <head> of the page, which means that it will be executed before any other content on the page. This can lead to slower page load times, especially on slower internet connections or older devices.
Another problem with shipping too much JavaScript is that it can increase data usage. JavaScript files are typically larger than other types of web assets, which means that they require more data to be transferred over the network. This can be a problem for users with limited data plans or slow internet connections, as it can lead to increased costs and slower page load times.
In our previous example, the large.js file is loaded in the <head> of the page, which means that it is downloaded and executed before any other content on the page. This leads to increased data usage, which is especially costly for users on limited data plans or slow connections.
Finally, shipping too much JavaScript can also decrease accessibility. Users with older devices or slower internet connections may not be able to load or execute large JavaScript files, which can lead to broken functionality or incomplete user experiences. Additionally, users with disabilities who rely on assistive technologies may also be affected by excessive JavaScript, as it can interfere with the operation of these tools and make it difficult or impossible for them to access content on the page.
To mitigate these issues, we can take several steps to reduce the amount of JavaScript that is shipped to users. One approach is to use code splitting to load only the JavaScript that is needed for a particular page or feature. This can help reduce page load times and data usage by only loading the necessary code.
For example, consider the following code snippet that uses code splitting to load only the JavaScript that is needed for a particular page:
import("./large.js").then((module) => {
  // Use module here
});
In this example, the import() function is used to asynchronously load the large.js file only when it is needed. This can help reduce page load times and data usage by only loading the necessary code.
Another approach is to use lazy loading to defer the loading of non-critical JavaScript until after the page has loaded. This can help reduce page load times and data usage by loading non-critical code only when it is needed.
For example, consider the following code snippet that uses lazy loading to defer the loading of non-critical JavaScript:
<!DOCTYPE html>
<html>
  <head>
    <title>My Website</title>
  </head>
  <body>
    <!-- Page content goes here -->
    <button id="load-more">Load more content</button>
    <script>
      document.getElementById("load-more").addEventListener("click", () => {
        import("./non-critical.js").then((module) => {
          // Use module here
        });
      });
    </script>
  </body>
</html>
In this example, the import() function is used to asynchronously load the non-critical.js file only when the “Load more content” button is clicked. This can help reduce page load times and data usage by loading non-critical code only when it is needed.
Thankfully, React has a solution that makes this even more straightforward: lazy loading using React.lazy and Suspense. Let’s take a look at how we can use these to improve our application’s performance.
Lazy loading is a technique that allows us to load a component only when it’s needed, like with the dynamic import above. This is useful for large applications that have many components that are not needed on the initial render. For example, if we have a large application with a collapsible sidebar that has a list of links to other pages, we might not want to load the full sidebar if it’s collapsed on first load. Instead, we can load it only when the user toggles the sidebar.
Let’s explore the following code sample:
import { Sidebar } from "./Sidebar"; // 22MB to import

const MyComponent = ({ initialSidebarState }) => {
  const [showSidebar, setShowSidebar] = useState(initialSidebarState);

  return (
    <div>
      <button onClick={() => setShowSidebar(!showSidebar)}>
        Toggle sidebar
      </button>
      {showSidebar && <Sidebar />}
    </div>
  );
};
In this example, let’s imagine that <Sidebar /> is 22MB of JavaScript. That’s a lot to download, parse, and execute, and it isn’t necessary on the initial render if the sidebar is collapsed. Instead, we can use React.lazy to lazy load the component only if showSidebar is true; that is, only if we need it, as below:
import { lazy, Suspense, useState } from "react";

const Sidebar = lazy(() => import("./Sidebar"));

const MyComponent = ({ initialSidebarState }) => {
  const [showSidebar, setShowSidebar] = useState(initialSidebarState);

  return (
    <div>
      <button onClick={() => setShowSidebar(!showSidebar)}>
        Toggle sidebar
      </button>
      <Suspense>{showSidebar && <Sidebar />}</Suspense>
    </div>
  );
};
Instead of statically importing ./Sidebar, we dynamically import it—that is, we pass a function to lazy that returns a promise that resolves to the imported module. A dynamic import returns a promise because the module may not be available immediately: it may need to be downloaded from the server first. The function we pass to lazy, which triggers the import, is never called until the underlying component (in this case, Sidebar) is about to be rendered. This way, we avoid shipping the 22MB sidebar until we actually render <Sidebar />.
You may have also noticed another new import: Suspense. What’s that doing there? We use Suspense to wrap the component in the tree. Suspense is a component that allows us to show a fallback component while the promise is resolving (read: as the sidebar is downloading). In the snippet above, we’re not showing a fallback component but we could if we wanted to by adding a fallback prop to Suspense like so:
import { lazy, Suspense, useState } from "react";

const Sidebar = lazy(() => import("./Sidebar"));

const MyComponent = () => {
  const [showSidebar, setShowSidebar] = useState(false);

  return (
    <div>
      <button onClick={() => setShowSidebar(!showSidebar)}>
        Toggle sidebar
      </button>
      <Suspense fallback={<p>Loading...</p>}>
        {showSidebar && <Sidebar />}
      </Suspense>
      <main>
        <p>Hello hello welcome, this is the app's main area</p>
      </main>
    </div>
  );
};
Now, when the user clicks the button to toggle the sidebar, they’ll see “Loading...” while the sidebar is loaded and rendered. This is a great way to improve our application’s performance by only loading the code we need when we need it and still providing immediate feedback to the user.
React Suspense works like a try/catch block. You know how you can throw an exception from literally anywhere in your code, and then catch it with a catch block somewhere else—even in a different module? Well, Suspense works the same way. You can place lazy-loaded and asynchronous primitives anywhere in your component tree, and then catch them with a Suspense component anywhere above it in the tree, even if your suspense boundary is in a completely different file.
Knowing this, we have the power to choose where we want to show the loading state for our 22MB sidebar. For example, we can hide the entire application while the sidebar is loading—which is a pretty bad idea because we block our entire app’s information from the user just for a sidebar—or we can show a loading state for the sidebar only. Let’s take a look at how we can do the former (even though we shouldn’t) just to understand Suspense’s capabilities.
import { lazy, Suspense, useState } from "react";

const Sidebar = lazy(() => import("./Sidebar"));

const MyComponent = () => {
  const [showSidebar, setShowSidebar] = useState(false);

  return (
    <Suspense fallback={<p>Loading...</p>}>
      <div>
        <button onClick={() => setShowSidebar(!showSidebar)}>
          Toggle sidebar
        </button>
        {showSidebar && <Sidebar />}
        <main>
          <p>Hello hello welcome, this is the app's main area</p>
        </main>
      </div>
    </Suspense>
  );
};
By wrapping the entire component in Suspense, we render the fallback until all asynchronous children (promises) are resolved. This means that the entire application is hidden until the sidebar is loaded. This can be useful if we want to wait until everything’s ready to reveal the user interface to the user, but in this case might not be the best idea because the user is left wondering what’s going on and they can’t interact with the application at all.
This is why we should only use Suspense to wrap the components that need to be lazy loaded, like this:
import { lazy, Suspense, useState } from "react";

const Sidebar = lazy(() => import("./Sidebar"));

const MyComponent = () => {
  const [showSidebar, setShowSidebar] = useState(false);

  return (
    <div>
      <button onClick={() => setShowSidebar(!showSidebar)}>
        Toggle sidebar
      </button>
      <Suspense fallback={<p>Loading...</p>}>
        {showSidebar && <Sidebar />}
      </Suspense>
      <main>
        <p>Hello hello welcome, this is the app's main area</p>
      </main>
    </div>
  );
};
The suspense boundary is a very powerful primitive that can remedy layout shift and make user interfaces more responsive and intuitive. It’s a great tool to have in your arsenal. Moreover, if high-quality skeleton UI is used in the fallback, we can further guide our users to understand what’s going on and what to expect while our lazy loaded components load, thereby orienting them to the interface they’re about to interact with before it’s ready. Taking advantage of all of this is a great way to improve our applications’ performance and fluently get the most out of React.
Next, we’ll look at another interesting question that many React developers ask: when should we use useState vs. useReducer?
React exposes two hooks for managing state: useState and useReducer. Both are used to manage state in a component; the difference is that useState is better suited to managing a single, simple piece of state, while useReducer is better suited to managing more complex state. Let's take a look at how we can use useState to manage state in a component.
import { useState } from "react";

const MyComponent = () => {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
};
In the above example, we’re using useState to manage a single piece of state: count. But what if our state’s a little more complex?
import { useState } from "react";

const MyComponent = () => {
  const [state, setState] = useState({
    count: 0,
    name: "Tejumma",
    age: 30,
  });

  return (
    <div>
      <p>Count: {state.count}</p>
      <p>Name: {state.name}</p>
      <p>Age: {state.age}</p>
      <button onClick={() => setState({ ...state, count: state.count + 1 })}>
        Increment
      </button>
    </div>
  );
};
Now, we can see that our state is a little more complex. We have a count, a name, and an age. We can increment the count by clicking the button, which sets the state to a new object that has the same properties as the previous state, but with the count incremented by 1. This is a very common pattern in React. The problem with it is that it can raise the possibility of bugs. For example, if we don’t carefully spread the old state, we might accidentally overwrite some of the state’s properties.
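To see the risk concretely, here is a small plain-JavaScript sketch (no React involved, just object literals matching the state shape above) of what happens when the spread is forgotten:

```javascript
// A minimal simulation of the spread-state pitfall. The state shape
// matches the component above.
const state = { count: 0, name: "Tejumma", age: 30 };

// Correct: spread the old state so unrelated properties survive.
const next = { ...state, count: state.count + 1 };

// Buggy: forgetting the spread silently drops `name` and `age`.
const buggy = { count: state.count + 1 };

console.log(next);  // count incremented, name and age preserved
console.log(buggy); // { count: 1 } — name and age are gone
```

The bug is silent: nothing throws, the component simply re-renders with properties missing, which is exactly why a reducer that centralizes the update logic can be safer.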
At this point, you should know that useState uses useReducer internally. You can think of useState as a higher-level abstraction of useReducer. In fact, one can reimplement useState with useReducer if they so wish!
Seriously, you’d just do this:
import { useReducer } from "react";

function useState(initialState) {
  const [state, dispatch] = useReducer(
    (state, newValue) => newValue,
    initialState
  );
  return [state, dispatch];
}
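One caveat: React's actual useState also supports functional updates, like setCount((c) => c + 1), which a reducer that just returns the new value doesn't handle. A slightly more faithful sketch of the reducer, shown here as a plain function so it can run standalone:

```javascript
// Handles both forms that React's setState accepts: a direct value, or
// an updater function that receives the previous state.
const setStateReducer = (state, action) =>
  typeof action === "function" ? action(state) : action;

console.log(setStateReducer(0, 5));            // 5 — direct value
console.log(setStateReducer(5, (c) => c + 1)); // 6 — updater function
```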
Let’s look at the same example as above, but implemented with useReducer instead.
import { useReducer } from "react";

const initialState = {
  count: 0,
  name: "Tejumma",
  age: 30,
};

const reducer = (state, action) => {
  switch (action.type) {
    case "increment":
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
};

const MyComponent = () => {
  const [state, dispatch] = useReducer(reducer, initialState);

  return (
    <div>
      <p>Count: {state.count}</p>
      <p>Name: {state.name}</p>
      <p>Age: {state.age}</p>
      <button onClick={() => dispatch({ type: "increment" })}>Increment</button>
    </div>
  );
};
Now, some would say this is a tad more verbose than useState, and many would agree; but this is to be expected whenever we go a level lower in an abstraction stack: the lower the abstraction, the more verbose the code. After all, abstractions are intended to replace complex logic with syntactic sugar in most cases. So, since we can do the same thing with useState as we can with useReducer, why don't we always just use useState, since it's simpler?
To answer this question, there are four big benefits to using useReducer:
It separates the logic of updating state from the component. Its accompanying reducer function can be tested in isolation, and it can be reused in other components. This is a great way to keep our components clean and simple and embrace the single responsibility principle.
We can test the reducer like this:
describe("reducer", () => {
  test("should increment count when given an increment action", () => {
    const initialState = {
      count: 0,
      name: "Tejumma",
      age: 30,
    };
    const action = { type: "increment" };
    const expectedState = {
      count: 1,
      name: "Tejumma",
      age: 30,
    };
    const actualState = reducer(initialState, action);
    expect(actualState).toEqual(expectedState);
  });

  test("should return the same state object when given an unknown action", () => {
    const initialState = {
      count: 0,
      name: "Tejumma",
      age: 30,
    };
    const action = { type: "unknown" };
    const expectedState = initialState;
    const actualState = reducer(initialState, action);
    expect(actualState).toBe(expectedState);
  });
});
In this example, we’re testing two different scenarios: one where the increment action is dispatched to the reducer, and one where an unknown action is dispatched.
In the first test, we’re creating an initial state object with a count value of 0, and an increment action object. We’re then expecting the count value in the resulting state object to be incremented to 1. We use the toEqual matcher to compare the expected and actual state objects.
In the second test, we’re creating an initial state object with a count value of 0, and an unknown action object. We’re then expecting the resulting state object to be the same as the initial state object. We use the toBe matcher to compare the expected and actual state objects, since we’re testing for reference equality.
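The distinction between toBe and toEqual is worth pausing on. Outside of Jest, the same distinction looks like this (a rough sketch: toBe behaves like Object.is, while Jest's toEqual performs a much more thorough deep comparison than the JSON.stringify shortcut used here):

```javascript
// Two structurally identical objects at different references.
const a = { count: 0 };
const b = { count: 0 };

console.log(Object.is(a, b)); // false — different references (toBe would fail)
console.log(JSON.stringify(a) === JSON.stringify(b)); // true — same shape (toEqual would pass)
console.log(Object.is(a, a)); // true — same reference (toBe would pass)
```

This is why the second test above matters: a reducer that returns the same object for unknown actions lets React skip re-rendering entirely, and only a reference-equality check can verify that.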
By testing our reducer in this way, we can ensure that it behaves correctly and produces the expected output when given different input scenarios.
The dispatch function returned from useReducer is stable and doesn't change between renders. This means that we can safely pass it down to child components without worrying about it changing and triggering expensive rerenders, which we learned about in the first half of this chapter on memoization. This can be a great performance win in most cases.
Our state and the way it changes is always explicit with useReducer, and some would argue that useState can obfuscate the overall state update flow of a component through layers of JSX trees.
useReducer follows an event-sourced model, meaning it can be used to model events that happen in our application, which we can then keep track of in some type of audit log. This audit log can be used to replay events in our application to reproduce bugs or to implement time-travel debugging. It also enables some powerful patterns like undo/redo, optimistic updates, and analytics tracking of common user actions across our interface.
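To sketch the event-sourcing idea concretely: keep a log of dispatched actions, and derive any past state by replaying the log through the reducer. The reducer and state shape below are illustrative, not the chapter's example, and a real undo/redo implementation would need more care (action batching, log size limits, and so on):

```javascript
// A pure reducer, as used with useReducer.
const reducer = (state, action) =>
  action.type === "increment" ? { ...state, count: state.count + 1 } : state;

const initialState = { count: 0 };
const log = [];

// Record every action before applying it.
const dispatchAndLog = (state, action) => {
  log.push(action);
  return reducer(state, action);
};

let state = initialState;
state = dispatchAndLog(state, { type: "increment" });
state = dispatchAndLog(state, { type: "increment" });

// "Undo" = replay all but the last action from the initial state.
const undone = log.slice(0, -1).reduce(reducer, initialState);
console.log(state.count);  // 2
console.log(undone.count); // 1
```

Because the reducer is a pure function, replaying the same log always reproduces the same state, which is what makes audit logs and time-travel debugging possible.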
While useReducer is a great tool to have in your arsenal, it’s not always necessary. In fact, it’s often overkill for most use cases. So when should we use useState vs. useReducer? The answer is that it depends on the complexity of your state. But hopefully with all of this information, you can make a more informed decision about which one to use in your application.
Whew! What a chapter! Let’s wrap things up and summarize what we learned.
Throughout this chapter, we’ve discussed various aspects of React, including memoization, lazy loading, reducers, and state management. We’ve explored the advantages and potential drawbacks of different approaches to these topics and how they can impact the performance and maintainability of React applications.
We started by discussing memoization in React and its benefits for optimizing component rendering. We looked at the React.memo function and how it can be used to prevent unnecessary re-renders of components. We also examined some potential issues with memoization, such as stale state and the need to carefully manage dependencies.
Next, we talked about lazy loading in React and how it can be used to defer the loading of certain components or resources until they are actually needed. We looked at the React.lazy and Suspense components and how they can be used to implement lazy loading in a React application. We also discussed the tradeoffs of lazy loading, such as increased complexity and potential performance issues.
We then moved on to reducers and how they can be used for state management in React. We explored the differences between useState and useReducer and discussed the advantages of using a centralized reducer function for managing state updates.
Throughout the chapter, we used code examples from our own implementations to illustrate these concepts, exploring how they work under the hood and how they can impact the performance and maintainability of React applications.

In summary, this chapter covered a range of topics: memoization, lazy loading, reducers, and state management. Through code examples and in-depth explanations, we gained a deeper understanding of the advantages and potential drawbacks of each approach and how they can be applied in real-world React applications.
Let’s ask ourselves a few questions to test our understanding of the concepts we learned in this chapter:
In the next chapter, we’ll look at some newer features in React that have been introduced in the past year or so. We’ll look at how we can use React’s new server components to further optimize our apps and minimize first-load JS—even to 0kB!
React has made significant progress since its inception. Although it started as a client-side library, the demand for server-side rendering has grown over time for reasons we will come to understand in this chapter. Together, we will explore server-side React and understand how it differs from client-only React, and how it can be used to level up our React applications.
As we’ve discussed in earlier chapters, React was initially developed by Meta to address the need for efficient and scalable user interfaces (UIs). We’ve looked at how it does this through the virtual DOM back in chapter 3, which enables developers to create and manage UI components with ease. React’s client-side approach unlocked fast, responsive user experiences across the web. However as the web continued to evolve, the limitations of client-side rendering became more apparent.
Client-side rendering (CSR) has been the primary approach for building user interfaces with React since it was first open sourced in 2013. Four years later, React 16 shipped a rewritten "ReactDOMServer" renderer that allowed developers to render React components on the server and send the resulting HTML to the client, instead of rendering the components in the browser. This made it easier to build server-rendered React applications without the need for additional libraries or complex configuration.
React’s prior client-only approach allowed us to build applications that provide fast and responsive user experiences, but as web applications have grown more complex, the limitations of client-only rendering (sometimes called “CSR” or client-side rendering) have become more apparent. In this section, we will explore the limitations of client-side rendering and why server-side rendering (SSR) has become necessary for modern web applications.
One of the significant limitations of client-side rendering is that search engine crawlers may not correctly index the content, as some of them do not execute JavaScript. This can result in poor search engine optimization (SEO), as well as accessibility issues for users who rely on screen readers: without JavaScript, client-only React apps are just a blank page, sometimes called an "app shell".
Consider the following example:
import React, { useEffect, useState } from "react";

const Home = () => {
  const [data, setData] = useState([]);

  useEffect(() => {
    fetch("https://api.example.com/data")
      .then((response) => response.json())
      .then((data) => setData(data));
  }, []);

  return (
    <div>
      {data.map((item) => (
        <div key={item.id}>{item.title}</div>
      ))}
    </div>
  );
};

export default Home;
In this example, we are fetching data from an API and rendering it on the client-side. We can tell it’s the client side because we are using the useEffect hook to fetch the data, and the useState hook to store the data in state. These hooks execute inside a browser (a client) only.
A serious limitation with this is that some search engine crawlers will not be able to see this content unless we implement server-side rendering. Instead, they’ll see a blank screen or a fallback message which can result in poor SEO.
Client-side rendering can have performance issues, especially on slower devices and networks. This is because of network waterfalls, wherein the initial page load is blocked by the amount of JavaScript that needs to be downloaded, parsed, and executed by the browser before the website or web app becomes visible. In cases where network connectivity is a limited resource, this would render a website or application completely unresponsive for significant amounts of time. Consider the following example:
import React from "react";

const Home = () => {
  const handleClick = () => {
    alert("Button clicked!");
  };

  return (
    <div>
      <h1>Welcome to my website</h1>
      <p>Click the button below to see an alert</p>
      <button onClick={handleClick}>Click me</button>
    </div>
  );
};

export default Home;
In this example, we are using a simple button click event to trigger an alert. However, this code still requires the entire React library to be downloaded and parsed by the browser before the button can be clicked. This can result in a slower initial page load, especially on slower devices and networks.
As of React 17, the bundle size of React and React-DOM is as follows:
These sizes are for the latest version of React at the time of writing and may vary depending on the version and configuration of React that you are using today. Regardless, it’s important to understand from these data that the development version of React DOM is significantly larger than the production version, and the difference between the development and production versions of React itself is also substantial.
This means that even in production environments, our users have to download around 175 KB of JavaScript just for React alone (i.e., React + React DOM), before downloading, parsing, and executing the rest of our application's code. This can result in a slower initial page load, especially on slower devices and networks, and potentially frustrated users. Moreover, because React essentially owns the DOM and we have no user interface without React in client-only applications, our users have no choice but to wait for React and React DOM to load first before the rest of our application does.
In contrast, a server-rendered application would stream rendered HTML to the client before any JavaScript downloads, enabling users to get meaningful content immediately. It would then load relevant JavaScript after the initial page renders, probably while the user is still orienting themselves with a user interface through a process called “hydration”. More on this in the coming sections.
Initially streaming rendered HTML and then hydrating the DOM with JavaScript allows users to interact with the application sooner, resulting in a better user experience: it is immediately available to the user without them having to wait for any extras—that they may or may not even need—to load.
Client-side rendering can also have security issues, especially when dealing with sensitive data. This is because all of the application’s code is downloaded to the client’s browser, making it vulnerable to attacks such as cross-site scripting (XSS) and cross-site request forgery (CSRF).
Consider the following example:
import React, { useState } from "react";

const Account = () => {
  const [balance, setBalance] = useState(100);

  const handleWithdrawal = (amount) => {
    setBalance(balance - amount);
  };

  return (
    <div>
      <h1>Account Balance: {balance}</h1>
      <button onClick={() => handleWithdrawal(10)}>Withdraw $10</button>
      <button onClick={() => handleWithdrawal(50)}>Withdraw $50</button>
      <button onClick={() => handleWithdrawal(100)}>Withdraw $100</button>
    </div>
  );
};

export default Account;
In this example, we have an account component that allows users to withdraw funds. However, if this component is rendered on the client-side, it could be vulnerable to CSRF attacks because the server and client do not have a shared common secret or contract between them. To speak poetically, the client and server don’t know each other. This could allow an attacker to steal funds or manipulate the application’s data.
If we used server rendering, we could mitigate these security issues by rendering the component on the server with a special secret token generated by the server and then sending HTML containing the secret token to the client. The client would then send this token back to the server that issued it, establishing a secure bidirectional contract. This would allow the server to verify that the request is coming from the correct client that it has pre-authorized and not an unknown one, which could possibly be a malicious attacker. Without server rendering, this becomes very hard, if not impossible.
For these reasons, server-side rendering (SSR) has emerged as a critical technique for improving the performance and user experience of web applications. With server rendering, applications can be optimized for speed and accessibility, resulting in faster load times, better SEO, and improved user engagement. Server-side rendering enables applications to be rendered on the server and sent to the client as fully formed HTML pages.
Let’s dive deeper into the benefits of server rendering. First and foremost, server rendering can significantly improve the speed and accessibility of web applications. By rendering applications on the server, developers can ensure that the HTML and CSS are optimized for performance, resulting in faster load times and better overall performance.
Server rendering can also improve the SEO of web applications. Because search engines are primarily concerned with HTML and text content, server-rendered pages are easier for search engines to crawl and index. This can lead to higher rankings and more traffic for web applications; especially for content-rich web applications like blogs.
Finally, server rendering can improve the overall user experience of web applications. Because server-rendered pages are fully formed HTML pages, they are accessible to all users, including those with slow or unreliable internet connections. This can result in higher engagement and retention rates for web applications. Server rendering has been around for many years, but it has only recently gained widespread adoption among web developers. React, in particular, has been instrumental in popularizing server rendering. There are now several frameworks available for implementing server-rendered React applications, which we will eventually cover in our chapter on frameworks.
With SSR, the initial HTML of a page is generated on the server and sent to the client, allowing the browser to start rendering the page more quickly compared to client-side rendering (CSR), where the HTML is generated in the browser using JavaScript. This results in better perceived performance and faster time to first meaningful paint (FMP).
However, server-rendered HTML is static and lacks interactivity as it does not have any JavaScript initially loaded, and includes no event listeners or other dynamic functionality attached. To enable user interactions and other dynamic features, the static HTML must be “hydrated” with the necessary JavaScript code. Let’s understand the concept of hydration a little better.
Hydration is a term used to describe the process of attaching event listeners and other JavaScript functionality to static HTML that is generated on the server and sent to the client. The goal of hydration is to enable a server-rendered application to become fully interactive after being loaded in the browser, providing users with a fast and smooth experience.
In a React application, hydration happens after a client downloads a server-rendered React application. Then, the following steps occur:
Loading the client bundle: While the browser is rendering the static HTML, it also downloads and parses the JavaScript bundle that contains the application’s code. This bundle includes the React components and any other code necessary for the application’s functionality.
Attaching event listeners: Once the JavaScript bundle is loaded, React “hydrates” the static HTML by attaching event listeners and other dynamic functionality to the DOM elements. This is typically done using the ReactDOM.hydrate function, which takes the root React component and the DOM container as arguments. Hydration essentially transforms the static HTML into a fully interactive React application.
After the hydration process is complete, the application is fully interactive and can respond to user input, fetch data, and update the DOM as necessary.
During hydration, React matches the structure of the DOM elements in the static HTML to the structure defined by the React components via JSX. It is crucial that the structure generated by the React components matches the structure of the static HTML. If there is a mismatch, React will not be able to correctly attach event listeners and will not be aware of what React element directly maps to what DOM element, which would result in the application not behaving as expected.
By combining server-side rendering and hydration, developers can create web applications that load quickly and provide a smooth, interactive user experience.
If you have an existing client-only React app, you may be wondering how to add server rendering. Fortunately, it’s relatively straightforward to add server rendering to an existing React app. One approach is to use a server rendering framework, such as Next.js or Remix. These frameworks provide built-in support for server rendering, allowing you to easily add server rendering to your React app.
The best way to add server rendering to your React app is to use a server rendering framework, since they abstract away a lot of the complexity around implementing server-side rendering while also steering you, the developer, towards a pit of success. Abstractions like this can however leave the more curious of us interested in understanding the underlying mechanisms wanting. If you’re a curious person and are interested in how one would add server rendering to a client-only React app manually, or if you’re interested in how frameworks do it, read on.
If you’ve got a client-only application, this is how you’d add server rendering to it. First, you’d create a server.js file in the root of your project. This file will contain the code for your server.
// server.js
const express = require("express");
const path = require("path");
const React = require("react");
const ReactDOMServer = require("react-dom/server");
const App = require("./src/App");

const app = express();

app.use(express.static(path.join(__dirname, "build")));

app.get("*", (req, res) => {
  const html = ReactDOMServer.renderToString(<App />);
  res.send(`
    <!DOCTYPE html>
    <html>
      <head>
        <title>My React App</title>
      </head>
      <body>
        <div id="root">${html}</div>
        <script src="/static/js/main.js"></script>
      </body>
    </html>
  `);
});

app.listen(3000, () => {
  console.log("Server listening on port 3000");
});
In this example, we’re using Express to create a server that serves static files from the ./build directory and then render our React app on the server. We’re also using ReactDOMServer to render our React app to an HTML string and then inject it into the response sent to the client.
In this scenario, we’re assuming our client-only React app has some type of build script that would output a client-only JavaScript bundle into a directory called build that we reference in the above snippet. This is important for hydration. With all these pieces in order, let’s go ahead and start our server.
node server.js
Running this command should start our server on port 3000, and should output Server listening on port 3000.
With these steps, we now have a server-rendered React app that can be optimized for speed and accessibility. By taking this “peek under the hood” approach to server rendering, we gain a deeper understanding of how server rendering works and how it can benefit our React applications.
If we open a browser and visit http://localhost:3000, we should see a server-rendered application. We can confirm that it is in fact server rendered by viewing the source code of this page, which should reveal actual HTML markup instead of a blank document.
In the previous section, we manually added server rendering to a client-only React app using Express and ReactDOMServer. Specifically, we used ReactDOMServer.renderToString() to render our React app to an HTML string. This is the most basic way to add server rendering to a React app. However, there are other ways to add server rendering to React apps. Let’s take a deeper look at server rendering APIs exposed by React and understand when and how to use them.
Let’s consider the renderToString API in detail, exploring its usage, advantages, disadvantages, and when it is appropriate to use it in a React application. Specifically, let’s look into:
To start with this, let’s talk about what it is.
renderToString: What it is

renderToString is a server-side rendering API provided by React that enables you to render a React component into an HTML string on the server. This API is synchronous and returns a fully rendered HTML string, which can then be sent to the client as a response. renderToString is commonly used in server-rendered React applications to improve performance, SEO, and accessibility.
Using renderToString is relatively straightforward. First, you need to import the renderToString function from the react-dom/server package. Then, you can call the function with a React component as its argument, and it will return the fully rendered HTML as a string. Here’s an example of using renderToString to render a simple React component:
import React from "react";
import { renderToString } from "react-dom/server";

function App() {
  return (
    <div>
      <h1>Hello, world!</h1>
      <p>This is a simple React app.</p>
    </div>
  );
}

const html = renderToString(<App />);
console.log(html);
In this example, we create a simple App component and call renderToString with the component as its argument. The function returns the fully rendered HTML, which can be sent to the client.
This function traverses the tree of React elements, converts them to a string representation of real DOM elements, and finally outputs a string. It is synchronous and blocking, meaning it cannot be interrupted or paused. If a component tree from the root is many levels deep, it can require quite a bit of processing. Since a server typically services multiple clients, renderToString could be called for each client unless there’s some type of cache preventing this, and quickly block the event loop and overload the system.
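The caching idea mentioned above can be sketched in a few lines: memoize the expensive synchronous render per route, so repeat requests don't re-block the event loop. Here renderRoute is a hypothetical stand-in for calling renderToString on a real component tree, and note this naive cache only works for pages that are identical for every user:

```javascript
const cache = new Map();
let renders = 0;

// Stand-in for the expensive, synchronous renderToString call.
function renderRoute(route) {
  renders += 1;
  return `<div data-route="${route}">rendered</div>`;
}

// Only render each route once; serve the cached string afterward.
function renderCached(route) {
  if (!cache.has(route)) cache.set(route, renderRoute(route));
  return cache.get(route);
}

renderCached("/home");
renderCached("/home");
renderCached("/about");
console.log(renders); // 2 — "/home" was only rendered once
```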
In terms of code, renderToString converts this:
React.createElement(
  "section",
  { id: "list" },
  React.createElement("h1", {}, "This is my list!"),
  React.createElement("p", {}, "Isn't my list amazing? It contains amazing things!"),
  React.createElement(
    "ul",
    {},
    amazingThings.map((t) => React.createElement("li", { key: t.id }, t.label))
  )
);
to this:
<section id="list">
  <h1>This is my list!</h1>
  <p>Isn't my list amazing? It contains amazing things!</p>
  <ul>
    <li>Thing 1</li>
    <li>Thing 2</li>
    <li>Thing 3</li>
  </ul>
</section>
Because React is declarative and React elements are declarative abstractions, a tree of them can be turned into a tree of anything else—in this case, a tree of React elements is turned into a string-representation of a tree of HTML elements.
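To make that transformation concrete, here is a toy serializer that walks a tree of React-element-like objects and emits an HTML string. It is a deliberately simplified sketch of what renderToString does conceptually, using a made-up { type, props, children } shape and skipping everything the real thing handles (escaping, components, self-closing tags, and much more):

```javascript
// Recursively convert an element-like tree to an HTML string.
function toHtml(node) {
  if (typeof node === "string") return node; // text node
  const { type, props = {}, children = [] } = node;
  const attrs = Object.entries(props)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const inner = children.map(toHtml).join("");
  return `<${type}${attrs}>${inner}</${type}>`;
}

const tree = {
  type: "section",
  props: { id: "list" },
  children: [{ type: "h1", children: ["This is my list!"] }],
};
console.log(toHtml(tree)); // <section id="list"><h1>This is my list!</h1></section>
```

The recursion mirrors the tree traversal described above: each element becomes its opening tag, its serialized children, and its closing tag.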
There are several advantages to using renderToString for server-side rendering in a React application:
Simplicity: The renderToString API is easy to use and doesn’t require any additional setup or configuration. You can simply import the function, call it with a React component, and get the rendered HTML as a string.
SEO: Server-side rendering with renderToString can help improve the search engine optimization (SEO) of your application. When search engine crawlers index your site, they can see the fully rendered HTML, making it easier for them to understand the content and structure of your site.
Accessibility: renderToString can also improve the accessibility of your application. Users with slow internet connections or devices may have a better experience if they receive fully rendered HTML instead of waiting for client-side JavaScript to load and render the page.
Performance: On the client side, React’s rendering is bound by the client’s CPU power, which varies widely between clients. On the server side, we can accurately control the resources of the machines that render React components, giving us more control and predictability.
First Meaningful Paint: By using server-side rendering with renderToString, you can send fully rendered HTML to the client, which can result in a faster first meaningful paint. This can lead to a better perceived performance for your users.
While renderToString offers several advantages, it also has some downsides:
Performance: One of the main disadvantages of renderToString is that it can be slow for large React applications. Because it is synchronous, it can block the event loop and make the server unresponsive. This can be especially problematic if you have a high-traffic application with many concurrent users.
Memory-intensive: renderToString returns a fully rendered HTML string, which can be memory-intensive for large applications. This can lead to increased memory usage on your server and potentially slower response times, or a crash that kills the server process under heavy load.
Limited interactivity: Since renderToString generates static HTML, any interactivity in your application must be handled by client-side JavaScript. This means that your users may experience a delay between when the site loads to when the site becomes interactive, depending on how long the JavaScript takes to be loaded, parsed, and executed.
Lack of streaming support: renderToString does not support streaming, which means that the entire HTML string must be generated before it can be sent to the client. This can result in a slower time to first byte (TTFB) and a longer time for the client to start receiving the HTML. This limitation can be particularly problematic for large applications with lots of content, as the client must wait for the entire HTML string to be generated before any content can be displayed.
For larger applications or situations where the downsides of renderToString become problematic, React offers alternative APIs for server-side rendering, such as renderToPipeableStream. These APIs return a Node.js stream instead of a fully rendered HTML string, which can provide better performance and support for streaming. We will cover these more in the next section.
Considering the advantages and disadvantages of renderToString, it can be a good fit for smaller React applications, or in situations where simplicity and ease of use matter more than performance. For example, if you are building a small website or a blog with mostly static content, renderToString may be sufficient for your needs.
However, if your application is large and complex, or if you require better performance and streaming capabilities, you should consider using renderToPipeableStream instead.
Additionally, it’s worth considering using a framework like Next.js or Gatsby for your React application, as they can handle server-side rendering and other optimizations out-of-the-box. These frameworks can make it easier to set up server-side rendering and provide better performance and developer experience compared to using renderToString directly.
In summary, renderToString is a server-side rendering API in React that allows you to render a React component into an HTML string on the server. It offers several advantages, including simplicity, improved SEO, and better accessibility. However, it also has some downsides, such as potentially slow performance for large applications, memory-intensive operation, and lack of streaming support.
When choosing whether to use renderToString or an alternative API like renderToPipeableStream, consider factors such as the size of your application, your server environment, and your team’s expertise. By understanding the different server-side rendering APIs available in React, you can make an informed decision about which one is best suited for your application.
Ideally by now, we’re familiar with what renderToString is, how it works, and how it fits into server rendering. Let’s now dive into these three points of exploration (what it is, how it works, and how it fits) for renderToString’s bigger brother, renderToPipeableStream.
renderToPipeableStream

renderToPipeableStream is a server-side rendering API introduced in React 18. It provides a more efficient and flexible way to render large React applications to a Node.js stream: it returns a stream that can be piped to a response object, provides more control over how the HTML is rendered, and allows for better integration with other Node.js streams.
In addition, it fully supports React’s concurrent features, including Suspense, which unlocks better handling of asynchronous data fetching during server-side rendering. Because it is a stream, chunks of HTML can be sent to clients asynchronously and cumulatively over the network without blocking. This leads to a faster time to first byte (TTFB) and generally better performance.
Let’s dive deep into renderToPipeableStream, discussing its features, advantages, and use cases. We’ll also provide code snippets and examples to help you better understand how to implement this API in your React applications.
Similar to renderToString, renderToPipeableStream takes a declaratively-described tree of React elements and—instead of turning them into a string of HTML—turns the tree into a Node.js stream. A Node.js stream is a fundamental concept in the Node.js runtime environment that enables efficient data processing and manipulation. Streams provide a way to handle data incrementally, in chunks, rather than loading the entire data set into memory at once. This approach is particularly useful when dealing with large strings or data streams that cannot fit entirely in memory, or over the network.
At its core, a Node.js stream represents a flow of data between a source and a destination. It can be thought of as a pipeline through which data flows, with various operations applied to transform or process the data along the way.
Node.js streams are categorized into four types based on their nature and direction of data flow:
Readable Streams: A readable stream represents a source of data from which you can read. It emits events like “data,” “end,” and “error.” Examples of readable streams include reading data from a file, receiving data from an HTTP request, or generating data using a custom generator.
Writable Streams: A writable stream represents a destination where you can write data. It provides methods like write() and end() to send data into the stream. Writable streams emit events like “drain” when the destination can handle more data and “error” when an error occurs during writing. Examples of writable streams include writing data to a file, sending data over a network socket, or piping data to another stream.
Duplex Streams: A duplex stream represents both a readable and writable stream simultaneously. It allows bidirectional data flow, meaning you can both read from and write to the stream. Duplex streams are commonly used for network sockets or communication channels where data needs to flow in both directions.
Transform Streams: A transform stream is a special type of duplex stream that performs data transformations while data flows through it. It reads input data, processes it, and provides the processed data as output. Transform streams can be used to perform tasks such as compression, encryption, decompression, or data parsing.
Node.js streams operate using a combination of events and methods. Readable streams emit events such as “data” when new data is available, “end” when the stream has no more data to provide, and “error” when an error occurs during reading. Writable streams provide methods like write() to send data into the stream and end() to signal the end of writing.
One of the powerful features of Node.js streams is the ability to pipe data between streams. Piping allows you to connect the output of a readable stream directly to the input of a writable stream, creating a seamless flow of data. This greatly simplifies the process of handling data and reduces memory usage.
Streams in Node.js also support backpressure handling. Backpressure is a mechanism that allows streams to control the flow of data when the destination cannot process it as fast as it is being produced. When the writable stream is unable to handle data quickly enough, the readable stream will pause emitting “data” events, preventing data loss. Once the writable stream is ready to consume more data, it emits a “drain” event, signaling the readable stream to resume emitting data.
Node.js provides a built-in stream module that offers a set of classes and utilities for working with streams. In addition to the core module, numerous third-party libraries extend the capabilities of streams or provide specialized stream functionality.
To summarize, Node.js streams are a powerful abstraction for handling data in a scalable and memory-efficient manner. By breaking data into manageable chunks and allowing incremental processing, streams enable efficient handling of large data sets, file I/O operations, network communication, and much more.
In React, the purpose of streaming React components to a stream is to enhance the time-to-first-byte (TTFB) performance of server-rendered applications. Instead of waiting for the entire HTML markup to be generated before sending it to the client, these methods enable the server to start sending chunks of the HTML response as they are ready, thus reducing the overall latency.
The renderToPipeableStream function is part of React’s server renderer, which is designed to support streaming rendering of a React application to a Node.js stream. It is part of the new server renderer architecture called “Fizz”.
Without distracting from our context of server rendering too much, here’s a simplified explanation of how it works:
Creating a Request: The function renderToPipeableStream takes as input the React elements to be rendered and an optional options object. It then creates a request object using a createRequestImpl function. This request object encapsulates the React elements, resources, response state, and format context.
Starting the Work: After creating the request, the startWork function is called with the request as an argument. This function initiates the rendering process. The rendering process is asynchronous and can be paused and resumed as needed, which is where Suspense comes in. If a component is wrapped in a Suspense boundary and it initiates some asynchronous operation (like data fetching), the rendering of that component (and possibly its siblings) can be “suspended” until the operation finishes.
Returning a Pipeable Stream: renderToPipeableStream then returns an object that includes a pipe method and an abort method. The pipe method is used to pipe the rendered output to a writable stream (like an HTTP response object in Node.js). The abort method can be used to cancel any pending I/O and put anything remaining into client-rendered mode.
Piping to a Destination: When the pipe method is called with a destination stream, it checks whether the data has already started flowing. If not, it sets hasStartedFlowing to true and calls the startFlowing function with the request and the destination. It also sets up handlers for the 'drain', 'error', and 'close' events of the destination stream.
Handling Stream Events: The 'drain' event handler calls startFlowing again to resume the flow of data when the destination stream is ready to receive more data. The 'error' and 'close' event handlers call the abort function to stop the rendering process if an error occurs in the destination stream or if the stream is closed prematurely.
Aborting the Rendering: The abort method on the returned object can be called with a reason to stop the rendering process. It calls the abort function from the 'react-server' module with the request and the reason.
The actual implementation of these functions involves more complex logic to handle things like progressive rendering, error handling, and integration with the rest of the React server renderer. The code for these functions can be found in the 'react-server' and 'react-dom' packages of the React source code.
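To make the piping steps above concrete, here is a toy model of the hasStartedFlowing guard. This is an illustrative sketch, not React’s actual code; the error message and the createPipeable/startFlowing names are invented for the example.

```javascript
// Toy model of renderToPipeableStream's pipe() guard: the first pipe()
// call starts the flow to a destination; later calls are rejected,
// because the rendered output can only flow to one writable stream.
function createPipeable(startFlowing) {
  let hasStartedFlowing = false;
  return {
    pipe(destination) {
      if (hasStartedFlowing) {
        throw new Error("Output can only be piped to one destination.");
      }
      hasStartedFlowing = true;
      startFlowing(destination); // begin writing chunks to the destination
      return destination;
    },
  };
}
```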
renderToPipeableStream offers several advantages:
Streaming: renderToPipeableStream returns a pipeable Node.js stream, which can be piped to a response object. This allows the server to start sending the HTML to the client before the entire page is rendered, providing a faster user experience and better performance for large applications.
Flexibility: renderToPipeableStream offers more control over how the HTML is rendered. It can be easily integrated with other Node.js streams, allowing developers to customize the rendering pipeline and create more efficient server-side rendering solutions.
Suspense support: renderToPipeableStream fully supports React’s concurrent features, including Suspense. This allows developers to manage asynchronous data fetching more effectively during server-side rendering, ensuring that data-dependent components are only rendered once the necessary data is available.
Advanced API: Although renderToPipeableStream is a more advanced API compared to renderToString, it enables developers to create more efficient and powerful server-side rendering solutions, particularly for large and complex React applications.
Let’s take a look at some code that illustrates the benefits of this API. We have an application that displays a list of dog breeds. The list is populated by fetching data from an API endpoint. The application is rendered on the server using renderToPipeableStream and then sent to the client. Let’s start by looking at our dog list component:
// ./src/DogBreeds.jsx
import { Suspense } from "react";

// createResource is a Suspense-compatible helper (it throws the pending
// promise until the data resolves); its definition lives elsewhere in our app.
const dogResource = createResource(
  fetch("https://dog.ceo/api/breeds/list/all")
    .then((r) => r.json())
    .then((r) => Object.keys(r.message))
);

function DogBreeds() {
  return (
    <ul>
      <Suspense fallback="Loading...">
        {dogResource.read().map((profile) => (
          <li key={profile}>{profile}</li>
        ))}
      </Suspense>
    </ul>
  );
}

export default DogBreeds;
Now, let’s look at our overall App that contains the DogBreeds component:
// src/App.js
import React, { Suspense } from "react";

const UserProfile = React.lazy(() => import("./DogBreeds"));

function App() {
  return (
    <div>
      <h1>Dog Breeds</h1>
      <Suspense fallback={<div>Loading Dog Breeds...</div>}>
        <UserProfile />
      </Suspense>
    </div>
  );
}

export default App;
Notice that we’re using React.lazy here, as mentioned in prior chapters, just so we have another Suspense boundary to demonstrate how renderToPipeableStream handles Suspense. Okay, let’s tie this all together with an Express server.
// server.js
import express from "express";
import React from "react";
import { renderToPipeableStream } from "react-dom/server";
import App from "./App.jsx";

const app = express();

app.use(express.static("build"));

app.get("/", async (req, res) => {
  const htmlStart = `<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>React Suspense with renderToPipeableStream</title>
  </head>
  <body>
    <div id="root">`;

  res.write(htmlStart);

  const { pipe } = renderToPipeableStream(<App />, {
    onShellReady: () => {
      pipe(res);
    },
  });
});

app.listen(3000, () => {
  console.log("Server is listening on port 3000");
});
What we’re doing in the above code snippet is responding to a request with a stream of HTML. We’re using renderToPipeableStream to render our App component to a stream, and then piping that stream to our response object. We’re also using the onShellReady option to pipe the stream to the response object once the shell is ready. The shell is the HTML that is rendered before the React application is hydrated, and before Suspense is resolved. In our case, the shell is the HTML that is rendered before the dog breeds are fetched from the API. Let’s take a look at what happens when we run this code.
If we visit http://localhost:3000, we get a page with a heading “Dog Breeds”, and our suspense fallback “Loading Dog Breeds...”. This is the shell that is rendered before the dog breeds are fetched from the API. The really cool thing is—even if we don’t include React on the client side in our HTML and hydrate the page, the Suspense fallback is replaced with the actual dog breeds once they are fetched from the API. This swapping of DOM when data becomes available happens entirely from the server side, without client-side React!
Let’s understand how this works in a bit more detail.
Warning: we are about to dive deep into React implementation details here that are quite likely to change over time. The point of this exercise (and this book) is not to obsess over individual implementation details, but instead to understand the underlying mechanism so we can learn and reason about React better. This isn’t required to use React, but understanding the mechanism can give us hints and practical tools to use in our day-to-day work with React. With that, let’s move forward.
When we visit http://localhost:3000, the server responds with the shell HTML, which includes the heading “Dog Breeds” and the suspense fallback “Loading Dog Breeds...”. This HTML looks like this:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>React Suspense with renderToPipeableStream</title>
  </head>
  <body>
    <div id="root">
      <div>
        <h1>Dog Breeds</h1>
        <!--$?--><template id="B:0"></template>
        <div>Loading Dog Breeds...</div>
        <!--/$-->
      </div>
      <div hidden id="S:0">
        <ul>
          <!--$-->
          <li>affenpinscher</li>
          <li>african</li>
          <li>airedale</li>
          [...]
          <!--/$-->
        </ul>
      </div>
      <script>
        function $RC(a, b) {
          a = document.getElementById(a);
          b = document.getElementById(b);
          b.parentNode.removeChild(b);
          if (a) {
            a = a.previousSibling;
            var f = a.parentNode,
              c = a.nextSibling,
              e = 0;
            do {
              if (c && 8 === c.nodeType) {
                var d = c.data;
                if ("/$" === d)
                  if (0 === e) break;
                  else e--;
                else ("$" !== d && "$?" !== d && "$!" !== d) || e++;
              }
              d = c.nextSibling;
              f.removeChild(c);
              c = d;
            } while (c);
            for (; b.firstChild; ) f.insertBefore(b.firstChild, c);
            a.data = "$";
            a._reactRetry && a._reactRetry();
          }
        }
        $RC("B:0", "S:0");
      </script>
    </div>
  </body>
</html>
What we see here is quite interesting:
A <template> element with a generated ID (B:0 in this case), surrounded by HTML comments. The comments mark the start and end of the Suspense boundary: they are markers, or “holes,” where resolved content will go once Suspense resolves.
A <script> element containing a function called $RC, which replaces the fallback with the actual content. The $RC function takes two arguments: the ID of the <template> element that marks the boundary, and the ID of the hidden <div> element that contains the resolved content. The function fills the marker with the rendered UI once data is available, while removing the fallback.
It’s pretty unfortunate that this function is minified even in development mode, but let’s try to unminify it and understand what it does. If we do, this is what we observe:
function reactComponentCleanup(reactMarkerId, siblingId) {
  let reactMarker = document.getElementById(reactMarkerId);
  let sibling = document.getElementById(siblingId);
  sibling.parentNode.removeChild(sibling);

  if (reactMarker) {
    reactMarker = reactMarker.previousSibling;
    let parentNode = reactMarker.parentNode,
      nextSibling = reactMarker.nextSibling,
      nestedLevel = 0;

    do {
      if (nextSibling && 8 === nextSibling.nodeType) {
        let nodeData = nextSibling.data;
        if ("/$" === nodeData) {
          if (0 === nestedLevel) {
            break;
          } else {
            nestedLevel--;
          }
        } else if ("$" === nodeData || "$?" === nodeData || "$!" === nodeData) {
          nestedLevel++;
        }
      }
      let nextNode = nextSibling.nextSibling;
      parentNode.removeChild(nextSibling);
      nextSibling = nextNode;
    } while (nextSibling);

    while (sibling.firstChild) {
      parentNode.insertBefore(sibling.firstChild, nextSibling);
    }

    reactMarker.data = "$";
    reactMarker._reactRetry && reactMarker._reactRetry();
  }
}

reactComponentCleanup("B:0", "S:0");
Let’s break this down further.
The function takes two arguments: reactMarkerId and siblingId. Effectively, the marker is a hole where content will go once it’s available, and the sibling is the hidden element that contains the resolved content.
The function then detaches the sibling element (the hidden container of resolved content) from the DOM using the removeChild method on its parent node.
If the reactMarker element exists, the rest of the function runs. It reassigns the reactMarker variable to the previous sibling of the original reactMarker element, and initializes the variables parentNode, nextSibling, and nestedLevel.
A do...while loop is used to traverse the DOM tree, starting with the nextSibling element. The loop continues as long as the nextSibling element exists. Inside the loop, the function checks whether the nextSibling element is a comment node (indicated by a nodeType value of 8).
a. If the nextSibling element is a comment node, the function inspects its data (i.e., the text content of the comment). It checks whether the data is equal to "/$", which signifies the end of a nested structure. If the nestedLevel value is 0, the loop breaks, indicating that the desired end of the structure has been reached. If the nestedLevel value is not 0, it means that the current "/$" comment node is part of a nested structure, and the nestedLevel value is decremented.
b. If the comment node data is not equal to "/$", the function checks whether it is equal to "$", "$?", or "$!". These values indicate the beginning of a new nested structure. If any of these values are encountered, the nestedLevel value is incremented.
During each iteration of the loop, the nextSibling element is removed from the DOM using the removeChild method on its parent node. The loop continues with the next sibling element in the DOM tree.
Once the loop has completed, the function moves all child elements of the sibling element to the location immediately before the nextSibling element in the DOM tree using the insertBefore method. This process effectively restructures the DOM around the reactMarker element.
The function then sets the data of the reactMarker element to "$", which is likely used to mark the boundary as resolved for future processing or reference. If a _reactRetry property exists on the reactMarker element and it is a function, the function invokes it.
In summary, this function appears to perform cleanup and restructuring of the DOM elements related to a React component. It uses comment nodes with specific data values to determine the structure of the component and manipulates the DOM accordingly. Since this is inlined in our HTML from the server, we can stream data like this using renderToPipeableStream and have the browser render the UI as it becomes available without even including React in the browser bundle or hydrating.
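The nesting logic at the heart of that loop can be isolated into a small, testable sketch. Here the comment nodes are modeled as plain strings; this illustrates the counting scheme only and is not React’s code:

```javascript
// Scan forward through a list of comment-node data values, treating
// "$", "$?", and "$!" as boundary openers and "/$" as a closer, and
// return the index of the comment that closes the current boundary.
function findBoundaryEnd(comments, start) {
  let depth = 0;
  for (let i = start; i < comments.length; i++) {
    const data = comments[i];
    if (data === "/$") {
      if (depth === 0) return i; // the closer for our own boundary
      depth--; // this closes a nested boundary instead
    } else if (data === "$" || data === "$?" || data === "$!") {
      depth++; // a nested boundary opened
    }
  }
  return -1; // malformed markup: no closer found
}
```

This is why nested Suspense boundaries don’t confuse the cleanup: inner boundary markers raise and lower the depth counter, and only the "/$" at depth zero terminates the scan.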
Thus, renderToPipeableStream gives us quite a bit more control and power compared to renderToString when server rendering.
Creating a custom server rendering implementation for a React application can be a challenging and time-consuming task. While React does provide some APIs for server rendering, building a custom solution from scratch can lead to various issues and inefficiencies. In this section, we’ll explore the reasons why it’s better to rely on established frameworks like Next.js and Remix, rather than building your own server rendering solution.
Handling edge cases and complexities: React applications can become quite complex, and implementing server rendering requires addressing various edge cases and complexities. These can include handling asynchronous data fetching, code splitting, and managing various React lifecycle events. By using a framework like Next.js or Remix, you can avoid the need to handle these complexities yourself, as these frameworks have built-in solutions for many common edge cases.
One such edge case is security. As the server processes numerous client requests, it’s crucial to ensure that sensitive data from one client doesn’t inadvertently leak to another. This is where frameworks like Next.js, Remix, and Gatsby can provide invaluable assistance in handling these concerns. Imagine a scenario where client A accesses the server, and their data is cached by the server. If the server accidentally serves this cached data to client B, sensitive information could be exposed.
Consider the following example:
// server.js
const express = require("express");
const app = express();

let cachedUserData = null;

app.get("/user/:userId", (req, res) => {
  const { userId } = req.params;
  if (cachedUserData) {
    return res.json(cachedUserData);
  }

  // Fetch user data from a database or another data source
  const userData = fetchUserData(userId);
  cachedUserData = userData;
  res.json(userData);
});

app.listen(3000, () => {
  console.log("Server listening on port 3000");
});
In this example, the server caches user data in a shared cachedUserData variable. Because the cache is shared across all requests and never keyed by user, the first client’s data will be served to every subsequent client. This issue can lead to unauthorized access to sensitive information and serious security breaches.
If we roll our own server rendering, the risk of human error is ever present. Leaning on frameworks built by large communities mitigates this risk: these frameworks are designed with security in mind, ensure that sensitive data is handled properly, and prevent potential data-leakage scenarios by using secure, isolated data-fetching methods.
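One safer shape for a handler like the one above is to never share un-keyed state across requests: if any caching is done at all, key it by user, so one client’s entry can never be returned for another. The fetchUserData below is a hypothetical stand-in for a real database lookup.

```javascript
// Cache keyed by userId: requests for different users can never
// collide, unlike a single shared cachedUserData variable.
const userCache = new Map();

// Hypothetical stand-in for a real database or API lookup.
function fetchUserData(userId) {
  return { id: userId, name: `User ${userId}` };
}

function getUser(userId) {
  if (!userCache.has(userId)) {
    userCache.set(userId, fetchUserData(userId));
  }
  return userCache.get(userId); // only ever this user's own entry
}
```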
Performance optimizations: Frameworks like Next.js and Remix come with numerous performance optimizations out of the box. These optimizations can include automatic code splitting, server rendering, and caching. Building a custom server rendering solution might not include these optimizations by default, and implementing them can be a challenging and time-consuming task.
// Example of automatic code splitting with Next.js
import dynamic from "next/dynamic";

const DynamicComponent = dynamic(() =>
  import("../components/DynamicComponent")
);

function Page() {
  return (
    <div>
      <DynamicComponent />
    </div>
  );
}

export default Page;
// Example of simplified file-based routing with Next.js
// File: pages/blog/[slug].js
function BlogPost({ post }) {
  return (
    <div>
      <h1>{post.title}</h1>
      <div>{post.content}</div>
    </div>
  );
}

export async function getStaticProps({ params }) {
  const post = await getPostBySlug(params.slug);
  return { props: { post } };
}

export async function getStaticPaths() {
  const slugs = await getAllPostSlugs();
  return {
    paths: slugs.map((slug) => ({ params: { slug } })),
    fallback: false,
  };
}

export default BlogPost;
Community support and ecosystem: Established frameworks like Next.js and Remix have large and active communities, which can provide invaluable support and resources. By using a well-supported framework, you can leverage the collective knowledge and experience of the community, access a vast array of plugins and integrations, and benefit from regular updates and improvements.
Best practices and conventions: Using a framework like Next.js or Remix can help enforce best practices and conventions in your project. These frameworks have been designed with best practices in mind, and by following their conventions, you can ensure that your application is built on a solid foundation.
// Example of best practices with Remix
// File: routes/posts/$postId.tsx
import { useLoaderData } from "@remix-run/react";

export function loader({ params }) {
  return fetchPost(params.postId);
}

function Post() {
  const post = useLoaderData();
  return (
    <div>
      <h1>{post.title}</h1>
      <div>{post.content}</div>
    </div>
  );
}

export default Post;
Considering the benefits and optimizations provided by established frameworks like Next.js and Remix, it becomes evident that building a custom server rendering solution for a React application is not an ideal approach. By leveraging these frameworks, you can save development time, ensure best practices are followed, and benefit from the ongoing improvements and support provided by their respective communities.
Some would say these frameworks are “opinionated,” meaning that they adhere to a specific set of conventions or best practices. These conventions are usually prescribed by the framework’s creators or the broader community, and they guide developers in structuring their projects, writing code, and handling various aspects of application development.
Opinionated frameworks can save developers time and effort by providing a well-defined structure, established patterns, and sensible defaults. They often include a set of tools and libraries that work well together, promoting consistency and enabling developers to quickly get up and running with their projects.
Conventions are the rules and guidelines that dictate the organization, structure, and coding practices within a framework. They are designed to promote consistency, improve code readability, and make it easier for developers to understand and maintain their projects. Some common conventions found in opinionated frameworks include:
Directory Structure: Opinionated frameworks often prescribe a specific directory structure for organizing your application’s files and folders. This structure makes it easier for developers to locate specific components, assets, and other files, and it ensures that the application’s organization is consistent across projects using the same framework.
An example of this is the ./pages or ./app directory in Next.js. This directory is used to store all of the application’s pages, and they are then automatically mapped to routes on the server. By following this convention, developers can quickly and easily add new pages to their application without having to manually configure routing.
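The convention can be illustrated with a toy mapping function. This is a sketch of the idea only, not Next.js’s actual router, which also handles catch-all routes, nested index files, and more; the :param output is an Express-style placeholder used here purely for illustration.

```javascript
// Map a pages/ file path to the URL route it would be served at,
// turning [param] segments into :param placeholders.
function filePathToRoute(filePath) {
  return (
    filePath
      .replace(/^pages/, "") // strip the pages/ directory prefix
      .replace(/\.jsx?$/, "") // drop the .js / .jsx extension
      .replace(/\/index$/, "") // pages/index.js serves the root
      .replace(/\[(\w+)\]/g, ":$1") || "/" // [slug] becomes :slug
  );
}

// filePathToRoute("pages/blog/[slug].js") -> "/blog/:slug"
// filePathToRoute("pages/index.js")       -> "/"
```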
Component Naming and Structure: Conventions around component naming and structure encourage developers to write modular, reusable code that adheres to best practices. For example, a framework might dictate that components should be named using PascalCase and organized in a specific way to promote code readability and maintainability.
Coding Style: Opinionated frameworks typically recommend a specific coding style or provide built-in linting and formatting tools to ensure consistency across the project. By adhering to a common coding style, developers can more easily read and understand each other’s code, making collaboration and maintenance more efficient.
Routing and Navigation: Opinionated frameworks often include built-in support for client-side routing and navigation, making it easier to create single-page applications (SPAs) with complex routing requirements. By following the framework’s conventions, developers can implement routing and navigation more quickly and with fewer potential pitfalls.
There are several benefits to using an opinionated framework for your projects, including:
Faster Development: With predefined conventions and best practices in place, developers can spend less time setting up their projects and more time focusing on their application’s core functionality. This streamlined development process can help teams get their applications up and running more quickly.
Consistency: By adhering to a specific set of conventions, developers can create more consistent and maintainable codebases. This consistency makes it easier for new developers to join a project and quickly understand its structure and coding practices.
Improved Code Quality: Opinionated frameworks often include tools and features designed to enforce best practices and improve code quality. By following the framework’s conventions, developers can create more reliable, maintainable, and efficient applications.
Easier Collaboration: When all developers on a team follow the same set of conventions, it becomes easier to collaborate on projects and share code. This consistency can help reduce the risk of bugs and other issues caused by miscommunication or differences in coding styles.
This also makes contributing to open source more approachable.
Less Decision Fatigue: With an opinionated framework, many decisions about project structure, coding practices, and tooling are already made for you. This reduces decision fatigue and frees developers to focus on building their application’s features.
In conclusion, server-side rendering (SSR) and hydration are powerful techniques that can significantly improve the performance, user experience, and SEO of web applications. React provides a rich set of APIs for server rendering, such as renderToString and renderToPipeableStream, each with its own strengths and trade-offs. By understanding these APIs and selecting the right one based on factors such as application size, server environment, and developer experience, you can optimize your React application for both server and client-side performance.
As we’ve seen throughout this chapter, renderToString is a simple and straightforward API for server rendering that is suitable for smaller applications. However, it may not be the most efficient option for larger applications due to its synchronous nature and potential to block the event loop. On the other hand, renderToPipeableStream is a more advanced and flexible API that allows for better control over the rendering process and improved integration with other Node.js streams, making it a more suitable choice for larger applications.
Now that you’ve gained a solid understanding of server-side rendering and hydration in React, it’s time to test your knowledge with some review questions. If you can confidently answer these, it’s a good sign that you’ve got a solid grasp of these mechanisms in React and can comfortably move forward. If you cannot, we’d suggest reading through things a little more, although this will not hurt your experience as you continue through the book.
What are the differences between the renderToString and renderToPipeableStream APIs in React?

Once you’ve mastered server-side rendering and hydration, you’re ready to explore even more advanced topics in React development. In the next chapter, we’ll dive into “Asynchronous React.” As web applications become more complex, handling asynchronous actions becomes increasingly important for creating smooth user experiences. We’ll explore techniques like Suspense, React.lazy, and concurrent mode, which allow you to handle asynchronous data fetching, code splitting, and rendering more efficiently.
By learning how to leverage asynchronous React, you’ll be able to create highly performant, scalable, and user-friendly applications that can handle complex data interactions with ease. So, stay tuned and get ready to level up your React skills as we continue our journey into the world of Asynchronous React!
In the previous chapter, we delved deep into the world of server-side rendering with React. We examined the importance of server-side rendering for improving the performance and user experience of our applications, especially in the context of modern web development. We explored different server rendering APIs, such as renderToString and renderToPipeableStream, and discussed their use cases and benefits. We also touched upon the challenges of implementing server-side rendering and how it’s better to rely on established frameworks like Next.js and Remix to handle the complexities for us.
We covered the concept of hydration and its significance in connecting server-rendered markup with client-side React components, creating a seamless user experience. Additionally, we discussed the potential security issues and challenges that come with managing multiple client connections in a serverful environment, emphasizing the need for using frameworks that handle these concerns effectively.
Furthermore, we highlighted the advantages of adopting conventions and opinionated frameworks, which can save developers valuable time and effort by providing best practices and sensible defaults. Throughout the chapter, we also provided several code examples and snippets to help illustrate key concepts and reinforce the lessons learned.
Now, as we transition to the next chapter—Asynchronous React—we will build upon our understanding of server-side rendering and explore the inner workings of React itself. We will dive into the Fiber Reconciler and learn about the asynchronous nature of React, as well as how it manages updates and rendering efficiently. By examining scheduling, deferring updates, and render lanes, we’ll gain insights into the performance optimizations made possible by React’s core architecture.
The knowledge acquired in the previous chapter on server-side rendering will help us appreciate the advanced capabilities of React even more, as we witness the synergy between server-side rendering and the asynchronous nature of React in action. This chapter will provide you with a solid foundation in understanding the mechanisms that drive React’s performance and how it intelligently prioritizes and processes updates.
So, let’s embark on our journey into the fascinating world of Asynchronous React, as we continue to build on our expertise and discover new ways to harness the power of React for creating high-performance applications.
As covered in chapter 4, the Fiber Reconciler is the core mechanism in React that enables asynchronous rendering. It was introduced in React 16 and represented a significant architectural shift from the previous Stack Reconciler. The primary goal of the Fiber Reconciler is to improve the responsiveness and performance of React applications, particularly for large and complex user interfaces (UIs).
The Fiber Reconciler achieves this by breaking the rendering process into smaller, more manageable units of work called fibers. This allows React to pause, resume, and prioritize rendering tasks, making it possible to defer or schedule updates based on their importance. This improves the responsiveness of the application and ensures that critical updates are not blocked by less important tasks.
Here’s a brief recap of the key features enabled by the Fiber Reconciler:
Incremental rendering: The Fiber Reconciler divides the rendering process into small units of work (these are called fibers) that can be paused, resumed, or prioritized. This enables incremental rendering, which allows React to perform work over multiple frames, improving responsiveness and perceived performance.
Concurrent features: Concurrent features build on the Fiber Reconciler to enable even more fine-grained control over rendering priorities. They allow React to work on multiple tasks concurrently without blocking the main thread, further improving applications’ responsiveness.
Suspense: Suspense is a powerful mechanism for handling asynchronous data fetching and rendering. It allows React components to “suspend” rendering while waiting for data or other resources, providing a seamless fallback experience for users. The Fiber Reconciler makes it possible to implement Suspense with minimal overhead and complexity.
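To make the idea of pausable, resumable units of work concrete, here is a deliberately simplified sketch of a time-sliced work loop. This is not React’s actual source; the names workLoop and makeSlice and the two-unit frame budget are invented for illustration:

```javascript
// Toy model of incremental rendering: each "fiber" is a unit of work,
// and the loop processes units only while its time slice allows,
// pausing and resuming across simulated "frames".
const fibers = ["A", "B", "C", "D", "E"].map((name) => ({ name }));
const completed = [];

let nextIndex = 0;

// Process units of work until the slice is exhausted or work runs out.
function workLoop(shouldYield) {
  while (nextIndex < fibers.length && !shouldYield()) {
    completed.push(fibers[nextIndex].name); // "perform" the unit of work
    nextIndex++;
  }
  return nextIndex < fibers.length; // true if there is more work to do
}

// Each "frame" gets a budget of two units of work before yielding.
function makeSlice(budget) {
  let used = 0;
  return () => ++used > budget;
}

let hasMore = workLoop(makeSlice(2)); // frame 1 processes A and B, then yields
// ...the browser could handle input and paint here, between frames...
while (hasMore) {
  hasMore = workLoop(makeSlice(2)); // later frames resume where we left off
}

console.log(completed.join(",")); // "A,B,C,D,E"
```

Because the loop checks its budget between units rather than processing everything at once, the main thread gets a chance to handle input and paint between frames; that is the essence of incremental rendering.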
React’s ability to schedule and defer updates is crucial for maintaining a responsive application. By prioritizing critical updates and deferring less important ones, React ensures that the user interface remains responsive even under heavy load or when dealing with complex UIs. The Fiber Reconciler enables this functionality by using a combination of requestIdleCallback and requestAnimationFrame APIs. These APIs allow React to perform work during idle periods and schedule updates at the most opportune times.
Consider a real-time chat application where users can send and receive messages. We will have a chat component that displays a list of messages and a message input component where users can type and submit their messages. Additionally, the chat application receives new messages from the server in real-time. In this scenario, we want to prioritize user interactions (typing and submitting messages) to maintain a responsive experience while ensuring that incoming messages are rendered efficiently without blocking the UI.
To make this example a little more concrete, let’s create some components. First, a list of messages:
const MessageList = ({ messages }) => (
  <ul>
    {messages.map((message, index) => (
      <li key={index}>{message}</li>
    ))}
  </ul>
);
This should not seem too complicated. Next, we have a message input component that allows users to type and submit messages:
const MessageInput = ({ onSubmit }) => {
  const [message, setMessage] = useState("");

  const handleSubmit = (e) => {
    e.preventDefault();
    onSubmit(message);
    setMessage("");
  };

  return (
    <form onSubmit={handleSubmit}>
      <input
        type="text"
        value={message}
        onChange={(e) => setMessage(e.target.value)}
      />
      <button type="submit">Send</button>
    </form>
  );
};
Finally, we have a chat component that combines the two components and handles the logic for sending and receiving messages:
const ChatApp = () => {
  const [messages, setMessages] = useState([]);

  useEffect(() => {
    // Connect to the server and subscribe to incoming messages
    const socket = new WebSocket("wss://your-websocket-server.com");
    socket.onmessage = (event) => {
      setMessages((prevMessages) => [...prevMessages, event.data]);
    };
    return () => {
      socket.close();
    };
  }, []);

  const sendMessage = (message) => {
    // Send the message to the server
  };

  return (
    <div>
      <MessageList messages={messages} />
      <MessageInput onSubmit={sendMessage} />
    </div>
  );
};
In this example, React’s asynchronous rendering capabilities come into play by efficiently managing the updates of both the message list and the user’s interactions with the message input. When a user types or submits a message, React prioritizes the text input updates above other updates to ensure a smooth user experience. In contrast, when new messages arrive from the server, React intelligently schedules the updates and renders them without blocking the UI, allowing the chat application to function efficiently even under heavy load. Thus, user input is never interrupted, and incoming messages are rendered with a lower priority than user interactions since they are less critical to the user experience: that is, they do not react, pun intended, to the user’s actions.
This example demonstrates how React’s asynchronous rendering capabilities can be leveraged to build responsive applications that handle complex interactions and frequent updates without compromising on performance or user experience.
Let’s look a little deeper at how exactly React achieves this.
In React, the process of scheduling, prioritizing, and deferring updates is essential to maintaining a responsive user interface. This process ensures that high-priority tasks are addressed promptly while low-priority tasks can be deferred, allowing the UI to remain smooth even under heavy load. To delve deeper into this topic, we’ll examine several core concepts: the scheduler, the priority levels of tasks, and the mechanisms that defer updates.
Before we proceed, let’s remind ourselves one more time that the information covered here consists of implementation details and is not required to use React. However, understanding these concepts will help you better understand how React works and how to use it effectively. With that in mind, let’s proceed.
At the heart of React’s prioritization process is the scheduler, a separate package that helps React manage the work it needs to perform. The scheduler uses a cooperative multitasking model, which means it allows multiple tasks to make progress without interrupting each other. This model is made possible by breaking the work into smaller chunks, called “units of work” or fibers, which can be processed incrementally. We covered these units of work in chapter 4 when we looked at the Fiber Reconciler.
The scheduler’s primary responsibility is to manage the processing of these units of work based on their priority. It determines when a unit of work should start, pause, or resume based on the priority of other tasks in the queue. The scheduler leverages the browser’s requestIdleCallback API when available to schedule low-priority tasks during idle periods, ensuring that these tasks do not interfere with more critical tasks or user interactions.
Let’s make sure we understand requestIdleCallback correctly before we continue.
requestIdleCallback

requestIdleCallback is a method exposed on the window object by modern web browsers that allows developers to schedule a function to be called during periods of low activity, or idle time. It is defined in the W3C Background Tasks draft specification and is supported in most modern browsers, including Chrome, Firefox, and Edge; notably, Safari does not support it.
This method is particularly useful for performing background or non-essential work without interfering with the user’s experience. For example, it can be used to defer non-critical actions such as sending analytics data, preloading content, or performing calculations that aren’t immediately necessary for the user’s interaction with the page. This is indeed how React uses this API.
The basic usage of requestIdleCallback is quite simple. You call the method with a callback function, and the browser will call that function when it determines that there is idle time available.
window.requestIdleCallback(function (deadline) {
  // perform work here
});
The deadline object passed to your callback contains two properties:
timeRemaining: A method that returns the amount of time remaining in the current idle period, in milliseconds.

didTimeout: A boolean indicating whether the callback was called due to a timeout.

Here’s an example of how you might use these properties:
window.requestIdleCallback(function (deadline) {
  while (deadline.timeRemaining() > 0 && tasks.length > 0) {
    performTask(tasks.pop());
  }
});
In this example, tasks is an array of tasks to be performed. The performTask function performs one of these tasks. The loop continues performing tasks as long as there is time remaining in the current idle period and there are still tasks to be performed.
requestIdleCallback also supports a second, optional argument: an options object. The only option currently supported is timeout, which specifies a maximum time to wait before the callback is called, in milliseconds.
If you specify a timeout, then the browser will attempt to run the callback during an idle period before the timeout elapses. If it cannot find an idle period before the timeout, it will run the callback anyway and set the didTimeout property of the deadline object to true.
Here’s an example:
window.requestIdleCallback(
  function (deadline) {
    if (deadline.didTimeout) {
      // The browser could not find an idle period before the timeout
      // We might want to perform only high-priority tasks in this case
    } else {
      // The browser found an idle period
      // We can perform all tasks here
    }
  },
  { timeout: 1000 }
);
In this example, the callback will be called within approximately 1000 milliseconds (1 second), whether the browser finds an idle period or not.
Just like setTimeout and setInterval, requestIdleCallback returns an ID that can be used to cancel the scheduled callback.
The method for cancelling a scheduled callback is cancelIdleCallback. You pass it the ID returned by requestIdleCallback, and the browser will cancel the callback if it has not already been called.
Here’s an example:
var id = window.requestIdleCallback(function () {
  // this may or may not be called
});
window.cancelIdleCallback(id);
In this example, the callback may or may not be called, depending on whether the call to cancelIdleCallback happens before or after the browser calls the callback.
While requestIdleCallback is a powerful tool, it does have some limitations and caveats:
The browser may never call the callback if it never finds idle time, unless you specify a timeout, in which case the callback runs anyway with the didTimeout property of the deadline object set to true.

The timeRemaining method of the deadline object gives an estimate of the time remaining in the current idle period, but it is not a guarantee. The actual time remaining may be less than the value returned by timeRemaining.

Callbacks scheduled with requestIdleCallback are not called with any particular this value. This is different from setTimeout and setInterval, which call their callbacks with this set to the global object (usually window).

React’s concurrent features are somewhat analogous to requestIdleCallback, which enables lower-priority tasks to be scheduled to run during any idle time in the browser’s event loop. React’s concurrent features, in essence, are about smart scheduling: React chooses when to work on which updates based on their priority.
To effectively manage the scheduling of updates, React assigns a priority level to each task. These priority levels help React determine the order in which tasks should be executed. There are several priority levels, including ImmediatePriority, UserBlockingPriority, NormalPriority, LowPriority, and IdlePriority.
React evaluates each task and assigns a priority level based on its nature and the current state of the application. This evaluation enables React to execute high-priority tasks more quickly and defer lower-priority tasks as needed.
Let’s now take a look at how these priority levels are used to schedule updates in our previous example, the chat application. This will help illustrate how React’s scheduling mechanism can be applied in practice to ensure that the app remains responsive and efficient.
We can break down the tasks in the chat app into the following priority levels:
ImmediatePriority—Handling user input: When the user types a message in the input field or clicks the send button, the app needs to respond immediately to ensure a smooth and responsive experience. React assigns the ImmediatePriority level to user input events, such as onChange and onClick, to ensure that they are processed promptly.
UserBlockingPriority—Rendering new messages: As new messages arrive from the server, the chat app needs to update the message list to display them. While this task is essential, it’s not as time-sensitive as handling user input. React may assign a UserBlockingPriority level to this task, allowing it to be interrupted by ImmediatePriority tasks if necessary.
NormalPriority—Non-critical UI updates: Suppose the chat app had features that are not critical to its core functionality, such as updating user avatars, fetching additional metadata, or loading non-essential UI elements (e.g., tooltips). These tasks can be assigned a NormalPriority level, allowing React to defer them until more critical tasks are complete.
LowPriority—Background tasks: In some cases, the chat app might need to perform background tasks, such as data fetching for a hidden part of the UI, updating caches, or performing analytics. These tasks are not time-sensitive and can be assigned either LowPriority or IdlePriority level, allowing React to execute them when there is spare time available.
By mapping the tasks in the chat app to React’s priority levels, React can intelligently schedule and manage updates to ensure a responsive user experience. Higher-priority tasks, such as user input handling, are prioritized and processed immediately, while lower-priority tasks can be deferred or interrupted as needed. This allows the chat app to function efficiently and provide a seamless experience even under heavy load.
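As a rough model of this prioritization, consider the sketch below. The priority names mirror the levels above, but the queue-draining logic is a simplification for illustration, not React’s scheduler:

```javascript
// Illustrative only: tasks from the chat example, tagged with priority
// levels modeled on React's scheduler. A lower number means more urgent.
const ImmediatePriority = 1;
const UserBlockingPriority = 2;
const NormalPriority = 3;
const LowPriority = 4;

const queue = [
  { name: "update avatars", priority: NormalPriority },
  { name: "handle keystroke", priority: ImmediatePriority },
  { name: "send analytics", priority: LowPriority },
  { name: "render incoming message", priority: UserBlockingPriority },
];

// Drain the queue highest-priority-first, the way a scheduler would
// drain its task heap.
const order = [...queue]
  .sort((a, b) => a.priority - b.priority)
  .map((task) => task.name);

console.log(order);
// ["handle keystroke", "render incoming message",
//  "update avatars", "send analytics"]
```

Note that a real scheduler interleaves this draining with yielding to the browser; the point here is only the ordering: user input is handled before incoming messages, which in turn come before background work.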
The process of deferring lower priority updates is a crucial aspect of React’s scheduling mechanism. By deferring lower-priority tasks, React can ensure that high-priority tasks are completed promptly, resulting in a more responsive user interface. React leverages the concept of “lanes” to manage the deferral of updates. Each lane represents a different priority level, and updates are grouped into these lanes accordingly. When the scheduler determines that it’s time to process a deferred update, it selects the highest-priority lane with pending updates and begins processing the associated tasks.
To further illustrate this process, consider the following example:
import { useTransition, useState } from "react";

function Button() {
  const [state, setState] = useState({ buttonClicked: false });
  const [, startTransition] = useTransition();

  // Low-priority update
  const changeBg = () =>
    startTransition(() => {
      setState({ backgroundUpdated: true });
    });

  const onClick = () => {
    // High-priority update
    setState({ buttonClicked: true });
    changeBg();
  };

  return <button onClick={onClick}>do something</button>;
}
In this example, when the onClick function is called, two updates are triggered: a high-priority update to reflect the button click and a low-priority update to modify the background. By using the useTransition hook, we explicitly defer the low-priority update. This allows React to prioritize the button click update, ensuring that the UI remains responsive.
As the scheduler processes updates, it may also interrupt lower-priority tasks to accommodate higher-priority ones. Once the higher-priority tasks are completed, the interrupted tasks can resume from where they left off. This ability to pause and resume tasks contributes to the overall responsiveness of the application.
Render lanes are an essential part of React’s scheduling system, which ensures efficient rendering and prioritization of tasks. A lane groups units of work by priority level so they can be processed together as part of React’s rendering cycle. The concept of render lanes was introduced in React 18, replacing the previous scheduling mechanism that used expiration times. Let’s dive into the details of render lanes, how they work, and their underlying representation as bitmasks.
First off, a render lane is a lightweight abstraction that React uses to organize and prioritize the work that needs to be done during the rendering process. Each lane corresponds to a specific priority level, with higher priority lanes processed before lower priority lanes. React uses a fixed number of render lanes, each represented as a single bit in a 32-bit integer. This efficient representation as a bitmask allows for fast operations and easy comparisons, making it an ideal data structure for React’s scheduling system.
Before we go further, let’s dive a little bit into bitmasks.
Before discussing bitmasks, we need to understand binary representation and bitwise operations. In JavaScript, numbers are usually stored in a format called double-precision floating-point, according to the IEEE 754 standard. But when you work with bitwise operations, JavaScript temporarily converts numbers to 32-bit signed (as in, + or -) integers.
Bitwise operators treat their operands as a sequence of 32 bits (32 zeroes and ones), rather than as decimal, hexadecimal, or octal numbers. For example, the decimal number nine has a binary representation of 1001.
A bitmask is a sequence of bits that can manipulate and/or read the values of specific bits in a target bit sequence. You can think of a bitmask as a sort of filter that’s applied to a sequence of binary data, enabling you to interact with certain bits while ignoring others. You can use bitwise operators in conjunction with bitmasks to read and manipulate individual bits in a number.
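For example, here is a minimal demonstration of setting, reading, clearing, and toggling individual bits with masks; the FLAG_* names are arbitrary:

```javascript
// Each flag occupies one bit position in the mask.
const FLAG_A = 0b0001; // bit 0
const FLAG_B = 0b0010; // bit 1
const FLAG_C = 0b0100; // bit 2

let flags = 0;

flags |= FLAG_A; // set bit 0    -> 0b0001
flags |= FLAG_C; // set bit 2    -> 0b0101

const hasA = (flags & FLAG_A) !== 0; // read: is bit 0 set? -> true
const hasB = (flags & FLAG_B) !== 0; // read: is bit 1 set? -> false

flags &= ~FLAG_A; // clear bit 0  -> 0b0100
flags ^= FLAG_B;  // toggle bit 1 -> 0b0110

// toString(2) shows the binary representation,
// just as decimal nine is "1001".
console.log(flags.toString(2)); // "110"
```

Each operator pairs naturally with a mask: OR sets bits, AND reads them, AND-NOT clears them, and XOR toggles them.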
Bitwise operators and bitmasks can be powerful tools when you need to interact with the individual bits within a number. They can be particularly useful in tasks related to data compression, encryption, and low-level programming. However, they can also be more difficult to understand and debug than higher-level operations, so they should be used judiciously.
As with many aspects of programming, the key to using bitmasks effectively is understanding the underlying principles and practicing with concrete examples. Whether you’re trying to squeeze out a bit more performance, working with a hardware interface, or just enjoy bit-level manipulation, bitmasks offer a way to interact with your data at the most granular level.
React uses bitmasks for lanes for these reasons: they’re fast, efficient, and save memory. This is probably another compelling reason to lean on React as an abstraction: we don’t have to worry about concerns at this level ourselves. Great! Now that we understand bitmasks, let’s continue exploring lanes.
When a component updates or a new component is added to the render tree, React assigns a lane to the update based on its priority using the priority levels we discussed just previously. As we know, the priority is determined by the type of update (e.g., user interaction, data fetching, or background task) and other factors, such as the component’s visibility.
React then uses the render lanes to schedule and prioritize updates in the following manner:
Collect updates: React collects all the updates that have been scheduled since the last render and assigns them to their respective lanes based on their priority.
Process lanes: React processes the updates in their respective lanes, starting with the highest priority lane. Updates in the same lane are batched together and processed in a single pass.
Commit phase: After processing all the updates, React enters the commit phase, where it applies the changes to the DOM, runs effects, and performs other finalization tasks.
Repeat: The process repeats for each render, ensuring that updates are always processed in priority order and that high-priority updates are not starved by lower-priority ones.
This compact representation using bitmasks allows for efficient operations and comparisons, which are crucial for React’s scheduling system.
For example, consider the following bitmask representation of render lanes:
00000000000000000000000000000000
Each bit in the bitmask corresponds to a lane. By convention, the lower (rightmost) bits represent the higher-priority lanes and the higher (leftmost) bits represent the lower-priority lanes.
When an update is scheduled, React assigns it to a lane by setting the corresponding bit in the bitmask. For example, if an update is assigned to lane 2 (zero-based index), the bitmask would look like this:
00000000000000000000000000000100
React can then use bitwise operations to quickly determine which lanes have updates and to compare the priority of different lanes. For example, React can use the bitwise OR operator to merge the bitmasks of multiple lanes with updates:
const lane1 = 0b00000000000000000000000000000010;
const lane2 = 0b00000000000000000000000000000100;
const combinedLanes = lane1 | lane2; // 0b00000000000000000000000000000110
Similarly, React can use the bitwise AND operator to check if a specific lane has updates:
const hasUpdatesInLane1 = (combinedLanes & lane1) !== 0; // true
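Building on these operators, a simplified model of draining pending lanes in priority order might look like the following. The lane names echo React’s terminology, but the code is an illustration rather than React’s implementation; it relies on the classic lanes & -lanes trick to isolate the lowest set bit:

```javascript
// Assumed convention (matching React's): lower bits are higher priority.
const SyncLane = 0b0001;
const InputLane = 0b0010;
const DefaultLane = 0b0100;
const IdleLane = 0b1000;

// Three lanes have pending updates: sync, default, and idle.
let pendingLanes = DefaultLane | SyncLane | IdleLane;

const processed = [];
while (pendingLanes !== 0) {
  // The lowest set bit is the highest-priority pending lane.
  const highest = pendingLanes & -pendingLanes;
  processed.push(highest);
  pendingLanes &= ~highest; // clear the lane once its updates are committed
}

console.log(processed); // [1, 4, 8]: sync first, then default, then idle
```

The negate-and-AND trick works because, in two’s complement, -x flips every bit above the lowest set bit of x, so only that one bit survives the AND.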
To understand how render lanes work in practice, let’s consider some code examples illustrating the use of render lanes in a React application. Suppose we have a different chat application with three main components: a message list, a typing indicator, and a message input. Each component has different update priorities based on user interaction and expected behavior. When updates are scheduled, React assigns them to the appropriate lanes based on their priority. In our chat application example, let’s assume the following priority assignments: the message input, which responds directly to user keystrokes, gets the highest priority; the typing indicator, which reflects near-real-time activity, gets a medium priority; and the message list, which renders incoming messages, gets a lower priority.
React takes care of assigning updates to the correct lanes based on these priorities, allowing the application to function efficiently without manual intervention. This is a function of the Fiber Reconciler, which we previously discussed in chapter 4. It does this by using heuristics and internal mechanisms that evaluate the update’s context and decide the appropriate priority level. Let’s explore how this works.
When an update is triggered, React performs the following steps to determine its priority and assign it to the correct lane:
Determine the update’s context: React evaluates the context in which the update was triggered. This context could be a user interaction, an internal update due to state or props changes, or even an update that’s a result of a server response. The context plays a crucial role in determining the priority of the update.
Estimate priority based on the context: Based on the context, React estimates the priority of the update. For instance, if the update is a result of user input, it’s likely to have a higher priority, while an update triggered by a non-critical background process might have a lower priority. We’ve already discussed the different priority levels in detail, so we won’t go into more detail here.
Check for any priority overrides: In some cases, developers can explicitly set the priority of an update using React’s useTransition or useDeferredValue hooks. If such a priority override is present, React will consider the provided priority instead of the estimated one.
Assign the update to the correct lane: Once the priority is determined, React assigns the update to the corresponding lane. This process is done using the bitmask we just looked at, which allows React to efficiently work with multiple lanes and ensure that updates are correctly grouped and processed.
Throughout this process, React relies on its internal heuristics and the context in which updates occur to make informed decisions about their priorities. This dynamic assignment of priorities and lanes allows React to balance responsiveness and performance, ensuring that applications function efficiently without manual intervention from developers.
It’s important to note that while React is good at estimating priorities, it’s not always perfect. As a developer, you may sometimes need to override the default priority assignments using the mentioned APIs (useTransition, useDeferredValue) to fine-tune your application’s performance and responsiveness.
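The steps above can be sketched as a single hypothetical function. The context strings, lane values, and isTransition flag are all invented for illustration and do not correspond to React’s real heuristics:

```javascript
// Hypothetical lane constants; lower bits model higher priority.
const SyncLane = 0b001;       // e.g., direct user input
const DefaultLane = 0b010;    // ordinary state/props updates
const TransitionLane = 0b100; // explicitly deferred updates

// Step 1-4 in miniature: look at the update's context, honor any
// explicit override (like a transition), then pick a lane.
function assignLane(context, { isTransition = false } = {}) {
  if (isTransition) return TransitionLane; // priority override wins
  switch (context) {
    case "user-input":
      return SyncLane; // user-facing work is most urgent
    case "server-response":
    default:
      return DefaultLane; // everything else gets normal priority
  }
}

console.log(assignLane("user-input")); // 1
console.log(assignLane("server-response")); // 2
console.log(assignLane("user-input", { isTransition: true })); // 4
```

In real React the override comes from wrapping an update in startTransition or reading a value through useDeferredValue, rather than from a boolean flag like this.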
Once updates have been assigned to their respective lanes, React processes them in priority order. In our chat application example, React would process message input updates first, then typing indicator updates, and finally message list updates.
By processing updates in priority order, React ensures that the most important parts of the application remain responsive even under heavy load.
After processing all the updates in their respective lanes, React enters the commit phase, where it applies the changes to the DOM, runs effects, and performs other finalization tasks. In our chat application example, this might include updating the message input value, showing or hiding the typing indicator, and appending new messages to the message list. React then moves on to the next render cycle, repeating the process of collecting updates, processing lanes, and committing changes.
This book is called “Fluent React” and aims to be a deep dive on React’s various mechanisms. Since we’re looking at update deferral and scheduling in this chapter, we’d be remiss if we didn’t dive deeper into the functions we’ve just mentioned: useTransition and useDeferredValue. This book is also not just the docs (which you can reference at react.dev), but a more nuanced look at these APIs. In keeping with that spirit, let’s dive a little bit into them as well.
Let’s begin with the basics and dive deeper.
useTransition is a powerful React hook that allows you to manage the priority of state updates in your components and prevent the UI from becoming unresponsive due to high-priority updates. It’s particularly useful when dealing with updates that can be visually disruptive, such as loading new data or navigating between pages in a Single Page Application (SPA).
useTransition is a hook, meaning you can only use it inside function components. It returns an array containing two elements:
isPending: A boolean indicating whether a transition is in progress.

startTransition: A function that you can use to wrap updates that should be deferred or given a lower priority.

Here’s a simple example that demonstrates the basic usage of useTransition:
import React, { useState, useTransition } from "react";

function App() {
  const [count, setCount] = useState(0);
  const [isPending, startTransition] = useTransition();

  const handleClick = () => {
    doSomethingImportant(); // some urgent work, assumed to be defined elsewhere
    startTransition(() => {
      setCount(count + 1);
    });
  };

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={handleClick}>Increment</button>
      {isPending && <p>Loading...</p>}
    </div>
  );
}

export default App;
In this example, we use useTransition to manage the priority of a state update that increments a counter. By wrapping the setCount update inside the startTransition function, we tell React that this update can be deferred, preventing the UI from becoming unresponsive if there are other high-priority updates happening simultaneously.
useTransition examples

Now that we’ve seen a basic example, let’s explore some more advanced use cases of useTransition.
useTransition can be particularly useful when dealing with data fetching, as it allows you to manage the priority of updates that display fetched data. Here’s an example that demonstrates how to use useTransition with data fetching:
import React, { useState, useEffect, useTransition } from "react";

function fetchData(id) {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve({ id, data: `Fetched data for id: ${id}` });
    }, 1000);
  });
}

function App() {
  const [data, setData] = useState(null);
  const [isPending, startTransition] = useTransition();

  const handleFetchData = (id) => {
    fetchData(id).then((fetchedData) => {
      // Wrap the state update itself, not the async call: code that
      // runs after a .then/await is outside the transition's scope.
      startTransition(() => {
        setData(fetchedData);
      });
    });
  };

  useEffect(() => {
    handleFetchData(1);
  }, []);

  return (
    <div>
      <button onClick={() => handleFetchData(2)}>Fetch more data</button>
      {isPending && <p>Loading...</p>}
      {data && <p>{data.data}</p>}
    </div>
  );
}

export default App;
In this example, we have a fetchData function that simulates an API call and returns a promise with fetched data. We use useTransition to wrap the update that sets the fetched data, ensuring that this update is deferred if necessary.
useTransition is also useful when navigating between pages in a Single Page Application (SPA). By managing the priority of updates related to navigation, you can ensure that the user experience remains smooth and responsive, even when dealing with complex page transitions.
Consider this example where we demonstrate how to use useTransition for managing page transitions in an SPA:
import React, { useState, useTransition } from "react";

const PageOne = () => <div>Page One</div>;
const PageTwo = () => <div>Page Two</div>;

function App() {
  const [currentPage, setCurrentPage] = useState("pageOne");
  const [isPending, startTransition] = useTransition();

  const handleNavigation = (page) => {
    startTransition(() => {
      setCurrentPage(page);
    });
  };

  const renderPage = () => {
    switch (currentPage) {
      case "pageOne":
        return <PageOne />;
      case "pageTwo":
        return <PageTwo />;
      default:
        return <div>Unknown page</div>;
    }
  };

  return (
    <div>
      <nav>
        <button onClick={() => handleNavigation("pageOne")}>Page One</button>
        <button onClick={() => handleNavigation("pageTwo")}>Page Two</button>
      </nav>
      {isPending && <p>Loading...</p>}
      {renderPage()}
    </div>
  );
}

export default App;
In this example, we have two simple components representing different pages in our SPA. We use useTransition to wrap the state update that changes the current page, ensuring that the page transition is deferred if there are other high-priority updates (like user input) happening simultaneously.
Customizing useTransition

useTransition also allows you to customize the behavior of the transition through an optional configuration object. This object can include the following properties:
- timeoutMs: Sets a timeout for the transition, in milliseconds. If the transition takes longer than this value, React will force a render with the new state.

Here’s an example that demonstrates how to customize useTransition with a timeout:
import React, { useState, useTransition } from "react";

function App() {
  const [count, setCount] = useState(0);
  const [isPending, startTransition] = useTransition({ timeoutMs: 5000 });

  const handleClick = () => {
    startTransition(() => {
      setCount(count + 1);
    });
  };

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={handleClick}>Increment</button>
      {isPending && <p>Loading...</p>}
    </div>
  );
}

export default App;
In this example, we set a 5-second timeout for the transition, ensuring that the update will be applied even if it takes longer than expected.
With the background knowledge of React’s Fiber architecture, the React Scheduler, priority levels, and render lanes mechanism, we can now delve into the inner workings of the useTransition hook.
The useTransition hook works by creating a transition and assigning a specific priority level to the updates made within that transition. When an update is wrapped in a transition, React ensures that the update gets scheduled and rendered based on the assigned priority level.
Here’s a high-level overview of the steps involved in using the useTransition hook:
1. Call the useTransition hook within a functional component.
2. Destructure the array it returns: the first element is the isPending state, and the second is the startTransition function.
3. Use the startTransition function to wrap any state update or component rendering whose timing you want to control.
4. Read the isPending state for an indicator of whether the transition is still in progress or has completed.

useTransition and Render Lanes

When you use useTransition, React leverages the render lanes mechanism to ensure that updates within the transition are given the appropriate priority level. By default, transitions are assigned a lower priority level compared to other updates, such as immediate or user-blocking updates.
The default priority level for a transition is called TransitionPriority. This level allows updates within the transition to be deferred if there are higher-priority updates competing for the main thread. This ensures that critical updates and user interactions are not blocked by the updates within the transition.
When a transition is initiated, React creates a new lane for the updates wrapped within the startTransition function. It then associates this lane with the TransitionPriority level, allowing the Scheduler to manage and prioritize the updates accordingly.
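This lane-and-priority bookkeeping can be sketched as a toy model in plain JavaScript. Everything here is illustrative, not React’s actual implementation: the lane constants, the ToyScheduler class, and the task names are all invented for this sketch (React’s real scheduler uses a 31-bit lane bitmask and far more machinery):

```javascript
// Toy model of lane-based scheduling: lower lane numbers mean higher
// priority. Illustrative only; not React's real scheduler.
const SyncLane = 1;        // e.g., a click handler's setState
const TransitionLane = 4;  // updates wrapped in startTransition

class ToyScheduler {
  constructor() {
    this.queue = [];
  }
  schedule(lane, work) {
    this.queue.push({ lane, work });
    // Stable sort: higher-priority (lower) lanes run first, and
    // same-lane work keeps its insertion order.
    this.queue.sort((a, b) => a.lane - b.lane);
  }
  flush() {
    const order = [];
    while (this.queue.length > 0) {
      const task = this.queue.shift();
      order.push(task.work());
    }
    return order;
  }
}

const scheduler = new ToyScheduler();
// A transition update is scheduled first...
scheduler.schedule(TransitionLane, () => "render filtered list");
// ...but a sync update arriving later still runs before it.
scheduler.schedule(SyncLane, () => "handle click");
const order = scheduler.flush();
console.log(order); // sync work first, then the deferred transition work
```

The key property this models is that priority, not arrival order, decides what runs first, while work within the same lane is still processed in order.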
Let’s look at a simple example to understand how useTransition works:
import React, { useState, useTransition } from "react";

function App() {
  const [count, setCount] = useState(0);
  const [isPending, startTransition] = useTransition();

  const handleClick = () => {
    doSomethingImportant();
    startTransition(() => {
      setCount((prevCount) => prevCount + 1);
    });
  };

  return (
    <div>
      <h1>Count: {count}</h1>
      <button onClick={handleClick}>Increment</button>
      {isPending && <p>Updating...</p>}
    </div>
  );
}

export default App;
In this example, we have a simple counter application. When the user clicks the “Increment” button, the handleClick function is called. Inside this function, we use the startTransition function returned by the useTransition hook to wrap the state update for the counter.
React ensures that the wrapped update is assigned the TransitionPriority level, allowing other higher-priority updates to take precedence if needed. The isPending state indicates whether the transition is still in progress or has completed, allowing us to show an “Updating...” message to the user while the update is being processed.
By using useTransition, we can effectively control the timing of updates and maintain a smooth user experience, even when other higher-priority updates are competing for the main thread.
While the default priority level for transitions is TransitionPriority, React allows developers to customize the priority level of updates wrapped within a transition. This can be achieved by passing a configuration object with a timeoutMs property to the useTransition hook.
The timeoutMs property specifies the maximum time the transition is allowed to be deferred. After the timeout has elapsed, the transition will be treated as a higher-priority update. This is very similar to how requestIdleCallback works in browsers, as we’ve covered earlier.
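The escalation rule can be modeled with a few lines of plain JavaScript. This is a hypothetical sketch (the createDeferredUpdate helper is invented for illustration): a deferred update records when it was scheduled, and once timeoutMs has elapsed it reports itself as no longer deferrable.

```javascript
// Toy model of timeoutMs escalation: a deferred update becomes
// high-priority once it has waited longer than its timeout.
// Illustrative sketch only, not React's implementation.
function createDeferredUpdate(scheduledAt, timeoutMs) {
  return {
    scheduledAt,
    timeoutMs,
    // Has this update waited long enough that it must be forced?
    isExpired(now) {
      return now - scheduledAt >= timeoutMs;
    },
  };
}

const update = createDeferredUpdate(0, 1000);
console.log(update.isExpired(500));  // false: still deferrable
console.log(update.isExpired(1200)); // true: force the render now
```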
Here’s an example of how to use the timeoutMs configuration:
import React, { useState, useTransition } from "react";

function App() {
  const [count, setCount] = useState(0);
  const [isPending, startTransition] = useTransition({ timeoutMs: 1000 });

  const handleClick = () => {
    startTransition(() => {
      setCount((prevCount) => prevCount + 1);
    });
  };

  return (
    <div>
      <h1>Count: {count}</h1>
      <button onClick={handleClick}>Increment</button>
      {isPending && <p>Updating...</p>}
    </div>
  );
}

export default App;
In this example, we have specified a timeoutMs of 1000 milliseconds (1 second). If the transition is deferred for more than 1 second, React will treat it as a higher-priority update and ensure it gets rendered without further delay. Until that deadline, it remains a lower-priority update that React can continue to defer as needed; in other words, React can adjust priority levels on the fly.
useTransition and Suspense

The useTransition hook can also be used in conjunction with React Suspense to manage the loading state of components that rely on asynchronous data fetching. When using useTransition with Suspense, the transition priority level determines how long the component will stay in the “suspended” state before the fallback UI is rendered.
Here’s an example that demonstrates using useTransition with Suspense:
import React, { useState, useTransition, Suspense } from "react";
import UserProfile from "./UserProfile";

function App() {
  const [userId, setUserId] = useState(1);
  const [isPending, startTransition] = useTransition({ timeoutMs: 1000 });

  const handleNextUser = () => {
    startTransition(() => {
      setUserId((prevUserId) => prevUserId + 1);
    });
  };

  return (
    <div>
      <Suspense fallback={<p>Loading user profile...</p>}>
        <UserProfile userId={userId} />
      </Suspense>
      <button onClick={handleNextUser}>Next User</button>
      {isPending && <p>Loading next user...</p>}
    </div>
  );
}

export default App;
In this example, we have a UserProfile component that fetches user data asynchronously. When the user clicks the “Next User” button, the handleNextUser function is called, and we use the startTransition function to wrap the state update for the userId. With the timeoutMs configuration set to 1000 milliseconds, React will wait up to 1 second before showing the fallback UI, allowing for a smoother transition between user profiles.
Advantages of useTransition

The useTransition hook is a powerful feature in React that allows developers to manage the scheduling and prioritization of component updates. It is particularly useful when dealing with asynchronous tasks, ensuring that updates do not interfere with the user’s experience or cause unnecessary re-renders. In this section, we will discuss the advantages of using useTransition in detail.
Smoother User Experience: One of the main advantages of using useTransition is that it enables a smoother user experience. By deferring lower-priority updates and preventing them from blocking the main thread, React can maintain a responsive UI, even in complex applications with many asynchronous tasks. This is especially important in applications where smooth interactions are crucial, such as games, animations, and real-time data visualizations.
For example, consider a chat application where users can send and receive messages in real-time. If the UI were to freeze every time a new message arrived, the user experience would suffer. By using useTransition, React can intelligently schedule the updates and renders, ensuring that the chat application functions efficiently even under heavy load.
Improved Perceived Performance: The useTransition hook also improves the perceived performance of an application. By delaying the rendering of certain updates, React can prioritize more critical tasks, such as user interactions and animations. This gives the illusion of a faster and more responsive application, even if the actual processing time remains the same.
Consider an application with a list of items that the user can filter. If the user types a search query, we want the application to respond immediately, rather than waiting for the filtered list to be rendered. By using useTransition, React can prioritize the user’s input and defer the rendering of the filtered list, making the application feel more responsive.
Optimized Resource Usage: Another advantage of using useTransition is that it helps optimize resource usage. In many applications, component updates can consume significant processing power, especially when dealing with complex data structures or large datasets. By deferring and prioritizing updates, React can better allocate resources to the most critical tasks, improving overall application performance.
For example, in a data visualization application, updating a complex chart with new data can be resource-intensive. By using useTransition, React can prioritize user interactions, such as zooming and panning, while deferring the chart update. This ensures that the application remains responsive and efficient, even when dealing with large amounts of data.
Graceful Degradation: When using useTransition, developers can take advantage of graceful degradation, where the application continues to function even when certain features are not available or when the system is under heavy load. By controlling the priority of updates, React can ensure that critical functionality remains accessible, even when less important updates are deferred or omitted.
For example, in a news application, the primary function is to display the latest headlines. If the server is under heavy load and takes longer than usual to fetch new data, React can use useTransition to prioritize rendering the existing headlines while deferring the update until the new data arrives. This ensures that users can still access the primary functionality of the application, even under adverse conditions.
Simplified State Management: The useTransition hook also simplifies state management in React applications. By providing a clear way to separate high-priority updates from lower-priority ones, developers can more easily reason about the state of their application and ensure that updates are processed in the correct order.
For example, consider an application with multiple components that depend on a shared piece of state. If some of these components are more critical than others, it can be challenging to ensure that the shared state updates are processed in the correct order. By using useTransition, developers can prioritize the most critical updates and defer less important ones, making state management more predictable and easier to reason about.
Better Handling of Concurrency: In modern web applications, concurrency is a common concern. Multiple tasks may be running simultaneously, with each potentially affecting the state of the application. Using useTransition can help manage these concurrent tasks more effectively, ensuring that high-priority updates are processed before lower-priority ones.
For example, imagine an e-commerce application where the user can add items to their cart while also viewing product details. If a user adds an item to their cart and then quickly navigates to another product page, we want to ensure that the cart update is processed before rendering the new product page. By using useTransition, we can control the priority of these updates and guarantee that the user’s actions are processed in the correct order.
More Efficient Network Usage: By using useTransition, developers can optimize network usage in their applications. Often, web applications need to fetch data from an API or other external sources. By controlling when and how this data is fetched, React can reduce the number of redundant network requests and ensure that the most critical data is fetched first.
For instance, in a social media application, the user’s feed may be updated with new posts as they scroll down the page. Instead of fetching all the posts at once, which can be resource-intensive and slow, useTransition allows React to fetch and render posts incrementally as the user scrolls. This not only improves the perceived performance of the application but also reduces the load on the server and network.
Easier Debugging and Testing: Finally, using useTransition can make debugging and testing React applications easier. By clearly separating high-priority updates from lower-priority ones, developers can more easily pinpoint issues and understand the flow of their application. This can be particularly useful when debugging complex applications with many components and asynchronous tasks.
Additionally, useTransition can make testing more straightforward by allowing developers to simulate different conditions and scenarios, such as slow network connections or heavy load on the server. This can help ensure that the application remains performant and responsive under various conditions, leading to a more robust and reliable user experience.
When Not to Use useTransition

While useTransition offers many benefits and can greatly improve the user experience in various situations, there are cases where it may not be the best choice. It’s essential to understand these scenarios to avoid overusing the hook and complicating your application unnecessarily. In this section, we will discuss when not to use useTransition in your React applications, covering several examples and providing guidance on making the right choice.
If you are developing a simple application with minimal asynchronous updates, using useTransition might be overkill. In such cases, the added complexity and overhead of the hook may not provide enough benefits to justify its usage. For instance, if your application consists of a few simple components and does not require any data fetching or complex state management, using useTransition may not be necessary.
Consider the following example:
import React, { useState } from "react";

function Counter() {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}

export default Counter;
In this simple counter example, there is no need for useTransition since the state updates are minimal and synchronous.
When your application primarily consists of static content or elements that do not require asynchronous updates, there is no need to use useTransition. The primary use case for useTransition is to manage asynchronous updates, and if your application does not involve such scenarios, adding useTransition would unnecessarily complicate the code.
In some cases, you might need the updates to be immediate and not have any transitions or deferring of updates. This can be true for critical functionalities, where any delay could impact the user experience negatively. In these cases, using useTransition would be counterproductive, as it might introduce delays or add unnecessary complexity.
For example, consider a form validation scenario:
import React, { useState } from "react";

function LoginForm() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [isValid, setIsValid] = useState(false);

  // Validate against the incoming values rather than state, which
  // hasn't been updated yet when this runs inside onChange.
  const validateForm = (nextEmail, nextPassword) => {
    setIsValid(nextEmail.length > 0 && nextPassword.length > 0);
  };

  return (
    <div>
      <input
        type="email"
        placeholder="Email"
        value={email}
        onChange={(e) => {
          setEmail(e.target.value);
          validateForm(e.target.value, password);
        }}
      />
      <input
        type="password"
        placeholder="Password"
        value={password}
        onChange={(e) => {
          setPassword(e.target.value);
          validateForm(email, e.target.value);
        }}
      />
      <button disabled={!isValid}>Submit</button>
    </div>
  );
}

export default LoginForm;
In this case, you would want the form validation to happen immediately as the user types. Using useTransition here would not be beneficial, as it could introduce delays in the validation process.
For applications with low-frequency updates, using useTransition might not provide enough advantages to justify its use. In these scenarios, the additional complexity introduced by the hook might outweigh the performance benefits. This is particularly true for applications where updates occur infrequently or are not time-sensitive.
To summarize, you should consider not using useTransition in the following cases:

- Simple applications with minimal asynchronous updates, where the hook’s overhead outweighs its benefits
- Applications consisting primarily of static content that does not require asynchronous updates
- Updates that must be applied immediately, such as form validation or other critical feedback
- Applications where updates occur infrequently or are not time-sensitive
Understanding when NOT to use useTransition is crucial for making the right architectural decisions for your application. By avoiding its unnecessary use, you can keep your codebase clean and focused on the essential aspects of your app.
Whew, that’s a lot to chew on, but hopefully now we understand useTransition and its interplay with asynchronous React, including the priority system, render lanes, and the scheduler. We’ve also learned about the different use cases for useTransition and when it might not be the best choice. Let’s move on to exploring useDeferredValue in more detail.
useDeferredValue

useDeferredValue is a built-in hook introduced in React to help manage the prioritization of updates in React applications. By deferring less important updates to a later time, useDeferredValue enables smoother transitions and a better user experience in situations where the application is under heavy load or dealing with computationally expensive operations.
During the initial render, the returned deferred value will be the same as the value you provided. During updates, React will first attempt a re-render with the old value (so it will return the old value), and then try another re-render in background with the new value (so it will return the updated value). If the new value is not ready yet, the old value will be returned again. Think of this as stale-while-revalidate energy.
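That two-pass behavior can be modeled in a few lines of plain JavaScript. The createDeferredValueModel helper below is invented purely for illustration; it mimics the sequence of values a component using useDeferredValue would see across an urgent render and the background render that follows:

```javascript
// Toy model of useDeferredValue's two-pass behavior. On an update,
// the urgent re-render still sees the old deferred value; a follow-up
// background render then commits the new one. Illustrative only.
function createDeferredValueModel(initial) {
  let committed = initial; // what the urgent render returns
  let latest = initial;    // the most recent value passed in
  return {
    // Urgent render: record the new value but return the old one.
    renderUrgent(value) {
      latest = value;
      return committed;
    },
    // Background render: commit and return the new value.
    renderBackground() {
      committed = latest;
      return committed;
    },
  };
}

const model = createDeferredValueModel("a");
console.log(model.renderUrgent("a"));  // "a" — initial render
console.log(model.renderUrgent("b"));  // "a" — old value during urgent pass
console.log(model.renderBackground()); // "b" — background pass catches up
```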
The primary purpose of useDeferredValue is to allow you to defer the rendering of less critical updates. This is particularly useful when you want to prioritize more important updates, such as user interactions, over less critical ones, such as displaying updated data from the server.
By using useDeferredValue, you can provide a smoother user experience and ensure that your application remains responsive even under heavy load or when dealing with complex operations.
To use useDeferredValue, you will need to import it from the React package and pass a value to be deferred as its argument. The hook will then return a deferred version of the value that can be used in your component.
Here’s an example of how to use useDeferredValue in a simple application:
import React, { useState, useDeferredValue } from "react";

function App() {
  const [searchValue, setSearchValue] = useState("");
  const deferredSearchValue = useDeferredValue(searchValue, {
    timeoutMs: 2000,
  });

  return (
    <div>
      <input
        type="text"
        value={searchValue}
        onChange={(event) => setSearchValue(event.target.value)}
      />
      <SearchResults searchValue={deferredSearchValue} />
    </div>
  );
}

function SearchResults({ searchValue }) {
  // Perform the search and render the results
}
In this example, we have a search input and a SearchResults component that displays the results. We use useDeferredValue to defer the rendering of the search results, allowing the application to prioritize user input and remain responsive even when searching is computationally expensive.
The second argument to useDeferredValue is an optional configuration object with a timeoutMs property. This property specifies the maximum time (in milliseconds) that the deferred value should be delayed before it is updated. In our example, we set the timeout to 2 seconds (2000 milliseconds).
There are several advantages to using useDeferredValue in your React applications:
Improved Responsiveness: By deferring less critical updates, useDeferredValue allows your application to prioritize more important tasks, such as user input or animations. This results in a smoother user experience and ensures that your application remains responsive even under heavy load or when dealing with computationally expensive operations.
Simplified State Management: useDeferredValue provides a simple and declarative way to manage the prioritization of updates in your application. By encapsulating the logic for deferring updates within the hook, you can keep your component code clean and focused on the essential aspects of your app.
Better Resource Utilization: By deferring less critical updates, useDeferredValue allows your application to make better use of available resources. This can help reduce the likelihood of performance bottlenecks and improve the overall performance of your application.
When to Use useDeferredValue

useDeferredValue is most useful in situations where your application needs to prioritize certain updates over others. Some common scenarios where you might consider using useDeferredValue include:

- Filtering or searching through large lists based on user input
- Running computationally expensive operations on frequently changing values
- Displaying fetched data when rendering it should not block more urgent user interactions
Let’s take a look at an example where useDeferredValue can be particularly useful. Imagine we have a large list of items that we want to filter based on user input. Filtering a large list can be computationally expensive, so using useDeferredValue can help keep the application responsive.
import React, { useState, useMemo, useDeferredValue } from "react";

function App() {
  const [filter, setFilter] = useState("");
  const deferredFilter = useDeferredValue(filter, { timeoutMs: 1000 });

  const items = useMemo(() => generateLargeListOfItems(), []);

  const filteredItems = useMemo(() => {
    return items.filter((item) => item.includes(deferredFilter));
  }, [items, deferredFilter]);

  return (
    <div>
      <input
        type="text"
        value={filter}
        onChange={(event) => setFilter(event.target.value)}
      />
      <ItemList items={filteredItems} />
    </div>
  );
}

function ItemList({ items }) {
  // Render the list of items
}

function generateLargeListOfItems() {
  // Generate a large list of items for the example
}
In this example, we use useDeferredValue to defer the rendering of the filtered list. As the user types in the filter input, the deferred value updates less frequently, allowing the application to prioritize the user input and remain responsive. The timeoutMs is set to 1000 milliseconds, meaning the deferred value will be updated at most once per second.
The useMemo hooks are used to memoize the items and filteredItems arrays, preventing unnecessary re-rendering and recalculations. This further improves the performance of the application.
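The idea behind useMemo can be illustrated with a small dependency-based memoizer in plain JavaScript. This is a sketch of the concept, not React’s actual useMemo; the createMemo helper and the counter are invented for the example:

```javascript
// Minimal sketch of dependency-based memoization, the idea behind
// useMemo: recompute only when the dependency array changes.
function createMemo() {
  let lastDeps = null;
  let lastResult;
  return function memo(compute, deps) {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((dep, i) => !Object.is(dep, lastDeps[i]));
    if (changed) {
      lastResult = compute();
      lastDeps = deps;
    }
    return lastResult;
  };
}

let computations = 0;
const memo = createMemo();
const filterItems = (items, query) =>
  memo(() => {
    computations += 1; // count how often the expensive work runs
    return items.filter((item) => item.includes(query));
  }, [items, query]);

const items = ["apple", "banana", "cherry"];
console.log(filterItems(items, "an")); // computed on first call
console.log(filterItems(items, "an")); // same deps: cached, no recompute
console.log(computations);             // 1
```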
When Not to Use useDeferredValue

While useDeferredValue can be beneficial in certain scenarios, it’s important to recognize its limitations and potential drawbacks.
- Remember that useDeferredValue is just one tool for improving the performance and responsiveness of your application. It’s essential to ensure that your application is well-optimized and follows best practices for performance.
- While it may be tempting to use useDeferredValue everywhere, doing so can lead to unnecessary complexity in your code. Use useDeferredValue only when it makes sense for your specific use case.

A good question to ask yourself when deciding whether to use useDeferredValue or not is:
“Is the update I’m considering deferring less critical or time-sensitive, and can the application still provide a smooth user experience with slightly delayed or less frequent updates?”
By asking yourself this question, you can assess the importance of the update and whether it’s suitable for deferral. Remember that useDeferredValue is most useful when the updates you’re deferring are less critical or time-sensitive, and when prioritizing other tasks (like user input or animations) is more important for the user experience. If the update is critical or must be immediately reflected in the UI, it’s likely not a good candidate for using useDeferredValue.
Deep diving into useDeferredValue requires an understanding of several pieces of React’s internal implementation. It’s built on the same infrastructure as React’s newer concurrent features and takes advantage of key techniques such as fiber reconciliation, scheduling, double buffering, and render lanes to produce the desired effects.
We’ll try to break this down as concisely as possible, but given the complexity of the implementation, some details may be simplified.
The primary goal of useDeferredValue is to defer state updates that might cause heavy render work and allow more important updates (like user interactions) to be processed first.
To understand how useDeferredValue works internally, let’s first consider the function signature:
const deferredValue = useDeferredValue(value, { timeoutMs: 2000 });
It takes two parameters, value and a configuration object with timeoutMs. value is the current state that we want to defer, and timeoutMs is an optional timeout in milliseconds after which React will force the update.
When useDeferredValue is called, the given value is stored in React’s internal component state. The component then tells React that it wants to suspend based on the provided timeout. This suspension signal is propagated upwards through the component tree.
React internally keeps track of two versions of the state: the “normal” one, and the deferred one. This is effectively double buffering: React swaps between the two versions depending on which render pass it’s in. A component using useDeferredValue is effectively rendered twice: once with the deferred state and once with the non-deferred state.
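The double-buffering idea can be sketched in plain JavaScript. This toy model is invented for illustration (React’s real fiber trees are linked structures of fiber nodes, not simple state objects): a “current” tree stays on screen while a “work-in-progress” tree is built in the background, and the two are swapped at commit time.

```javascript
// Toy sketch of double buffering: keep a "current" tree visible while
// building a "work-in-progress" tree, then swap pointers on commit.
// Illustrative only, not React's fiber implementation.
function createDoubleBuffer(initialState) {
  let current = { state: initialState };
  let workInProgress = null;
  return {
    // Begin a background render with a candidate next state.
    beginWork(nextState) {
      workInProgress = { state: nextState };
    },
    // Commit: the finished work-in-progress tree becomes current.
    commit() {
      if (workInProgress !== null) {
        current = workInProgress;
        workInProgress = null;
      }
      return current.state;
    },
    // What the user currently sees on screen.
    visibleState() {
      return current.state;
    },
  };
}

const buffer = createDoubleBuffer("old UI");
buffer.beginWork("new UI");
console.log(buffer.visibleState()); // "old UI" — still shown mid-render
console.log(buffer.commit());      // "new UI" — the swap happens at commit
```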
Here’s a simplified version of how this might work:
let currentValue = initial;
let deferredValue = initial;

function useDeferredValue(value, config) {
  // Set the new value
  currentValue = value;

  // Schedule an update after timeoutMs
  setTimeout(() => {
    deferredValue = currentValue;
  }, config.timeoutMs);

  // Return the deferred value
  return deferredValue;
}
Now, the concept of render lanes comes into play. Each state update in React is assigned to a specific render lane. The updates in the same render lane are processed together. When a component calls useDeferredValue, React creates a new render lane for this deferred update. This way, React can work on several lanes concurrently, switching between them as needed.
At render time, React checks if the timeout for the deferred value has expired. If it has, React uses the new value for rendering. If not, React uses the old value. This allows React to keep the UI responsive by only committing deferred updates when the main thread is less busy.
When an important update (like a user interaction) comes in, it’s given a higher priority render lane. React will switch to work on this lane immediately, deferring the work on lower priority lanes. This is how useDeferredValue helps in keeping the UI responsive under heavy load.
After the higher priority updates are committed, React will switch back to the deferred update lane and continue the work there. This process is repeated as new updates come in.
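This interrupt-and-resume pattern can be sketched as a toy work loop in plain JavaScript. The runWorkLoop helper and the task labels are invented for illustration; the point is only the shape of the loop: low-priority work is processed one unit at a time, and any high-priority task that arrives gets handled at the next yield point before deferred work resumes.

```javascript
// Toy sketch of an interruptible work loop: deferred units of work run
// one at a time, and high-priority tasks that arrive in between are
// flushed first. Illustrative only, not React's work loop.
function runWorkLoop(lowPriorityUnits) {
  const log = [];
  const highPriorityQueue = [];
  const pending = [...lowPriorityUnits];
  while (pending.length > 0) {
    // Yield point: flush any high-priority work before continuing.
    while (highPriorityQueue.length > 0) {
      log.push(highPriorityQueue.shift()());
    }
    log.push(pending.shift()());
    // Simulate a user interaction arriving after the first unit.
    if (log.length === 1) {
      highPriorityQueue.push(() => "urgent: handle input");
    }
  }
  return log;
}

const log = runWorkLoop([
  () => "deferred: unit 1",
  () => "deferred: unit 2",
  () => "deferred: unit 3",
]);
console.log(log);
// The urgent task runs between deferred units, then deferred work resumes.
```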
Remember that this is a simplified view of how useDeferredValue might work under the hood. The actual implementation involves dealing with many edge cases and optimizations. For instance, useDeferredValue also interacts with React’s batching mechanism, suspense, error boundaries, and many other parts of the system. The most reliable way to understand these details is to dive into the React source code.
useDeferredValue is a complex but powerful hook that leverages several key techniques in React’s concurrent mode to deliver a responsive UI under heavy load. Its mastery requires a deep understanding of React’s internals, but its appropriate use can dramatically enhance the user experience in React applications.
However, like any other tool, useDeferredValue must be used judiciously. If overused or misused, it can lead to unnecessary complexity in your codebase, and even potential bugs or performance issues. It’s a good practice to understand your application’s performance bottlenecks and requirements thoroughly before deciding to use useDeferredValue or any other advanced feature of React.
Also, useDeferredValue is only a part of the story. React’s concurrent mode introduces several other features and techniques. These features work together to help you build a truly concurrent UI, allowing different parts of your application to update at their own pace based on their priorities.
Now, let’s look at some more complex examples of how useDeferredValue could be used in a real-world application:
import { useDeferredValue } from 'react';

function NoteComponent({ text }) {
  const deferredText = useDeferredValue(text, { timeoutMs: 1000 });
  const sentiment = computeSentiment(deferredText); // Expensive operation

  return (
    <div>
      <textarea value={text} onChange={...} />
      <div>Sentiment: {sentiment}</div>
    </div>
  );
}
Here, useDeferredValue ensures that the user’s text input remains responsive, while the expensive sentiment analysis is deferred. If the computation takes longer than 1 second, it will still be forced to complete, ensuring that the sentiment analysis result is updated within a reasonable timeframe.
useDeferredValue can also be helpful when fetching data. Consider a search component that fetches search results from the server. You want to optimize it so that the UI remains responsive while the search results are being fetched.

import { useState, useEffect, useDeferredValue } from 'react';

function SearchComponent({ query }) {
  const [results, setResults] = useState([]);

  useEffect(() => {
    fetchData(query).then((results) => setResults(results));
  }, [query]);

  const deferredResults = useDeferredValue(results, { timeoutMs: 2000 });

  return (
    <div>
      <input value={query} onChange={...} />
      <SearchResults results={deferredResults} />
    </div>
  );
}
Here, useDeferredValue defers the rendering of the new search results until the fetching is complete. This ensures that the UI remains responsive even if fetching the search results takes time.
In both examples, the key point to note is the value of timeoutMs. By tuning this value, you can control how long React should wait before forcing the deferred update.
While the usage of useDeferredValue can greatly enhance your application’s responsiveness under load, it should not be seen as a magic bullet. Always remember that the best way to improve performance is to write efficient code and avoid unnecessary computations. As a final note, please remember that the specifics of these internals can change as React continues to evolve, so it’s crucial to keep up to date with the official documentation and RFCs.
useTransition vs. useDeferredValueTo appreciate the nuanced differences between useTransition and useDeferredValue, it’s essential to delve into the specifics of how these hooks operate.
The useTransition hook is designed to handle state transitions and manage intermediate loading states. Essentially, it makes it easier to prevent fast updates from causing a “tearing” effect where the UI shows a mix of old and new states at the same time. useTransition also lets us keep the user interface responsive during state transitions that might cause some delay. This hook is particularly useful when an update relies on a slow, asynchronous operation like network requests.
Here’s an example of how useTransition might be used:
const [resource, setResource] = useState(initialResource);
const [isPending, startTransition] = useTransition();

function handleClick(nextResource) {
  startTransition(() => {
    setResource(nextResource);
  });
}

return (
  <>
    <button onClick={() => handleClick(nextResource)} disabled={isPending}>
      Load
    </button>
    <SomeComponent resource={resource} />
  </>
);
In this example, useTransition wraps around the state update and allows the UI to keep responding to other user inputs even if the update introduced by the handleClick function takes a long time to complete.
However, while useTransition is excellent for managing resource-heavy state updates, it might not be the best tool for all asynchronous scenarios. When it comes to updating the state based on user inputs in a performant way, especially when these inputs are fast or frequent (like typing in a text field), useTransition could still lead to a lagging UI. This is because the nature of useTransition is to delay rendering of the new state, which means the UI could be out-of-sync with the actual state while typing fast.
This is where useDeferredValue comes in. useDeferredValue allows us to defer slower updates from causing immediate re-renders, thus ensuring that our component remains responsive even under rapid state changes. It does so by keeping the previous state around for a bit longer, enabling the user interface to stay in sync with the user’s inputs, and then ‘catches up’ the rendering when there are enough resources available.
Let’s consider a text input field where we want to perform some expensive operation on the input, like a search operation, but we don’t want to compromise the responsiveness of the text input:
const [text, setText] = useState("");
const deferredText = useDeferredValue(text);

// text stays responsive, even under heavy load
<input value={text} onChange={(e) => setText(e.target.value)} />

// deferredText "lags behind" text, updating only when the system is less busy
<ExpensiveComponent text={deferredText} />
Here, we use useDeferredValue to create a deferred version of our state. The deferred version will “lag behind” the actual state, updating only when the system is less busy. This allows ExpensiveComponent to always work with slightly outdated data but without slowing down the responsiveness of the text input.
In summary, useTransition and useDeferredValue serve different purposes and are best used in different scenarios:
useTransition is best used for managing loading states and handling resource-heavy updates. It provides a way to keep the UI responsive during state transitions that might take some time to complete.
useTransition is great for scenarios where updates aren’t urgent and maintaining responsiveness for the rest of the user interface is a priority.
useDeferredValue shines when dealing with rapid state updates. It keeps the UI responsive by deferring slower state updates, thus preventing them from blocking important UI updates such as user inputs.
useDeferredValue excels in situations where maintaining real-time interaction is critical, even if it means showing slightly stale data for a short time.
However, it’s important to understand that neither useTransition nor useDeferredValue are silver bullets for all performance issues. They should be used judiciously as part of an overall performance strategy. Misuse or overuse of these hooks could lead to unnecessary complexity in your code and may even degrade performance rather than improve it.
Let’s consider some guidelines and common scenarios where each hook could be useful:
useTransition

When fetching data, you can wrap the resulting state update in useTransition. This way, you can show a loading spinner if the operation takes longer than expected, while allowing other interactions to stay responsive.

const [isPending, startTransition] = useTransition();

startTransition(() => {
  fetch("/api/data").then((response) =>
    response.json().then((data) => setData(data))
  );
});

// Show a loading spinner if the operation takes longer than expected
if (isPending) {
  return <Spinner />;
}
When a state update involves an expensive computation, useTransition allows you to delay this update and keep the UI responsive in the meantime.

const [isPending, startTransition] = useTransition();

startTransition(() => {
  // This computation might take a while
  const newData = computeNewData(oldData);
  setData(newData);
});
useDeferredValue

useDeferredValue is great for making text inputs and other similar components stay responsive under load.

const [text, setText] = useState("");
const deferredText = useDeferredValue(text);

<input value={text} onChange={(e) => setText(e.target.value)} />

// deferredText updates only when the system is less busy
<ExpensiveComponent text={deferredText} />
When filtering a large list based on user input, you can use useDeferredValue to ensure that user input stays responsive, while the filtered list can lag a little behind.

const [searchTerm, setSearchTerm] = useState("");
const deferredSearchTerm = useDeferredValue(searchTerm);

<input value={searchTerm} onChange={(e) => setSearchTerm(e.target.value)} />

// FilteredList updates only when the system is less busy
<FilteredList items={items} searchTerm={deferredSearchTerm} />
useTransition and useDeferredValue are powerful tools that can significantly improve the user experience in asynchronous operations when used properly. While useTransition focuses on providing an improved experience for slow state updates, useDeferredValue keeps the UI responsive under rapid state changes. Understanding when and how to use these hooks effectively is a crucial part of mastering asynchronous programming in React.
This chapter focused on a deep exploration of Asynchronous React, touching on multiple aspects including the Fiber Reconciler, scheduling, deferring updates, Render Lanes, and new hooks such as useTransition and useDeferredValue.
We began by discussing the Fiber Reconciler, the heart of React’s concurrent rendering engine. It’s the algorithm behind React’s ability to break work into smaller chunks and manage execution priority, allowing React to be “interruptible” and support concurrent rendering. This contributes significantly to React’s ability to handle complex, high-performance applications smoothly, ensuring user interactions remain responsive even during heavy computation.
Then we moved on to the concept of scheduling and deferring updates, which essentially allows React to prioritize certain state updates over others. React can defer lower priority updates in favor of higher ones, thus maintaining a smooth user experience even under heavy load. An example was illustrated with a chat application where incoming message updates were intelligently scheduled and rendered without blocking the user interface.
We then moved on to Render Lanes, a central concept in React’s concurrent features. Render Lanes are a mechanism that React uses to assign priority to updates and effectively manage the execution of these updates. It’s the secret behind how React decides which updates are urgent and need to be processed immediately and which ones can be deferred until later. We even touched on how these Render Lanes use bitmasking to efficiently handle multiple priorities.
We then delved into the new hooks introduced for asynchronous operations in React, useTransition and useDeferredValue. These hooks are designed to handle transitions and provide smoother user experiences, particularly for operations that take a considerable amount of time.
The useTransition hook was first discussed, which allows React to transition between states in a way that ensures a responsive user interface even if the new state takes a while to prepare. In other words, it allows for delaying an update to the next render cycle if the component is currently rendering.
We also discussed the useDeferredValue hook, which defers the update of less critical parts of a component, thus preventing janky user experience. It essentially allows React to “hold on” to the previous value for a little longer if the new value is taking too much time.
The discussion also covered when and how to use these hooks, including detailed code examples and potential pitfalls. We discussed the scenarios when one should avoid using these hooks and the trade-offs involved. This included a comparative analysis of useTransition vs. useDeferredValue, discussing their usage cases and how they work under the hood.
Throughout the chapter, the recurring theme was understanding the ‘what’ and ‘why’ behind React’s strategies for managing complex, dynamic applications with heavy computation and how developers can utilize these strategies to deliver a smooth, responsive user experience.
What is the Fiber Reconciler in React and how does it contribute to the handling of complex, high-performance applications?
Explain the concept of scheduling and deferring updates in React. How does it help in maintaining a smooth user experience even under heavy load?
What are Render Lanes in React and how do they manage the execution of updates? Can you describe how Render Lanes use bitmasking to handle multiple priorities?
What is the purpose of the useTransition and useDeferredValue hooks in React? Describe a situation where each of these hooks would be beneficial.
When might it be inappropriate to use useTransition and useDeferredValue? What are some of the trade-offs involved with using these hooks?
Now that you have a deep understanding of the asynchronous nature of React and its inner workings, you are well-equipped to harness its full potential in building high-performance applications. In the next chapter, we will explore various popular frameworks built on top of React, such as Next.js and Remix, which further streamline the development process by providing best practices, conventions, and additional features.
These frameworks are designed to help you build complex applications with ease, taking care of many common concerns, such as server rendering, routing, and code splitting. By leveraging the power of these frameworks, you can focus on building your application’s features and functionality while ensuring optimal performance and user experience.
Stay tuned for an in-depth exploration of these powerful frameworks and learn how to build scalable, performant, and feature-rich applications using React and its ecosystem.
In our journey through React thus far, we have uncovered an extensive range of features and principles that contribute to its power and versatility. The previous chapter delved into the fascinating world of Asynchronous React, which empowers us with tools like useTransition and useDeferredValue to create highly responsive and user-friendly interfaces. We explored how these tools utilize the sophisticated scheduling and prioritization mechanisms of React, made possible by the Fiber Reconciler, to achieve optimal performance. The understanding of these asynchronous patterns is critical as we venture into the realm of React frameworks in this chapter.
React by itself is incredibly powerful, but as applications grow in complexity, we often find ourselves repeating similar patterns or needing more streamlined solutions for common challenges. This is where frameworks come in. React frameworks are software libraries or toolkits built on top of React, providing additional abstractions to handle common tasks more efficiently and enforce best practices.
While React provides the building blocks to create interactive user interfaces, it leaves many important architectural decisions up to the developers. React is unopinionated in this regard, giving developers the flexibility to structure their applications in the way they see fit. However, as applications scale, this freedom can turn into a burden. You might find yourself reinventing the wheel, dealing with common challenges such as routing, data fetching, and server-side rendering again and again.
This is where React frameworks come in. They provide a predefined structure and solutions to common problems, allowing developers to focus on what’s unique about their application, rather than getting bogged down with boilerplate code. This can significantly accelerate the development process and improve the quality of the codebase by adhering to best practices enforced by the framework.
To fully understand this, let’s try to write our own minimal framework. In order to do this, we need to identify a few key features that we get from frameworks that we do not get as easily from plain React. For the sake of brevity, we’ll identify three key features that we get from frameworks along these lines:

- Server rendering
- File-system based routing
- Data fetching ahead of render
Let’s take a pre-existing imaginary React application, and incrementally add these features to understand what frameworks do for us. The React app we’re “framework-ifying” has the following structure:
- index.js
- List.js
- Detail.js
- dist/
  - clientBundle.js
Here’s what each file looks like:
// index.js
import React from "react";
import { createRoot } from "react-dom/client";
import { List } from "./List";
import { Detail } from "./Detail";

const root = createRoot(document);
const params = new URLSearchParams(window.location.search);
const thingId = params.get("id");

root.render(
  window.location.pathname === "/" ? <List /> : <Detail thingId={thingId} />
);

// List.js
import { useState, useEffect } from "react";

export const List = () => {
  const [things, setThings] = useState([]);
  const [requestState, setRequestState] = useState("initial");
  const [error, setError] = useState(null);

  useEffect(() => {
    setRequestState("loading");
    fetch("https://api.com/get-list")
      .then((r) => r.json())
      .then(setThings)
      .then(() => {
        setRequestState("success");
      })
      .catch((e) => {
        setRequestState("error");
        setError(e);
      });
  }, []);

  return (
    <div>
      <ul>
        {things.map((thing) => (
          <li key={thing.id}>{thing.label}</li>
        ))}
      </ul>
    </div>
  );
};

// Detail.js
import { useState, useEffect } from "react";

export const Detail = ({ thingId }) => {
  const [thing, setThing] = useState(null);
  const [requestState, setRequestState] = useState("initial");
  const [error, setError] = useState(null);

  useEffect(() => {
    setRequestState("loading");
    fetch("https://api.com/get-thing/" + thingId)
      .then((r) => r.json())
      .then(setThing)
      .then(() => {
        setRequestState("success");
      })
      .catch((e) => {
        setRequestState("error");
        setError(e);
      });
  }, [thingId]);

  return (
    <div>
      <h1>The thing!</h1>
      {thing?.label}
    </div>
  );
};
There are a few issues with this that affect all client-only rendered React applications:
We ship an empty page to the user along with code that must be downloaded, parsed, and executed before anything useful appears. The user sees a blank page until the JavaScript kicks in, and only then do they get our app. If the user is a search engine whose crawler does not support JavaScript, it sees nothing, and the search engine will not index our website.
We start fetching data too late. Our app falls prey to a user-experience curse called network waterfalls: a phenomenon that occurs when network requests happen in succession and slow down applications. Our application has to make multiple requests to a server for basic functionality. For instance, it executes like so:
1. The browser downloads an (empty) HTML page.
2. The browser downloads, parses, and executes our JavaScript bundle.
3. React renders our components.
4. useEffect starts fetching data.
5. useEffect finishes fetching data, and the app re-renders with it.

Our router is purely client-based. If a browser requests https://our-app.com/detail?thingId=24, the server responds with a 404 page because there is no such file on the server. A common hack used to remedy this is to serve an HTML file whenever a 404 is encountered that loads our JavaScript and lets the client-side router take over. This hack doesn’t work for search engines or environments where JavaScript support is limited.
Frameworks help resolve all these issues and more. Let’s explore how exactly they do this.
To start with, frameworks usually give us server rendering out of the box. To add server rendering to this application, we need a server. We can write one ourselves using a package like Express.js. We would then deploy this server and we’re in business. Let’s explore the code that would power such a server.
Before we do, please be aware that we’re using renderToString merely for simplicity and to illustrate the underlying mechanisms behind how frameworks implement these features. In a real production use-case, it’s almost always better to rely on more powerful asynchronous APIs for server rendering like renderToPipeableStream as covered in Chapter 6.
With that out of the way, let’s do this.
// ./server.js
import express from "express";
import { renderToString } from "react-dom/server"; // We covered this in Chapter 6
import { List } from "./List";
import { Detail } from "./Detail";

const app = express();

app.use(express.static("./dist")); // Get static files like client JavaScript, etc.

const createLayout = (children) => `<html lang="en">
  <head>
    <title>My page</title>
  </head>
  <body>
    ${children}
    <script src="/clientBundle.js"></script>
  </body>
</html>`;

app.get("/", (req, res) => {
  res.setHeader("Content-Type", "text/html");
  res.end(createLayout(renderToString(<List />)));
});

app.get("/detail", (req, res) => {
  res.setHeader("Content-Type", "text/html");
  res.end(
    createLayout(renderToString(<Detail thingId={req.query.thingId} />))
  );
});

app.listen(3000, () => {
  console.info("App is listening!");
});
This code is all we need to add server rendering to our application. Notice how index.js on the client-side has its own client router, and how we essentially just added another one for the server. Frameworks ship isomorphic routers, which are routers that work both on the client and on the server.
While this server is okay, it doesn’t scale well: for each route we add, we’ll have to manually add more app.get calls. Let’s make this a little more scalable. To do this, we’ll need to create file-system based routing. This is where the conventions and opinions of frameworks like Next.js come in (more on that later). When we enforce a convention such that all pages must go in a ./pages directory and all filenames in this directory become router paths, then our server can rely on the convention as an assumption and become more scalable.
Let’s illustrate this with an example. First, we’ll augment our directory structure. The new directory structure looks like this:
- index.js
- pages/
  - list.js
  - detail.js
- dist/
  - clientBundle.js
Now, we can assume that everything in pages becomes a route. Let’s update our server to match this.
// ./server.js
import express from "express";
import { join } from "path";
import { renderToString } from "react-dom/server"; // We covered this in Chapter 6

const app = express();

app.use(express.static("./dist")); // Get static files like client JavaScript, etc.

const createLayout = (children) => `<html lang="en">
  <head>
    <title>My page</title>
  </head>
  <body>
    ${children}
    <script src="/clientBundle.js"></script>
  </body>
</html>`;

app.get("/:route", async (req, res) => {
  // Import the route's component from the pages directory
  const exportedStuff = await import(
    join(process.cwd(), "pages", req.params.route)
  );

  // We can no longer have named exports because we need predictability,
  // so we opt for default exports instead.
  const Page = exportedStuff.default;

  // We can infer props from the query string, maybe?
  const props = req.query;

  res.setHeader("Content-Type", "text/html");
  res.end(createLayout(renderToString(<Page {...props} />)));
});

app.listen(3000, () => {
  console.info("App is listening!");
});
Now, our server scales far better because of the new ./pages directory convention we’ve adopted! Great! However, we are now forced to have each page’s component be a default export since our approach is more general and there would otherwise be no way to predict what name to import. This is one of the tradeoffs of working with frameworks. In this case, the tradeoff seems to be worth it.
Great! We’re 2 for 3. We’ve got server rendering and file-system based routing, but we’re still suffering from network waterfalls. Let’s fix the data fetching story. To start with, we’ll update our components to receive initial data through props. For simplicity, we’ll deal with just the List component and leave the Detail component for you to do as homework.
// ./pages/list.jsx
// Note the default export for file-system based routing.
import { useState, useEffect } from "react";

export default function List({ initialThings } /* <- adding initial prop */) {
  const [things, setThings] = useState(initialThings);
  const [requestState, setRequestState] = useState("initial");
  const [error, setError] = useState(null);

  // This can still run to fetch data if we need it to.
  useEffect(() => {
    if (initialThings) return;
    setRequestState("loading");
    fetch("https://api.com/get-list")
      .then((r) => r.json())
      .then(setThings)
      .then(() => {
        setRequestState("success");
      })
      .catch((e) => {
        setRequestState("error");
        setError(e);
      });
  }, [initialThings]);

  return (
    <div>
      <ul>
        {things.map((thing) => (
          <li key={thing.id}>{thing.label}</li>
        ))}
      </ul>
    </div>
  );
}
Great. Now that we’ve added an initial prop, we need some way to fetch the data this page needs on the server, and then pass it to the component before rendering it. Let’s explore how we can do that. What we want to do is ideally this:
// ./server.js
import express from "express";
import { join } from "path";
import { renderToString } from "react-dom/server"; // We covered this in Chapter 6

const app = express();

app.use(express.static("./dist")); // Get static files like client JavaScript, etc.

const createLayout = (children) => `<html lang="en">
  <head>
    <title>My page</title>
  </head>
  <body>
    ${children}
    <script src="/clientBundle.js"></script>
  </body>
</html>`;

app.get("/:route", async (req, res) => {
  const exportedStuff = await import(
    join(process.cwd(), "pages", req.params.route)
  );
  const Page = exportedStuff.default;

  // Get the component's data, if the page exports a fetcher
  const data = exportedStuff.getData
    ? await exportedStuff.getData()
    : { props: {} };

  const props = req.query;

  res.setHeader("Content-Type", "text/html");

  // Pass props and data
  res.end(createLayout(renderToString(<Page {...props} {...data.props} />)));
});

app.listen(3000, () => {
  console.info("App is listening!");
});
This means we’ll need to export a fetcher function called getData from any page components that need data! Let’s adjust the list to do this:
// ./pages/list.jsx
import { useState, useEffect } from "react";

// We'll call this on the server and pass these props to the component
export const getData = async () => {
  return {
    props: {
      initialThings: await fetch("https://api.com/get-list").then((r) =>
        r.json()
      ),
    },
  };
};

export default function List({ initialThings } /* <- adding initial prop */) {
  const [things, setThings] = useState(initialThings);
  const [requestState, setRequestState] = useState("initial");
  const [error, setError] = useState(null);

  // This can still run to fetch data if we need it to.
  useEffect(() => {
    if (initialThings) return;
    setRequestState("loading");
    getData()
      .then((data) => setThings(data.props.initialThings))
      .then(() => {
        setRequestState("success");
      })
      .catch((e) => {
        setRequestState("error");
        setError(e);
      });
  }, [initialThings]);

  return (
    <div>
      <ul>
        {things.map((thing) => (
          <li key={thing.id}>{thing.label}</li>
        ))}
      </ul>
    </div>
  );
}
Done! Now we’re:
We’ve successfully added and understood the 3 features we’ve identified from various frameworks and implemented a basic version of them. By doing this, we’ve learned and now understand the underlying mechanism by which frameworks do what they do. Now that we understand the benefits of using a framework at the code level, and the reasons for some of their conventions, let’s explore these benefits at a higher level.
Structure and Consistency: React frameworks enforce a certain structure and pattern to organize the codebase. This leads to consistency, making it easier for new developers to understand the flow of the application.
Best Practices: Frameworks often come with baked-in best practices that developers are encouraged to follow. This can lead to better code quality and fewer bugs.
Abstractions: React frameworks provide higher-level abstractions to handle common tasks such as routing, data fetching, server rendering, and more. This can make your code cleaner, more readable, and easier to maintain, while leaning on the broader community to ensure the quality of these abstractions.
Performance Optimizations: Many frameworks come with out-of-the-box optimizations such as code splitting, server-side rendering, and static site generation. These can significantly improve the performance of your application.
Community and Ecosystem: Popular frameworks have a large community and a rich ecosystem of plugins and libraries. This means you can often find a solution or get help quickly if you run into a problem.
While frameworks come with many advantages, they are not without their tradeoffs. Understanding these can help you make an informed decision about whether to use a framework and which one to choose.
Learning Curve: Every framework comes with its own set of concepts, APIs, and conventions that you need to learn. If you’re new to React, trying to learn a framework at the same time can be overwhelming.
Flexibility vs. Convention: While the enforced structure and conventions of a framework can be a boon, they can also be constraining. If your application has unique requirements that don’t fit into the framework’s model, you might find yourself fighting against the framework rather than being helped by it.
Dependency and Commitment: Choosing a framework is a commitment. You’re tying your application to the fate of the framework. If the framework stops being maintained or if it takes a direction that doesn’t align with your needs, you may face difficult decisions about whether to undertake a costly migration to a different framework or to maintain the existing framework code yourself.
Abstraction Overhead: While abstractions can simplify development by hiding complexity, they can also create “magic” that makes it difficult to understand what’s happening under the hood. This can make debugging and performance tuning challenging. Furthermore, every abstraction comes with some overhead, which might have an impact on performance.
Size and Speed: Frameworks typically include code for a wide range of features, some of which you may not use in your application. This can lead to larger bundle sizes, which may affect the loading speed of your application, especially for users with slow internet connections. Some frameworks address this issue with techniques like tree-shaking or code splitting, but it’s still something to be aware of.
Now that we understand why we might want to use a React framework, and the benefits and tradeoffs involved, we can delve into specific frameworks in the React ecosystem. In the upcoming sections of this chapter, we’ll explore some of the popular choices such as Next.js and Remix. Each of these frameworks offers unique features and advantages, and understanding them will equip you with the knowledge to choose the right tool for your specific needs.
Let’s explore some of the popular React frameworks and discuss their features, advantages, and trade-offs. We’ll start with a brief overview of each framework, followed by a detailed comparison of their features and performance. We’ll also discuss some of the factors to consider when choosing a framework for your project.
Remix is a powerful modern web framework that leverages React and the features of the web platform. Let’s get started with some practical examples to understand how it works.
First, we’ll set up a basic Remix application. You can install Remix using npm:
npx create-remix@latest
This will create a new Remix project in your current directory.
A typical Remix application consists of several components. Let’s take a look at each of them.
The root component (usually located in root.tsx) is the entry point of your application. It’s where you’ll typically set up global providers and context for your app.
// root.tsx
import { Links, LiveReload, Meta, Outlet, Scripts } from "@remix-run/react";

export default function App() {
  return (
    <html lang="en">
      <head>
        <Meta />
        <Links />
      </head>
      <body>
        <Outlet />
        <Scripts />
        <LiveReload />
      </body>
    </html>
  );
}

In the above example, Meta, Links, and Scripts are built-in components provided by Remix that render the document’s metadata, stylesheet links, and script tags, while Outlet renders whichever route matches the current URL.
In Remix, each route is represented by a file in the routes directory. Each file exports a React component and optionally a loader function.
For example, here is a simple route that displays a list of posts:
// routes/posts.js
import { useLoaderData } from "@remix-run/react";

export function loader() {
  return fetch("/api/posts").then((res) => res.json());
}

export default function Posts() {
  const posts = useLoaderData();

  return (
    <div>
      <h1>Posts</h1>
      <ul>
        {posts.map((post) => (
          <li key={post.id}>{post.title}</li>
        ))}
      </ul>
    </div>
  );
}
In the above code, the loader function fetches the list of posts from an API. The returned data is then available to the component through the useLoaderData hook.
Remix uses a “loader” function associated with each route to load the necessary data. This function runs on the server by default, but can also run on the client for client-side transitions.
Here is an example of how error handling works in Remix:
export function loader() {
  return fetch("/api/posts").then((res) => {
    if (!res.ok) {
      throw new Error("Failed to fetch posts");
    }
    return res.json();
  });
}

// Rendered in place of the route component when the loader throws
export function ErrorBoundary({ error }) {
  return <div>Error: {error.message}</div>;
}

export default function Posts() {
  const posts = useLoaderData();

  return (
    // ...
  );
}

In this example, the loader function throws an error if the fetch request fails. Remix catches the error and renders the route’s exported ErrorBoundary component instead of the route, passing the error to it as a prop.
Remix provides robust support for HTML forms and server-side actions. Here’s an example of how to handle a form submission in Remix:
// routes/posts/new.js
import { redirect } from "@remix-run/node";

export async function action({ request }) {
  // Get the form data from the request
  const formData = new URLSearchParams(await request.text());

  // Perform the server-side action (e.g., creating a new post)
  // ...

  return redirect("/posts");
}

export default function NewPost() {
  return (
    <form method="post">
      <label>
        Title
        <input name="title" />
      </label>
      <button type="submit">Create Post</button>
    </form>
  );
}
In the example we just discussed, the action function handles the form submission. It reads the form data from the request, performs the server-side action (in this case, presumably creating a new post), and then redirects the user to the /posts page.
action functions in Remix can also take part in more complex scenarios, such as optimistic updates, where the UI is updated before the server responds. In Remix, optimistic UI is typically built by reading the in-flight form submission on the client and rendering the expected result immediately while the action completes on the server. A related detail is the status code the action redirects with once it finishes:

// routes/posts/new.js
export async function action({ request }) {
  const formData = new URLSearchParams(await request.text());

  // Perform the server-side action
  // ...

  // Redirect with an explicit 303 ("See Other") status
  return redirect("/posts", { status: 303 });
}

In this example, the action redirects with a 303 status code. This is the standard way to redirect after a form POST: it tells the browser to follow the redirect with a GET request for the /posts page rather than re-submitting the form.
Nested routing is another powerful feature provided by Remix. It allows you to compose your UI based on the structure of your routes, leading to more modular and maintainable code.
Here’s an example:
// routes/posts.js
import { Outlet } from "@remix-run/react";

export default function Posts() {
  // ...
  return (
    <div>
      {/* Render the list of posts */}
      {/* ... */}

      {/* Render the nested route */}
      <Outlet />
    </div>
  );
}
// routes/posts/$postId.js
import { useLoaderData, useParams } from "@remix-run/react";

export function loader({ params }) {
  return fetch(`/api/posts/${params.postId}`).then((res) => res.json());
}

export default function Post() {
  const post = useLoaderData();
  const { postId } = useParams();

  return (
    <div>
      <h1>{post.title}</h1>
      <p>{post.content}</p>
    </div>
  );
}
In this example, the posts.js route renders an Outlet, which is where the nested route ($postId.js) will be rendered. The $postId.js route uses the useParams hook to get the postId from the URL, and its loader function fetches the data for the individual post.
This nested routing structure maps nicely to the URL structure (/posts/:postId), making the routing logic easy to understand and maintain.
Remix provides out-of-the-box support for page transitions, helping you to create smooth, app-like experiences.
Here’s an example:
import { Link, useLoaderData } from "@remix-run/react";

export default function Posts() {
  const posts = useLoaderData();
  return (
    <div>
      <h1>Posts</h1>
      <ul>
        {posts.map((post) => (
          <li key={post.id}>
            <Link to={`/posts/${post.id}`} className="post-link">
              {post.title}
            </Link>
          </li>
        ))}
      </ul>
    </div>
  );
}
/* styles.css */
.post-link {
  transition: color 0.3s;
}

.post-link[data-link-active] {
  color: red;
}
In this example, the data-link-active attribute is used to change the color of the active link during a transition, providing a visual cue to the user.
Moreover, Link components also have a data-link-pending attribute that’s added when the link’s destination page is being preloaded. You could use this attribute to, for example, display a spinner on the link during page transitions:
/* styles.css */
.post-link[data-link-pending]::after {
  content: " (Loading...)";
}
Let’s imagine a scenario where you want certain routes in your application to be accessible only by authenticated users. Remix can handle this requirement with the help of loader functions and redirects.
Here’s an example of how you can implement this:
// routes/dashboard.js
import { redirect } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";

export function loader({ request }) {
  // Check if the user is authenticated
  if (!request.session.has("user")) {
    return redirect("/login", { status: 401 });
  }
  // Fetch the data for the dashboard
  // ...
}

export default function Dashboard() {
  const data = useLoaderData();
  // The data could be an error if the user is not authenticated
  if (data instanceof Error) {
    return <div>Error: {data.message}</div>;
  }
  // Render the dashboard
  // ...
}
In this example, the loader function checks if the user is authenticated by looking for a ‘user’ key in the session. If the user is not authenticated, it redirects to the /login page. The Dashboard component then checks if data is an instance of Error and renders an error message if it is.
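The guard logic in that loader can be distilled into a few lines of plain JavaScript. The sketch below is hypothetical and framework-free; it only illustrates the branch between redirecting and loading data:

```javascript
// Illustrative sketch of the route-guard logic: a loader-like function
// that redirects unauthenticated requests and loads data otherwise.
// Plain JavaScript, not Remix itself.
function guardedLoader(session, loadData) {
  if (!session.has("user")) {
    return { redirect: "/login" }; // unauthenticated: bounce to login
  }
  return { data: loadData() }; // authenticated: load the dashboard data
}

// A Map stands in for a session object here, since it also has .has():
const anonymous = new Map();
const signedIn = new Map([["user", { id: 1 }]]);

console.log(guardedLoader(anonymous, () => "stats")); // { redirect: '/login' }
console.log(guardedLoader(signedIn, () => "stats")); // { data: 'stats' }
```

The important property is that the check happens on the server, before any dashboard data is fetched, so unauthenticated clients never receive protected data.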
That covers some of the concepts in and around Remix with practical code snippets and examples. It’s clear that Remix offers an extensive feature set designed to help developers build high-quality web applications. However, as with any technology, it’s important to understand its core concepts and principles in order to use it effectively.
In the next section, we will discuss the core concepts of Next.js as an alternative and how they can help you build better web applications.
Next.js, a popular React framework by Vercel, is well-known for its rich features and simplicity in creating server-side rendered (SSR) and static websites. It follows the convention over configuration principle, reducing the amount of boilerplate and decision-making necessary to start a project. With the release of Next.js 13, a significant addition has been the introduction of the Next.js App Router. This provides a top-level component that enables exciting capabilities that we will discuss here.
In Next.js, routes are determined by the file structure in your pages or app directory. Each file in pages corresponds to a route on the client side where the filename reflects the path of the route.
// pages/index.jsx
export default function Home() {
  return <div>Welcome to Home Page!</div>;
}
In this example, the index.jsx file within the pages directory corresponds to the / route. If we navigate to localhost:3000/ in the browser, we’ll see “Welcome to Home Page!” rendered on the screen. As of Next.js 13, there’s an alternate router that can be used to take advantage of React Server Components, called the Next.js App Router. This router is opt-in. To use it, we create a similar file, but in the ./app directory. To create an index route, we would create a file called page.jsx in the ./app directory.
// app/page.jsx
export default function Home() {
  return <div>Welcome to Home Page!</div>;
}
Notice how under the pages routing paradigm, the filename determines the path. This is not the case with the app directory. Instead, the app directory has strong opinions about filenames, and paths are determined by directory names. Here’s an example file structure:
- (root)
  - ./app
    - page.jsx
    - layout.jsx
    - error.jsx
    - loading.jsx
    - ./list
      - page.jsx
      - layout.jsx
      - error.jsx
      - loading.jsx
    - ./detail/[id]
      - page.jsx
      - layout.jsx
      - error.jsx
      - loading.jsx
Notice how the directories list, detail, and the root all contain files with the same names? This is because the app directory has strong opinions about naming files according to their roles. For example, page.jsx is the default route, error.jsx is the error route, and layout.jsx is the layout route. This is a convention over configuration approach that Next.js takes to make it easier for developers to get started with the framework. The layout route provides structure to the page that is expected to be present on every page. The error route is used to display errors that occur during the rendering of a page.
The name of the directories themselves become the pathname to the router. This means that if we wanted to create a page on http://localhost:3000/i-love-next, we would create a file called ./app/i-love-next/page.jsx. This file’s default export would be a React component that would be rendered when the user navigates to http://localhost:3000/i-love-next.
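That directory-to-pathname convention can be sketched as a small mapping function. This is illustrative only, not Next.js's internal routing code:

```javascript
// Illustrative: derive the URL pathname from an app-directory file path.
// page.jsx marks a route; the directories above it form the path segments.
// This mimics the convention, not Next.js's actual router internals.
function appFileToPathname(filePath) {
  const segments = filePath
    .replace(/^\.\/app\/?/, "") // strip the ./app prefix
    .split("/")
    .filter((s) => s && s !== "page.jsx"); // drop the page.jsx filename
  return "/" + segments.join("/");
}

console.log(appFileToPathname("./app/page.jsx")); // "/"
console.log(appFileToPathname("./app/i-love-next/page.jsx")); // "/i-love-next"
```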
Dynamic routing is used when the exact route is not known beforehand. In Next.js, you can handle dynamic routes by using square brackets [] to wrap your file name.
// pages/posts/[id].js
export default function Post({ post }) {
  return (
    <div>
      <h1>{post.title}</h1>
      <p>{post.content}</p>
    </div>
  );
}
In the example above, [id].js will match any route in the form of /posts/<anything>. The <anything> part can be accessed in your component using the useRouter hook provided by Next.js.
import { useRouter } from "next/router";

// In your component
const router = useRouter();
const { id } = router.query;
Next.js provides API routes out of the box, which essentially gives you a backend without needing a separate server. You can create API routes as easily as you create pages. Any file inside the ./pages/api directory becomes an API route.
Here’s an example of an API route that returns the current date:
// pages/api/date.js
export default function handler(req, res) {
  res.status(200).json({ date: new Date() });
}
When you navigate to /api/date, you will receive a JSON response with the current date. API routes essentially become serverless functions that you can use to fetch data from a database or perform other operations. You can also use API routes to create a REST API for your application. For example, you can create an API route that returns a list of posts from a database.
// pages/api/posts.js
export default async function handler(req, res) {
  const posts = await db.getPosts(); // db is your database client
  res.status(200).json(posts);
}
getServerSideProps and getStaticProps are special functions that you can export from a Next.js page. They run on the server side and are used to fetch data for your component.
getServerSideProps runs every time a request is made, making it ideal for data that changes frequently. It is a serverful response to a client. On the other hand, getStaticProps runs only at build time, so it’s perfect for static sites where the data doesn’t change often.
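The timing difference between the two can be illustrated with a toy model: build-time work runs once, request-time work runs on every request. This is a conceptual sketch, not actual Next.js code:

```javascript
// Conceptual sketch: getStaticProps-style work runs once (at "build" time),
// while getServerSideProps-style work runs on every request.
let buildTimeCalls = 0;
let requestTimeCalls = 0;

// Simulated data fetchers
const fetchAtBuildTime = () => { buildTimeCalls++; return { posts: ["a", "b"] }; };
const fetchPerRequest = () => { requestTimeCalls++; return { now: Date.now() }; };

// "Build step": static props are computed a single time and reused
const staticProps = fetchAtBuildTime();

// Simulate three incoming requests
for (let i = 0; i < 3; i++) {
  const ssrProps = fetchPerRequest(); // recomputed on every request
  const pageProps = { ...staticProps, ...ssrProps }; // props handed to the component
}

console.log(buildTimeCalls); // 1
console.log(requestTimeCalls); // 3
```

This is why getStaticProps suits data that rarely changes, while getServerSideProps suits data that must be fresh on every request.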
// pages/posts.js
export async function getServerSideProps() {
  const res = await fetch("https://.../posts");
  const posts = await res.json();
  return {
    props: {
      posts,
    },
  };
}

export default function Posts({ posts }) {
  // Render posts...
}
In the example above, getServerSideProps fetches a list of posts from a server and passes them as props to the Posts component.
Next.js is a powerful framework that simplifies many aspects of building React applications. With the addition of features like the App Router and Middleware in Next.js 13, it becomes an even more compelling choice for building modern web applications.
The getServerSideProps and getStaticProps functions are only available when working with the pages directory. For the new app router, all components are server components by default: meaning any component in there can just be async and await data as it executes on the server unless explicitly marked with a "use client" directive.
Next.js 13 introduced support for React Server Components. This innovative feature aims to improve performance by allowing components to be rendered on the server and sent to the client, thus reducing the size of JavaScript sent to the client. Server Components in Next.js combine the performance benefits of traditional server-side rendering with the developer experience of building with React. We will cover server components in depth in the next chapter.
Deciding which React framework to use for your project can be a challenging decision, as each of these frameworks offers a distinct set of features, advantages, and trade-offs. In this section, we will attempt to provide some insight into what makes popular React frameworks a viable option for developers today, and discuss factors such as learning curve, flexibility, and performance, which can guide you in choosing the most suitable framework for your specific needs.
It’s worth noting that one framework is not inherently better or worse than another. Each framework has its own set of strengths and weaknesses, and the best framework for your project will depend on your specific requirements and preferences.
Before we dive into the details of each framework, it’s important to understand your project’s specific needs. Here are some critical questions to consider: How large and complex is the project? What are its data requirements? Who is the target audience? How familiar is your team with each tool? What are your performance constraints?
Understanding the answers to these questions will give you a clearer picture of what you need from a framework.
Next.js is a versatile framework developed by Vercel that supports both SSR and SSG. It also provides first-class support for React Server Components and a streamlined development experience through features such as automatic routing based on your project’s file structure, hot module replacement, and automatic code splitting.
Learning Curve: Next.js maintains a balance between providing a rich feature set and having a manageable learning curve. If you’re already familiar with React, you should be able to get started with Next.js relatively quickly.
Flexibility: Next.js is designed with flexibility in mind. It offers several methods for fetching data (getServerSideProps, getStaticProps, getInitialProps), allowing you to choose the one that best fits your use case. You can use it as a full-stack framework, communicating with a database directly, or you can use it as a frontend framework, communicating with an API.
Performance: Next.js automatically optimizes your application for the best performance by implementing features such as automatic code splitting, prefetching pages, and more.
It is also worth noting that several members of the team that builds React itself work at Vercel, where Next.js is developed. This tight feedback loop between the React and Next.js teams makes Next.js a particularly compelling choice of React framework.
Remix is a relatively new entry to the React framework scene compared to Next.js, which was first released several years earlier. It’s built by the creators of React Router and emphasizes web fundamentals, making fewer assumptions and providing a lot of flexibility.
Learning Curve: Remix might have a slightly steeper learning curve because it introduces some new concepts and requires a good understanding of web fundamentals. However, its detailed documentation and growing community support can assist your learning journey.
Flexibility: Remix provides a great deal of flexibility, especially when it comes to fetching and managing data. It leverages native web constructs like fetch and <link rel="preload"> to handle data, giving you complete control over how and when your data is fetched and rendered.
Performance: Remix’s unique approach to routing and data loading makes it efficient and performant. Since data fetching is tied to routes, only the necessary data for a specific route is fetched, reducing the overall data requirement. Plus, its optimistic UI updates and progressive enhancement strategies improve the user experience.
Choosing a framework does not come without trade-offs. Let’s discuss some of the considerations:
Learning Curve: Each of these frameworks introduces its own concepts and abstractions on top of React, and learning these can take time and effort. We haven’t yet discussed a framework called Gatsby as its status is somewhat in flux at the time of writing this, though the learning curve is more noticeable with Gatsby in its current state as it relies on GraphQL.
Flexibility vs. Convention: While Next.js and Remix offer a lot of flexibility and fewer conventions, Gatsby is more opinionated. This can be a boon if you’re looking for a streamlined, predetermined way of doing things. It can, however, feel restrictive if you prefer having the freedom to customize your project’s configuration.
Performance: While all these frameworks make efforts to optimize performance, the static nature of Gatsby can lead to faster initial load times. However, this comes at the cost of longer build times for larger, content-heavy sites.
Community and Ecosystem: Next.js and Gatsby have been around for a while and have large communities and rich ecosystems with plenty of plugins and resources. Remix, being newer, has a smaller community, but it’s growing rapidly.
Pricing: Next.js, Gatsby, and Remix are all open source and free to use. (Remix initially launched under a commercial license, but it has been open source under the MIT license since its 1.0 release in late 2021.)
So, how do you choose the right framework? It all comes down to your project needs and personal preference. If you need a flexible, full-stack framework, Next.js might be a better fit. If you’re building a static, content-heavy site and don’t mind the learning curve of GraphQL, Gatsby could be the way to go. And if you prefer a more hands-on approach to fetching and rendering data with a strong adherence to web fundamentals, Remix could be your best bet.
In any case, it’s a good idea to try each of them out for a smaller project or a part of your application. This will give you a better understanding of how they work and which one feels most comfortable to work with.
The Developer Experience (DX), build performance, and runtime performance are crucial factors to consider when choosing a React framework for a large-scale, complex project. Both Next.js and Remix offer excellent experiences in these areas, though there are some differences worth noting.
Developer experience is a combination of several factors including the quality of documentation, the ease of setting up and configuring the project, the learning curve, and the debugging and testing support.
Next.js is known for its excellent developer experience. The framework has robust and comprehensive documentation, with clear guides for various features like dynamic routing, API routes, data fetching, and more. This makes it a great choice for both beginners and experienced developers. The setup is straightforward with sensible defaults, and Next.js has first-class TypeScript support, which can be a big plus for larger projects.
Next.js’s file-based routing system is intuitive, and the framework supports both static generation (SSG) and server-side rendering (SSR) seamlessly. The hybrid approach allows developers to choose the best strategy for each page based on its needs, offering great flexibility. Next.js also offers a rich ecosystem of plugins and integrations, which can significantly speed up the development process. However, the breadth of the ecosystem can sometimes be a double-edged sword, as the quality and maintenance of third-party plugins can vary.
Through the use of standard JavaScript primitives like async/await, Next.js also makes server components far more approachable to React developers.
Remix also provides an excellent developer experience, with a focus on empowering developers to write idiomatic, server-rendered React code. Remix leverages the native fetch API for data loading, providing a universal way to fetch data on both the client and the server. This design is intuitive and helps eliminate the common data-fetching confusion encountered in many React applications.
One of Remix’s unique features is its built-in routing system that mimics the browser’s behavior. This design encourages developers to embrace the web platform, leading to applications that are more resilient and accessible. The routing system, combined with suspense on the server, can also lead to performance improvements.
Build performance becomes increasingly critical as a project grows in complexity and size. Both Next.js and Remix have made optimizations to improve the build time.
Next.js uses static generation by default, which means pages are pre-rendered at build time. This can lead to faster page loads, but also longer build times, especially for sites with a large number of pages.
To address this, Next.js introduced Incremental Static Regeneration (ISR), allowing developers to regenerate static pages after they have been built, without a full rebuild. This feature can significantly improve build times for large, dynamic sites.
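The core idea behind ISR (serve the cached page until it is older than a revalidate window, then regenerate it) can be sketched with a simple time-based cache. This is a conceptual illustration, not Next.js internals:

```javascript
// Illustrative sketch of Incremental Static Regeneration's core idea:
// serve a cached render until it is older than `revalidate` seconds,
// then regenerate it. Not Next.js internals, just the concept.
function createIsrCache(renderPage, revalidateSeconds, now = Date.now) {
  let cached = null;
  let renderedAt = -Infinity;
  return function serve() {
    const ageInSeconds = (now() - renderedAt) / 1000;
    if (cached === null || ageInSeconds > revalidateSeconds) {
      cached = renderPage(); // regenerate the "static" page
      renderedAt = now();
    }
    return cached; // always served from cache, never a blocking rebuild
  };
}

// Usage with a fake clock so the behavior is deterministic:
let fakeTime = 0;
let renders = 0;
const serve = createIsrCache(() => `render #${++renders}`, 60, () => fakeTime);

serve(); // first request: renders the page
serve(); // within the 60s window: served from cache
fakeTime = 61_000; // advance the clock past the revalidate window
serve(); // regenerates
console.log(renders); // 2
```

In real Next.js the regeneration happens in the background while stale HTML is still served, but the revalidate-window logic is the same idea.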
Remix, on the other hand, has a unique take on build performance. It opts for a server-first architecture, which means that pages are rendered on demand by the server and the HTML is sent down to the client. This approach avoids pre-rendering pages at build time entirely, leading to fast builds and near-instant deploys.
Both Next.js and Remix are designed with performance in mind and offer several optimizations to deliver fast, responsive applications.
Next.js comes with several built-in performance optimizations. It supports automatic code splitting, which ensures that only the necessary code is loaded for each page. It also has a built-in Image component that optimizes image loading for better performance.
The hybrid SSG/SSR model in Next.js allows developers to choose the optimal data fetching strategy for each page, balancing performance and freshness. Pages that don’t require fresh data can be pre-rendered at build time, resulting in faster page loads. For pages that require fresh data, server-side rendering or Incremental Static Regeneration can be used.
Next.js also provides automatic static optimization for pages without blocking data requirements, ensuring they are served as static HTML files, leading to faster Time to First Byte (TTFB).
Finally, Next.js takes full advantage of React Server Components where possible, allowing it to send less JavaScript to the client, resulting in faster page loads and less client-side overhead.
Remix takes a slightly different approach to performance. Instead of pre-rendering pages, it opts for server rendering, sending down just the HTML that the client needs. This can result in faster TTFB, especially for dynamic content.
One of the key features of Remix is its robust caching strategy. It leverages the browser’s native fetch and cache APIs, allowing developers to specify caching strategies for different resources. This leads to faster page loads and a more resilient application.
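In practice, "specifying caching strategies" usually means returning standard HTTP Cache-Control headers from a loader. The helper below is hypothetical, but the directives it emits (max-age, s-maxage, stale-while-revalidate) are standard HTTP caching semantics:

```javascript
// Hypothetical helper that builds a standard HTTP Cache-Control header
// string of the kind a Remix loader can return in its response headers.
// The directives themselves are standard HTTP caching semantics.
function cacheControl({ maxAge = 0, sMaxAge, staleWhileRevalidate } = {}) {
  const parts = [`max-age=${maxAge}`];
  if (sMaxAge !== undefined) parts.push(`s-maxage=${sMaxAge}`);
  if (staleWhileRevalidate !== undefined) {
    parts.push(`stale-while-revalidate=${staleWhileRevalidate}`);
  }
  return parts.join(", ");
}

// e.g. cache at the CDN for 5 minutes, serve stale for an hour while revalidating:
console.log(cacheControl({ maxAge: 0, sMaxAge: 300, staleWhileRevalidate: 3600 }));
// "max-age=0, s-maxage=300, stale-while-revalidate=3600"
```

Because these are plain HTTP semantics, the same headers work with browsers, CDNs, and any other standards-compliant cache in front of your app.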
Remix also optimizes performance with its unique routing system. By nesting routes, Remix allows you to isolate components that fetch data. If a child component fetches data, it doesn’t block the rendering of the parent component. This leads to a faster render, improving perceived performance.
Both Next.js and Remix offer compelling benefits for large-scale, complex projects. They both excel in developer experience, build performance, and runtime performance. Next.js might be a better choice if you prefer a mature ecosystem with extensive resources and plugins, a hybrid SSG/SSR model, and innovative features like ISR. On the other hand, Remix could be more suitable if you prefer a server-rendered approach with instant deploys, a strong emphasis on embracing web platform features like fetch and cache APIs, and advanced React concepts like Suspense and Server Components.
The most suitable framework for your specific project needs would ultimately depend on your team’s expertise, your project requirements, and your preference for certain architectural patterns. Regardless of the choice, both Next.js and Remix are solid foundations for building high-quality, performant React applications.
Making a decision on which React framework to use, particularly when it comes down to Next.js or Remix, often hinges on the specific needs of your project, and involves a complex interplay of multiple factors. Let’s break down these factors and how they might influence your choice.
Project Size and Complexity
The size and complexity of your project is an important consideration.
Remix is a full-stack framework that’s designed with simplicity and flexibility in mind. It’s designed to encourage proper design patterns and best practices right from the get-go. It makes heavy use of the concept of “nested routing,” which is highly beneficial for complex, multi-level navigation structures.
<Outlet>
  <Link to="profile">Profile</Link>
  <Link to="settings">Settings</Link>
</Outlet>
In the above example, Outlet is a placeholder for the child routes.
Next.js, on the other hand, is more flexible and has a bigger community and ecosystem. It can handle complex applications, but it’s not quite as opinionated about how you should structure your code. It’s worth noting, however, that Next.js has excellent support for server-side rendering, static site generation, and incremental static regeneration, which can be crucial for large projects that need SEO optimization.
Data Requirements
The nature of the data your project works with can also be a determining factor. If your project heavily relies on real-time data or requires the benefits of static site generation with client-side rendering, you may want to opt for Next.js. With its getServerSideProps and getStaticProps functions, Next.js offers a highly flexible and convenient way to fetch and update data.
export async function getServerSideProps(context) {
  const res = await fetch(`https://.../data`);
  const data = await res.json();

  if (!data) {
    return {
      notFound: true,
    };
  }

  return {
    props: { data }, // will be passed to the page component as props
  };
}
Remix, however, leverages the Fetch API for data fetching, aligning closely with how the browser naturally works. This makes it an excellent choice if you’re seeking to build a highly dynamic application with a seamless user experience. It’s also designed to work seamlessly with Suspense, a feature of React that lets you render components “in the future” and show a loading state while you’re waiting.
Target Audience
Your target audience can also significantly impact the choice of framework. If your application is targeting users in low-bandwidth or poor network areas, you may lean towards Next.js due to its static site generation capabilities which allow your site to be fully rendered and served as static HTML. However, Remix’s emphasis on minimal data transfer and efficient cache control could also be beneficial in such scenarios.
Development Team’s Familiarity
The familiarity and comfort of your development team with the framework should not be overlooked. Both Next.js and Remix have different approaches and philosophies. Next.js is backed by Vercel and has been around for a longer time, hence has a larger community and more resources for learning and troubleshooting. Remix, while younger, is designed by the creators of React Router and is highly praised for its well-thought-out design and comprehensive documentation.
Performance
Performance is often a crucial factor, and both frameworks offer strong performance. Next.js has powerful features like Automatic Static Optimization and Incremental Static Regeneration that can significantly enhance your application’s performance. Remix, with its focus on leveraging browser-native features like the Fetch API and React’s Suspense, can also offer excellent performance, especially for highly dynamic applications.
Keeping up with the constantly evolving JavaScript ecosystem, including the many frameworks built around React, can feel like a daunting task. Every year, a number of new tools and libraries are introduced, each with its own set of features, benefits, and trade-offs. As a developer, making an informed decision about the right framework to use for a future project involves more than just a familiarity with the current state of the ecosystem. It also requires a forward-looking understanding of the trajectory of these tools and how they fit within the broader context of web development.
There are several strategies for staying up-to-date and continuously making informed decisions about choosing the right React framework for your future projects:
Following trusted sources: The JavaScript ecosystem evolves at a rapid pace. It’s essential to follow trustworthy sources that provide quality content and regular updates about the latest trends and tools. This could be blogs, YouTube channels, newsletters, podcasts, or online communities. For example, following the official blogs and Twitter accounts of Next.js and Remix could provide insights into their upcoming features, improvements, and overall roadmap.
Joining relevant communities: Online communities such as Reddit, Stack Overflow, GitHub, or various Discord and Slack groups are excellent places to keep an eye on emerging trends and tools. Community members often share their experiences with different frameworks, which can provide a useful perspective when deciding between different tools.
Attending conferences and meetups: Conferences and meetups are great for staying updated on the latest developments and best practices in the JavaScript and React ecosystem. Even if you can’t attend in person, many of these events offer online streaming or record their talks for later viewing.
Experimenting with different frameworks: Nothing beats hands-on experience when it comes to understanding a tool. Allocating some time to build small projects or prototypes with different frameworks can provide invaluable insights. This can help you understand the strengths and weaknesses of each framework and how they fit with your development style and project requirements.
Understanding the principles and philosophies behind the frameworks: Every framework has its principles and philosophies that guide its design and evolution. For instance, Next.js emphasizes a hybrid approach to rendering and aims to provide an excellent developer experience with zero-config setup. On the other hand, Remix focuses on leveraging the web’s fundamentals, such as the browser fetch API for data loading and native HTML forms for user inputs.
Considering the ecosystem and support: The surrounding ecosystem and support can be as important as the framework itself. This includes the availability of plugins or extensions, the quality of documentation, the activity of the community, the responsiveness of maintainers, etc. For instance, Next.js has a large ecosystem with many plugins and integrations, while Remix, being relatively new, is still growing its ecosystem.
Staying mindful of the project’s needs: The needs of your project should always be the foremost consideration. While it’s important to keep up with new developments, the goal is not to use the newest tools for their own sake but to find the tools that best meet your project’s requirements. For example, if your project is heavily data-driven with complex server-side operations, Next.js might be a better fit with its server-side rendering and API routes capabilities. Conversely, if your project benefits from a fine-grained control over loading states and transitions, Remix might be the preferred choice with its unique approach to loading data.
While these strategies can help you stay informed and make better decisions, it’s also important to avoid the trap of “analysis paralysis.” In the end, all frameworks have their strengths and trade-offs, and there’s rarely a single “correct” choice. The most important thing is to choose a tool that fits your project’s needs, aligns with your team’s skills, and then focus on building your application. Rather than chasing after the latest and greatest, it’s essential to maintain a balanced perspective. Experimenting with new tools and approaches is vital for growth and innovation, but so is understanding the value of stability, maintainability, and productivity.
It’s equally crucial to recognize that every project and team is unique. What works for one project might not work for another, and the best choice of framework often depends on a combination of factors such as the project’s scale, complexity, performance requirements, long-term maintainability, and the team’s expertise and preferences.
For instance, a large-scale enterprise application with complex business logic and high performance requirements might benefit from Next.js’s comprehensive feature set, server-side rendering capabilities, and robust ecosystem. However, a small, creative project with a need for fine-grained control over UI transitions might find Remix’s granular loading and transition mechanisms more suitable.
The availability of resources, documentation, and community support are also essential considerations. Next.js, being around for a longer time, boasts a large and vibrant community, extensive documentation, and a wealth of resources for learning and troubleshooting. Remix, while not as mature, has gained a strong reputation for its innovative features, excellent documentation, and the pedigree of its creators (who are among the original creators of React Router).
Also, the choice of framework can significantly influence the developer experience. Next.js is known for its emphasis on providing a seamless developer experience, with features like hot-module replacement, fast refresh, and zero-config setup. Remix, on the other hand, aligns closely with the web’s fundamentals, potentially offering a more straightforward mental model and making it easier to reason about your application’s behavior.
When considering the future, it’s also important to look at the roadmaps of these frameworks. Both Next.js and Remix have shown commitment to adopting upcoming web technologies and standards, contributing back to the ecosystem, and continuously evolving to meet the needs of modern web development.
Finally, while these strategies can provide a guiding framework, they’re not a substitute for personal judgment and experience. It’s always beneficial to take the time to experiment with different tools, engage with their communities, and understand their philosophies and design principles. This first-hand experience can often provide the most valuable insights and guide you towards the right choice for your specific needs and circumstances.
In the end, the decision comes down to a thoughtful consideration of your project’s needs, your team’s capabilities, the trade-offs you’re willing to accept, and the long-term implications for your application’s maintainability and evolution. Whether you choose Next.js, Remix, or another framework entirely, the goal should always be to facilitate the creation of high-quality, performant, and maintainable applications that meet the needs of your users and stakeholders.
Over the course of this chapter, we’ve delved deep into the concept of React frameworks, with an emphasis on Next.js and its innovative Server Components feature introduced in version 13. This chapter allowed us to explore the underlying principles, the reasoning, and the practical implications of using frameworks, and especially how Next.js makes use of Server Components to enhance web application performance.
The discussion began by recapping Async React and its implications for efficient rendering and user interactivity. We then moved on to explore the “why” and the “what” of React frameworks: why they are necessary, what benefits they offer, and what tradeoffs they entail.
React frameworks are often necessary because they provide a structured environment that simplifies the development process. They offer a variety of benefits, including efficient code organization, community support, enhanced development speed, and built-in best practices. However, these benefits come with some tradeoffs. These include a potentially steep learning curve, reduced flexibility due to the opinionated nature of some frameworks, and the risk of overkill for smaller, simpler projects.
Next, we dived into a comparison between different frameworks, focusing primarily on Next.js and Remix. Each framework offers its unique set of features and advantages, and the choice often comes down to the specific needs of the project.
Next.js is a React framework renowned for its robustness and flexibility. It supports both server-side rendering (SSR) and static site generation (SSG), making it a versatile choice for different types of projects. Next.js also offers features such as API routes, automatic code splitting, and built-in CSS and Sass support, among others. With the introduction of Next.js 13, a new experimental feature was added to the mix: Server Components.
Server Components in Next.js bring the power of server-side operations to components, allowing for more efficient rendering and data fetching. They run only on the server and send their rendered output to the client rather than their component code, reducing the JavaScript bundle size that’s shipped to the browser. They have the potential to significantly improve the performance of Next.js applications by enabling direct access to back-end resources and doing away with client-side data fetching logic.
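As an illustrative sketch of this idea, consider the following Server Component, which assumes a Next.js 13 app directory and a hypothetical server-only `getPosts` helper (neither the helper nor its dependencies would be part of the client bundle):

```typescript
// app/posts/page.tsx — in the Next.js 13 app directory, components are
// Server Components by default. `getPosts` is a hypothetical server-only
// data helper, e.g. one that queries a database directly.
import { getPosts } from "./data"; // assumed to return Promise<{ id: string; title: string }[]>

export default async function PostsPage() {
  // Direct back-end access on the server — no client-side fetching,
  // no useEffect, no loading-state juggling in the browser.
  const posts = await getPosts();

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```

Because this component never runs in the browser, only its rendered output is sent to the client; the data access code stays on the server.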
We also looked at Remix, another robust React framework. Remix is known for its excellent routing capabilities, built-in data loading and caching mechanism, nested routing, and error boundaries. It puts a strong emphasis on leveraging the browser’s built-in functionalities, leading to an intuitive developer experience and robust user experience.
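For comparison, Remix expresses server-side data loading through a route-level `loader` paired with `useLoaderData`. The sketch below assumes a Remix route module and a hypothetical server-only `getUser` helper; the file path and helper names are illustrative:

```typescript
// app/routes/profile.tsx — a Remix route module (names are illustrative).
import { json, type LoaderFunctionArgs } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";
import { getUser } from "~/models/user.server"; // assumed server-only helper

// The loader runs only on the server; Remix handles calling it on
// navigation and revalidating its data after mutations.
export async function loader({ request }: LoaderFunctionArgs) {
  const user = await getUser(request);
  return json({ name: user.name });
}

// The component reads the loader's data — no manual fetch or caching code.
export default function Profile() {
  const { name } = useLoaderData<typeof loader>();
  return <h1>Welcome back, {name}</h1>;
}
```

Note the contrast with the Server Components model: in Remix the component itself still runs on the client, but its data dependencies are declared server-side and handled by the framework's built-in loading and caching mechanism.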
With all these concepts in mind, the pivotal factor when choosing between Next.js and Remix is the project’s specific needs and constraints. Both frameworks have their strengths, and both can be powerful tools for developing efficient, performant React applications.
This chapter was packed with information, insights, and practical examples. The world of React frameworks is dynamic and exciting. As we move forward, these concepts and tools will continue to evolve and shape the landscape of React development.
1. What are the primary reasons for using a React framework like Next.js or Remix, and what benefits do they offer?
2. What are some of the trade-offs or downsides that come with using a React framework?
3. How does the Server Components feature in Next.js 13 work, and in what ways can it improve the performance of a Next.js application?
4. What are some of the key differences between Next.js and Remix, and how might these influence the decision of which framework to use for a specific project?
In this chapter, we briefly mentioned React Server Components and began to scratch their surface at a high level. In the next chapter, we’ll intensify our focus on React Server Components and dive a little deeper: understanding their value proposition and how they work by writing a minimal server that renders and serves React Server Components.
In addition, we’ll examine why React Server Components require a new generation of build tooling like bundlers, routers, and more. Ultimately, we will come away with an improved understanding of React Server Components and their underlying mechanism in what is sure to be an informative and educational deep dive.
Tejas Kumar has been writing React code since 2014 and has given multiple conference talks, workshops, and guest lectures on the topic. With his wealth of experience across the technical stack of multiple startups, Tejas has developed a deep understanding of React’s core concepts and enjoys using it to encourage, equip, and empower others to write React apps fluently.