Carl Rippon
BIRMINGHAM—MUMBAI

Copyright © 2021 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Ashwin Nair
Publishing Product Manager: Pavan Ramchandani
Acquisition Editor: Nitin Nainani
Senior Editor: Hayden Edwards
Content Development Editor: Aamir Ahmed
Technical Editor: Deepesh Patel
Copy Editor: Safis Editing
Project Coordinator: Kinjal Bari
Proofreader: Safis Editing
Indexer: Manju Arasan
Production Designer: Jyoti Chauhan
First published: December 2019
Second edition: January 2021
Production reference: 1060121
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80020-616-8

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at packt.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at customercare@packtpub.com for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Carl Rippon has been involved in the software industry for over 20 years, developing complex line-of-business applications across various sectors. He has spent the last 9 years building single-page applications using a wide range of JavaScript technologies, including Angular, ReactJS, and TypeScript. Carl has written over 150 blog posts on various technologies.
Adam Greene is a senior software engineer at Cvent, the market-leading meetings, events, and hospitality technology provider. Adam has close to 20 years' software development experience with Microsoft technologies and has worked for companies of all sizes, from start-ups to multinational corporations, in various leadership capacities. Adam is a full-stack web developer and has developed SaaS applications using ASP.NET, Angular, and React for several years.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
ASP.NET Core is an open source and cross-platform web application framework built by Microsoft. ASP.NET Core is a great choice for building highly performant backends that interact with SQL Server and are hosted in Azure.
React was built by Facebook in order to improve the scalability of their codebase and was eventually open sourced in 2013. React is now a massively popular library for building component-based frontends and works fantastically well with many backend technologies, including ASP.NET Core.
The book will step you through building a real-world application that leverages both these technologies. You will gradually build the app chapter by chapter to actively learn key topics in these technologies. Each chapter ends with a summary and some questions on the content to reinforce your learning. At the end of the book, you will have a secure, performant, and maintainable single-page application (SPA) that is hosted in Azure along with the knowledge to build your next ASP.NET Core and React app.
If you're a web developer looking to get up to speed with full-stack web application development with .NET Core and React, this book is for you. Although the book does not assume any knowledge of React, a basic understanding of .NET Core will help you to get to grips with the concepts covered.
Chapter 1, Understanding the ASP.NET 5 React Template, covers the standard SPA template that Visual Studio offers for ASP.NET Core and React apps. It covers the programmatic entry points for both the frontend and backend and how they work together in the Visual Studio solution.
Chapter 2, Creating Decoupled React and ASP.NET 5 Apps, steps through how we can create an up-to-date ASP.NET Core and React solution. The chapter includes the use of TypeScript, which is hugely beneficial when creating large and complex frontends.
Chapter 3, Getting Started with React and TypeScript, covers the fundamentals of React, such as JSX, props, state, and events. The chapter also covers how to create strongly typed components with TypeScript.
Chapter 4, Styling React Components with Emotion, covers different approaches to styling React components. The chapter covers styling using plain CSS and then CSS modules before covering CSS-in-JS.
Chapter 5, Routing with React Router, introduces a library that enables apps with multiple pages to be efficiently created. It covers how to declare all the routes in an app and how these map to React components, including routes with parameters. The chapter also covers how to load the components from a route on demand in order to optimize performance.
Chapter 6, Working with Forms, covers how to build forms in React. The chapter covers how to build a form in plain React before leveraging a popular third-party library to make the process of building forms more efficient.
Chapter 7, Managing State with Redux, steps through how this popular library can help manage state across an app. A strongly typed Redux store, along with actions and reducers, is built with the help of TypeScript.
Chapter 8, Interacting with the Database with Dapper, introduces a library that enables us to interact with SQL Server databases in a performant manner. Both reading and writing to a database are covered, including mapping SQL parameters from a C# class and mapping results to C# classes.
Chapter 9, Creating REST API Endpoints, covers how to create a REST API that interacts with a data repository. Along the way, dependency injection, model binding, and model validation are covered.
Chapter 10, Improving Performance and Scalability, covers several ways of improving the performance and scalability of the backend, including reducing database round trips, making APIs asynchronous, and data caching. Along the way, several tools are used to measure the impact of improvements.
Chapter 11, Securing the Backend, leverages ASP.NET Identity along with JSON web tokens in order to add authentication to an ASP.NET Core backend. The chapter also covers the protection of REST API endpoints with the use of standard and custom authorization policies.
Chapter 12, Interacting with RESTful APIs, covers how a React frontend can talk to an ASP.NET Core backend using the JavaScript fetch function. The chapter also covers how a React frontend can gain access to protected REST API endpoints with a JSON web token.
Chapter 13, Adding Automated Tests, covers how to create unit tests and integration tests on the ASP.NET Core backend using xUnit. The chapter also covers how to create tests on pure JavaScript functions as well as React components using Jest.
Chapter 14, Configuring and Deploying to Azure, introduces Azure and then steps through deploying both the backend and frontend to separate Azure app services. The chapter also covers the deployment of a SQL Server database to SQL Azure.
Chapter 15, Implementing CI and CD with Azure DevOps, introduces Azure DevOps before stepping through creating a build pipeline that automatically triggers when code is pushed to a source code repository. The chapter then steps through setting up a release pipeline that deploys the artifacts from the build into Azure.
You need to know the basics of C#, including the following:
You need to know the basics of JavaScript, including the following:
You need to know the basics of HTML, including the following:
You need an understanding of basic CSS, including the following:
An understanding of basic SQL would be helpful but is not essential.
You will need the following installed on your computer:
If you are using the digital version of this book, we advise you to type the code yourself or access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Code in Action videos for this book can be viewed at http://bit.ly/3mB8KuU.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Let's create a file called .eslintrc.json in frontend with the following code."
A block of code is set as follows:
{
  "extends": "react-app"
}
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
function App() {
  const unused = 'something';

  return (
    ...
  );
}
Any command-line input or output is written as follows:
> cd frontend
> npm start
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Click on Install to install the extension and then the Reload button to complete the installation."
Tips or important notes
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at customercare@packtpub.com.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at copyright@packt.com with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
This section provides a high-level introduction to ASP.NET Core and React and explains how to create projects that enable them to work well together. We will create the project for the app that we'll build throughout this book, which will allow users to submit questions and other users to submit answers to them—a Q&A app.
This section comprises the following chapters:
React was Facebook's answer to helping more people work on the Facebook code base and deliver features quicker. React worked so well for Facebook that they eventually open sourced it (https://github.com/facebook/react). Today, React is a mature library for building component-based frontends (client-side code that runs in the browser); it is extremely popular and has a massive community and ecosystem. At the time of writing, React is downloaded over 8.8 million times per week, which is 2 million more than the same time a year ago.
ASP.NET Core was first released in 2016 and is now a mature open source and cross-platform web application framework. It's an excellent choice for building backends (application code that runs on the server) that interact with databases such as SQL Server. It also works well in cloud platforms such as Microsoft Azure.
In this first chapter, we'll start by learning about the single-page application (SPA) architecture. Then, we'll create an ASP.NET Core and React app using the standard template in Visual Studio. We will use this to review and understand the critical parts of a React and ASP.NET Core app. Then, we'll learn where the entry points of both the ASP.NET Core and React apps are and how they integrate with each other. We'll also learn how Visual Studio runs both the frontend and backend together in development mode, as well as how it packages them up, ready for production. By the end of this chapter, we'll have gained fundamental knowledge so that we can start building an app that uses both of these awesome technologies, something we'll gradually build upon throughout this book.
In this chapter, we'll cover the following topics:
Let's get started!
We will need to use the following tools in this chapter:
a) ASP.NET and web development
b) Azure development
c) Node.js development
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition.
Check out the following video to see the code in action: https://bit.ly/3riGWib.
In this section, we will start to understand the single-page application (SPA) architecture.
A SPA is a web app that loads a single HTML page that is dynamically updated by JavaScript as the user interacts with the app. Imagine a simple sign-up form where a user can enter a name and an email address. When the user fills out and submits the form, a whole page refresh doesn't occur. Instead, some JavaScript in the browser handles the form submission with an HTTP POST request and then updates the page with the result of the request. Refer to the following diagram:
Figure 1.1 – Form in a SPA
So, after the first HTTP request that returns the single HTML page, subsequent HTTP requests are only for data and not HTML markup. All the pages are rendered in the client's browser by JavaScript.
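The sign-up flow described above can be sketched in TypeScript. The endpoint path (`/api/signup`) and field names below are assumptions for illustration, not part of the template project:

```typescript
// Sketch of how a SPA submits a form without a full page refresh.
// The endpoint path and field names are hypothetical.
interface SignUpData {
  name: string;
  email: string;
}

// Build the HTTP POST request that replaces a traditional form post;
// only JSON data travels over the wire, not HTML markup.
function buildSignUpRequest(data: SignUpData): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  };
}

// In the browser, JavaScript would send the request and patch the page:
// const res = await fetch('/api/signup', buildSignUpRequest(data));
// resultElement.textContent = await res.text();
```

The key point is that the browser never navigates away; JavaScript sends the request and updates the existing page with the result.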
So, how are different pages with different URL paths handled? For example, if I enter https://qanda/questions/32139 in the browser's address bar, how does it go to the correct page in the app? Well, the browser's history API lets us change the browser's URL and handle changes in JavaScript. This process is often referred to as routing and, in Chapter 5, Routing with React Router, we'll learn how we can build apps with different pages.
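A minimal sketch of that routing idea follows. The function matches a path such as the /questions/32139 example above and extracts the parameter; a real router also listens to popstate events and calls history.pushState, all of which React Router handles for us. The route pattern is illustrative only:

```typescript
// Illustrative route matching for paths like /questions/32139.
// A real client-side router maps many such patterns to components.
function matchQuestionRoute(path: string): number | null {
  const match = /^\/questions\/(\d+)$/.exec(path);
  // Return the question ID if the path matches, otherwise null
  return match ? Number(match[1]) : null;
}
```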
The SPA architecture is what we are going to use throughout this book. We'll use React to render our frontend and use ASP.NET Core for the backend API.
Now that we have a basic understanding of the SPA architecture, we'll take a closer look at a SPA-templated app that Visual Studio can create for us.
In this section, we are going to start by creating an ASP.NET Core and React app using the standard template in Visual Studio. This template is perfect for us to review and understand basic backend components in an ASP.NET Core SPA.
Once we have scaffolded the app using the Visual Studio template, we will inspect the ASP.NET Core code, starting from its entry point. During our inspection, we will learn how the request/response pipeline is configured and how requests to endpoints are handled.
Let's open Visual Studio and carry out the following steps to create our templated app:

Figure 1.2 – Visual Studio start-up dialog

Figure 1.3 – Creating a new web app in Visual Studio

Figure 1.4 – Specifying a project name and location
Another dialog will appear that allows us to specify the version of ASP.NET Core we want to use, as well as the specific type of project we want to create.

Figure 1.5 – The project template and ASP.NET Core version
Important Note
If ASP.NET Core 5.0 isn't listed, make sure that the latest version of Visual Studio is installed. This can be done by choosing the Check for Updates option from the Help menu.
Figure 1.6 – The home page of the app
We'll find out later in this chapter why the app took so long to run the first time. For now, we've created the ASP.NET Core React SPA. Now, let's inspect the backend code.
An ASP.NET Core app is a console app that creates a web server. The entry point for the app is a method called Main in a class called Program, which can be found in the Program.cs file in the root of the project:
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}
This method creates a web host using Host.CreateDefaultBuilder, which configures items such as the following:
We can override the default builder using fluent APIs, which start with Use. For example, to adjust the root of the web content, we can add the highlighted line in the following snippet:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseContentRoot("some-path");
            webBuilder.UseStartup<Startup>();
        });
The last thing that is specified in the builder is the Startup class, which we'll look at in the following section.
The Startup class is found in Startup.cs and configures the services that the app uses, as well as the request/response pipeline. In this subsection, we will understand the two main methods within this class.
Services are configured using a method called ConfigureServices. This method is used to register items such as the following:
Services are added by calling methods on the services parameter and, generally, start with Add. Notice the call to the AddSpaStaticFiles method in the following code snippet:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddSpaStaticFiles(configuration =>
    {
        configuration.RootPath = "ClientApp/build";
    });
}
This is a key part of how the React app is integrated into ASP.NET Core in production since this specifies the location of the React app.
Important Note
The ClientApp/build files are only used in production mode, though. Next, we'll find out how the React app is integrated into ASP.NET Core in development mode.
When a request comes into ASP.NET Core, it goes through what is called the request/response pipeline, where some middleware code is executed. This pipeline is configured using a method called Configure. We will use this method to define exactly which middleware is executed and in what order. Middleware code is invoked by methods that generally start with Use in the app parameter. So, we would typically specify middleware such as authentication early in the Configure method, and in the MVC middleware toward the end. The pipeline that the template created is as follows:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ...
    app.UseStaticFiles();
    app.UseSpaStaticFiles();

    app.UseRouting();
    app.UseEndpoints( ... );

    app.UseSpa(spa =>
    {
        spa.Options.SourcePath = "ClientApp";

        if (env.IsDevelopment())
        {
            spa.UseReactDevelopmentServer(npmScript: "start");
        }
    });
}
Notice that a method called UseSpaStaticFiles is called in the pipeline, just before the routing and endpoints are set up. This allows the host to serve the React app, as well as the web API.
Also, notice that a UseSpa method is called after the endpoint middleware. This is the middleware that will handle requests to the React app, which will simply serve the single page in the React app. It is placed after UseEndpoints so that requests to the web API take precedence over requests to the React app.
The UseSpa method has a parameter that is actually a function that executes when the app is run for the first time. This function contains a branch of logic that calls spa.UseReactDevelopmentServer(npmScript: "start") if we're in development mode. This tells ASP.NET Core to use a development server by running npm start. We'll delve into the npm start command later in this chapter. So, in development mode, the React app will be run on a development server rather than having ASP.NET Core serve the files from ClientApp/build. We'll learn more about this development server later in this chapter.
Next, we will learn how custom middleware can be added to the ASP.NET Core request/response pipeline.
We can create our own middleware using a class such as the following one. This middleware logs information about every single request that is handled by the ASP.NET Core app:
public class CustomLogger
{
    private readonly RequestDelegate _next;

    public CustomLogger(RequestDelegate next)
    {
        _next = next ?? throw new ArgumentNullException(nameof(next));
    }

    public async Task Invoke(HttpContext httpContext)
    {
        if (httpContext == null) throw new ArgumentNullException(nameof(httpContext));

        // TODO - log the request
        await _next(httpContext);
        // TODO - log the response
    }
}
This class contains a method called Invoke, which is the code that is executed in the request/response pipeline. The next method to call in the pipeline is passed into the class and held in the _next variable, which we need to invoke at the appropriate point in our Invoke method. The preceding example is a skeleton class for a custom logger. We would log the request details at the start of the Invoke method and log the response details after the _next delegate has been executed, which will be when the rest of the pipeline has been executed.
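The chained-delegate pattern above can be sketched outside of .NET. The following TypeScript is a conceptual analogy only, not the real ASP.NET Core types, and it is synchronous for brevity, whereas the real Invoke method is async:

```typescript
// Conceptual analogy of ASP.NET Core's middleware chain.
// These are made-up types, not the framework's.
type Ctx = { log: string[] };
type RequestDelegate = (ctx: Ctx) => void;
type Middleware = (next: RequestDelegate) => RequestDelegate;

// Equivalent of CustomLogger: act before and after the rest of the pipeline.
const customLogger: Middleware = (next) => (ctx) => {
  ctx.log.push('request');  // log the request on the way in
  next(ctx);                // run the rest of the pipeline
  ctx.log.push('response'); // log the response on the way back out
};

// Compose middleware in registration order, ending with a terminal handler.
function buildPipeline(middleware: Middleware[], terminal: RequestDelegate): RequestDelegate {
  return middleware.reduceRight((next, mw) => mw(next), terminal);
}
```

Running a request through a pipeline built from customLogger and a terminal handler produces the log entries in the order request, handler, response, which mirrors how the response logging in Invoke only runs after the rest of the pipeline has completed.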
The following diagram is a visualization of the request/response pipeline and shows how each piece of middleware in the pipeline is invoked:
Figure 1.7 – Visualization of the request/response pipeline
We make our middleware available as an extension method on the IApplicationBuilder interface in a new source file:
public static class MiddlewareExtensions
{
    public static IApplicationBuilder UseCustomLogger(this IApplicationBuilder app)
    {
        return app.UseMiddleware<CustomLogger>();
    }
}
The UseMiddleware method in IApplicationBuilder is used to register the middleware class. The middleware will now be available in an instance of IApplicationBuilder in a method called UseCustomLogger.
So, the middleware can be added to the pipeline in the Configure method in the Startup class, as follows:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseCustomLogger();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Error");
        app.UseHsts();
    }

    app.UseHttpsRedirection();
    app.UseStaticFiles();
    app.UseSpaStaticFiles();
    app.UseRouting();
    app.UseEndpoints( ... );
    app.UseSpa( ... );
}
In the previous example, the custom logger is invoked at the start of the pipeline so that the request is logged before it is handled by any other middleware. The response that is logged in our middleware will have been handled by all the other middleware as well.
So, the Startup class allows us to configure how all requests are generally handled. How can we specify exactly what happens when requests are made to a specific resource in a web API? Let's find out.
Web API resources are implemented using controllers. Let's have a look at the controller that the template project created by opening WeatherForecastController.cs in the Controllers folder. This contains a class called WeatherForecastController that inherits from ControllerBase with a Route annotation:
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    ...
}
The annotation specifies the web API resource URL that the controller handles. The [controller] object is a placeholder for the controller name, minus the word Controller. This controller will handle requests to weatherforecast.
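The [controller] token rule can be expressed as a tiny function. This is only a sketch of the naming convention; note that ASP.NET Core route matching is case-insensitive, so the lowercasing here is purely cosmetic:

```typescript
// Sketch of the [controller] route token convention:
// the route is the class name minus the "Controller" suffix.
// Lowercased for display; ASP.NET Core routes are case-insensitive.
function controllerRoute(className: string): string {
  return className.replace(/Controller$/, '').toLowerCase();
}
```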
The Get method in the class is called an action method. Action methods handle specific requests to the resource for a specific HTTP method and subpath. We decorate the method with an attribute to specify the HTTP method and subpath the method handles. In our example, we are handling an HTTP GET request to the root path (weatherforecast) on the resource:
[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    ...
}
Let's have a closer look at the web API at runtime by carrying out the following steps:

Figure 1.8 – A request to the weatherforecast endpoint in the browser developer tools
Figure 1.9 – The response body for the weatherforecast endpoint in the browser developer tools
If we look back at the Get action method, we are returning an object of the IEnumerable<WeatherForecast> type. The MVC middleware automatically converts this object into JSON and puts it in the response body with a 200 status code for us.
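On the frontend, that JSON response can be parsed into typed objects. The interface below assumes camel-cased JSON fields matching the template's WeatherForecast class; the exact shape of the serialized data may differ:

```typescript
// Hypothetical typed view of the JSON that the MVC middleware produces
// from IEnumerable<WeatherForecast>. Field names are assumptions.
interface WeatherForecast {
  date: string;
  temperatureC: number;
  summary: string;
}

// Parse a raw JSON response body into typed forecast objects.
function parseForecasts(json: string): WeatherForecast[] {
  return JSON.parse(json) as WeatherForecast[];
}

// In the browser, something like:
// const forecasts = parseForecasts(await (await fetch('weatherforecast')).text());
```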
So, that was a quick look at the backend that the template scaffolded for us. The request/response pipeline is configured in the Startup class and the endpoint handlers are implemented using controller classes.
In the next section, we'll walk through the React frontend.
It's time to turn our attention to the React frontend. In this section, we'll inspect the frontend code, starting with the entry point, which is a single HTML page. We will explore how the frontend is executed in development mode and how it is built in preparation for deployment. We will then learn how the frontend dependencies are managed and also understand why it took over a minute to run the app for the first time. Finally, we will explore how React components fit together and how they access the ASP.NET Core backend.
We have a good clue as to where the entry point is from our examination of the Startup class in the ASP.NET Core backend. In the Configure method, the SPA middleware is set up with the source path set to ClientApp:
app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        spa.UseReactDevelopmentServer(npmScript: "start");
    }
});
If we look in the ClientApp folder, we'll see a file called package.json. This is a file that is often used in React apps and contains information about the project, its npm dependencies, and the scripts that can be run to perform tasks.
Important Note
If we open the package.json file, we will see react listed as a dependency:
"dependencies": {
"react": "^16.0.0",
...
"react-scripts": "^3.4.1",
...
},
A version is specified against each package name. The versions in your package.json file may be different to the ones shown in the preceding code snippet. The ^ symbol in front of the version means that the latest minor version can be safely installed, according to semantic versioning.
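The caret rule can be approximated in a few lines. This sketch only covers versions at or above 1.0.0; the real npm semver implementation also handles 0.x versions, pre-release tags, and more:

```typescript
// Minimal sketch of the caret (^) range rule for versions >= 1.0.0:
// the major version must match, and the minor/patch parts may be
// anything equal to or newer than the base version.
function satisfiesCaret(version: string, range: string): boolean {
  const base = range.replace(/^\^/, '');
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  const [rMaj, rMin, rPat] = base.split('.').map(Number);
  if (vMaj !== rMaj) return false;      // major bump is never safe
  if (vMin !== rMin) return vMin > rMin; // newer minor is allowed
  return vPat >= rPat;                   // newer patch is allowed
}
```

So, under this rule, 16.14.0 satisfies ^16.0.0 while 17.0.0 does not, which matches the behavior described in the note below.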
Important Note
So, react 16.14.0 can be safely installed because this is the latest minor version of React 16 at the time of writing this book.
The react-scripts dependency gives us a big clue as to how React was scaffolded. react-scripts is a set of scripts from the popular Create React App (CRA) tool that was built by the developers at Facebook. This tool has done a huge amount of configuration for us, including creating a development server, bundling, linting, and unit testing. We'll learn more about CRA in the next chapter.
The root HTML page for an app scaffolded by CRA is index.html, which can be found in the public folder in the ClientApp folder. It is this page that hosts the React app. The root JavaScript file that is executed for an app scaffolded by CRA is index.js, which is in the ClientApp folder. We'll examine both the index.html and index.js files later in this chapter.
Next, we will learn how the React frontend is executed in development mode.
In the following steps, we'll examine the ASP.NET Core project file to see what happens when the app runs in development mode:

Figure 1.10 – Opening the project file in Visual Studio
This is an XML file that contains information about the Visual Studio project.
<Target Name="DebugEnsureNodeEnv" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('$(SpaRoot)node_modules') ">
<!-- Ensure Node.js is installed -->
<Exec Command="node --version"
ContinueOnError="true">
<Output TaskParameter="ExitCode"
PropertyName="ErrorCode" />
</Exec>
<Error Condition="'$(ErrorCode)' != '0'"
Text="Node.js is required to build and run this
project. To continue, please install Node.js from
https://nodejs.org/, and then restart your
command prompt or IDE."
/>
<Message Importance="high" Text="Restoring
dependencies using 'npm'.
This may take several minutes..." />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm
install" />
</Target>
This executes tasks when the ClientApp/node_modules folder doesn't exist and the Visual Studio project is run in debug mode, which is the mode that's used when we press F5.
> node --version
This command returns the version of Node that is installed. This may seem like an odd thing to do, but its purpose is to determine whether Node is installed. If Node is not installed, the command will error and be caught by the Error task, which informs the user that Node needs to be installed and where to install it from.

Figure 1.11 – Restoring npm dependencies message when running a project for the first time
> npm install
This command downloads all the packages that are listed as dependencies in package.json into a folder called node_modules:
Figure 1.12 – The node_modules folder
We can see this in the Solution Explorer window if the Show All Files option is on. Notice that there are a lot more folders in node_modules than dependencies listed in package.json. This is because the dependencies will have dependencies. So, the packages in node_modules are all the dependencies in the dependency tree.
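The difference between the two lists comes from transitive dependencies, which a simple tree walk illustrates. The dependency tree below is made up for illustration:

```typescript
// Why node_modules holds more packages than package.json lists:
// each dependency brings its own dependencies. The tree is hypothetical.
type DepTree = { [pkg: string]: string[] };

// Collect every package reachable from the root dependencies.
function allDependencies(tree: DepTree, roots: string[]): Set<string> {
  const seen = new Set<string>();
  const visit = (pkg: string): void => {
    if (seen.has(pkg)) return;
    seen.add(pkg);
    for (const dep of tree[pkg] ?? []) visit(dep);
  };
  roots.forEach(visit);
  return seen;
}
```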
At the start of this section, we asked ourselves why it took such a long time for the project to run the app for the first time. The answer is that this last task takes a while because there are a lot of dependencies to download and install. On subsequent runs, node_modules will have been created, so these sets of tasks won't get invoked.
Earlier in this chapter, we learned that ASP.NET Core invokes an npm start command when the app is in development mode. If we look at the scripts section in package.json, we'll see the definition of this command:
"scripts": {
"start": "rimraf ./build && react-scripts start",
...
}
This command deletes a folder called build and runs a Webpack development server.
Important Note
Why would we want to use the Webpack development server when we already have our ASP.NET Core backend running in IIS Express? The answer is a shortened feedback loop, which will increase our productivity. Later, we'll see that we can make a change to a React app running in the Webpack development server and that those changes are automatically loaded without stopping and restarting the application, giving us a really quick feedback loop and great productivity.
The publishing process is the process of building artifacts to run an application in a production environment.
Let's continue and inspect the XML ASP.NET Core project file by looking at the Target element, which has a Name attribute of PublishRunWebpack. The following code executes a set of tasks when the Visual Studio project is published:
<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are
  freshly built in production mode -->
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />
  <!-- Include the newly-built files in the publish output -->
  <ItemGroup>
    <DistFiles Include="$(SpaRoot)build\**" />
    <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')"
      Exclude="@(ResolvedFileToPublish)">
      <RelativePath>%(DistFiles.Identity)</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>
The first task that is run is the execution of the npm install command via an Exec task. This will ensure that all the dependencies are downloaded and installed. Obviously, if we've already run our project in debug mode, then the dependencies should already be in place.
The next task is an Exec task that runs the following npm command:
> npm run build
This task will run an npm script called build. If we look in the package.json file again, we'll see this script in the scripts section:
"scripts": {
  "start": "rimraf ./build && react-scripts start",
  "build": "react-scripts build",
  "test": "cross-env CI=true react-scripts test --env=jsdom",
  "eject": "react-scripts eject",
  "lint": "eslint ./src/"
}
This references the create-react-app build script, which bundles the React app ready for production, optimizing it for performance and outputting the content into a folder called build.
The next set of tasks, defined in the ItemGroup element, takes the content from the build folder and places it in the publish location, along with the rest of the content to publish.
Let's give this a try and publish our app:

Figure 1.13 – Publishing to a folder

Figure 1.14 – Publish location
Figure 1.15 – Publish profile screen
After a while, we'll see the content appear in the folder we specified, including a ClientApp folder. If we look in this ClientApp folder, we'll see a build folder containing the React app, ready to be run in a production environment. Notice that the build folder contains index.html, which is the single page that will host the React app in production.
Important Note
Earlier, we learned that frontend dependencies are defined in package.json. Why not just list all the dependencies as script tags in index.html? Why do we need the extra complexity of npm package management in our project? The answer is that a long list of dependencies is hard to manage. If we used script tags, we'd need to make sure these are ordered correctly. We'd also be responsible for downloading the packages, placing them locally in our project, and keeping them up to date. We have a huge list of dependencies in our scaffolded project already, without starting work on any functionality in our app. For these reasons, managing dependencies with npm has become an industry standard.
Let's open package.json again and look at the dependencies section:
"dependencies": {
  "bootstrap": "^4.1.3",
  "jquery": "3.4.1",
  "merge": "^1.2.1",
  "oidc-client": "^1.9.0",
  "react": "^16.0.0",
  "react-dom": "^16.0.0",
  "react-router-bootstrap": "^0.24.4",
  "react-router-dom": "^4.2.2",
  "react-scripts": "^3.0.1",
  "reactstrap": "^6.3.0",
  "rimraf": "^2.6.2"
},
We've already observed the react dependency, but what is the react-dom dependency? Well, React doesn't just target the web; it also targets native mobile apps. This means that react is the core React library used for both web and mobile, while react-dom is the library specific to targeting the web.
The react-router-dom package is the npm package for React Router and helps us manage the different pages in our app in the React frontend, without us needing to do a round trip to the server. We'll learn more about React Router in Chapter 5, Routing with React Router. The react-router-bootstrap package allows Bootstrap to work nicely with React Router.
We can see that this React app has a dependency for Bootstrap 4.1 with the bootstrap npm package. So, Bootstrap CSS classes and components can be referenced to build the frontend in our project. The reactstrap package is an additional package that allows us to consume Bootstrap nicely in React apps. Bootstrap 4.1 has a dependency on jQuery, which is the reason why we have the jquery package dependency.
The merge package contains a function that merges objects together, while oidc-client is a package for interacting with OpenID Connect (OIDC) and OAuth2.
The final dependency that we haven't covered yet is rimraf. This simply allows files to be deleted, regardless of the host operating system. We can see that this is referenced in the start script:
"scripts": {
  "start": "rimraf ./build && react-scripts start",
  ...
}
Earlier in this chapter, we learned that this script is invoked when our app is running in development mode. So, rimraf ./build deletes the build folder and its contents before the development server starts.
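The effect of rimraf ./build can be sketched with Node's built-in fs module. This is an illustrative stand-in rather than the book's code: rimraf predates fs.rmSync (added in Node 14.14), which now offers similar cross-platform recursive deletion. To keep the sketch safe to run, it deletes a temporary folder rather than an actual build folder:

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Create a temporary folder with a stale file in it, standing in for
// a previous build output.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'build-'));
fs.writeFileSync(path.join(dir, 'stale.js'), '// old bundle');

// recursive: true deletes the folder's contents as well;
// force: true suppresses errors if the folder is already gone.
fs.rmSync(dir, { recursive: true, force: true });

console.log(fs.existsSync(dir)); // false
```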
If we look further down, we'll see a section called devDependencies. These are dependencies that are only used during development and not in production:
"devDependencies": {
  "ajv": "^6.9.1",
  "cross-env": "^5.2.0",
  "eslint": "^6.8.0",
  "eslint-config-react-app": "^5.2.1",
  "eslint-plugin-flowtype": "^4.6.0",
  "eslint-plugin-import": "^2.20.0",
  "eslint-plugin-jsx-a11y": "^6.2.3",
  "eslint-plugin-react": "^7.18.3"
},
The following is a brief description of these dependencies: ajv is a JSON schema validator that some of the other packages depend on; cross-env allows environment variables to be set in npm scripts regardless of the host operating system; and eslint, together with the eslint-config and eslint-plugin packages, is the linter and its rule sets, which identify potentially problematic code.
Let's move on and learn how the single page is served and how the React app is injected into it.
We know that the single page that hosts the React app is index.html, so let's examine this file. This file can be found in the public folder of the ClientApp folder. The React app will be injected into the div tag, which has an id of root:
<div id="root"></div>
Let's run our app again in Visual Studio to confirm that this is the case by pressing F5. If we open the developer tools in the browser page that opens and inspect the DOM in the Elements panel, we'll see this div tag with the React content inside it:
Figure 1.16 – Root div element and script elements
Notice the script elements at the bottom of the body element. These contain all the JavaScript code for our React app, including the React library itself. However, these script elements don't exist in the source index.html file, so how did they get into the served page? Webpack added them after bundling all the JavaScript together and splitting it into optimal chunks that can be loaded on demand. If we look in the ClientApp folder and its subfolders, we'll find that neither the static folder nor these JavaScript files exist. What's going on? Remember that when we run the app with the Visual Studio debugger, the Webpack development server serves index.html. The JavaScript files are virtual files that the development server creates and serves from memory.
Now, what happens in production mode, when the Webpack development server isn't running? Let's have a closer look at the app we published earlier in this chapter by opening the index.html file in the build folder, which can be found in the ClientApp folder. The script elements at the bottom of the body element will look something like the following:
<script>
!function(e){...}([])
</script>
<script src="/static/js/2.f6873cc5.chunk.js"></script>
<script src="/static/js/main.61537c83.chunk.js"></script>
Line breaks have been added to the preceding code snippet to make it more readable. The hashed parts of the filenames (f6873cc5 and 61537c83 in this example) will vary each time the app is published. The filenames are unique in order to break browser caching. If we look for these JavaScript files in the published output, we'll find that they do exist. So, in production mode, the web server serves these physical JavaScript files.
If we open this JavaScript file, we'll see that it contains all the JavaScript for our app. This JavaScript is minified so that it downloads to the browser quickly.
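Webpack's real hashing scheme is more involved, but the idea behind these hashed filenames can be sketched as follows. The contentHash helper is a hypothetical stand-in that derives a short hash from a file's content, so changed content produces a changed filename and forces browsers to download the new bundle instead of serving a cached one:

```typescript
import { createHash } from 'crypto';

// Derive a short hex hash from the file's content (illustrative only;
// Webpack uses its own hashing configuration).
const contentHash = (content: string): string =>
  createHash('md5').update(content).digest('hex').slice(0, 8);

// Two versions of "the same" bundle with different content get
// different filenames, so the browser cache can never serve stale code.
const v1 = `main.${contentHash('console.log("v1")')}.chunk.js`;
const v2 = `main.${contentHash('console.log("v2")')}.chunk.js`;

console.log(v1 !== v2); // true - new content, new filename
```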
Important Note
However, the file isn't small and contains a lot of JavaScript. What's going on here? Well, the file contains not only our JavaScript app code but also the code from all the dependencies, including React itself.
Now, it's time to start looking at the React app code and how components are implemented. Remember that the root JavaScript file is index.js in the ClientApp folder. Let's open this file and look closely at the following block of code:
const rootElement = document.getElementById('root');
ReactDOM.render(
  <BrowserRouter basename={baseUrl}>
    <App />
  </BrowserRouter>,
  rootElement
);
The first statement selects the div element we discovered earlier, which has an id of root, and stores it in a variable called rootElement.
The next statement extends over multiple lines and calls the render function from the React DOM library. It is this function that injects the React app content into the root div element. The rootElement variable, which contains a reference to the root div element, is passed into this function as the second parameter.
The first parameter that is passed into the render function is more interesting. In fact, it doesn't even look like legal JavaScript! This is, in fact, JSX, which we'll learn about in detail in Chapter 3, Getting Started with React and TypeScript.
So, the first parameter passes in the root React component called BrowserRouter, which comes from the React Router library. We'll learn more about this component in Chapter 5, Routing with React Router.
Nested inside the BrowserRouter component is a component called App. If we look at the top of the index.js file, we will see that the App component is imported from a file called App.js:
import App from './App';
So, the App component is contained in the App.js file. Let's have a quick look. A class called App is defined in this file:
export default class App extends Component {
  static displayName = App.name;

  render() {
    return (
      <Layout>
        <Route exact path='/' component={Home} />
        <Route path='/counter' component={Counter} />
        <Route path='/fetch-data' component={FetchData} />
      </Layout>
    );
  }
}
Notice the export and default keywords before the class keyword. These make App the default export of the module, which is why it can be imported in index.js with import App from './App'.
A method called render defines the output of the component. This method returns JSX, which, in this case, references a Layout component in our app code and a Route component from React Router.
So, we are starting to understand how React components can be composed together to form a UI.
Now, let's go through the React development experience by making a simple change:
render() {
  return (
    <div>
      <h1>Hello, React!</h1>
      <p>Welcome to your new single-page application, built with:</p>
      ...
    </div>
  );
}
Figure 1.17 – The home page is automatically updated in the browser
The Webpack development server automatically updated the running app with our change when the file was saved. Seeing changes appear almost immediately makes developing our React frontend really productive.
The final topic we'll cover in this chapter is how the React frontend consumes the backend web API. If the app isn't running, then run it by pressing F5 in Visual Studio. If we click on the Fetch data option in the top navigation bar in the app that opens in the browser, we'll see a page showing weather forecasts:
Figure 1.18 – Weather forecast data
If we cast our minds back to earlier in this chapter, in the Understanding controllers section, we looked at an ASP.NET Core controller that surfaced a web API that exposed the data at weatherforecast. So, this is a great place to have a quick look at how a React app can call an ASP.NET Core web API.
The component that renders this page is in FetchData.js. Let's open this file and look at the constructor:
constructor(props) {
  super(props);
  this.state = { forecasts: [], loading: true };
}
The constructor in a JavaScript class is a special method that is automatically invoked when a class instance is created. So, it's a great place to initialize class-level variables.
The constructor initializes a component state, which contains the weather forecast data, and a flag to indicate whether the data is being fetched. We'll learn more about component state in Chapter 3, Getting Started with React and TypeScript.
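The pattern can be seen in isolation in the following sketch. FetchDataSketch is a hypothetical plain class, not the book's component; it just demonstrates that the constructor runs once, when the instance is created, and is therefore a good place to initialize state:

```typescript
type Forecast = { dateFormatted: string; temperatureC: number };

class FetchDataSketch {
  state: { forecasts: Forecast[]; loading: boolean };

  constructor() {
    // Mirrors the FetchData component: no data has been fetched yet,
    // so forecasts is empty and the loading flag is true.
    this.state = { forecasts: [], loading: true };
  }
}

const component = new FetchDataSketch();
console.log(component.state.loading); // true
console.log(component.state.forecasts.length); // 0
```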
Let's have a look at the componentDidMount method:
componentDidMount() {
  this.populateWeatherData();
}
This method is invoked by React when the component is inserted into the tree and is the perfect place to load data. It calls a populateWeatherData method, so let's have a look at that:
async populateWeatherData() {
  const response = await fetch('weatherforecast');
  const data = await response.json();
  this.setState({ forecasts: data, loading: false });
}
Notice the async keyword before the populateWeatherData function name. Also, notice the await keywords within the function.
We can see that a function called fetch is used within this method.
The parameter that's passed into the fetch function is the path to the web API resource; that is, weatherforecast. A relative path can be used because the React app and web API are of the same origin.
Once the weather forecast data has been fetched from the web API and the response has been parsed, the data is placed in the component's state.
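The flow can be sketched outside the browser by swapping fetch for a stand-in that resolves immediately, so no server is needed. fakeFetch and populateWeatherDataSketch are illustrative names, not part of the book's project:

```typescript
type WeatherForecast = { summary: string };

// A stand-in for fetch: returns a Promise of a response-like object
// whose json method also returns a Promise, just as the real fetch does.
const fakeFetch = () =>
  Promise.resolve({
    json: (): Promise<WeatherForecast[]> =>
      Promise.resolve([{ summary: 'Mild' }]),
  });

async function populateWeatherDataSketch(): Promise<WeatherForecast[]> {
  // Each await suspends this function (without blocking the thread)
  // until the Promise settles, so the code reads top to bottom like
  // synchronous code.
  const response = await fakeFetch();
  const data = await response.json();
  return data;
}

populateWeatherDataSketch().then((data) => console.log(data[0].summary)); // Mild
```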
Hang on a minute, though – the native fetch function isn't implemented in Internet Explorer (IE). Does that mean our app won't work in IE? Well, the fetch function isn't available in IE, but CRA has set up a polyfill for this so that it works perfectly fine.
Now, let's turn our attention to the render method:
render() {
  let contents = this.state.loading
    ? <p><em>Loading...</em></p>
    : FetchData.renderForecastsTable(this.state.forecasts);

  return (
    <div>
      <h1 id="tabelLabel">Weather forecast</h1>
      <p>This component demonstrates fetching data from the server.</p>
      {contents}
    </div>
  );
}
The code may contain concepts you aren't familiar with, so don't worry if this doesn't make sense to you at this point. I promise that it will make sense as we progress through this book!
We already know that the render method in a React component returns JSX, and we can see that JSX is returned in this render method as well. Notice the {contents} reference in the JSX, which injects the contents JavaScript variable into the markup beneath the p tag, at the bottom of the div tag. The contents variable is set in the first statement of the render method: Loading... is displayed while the web API request is taking place, and the result of FetchData.renderForecastsTable is displayed when the request has finished. We'll have a quick look at this method now:
static renderForecastsTable(forecasts) {
  return (
    <table className='table table-striped' aria-labelledby="tabelLabel">
      <thead>
        <tr>
          <th>Date</th>
          <th>Temp. (C)</th>
          <th>Temp. (F)</th>
          <th>Summary</th>
        </tr>
      </thead>
      <tbody>
        {forecasts.map(forecast =>
          <tr key={forecast.dateFormatted}>
            <td>{forecast.dateFormatted}</td>
            <td>{forecast.temperatureC}</td>
            <td>{forecast.temperatureF}</td>
            <td>{forecast.summary}</td>
          </tr>
        )}
      </tbody>
    </table>
  );
}
This function returns JSX, which contains an HTML table with the data from the forecasts data array injected into it. The map method on the forecasts array is used to iterate through the items in the array and render tr tags in the HTML table containing the data.
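The map pattern is plain JavaScript and can be seen outside React. In this sketch, each forecast is rendered into a string rather than a tr element, so the example runs anywhere; the data values are made up for illustration:

```typescript
const forecasts = [
  { dateFormatted: '01/01/2021', temperatureC: 4 },
  { dateFormatted: '02/01/2021', temperatureC: 7 },
];

// map produces one rendered "row" per item in the data array -
// exactly what renderForecastsTable does with tr elements.
const rows = forecasts.map(
  (forecast) => `${forecast.dateFormatted}: ${forecast.temperatureC}C`
);

console.log(rows); // ['01/01/2021: 4C', '02/01/2021: 7C']
```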
Notice that we have applied a key attribute to each tr tag. This isn't a standard attribute on an HTML table row; it helps React detect which rows have changed, been added, or been removed, so it should be set to a value that uniquely identifies the item in the list.
Again, this is a lot to take in at this point, so don't worry if there are bits you don't fully understand. This will all become second nature to you by the end of this book.
In this chapter, we started off by learning that all the pages in a SPA are rendered in JavaScript with the help of a framework such as React, with requests for data handled by a backend API built with a framework such as ASP.NET Core. We now understand that the Startup class configures services that are used in the ASP.NET Core backend, as well as the request/response pipeline. Requests to specific backend API resources are handled by controller classes.
We also saw how CRA was leveraged by the ASP.NET Core React template to create the React app. This tool did a huge amount of setup and configuration for us, including creating a development server, bundling, linting, and even creating key polyfills for IE. We learned that the React app lives in the ClientApp folder in an ASP.NET Core React templated project, with a file called index.html being the single page. A file called package.json defines key project information for the React app, including its dependencies and the tasks that are used to run and build the React app.
This chapter has given us a great overview of all the basic parts of an ASP.NET Core React app and how they work together. We'll explore many of the topics we've covered in this chapter in greater depth throughout this book.
With the knowledge we've gained from this chapter, we are now ready to start creating the app we are going to build through this book, which we'll start to do in the next chapter.
Have a go at answering the following questions to test the knowledge that you have acquired in this chapter:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAuthentication();
    app.UseHttpsRedirection();
    app.UseMvc();
}
Which is invoked first in the request/response pipeline – authentication or the MVC controllers?
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<MyStartup>();
        });
The following are some useful links so that you can learn more about the topics that were covered in this chapter:
Throughout this book, we are going to develop a question-and-answer app; we will refer to it as the Q&A app. Users will be able to submit a question and other users will be able to submit answers. They will also be able to search for previous questions and view the answers that were given for them. In this chapter, we are going to start building this app by creating the ASP.NET Core and React projects.
In the previous chapter, we learned how to create an ASP.NET Core and React app using the template in Visual Studio. However, we'll create our app in a slightly different manner in this chapter and understand the reasoning behind this decision.
Our React app will use TypeScript, so we'll learn about the benefits of TypeScript and how to create a React and TypeScript app.
We will start this chapter by creating an ASP.NET Web API project before moving on to create a separate frontend project that uses React and TypeScript. We will then add a tool to the frontend project that identifies potential problematic code, as well as a tool that automatically formats the code.
We'll cover the following topics in this chapter:
By the end of this chapter, we will be ready to start building the frontend of our Q&A app with React and TypeScript.
We will need the following tools in this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. In order to restore code from a chapter, download the relevant source code repository and open the relevant folder in the relevant editor. If the code is frontend code, then npm install can be entered into the Terminal to restore the dependencies.
Check out the following video to see the Code in Action: https://bit.ly/2J7rc0k.
Creating an ASP.NET Core Web API project
We are going to create the ASP.NET Core and React projects separately in this chapter. In Chapter 1, Understanding the ASP.NET 5 React Template, we discovered that old versions of React and create-react-app were used. Creating the React project separately allows us to use a more recent version of React and create-react-app. Creating the React project separately also allows us to use TypeScript with React, which will help us be more productive as the code base grows.
In this section, we will create our ASP.NET Core backend in Visual Studio.
Let's open Visual Studio and carry out the following steps:

Figure 2.1 – Creating a new project

Figure 2.2 – Selecting a web application project

Figure 2.3 – Naming the project
Now, another dialog will appear that will allow us to specify the version of ASP.NET Core we want to use, as well as the specific type of project we want to create.

Figure 2.4 – Selecting an API project
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        ...
    }
    else
    {
        app.UseHttpsRedirection();
    }
    app.UseRouting();
    ...
}
We have made this change because, in development mode, our frontend will use the HTTP protocol. By default, the Firefox browser doesn't allow an app served over one protocol to make network requests to a backend that uses a different protocol. Due to this, we want the frontend and backend to both use the HTTP protocol in development mode.
That's the only change we are going to make to our backend in this chapter. In the next section, we'll create the React frontend project.
In Chapter 1, Understanding the ASP.NET 5 React Template, we discovered that create-react-app (CRA) was leveraged by the Visual Studio template to create the React app. We also learned that CRA did a lot of valuable setup and configuration for us. We are going to leverage CRA in this section to create our React app. CRA is a package in the npm registry that we will execute to scaffold a React and TypeScript project. First, we will take the time to understand the benefits of using TypeScript.
TypeScript adds an optional static typing layer on top of JavaScript that we can use during our development. Static types allow us to catch certain problems earlier in the development process. For example, if we make a mistake when referencing a variable, TypeScript will spot this immediately once we've mistyped the variable, as shown in the following screenshot:
Figure 2.5 – TypeScript catching an unknown variable
Another example is that, if we forget to pass a required property when referencing a React component, TypeScript informs us of the mistake straight away:
Figure 2.6 – TypeScript catching a missing React component property
This means we get a build-time error rather than a runtime error.
This also helps tools such as Visual Studio Code provide accurate IntelliSense; robust refactoring features, such as renaming a class; and great code navigation.
As we start building our frontend, we'll quickly experience the types of benefits that make us more productive.
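As a minimal sketch of this benefit (not code from the book's app), the Props type below makes the name property required, so omitting or mistyping it is a compile-time error rather than a runtime one:

```typescript
// Props requires a name of type string.
type Props = { name: string };

function greet(props: Props): string {
  return `Hello, ${props.name}!`;
}

console.log(greet({ name: 'React' })); // Hello, React!

// Both of the following fail to compile, before the code ever runs:
// greet({});            // error: property 'name' is missing
// greet({ nme: 'x' });  // error: 'nme' does not exist in type 'Props'
```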
Now that we are starting to understand the benefits of TypeScript, it's time to create a React project that uses TypeScript in the next subsection.
Let's create the React and TypeScript app with CRA by carrying out the following steps:
> npx create-react-app frontend --template typescript
The npx tool is part of npm that temporarily installs the create-react-app npm package and uses it to create our project.
We have told the create-react-app npm package to create our project in a folder called frontend.
The --template typescript option has created our React project with TypeScript.
> cd frontend
> npm start

Figure 2.7 – App component in our React app
So, why are we using Visual Studio Code to develop our React app and not Visual Studio? Well, the overall experience is a little better and faster when developing frontend code with Visual Studio Code.
So, we now have a React and TypeScript app using the latest version of CRA. In the next section, we are going to add more automated checks to our code by introducing linting into our project.
Linting is a series of checks that are used to identify code that is potentially problematic. A linter is a tool that performs linting, and it can be run in our code editor as well as in the continuous integration (CI) process. So, linting helps us write consistent and high-quality code as it is being written.
ESLint is the most popular linter in the React community and has already been installed in our project for us by CRA. Due to this, we will be using ESLint as our linting tool for our app.
In the following subsections, we will learn how to configure ESLint's rules, as well as how to configure Visual Studio Code to highlight violations.
CRA has already installed ESLint and configured it for us.
We need to tell Visual Studio Code to lint TypeScript code. Let's carry out the following steps to do this:


Figure 2.8 – ESLint: Probe setting
This setting tells Visual Studio Code which languages to run through ESLint while validating code.
Important Note
The preceding screenshot shows the setting being added to all the projects for the current user because it is in the User tab. If we just want to change a setting in the current project, we can find it in the Workspace tab and adjust it.

Figure 2.9 – Visual Studio Code ESLint extension
Now, Visual Studio Code will be using ESLint to validate our code. Next, we will learn how to configure ESLint.
Now that Visual Studio Code is linting our code, let's carry out the following steps to understand how we can configure the rules that ESLint executes:
{
  "extends": "react-app"
}
This file defines the rules that ESLint executes. We have just told it to execute all the rules that have been configured in CRA.
const App: React.FC = () => {
  const unused = 'something';
  return (
    ...
  );
};
We'll see that ESLint immediately flags this line as being unused:

Figure 2.10 – ESLint catching an unused variable
That's great – this means our code is being linted.
{
  "extends": "react-app",
  "rules": {
    "no-debugger": "warn"
  }
}
Here, we have told ESLint to warn us about the use of debugger statements.
Important Note
The list of available ESLint rules can be found at https://eslint.org/docs/rules/.
const App: React.FC = () => {
  const unused = 'something';
  debugger;
  return (
    ...
  );
};
We will immediately see that ESLint flags this up:
Figure 2.11 – ESLint catching a debugger statement
Now, we have linting configured in our project. Let's clean up the code by performing the following steps:
To quickly recap, CRA installs and configures ESLint for us. We can adjust the configuration using a .eslintrc.json file.
In the next section, we'll look at how we can autoformat the code.
Enforcing a consistent code style improves the readability of the code base, but it can be a pain, even if ESLint reminds us to do it. Wouldn't it be great if those semicolons we forgot to add to the end of our statements were just automatically added for us? Well, that is what automatic code formatting tools can do for us, and Prettier is one of these great tools.
We will start this section by installing Prettier before configuring it to work nicely with ESLint and Visual Studio Code.
We are going to add Prettier to our project by following these steps in Visual Studio Code:
> npm install prettier --save-dev
> npm install eslint-config-prettier eslint-plugin-prettier --save-dev
eslint-config-prettier disables ESLint rules that conflict with Prettier. eslint-plugin-prettier is an ESLint plugin that runs Prettier as an ESLint rule.
{
  "extends": ["react-app", "plugin:prettier/recommended"],
  "rules": {
    "prettier/prettier": [
      "error",
      {
        "endOfLine": "auto"
      }
    ]
  }
}
{
  "printWidth": 80,
  "singleQuote": true,
  "semi": true,
  "tabWidth": 2,
  "trailingComma": "all",
  "endOfLine": "auto"
}
These rules will result in lines over 80 characters long being sensibly wrapped, double quotes being automatically converted into single quotes, semicolons being automatically added to the end of statements, indentations automatically being set to two spaces, and trailing commas being automatically added wherever possible to items such as arrays on multiple lines.

Figure 2.12 – Visual Studio Code Prettier extension
Figure 2.13 – Settings for Prettier to format on save
So, that's Prettier set up. Whenever we save a file in Visual Studio Code, it will be automatically formatted.
Once Prettier has been installed, the following error may appear on the React import:
Figure 2.14 – Error with React once Prettier has been installed
To resolve this, run the following command:
> npm install
Once the command has finished running, the problem will be resolved.
Some of the files may not be formatted as per our Prettier settings. Start the frontend by running the following command:
> npm start
Some errors will appear in the browser:
Figure 2.15 – Prettier error
To resolve these errors, simply go to each problem file and press Ctrl + S to save it. Each file will then be formatted as per our rules.
To quickly recap, we installed Prettier to automatically format our frontend code with the eslint-config-prettier and eslint-plugin-prettier packages to make it play nicely with ESLint. The formatting can be configured in a file called .prettierrc.
In this chapter, we created our projects for the Q&A app that we are going to build throughout this book. We created the backend using the Web API ASP.NET Core template and the frontend using Create React App. We included TypeScript so that our frontend code is strongly typed, which will help us catch problems earlier and help Visual Studio Code provide a better development experience.
We added linting to our frontend code to drive quality and consistency into our code base. ESLint is our linter and its rules are configured in a file called .eslintrc.json. We also added Prettier to our frontend code, which automatically formats our code. This is really helpful in code reviews. We then configured the formatting rules in a .prettierrc file and used eslint-config-prettier to stop ESLint conflicting with Prettier.
So, we now have two separate projects for the frontend and backend, unlike what we had with the SPA template. This makes sense, mainly because we'll be using Visual Studio to develop the backend and Visual Studio Code to develop the frontend. So, there isn't any need to start both the frontend and backend together from within Visual Studio.
In the next chapter, we are going to start building the frontend in React and TypeScript.
Have a go at answering the following questions to test what you have learned in this chapter:
The following are some useful links for learning more about the topics that were covered in this chapter:
In this section, we will build the frontend of our Q&A app using React and TypeScript. We will learn different approaches to styling, how to implement client-side routes, how to implement forms efficiently, and also how to manage complex state.
This section comprises the following chapters:
In this chapter, we will start to build the Q&A React frontend with TypeScript by creating a function-based component that shows the home page in the app. This will show the most recent questions being asked in a list. As part of this, we'll take the time to understand strict mode and JSX. We'll then move on and create more components using props to pass data between them. At the end of the chapter, we'll start to understand component state and how it can make components interactive, along with events.
We'll cover the following topics in this chapter:
Let's get started!
We will need the following tools in this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. In order to restore code from a chapter, you can download the source code repository and open the relevant folder in the relevant editor. If the code is frontend code, then you can use npm install in the Terminal to restore the dependencies.
Check out the following video to see the code in action: https://bit.ly/3mzfoSp.
In this section, we're going to understand JSX, which we briefly touched on in Chapter 1, Understanding the ASP.NET 5 React Template. We already know that JSX isn't valid JavaScript and that we need a preprocessor step to convert it into JavaScript. We are going to use the Babel REPL to play with JSX to get an understanding of how it maps to JavaScript by carrying out the following steps:
<span>Q and A</span>
The following appears in the right-hand pane, which is what our JSX has compiled down to:
React.createElement("span", null, "Q and A");
<header><span>Q and A</span></header>
React.createElement(
"header",
null,
React.createElement(
"span",
null,
"Q and A"
)
);
Note that the format of the code snippet will be slightly different to the format shown in the Babel REPL. The preceding snippet is more readable and allows us to clearly see the nested React.createElement statements.
<header><a href="/">Q and A</a></header>
React.createElement(
"header",
null,
React.createElement(
"a",
{ href: "/" },
"Q and A"
)
);
var appName = "Q and A";
<header><a href="/">{appName}</a></header>
We can see that this compiles to the following with the JavaScript code:
var appName = "Q and A";
React.createElement(
"header",
null,
React.createElement(
"a",
{ href: "/" },
appName
)
);
So, the appName variable is declared in the first statement, exactly how we defined it, and is passed in as the children parameter in the nested React.createElement call.
const appName = "Q and A";
<header><a href="/">{appName + " app"}</a></header>
This compiles down to the following:
var appName = "Q and A";
React.createElement(
"header",
null,
React.createElement(
"a",
{ href: "/" },
appName + " app"
)
);
So, JSX can be thought of as HTML with JavaScript mixed in using curly braces. This makes it incredibly powerful since regular JavaScript can be used to conditionally render elements, as well as render elements in a loop.
Now that we have an understanding of JSX, we are going to learn about React strict mode in the next section.
React strict mode helps us write better React components by carrying out certain checks. This includes checks on class component life cycle methods.
React components can either be implemented using a class or a function. Class components have special methods called life cycle methods that can execute logic at certain times in the component's life cycle.
Strict mode checks that the life cycle methods will function correctly in React concurrent mode.
Important Note
Strict mode checks life cycle methods in third-party libraries, as well as the life cycle methods we have written. So, even if we build our app using function components, we may still get warnings about problematic life cycle methods.
Strict mode checks also warn about usage of old APIs, such as the old context API. We will learn about the recommended context API in Chapter 12, Interacting with RESTful APIs.
The last category of checks that strict mode performs are checks for unexpected side effects. Memory leaks and invalid application state are also covered in these checks.
Important Note
Strict mode can be turned on by using a StrictMode component from React. Create React App has already enabled strict mode for the entirety of our app in index.tsx:
ReactDOM.render(
<React.StrictMode>
<App />
</React.StrictMode>,
document.getElementById('root')
);
The StrictMode component is wrapped around all the React components in the component tree that will be checked. So, the StrictMode component is usually placed right at the top of the component tree.
Let's temporarily add usage of an old API to App.tsx. If the frontend project isn't open in Visual Studio Code, open it and carry out the following steps:
class ProblemComponent extends React.Component {
render() {
return <div ref="div" />;
}
}
This is a class component that uses an old refs API in React. Ref is short for reference, but this feature is more often referred to as refs within the React community. Don't worry about fully understanding the syntax of this component – the key point is that it uses an API that isn't recommended.
Important Note
A React ref is a feature that allows us to access the DOM node. More information on React refs can be found at https://reactjs.org/docs/refs-and-the-dom.html.
<div className="App">
<header className="App-header">
<ProblemComponent />
…
</header>
</div>
npm start

Figure 3.1 – Strict mode warning
Strict mode has output a warning to the console about an old API being used.
Now that we have a good understanding of strict mode, we are going to start creating the components for the home page in our app.
In this section, we are going to start by creating a component for the header of our app, which will contain our app name and the ability to search for questions. Then, we'll implement some components so that we can start to build the home page of the app, along with some mock data.
We can create a basic Header component and reference it within our App component by carrying out the following steps:
import React from 'react';
We need to import React because, as we learned at the start of this chapter, JSX is transpiled into JavaScript React.createElement statements. So, without React, these statements will error out.
export const Header = () => <div>header</div>;
Congratulations! We have implemented our first function-based React component!
The preceding component is actually an arrow function that is assigned to the Header variable.
Important Note
Notice that there are no curly braces or a return keyword. Instead, we just define the JSX that the function should return directly after the fat arrow. This is called an implicit return.
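As a quick illustration outside of React, here's a minimal sketch (with hypothetical function names) comparing an explicit return to an implicit one:

```typescript
// Explicit return: curly braces and the return keyword are required.
const addExplicit = (a: number, b: number): number => {
  return a + b;
};

// Implicit return: the expression after the fat arrow is returned
// directly, just like the JSX in our Header component.
const addImplicit = (a: number, b: number): number => a + b;

console.log(addExplicit(2, 3)); // 5
console.log(addImplicit(2, 3)); // 5
```

Both functions behave identically; the implicit form simply saves a few characters.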
We use the const keyword to declare and initialize the Header variable.
Important Note
We can now use the Header component within the App component.
import { Header } from './Header';
import React from 'react';
import './App.css';
import { Header } from './Header';
function App() {
return (
<div className="App">
<Header />
</div>
);
};
export default App;
Figure 3.2 – Header component
Congratulations again – we have just consumed our first React component!
So, the arrow function syntax is a really nice way of implementing function-based components. The implicit return feature reduces the number of characters we need to type in. We'll use arrow functions with implicit returns heavily throughout this book.
We're going to work on the Header component a little more so that it eventually looks as follows:
Figure 3.3 – All the elements in the Header component
So, the Header component will contain the app name, which will be Q & A, a search input, and a Sign In link.
With the app still running, carry out the following steps to modify the Header component:
export const Header = () => (
<div>
<a href="./">Q & A</a>
</div>
);
Notice that the implicit return statement containing the JSX is now in parentheses.
Important Note
When an implicit return statement is on multiple lines, parentheses are required. When an implicit return is on just a single line, we can get away without the parentheses.
Prettier automatically adds parentheses to an implicit return if they are needed, so we don't need to worry about remembering this rule.
<div>
<a href="./">Q & A</a>
<input type="text" placeholder="Search..." />
</div>
<div>
<a href="./">Q & A</a>
<input type="text" placeholder="Search..." />
<a href="./signin"><span>Sign In</span></a>
</div>
Important Note
user.svg was in the starter project for this chapter. You can download it from https://github.com/PacktPublishing/NET-5-and-React-17---Second-Edition/blob/master/chapter-03/start/frontend/src/user.svg if you didn't start this chapter with the starter project.
import React from 'react';
import user from './user.svg';
export const UserIcon = () => (
<img src={user} alt="User" width="12px" />
);
Here, we have created a component called UserIcon that renders an img tag, with the src attribute set to the svg file we imported from user.svg.
import { UserIcon } from './Icons';
export const Header = () => (
<div>
<a href="./">Q & A</a>
<input type="text" placeholder="Search..." />
<a href="./signin">
<UserIcon />
<span>Sign In</span>
</a>
</div>
);
Figure 3.4 – Updated Header component
Our header doesn't look great, but we can see the elements in the Header component we just created. We'll style our Header component in the next chapter, Chapter 4, Styling React Components with Emotion.
Let's create another component to get more familiar with this process. This time, we'll create a component for the home page by carrying out the following steps:
import React from 'react';
export const HomePage = () => (
<div>
<div>
<h2>Unanswered Questions</h2>
<button>Ask a question</button>
</div>
</div>
);
Our home page simply consists of a title containing the text, Unanswered Questions, and a button to submit a question.
import { HomePage } from './HomePage';
<div className="App">
<Header />
<HomePage />
</div>
Figure 3.5 – Page title with the Ask a question button
We have made a good start on the HomePage component. In the next section, we will create some mock data that will be used within it.
We desperately need some data so that we can develop our frontend. In this section, we'll create some mock data in our frontend. We will also create a function that components will call to get data. Eventually, this function will call our real ASP.NET Core backend. Follow these steps:
export interface QuestionData {
questionId: number;
title: string;
content: string;
userName: string;
created: Date;
}
Before moving on, let's understand the code we have just entered since we have just written some TypeScript.
Important Note
An interface is a type that defines the structure for an object, including all its properties and methods. Interfaces don't exist in JavaScript, so they are purely used by the TypeScript compiler during the type checking process. We create an interface with the interface keyword, followed by its name, followed by the properties and methods that make up the interface in curly braces. More information can be found at https://www.typescriptlang.org/docs/handbook/interfaces.html.
So, our interface is called QuestionData and it defines the structure of the questions we expect to be working with. We have exported the interface so that it can be used throughout our app when we interact with question data.
Notice what appear to be types after the property names in the interface. These are called type annotations, which are a TypeScript feature that doesn't exist in JavaScript.
Important Note
Type annotations let us declare variables, properties, and function parameters with specific types. This allows the TypeScript compiler to check that the code adheres to these types. In short, type annotations allow TypeScript to catch bugs where our code is using the wrong type much earlier than if we were writing our code in JavaScript.
Notice that we have specified that the created property has a Date type.
Important Note
The Date type is a special type in TypeScript that represents the Date JavaScript object. This Date object represents a single moment in time and is specified as the number of milliseconds since midnight on January 1, 1970, UTC. More information can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date.
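To make this concrete, here's a minimal sketch using a hypothetical ArticleData interface with the same kind of type annotations, including a Date property:

```typescript
// A hypothetical interface; the compiler checks object literals
// against it, so a missing or misspelled property is a compile error.
interface ArticleData {
  articleId: number;
  title: string;
  created: Date;
}

const article: ArticleData = {
  articleId: 1,
  title: 'Why should I learn TypeScript?',
  created: new Date(),
};

// At runtime, created is a regular JavaScript Date object.
console.log(article.created instanceof Date); // true
```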
export interface AnswerData {
answerId: number;
content: string;
userName: string;
created: Date;
}
export interface QuestionData {
questionId: number;
title: string;
content: string;
userName: string;
created: Date;
answers: AnswerData[];
}
Notice the square brackets in the type annotation for the answers property.
Important Note
Square brackets after a type denote an array of the type. More information can be found at https://www.typescriptlang.org/docs/handbook/basic-types.html#array.
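As a brief aside, the square-bracket form is equivalent to the generic Array type, as this small sketch shows:

```typescript
// These two annotations describe the same thing: an array of a type.
const tags: string[] = ['react', 'typescript'];
const ids: Array<number> = [1, 2, 3];

console.log(tags.length + ids.length); // 5
```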
const questions: QuestionData[] = [
  {
    questionId: 1,
    title: 'Why should I learn TypeScript?',
    content:
      'TypeScript seems to be getting popular so I wondered whether it is worth my time learning it? What benefits does it give over JavaScript?',
    userName: 'Bob',
    created: new Date(),
    answers: [
      {
        answerId: 1,
        content: 'To catch problems earlier speeding up your developments',
        userName: 'Jane',
        created: new Date(),
      },
      {
        answerId: 2,
        content:
          'So, that you can use the JavaScript features of tomorrow, today',
        userName: 'Fred',
        created: new Date(),
      },
    ],
  },
  {
    questionId: 2,
    title: 'Which state management tool should I use?',
    content:
      'There seem to be a fair few state management tools around for React - React, Unstated, ... Which one should I use?',
    userName: 'Bob',
    created: new Date(),
    answers: [],
  },
];
Notice that we typed out our questions variable, which contains the array of the QuestionData interface we have just created. If we miss a property out or misspell it, the TypeScript compiler will complain.
export const getUnansweredQuestions = (): QuestionData[] => {
  return questions.filter((q) => q.answers.length === 0);
};
This function uses the array.filter method to return the question array items we have just created that contain no answers.
Important Note
The array.filter method executes the function passed into it for each array item and creates a new array containing all the elements for which the function returns a truthy value. A truthy value is any value other than false, 0, "", null, undefined, or NaN. More information can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter.
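Here's a minimal sketch of filter in isolation, using hypothetical data:

```typescript
// Hypothetical answer counts for four questions.
const answerCounts = [2, 0, 3, 0];

// filter keeps only the items for which the function returns truthy;
// here, the counts that are exactly 0.
const unansweredCounts = answerCounts.filter((count) => count === 0);

console.log(unansweredCounts.length); // 2
console.log(answerCounts.length); // 4 – the original array is unchanged
```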
Notice that we defined the return type, QuestionData[], for the function after the function parameters.
In the next section, we are going to use the getUnansweredQuestions function to provide the home page with data.
Components can have properties that allow consumers to pass parameters into them, just like when we pass parameters into a JavaScript function. React function components accept a single parameter named props, which holds its properties. The word props is short for properties.
In this section, we'll learn all about how to implement strongly typed props, including optional and default props. Then, we'll implement the rest of the home page to assist in our learning.
We are going to implement some child components that the HomePage component will use. We will pass the unanswered questions data to the child components via props.
Let's go through the following steps to implement the QuestionList component:
import React from 'react';
import { QuestionData } from './QuestionsData';
interface Props {
data: QuestionData[];
}
We have called the props interface Props and it contains a single property to hold an array of questions.
export const QuestionList = (props: Props) => <ul></ul>;
Notice the parameter, props, in the function component. We have given it a Props type with a type annotation. This means we can pass a data prop into QuestionList when we reference it in JSX.
export const QuestionList = (props: Props) => (
<ul>
{props.data.map((question) => (
<li key={question.questionId}>
</li>
))}
</ul>
);
We are using the map method on the data array to iterate through the data that's been passed into the component.
Important Note
map is a standard method that is available in a JavaScript array. The method iterates through the items in the array, invoking the function that's passed into it for each array item. The function is expected to return an item that will form a new array. In summary, it is a way of mapping an array to a new array. More information can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map.
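As a standalone sketch (with hypothetical data), this is all map does:

```typescript
const questionIds = [1, 2, 3];

// map invokes the function for each item and builds a new array from
// the returned values – here, mapping numbers to strings.
const listItems = questionIds.map((id) => `<li>Question ${id}</li>`);

console.log(listItems[0]); // "<li>Question 1</li>"
console.log(listItems.length); // 3
```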
So, we iterate through the questions that are passed into QuestionList and render a li HTML element for each array item.
Notice the key prop we pass into the li element.
Important Note
The key prop helps React detect when the element changes or is added or removed. When we output content in a loop, in React, it is good practice to apply this prop and set it to a unique value within the loop. This helps React distinguish it from the other elements during the rendering process. If we don't provide a key prop, React will make unnecessary changes to the DOM that can impact performance. More information can be found at https://reactjs.org/docs/lists-and-keys.html.
export const QuestionList = ({ data }: Props) => (
<ul>
{data.map((question) => (
<li key={question.questionId} >
</li>
))}
</ul>
);
Important Note
Destructuring is a special syntax that allows us to unpack objects or arrays into variables. More information on destructuring can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment.
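Here's a minimal sketch of both kinds of destructuring, using hypothetical variable names:

```typescript
// Object destructuring unpacks properties by name – this is what
// ({ data }: Props) does in the component's parameter list.
const props = { data: ['q1', 'q2'], loading: false };
const { data, loading } = props;

// Array destructuring unpacks by position.
const [first, second] = ['questions', 'setQuestions'];

console.log(data.length); // 2
console.log(loading); // false
console.log(first, second); // "questions" "setQuestions"
```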
Notice that we directly reference the data variable in the JSX and not through the props variable, like we did in the previous example. This is a nice pattern to use, particularly when there are more props.
Before we can complete the QuestionList component, we must create its child component, Question, which we'll do next.
Follow these steps to implement the Question component:
import React from 'react';
import { QuestionData } from './QuestionsData';
interface Props {
data: QuestionData;
}
export const Question = ({ data }: Props) => (
  <div>
    <div>{data.title}</div>
    <div>
      {`Asked by ${data.userName} on ${data.created.toLocaleDateString()} ${data.created.toLocaleTimeString()}`}
    </div>
  </div>
);
So, we are rendering the question title, who asked the question, and when it was asked.
Notice that we are using the toLocaleDateString and toLocaleTimeString functions on the data.created Date object to output when the question was asked.
Important Note
toLocaleDateString and toLocaleTimeString are standard methods on the JavaScript Date object that format the date and time as strings according to the browser's locale.
That completes our Question component nicely.
Now, we can wire up the components we have just created using our props so that we get the unanswered questions rendered on the home page. Follow these steps to do so:
import { Question } from './Question';
{data.map((question) => (
<li key={question.questionId}>
<Question data={question} />
</li>
))}
import { QuestionList } from './QuestionList';
import { getUnansweredQuestions } from './QuestionsData';
<div>
<div>
<h2>Unanswered Questions</h2>
<button>Ask a question</button>
</div>
<QuestionList data={getUnansweredQuestions()} />
</div>
Notice that we pass the array of questions into the data prop by calling the getUnansweredQuestions function we created and imported earlier in this chapter.
Figure 3.6 – Unanswered questions
If we had more than one unanswered question in our mock data, they would all be output on our home page.
We are going to finish this section on props by understanding optional and default props, which can make our components more flexible for consumers.
A prop can be optional so that the consumer doesn't necessarily have to pass it into a component. For example, we could have an optional prop in the Question component that allows a consumer to change whether the content of the question is rendered or not. We'll do this now:
export const Question = ({ data }: Props) => (
  <div>
    <div>{data.title}</div>
    <div>
      {data.content.length > 50
        ? `${data.content.substring(0, 50)}...`
        : data.content}
    </div>
    <div>
      {`Asked by ${data.userName} on ${data.created.toLocaleDateString()} ${data.created.toLocaleTimeString()}`}
    </div>
  </div>
);
Here, we have used a JavaScript ternary operator to truncate the content if it is longer than 50 characters.
Important Note
A JavaScript ternary is a short way of implementing a conditional statement that results in one of two branches of logic being executed. The statement contains three operands separated by a question mark (?) and a colon (:). The first operand is a condition, the second is what is returned if the condition is true, and the third is what is returned if the condition is false. The ternary operator is a popular way of implementing conditional logic in JSX. More information can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Conditional_Operator.
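Outside of JSX, the same truncation logic can be sketched as plain TypeScript (the sample string is hypothetical):

```typescript
const content =
  'TypeScript seems to be getting popular so I wondered whether it is worth learning';

// condition ? value-if-true : value-if-false
const truncated =
  content.length > 50 ? `${content.substring(0, 50)}...` : content;

console.log(truncated.slice(-3)); // "..."
console.log(truncated.length); // 53 – 50 characters plus the ellipsis
```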
We have also used template literals and interpolation in this code snippet.
Important Note
JavaScript template literals are strings contained in backticks (`). A template literal can include expressions that inject data into the string. Expressions are contained in curly brackets after a dollar sign. This is often referred to as interpolation. More information can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals.
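A small sketch of interpolation (with hypothetical values):

```typescript
const userName = 'Bob';
const created = new Date(2021, 0, 1); // January 1, 2021

// Expressions inside ${ } are evaluated and injected into the string.
const message = `Asked by ${userName} in ${created.getFullYear()}`;

console.log(message); // "Asked by Bob in 2021"
```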
interface Props {
data: QuestionData;
showContent: boolean;
}
export const Question = ({ data, showContent }: Props) => (
  <div>
    <div>{data.title}</div>
    {showContent && (
      <div>
        {data.content.length > 50
          ? `${data.content.substring(0, 50)}...`
          : data.content}
      </div>
    )}
    <div>
      {`Asked by ${data.userName} on ${data.created.toLocaleDateString()} ${data.created.toLocaleTimeString()}`}
    </div>
  </div>
);
We have just changed the component so that it only renders the question's content if the showContent prop is true using the short-circuit operator, &&.
Important Note
The short-circuit operator (&&) is another way of expressing conditional logic. It has two operands, with the first being the condition and the second being the logic to execute if the condition evaluates to true. It is often used in JSX to conditionally render an element if the condition is true.
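A minimal sketch of the && operator outside JSX (with hypothetical strings):

```typescript
const showContent: boolean = true;
const hideContent: boolean = false;
const content = 'Some question content';

// The second operand is only evaluated (and returned) when the first
// operand is truthy; otherwise the falsy first operand is returned,
// which React simply doesn't render.
const rendered = showContent && `<div>${content}</div>`;
const hidden = hideContent && `<div>${content}</div>`;

console.log(rendered); // "<div>Some question content</div>"
console.log(hidden); // false
```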

Figure 3.7 – TypeScript compilation error on the Question component
This is because showContent is a required prop in the Question component and we haven't passed it in. It can be a pain to always have to update consuming components when a prop is added. Couldn't showContent just default to false if we don't pass it in? Well, this is exactly what we are going to do next.
interface Props {
data: QuestionData;
showContent?: boolean;
}
Important Note
Optional properties are actually a TypeScript feature. Function parameters can also be made optional by putting a question mark at the end of the parameter name before the type annotation; for example, (duration?:number).
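Here's a minimal sketch of an optional parameter alongside a default value (the function names are hypothetical):

```typescript
// A question mark makes the parameter optional, just like an optional
// prop; callers can omit it, in which case its value is undefined.
const formatDuration = (duration?: number): string =>
  duration === undefined ? 'no duration' : `${duration}ms`;

// A default value provides a fallback when the argument is omitted.
const greet = (name = 'anonymous'): string => `Hello, ${name}`;

console.log(formatDuration()); // "no duration"
console.log(formatDuration(500)); // "500ms"
console.log(greet()); // "Hello, anonymous"
```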
Now, the compilation error in QuestionList.tsx has gone away and the app will render the unanswered questions without their content.
What if we wanted to show the question's content by default and allow consumers to suppress this if required? We'll do just this using two different approaches to default props.
export const Question = ({ data, showContent }: Props) => (
...
);
Question.defaultProps = {
showContent: true,
};
If we look at the running app, we'll see the question content being rendered as expected:

Figure 3.8 – Unanswered questions with content
export const Question = ({ data, showContent = true }: Props) => ( ... )
This arguably makes the code more readable because the default is right next to its parameter. This means our eyes don't need to scan right down to the bottom of the function to see that there is a default value for a parameter.
So, our home page is looking good in terms of code structure. However, there are a couple of components in HomePage.tsx that can be extracted so that we can reuse them as we develop the rest of the app. We'll do this next.
The children prop is a magical prop that all React components automatically have. It can be used to render child elements. It's magical because it's automatically there, without us having to do anything, as well as being extremely powerful. In the following steps, we'll use the children prop when creating Page and PageTitle components:
import React from 'react';
interface Props {
children: React.ReactNode;
}
export const PageTitle = ({
children,
}: Props) => <h2>{children}</h2>;
We define the children prop with a type annotation of ReactNode. This will allow us to use a wide range of child elements, such as other React components and plain text.
We have referenced the children prop inside the h2 element. This means that the child elements that consuming components specify will be placed inside the h2 element.
import React from 'react';
import { PageTitle } from './PageTitle';
interface Props {
title?: string;
children: React.ReactNode;
}
export const Page = ({ title, children }: Props) => (
<div>
{title && <PageTitle>{title}</PageTitle>}
{children}
</div>
);
Here, the component takes in an optional title prop and renders this inside the PageTitle component.
The component also takes in a children prop. In the consuming component, the content nested within the Page component will be rendered where we have just placed the children prop.
import { Page } from './Page';
import { PageTitle } from './PageTitle';
export const HomePage = () => (
<Page>
<div>
<PageTitle>Unanswered Questions</PageTitle>
<button>Ask a question</button>
</div>
<QuestionList data={getUnansweredQuestions()} />
</Page>
);
Notice that we aren't taking advantage of the title prop in the Page component in HomePage. This is because this page needs to have the Ask a question button to the right of the title, so we are rendering this within HomePage. However, other pages that we implement will take advantage of the title prop we have implemented.
So, the children prop allows a consumer to render custom content within the component. This gives the component flexibility and makes it highly reusable, as we'll discover when we use the Page component throughout our app. Something you may not know, however, is that the children prop is actually a function prop. We'll learn about function props in the next section.
Props can consist of primitive types, such as the boolean showContent prop we implemented in the Question component. Props can also be objects and arrays, as we have seen with the Question and QuestionList components. This in itself is powerful. However, props can also be functions, which allows us to implement components that are extremely flexible.
Using the following steps, we are going to implement a function prop on the QuestionList component that allows the consumer to render the question as an alternative to QuestionList rendering it:
interface Props {
data: QuestionData[];
renderItem?: (item: QuestionData) => JSX.Element;
}
export const QuestionList = ({ data, renderItem }: Props) => …
{data.map((question) => (
  <li key={question.questionId}>
    {renderItem ? renderItem(question) : <Question data={question} />}
  </li>
))}
Important Note
Ternaries will execute the second operand if the condition evaluates to truthy and the third operand if it evaluates to falsy; conditions in if statements behave the same way. true is only one of many truthy values. In fact, false, 0, "", null, undefined, and NaN are the falsy values, and everything else is truthy.
So, renderItem will be truthy and will execute if it has been passed as a prop.
<QuestionList
data={getUnansweredQuestions()}
renderItem={(question) => <div>{question.title}</div>}
/>
If we look at the running app, we'll see this effect:

Figure 3.9 – Custom rendered question
Important Note
The pattern of implementing a function prop to allow consumers to render an internal piece of the component is often referred to as a render prop. It makes the component extremely flexible and usable in many different scenarios.
We can already see that function props are extremely powerful. We'll use these again when we cover handling events later in this chapter. Before we look at events, however, we are going to cover another fundamental part of a component, which is state.
Components can use what is called state to have the component re-render when a variable in the component changes. This is crucial for implementing interactive components. For example, when filling out a form, if there is a problem with a field value, we can use state to render information about that problem. State can also be used to implement behavior when external things interact with a component, such as a web API. We are going to do this in this section after changing the getUnansweredQuestions function in order to simulate a web API call.
Changing getUnansweredQuestions so that it's asynchronous
The getUnansweredQuestions function doesn't simulate a web API call very well because it isn't asynchronous. In this section, we are going to change this. Follow these steps to do so:
const wait = (ms: number): Promise<void> => {
return new Promise(resolve => setTimeout(resolve, ms));
};
This function will wait asynchronously for the number of milliseconds we pass into it. The function uses the native JavaScript setTimeout function internally so that the promise resolves after the specified number of milliseconds. Notice that the function returns a Promise object.
Important Note
A promise is a JavaScript object that represents the eventual completion (or failure) of an asynchronous operation and its resulting value. The Promise type in TypeScript is like the Task type in .NET. More information can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise.
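As a standalone sketch (with a hypothetical fetchCount function), a promise wraps work that completes later:

```typescript
// A promise that resolves with a value once some simulated work
// finishes; setTimeout stands in for a real web API call.
const fetchCount = (): Promise<number> =>
  new Promise((resolve) => setTimeout(() => resolve(42), 10));

const pending = fetchCount();
console.log(pending instanceof Promise); // true

// Consumers chain on the promise to run code once it settles.
pending.then((count) => console.log(count)); // eventually logs 42
```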
Notice <void> after the Promise type in the return type annotation. Angle brackets after a TypeScript type indicate that this is a generic type.
Important Note
Generic types are a mechanism for allowing the consumer's own type to be used in the internal implementation of the generic type. The angle brackets allow the consumer type to be passed in as a parameter. Generics in TypeScript is very much like generics in .NET. More information can be found at https://www.typescriptlang.org/docs/handbook/generics.html.
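A quick sketch of a generic function (the function name is hypothetical):

```typescript
// T is a type parameter that flows from the argument to the return
// type, much like generics in .NET.
function firstOrNull<T>(items: T[]): T | null {
  return items.length === 0 ? null : items[0];
}

const n = firstOrNull([1, 2, 3]); // T is inferred as number
const s = firstOrNull<string>([]); // T is explicitly string

console.log(n); // 1
console.log(s); // null
```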
We are passing a void type into the generic Promise type. But what is the void type?
The void type is another TypeScript-specific type that is used to represent a non-returning function. So, void in TypeScript is like void in .NET.
export const getUnansweredQuestions = async (): Promise<QuestionData[]> => {
  await wait(500);
  return questions.filter((q) => q.answers.length === 0);
};
Notice the await keyword before the call to the wait function and the async keyword before the function signature.
async and await are two JavaScript keywords we can use to make asynchronous code read almost identically to synchronous code. await stops the next line from executing until the asynchronous statement has completed, while async simply indicates that the function contains asynchronous statements. So, these keywords are very much like async and await in .NET.
We return Promise<QuestionData[]> rather than QuestionData[] because the function doesn't return the questions straight away. Instead, it returns the questions eventually.

Figure 3.10 – Type error on the data prop
This is because the return type of the function has changed and no longer matches what we defined in the QuestionList props interface.
{/* <QuestionList data={getUnansweredQuestions()} /> */}
Important Note
Lines of code can be commented out in Visual Studio Code by highlighting the lines and pressing Ctrl + / (forward slash).
Eventually, we're going to change HomePage so that we can store the questions in the local state and then use this value in the local state to pass to QuestionList. To do this, we need to invoke getUnansweredQuestions when the component is first rendered and set the value that's returned to state. We'll do this in the next section.
So, how do we execute logic when a function-based component is rendered? Well, we can use a useEffect hook in React, which is what we are going to do in the following steps:
export const HomePage = () => {
return (
<Page>
...
</Page>
);
};
export const HomePage = () => {
React.useEffect(() => {
console.log('first rendered');
}, []);
return (
...
);
};
Important Note
The useEffect hook is a function that allows a side effect, such as fetching data, to be performed in a component. The function takes in two parameters, with the first parameter being a function to execute. The second parameter determines when the function in the first parameter should be executed. This is defined in an array of variables that, if changed, results in the first parameter function being executed. If the array is empty, then the function is only executed once the component has been rendered for the first time. More information can be found at https://reactjs.org/docs/hooks-effect.html.
So, we output first rendered into the console when the HomePage component is first rendered.
Figure 3.11 – useEffect being executed
So, our code is executed when the component is first rendered, which is great.
Note that we shouldn't worry about the ESLint warnings about the unused QuestionList component and getUnansweredQuestions function. These will be used again when we uncomment the reference to the QuestionList component.
The time has come to implement state in the HomePage component so that we can store any unanswered questions. But how do we do this in function-based components? Well, the answer is to use another React hook called useState. Carry out the following steps in HomePage.tsx to do this:
import {
getUnansweredQuestions,
QuestionData
} from './QuestionsData';
const [
questions,
setQuestions,
] = React.useState<QuestionData[]>([]);
React.useEffect(() => {
console.log('first rendered');
}, []);
Important Note
The useState function returns an array containing the state variable in the first element and a function to set the state in the second element. The initial value of the state variable is passed into the function as a parameter. The TypeScript type for the state variable can be passed to the function as a generic type parameter. More information can be found at https://reactjs.org/docs/hooks-state.html.
Notice that we have destructured the array that's returned from useState into a state variable called questions, which is initially an empty array, and a function to set the state called setQuestions. We can destructure arrays to unpack their contents, just like we did previously with objects.
So, the type of the questions state variable is an array of QuestionData.
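Array destructuring is plain JavaScript, so we can try it outside React. The snippet below is a standalone illustration; the tuple type and the stand-in setter are simplified assumptions, not React's real useState types:

```typescript
// A standalone illustration of array destructuring: unpack a
// [value, setter] pair by position, as we do with useState's result.
// The types here are simplified stand-ins, not React's real types.
type QuestionTitles = string[];

const statePair: [QuestionTitles, (next: QuestionTitles) => void] = [
  [],                           // the state value, initially an empty array
  (next: QuestionTitles) => {}, // a stand-in for the state setter
];

// Position 0 becomes questions; position 1 becomes setQuestions
const [questions, setQuestions] = statePair;
```

The names `questions` and `setQuestions` are ours to choose; destructuring assigns purely by position in the array.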
const [
questions,
setQuestions,
] = React.useState<QuestionData[]>([]);
const [
questionsLoading,
setQuestionsLoading,
] = React.useState(true);
We have initialized this state to true because the questions are being fetched immediately in the first rendering cycle. Notice that we haven't passed a type into the generic parameter. This is because, in this case, TypeScript can cleverly infer that this is a boolean state from the default value, true, that we passed into the useState parameter.
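This inference is standard TypeScript behaviour, as this standalone snippet illustrates:

```typescript
// TypeScript infers boolean from the initial value, just as useState
// infers a boolean state from an initial value of true.
let questionsLoading = true; // inferred as boolean, no annotation needed
questionsLoading = false;    // fine: still a boolean
// questionsLoading = 'no';  // would be a compile-time error
```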
React.useEffect(() => {
const questions = await getUnansweredQuestions();
},[]);
We immediately get a compilation error:

Figure 3.12 – useEffect error
React.useEffect(async () => {
const questions = await getUnansweredQuestions();
}, []);

Figure 3.13 – Another useEffect error
Unfortunately, we can't pass an asynchronous callback into useEffect. This is because an async function always returns a promise, whereas useEffect expects its callback to return either nothing or a cleanup function.
React.useEffect(() => {
const doGetUnansweredQuestions = async () => {
const unansweredQuestions = await
getUnansweredQuestions();
};
doGetUnansweredQuestions();
}, []);
React.useEffect(() => {
const doGetUnansweredQuestions = async () => {
const unansweredQuestions = await
getUnansweredQuestions();
setQuestions(unansweredQuestions);
setQuestionsLoading(false);
};
doGetUnansweredQuestions();
}, []);
<Page>
<div ... >
...
</div>
<QuestionList data={questions} />
</Page>
If we look at the running app, we'll see that the questions are being rendered nicely again.
<Page>
<div>
...
</div>
{questionsLoading ? (
<div>Loading…</div>
) : (
<QuestionList data={questions || []} />
)}
</Page>
Here, we render a Loading... message while the questions are being fetched. The home page renders nicely again in the running app, showing the Loading... message until the questions arrive.
// React.useEffect(() => {
// ...
// }, []);
console.log('rendered');
return ...
Every time the HomePage component is rendered, we'll see a rendered message in the console:

Figure 3.14 – Rendered twice with no state changes
So, the component is rendered twice when no state is set.
In development mode, components that contain state are rendered twice when strict mode is enabled. This is so that React can detect unexpected side effects.
React.useEffect(() => {
const doGetUnansweredQuestions = async () => {
const unansweredQuestions = await
getUnansweredQuestions();
setQuestions(unansweredQuestions);
// setQuestionsLoading(false);
};
doGetUnansweredQuestions();
}, []);
The component is rendered four times:

Figure 3.15 – Rendered four times with a state change
React re-renders a component when its state changes and, because we are in strict mode, each render happens twice. So, we get a double render when the component first loads and another double render after the state change, which makes four renders in total.
React.useEffect(() => {
const doGetUnansweredQuestions = async () => {
const unansweredQuestions = await
getUnansweredQuestions();
setQuestions(unansweredQuestions);
setQuestionsLoading(false);
};
doGetUnansweredQuestions();
}, []);
The component is rendered six times:

Figure 3.16 – Rendered six times when two pieces of state are changed
So, we are starting to understand how we can use state to control what is rendered when external things such as users or a web API interact with components. A key point that we need to take away is that when we change state in a component, React will automatically re-render the component.
Important Note
Structuring a React app into a container and presentational components often allows presentation components to be used in different scenarios. Later in this book, we'll see that we can easily reuse QuestionList on other pages in our app.
In the next section, we are going to learn how to implement logic when users interact with components using events.
JavaScript events are invoked when a user interacts with a web app. For example, when a user clicks a button, a click event will be raised from that button. We can implement a JavaScript function to execute some logic when the event is raised. This function is often referred to as an event listener.
Important Note
React allows us to declaratively attach events in JSX using function props, without the need to use addEventListener and removeEventListener. In this section, we are going to implement a couple of event listeners in React.
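For comparison, here is what manual event wiring looks like in plain JavaScript. Node's EventTarget is used so the sketch runs outside a browser; in a real page, `button` would be a DOM element:

```typescript
// Manual event handling with addEventListener, which React's function
// props save us from having to write (and clean up) ourselves.
const button = new EventTarget(); // stand-in for a DOM button element
let clicks = 0;

const handleClick = () => {
  clicks += 1; // the event listener's logic
};

button.addEventListener('click', handleClick);
button.dispatchEvent(new Event('click'));

// Without React, we must remember to detach the listener ourselves:
button.removeEventListener('click', handleClick);
```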
In this section, we are going to implement an event listener on the Ask a question button in the HomePage component. Follow these steps to do so:
<button onClick={handleAskQuestionClick}>
Ask a question
</button>
Important Note
Event listeners in JSX can be attached using a function prop that is named with on before the native JavaScript event name in camel case. So, a native click event can be attached using an onClick function prop. React will automatically remove the event listener for us before the element is destroyed.
const handleAskQuestionClick = () => {
console.log('TODO - move to the AskPage');
};
return ...
Figure 3.17 – Click event
So, handling events in React is super easy! In Chapter 5, Routing with React Router, we'll finish off the implementation of the handleAskQuestionClick function and navigate to the page where a question can be asked.
In this section, we are going to handle the change event on the input element and interact with the event parameter in the event listener. Follow these steps to do so:
<input
type="text"
placeholder="Search..."
onChange={handleSearchInputChange}
/>
export const Header = () => {
const handleSearchInputChange = (
e: React.ChangeEvent<HTMLInputElement>
) => {
console.log(e.currentTarget.value);
};
return ( ... );
};
Figure 3.18 – Change event
In this section, we've learned that we can implement strongly typed event listeners, which will help us avoid making mistakes when using the event parameter. We'll finish off the implementation of the search input in Chapter 6, Working with Forms.
In this chapter, we learned that JSX compiles into nested React createElement function calls, which is what allows us to mix HTML-style elements and JavaScript.
We learned that we can create a React component using functions with strongly typed props passed in as parameters. Now, we know that a prop can be a function, which is how events are handled.
The component state is used to implement behavior when users or other external things interact with it. Due to this, we understand that a component and its children are re-rendered when the state is changed.
By completing this chapter, we understand that React can help us discover problems in our app when it is run in strict mode. We also understand that a component is double rendered in strict mode when it contains state.
In the next chapter, we are going to style the home page.
Try to answer the following questions to test your knowledge of this chapter:
interface Props {
name: string;
active: boolean;
}
How can we destructure the Props parameter and default active to true?
const [rating, setRating] = React.useState(0);
export const myComponent = ({name, active = true}: Props) => …
React.useEffect(() => {
getItems(category)
}, [category]);
The following are some useful links so that you can learn more about the topics that were covered in this chapter:
In this chapter, we will style the Q&A app we have built so far with a popular CSS-in-JS library called Emotion. We will start by understanding how you can style components with plain CSS and its drawbacks. Next, we will move on to understanding how CSS-in-JS addresses the problems that plain CSS has before installing Emotion. Finally, we will style components using Emotion's css prop before creating some reusable styled components.
We'll cover the following topics in this chapter:
The following tools are required for this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. In order to restore code from a chapter, the source code repository can be downloaded and the relevant folder can be opened in the relevant editor. If the code is frontend code, then you can use npm install in the terminal to restore the dependencies.
Check out the following video to see the Code in Action: https://bit.ly/2WALbb2
In this section, we're going to style the body, app container, and header container with regular CSS and understand the drawbacks of this approach.
We are going to use the traditional approach to style the document's body. Follow these steps to do so:
import './index.css';
To reference a CSS file in a React component, we specify the location of the file after the import statement. index.css is in the same folder as index.tsx, so the import path is ./.
body {
margin: 0;
background-color: #f7f8fa;
}
Congratulations, we have just applied some styles to our app!
> npm start
The app looks like the following:
Figure 4.1 – Styled HTML body
The background color of the app is now light gray. Leave the app running as we progress to styling our App component.
We are going to apply a CSS class to the App component in the following steps:
function App() {
return (
<div className="container">
<Header />
<HomePage />
</div>
);
}
Why is a className attribute used to reference CSS classes? Shouldn't we use the class attribute? Well, we already know that JSX compiles down to JavaScript, and since class is a keyword in JavaScript, React uses a className attribute instead. React converts className to class when it adds elements to the HTML DOM.
Important Note
The React team is currently working on allowing class attributes to be used instead of className. See https://github.com/facebook/react/issues/13525 for more information.
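Conceptually, the compiled output builds element objects with className in their props. The sketch below is a hypothetical, much-simplified model for illustration only; it is not React's actual implementation, and real React elements carry more fields:

```typescript
// Hypothetical, simplified model of what a createElement call produces.
interface ElementSketch {
  type: string;
  props: { className?: string; children?: string };
}

const createElement = (
  type: string,
  props: { className?: string },
  children: string
): ElementSketch => ({ type, props: { ...props, children } });

// <div className="container">Hello</div> compiles roughly to a call like:
const element = createElement('div', { className: 'container' }, 'Hello');
```

Because the JSX attribute ends up as an ordinary object property, React is free to name it className in JavaScript and emit class on the real DOM element.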
.container {
font-family: 'Segoe UI', 'Helvetica Neue',
sans-serif;
font-size: 16px;
color: #5c5a5a;
}
Figure 4.2 – App component styled with CSS
We see that the text content within the app has the font, size, and color we specified. Leave the app running while we apply more CSS in the next subsection.
We are going to apply a CSS class to the Header component in the following steps:
.container {
position: fixed;
box-sizing: border-box;
top: 0;
width: 100%;
display: flex;
align-items: center;
justify-content: space-between;
padding: 10px 20px;
background-color: #fff;
border-bottom: 1px solid #e3e2e2;
box-shadow: 0 3px 7px 0 rgba(110, 112, 114, 0.21);
}
This style fixes the element it is applied to at the top of the page. The elements within it will flow horizontally across the page and be positioned nicely.
import './Header.css';
<div className="container">

Figure 4.3 – Header component styled with CSS
The header styles have collided with the App styles because both CSS files define a class called container. We can reduce this risk by being careful when naming and structuring our CSS, using a convention such as BEM. For example, we could have named the CSS classes app-container and header-container so that they don't collide. However, there's still some risk of the chosen names colliding, particularly in large apps and when new members join a team.
Important Note
In the next section, we will learn how to resolve this issue with CSS modules.
CSS modules are a mechanism for scoping CSS class names. The scoping happens as a build step rather than in the browser. In fact, CSS modules are already available in our app because Create React App has configured them in webpack.
We are going to update the styles on the Header and App components to CSS modules in the following steps:
import styles from './App.module.css';
<div className={styles.container}>
This references the container class from the App CSS module.
import styles from './Header.module.css';
<div className={styles.container}>
npm start

Figure 4.4 – Styling with CSS modules
Styles from the Header component no longer leak into the App component. Notice that CSS modules have updated the class names on the elements, prefixing them with the component name and appending some random-looking characters.

Figure 4.5 – CSS modules in a head tag
This is great! CSS modules automatically scope CSS that is applied to React components without us having to be careful with the class naming.
In the next section, we are going to learn about another approach to styling React components, which is arguably more powerful.
In this section, we're going to style the App, Header, and HomePage components with a popular CSS-in-JS library called Emotion. Along the way, we will discover the benefits of CSS-in-JS over CSS modules.
With our frontend project open in Visual Studio Code, let's install Emotion into our project by carrying out the following steps:
> npm install @emotion/react @emotion/styled

Figure 4.6 – Styled components Visual Studio Code extension
Important Note
This extension was primarily developed for the Styled Components CSS-in-JS library. However, the CSS highlighting and IntelliSense work for Emotion as well.
That's Emotion installed within our project and set up nicely in Visual Studio Code.
Let's style the App component with Emotion by carrying out the following steps:
Important Note
In React 17 and beyond, it is no longer necessary to include the React import statement to render JSX, as is the case in App.tsx. However, the React import statement is still required if we want to use functions from the library, such as useState and useEffect.
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
The css function is what we'll use to style an HTML element. The comment above the import statement tells Babel to use Emotion's jsx function to transform the JSX into JavaScript, which is what enables the css prop.
Important Note
It is important to include the /** @jsxImportSource @emotion/react */ comment; otherwise, the transpilation process will error out. It is also important that this is placed right at the top of the file.
<div css={css`
font-family: 'Segoe UI', 'Helvetica Neue', sans-serif;
font-size: 16px;
color: #5c5a5a;
`}>
<Header />
<HomePage />
</div>
We put the styles in a css prop on an HTML element in what is called a tagged template literal.
Important Note
A template literal is a string enclosed by backticks (``) that can span multiple lines and can include a JavaScript expression in curly braces, prefixed with a dollar sign (${expression}). Template literals are great when we need to merge static text with variables.
A tagged template literal is a template string that is executed through a function that is specified immediately before the template literal string. The function is executed on the template literal before the string is rendered in the browser.
So, Emotion's css function is being used in a tagged template literal to render the styles defined in backticks (``) on the HTML element.
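Tagged template literals are a plain JavaScript feature we can try outside Emotion. In the sketch below, a hypothetical tag function stitches the static string parts and the interpolated values back together, which is essentially the raw material a tag such as css receives:

```typescript
// A tag function receives the static string parts and the interpolated
// values separately; this hypothetical tag simply joins them back up.
const joinTag = (
  strings: TemplateStringsArray,
  ...values: string[]
): string =>
  strings.reduce((result, part, i) => result + part + (values[i] ?? ''), '');

const gray2 = '#5c5a5a';
const style = joinTag`color: ${gray2};`;
// style is now the single string 'color: #5c5a5a;'
```

Emotion's css tag does considerably more with these parts (serializing and caching styles), but the calling mechanics are the same.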
export const gray1 = '#383737';
export const gray2 = '#5c5a5a';
export const gray3 = '#857c81';
export const gray4 = '#b9b9b9';
export const gray5 = '#e3e2e2';
export const gray6 = '#f7f8fa';
export const primary1 = '#681c41';
export const primary2 = '#824c67';
export const accent1 = '#dbb365';
export const accent2 = '#efd197';
export const fontFamily = "'Segoe UI', 'Helvetica Neue', sans-serif";
export const fontSize = '16px';
Here, we have defined six shades of gray, two shades of the primary color for our app, two shades of an accent color, as well as the font family we'll use with the standard font size.
import { fontFamily, fontSize, gray2 } from './Styles';
<div
css={css`
font-family: ${fontFamily};
font-size: ${fontSize};
color: ${gray2};
`}
>
<Header />
<HomePage />
</div>
Congratulations – we have just styled our first component with Emotion!
Let's explore the styling in the running app. This will help us understand how Emotion is applying styles:

Figure 4.7 – App component styled with Emotion
We can see that the div element we styled has a class name that starts with css and ends with the component name, with a unique name in the middle. The styles in the CSS class are the styles we defined in our component with Emotion. So, Emotion doesn't generate inline styles as we might have thought. Instead, Emotion generates styles that are held in unique CSS classes. If we look in the HTML header, we'll see the CSS class defined in a style tag:

Figure 4.8 – Emotion styles in the head tag
So, Emotion has transformed the styles into a real CSS class, which it injects into a style tag when the component renders.
We can style the Header component with Emotion by carrying out the following steps:
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
import { fontFamily, fontSize, gray1, gray2, gray5 } from './Styles';
<div
css={css`
position: fixed;
box-sizing: border-box;
top: 0;
width: 100%;
display: flex;
align-items: center;
justify-content: space-between;
padding: 10px 20px;
background-color: #fff;
border-bottom: 1px solid ${gray5};
box-shadow: 0 3px 7px 0 rgba(110, 112, 114, 0.21);
`}
>
...
</div>
In this section, we will learn how to style pseudo-classes with Emotion. We will then move on to learn how to style nested elements:
<a
href="./"
css={css`
font-size: 24px;
font-weight: bold;
color: ${gray1};
text-decoration: none;
`}
>
Q & A
</a>
Here, we are making the app name fairly big, bold, and dark gray, and also removing the underline.
<input
type="text"
placeholder="Search..."
onChange={handleSearchInputChange}
css={css`
box-sizing: border-box;
font-family: ${fontFamily};
font-size: ${fontSize};
padding: 8px 10px;
border: 1px solid ${gray5};
border-radius: 3px;
color: ${gray2};
background-color: white;
width: 200px;
height: 30px;
`}
/>
Here, we are using the standard font family and size and giving the search box a light-gray, rounded border.
<input
type="text"
placeholder="Search..."
css={css`
…
:focus {
outline-color: ${gray5};
}
`}
/>
The pseudo-class is defined by being nested within the CSS for the input. The syntax is the same as in regular CSS with a colon (:) before the pseudo-class name and its CSS properties within curly brackets.
<a
href="./signin"
css={css`
font-family: ${fontFamily};
font-size: ${fontSize};
padding: 5px 10px;
background-color: transparent;
color: ${gray2};
text-decoration: none;
cursor: pointer;
:focus {
outline-color: ${gray5};
}
`}
>
<UserIcon />
<span>Sign In</span>
</a>
The styles have added some space around the icon and the text inside the link and removed the underline from it. We have also changed the color of the line around the link when it has focus using a pseudo-class selector.
<a
href="./signin"
css={css`
…
span {
margin-left: 7px;
}
`}
>
<UserIcon />
<span>Sign In</span>
</a>
We have chosen to use a nested element selector on the anchor tag to style the span element. This is equivalent to applying the style directly on the span element, as follows:
<span
css={css`
margin-left: 7px;
`}
>
Sign In
</span>
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
<img
src={user}
alt="User"
css={css`
width: 12px;
opacity: 0.6;
`}
/>
We've moved the width from the attribute on the img tag into its CSS style. Now, the icon is a nice size and appears to be a little lighter in color.
Figure 4.9 – Fully styled header
We are getting the hang of Emotion now.
The syntax for defining the styling properties is exactly the same as defining properties in CSS, which is nice if we already know CSS well. We can even nest CSS properties in a similar manner to how we can do this in SCSS.
The remaining component to style is HomePage – we'll look at that next.
In this section, we are going to learn how to create reusable styled components while styling the HomePage component. Let's carry out the following steps:
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
export const Page = ({ title, children }: Props) => (
<div
css={css`
margin: 50px auto 20px auto;
padding: 30px 20px;
max-width: 600px;
`}
>
…
</div>
);
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
<Page>
<div
css={css`
display: flex;
align-items: center;
justify-content: space-between;
`}
>
<PageTitle>Unanswered Questions</PageTitle>
<button onClick={handleAskQuestionClick}>Ask a question</button>
</div>
</Page>
This puts the page title and the Ask a question button on the same line.
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
<h2
css={css`
font-size: 15px;
font-weight: bold;
margin: 10px 0px 5px;
text-align: center;
text-transform: uppercase;
`}
>
{children}
</h2>
This reduces the size of the page title and makes it uppercase, which will make the page's content stand out more.
import styled from '@emotion/styled';
export const PrimaryButton = styled.button`
background-color: ${primary2};
border-color: ${primary2};
border-style: solid;
border-radius: 5px;
font-family: ${fontFamily};
font-size: ${fontSize};
padding: 5px 10px;
color: white;
cursor: pointer;
:hover {
background-color: ${primary1};
}
:focus {
outline-color: ${primary2};
}
:disabled {
opacity: 0.5;
cursor: not-allowed;
}
`;
Here, we've created a styled component in Emotion by using a tagged template literal.
Important Note
A tagged template literal is a template literal to be parsed with a function. The template literal is contained in backticks (``) and the parsing function is placed immediately before it. More information can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals.
In a styled component, the tag function before the backticks (``) is a property of Emotion's styled object. The property is named after the HTML element that will be created and styled with the supplied styles, which is button in this example.
So, this styled component creates a flat, slightly rounded button with our chosen primary color.
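To build intuition, styled.button can be pictured as a tag function that records the element name alongside the CSS text. The sketch below is illustrative only and is not Emotion's real implementation, which creates an actual React component:

```typescript
// Hypothetical model of a styled factory: the button property is a tag
// function that captures the CSS destined for a button element.
const styledSketch = {
  button: (strings: TemplateStringsArray, ...values: string[]) => {
    const cssText = strings.reduce(
      (acc, part, i) => acc + part + (values[i] ?? ''),
      ''
    );
    return { element: 'button', cssText };
  },
};

const primary2 = '#824c67';
const PrimaryButtonSketch = styledSketch.button`
  background-color: ${primary2};
`;
```

The real styled.button returns a component we can render as `<PrimaryButton>...</PrimaryButton>`, with the captured CSS scoped to it.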
import { PrimaryButton } from './Styles';
<Page>
<div ... >
<PageTitle>…</PageTitle>
<PrimaryButton onClick={handleAskQuestionClick}>
Ask a question
</PrimaryButton>
</div>
</Page>

Figure 4.10 – Styled page title and primary button
There is still work to do on the home page's styling, such as rendering the list of unanswered questions. We'll do this in the next section.
In this section, we are going to complete the styling on the home page.
Let's go through the following steps to style the QuestionList component:
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react;
import { accent2, gray5 } from './Styles';
<ul
css={css`
list-style: none;
margin: 10px 0 0 0;
padding: 0px 20px;
background-color: #fff;
border-bottom-left-radius: 4px;
border-bottom-right-radius: 4px;
border-top: 3px solid ${accent2};
box-shadow: 0 3px 5px 0 rgba(0, 0, 0, 0.16);
`}
>
…
</ul>
So, the ul element will appear without bullet points and with a rounded border. The top border will be slightly thicker and in the accent color. We've added a box shadow to make the list pop out a bit.
<ul ...
>
<li key={question.questionId}
css={css`
border-top: 1px solid ${gray5};
:first-of-type {
border-top: none;
}
`}
>
…
</li>
</ul>
The style on the list items adds a light-gray top border. This will act as a line separator between each list item.
Figure 4.11 – Styled unanswered questions container
Next, we will add styling to the questions.
Follow these steps to style the Question component:
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
import { gray2, gray3 } from './Styles';
<div
css={css`
padding: 10px 0px;
`}
>
<div>
{data.title}
</div>
…
</div>
<div
css={css`
padding: 10px 0px;
font-size: 19px;
`}
>
{data.title}
</div>
{showContent && (
<div
css={css`
padding-bottom: 10px;
font-size: 15px;
color: ${gray2};
`}
>
…
</div>
)}
<div
css={css`
font-size: 12px;
font-style: italic;
color: ${gray3};
`}
>
{`Asked by ${data.userName} on
${data.created.toLocaleDateString()} ${data.created.toLocaleTimeString()}`}
</div>
Figure 4.12 – Completed home page styling
That completes the styling on the home page. It is looking much nicer now.
In this chapter, we learned about different approaches to styling a React app. We now understand that CSS-in-JS libraries automatically scope styles to components and allow us to use dynamic variables within styles.
We understand how to use the Emotion CSS-in-JS library to style React components using its css prop. We can style a pseudo-class by nesting it within the css prop style string.
We know how to create reusable styled components using the styled function in Emotion. These can be consumed as regular React components in our app.
In the next chapter, we are going to add more pages to our app and learn how to implement client-side routing to these pages.
Try to answer the following questions to test your knowledge of this chapter:
import React from 'react';
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
import { primaryAccent1 } from './Colors';
interface Props {
children: React.ReactNode
}
export const SubmitButton = ({ children }: Props) => {
return (
<button
css={css`
background-color: primaryAccent1;
`}
>
{children}
</button>
);
};
<input
type="text"
placeholder="Enter your name"
/>
<button
css={css`
background-color: ${primaryAccent1};
`}
>
{children}
</button>
<button
css={css`
background-color: ${primaryAccent1};
:focus {
outline: none;
}
`}
>
{children}
</button>
export const PageTitle = styled.h2`
font-size: 15px;
font-weight: bold;
margin: 10px 0px 5px;
text-align: center;
text-transform: uppercase;
`;
<input
type="text"
placeholder="Enter your name"
css={css`
::placeholder {
color: #dcdada;
}
`}
/>
The following are some useful links so that you can learn more about the topics that were covered in this chapter:
So far, our Q&A app only contains one page, so the time has come to add more pages to the app. In Chapter 1, Understanding the ASP.NET 5 React Template, we learned that pages in a Single Page Application (SPA) are constructed in the browser, without requesting new HTML from the server.
React Router is a great library that helps us to implement client-side pages and the navigation between them. So, we are going to bring it into our project in this chapter.
In this chapter, we will declaratively define the routes that are available in our app. We'll learn how to provide feedback to users when they navigate to paths that don't exist. We'll implement a page that displays the details of a question, along with its answers, which is where we will learn how to implement route parameters. We'll also implement the question search feature, where we will learn how to handle query parameters. Finally, we will start to implement the page for asking a question and optimize it so that its JavaScript is loaded on demand rather than when the app loads.
We'll cover the following topics in this chapter:
We'll use the following tools in this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. In order to restore code from a chapter, the source code repository can be downloaded and the relevant folder opened in the relevant editor. If the code is frontend code, then npm install can be entered in the terminal to restore the dependencies.
Check out the following video to see the code in action: http://bit.ly/34XoKyz
In this section, we are going to install React Router with the corresponding TypeScript types by carrying out the following steps:
> npm install react-router-dom
Important Note
Make sure react-router-dom version 6+ has been installed and listed in package.json. If version 5 has been installed, then version 6 can be installed by running npm install react-router-dom@next.
> npm install history
A peer dependency is a dependency that is not automatically installed by npm. This is why we have installed it in our project.
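For illustration, a library declares its peer dependencies in its own package.json. The fragment below is a hypothetical example; the package names and version ranges are assumptions, not copied from the real react-router-dom package:

```json
{
  "name": "react-router-dom",
  "peerDependencies": {
    "history": "^5.0.0",
    "react": ">=16.8"
  }
}
```

npm reports a warning when a declared peer dependency is missing, which is our cue to install it ourselves, as we have just done with history.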
That's it—nice and simple! We'll start to declare the routes in our app in the next section.
We declare the pages in our app using BrowserRouter, Routes, and Route components. BrowserRouter is the top-level component that performs navigation between pages. Paths to pages are defined in Route components nested within a Routes component. The Routes component decides which Route component should be rendered for the current browser location.
We are going to start this section by creating blank pages that we'll eventually implement throughout this book. Then, we'll declare these pages in our app using BrowserRouter, Routes, and Route components.
Let's create blank pages for signing in, asking a question, viewing search results, and viewing a question with its answers by carrying out the following steps:
import React from 'react';
import { Page } from './Page';
export const SignInPage = () => (
<Page title="Sign In">{null}</Page>
);
Here, we have used the Page component we created in the previous chapter to create an empty page that has the title Sign In. We are going to use a similar approach for the other pages we need to create.
Notice that we are rendering null in the content of the Page component at the moment. This is a way of telling React to render nothing.
import React from 'react';
import { Page } from './Page';
export const AskPage = () => (
<Page title="Ask a question">{null}</Page>
);
import React from 'react';
import { Page } from './Page';
export const SearchPage = () => (
<Page title="Search Results">{null}</Page>
);
import React from 'react';
import { Page } from './Page';
export const QuestionPage = () => (
<Page>Question Page</Page>
);
The title on the question page is going to be styled differently, which is why we are not using the title prop on the Page component. We have simply added some text on the page for the time being so that we can distinguish this page from the other pages.
So, that's our pages created. Now, it's time to define all the routes to these pages.
We are going to define all of the routes to the pages we created by carrying out the following steps:
import { BrowserRouter, Routes, Route } from 'react-router-dom';
import { AskPage } from './AskPage';
import { SearchPage } from './SearchPage';
import { SignInPage } from './SignInPage';
<BrowserRouter>
<div css={ ... } >
<Header />
<HomePage />
</div>
</BrowserRouter>
<BrowserRouter>
<div css={ ... } >
<Header />
<Routes>
<Route path="" element={<HomePage/>} />
<Route path="search" element={<SearchPage/>} />
<Route path="ask" element={<AskPage/>} />
<Route path="signin" element={<SignInPage/>} />
</Routes>
</div>
</BrowserRouter>
Each route is defined in a Route component that defines what should be rendered in an element prop for a given path in a path prop. The route with the path that best matches the browser's location is rendered.
For example, if the browser location is http://localhost:3000/search, then the second Route component (that has path set to "search") will be the best match. This will mean that the SearchPage component is rendered.
Notice that we don't need a preceding slash (/) on the path because React Router performs relative matching by default.
Figure 5.1 – Search page
Here, we can see that React Router has decided that the best match is the Route component with a path of "search", and so renders the SearchPage component.
Feel free to visit the other pages as well – they will render fine now.
So, that's our basic routing configured nicely. What happens if the user enters a path in the browser that doesn't exist in our app? We'll find out in the next section.
In this section, we'll handle paths that aren't matched by any of the Route components. We'll start by seeing what happens when we enter an unhandled path in the browser, by following these steps:

Figure 5.2 – Unhandled path
So, nothing is rendered beneath the header when we browse to a path that isn't handled by a Route component. This makes sense if we think about it.
<Routes>
  <Route path="" element={<HomePage/>} />
  <Route path="search" element={<SearchPage/>} />
  <Route path="ask" element={<AskPage/>} />
  <Route path="signin" element={<SignInPage/>} />
  <Route path="*" element={<NotFoundPage/>} />
</Routes>
In order to understand how this works, let's think about what the Routes component does again – it renders the Route component that best matches the browser location. A path of * matches any browser location but is the least specific, so it won't be the best match for /, /search, /ask, or /signin; it will, however, catch any unhandled path.
import React from 'react';
import { Page } from './Page';
export const NotFoundPage = () => (
<Page title="Page Not Found">{null}</Page>
);
import { NotFoundPage } from './NotFoundPage';
Figure 5.3 – Unhandled path
So, once we understand how the Routes component works, implementing a not-found page is very easy. We simply use a Route component with a path of * inside the Routes component.
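The best-match behavior can be sketched as a toy function. Note that this is a simplification for illustration only – React Router's real matching algorithm ranks routes by segment specificity:

```typescript
// Toy illustration of best-match routing: an exact path always beats
// the "*" wildcard. This is NOT React Router's real algorithm, just
// a sketch of the idea.
const routePaths = ['', 'search', 'ask', 'signin', '*'];

const bestMatch = (location: string): string => {
  const path = location.replace(/^\//, ''); // strip the leading slash
  // Prefer an exact match; fall back to the wildcard.
  const exact = routePaths.find(r => r !== '*' && r === path);
  return exact !== undefined ? exact : '*';
};

console.log(bestMatch('/search')); // "search"
console.log(bestMatch('/no-such-page')); // "*"
```

So, /search renders SearchPage, while an unrecognized path falls through to the wildcard route and renders NotFoundPage.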
At the moment, we are navigating to different pages in our app by manually changing the location in the browser. In the next section, we'll learn how to implement links to perform navigation within the app itself.
In this section, we are going to use the Link component from React Router to declaratively perform navigation when clicking the app name in the app header. Then, we'll move on to programmatically performing navigation when clicking the Ask a question button to go to the ask page.
At the moment, when we click on Q and A in the top-left corner of the app, it is doing an HTTP request that returns the whole React app, which, in turn, renders the home page. We are going to change this by making use of React Router's Link component so that navigation happens in the browser without an HTTP request. We are also going to make use of the Link component for the link to the sign-in page as well. We'll learn how to achieve this by performing the following steps:
import { Link } from 'react-router-dom';
<Link
to="/"
css={ ... }
>
Q & A
</Link>
<Link
to="signin"
css={ ... }
>
<UserIcon />
<span>Sign In</span>
</Link>
So, the Link component is a great way of declaratively providing client-side navigation options in JSX. The task we performed in the last step confirms that all the navigation happens in the browser without any server requests, which is great for performance.
Sometimes, it is necessary to do navigation programmatically. Follow these steps to programmatically navigate to the ask page when the Ask a question button is clicked:
import { useNavigate } from 'react-router-dom';
This is a hook that returns a function that we can use to perform a navigation.
const navigate = useNavigate();
const handleAskQuestionClick = () => {
...
};
const handleAskQuestionClick = () => {
navigate('ask');
};
So, we can declaratively navigate by using the Link component and programmatically navigate using the useNavigate hook in React Router. We will continue to make use of the Link component in the next section.
In this section, we are going to define a Route component for navigating to the question page. This will contain a variable called questionId at the end of the path, so we will need to use what is called a route parameter. We'll implement more of the question page content in this section as well.
Let's carry out the following steps to add the question page route:
import { QuestionPage } from './QuestionPage';
<Routes>
  …
  <Route path="questions/:questionId" element={<QuestionPage />} />
  <Route path="*" element={<NotFoundPage/>} />
</Routes>
Note that the path we entered contains :questionId at the end.
Important note
Route parameters are defined in the path with a colon in front of them. The value of the parameter is then available to destructure in the useParams hook.
The Route component could be placed in any position within the Routes component. It is arguably more readable to keep the wildcard route at the bottom because this is the least specific path and therefore will be the last to be matched.
import { useParams } from 'react-router-dom';
export const QuestionPage = () => {
const { questionId } = useParams();
return <Page>Question Page</Page>;
};
We have also changed QuestionPage to have an explicit return statement.
<Page>Question Page {questionId}</Page>;
We'll come back and fully implement the question page in Chapter 6, Working with Forms. For now, we are going to link to this page from the Question component.
import { Link } from 'react-router-dom';
<div
  css={css`
    padding: 10px 0px;
    font-size: 19px;
  `}
>
  <Link
    css={css`
      text-decoration: none;
      color: ${gray2};
    `}
    to={`/questions/${data.questionId}`}
  >
    {data.title}
  </Link>
</div>
Figure 5.4 – Question page with route parameter
So, we implement route parameters by defining variables in the route path with a colon in front and then picking the value up with the useParams hook.
Let's carry out some more steps to implement the question page a little more:
export const getQuestion = async (
questionId: number
): Promise<QuestionData | null> => {
await wait(500);
const results = questions.filter(
  q => q.questionId === questionId,
);
return results.length === 0 ? null : results[0];
};
We have used the array filter method to get the question for the passed-in questionId.
Notice the type annotation for the function's return type. The type passed into the Promise generic type is QuestionData | null, which is called a union type.
Important note
A union type is a mechanism for defining a type that contains values from multiple types. If we think of a type as a set of values, then the union of multiple types is the same as the union of the sets of values. More information is available at https://www.typescriptlang.org/docs/handbook/unions-and-intersections.html#union-types.
So, the function is expected to asynchronously return an object of the QuestionData or null type.
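The union return type can be seen in action in a framework-free sketch of the lookup. The QuestionData shape here is a simplified stand-in, and the artificial wait delay is omitted:

```typescript
interface QuestionData {
  questionId: number;
  title: string;
}

const questions: QuestionData[] = [
  { questionId: 1, title: 'Why should I learn TypeScript?' },
];

// Returns the matching question, or null when there is no match –
// the QuestionData | null union type captures both outcomes.
const getQuestion = (questionId: number): QuestionData | null => {
  const results = questions.filter(q => q.questionId === questionId);
  return results.length === 0 ? null : results[0];
};

// TypeScript makes us narrow the union before accessing properties.
const found = getQuestion(1);
console.log(found === null ? 'not found' : found.title);
console.log(getQuestion(99)); // null
```

The caller can't access found.title until the null case has been ruled out, which is exactly the safety the union type buys us.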
import { QuestionData, getQuestion } from './QuestionsData';
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
import { gray3, gray6 } from './Styles';
export const QuestionPage = () => {
const [
question,
setQuestion,
] = React.useState<QuestionData | null>(null);
const { questionId } = useParams();
return <Page>Question Page {questionId}</Page>;
};
We are going to store the question in the state when the component is initially rendered.
Note that we are using a union type for the state because the state will be null initially while the question is being fetched, and also null if the question isn't found.
export const QuestionPage = () => {
…
const { questionId } = useParams();
React.useEffect(() => {
const doGetQuestion = async (
questionId: number,
) => {
const foundQuestion = await getQuestion(
questionId,
);
setQuestion(foundQuestion);
};
if (questionId) {
doGetQuestion(Number(questionId));
}
}, [questionId]);
return ...
};
So, when it is first rendered, the question component will fetch the question and set it in the state that will cause a second render of the component. Note that we use the Number constructor to convert questionId from a string into a number.
Also, note that the second argument to useEffect is an array containing questionId. This is because the function that useEffect runs (the first argument) depends on the questionId value and should rerun if that value changes. Without the [questionId] dependency array, the effect would run after every render; each call to setQuestion causes a re-render, so the component would be stuck in an infinite loop of fetching and re-rendering.
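The Number conversion is worth a closer look, because route parameters always arrive as strings (this is plain TypeScript, no React required):

```typescript
// Route parameters from useParams are always strings.
const questionIdParam = '1';

// Number converts the whole string, or yields NaN if it can't.
console.log(Number(questionIdParam)); // 1
console.log(Number('1abc')); // NaN

// parseInt is more forgiving: it reads digits until the first
// non-digit character, silently ignoring the rest.
console.log(parseInt('1abc', 10)); // 1
```

Number's all-or-nothing behavior is a good fit here: a malformed route parameter produces NaN rather than a misleading partial number.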
<Page>
  <div
    css={css`
      background-color: white;
      padding: 15px 20px 20px 20px;
      border-radius: 4px;
      border: 1px solid ${gray6};
      box-shadow: 0 3px 5px 0 rgba(0, 0, 0, 0.16);
    `}
  >
    <div
      css={css`
        font-size: 19px;
        font-weight: bold;
        margin: 10px 0px 5px;
      `}
    >
      {question === null ? '' : question.title}
    </div>
  </div>
</Page>
We don't render the title until the question state has been set. The question state is null while the question is being fetched, and it remains null if the question isn't found. Note that we use a triple equals (===) to check whether the question variable is null rather than a double equals (==).
Important note
When using triple equals (===), we are checking for strict equality. This means both the type and the value we are comparing have to be the same. When using a double equals (==), the type isn't checked. Generally, it is good practice to use the triple equals (===) to perform a strict equality check.
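The difference is easy to see with a route parameter-style string value. The any annotation is only there so that the compiler permits the mixed-type comparison:

```typescript
// A value typed as any so TypeScript allows comparing across types.
const id: any = '1'; // e.g. a route parameter, which is a string

console.log(id == 1); // true  – '1' is coerced to the number 1
console.log(id === 1); // false – a string is never strictly equal to a number
console.log(id === '1'); // true  – same type and same value
```

With ===, a surprising coercion like '1' == 1 can never sneak into our logic, which is why strict equality is the safer default.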
If we look at the running app, we will see that the question title has been rendered in a nice white card:

Figure 5.5 – Question page title
<Page>
  <div ... >
    <div ... >
      {question === null ? '' : question.title}
    </div>
    {question !== null && (
      <React.Fragment>
        <p
          css={css`
            margin-top: 0px;
            background-color: white;
          `}
        >
          {question.content}
        </p>
      </React.Fragment>
    )}
  </div>
</Page>
So, we output the question's content in the JSX if the question state has been set from the fetched data. Note that the content is nested within a Fragment component. What is this for?
Important note
In React, a component can only return a single element. This rule applies to conditional rendering logic where there can be only a single parent React element being rendered. React Fragment allows us to work around this rule because we can nest multiple elements within it without creating a DOM node.
We can see the problem that Fragment solves if we try to return two elements after the short circuit operator:

Figure 5.6 – Reason for React Fragment
{question !== null && (
  <React.Fragment>
    <p ... >
      {question.content}
    </p>
    <div
      css={css`
        font-size: 12px;
        font-style: italic;
        color: ${gray3};
      `}
    >
      {`Asked by ${question.userName} on
      ${question.created.toLocaleDateString()}
      ${question.created.toLocaleTimeString()}`}
    </div>
  </React.Fragment>
)}
Now, all the details of the question will render in a nice white card in the running app on the question page:
Figure 5.7 – Question page
So, the question page is looking nice now. We aren't rendering any answers yet though, so let's look at that next.
Follow these steps to create a component that will render a list of answers:
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
import React from 'react';
import { AnswerData } from './QuestionsData';
import { Answer } from './Answer';
import { gray5 } from './Styles';
So, we are going to use an unordered list to render the answers without the bullet points. We have referenced a component, Answer, that we'll create later in these steps.
interface Props {
  data: AnswerData[];
}
export const AnswerList = ({ data }: Props) => (
  <ul
    css={css`
      list-style: none;
      margin: 10px 0 0 0;
      padding: 0;
    `}
  >
    {data.map(answer => (
      <li
        css={css`
          border-top: 1px solid ${gray5};
        `}
        key={answer.answerId}
      >
        <Answer data={answer} />
      </li>
    ))}
  </ul>
);
Each answer in the unordered list is output using an Answer component, which we'll implement next.
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react';
import React from 'react';
import { AnswerData } from './QuestionsData';
import { gray3 } from './Styles';
interface Props {
  data: AnswerData;
}
export const Answer = ({ data }: Props) => (
  <div
    css={css`
      padding: 10px 0px;
    `}
  >
    <div
      css={css`
        padding: 10px 0px;
        font-size: 13px;
      `}
    >
      {data.content}
    </div>
    <div
      css={css`
        font-size: 12px;
        font-style: italic;
        color: ${gray3};
      `}
    >
      {`Answered by ${data.userName} on
      ${data.created.toLocaleDateString()}
      ${data.created.toLocaleTimeString()}`}
    </div>
  </div>
);
import { AnswerList } from './AnswerList';
{question !== null && (
  <React.Fragment>
    <p ... >
      {question.content}
    </p>
    <div ... >
      {`Asked by ${question.userName} on
      ${question.created.toLocaleDateString()}
      ${question.created.toLocaleTimeString()}`}
    </div>
    <AnswerList data={question.answers} />
  </React.Fragment>
)}
If we look at the running app on the question page at questions/1, we'll see the answers rendered nicely:
Figure 5.8 – Question page with answers
That completes the work we need to do on the question page in this chapter. However, we need to allow users to submit answers to a question, which we'll cover in Chapter 6, Working with Forms.
Next up, we'll look at how we can work with query parameters with React Router.
A query parameter is part of the URL that allows additional parameters to be passed into a path. For example, /search?criteria=typescript has a query parameter called criteria with a value of typescript. Query parameters are sometimes referred to as search parameters.
In this section, we are going to implement a query parameter on the search page called criteria, which will drive the search. We'll implement the search page along the way. Let's carry out the following steps to do this:
export const searchQuestions = async (
criteria: string,
): Promise<QuestionData[]> => {
await wait(500);
return questions.filter(
q =>
q.title.toLowerCase()
.indexOf(criteria.toLowerCase()) >= 0 ||
q.content.toLowerCase()
.indexOf(criteria.toLowerCase()) >= 0,
);
};
So, the function uses the array filter method and matches the criteria to any part of the question title or content.
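The matching logic can be isolated and tried out on its own; the includes method used here is equivalent to the indexOf(...) >= 0 check above:

```typescript
const titles = [
  'Why should I learn TypeScript?',
  'Which state management tool should I use?',
];

// Case-insensitive substring matching, mirroring searchQuestions.
const matchTitles = (criteria: string): string[] =>
  titles.filter(t =>
    t.toLowerCase().includes(criteria.toLowerCase()),
  );

console.log(matchTitles('TYPESCRIPT')); // ['Why should I learn TypeScript?']
console.log(matchTitles('zzz')); // []
```

Lowercasing both sides before comparing is what makes the search case-insensitive.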
/** @jsxImportSource @emotion/react */
import { css } from '@emotion/react'
import { useSearchParams } from 'react-router-dom';
import { QuestionList } from './QuestionList';
import { searchQuestions, QuestionData } from './QuestionsData';
The useSearchParams hook from React Router is used to access query parameters.
export const SearchPage = () => {
const [searchParams] = useSearchParams();
return (
<Page title="Search Results">{null}</Page>
);
};
The useSearchParams hook returns an array with two elements. The first element is a URLSearchParams object for reading the query parameters, and the second element is a function for updating them. We have only destructured the first element in our code because we don't need to update the query parameters in this component.
export const SearchPage = () => {
const [searchParams] = useSearchParams();
const [
questions,
setQuestions,
] = React.useState<QuestionData[]>([]);
return …
};
export const SearchPage = () => {
const [searchParams] = useSearchParams();
const [
questions,
setQuestions,
] = React.useState<QuestionData[]>([]);
const search = searchParams.get('criteria') || '';
return …
};
The searchParams object contains a get method that can be used to get the value of a query parameter.
const search = searchParams.get('criteria') || '';
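The object returned by useSearchParams is a standard URLSearchParams instance, so its behavior can be explored outside React (URLSearchParams is built into both browsers and Node):

```typescript
// URLSearchParams is the standard Web API behind useSearchParams.
const params = new URLSearchParams('?criteria=typescript&page=2');

console.log(params.get('criteria')); // 'typescript'
console.log(params.get('page')); // '2' – values are always strings
console.log(params.get('missing')); // null – hence the || '' fallback

const criteria = params.get('missing') || '';
console.log(criteria); // '' (empty string)
```

The || '' fallback is what lets the rest of the component treat the criteria as a plain string rather than string | null.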
React.useEffect(() => {
const doSearch = async (criteria: string) => {
const foundResults = await searchQuestions(
criteria,
);
setQuestions(foundResults);
};
doSearch(search);
}, [search]);
<Page title="Search Results">
{search && (
<p
css={css`
font-size: 16px;
font-style: italic;
margin-top: 0px;
`}
>
for "{search}"
</p>
)}
</Page>
<Page title="Search Results">
{search && (
<p ... >
for "{search}"
</p>
)}
<QuestionList data={questions} />
</Page>
Our QuestionList component is now used in both the home and search pages with different data sources. The reusability of this component has been made possible because we have followed the container pattern we briefly mentioned in Chapter 3, Getting Started with React and TypeScript.
Figure 5.9 – Search results
So, the useSearchParams hook in React Router makes interacting with query parameters nice and easy.
In Chapter 6, Working with Forms, we'll wire up the search box in the header to our search form.
In the next section, we'll learn how we can load components on demand.
At the moment, all the JavaScript for our app is loaded when the app first loads. This is fine for small apps, but for large apps, this can have a negative impact on performance. There may be large pages that are rarely used in the app that we want to load the JavaScript for on demand. This process is called lazy loading.
We are going to lazy load the ask page in this section. It isn't a great use of lazy loading because this is likely to be a popular page in our app, but it will help us learn how to implement this. Let's carry out the following steps:
export const AskPage = () => <Page title="Ask a question" />;
export default AskPage;
import React from 'react';
const AskPage = React.lazy(() => import('./AskPage'));
It is important that this is the last import statement in the file because, otherwise, ESLint may complain that the import statements beneath it are in the body of the module.
The lazy function in React lets us render a dynamic import as a regular component. A dynamic import returns a promise for the requested module that is resolved after it has been fetched, instantiated, and evaluated.
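Dynamic import is a standard JavaScript feature rather than something specific to React; here it is demonstrated with a Node built-in module instead of a component:

```typescript
// import() returns a promise for the module, which resolves once the
// module has been fetched and evaluated – the same mechanism that
// React.lazy builds on.
const load = async (): Promise<string> => {
  const path = await import('node:path'); // loaded on demand
  return path.posix.join('questions', '1');
};

load().then(result => console.log(result)); // 'questions/1'
```

The module is only fetched when load is first called, which is exactly how a lazily loaded page's JavaScript is deferred until the user navigates to it.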

Figure 5.10 – No Suspense component warning
<Route
  path="ask"
  element={
    <React.Suspense
      fallback={
        <div
          css={css`
            margin-top: 100px;
            text-align: center;
          `}
        >
          Loading...
        </div>
      }
    >
      <AskPage />
    </React.Suspense>
  }
/>
The Suspense fallback prop allows us to render a component while AskPage is loading. So, we are rendering a Loading... message while the AskPage component is being loaded.

Figure 5.11 – AskPage component loaded on demand

Figure 5.12 – Slow 3G option
Figure 5.13 – Suspense fallback
In this example, the AskPage component is small in size, so this approach doesn't really positively impact performance. However, loading larger components on demand can really help performance, particularly on slow connections.
React Router gives us a comprehensive set of components for managing the navigation between pages in our app. We learned that the top-level component is BrowserRouter, which looks for Route components within a Routes component beneath it where we define what components should be rendered for certain paths. The path in a Route component that best matches the current browser location is the one that is rendered.
The useParams hook gives us access to route parameters, and the useSearchParams hook gives us access to query parameters. These hooks are available in any React component under BrowserRouter in the component tree.
We learned that the React lazy function, along with its Suspense component, can be used on large components that are rarely used by users to load them on demand. This helps the performance of the startup time of our app.
In the next chapter, we are going to continue building the frontend of the Q&A app, this time focusing on implementing forms.
The following questions will cement your knowledge of what you have just learned about in this chapter:
<BrowserRouter>
<Routes>
<Route path="search" element={<SearchPage/>} />
<Route path="" element={<HomePage/>} />
</Routes>
</BrowserRouter>
Answer the following questions:
<BrowserRouter>
<Routes>
<Route path="search" element={<SearchPage/>} />
<Route path="" element={<HomePage/>} />
<Route path="*" element={<NotFoundPage/>} />
</Routes>
</BrowserRouter>
What component will be rendered when the /signin location is entered in the browser?
<Route path="users/:userId" element={<UserPage />} />
How can we reference the userId route parameter in a component?
<a href="/products">Products</a>
At the moment, the navigation makes a server request. How can we change this so that the navigation happens only within the browser?
<Route path="signin" element={<SignInPage />} />
<Route path="login" element={<SignInPage />} />
const { userId } = useParams();
const [searchParams] = useSearchParams();
const id = searchParams.get('id');
<Link to="products">Products</Link>
const navigate = useNavigate();
We can then use this function at the appropriate place in code to navigate to the /success path:
navigate('success');
The following are some useful links for learning more about the topics that have been covered in this chapter:
Forms are an important topic because they are extremely common in the apps we build. In this chapter, we'll learn how to build forms using React controlled components and discover that there is a fair amount of boilerplate code involved. We will use a popular library to reduce the boilerplate code. This will also help us to build several forms in our app.
Client-side validation is critical to the user experience of the forms we build, so we'll cover this topic in a fair amount of depth. We will also cover how to submit forms.
We'll cover the following topics in this chapter:
By the end of this chapter, you will have learned how to efficiently create forms with their key ingredients.
We'll use the following tools in this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore the code for a chapter, download the source code repository and open the relevant folder in your editor. If the code is frontend code, enter npm install in the Terminal to restore the required dependencies.
Check out the following video to see the code in action: https://bit.ly/3pherQp.
In this section, we are going to learn how to use what are called controlled components to implement a form. A controlled component has its value synchronized with the state in React. This will make more sense when we've implemented our first controlled component.
Let's open our project in Visual Studio Code and change the search box in our app header to a controlled component. Follow these steps:
import {
Link,
useSearchParams,
} from 'react-router-dom';
export const Header = () => {
const [searchParams] = useSearchParams();
const criteria = searchParams.get('criteria') || '';
const handleSearchChange = ...
}
const searchParams = new URLSearchParams(location.search);
const criteria = searchParams.get('criteria') || '';
const [search, setSearch] = React.useState(criteria);
<input
type="text"
placeholder="Search..."
value={search}
onChange={handleSearchChange}
css={ ... }
/>
You'll notice that nothing seems to happen when we type; something is preventing us from changing the value. We have just set the value to some React state, so React is now controlling the value of the search box. This is why we no longer appear to be able to type into it.
We are part way through creating our first controlled input. However, controlled inputs aren't much use if users can't enter anything into them. So, how can we make our input editable again? The answer is that we need to listen to changes that have been made to the input value and update the state accordingly. React will then render the new value from the state.
const handleSearchChange = (e: React.ChangeEvent<HTMLInputElement>) => {
setSearch(e.currentTarget.value);
};
Now, if we go to the running app and enter something into the search box, this time, it will behave as expected, allowing us to enter characters into it.
<form>
<input
type="text"
placeholder="Search..."
onChange={handleSearchChange}
value={search}
css={ ... }
/>
</form>
Eventually, this will allow a user to invoke the search when the Enter key is pressed.
<form onSubmit={handleSubmit}>
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
console.log(search);
};
return …

Figure 6.1 – Controlled component
Now, if we type characters into the search box and press Enter, we will see the submitted search criteria in the console.
In summary, controlled components have their values managed by React's state. It is important that we implement a change handler that updates the state; otherwise, our users won't be able to interact with the component.
If we were to implement a form with several controlled components, we would have to create the state and a change event listener to update the state for each field. That's quite a lot of boilerplate code to write when implementing forms. Is there a forms library that we can use to reduce the amount of repetitive code we must write? Yes! We'll do just this in the next section.
In this section, we are going to use a popular forms library called React Hook Form. This library reduces the amount of code we need to write when implementing forms. Once we have installed React Hook Form, we will refactor the search form we created in the previous section. We will then use React Hook Form to implement forms for asking a question and answering a question.
Let's install React Hook Form by entering the following command into the Terminal:
> npm install react-hook-form
After a few seconds, React Hook Form will be installed.
The react-hook-form package includes TypeScript types, so these aren't in a separate package that we need to install.
Next, we will start to use React Hook Form for the search form in the Header component.
We are going to use React Hook Form to reduce the amount of code in the Header component. Open Header.tsx and follow these steps:
const [search, setSearch] = React.useState(criteria);
React Hook Form will manage the field state, so we don't have to write explicit code for this.
<input
name="search"
type="text"
…
/>
The name property is required by React Hook Form and must be unique within a given form. We will eventually be able to access the field's value using this name.
<input
name="search"
type="text"
placeholder="Search..."
defaultValue={criteria}
css={ ... }
/>
React Hook Form will eventually manage the value of the input.
Note that defaultValue is a property of the input element for setting its initial value.
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
};
import { useForm } from 'react-hook-form';
type FormData = {
search: string;
};
export const Header = () => {
const { register } = useForm<FormData>();
const [searchParams] = useSearchParams();
...
<input
ref={register}
name="search"
type="text"
placeholder="Search..."
defaultValue={criteria}
css={ ... }
/>
The register function allows an input element to be registered with React Hook Form and then be managed by it. It needs to be set to the ref property on the element.
Important note
The ref property is a special property that React adds to elements that enables the underlying DOM node to be accessed.
The code for our form is a lot shorter now. This is because React Hook Form holds the field state and manages updates to it.
We used the register function from the useForm hook to tell React Hook Form which fields to manage. There are other useful functions and objects in the useForm hook that we will learn about and use in this chapter.
React Hook Form is now controlling the search input field. We will return to this and implement the search submission in the Submitting forms section.
Next, we'll turn our attention to styling our forms.
In this section, we are going to create some styled components that can be used in the forms that we will eventually implement. Open Styles.ts and follow these steps:
import { css } from '@emotion/react';
export const Fieldset = styled.fieldset`
margin: 10px auto 0 auto;
padding: 30px;
width: 350px;
background-color: ${gray6};
border-radius: 4px;
border: 1px solid ${gray5};
box-shadow: 0 3px 5px 0 rgba(0, 0, 0, 0.16);
`;
We will eventually use a fieldset element inside our forms.
export const FieldContainer = styled.div`
margin-bottom: 10px;
`;
export const FieldLabel = styled.label`
font-weight: bold;
`;
const baseFieldCSS = css`
box-sizing: border-box;
font-family: ${fontFamily};
font-size: ${fontSize};
margin-bottom: 5px;
padding: 8px 10px;
border: 1px solid ${gray5};
border-radius: 3px;
color: ${gray2};
background-color: white;
width: 100%;
:focus {
outline-color: ${gray5};
}
:disabled {
background-color: ${gray6};
}
`;
export const FieldInput = styled.input`
${baseFieldCSS}
`;
This causes the input element to include the CSS from the baseFieldCSS variable in the new styled component we are creating.
export const FieldTextArea = styled.textarea`
${baseFieldCSS}
height: 100px;
`;
export const FieldError = styled.div`
font-size: 12px;
color: red;
`;
export const FormButtonContainer = styled.div`
margin: 30px 0px 0px 0px;
padding: 20px 0px 0px 0px;
border-top: 1px solid ${gray5};
`;
The last styled components we are going to create are the submission messages:
export const SubmissionSuccess = styled.div`
margin-top: 10px;
color: green;
`;
export const SubmissionFailure = styled.div`
margin-top: 10px;
color: red;
`;
With that, we've implemented all the styled components that we will use in our forms.
Now that we have implemented these styled components, we will use these to implement our next form.
Now, it's time to implement the form so that our users can ask a question. We'll do this by leveraging React Hook Form and our form's styled components. Follow these steps:
import {
Fieldset,
FieldContainer,
FieldLabel,
FieldInput,
FieldTextArea,
FormButtonContainer,
PrimaryButton,
} from './Styles';
import { useForm } from 'react-hook-form';
type FormData = {
title: string;
content: string;
};
export const AskPage = () => {
const { register } = useForm<FormData>();
return (
<Page title="Ask a question">
{null}
</Page>
);
}
<Page title="Ask a question">
<form>
<Fieldset>
</Fieldset>
</form>
</Page>
<Fieldset>
<FieldContainer>
<FieldLabel htmlFor="title">
Title
</FieldLabel>
<FieldInput
id="title"
name="title"
type="text"
ref={register}
/>
</FieldContainer>
</Fieldset>
Notice how we have tied the label to the input using the htmlFor attribute. This means a screen reader will read out the label when the input has focus. In addition, clicking on the label will automatically set focus on the input.
<Fieldset>
<FieldContainer>
...
</FieldContainer>
<FieldContainer>
<FieldLabel htmlFor="content">
Content
</FieldLabel>
<FieldTextArea
id="content"
name="content"
ref={register}
/>
</FieldContainer>
</Fieldset>
<Fieldset>
<FieldContainer>
...
</FieldContainer>
<FormButtonContainer>
<PrimaryButton type="submit">
Submit Your Question
</PrimaryButton>
</FormButtonContainer>
</Fieldset>
Figure 6.2 – Form for asking a question
Our form renders as expected.
React Hook Form and the styled form components made that job pretty easy. Now, let's implement another form: the answer form.
Let's implement an answer form on the question page. Follow these steps:
import {
gray3,
gray6,
Fieldset,
FieldContainer,
FieldLabel,
FieldTextArea,
FormButtonContainer,
PrimaryButton,
} from './Styles';
import { useForm } from 'react-hook-form';
type FormData = {
content: string;
};
export const QuestionPage = () => {
...
const { register } = useForm<FormData>();
return ...
}
<AnswerList data={question.answers} />
<form
  css={css`
    margin-top: 20px;
  `}
>
  <Fieldset>
    <FieldContainer>
      <FieldLabel htmlFor="content">
        Your Answer
      </FieldLabel>
      <FieldTextArea
        id="content"
        name="content"
        ref={register}
      />
    </FieldContainer>
    <FormButtonContainer>
      <PrimaryButton type="submit">
        Submit Your Answer
      </PrimaryButton>
    </FormButtonContainer>
  </Fieldset>
</form>
So, the form will contain a single field for the answer content and the submit button will have the caption Submit Your Answer.
Figure 6.3 – Answer form
Our form renders as expected.
We have now built three forms with React Hook Form and experienced first-hand how it simplifies building fields. We also built a handy set of styled form components along the way.
Our forms are looking good, but there is no validation yet. For example, we could submit a blank answer to a question, and it wouldn't be rejected because no validation mechanism has been implemented yet. We will enhance our forms with validation in the next section.
Including validation on a form improves the user experience as you can provide immediate feedback on whether the information that's been entered is valid. In this section, we are going to add validation rules to the forms for asking and answering questions. These validation rules will include checks to ensure a field is populated and that it contains a certain number of characters.
We are going to implement validation on the ask form by following these steps:
import {
...
FieldError,
} from './Styles';
const { register, errors } = useForm<FormData>();
const { register, errors } = useForm<FormData>({
mode: 'onBlur',
});
It is important to note that the fields will be validated when the form is submitted as well.
<FieldInput
id="title"
name="title"
type="text"
ref={register({
required: true,
minLength: 10,
})}
/>
The highlighted code states that the title field is required and must be at least 10 characters long.
<FieldInput
...
/>
{errors.title &&
errors.title.type ===
'required' && (
<FieldError>
You must enter the question title
</FieldError>
)}
{errors.title &&
errors.title.type ===
'minLength' && (
<FieldError>
The title must be at least 10 characters
</FieldError>
)}
</FieldContainer>
errors is an object that React Hook Form maintains for us. The keys in the object correspond to the name property in FieldInput. The type property within each error specifies which rule the error is for.
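To make this concrete, here is a plain TypeScript sketch of the idea, using a hypothetical, simplified shape for the errors object rather than React Hook Form's real types:

```typescript
// Hypothetical, simplified shape of React Hook Form's errors object -
// not the library's real types. Each key matches a field's name prop;
// each value records which rule failed via its type property.
type FieldErrorInfo = { type: 'required' | 'minLength' };
type Errors = { title?: FieldErrorInfo; content?: FieldErrorInfo };

// Pick a message for the title field, mirroring the JSX above
function titleErrorMessage(errors: Errors): string | null {
  if (!errors.title) {
    return null; // no error for this field - render nothing
  }
  return errors.title.type === 'required'
    ? 'You must enter the question title'
    : 'The title must be at least 10 characters';
}

const message = titleErrorMessage({ title: { type: 'required' } });
```

When the title field is left blank, the required message is chosen; an empty errors object produces no message at all.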
<FieldTextArea
id="content"
name="content"
ref={register({
required: true,
minLength: 50,
})}
/>
<FieldTextArea
...
/>
{errors.content &&
errors.content.type ===
'required' && (
<FieldError>
You must enter the question content
</FieldError>
)}
{errors.content &&
errors.content.type ===
'minLength' && (
<FieldError>
The content must be at least 50 characters
</FieldError>
)}
</FieldContainer>
The errors object will contain a content property when the content field fails a validation check. The type property within the content property indicates which rule has been violated. So, we use this information in the errors object to render the appropriate validation messages.
Without entering anything in the form, click into and out of the fields. You'll see that validation errors are rendered as the fields lose focus, meaning the mechanism we implemented works. Now leave the title field blank and enter content that is less than 50 characters:
Figure 6.4 – Validation on the ask form
Here, we can see that the validation errors render as we tab out of the fields.
Let's implement validation on the answer form. We are going to validate that the content has been filled in with at least 50 characters. To do this, follow these steps:
import {
...
FieldError,
} from './Styles';
const { register, errors } = useForm<FormData>({
mode: 'onBlur',
});
<FieldTextArea
id="content"
name="content"
ref={register({
required: true,
minLength: 50,
})}
/>
Here, we've specified that the answer needs to be mandatory and must be at least 50 characters long.
<FieldTextArea
...
/>
{errors.content &&
errors.content.type ===
'required' && (
<FieldError>
You must enter the answer
</FieldError>
)}
{errors.content &&
errors.content.type ===
'minLength' && (
<FieldError>
The answer must be at least 50 characters
</FieldError>
)}
</FieldContainer>
Figure 6.5 – Validation on the answer form
With that, we've finished implementing validation on our forms. React Hook Form has a useful set of validation rules that can be applied to its register function. The errors object from React Hook Form gives us all the information we need to output informative validation error messages. More information on React Hook Form validation can be found at https://react-hook-form.com/get-started#Applyvalidation.
Our final task is to perform submission logic when the user submits our forms. We'll do this in the next section.
Submitting the form is the final part of the form's implementation. We are going to implement form submission logic in all three of our forms, starting with the search form.
Submission logic is logic that performs a task with the data from the form. Often, this task will involve posting the data to a web API to perform a server-side task, such as saving the data to a database table. In this section, our submission logic will simply call functions that will simulate web API calls.
In Header.tsx, carry out the following steps to implement form submission on the search form:
import {
Link,
useSearchParams,
useNavigate,
} from 'react-router-dom';
const navigate = useNavigate();
const { register, handleSubmit } = useForm<FormData>();
This conflicts with the existing handleSubmit, which we'll resolve in step 5.
<form onSubmit={handleSubmit(submitForm)}>
The handleSubmit function from React Hook Form includes boilerplate code such as stopping the browser posting the form to the server.
Notice that we have passed submitForm to handleSubmit. This is a function that we will implement next that contains our submission logic.
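Conceptually, the wrapper can be sketched in plain TypeScript. The following is an assumption for illustration, not React Hook Form's real implementation (the real one also runs validation before calling our callback):

```typescript
// Rough sketch of the idea behind handleSubmit: wrap our submission
// callback in an event handler that suppresses the default browser post.
function makeHandleSubmit<T>(getValues: () => T) {
  return function handleSubmit(onValid: (data: T) => void) {
    return function onSubmit(e: { preventDefault: () => void }) {
      e.preventDefault(); // stop the browser posting the form
      onValid(getValues()); // pass the field values to our logic
    };
  };
}

// Usage with a fake form holding a single search field
let prevented = false;
let submittedSearch = '';
const handleSubmit = makeHandleSubmit(() => ({ search: 'typescript' }));
const onSubmit = handleSubmit(data => {
  submittedSearch = data.search;
});
onSubmit({ preventDefault: () => { prevented = true; } });
```

This is why we pass submitForm into handleSubmit rather than wiring submitForm to onSubmit directly.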
const submitForm = ({ search }: FormData) => {
navigate(`search?criteria=${search}`);
};
React Hook Form passes the function the form data. We destructure the search field value from the form data.
The submission logic programmatically navigates to the search page, setting the criteria query parameter to the search field value.
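One detail worth noting: if the search text can contain characters such as & or spaces, it should be encoded before being placed in the query string. A hypothetical helper, not part of the book's code, sketches this:

```typescript
// Hypothetical helper showing how the criteria query parameter could be
// built with encodeURIComponent so that characters such as & and spaces
// survive the round trip to the search page.
function buildSearchPath(search: string): string {
  return `search?criteria=${encodeURIComponent(search)}`;
}

const path = buildSearchPath('typescript & react');
// path is 'search?criteria=typescript%20%26%20react'
```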
Figure 6.6 – Search submission
The browser location query parameter is set as expected, and the correct results render on the search page.
That's the submission implemented in our first form. Now, we will continue to implement the submission logic in our other forms.
Let's carry out the following steps to implement submission in the ask form:
export interface PostQuestionData {
title: string;
content: string;
userName: string;
created: Date;
}
export const postQuestion = async (
question: PostQuestionData,
): Promise<QuestionData | undefined> => {
await wait(500);
const questionId =
Math.max(...questions.map(q => q.questionId)) + 1;
const newQuestion: QuestionData = {
...question,
questionId,
answers: [],
};
questions.push(newQuestion);
return newQuestion;
};
This function adds the question to the questions array using the Math.max method to set questionId to the next number.
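The next-id calculation can be seen in isolation. The sample data below is an assumption standing in for the questions array:

```typescript
// Sample data standing in for the questions array in QuestionsData.ts
const questions = [{ questionId: 1 }, { questionId: 2 }, { questionId: 3 }];

// The same technique as postQuestion: spread the mapped IDs into
// Math.max and add one to get the next questionId
const nextQuestionId = Math.max(...questions.map(q => q.questionId)) + 1;
```

Note that Math.max() with no arguments returns -Infinity, so this technique only works while the array is non-empty.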
import { postQuestion } from './QuestionsData';
Also, import the SubmissionSuccess message styled component:
import {
...,
SubmissionSuccess,
} from './Styles';
const [
successfullySubmitted,
setSuccessfullySubmitted,
] = React.useState(false);
const {
register,
errors,
handleSubmit,
} = useForm<FormData>({
mode: 'onBlur',
});
const {
register,
errors,
handleSubmit,
formState,
} = useForm<FormData>({
mode: 'onBlur',
});
formState contains information such as whether the form is being submitted and whether the form is valid.
<form onSubmit={handleSubmit(submitForm)}>
const submitForm = async (data: FormData) => {
const result = await postQuestion({
title: data.title,
content: data.content,
userName: 'Fred',
created: new Date()
});
setSuccessfullySubmitted(result ? true : false);
};
The preceding code calls the postQuestion function asynchronously, passing in the title and content from the form data with a hardcoded username and created date.
<Fieldset
disabled={
formState.isSubmitting ||
successfullySubmitted
}
>
isSubmitting is a flag within formState that indicates whether form submission is taking place.
You may notice an isSubmitted flag within formState. This indicates whether a form has been submitted and is true even if the form is invalid. This is why we use our own state (successfullySubmitted) to indicate that a valid form has been submitted.
{successfullySubmitted && (
<SubmissionSuccess>
Your question was successfully submitted
</SubmissionSuccess>
)}
This message is rendered once the form has been successfully submitted.
Figure 6.7 – Ask a question submission
The form is disabled during and after a successful submission, and we receive the expected success message.
Next, we'll implement form submission in the answer form.
Carry out the following steps to implement form submission in the answer form:
export interface PostAnswerData {
questionId: number;
content: string;
userName: string;
created: Date;
}
export const postAnswer = async (
answer: PostAnswerData,
): Promise<AnswerData | undefined> => {
await wait(500);
const question = questions.filter(
q => q.questionId === answer.questionId,
)[0];
const answerInQuestion: AnswerData = {
answerId: 99,
...answer,
};
question.answers.push(answerInQuestion);
return answerInQuestion;
};
The function finds the question in the questions array and adds the answer to it. The remainder of the preceding code contains straightforward types for the answer to post and the function's result.
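The lookup uses filter(...)[0]; Array.prototype.find expresses the same intent directly. A small sketch with assumed sample data:

```typescript
// Sample data - an assumption for illustration, not QuestionsData.ts
const questions = [
  { questionId: 1, answers: [] as { answerId: number }[] },
  { questionId: 2, answers: [] as { answerId: number }[] },
];

// filter(...)[0] and find(...) locate the same object; find stops at
// the first match and returns undefined when nothing matches
const viaFilter = questions.filter(q => q.questionId === 2)[0];
const viaFind = questions.find(q => q.questionId === 2);
```

Either form works here; find also makes the "not found" case explicit via undefined.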
import {
…,
postAnswer
} from './QuestionsData';
import {
…,
SubmissionSuccess,
} from './Styles';
const [
successfullySubmitted,
setSuccessfullySubmitted,
] = React.useState(false);
const {
register,
errors,
handleSubmit,
formState
} = useForm<FormData>({
mode: 'onBlur',
});
<form
onSubmit={handleSubmit(submitForm)}
css={…}
>
const submitForm = async (data: FormData) => {
const result = await postAnswer({
questionId: question!.questionId,
content: data.content,
userName: 'Fred',
created: new Date(),
});
setSuccessfullySubmitted(
result ? true : false,
);
};
So, this calls the postAnswer function asynchronously, passing in the content from the field values along with a hardcoded username and created date.
Notice ! after the reference to the question state variable. This is a non-null assertion operator.
Important note
A non-null assertion operator (!) tells the TypeScript compiler that the variable before it cannot be null or undefined. This is useful in situations where the TypeScript compiler isn't smart enough to figure this fact out itself.
So, the ! in question!.questionId stops the TypeScript compiler from complaining that question could be null.
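The following minimal sketch shows the operator outside our app's code (the Question interface here is an assumption for illustration):

```typescript
interface Question {
  questionId: number;
  title: string;
}

// question may be null, for example while data is loading
function getViewingQuestionId(question: Question | null): number {
  // Without the assertion, the compiler warns that question is
  // possibly null; with !, we take responsibility for that guarantee
  return question!.questionId;
}

const id = getViewingQuestionId({ questionId: 5, title: 'TypeScript?' });
```

The assertion is purely a compile-time construct; if question really is null at runtime, the property access still fails.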
<Fieldset
disabled={
formState.isSubmitting ||
successfullySubmitted
}
>
{successfullySubmitted && (
<SubmissionSuccess>
Your answer was successfully submitted
</SubmissionSuccess>
)}
Figure 6.8 – Answer submission
Like the ask form, the answer form is disabled during and after submission, and we receive the expected success message.
So, that's our three forms complete and working nicely.
In this chapter, we learned that forms can be implemented using controlled components in React. With controlled components, React controls the field component values via state, and we are required to implement boilerplate code to manage this state.
React Hook Form is a popular forms library in the React community. It removes the need for the boilerplate code that controlled components require.
We now understand that the register function can be set to a React element's ref property to allow React Hook Form to manage that element. Validation rules can be specified within the register function parameter.
We can pass form submission logic into the handleSubmit function from React Hook Form. We learned that isSubmitting is a useful flag within formState that we can use to disable a form while submission is taking place.
In the next chapter, we are going to focus heavily on state management in our app and leverage Redux.
Check whether all of that information about forms has stuck by answering the following questions:
<input
id="firstName"
value={firstName}
/>
However, users are unable to enter characters in the input. What is the problem here?
<label htmlFor="title">{label}</label>
<input id="title" … />
<input
id="firstName"
value={firstName}
onChange={e => setFirstName(e.currentTarget.value)}
/>
Here are some useful links so that you can learn more about the topics that were covered in this chapter:
So far, in our app, the state is held locally within our React components. This approach works well for simple applications. React Redux helps us to handle complex state scenarios robustly. It shines when user interactions result in several changes to state, perhaps some that are conditional, and mainly when the interaction results in web service calls. It's also great when there's lots of shared state across the application.
We'll start this chapter by understanding the Redux pattern and the different terms, such as actions and reducers. We'll follow the principles of Redux and the benefits it brings.
We are going to change the implementation of our app and use Redux to manage unanswered questions. We'll implement a Redux store with a state containing unanswered questions, searched questions, and the question being viewed. We will interact with the store in the home, search, and question pages. These implementations will give us a good grasp of how to use Redux in a React app.
In this chapter, we'll cover the following topics:
By the end of the chapter, we'll understand the Redux pattern and will be comfortable implementing a state using it in React apps.
We'll use the following tools in this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore the code from a chapter, download the source code repository and open the relevant folder in your editor. If the code is frontend code, npm install can be entered in the terminal to restore the dependencies.
Check out the following video to see the Code in Action: https://bit.ly/3h5fjVc
Redux is a predictable state container that can be used in React apps. In this section, we'll start by going through the three principles in Redux before understanding the benefits of Redux and the situations it works well in. Then, we will dive into the core concepts so that we understand the terminology and the steps that happen as the state is updated. By doing this, we will be well equipped to implement Redux in our app.
Let's take a look at the three principles of Redux:
Redux shines when many components need access to the same data because the state and its interactions are stored in a single place. Having the state read-only and only updatable with a function makes the interactions easier to understand and debug. It is particularly useful when many components are interacting with the state and some of the interactions are asynchronous.
In the following sections, we'll dive into actions and reducers a little more, along with the thing that manages them, which is called a store.
The whole state of the application lives inside what is called a store. The state is stored in a JavaScript object like the following one:
{
questions: {
loading: false,
unanswered: [{
questionId: 1, title: ...
}, {
questionId: 2, title: ...
}]
}
}
In this example, the single object contains an array of unanswered questions, along with whether the questions are being fetched from a web API.
The state won't contain any functions, setters, or getters; it's a simple JavaScript object. The store also orchestrates all the moving parts in Redux, including pushing actions through reducers to update the state.
So, the first thing that needs to happen in order to update the state in a store is to dispatch an action. An action is another simple JavaScript object like the one in the following code snippet:
{ type: 'GettingUnansweredQuestions' }
The type property determines the kind of action that needs to be performed. The type property is an important part of the action because the reducer won't know how to change the state without it. In the previous example, the action doesn't contain anything other than the type property. This is because the reducer doesn't need any more information in order to make changes to the state for this action. The following example is another action:
{
type: 'GotUnansweredQuestions',
questions: [{
questionId: 1, title: ...
}, {
questionId: 2, title: ...
}]
}
This time, an additional bit of information is included in the action in a questions property. This additional information is needed by the reducer to make the change to the state for this kind of action.
Reducers are pure functions that make the actual state changes.
Important note
The following is an example of a reducer:
const questionsReducer = (state, action) => {
switch (action.type) {
case 'GettingUnansweredQuestions': {
return {
...state,
loading: true
};
}
case 'GotUnansweredQuestions': {
return {
...state,
unanswered: action.questions,
loading: false
};
}
}
};
Here are some key points regarding reducers:
You'll notice that the actions and reducer we have just seen didn't have TypeScript types. Obviously, we'll include the necessary types when we implement these in the following sections.
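The purity of a reducer can be demonstrated with plain objects: dispatching an action returns a brand-new state object and leaves the old one untouched. Here is a simplified, typed version of the reducer above:

```typescript
// Simplified, typed version of questionsReducer, demonstrating that a
// reducer is a pure function: it returns new state without mutating
// the state passed in.
interface State {
  loading: boolean;
  unanswered: { questionId: number }[];
}
type Action =
  | { type: 'GettingUnansweredQuestions' }
  | { type: 'GotUnansweredQuestions'; questions: { questionId: number }[] };

const questionsReducer = (state: State, action: Action): State => {
  switch (action.type) {
    case 'GettingUnansweredQuestions':
      return { ...state, loading: true };
    case 'GotUnansweredQuestions':
      return { ...state, unanswered: action.questions, loading: false };
  }
};

const before: State = { loading: false, unanswered: [] };
const after = questionsReducer(before, {
  type: 'GettingUnansweredQuestions',
});
```

Running this, before keeps loading: false while after is a different object with loading: true.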
The following diagram shows the Redux pieces we have just learned about and how a component interacts with them to get and update a state:
Figure 7.1 – How components interact with Redux to get and update a state
Components get a state from the store. Components update the state by dispatching an action that is fed into a reducer that updates the state. The store passes the new state to the component when it is updated.
Now that we have started to get an understanding of what Redux is, it's time to put this into practice in our app.
Before we can use Redux, we need to install it, along with the TypeScript types. Let's perform the following steps to install Redux:
> npm install redux
Note that the core Redux library contains TypeScript types within it, so there is no need for an additional install for these.
> npm install react-redux
These bits allow us to connect our React components to the Redux store.
> npm install @types/react-redux --save-dev
With all the Redux bits now installed, we can start to build our Redux store.
In this section, we are going to implement the type for the state object in our store, along with the initial value for the state. Perform the following steps to do so:
import { QuestionData } from './QuestionsData';
interface QuestionsState {
readonly loading: boolean;
readonly unanswered: QuestionData[];
readonly viewing: QuestionData | null;
readonly searched: QuestionData[];
}
export interface AppState {
readonly questions: QuestionsState;
}
So, our store is going to have a questions property that is an object containing the following properties:
const initialQuestionState: QuestionsState = {
loading: false,
unanswered: [],
viewing: null,
searched: [],
};
So, we have defined the types for the state object and created the initial state object. We have made the state read-only by using the readonly keyword before the state property names.
Let's now move on and define types to represent our actions.
Actions initiate changes to our store state. In this section, we are going to create functions that create all the actions in our store. We will start by understanding all the actions that will be required in our store.
The three processes that will interact with the store are as follows:
Each process comprises the following steps:
Each process has two state changes. This means that each process requires two actions:
So, our store will have six actions in total.
We are going to create the actions in Store.ts. Let's create the two actions for the process that gets unanswered questions. Perform the following steps:
export const GETTINGUNANSWEREDQUESTIONS =
'GettingUnansweredQuestions';
export const gettingUnansweredQuestionsAction = () =>
({
type: GETTINGUNANSWEREDQUESTIONS,
} as const);
Notice the as const keywords after the object being returned. This is a TypeScript const assertion.
Important note
A const assertion on an object will give it an immutable type. It also will result in string properties having a narrow string literal type rather than the wider string type.
The type of this action without the const assertion would be as follows:
{
type: string
}
The type of this action with the const assertion is as follows:
{
readonly type: 'GettingUnansweredQuestions'
}
So, the type property can only be 'GettingUnansweredQuestions' and no other string value because we have typed it to that specific string literal. Also, the type property value can't be changed because it is read-only.
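The difference can be seen in a few lines of standalone TypeScript:

```typescript
// Without a const assertion the object's type widens to { type: string }
const widened = { type: 'GettingUnansweredQuestions' };

// With a const assertion the type narrows to
// { readonly type: 'GettingUnansweredQuestions' }
const narrowed = { type: 'GettingUnansweredQuestions' } as const;

// Only the narrowed value is accepted where the exact literal is required
function requireExact(t: 'GettingUnansweredQuestions'): string {
  return t;
}

const result = requireExact(narrowed.type);
// requireExact(widened.type); // would not compile - widened.type is string
```

At runtime both objects hold the same string; the const assertion only changes what the compiler knows about them.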
export const GOTUNANSWEREDQUESTIONS =
'GotUnansweredQuestions';
export const gotUnansweredQuestionsAction = (
questions: QuestionData[],
) =>
({
type: GOTUNANSWEREDQUESTIONS,
questions: questions,
} as const);
This time, the action contains a property called questions to hold the unanswered questions, as well as the fixed type property. We are expecting the questions to be passed into the function in the questions parameter.
That completes the implementation of the action types for getting unanswered questions.
Let's add the two actions for viewing a question using a similar approach:
export const GETTINGQUESTION = 'GettingQuestion';
export const gettingQuestionAction = () =>
({
type: GETTINGQUESTION,
} as const);
export const GOTQUESTION = 'GotQuestion';
export const gotQuestionAction = (
question: QuestionData | null,
) =>
({
type: GOTQUESTION,
question: question,
} as const);
Notice that the action type property is given a unique value. This is required so that the reducer can determine what changes to make to the store's state.
We also make sure that the type property is given a value that is meaningful. This helps the readability of the code.
The data returned from the server can be a question or can be null if the question isn't found. This is why a union type is used.
The final actions in the store are for searching questions. Let's add these now:
export const SEARCHINGQUESTIONS =
'SearchingQuestions';
export const searchingQuestionsAction = () =>
({
type: SEARCHINGQUESTIONS,
} as const);
export const SEARCHEDQUESTIONS =
'SearchedQuestions';
export const searchedQuestionsAction = (
questions: QuestionData[],
) =>
({
type: SEARCHEDQUESTIONS,
questions,
} as const);
The action types are again given unique and meaningful values. The data returned from the server search is an array of questions.
In summary, we have created functions that return our six actions. The const assertions on the returned objects ensure that each action has a precisely typed type property.
Our Redux store is shaping up nicely now. Let's move on and create a reducer.
A reducer is a function that will make the necessary changes to the state. It takes in the current state and the action being processed as parameters and returns the new state. In this section, we are going to implement a reducer. Let's perform the following steps:
type QuestionsActions =
| ReturnType<typeof gettingUnansweredQuestionsAction>
| ReturnType<typeof gotUnansweredQuestionsAction>
| ReturnType<typeof gettingQuestionAction>
| ReturnType<typeof gotQuestionAction>
| ReturnType<typeof searchingQuestionsAction>
| ReturnType<typeof searchedQuestionsAction>;
We have used the ReturnType utility type to get the return type of the action functions. ReturnType expects a function type to be passed into it, so we use the typeof keyword to get the type of each function.
Important note
When typeof is used for a type, TypeScript will infer the type from the variable after the typeof keyword.
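The pattern can be tried in isolation with one of the action creators:

```typescript
// The same pattern as the store's actions - a function returning a
// const-asserted action object
const gettingUnansweredQuestionsAction = () =>
  ({ type: 'GettingUnansweredQuestions' } as const);

// typeof gives us the function's type; ReturnType then extracts the
// type of what it returns:
// { readonly type: 'GettingUnansweredQuestions' }
type GettingUnansweredQuestionsAction = ReturnType<
  typeof gettingUnansweredQuestionsAction
>;

const action: GettingUnansweredQuestionsAction =
  gettingUnansweredQuestionsAction();
```

Deriving action types this way means we never have to write the action interfaces by hand; they stay in sync with the creator functions automatically.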
const questionsReducer = (
state = initialQuestionState,
action: QuestionsActions
) => {
// TODO - Handle the different actions and return
// new state
return state;
};
The reducer takes in two parameters, one for the current state and another for the action that is being processed. The state will be undefined the first time the reducer is called, so we default this to the initial state we created earlier.
The reducer needs to return the new state object for the given action. We're simply returning the initial state at the moment.
It is important that a reducer always returns a value because a store may have multiple reducers. In this case, all the reducers are called, but won't necessarily process the action.
const questionsReducer = (
state = initialQuestionState,
action: QuestionsActions,
) => {
switch (action.type) {
case GETTINGUNANSWEREDQUESTIONS: {
}
case GOTUNANSWEREDQUESTIONS: {
}
case GETTINGQUESTION: {
}
case GOTQUESTION: {
}
case SEARCHINGQUESTIONS: {
}
case SEARCHEDQUESTIONS: {
}
}
return state;
};
Notice that the type property in the action parameter is strongly typed and that we can only handle the six actions we defined earlier.
Let's handle the GettingUnansweredQuestions action first:
case GETTINGUNANSWEREDQUESTIONS: {
return {
...state,
loading: true,
};
}
We use the spread syntax to copy the previous state into a new object and then set the loading state to true.
Important note
The spread syntax allows an object to expand into a place where key-value pairs are expected. The syntax consists of three dots followed by the object to be expanded. More information can be found at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax.
The spread syntax is commonly used in reducers to copy old state into the new state object without mutating the state passed into the reducer. This is important because the reducer must be a pure function and not change values outside its scope.
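A minimal demonstration of this copy-then-override pattern:

```typescript
// Sample state object - an assumption for illustration
const state = { loading: false, unanswered: [{ questionId: 1 }] };

// Spread copies the existing key-value pairs into a new object;
// properties listed after the spread override the copied ones
const newState = { ...state, loading: true };
```

Note that the copy is shallow: newState.unanswered still references the same array as state.unanswered, which is fine as long as we never mutate that array in place.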
case GOTUNANSWEREDQUESTIONS: {
return {
...state,
unanswered: action.questions,
loading: false,
};
}
We use the spread syntax to copy the previous state into a new object and set the unanswered and loading properties. Notice how we get IntelliSense only for the properties in the GotUnansweredQuestions action:

Figure 7.2 – Narrowed action type
TypeScript has smartly narrowed down the type in the switch branch from the union type that was passed into the reducer for the action parameter.
case GETTINGQUESTION: {
return {
...state,
viewing: null,
loading: true,
};
}
The question being viewed is reset to null and the loading state is set to true while the server request is being made.
case GOTQUESTION: {
return {
...state,
viewing: action.question,
loading: false,
};
}
The question being viewed is set to the question from the action and the loading state is reset to false.
case SEARCHINGQUESTIONS: {
return {
...state,
searched: [],
loading: true,
};
}
The search results are initialized to an empty array and the loading state is set to true while the server request is being made.
case SEARCHEDQUESTIONS: {
return {
...state,
searched: action.questions,
loading: false,
};
}
That's the reducer complete. We used a switch statement to handle the different action types. Within the switch branches, we used the spread syntax to copy the previous state and update the relevant values.
Now, we have all the different pieces implemented for our Redux store, so we are going to create a function to create the store in the next section.
The final task in Store.ts is to create a function that creates the Redux store so that it can be provided to the React components. We need to feed all the store reducers into this function as well. Let's do this by performing the following steps:
import { Store, createStore, combineReducers } from 'redux';
Store is the top-level type representing the Redux store.
We will use the createStore function to create the store later.
combineReducers is a function we can use to put multiple reducers together into a format required by the createStore function.
const rootReducer = combineReducers<AppState>({
questions: questionsReducer
});
An object literal is passed into combineReducers, which contains the properties in our app state, along with the reducer that is responsible for that state. We only have a single property in our app state called questions, and a single reducer managing changes to that state called questionsReducer.
export function configureStore(): Store<AppState> {
const store = createStore(
rootReducer,
undefined
);
return store;
}
This function uses the createStore function from Redux by passing in the combined reducers and undefined as the initial state.
We use the generic Store type as the return type for the function passing in the interface for our app state, which is AppState.
That's all we need to do to create the store.
We have created all the bits and pieces in our store in a single file called Store.ts. For larger stores, it may help maintainability to structure the store across different files. Structuring the store by feature where you have all the actions and the reducer for each feature in a file works well because we generally read and write our code by feature.
In the next section, we will connect our store to the components we implemented in the previous chapters.
In this section, we are going to connect the existing components in our app to our store. We will start by adding what is called a store provider to the root of our component tree, which allows components lower in the tree to consume the store. We will then connect the home, question, and search pages to the Redux store using hooks from React Redux.
Let's provide the store to the root of our component tree. To do that, perform the following steps:
import React from 'react';
import { Provider } from 'react-redux';
import { configureStore } from './Store';
This is the first time we have referenced anything from React-Redux. Remember that this library helps React components interact with a Redux store.
const store = configureStore();
function App() {
...
}
return (
<Provider store={store}>
<BrowserRouter>
...
</BrowserRouter>
</Provider>
);
Components lower in the component tree can now connect to the store.
Let's connect the home page to the store. To do that, perform the following steps:
import { useSelector, useDispatch } from 'react-redux';
We will eventually use the useSelector function to get state from the store. The useDispatch function will be used to invoke actions.
import {
gettingUnansweredQuestionsAction,
gotUnansweredQuestionsAction,
AppState,
} from './Store';
import {
getUnansweredQuestions,
} from './QuestionsData';
The QuestionData type will be inferred in our revised implementation.
export const HomePage = () => {
const dispatch = useDispatch();
...
}
export const HomePage = () => {
const dispatch = useDispatch();
const questions = useSelector(
(state: AppState) =>
state.questions.unanswered,
);
...
}
The function passed to useSelector is often referred to as a selector. It takes in the store's state object and contains logic to return the required part of the state from the store.
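Outside React, a selector is just a function from the whole state to a slice of it. Here is a plain-TypeScript sketch, called directly on sample state instead of via useSelector:

```typescript
// Simplified state shapes - assumptions standing in for Store.ts
interface QuestionsState {
  loading: boolean;
  unanswered: { questionId: number }[];
}
interface AppState {
  questions: QuestionsState;
}

// A selector takes the whole store state and returns just the slice a
// component needs - the same shape of function passed to useSelector
const selectUnanswered = (state: AppState) => state.questions.unanswered;

const appState: AppState = {
  questions: { loading: false, unanswered: [{ questionId: 1 }] },
};
const unanswered = selectUnanswered(appState);
```

Keeping selectors small and focused means components re-render only when the slice they selected actually changes.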
const questions = useSelector(
(state: AppState) =>
state.questions.unanswered,
);
const questionsLoading = useSelector(
(state: AppState) => state.questions.loading,
);
const questionsLoading = useSelector(
(state: AppState) => state.questions.loading,
);
const [
questions,
setQuestions,
] = React.useState<QuestionData[]>([]);
const [
questionsLoading,
setQuestionsLoading,
] = React.useState(true);
React.useEffect(() => {
const doGetUnansweredQuestions = async () => {
dispatch(gettingUnansweredQuestionsAction());
const unansweredQuestions = await
getUnansweredQuestions();
setQuestions(unansweredQuestions);
setQuestionsLoading(false);
};
doGetUnansweredQuestions();
}, []);
We dispatch the action to the store using the dispatch function.
React.useEffect(() => {
const doGetUnansweredQuestions = async () => {
dispatch(gettingUnansweredQuestionsAction());
const unansweredQuestions = await
getUnansweredQuestions();
dispatch(gotUnansweredQuestionsAction(unansweredQuestions));
setQuestions(unansweredQuestions);
setQuestionsLoading(false);
};
doGetUnansweredQuestions();
}, []);
We pass in the unanswered questions to the function that creates this action. We then dispatch the action using the dispatch function.
React.useEffect(() => {
const doGetUnansweredQuestions = async () => {
…
setQuestions(unansweredQuestions);
setQuestionsLoading(false);
};
doGetUnansweredQuestions();
}, []);
React.useEffect(() => {
…
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
Figure 7.3 – HomePage component connected to the Redux store
Congratulations! We have just connected our first component to a Redux store!
The key parts of connecting a component to the store are using the useSelector hook to select the required state and using the useDispatch hook to dispatch actions.
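The division of labor between these two hooks can be pictured with a toy model of a store (for illustration only — this is not the real Redux or React Redux implementation, and all the names are made up):

```typescript
// A miniature store: state lives in one place, actions are dispatched to
// reducer-like logic, and "selectors" pull out the piece a component needs.
interface QuestionsState {
  unanswered: string[];
  loading: boolean;
}
interface AppState {
  questions: QuestionsState;
}
type Action =
  | { type: 'GettingUnansweredQuestions' }
  | { type: 'GotUnansweredQuestions'; questions: string[] };

const createStore = (initial: AppState) => {
  let state = initial;
  return {
    getState: () => state,
    // dispatch hands the action to logic that computes the next state
    dispatch: (action: Action) => {
      switch (action.type) {
        case 'GettingUnansweredQuestions':
          state = { questions: { ...state.questions, loading: true } };
          break;
        case 'GotUnansweredQuestions':
          state = {
            questions: { unanswered: action.questions, loading: false },
          };
          break;
      }
    },
  };
};

// useSelector boils down to: apply a selector function to the store's state
const select = <T>(
  store: { getState: () => AppState },
  selector: (state: AppState) => T,
): T => selector(store.getState());

const store = createStore({ questions: { unanswered: [], loading: false } });
store.dispatch({ type: 'GettingUnansweredQuestions' });
console.log(select(store, (s) => s.questions.loading)); // true
store.dispatch({ type: 'GotUnansweredQuestions', questions: ['Why TypeScript?'] });
console.log(select(store, (s) => s.questions.unanswered[0])); // Why TypeScript?
```

In the real library, useSelector additionally subscribes the component to the store so that it re-renders when the selected state changes.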
We'll use a similar approach to connect another component to the store.
Let's connect the question page to the store. To do that, perform the following steps:
import {
  useSelector,
  useDispatch,
} from 'react-redux';
import {
  AppState,
  gettingQuestionAction,
  gotQuestionAction,
} from './Store';
import { getQuestion, postAnswer } from './QuestionsData';

export const QuestionPage = () => {
  const dispatch = useDispatch();
  ...
}
const dispatch = useDispatch();
const question = useSelector(
  (state: AppState) => state.questions.viewing,
);
const [question, setQuestion] = React.useState<QuestionData | null>(null);
React.useEffect(() => {
  const doGetQuestion = async (questionId: number) => {
    dispatch(gettingQuestionAction());
    const foundQuestion = await getQuestion(questionId);
    dispatch(gotQuestionAction(foundQuestion));
  };
  if (questionId) {
    doGetQuestion(Number(questionId));
  }
  // eslint-disable-next-line react-hooks/exhaustive-deps
}, [questionId]);
Figure 7.4 – QuestionPage component connected to the Redux store
The page will render correctly.
The question page is now connected to the store. Next, we'll connect our final component to the store.
Connecting the search page
Let's connect the search page to the store. To do that, perform the following steps:
import { useSelector, useDispatch } from 'react-redux';
import {
  AppState,
  searchingQuestionsAction,
  searchedQuestionsAction,
} from './Store';
import { searchQuestions } from './QuestionsData';

export const SearchPage = () => {
  const dispatch = useDispatch();
  ...
}
const dispatch = useDispatch();
const questions = useSelector(
  (state: AppState) => state.questions.searched,
);
const [questions, setQuestions] = React.useState<QuestionData[]>([]);
React.useEffect(() => {
  const doSearch = async (criteria: string) => {
    dispatch(searchingQuestionsAction());
    const foundResults = await searchQuestions(criteria);
    dispatch(searchedQuestionsAction(foundResults));
  };
  doSearch(search);
  // eslint-disable-next-line react-hooks/exhaustive-deps
}, [search]);
Figure 7.5 – SearchPage component connected to the Redux store
The page will render correctly.
The search page is now connected to the store.
That completes connecting the required components to our Redux store.
We accessed the Redux store state by using the useSelector hook from React Redux. We passed a function into this that retrieved the appropriate piece of state we needed in the React component.
To begin the process of a state change, we invoked an action using a function returned from the useDispatch hook from React Redux. We passed the relevant action object into this function, containing information to make the state change.
Notice that we didn't change the JSX in any of the connected components because we used the same state variable names. We have simply moved this state to the Redux store.
In this chapter, we learned that the state in a Redux store is stored in a single place, is read-only, and is changed with a pure function called a reducer. Our components don't talk directly to the reducer; instead, they dispatch objects called actions that describe the change to the reducer. We now know how to create a strongly typed Redux store containing a read-only state object, along with the necessary reducer functions.
We learned that React components can access a Redux store if they are children of a Redux Provider component. We also know how to get state from the store in a component using the useSelector hook, and how to dispatch actions using the dispatch function returned by the useDispatch hook.
There are lots of bits and pieces to get our heads around when implementing Redux within a React app. It does shine in scenarios where the state management is complex because Redux forces us to break the logic up into separate pieces that are easy to understand and maintain. It is also very useful for managing global state, such as user information, because it is easily accessible below the Provider component.
Now, we have built the majority of the frontend in our app, which means it's time to turn our attention to the backend. In the next chapter, we'll focus on how we can interact with the database in ASP.NET.
Before we end this chapter, let's test our knowledge with some questions:
useDispatch(gettingQuestionAction);
const dispatch = useDispatch();
...
dispatch(gettingQuestionAction);
Here are some useful links so that you can learn more about the topics that were covered in this chapter:
In this section, we will build the backend of our Q&A app by creating a REST API for interacting with questions and answers. We'll use Dapper behind the REST API to interact with the SQL Server database. We will learn techniques to make our backend perform and scale well. We will learn how to secure the REST API before consuming it from the frontend.
This section comprises the following chapters:
It's time to start working on the backend of our Q&A app. In this chapter, we are going to build the database for the app and interact with it from ASP.NET Core with a library called Dapper.
We'll start by understanding what Dapper is and the benefits it brings over the Entity Framework. We'll create the data access layer in our app by learning how to read data from the database into model classes using Dapper. We'll then move on to writing data to the database from model classes.
Deploying database changes during releases of our app is an important and non-trivial task. So, we'll set up the management of database migrations using a library called DbUp toward the end of this chapter.
In this chapter, we'll cover the following topics:
By the end of this chapter, we will have created a SQL Server database that stores questions and answers, and implemented a performant data layer that interacts with it.
We will need to use the following tools in this chapter:
All of the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore code from a chapter, you can download the necessary source code repository and open the relevant folder in the relevant editor. If the code is frontend code, then npm install can be entered in the Terminal to restore any dependencies.
Check out the following video to see the code in action: http://bit.ly/2EVDsv6.
In this section, we are going to create a SQL Server database for our Q&A app. We will then create tables in the database that will store questions and answers. After that, we will create stored procedures that read and write records in these tables.
We are going to create the database using SQL Server Management Studio (SSMS) by carrying out the following steps:

Figure 8.1 – Connecting to SQL Server Express

Figure 8.2 – Creating the Q&A database
Figure 8.3 – The Q&A database in Object Explorer
Nice and easy! We are going to create database tables for the questions and answers in the following section.
Let's create some tables for the users, questions, and answers in our new database in SSMS:
Figure 8.4 – The Q&A database in Object Explorer
Here, a Question table has been created. This holds questions that have been asked and contains the following fields:
An Answer table has also been created. This holds answers to the questions and contains the following fields:
The SQL Script has added some example data. If we right-click on the Question table in Object Explorer and choose the Edit Top 200 rows option, we'll see the data in our table:
Figure 8.5 – Questions in the Q&A database
So, we now have a database that contains our tables and have some nice data to work with.
Let's create some stored procedures that our app will use to interact with the database tables.
Copy the contents of the SQL Script at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition/blob/master/chapter-08/start/backend/SQLScripts/02-Sprocs.sql. Now, follow these steps:

Figure 8.6 – Stored procedures in the Q&A database
We'll be using these stored procedures to interact with the database from the ASP.NET Core app.
EXEC dbo.Question_GetMany_BySearch @Search = 'type'
So, this SQL command will execute the Question_GetMany_BySearch stored procedure, passing in the @Search parameter with a value of type. This stored procedure returns questions that contain the value of the @Search parameter in their title or content.
Figure 8.7 – Results from running the stored procedure
With our SQL Server database in place, we can now turn our attention to Dapper.
Dapper is a performance-focused simple object mapper for .NET that helps map SQL query output to instances of a C# class. It is built and maintained by the Stack Overflow team, has been released as open source, and is a popular alternative to Microsoft's Entity Framework.
So, why use Dapper rather than Entity Framework? The goal of Entity Framework is to abstract away the database, so it trades learning SQL for Entity Framework-specific objects such as DBSet and DataContext. We generally don't write SQL with Entity Framework – instead, we write LINQ queries, which are translated into SQL by Entity Framework.
If we are implementing a large database that serves a large number of users, Entity Framework can be a challenge because the queries it generates can be inefficient. We need to understand Entity Framework well to make it scale, which can be a significant investment. When we find Entity Framework queries that are slow, we need to understand SQL to properly understand the root cause. So, it makes sense to invest time in learning SQL really well rather than the abstraction that Entity Framework provides. Also, if we have a team with good database and SQL skills, it makes sense to take full advantage of those skills.
Dapper is much simpler than Entity Framework. Later in this chapter, we'll see that we can read and write data from a SQL database with just a few lines of C# code. It allows us to interact with stored procedures in the database, automatically mapping C# class instances to SQL parameters and mapping query results to C# class instances. In the next section, we will install and start using Dapper to access our data.
In this section, we are going to install and configure Dapper. We will also install the Microsoft SQL Server client package that Dapper uses. Let's carry out the following steps:
Important Note
NuGet is a tool that downloads third-party and Microsoft libraries and manages the references to them so that the libraries can easily be updated.

Figure 8.8 – Installing Dapper in the NuGet manager
We may be asked to accept a licensing agreement before Dapper can be downloaded and installed into our project.

Figure 8.9 – Installing Microsoft.Data.Client
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost\\SQLEXPRESS;Database=QandA;Trusted_Connection=True;"
  },
  ...
}
Important Note
The appsettings.json file is a JSON-formatted file that contains various configuration settings for an ASP.NET Core app.
Obviously, change the connection string so that it references your SQL Server and database.
So, that's Dapper installed, along with a connection string to our database in place. Next, we will learn how to read data from the database using Dapper.
In this section, we are going to write some C# code that reads data from the database.
We are going to use the popular repository design pattern to structure our data access code. This will allow us to provide a nice, centralized abstraction of the data layer.
We are going to start by creating a data repository class that will hold all of the queries we are going to make to the data. We are going to create C# classes that hold the data we get from the database, called models.
We will implement methods for getting all the questions, getting questions from a search, getting unanswered questions, getting a single question, getting information stating whether a question exists, and getting an answer.
Let's create a class that will hold all of the methods for interacting with the database:

Figure 8.10 – Skeleton DataRepository class
public interface IDataRepository
{
    IEnumerable<QuestionGetManyResponse> GetQuestions();
    IEnumerable<QuestionGetManyResponse> GetQuestionsBySearch(string search);
    IEnumerable<QuestionGetManyResponse> GetUnansweredQuestions();
    QuestionGetSingleResponse GetQuestion(int questionId);
    bool QuestionExists(int questionId);
    AnswerGetResponse GetAnswer(int answerId);
}
Here, we are going to have six methods in the data repository that will read different bits of data from our database. Note that this won't compile yet because we are referencing classes that don't exist.
public class DataRepository : IDataRepository
{
}

Figure 8.11 – Automatically implementing the IDataRepository interface
Skeleton methods will be added to the repository class that satisfy the interface.
public class DataRepository : IDataRepository
{
    private readonly string _connectionString;
    ...
}
Important Note
The readonly keyword prevents the variable from being changed outside of the class constructor, which is what we want in this case.
public class DataRepository : IDataRepository
{
    private readonly string _connectionString;

    public DataRepository(IConfiguration configuration)
    {
        _connectionString = configuration["ConnectionStrings:DefaultConnection"];
    }
    ...
}
The configuration parameter in the constructor gives us access to items within the appsettings.json file. The key we use when accessing the configuration object is the path to the item we want from the appsettings.json file, with colons being used to navigate the fields in the JSON.
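To make the colon-delimited lookup concrete, here is a small sketch of the idea (illustrative TypeScript, not how .NET's IConfiguration is actually implemented):

```typescript
// Navigate a nested settings object using a colon-delimited key, mimicking
// how "ConnectionStrings:DefaultConnection" addresses a nested field in
// appsettings.json.
const getSetting = (settings: any, key: string): unknown => {
  let section = settings;
  for (const part of key.split(':')) {
    if (section == null) {
      return undefined;
    }
    section = section[part];
  }
  return section;
};

const appSettings = {
  ConnectionStrings: {
    DefaultConnection:
      'Server=localhost\\SQLEXPRESS;Database=QandA;Trusted_Connection=True;',
  },
};

console.log(getSetting(appSettings, 'ConnectionStrings:DefaultConnection'));
```

Each colon steps one level deeper into the JSON, which is exactly how the `"ConnectionStrings:DefaultConnection"` key reaches the nested connection string.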
How does the configuration parameter get passed into the constructor? The answer is dependency injection, which we'll cover in the next chapter.
Figure 8.12 – Referencing the Microsoft.Extensions.Configuration namespace
We've made a good start on the repository class. We do have compile errors, but these will disappear as we fully implement the methods.
Let's implement the GetQuestions method first:
using Microsoft.Data.SqlClient;
using Dapper;
public IEnumerable<QuestionGetManyResponse> GetQuestions()
{
    using (var connection = new SqlConnection(_connectionString))
    {
    }
}
Notice that we've used a using block to declare the database connection.
Important Note
A using block automatically disposes of the object defined in the block when the program exits the scope of the block. This includes whether a return statement is invoked within the block, as well as errors occurring within the block.
So, the using statement is a convenient way of ensuring the connection is disposed of. Notice that we are using a SqlConnection from the Microsoft SQL client library because this is what the Dapper library extends.
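The guarantee a using block gives can be pictured with a try/finally in other languages. A minimal illustration (TypeScript with made-up names, not the C# feature itself):

```typescript
// A disposable resource: dispose() must run no matter how the work ends.
class FakeConnection {
  disposed = false;
  open() {
    // pretend to open a connection
  }
  dispose() {
    this.disposed = true;
  }
}

// Equivalent of `using (var connection = ...) { ... }`: the finally
// clause runs on a normal return AND when an error is thrown.
const withConnection = <T>(work: (c: FakeConnection) => T): T => {
  const connection = new FakeConnection();
  try {
    connection.open();
    return work(connection);
  } finally {
    connection.dispose();
  }
};
```

Whether the work returns a value or throws, the connection is always disposed of before control leaves the block.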
public IEnumerable<QuestionGetManyResponse> GetQuestions()
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
    }
}
public IEnumerable<QuestionGetManyResponse> GetQuestions()
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        return connection.Query<QuestionGetManyResponse>(
            @"EXEC dbo.Question_GetMany"
        );
    }
}
We've used a Query extension method from Dapper on the connection object to execute the Question_GetMany stored procedure. We then simply return the results of this query from our method. Nice and simple!
Notice how we pass in a class, QuestionGetManyResponse, into the generic parameter of the Query method. This defines the model class the query results should be stored in. We'll define QuestionGetManyResponse in the next step.
public class QuestionGetManyResponse
{
    public int QuestionId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public string UserName { get; set; }
    public DateTime Created { get; set; }
}
The property names match the fields that have been output from the Question_GetMany stored procedure. This allows Dapper to automatically map the data from the database to this class. The property types have also been carefully chosen so that this Dapper mapping process works.
Important Note
Note that the class doesn't need to contain properties for all of the fields that are output from the stored procedure. Dapper will ignore fields that don't have the corresponding properties in the class.
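The name-matching idea can be pictured with a toy mapper (an illustration only — Dapper's real mapper is far more sophisticated and generates compiled mapping code):

```typescript
type Row = Record<string, unknown>;

// Toy mapper: copy a row's columns onto a target shape by matching names.
// Columns without a corresponding property on the target are ignored.
const mapRow = (row: Row, template: Row): Row => {
  const result: Row = { ...template };
  for (const key of Object.keys(template)) {
    if (key in row) {
      result[key] = row[key];
    }
  }
  return result;
};

const row: Row = {
  QuestionId: 1,
  Title: 'Why TypeScript?',
  ExtraColumn: 'ignored', // no matching property, so it is dropped
};
const mapped = mapRow(row, { QuestionId: 0, Title: '' });
console.log(mapped.QuestionId, mapped.Title); // 1 Why TypeScript?
```

The template plays the role of the C# model class: only properties it declares are populated, which is why extra columns in a stored procedure's output are harmless.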
using QandA.Data.Models;
Congratulations – we have implemented our first repository method! This consisted of just a few lines of code that opened a database connection and executed a query. This has shown us that writing data access code in Dapper is super simple.
Let's implement the GetQuestionsBySearch method, which is similar to the GetQuestions method, but this time, the method and stored procedure have a parameter. Let's carry out the following steps:
public IEnumerable<QuestionGetManyResponse> GetQuestionsBySearch(string search)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        // TODO - execute Question_GetMany_BySearch stored procedure
    }
}
public IEnumerable<QuestionGetManyResponse> GetQuestionsBySearch(string search)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        return connection.Query<QuestionGetManyResponse>(
            @"EXEC dbo.Question_GetMany_BySearch @Search = @Search",
            new { Search = search }
        );
    }
}
Notice how we pass in the stored procedure parameter value.
Important Note
In this case, we've used an anonymous object for the parameters to save us defining a class for the object.
Why do we have to pass a parameter to Dapper? Why can't we just do the following?
return connection.Query<QuestionGetManyResponse>($"EXEC dbo.Question_GetMany_BySearch '{search}'");
Well, there are several reasons, but the main one is that the preceding code is vulnerable to a SQL injection attack. So, it's always best to pass parameters into Dapper rather than trying to construct the SQL ourselves.
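To see the danger concretely, consider what the command text becomes when a malicious search term is interpolated (a sketch of the string-building only; the input shown is just an example):

```typescript
// Building SQL by interpolation: the user's input becomes part of the
// command text, so crafted input can smuggle in extra statements.
const buildUnsafeSql = (search: string) =>
  `EXEC dbo.Question_GetMany_BySearch '${search}'`;

const malicious = "'; DELETE FROM dbo.Question; --";
const sql = buildUnsafeSql(malicious);
console.log(sql);
// EXEC dbo.Question_GetMany_BySearch ''; DELETE FROM dbo.Question; --'
// The quote in the input terminates the string literal, and everything
// after it executes as SQL. Parameterized queries send the value
// separately from the command text, so this cannot happen.
```

This is why the parameter value is passed to Dapper in an object rather than concatenated into the SQL string.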
That's our second repository method complete. Nice and simple!
Let's implement the GetUnansweredQuestions method, which is very similar to the GetQuestions method:
public IEnumerable<QuestionGetManyResponse> GetUnansweredQuestions()
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        return connection.Query<QuestionGetManyResponse>(
            "EXEC dbo.Question_GetUnanswered"
        );
    }
}
Here, we opened the connection, executed the Question_GetUnanswered stored procedure, and returned the results in the QuestionGetManyResponse class we had already created.
Let's implement the GetQuestion method now:
public QuestionGetSingleResponse GetQuestion(int questionId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        var question = connection.QueryFirstOrDefault<QuestionGetSingleResponse>(
            @"EXEC dbo.Question_GetSingle @QuestionId = @QuestionId",
            new { QuestionId = questionId }
        );
        // TODO - Get the answers for the question
        return question;
    }
}
This method is a little different from the previous methods because we are using the QueryFirstOrDefault Dapper method to return a single record (or null if the record isn't found) rather than a collection of records.
public QuestionGetSingleResponse GetQuestion(int questionId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        var question = connection.QueryFirstOrDefault<QuestionGetSingleResponse>(
            @"EXEC dbo.Question_GetSingle @QuestionId = @QuestionId",
            new { QuestionId = questionId }
        );
        question.Answers = connection.Query<AnswerGetResponse>(
            @"EXEC dbo.Answer_Get_ByQuestionId @QuestionId = @QuestionId",
            new { QuestionId = questionId }
        );
        return question;
    }
}
public QuestionGetSingleResponse GetQuestion(int questionId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        var question = connection.QueryFirstOrDefault<QuestionGetSingleResponse>(
            @"EXEC dbo.Question_GetSingle @QuestionId = @QuestionId",
            new { QuestionId = questionId }
        );
        if (question != null)
        {
            question.Answers = connection.Query<AnswerGetResponse>(
                @"EXEC dbo.Answer_Get_ByQuestionId @QuestionId = @QuestionId",
                new { QuestionId = questionId }
            );
        }
        return question;
    }
}
public class QuestionGetSingleResponse
{
    public int QuestionId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public string UserName { get; set; }
    public string UserId { get; set; }
    public DateTime Created { get; set; }
    public IEnumerable<AnswerGetResponse> Answers { get; set; }
}
These properties match up with the data that was returned from the Question_GetSingle stored procedure.
public class AnswerGetResponse
{
    public int AnswerId { get; set; }
    public string Content { get; set; }
    public string UserName { get; set; }
    public DateTime Created { get; set; }
}
These properties match up with the data that was returned from the Answer_Get_ByQuestionId stored procedure.
The GetQuestion method should now compile fine.
Now, let's implement the QuestionExists method by following the same approach we followed for the previous methods:
public bool QuestionExists(int questionId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        return connection.QueryFirst<bool>(
            @"EXEC dbo.Question_Exists @QuestionId = @QuestionId",
            new { QuestionId = questionId }
        );
    }
}
We are using the Dapper QueryFirst method rather than QueryFirstOrDefault because the stored procedure will always return a single record.
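The distinction between the two methods mirrors the familiar first versus firstOrDefault semantics, which can be sketched generically (illustrative TypeScript, not Dapper's actual API):

```typescript
// first: assumes at least one row exists and throws if there is none.
const first = <T>(rows: T[]): T => {
  if (rows.length === 0) {
    throw new Error('Sequence contains no elements');
  }
  return rows[0];
};

// firstOrDefault: tolerates an empty result by returning null instead.
const firstOrDefault = <T>(rows: T[]): T | null =>
  rows.length === 0 ? null : rows[0];
```

Question_Exists always returns a row, so the throwing variant is safe there; a lookup that may find nothing (such as fetching an answer by ID) should use the null-returning variant instead.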
The last method we will implement in this section is GetAnswer:
public AnswerGetResponse GetAnswer(int answerId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        return connection.QueryFirstOrDefault<AnswerGetResponse>(
            @"EXEC dbo.Answer_Get_ByAnswerId @AnswerId = @AnswerId",
            new { AnswerId = answerId }
        );
    }
}
There is nothing new here – the implementation follows the same pattern as the previous methods.
We have now implemented all of the methods in the data repository for reading data. In the next section, we'll turn our attention to writing data.
In this section, we are going to implement methods in our data repository that will write to the database. We will start by extending the interface for the repository and then do the actual implementation.
The stored procedures that perform the write operations are already in the database. We will be interacting with these stored procedures using Dapper.
We'll start by adding the necessary methods to the repository interface:
public interface IDataRepository
{
    ...
    QuestionGetSingleResponse PostQuestion(QuestionPostRequest question);
    QuestionGetSingleResponse PutQuestion(int questionId, QuestionPutRequest question);
    void DeleteQuestion(int questionId);
    AnswerGetResponse PostAnswer(AnswerPostRequest answer);
}
Here, we must implement some methods that will add, change, and delete questions, as well as adding an answer.
Let's create the PostQuestion method in DataRepository.cs in order to add a new question:
public QuestionGetSingleResponse PostQuestion(QuestionPostRequest question)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        var questionId = connection.QueryFirst<int>(
            @"EXEC dbo.Question_Post
                @Title = @Title, @Content = @Content,
                @UserId = @UserId, @UserName = @UserName,
                @Created = @Created",
            question
        );
        return GetQuestion(questionId);
    }
}
This is a very similar implementation to the methods that read data. We are using the QueryFirst Dapper method because the stored procedure returns the ID of the new question after inserting it into the database table. Our method returns the saved question by calling the GetQuestion method with questionId, which was returned from the Question_Post stored procedure.
We've used a model class called QuestionPostRequest for Dapper to map to the SQL parameters. Let's create this class in the models folder:
public class QuestionPostRequest
{
    public string Title { get; set; }
    public string Content { get; set; }
    public string UserId { get; set; }
    public string UserName { get; set; }
    public DateTime Created { get; set; }
}
Great stuff! That's our first write method created.
Let's create the PutQuestion method in DataRepository.cs to change a question. This is very similar to the PostQuestion method we have just implemented:
public QuestionGetSingleResponse PutQuestion(int questionId, QuestionPutRequest question)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        connection.Execute(
            @"EXEC dbo.Question_Put
                @QuestionId = @QuestionId, @Title = @Title,
                @Content = @Content",
            new { QuestionId = questionId, question.Title, question.Content }
        );
        return GetQuestion(questionId);
    }
}
Notice that we are using the Dapper Execute method because we are simply executing a stored procedure and not returning anything.
We've created the SQL parameters from a model class called QuestionPutRequest and the questionId parameter that was passed into the method. Let's create the QuestionPutRequest class in the models folder:
public class QuestionPutRequest
{
    public string Title { get; set; }
    public string Content { get; set; }
}
That's another method implemented.
Moving on, let's implement a method for deleting a question:
public void DeleteQuestion(int questionId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        connection.Execute(
            @"EXEC dbo.Question_Delete @QuestionId = @QuestionId",
            new { QuestionId = questionId }
        );
    }
}
Again, we are using the Dapper Execute method because nothing is returned from the stored procedure.
The last method we are going to implement will allow us to add an answer to a question:
public AnswerGetResponse PostAnswer(AnswerPostRequest answer)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        return connection.QueryFirst<AnswerGetResponse>(
            @"EXEC dbo.Answer_Post
                @QuestionId = @QuestionId, @Content = @Content,
                @UserId = @UserId, @UserName = @UserName,
                @Created = @Created",
            answer
        );
    }
}
As well as inserting the answer into the database table, the stored procedure returns the saved answer. Here, we are using the Dapper QueryFirst method to execute the stored procedure and return the saved answer.
We also need to create the AnswerPostRequest model class in the models folder:
public class AnswerPostRequest
{
    public int QuestionId { get; set; }
    public string Content { get; set; }
    public string UserId { get; set; }
    public string UserName { get; set; }
    public DateTime Created { get; set; }
}
That completes our data repository. We've chosen to have a single repository class containing all of the methods that read and write data. We could, of course, create multiple repositories for different areas of the database, which would be a good idea if the app was larger.
As we add features to our app that involve database changes, we'll need a mechanism for deploying these database changes. We'll look at this in the next section.
DbUp is an open source library that helps us deploy changes to SQL Server databases. It keeps track of SQL Scripts embedded within an ASP.NET Core project, along with which ones have been executed on the database. It contains methods that we can use to execute the SQL Scripts that haven't been executed yet on the database.
In this section, we are going to add DbUp to our project and configure it to do our database migrations when our app starts up.
Let's start by installing DbUp by carrying out the following steps in our backend project, in Visual Studio:
Figure 8.13 – Adding DbUp in NuGet Manager
We may be asked to accept a licensing agreement before DbUp can be downloaded and installed in our project.
Configuring DbUp to do migrations on app startup
Now that we have DbUp installed in our project, let's get it to do database migrations when the app starts up:
using DbUp;

public void ConfigureServices(IServiceCollection services)
{
    var connectionString =
        Configuration.GetConnectionString("DefaultConnection");
    EnsureDatabase.For.SqlDatabase(connectionString);
    // TODO - Create and configure an instance of the DbUp upgrader
    // TODO - Do a database migration if there are any pending SQL Scripts
    ...
}
This gets the database connection from the appsettings.json file and creates the database if it doesn't exist.
public void ConfigureServices(IServiceCollection services)
{
    var connectionString =
        Configuration.GetConnectionString("DefaultConnection");
    EnsureDatabase.For.SqlDatabase(connectionString);
    var upgrader = DeployChanges.To
        .SqlDatabase(connectionString, null)
        .WithScriptsEmbeddedInAssembly(
            System.Reflection.Assembly.GetExecutingAssembly()
        )
        .WithTransaction()
        .Build();
    // TODO - Do a database migration if there are any pending SQL Scripts
    ...
}
We've told DbUp where the database is and to look for SQL Scripts that have been embedded in our project. We've also told DbUp to do the database migrations in a transaction.
public void ConfigureServices(IServiceCollection services)
{
    var connectionString =
        Configuration.GetConnectionString("DefaultConnection");
    EnsureDatabase.For.SqlDatabase(connectionString);
    var upgrader = DeployChanges.To
        .SqlDatabase(connectionString, null)
        .WithScriptsEmbeddedInAssembly(
            System.Reflection.Assembly.GetExecutingAssembly()
        )
        .WithTransaction()
        .LogToConsole()
        .Build();
    if (upgrader.IsUpgradeRequired())
    {
        upgrader.PerformUpgrade();
    }
    ...
}
We are using the IsUpgradeRequired method on the DbUp upgrader to check whether there are any pending SQL Scripts, and the PerformUpgrade method to do the actual migration.
In the previous subsection, we told DbUp to look for SQL Scripts that have been embedded in our project. Now, we are going to embed SQL Scripts for the tables and stored procedures in our project so that DbUp will execute them if they haven't already been executed when our app loads:

Figure 8.14 – Adding a SQL file to a Visual Studio project

Figure 8.15 – Changing a file to an embedded resource
This embeds the SQL Script in our project so that DbUp can find it.
Important Note
DbUp will run SQL Scripts in name order, so it's important to have a script naming convention that caters to this. In our example, we are prefixing the script name with a two-digit number.
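Because the ordering is a plain name sort, numeric prefixes should be zero-padded; otherwise, 10 sorts before 2. A quick demonstration (hypothetical script names):

```typescript
// DbUp-style ordering is a lexicographic sort on the script name.
// Zero-padded prefixes sort in the intended order:
const padded = ['02-Sprocs.sql', '10-Indexes.sql', '01-Tables.sql', '09-Views.sql'];
const paddedOrder = [...padded].sort();
console.log(paddedOrder.join(', '));
// 01-Tables.sql, 02-Sprocs.sql, 09-Views.sql, 10-Indexes.sql

// Without padding, "10" sorts before "2" and the migration order breaks:
const unpadded = ['2-Sprocs.sql', '10-Indexes.sql', '1-Tables.sql', '9-Views.sql'];
const unpaddedOrder = [...unpadded].sort();
console.log(unpaddedOrder.join(', '));
// 1-Tables.sql, 10-Indexes.sql, 2-Sprocs.sql, 9-Views.sql
```

A two-digit prefix keeps the order correct for up to 99 scripts; use more digits if the project is expected to accumulate more.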
So, those are the SQL Scripts that make up our database. They have been saved within our project.
Now that the database migration code is in place, it is time to test a migration. To do this, we will remove the database tables and stored procedures and expect them to be recreated when our API runs.
Let's carry out the following steps:

Figure 8.16 – Deleting a database

Figure 8.17 – Adding a database

Figure 8.18 – The SchemaVersions table in Object Explorer

Figure 8.19 – SchemaVersions data
This is a table that DbUp uses to manage what scripts have been executed. So, we'll see our two scripts listed in this table.
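The journal mechanism can be modeled as a simple filter over script names (a toy model with made-up names — DbUp itself also handles transactions, logging, and recording the journal rows):

```typescript
// Scripts embedded in the project vs. scripts the journal says have run.
const embeddedScripts = ['01-Tables.sql', '02-Sprocs.sql', '03-NewColumn.sql'];
const journal = new Set(['01-Tables.sql', '02-Sprocs.sql']);

// Pending scripts = embedded minus journaled, executed in name order.
const pending = embeddedScripts
  .filter((name) => !journal.has(name))
  .sort();

const isUpgradeRequired = pending.length > 0;
console.log(isUpgradeRequired, pending); // true, with one pending script

// "Performing the upgrade" records each script so it never runs again.
for (const name of pending) {
  journal.add(name);
}
console.log(journal.has('03-NewColumn.sql')); // true
```

On the next startup, the filter finds nothing pending, so the migration is skipped — which is why deleting the database (or the SchemaVersions rows) causes all the scripts to run again.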
With that, our project has been set up to handle database migrations. All we need to do is add the necessary SQL Script files in the SQLScripts folder, remembering to embed them as a resource. DbUp will then perform the migration when the app runs again.
We now understand that Dapper is a simple way of interacting with a database in a performant manner. It's a great choice when our team already has SQL Server skills because it doesn't abstract the database away from us.
In this chapter, we learned that Dapper adds various extension methods to the Microsoft SqlConnection object for reading and writing to the database. Dapper maps the results of a query to instances of a C# class automatically by matching the field names in the query result to the class properties. Query parameters can be passed in using a C# class, with Dapper automatically mapping properties in the C# class to the SQL parameters.
We then discovered that DbUp is a simple open source tool that can be used to manage database migrations. We can embed SQL Scripts within our project and write code that is executed when our app loads to instruct DbUp to check and perform any necessary migrations.
In the next chapter, we are going to create the RESTful API for our app by leveraging the data access code we have written in this chapter.
Answer the following questions to test the knowledge you have gained from this chapter:
return connection.Query<BuildingGetManyResponse>(
@"EXEC dbo.Building_GetMany_BySearch
@Search = @Search",
new { Criteria = "Fred"}
);
CREATE PROC dbo.Building_GetMany
AS
BEGIN
SET NOCOUNT ON
SELECT BuildingId, Name
FROM dbo.Building
END
We have the following statement, which calls the Dapper Query method:
return connection.Query<BuildingGetManyResponse>(
"EXEC dbo.Building_GetMany"
);
We also have the following model, which is referenced in the preceding statement:
public class BuildingGetManyResponse
{
public int Id { get; set; }
public string Name { get; set; }
}
When our app is run, we find that the Id property within the BuildingGetManyResponse class instance is not populated. Can you spot the problem?
Here are some useful links if you wish to learn more about the topics that were covered in this chapter:
In Chapter 1, Understanding the ASP.NET 5 React Template, we learned that a RESTful endpoint is implemented using an API controller in ASP.NET. In this chapter, we'll implement an API controller for our Q&A app that will eventually allow the frontend to read and write questions and answers. We'll implement a range of controller action methods that handle different HTTP request methods returning appropriate responses.
We'll learn about dependency injection and use this to inject the data repository we created in the previous chapter into the API controller. We'll validate requests so that we can be sure the data is valid before it reaches the data repository.
At the end of the chapter, we'll ensure we aren't asking for unnecessary information in the API requests. This will prevent potential security issues as well as improve the experience for API consumers.
In this chapter, we'll cover the following topics:
We'll use the following tools in this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore the code for a chapter, download the source code repository and open the relevant folder in the relevant editor. If the code is frontend code, then npm install can be entered in the terminal to restore the dependencies.
Check out the following video to see the Code in Action: https://bit.ly/34xLwzq.
An API controller is a class that handles HTTP requests for an endpoint in a REST API and sends responses back to the caller.
In this section, we are going to create an API controller to handle requests to an api/questions endpoint. The controller will call into the data repository we created in the previous chapter. We'll also create an instance of the data repository in the API controller using dependency injection.
Creating an API controller for questions
Let's create a controller for the api/questions endpoint. If we don't have our backend project open in Visual Studio, let's do so and carry out the following steps:

Figure 9.1 – Creating a new API controller
[Route("api/[controller]")]
[ApiController]
public class QuestionsController : ControllerBase
{
}
The Route attribute defines the path that our controller will handle. In our case, the path will be api/questions because [controller] is substituted with the name of the controller minus the word Controller.
The ApiController attribute includes behavior such as automatic model validation, which we'll take advantage of later in this chapter.
The class also inherits from ControllerBase. This gives us access to more API-specific methods in our controller.
Next, we will learn how to interact with the data repository from the API controller.
We want to inject an instance of the data repository we created in the previous chapter into our API controller. Let's carry out the following steps to do this:
using QandA.Data;
using QandA.Data.Models;
[Route("api/[controller]")]
[ApiController]
public class QuestionsController : ControllerBase
{
private readonly IDataRepository _dataRepository;
}
We've used the readonly keyword to make sure the variable's reference doesn't change outside the constructor.
private readonly IDataRepository _dataRepository;
public QuestionsController()
{
// TODO - set reference to _dataRepository
}
We need to set up the reference to _dataRepository in the constructor. We could try the following:
public QuestionsController()
{
_dataRepository = new DataRepository();
}
However, the DataRepository constructor requires the connection string to be passed in. Recall that we used something called dependency injection in the previous chapter to inject the configuration object into the data repository constructor to give us access to the connection string. Maybe we could use dependency injection to inject the data repository into our API controller? Yes, this is exactly what we are going to do.
Important note
Dependency injection is the process of injecting an instance of a class into another object. The goal of dependency injection is to decouple a class from its dependencies so that the dependencies can be changed without changing the class. ASP.NET has its own dependency injection facility that allows class dependencies to be defined when the app starts up. These dependencies are then available to be injected into other class constructors.
public QuestionsController(IDataRepository dataRepository)
{
_dataRepository = dataRepository;
}
So, our constructor now expects the data repository to be passed into the constructor as a parameter. We then simply set our private class-level variable to the data repository passed in.
Unlike the configuration object that was injected into the data repository, the data repository isn't automatically available for dependency injection. ASP.NET already sets up the configuration object for dependency injection for us because it is responsible for this class. However, DataRepository is our class, so we must register this for dependency injection.
using QandA.Data;
public void ConfigureServices(IServiceCollection services)
{
...
services.AddScoped<IDataRepository,
DataRepository>();
}
This tells ASP.NET that whenever IDataRepository is referenced in a constructor, substitute an instance of the DataRepository class.
Important note
The AddScoped method means that only one instance of the DataRepository class is created in a given HTTP request. This means the lifetime of the class that is created lasts for the whole HTTP request.
So, if ASP.NET encounters a second constructor that references IDataRepository in the same HTTP request, it will use the instance of the DataRepository class it created previously.
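For context, AddScoped is one of three lifetimes that ASP.NET offers when registering a dependency. A sketch of the alternatives (the scoped registration is the one our project actually uses):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Scoped: one instance per HTTP request (what we use)
    services.AddScoped<IDataRepository, DataRepository>();

    // Transient: a new instance every time it is resolved
    // services.AddTransient<IDataRepository, DataRepository>();

    // Singleton: one shared instance for the app's lifetime
    // services.AddSingleton<IDataRepository, DataRepository>();
}
```

A singleton would be inappropriate here if the repository ever held per-request state, while a transient registration would create more instances than necessary, which is why scoped is a sensible default for repositories.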
Important note
To recap, we can use dependency injection to have dependent class instances injected into the constructor of an API controller. Classes that are used in dependency injection need to be registered in the ConfigureServices method in the Startup class.
So, we now have access to our data repository in our API controller with the help of dependency injection. Next, we are going to implement methods that are going to handle specific HTTP requests.
Action methods are where we can write code to handle requests to a resource. In this section, we are going to implement action methods that will handle requests to the questions resource. We will cover the GET, POST, PUT, and DELETE HTTP methods.
Let's implement our first action method, which is going to return an array of all the questions. Open QuestionsController.cs and carry out the following steps:
[HttpGet]
public IEnumerable<QuestionGetManyResponse> GetQuestions()
{
// TODO - get questions from data repository
// TODO - return questions in the response
}
We decorate the method with the HttpGet attribute to tell ASP.NET that this will handle HTTP GET requests to this resource.
We use the specific IEnumerable<QuestionGetManyResponse> type as the return type.
[HttpGet]
public IEnumerable<QuestionGetManyResponse> GetQuestions()
{
var questions = _dataRepository.GetQuestions();
// TODO - return questions in the response
}
[HttpGet]
public IEnumerable<QuestionGetManyResponse> GetQuestions()
{
var questions = _dataRepository.GetQuestions();
return questions;
}
ASP.NET will automatically convert the questions object to JSON format and put this in the response body. It will also automatically return 200 as the HTTP status code. Nice!

Figure 9.2 – Getting all questions
We'll see the questions from our database output in JSON format. Great, that's our first action method implemented!

Figure 9.3 – Show all files in Solution Explorer
...
"profiles": {
"IIS Express": {
"commandName": "IISExpress",
"launchBrowser": true,
"launchUrl": "api/questions",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
},
"QandA": {
"commandName": "Project",
"launchBrowser": true,
"launchUrl": "api/questions",
"applicationUrl":
"https://localhost:5001;http://localhost:5000",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
}
}
...
That completes the action method that will handle GET requests to api/questions.
In summary, GET request action methods have an HttpGet attribute decorator. The return type of the method is the type of data we want in the response body, which is automatically converted to JSON for us.
We will continue implementing handlers for more HTTP methods in the following subsections.
We don't always want all of the questions to be returned from the api/questions endpoint. Recall that our frontend had a search feature that returned questions matching the search criteria. Let's extend our GetQuestions method to handle a search request. To do that, follow these steps:
[HttpGet]
public IEnumerable<QuestionGetManyResponse>
GetQuestions(string search)
{
var questions = _dataRepository.GetQuestions();
return questions;
}

Figure 9.4 – Model binding with no search query parameter
We'll see that the search parameter is null. Press F5 to let the app continue.

Figure 9.5 – Model binding with a search query parameter value
This time the search parameter is set to the value of the search query parameter we put in the browser URL. This process is called model binding.
Important note
Model binding is a process in ASP.NET that maps data from HTTP requests to action method parameters. Data from query parameters is automatically mapped to action method parameters that have the same name. We'll see later in this section that model binding can also map data from the HTTP request body. A [FromQuery] attribute can also be placed in front of an action method parameter to instruct ASP.NET to map it only from a query parameter.
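To make the binding source explicit, the attribute mentioned in the note would be applied like this — a sketch, since our project relies on the default binding behavior:

```csharp
[HttpGet]
public IEnumerable<QuestionGetManyResponse> GetQuestions(
    [FromQuery] string search) // bind only from ?search=... in the URL
{
    ...
}
```

With this attribute in place, ASP.NET will never try to bind the parameter from the route or the request body, only from the query string.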
[HttpGet]
public IEnumerable<QuestionGetManyResponse>
GetQuestions(string search)
{
if (string.IsNullOrEmpty(search))
{
return _dataRepository.GetQuestions();
}
else
{
// TODO - call data repository question search
}
}
If there is no search value, we get and return all the questions as we did before, but this time in a single statement.
[HttpGet]
public IEnumerable<QuestionGetManyResponse>
GetQuestions(string search)
{
if (string.IsNullOrEmpty(search))
{
return _dataRepository.GetQuestions();
}
else
{
return
_dataRepository.GetQuestionsBySearch(search);
}
}

Figure 9.6 – Searching questions
We'll see that the TypeScript question is returned as we would expect.
We have started to take advantage of model binding in ASP.NET. Model binding automatically binds the query parameters in a request to action method parameters. We'll continue to use model binding throughout this chapter.
Recall that the home screen of our app, as implemented in Chapter 3, Getting Started with React and TypeScript, shows the unanswered questions. We will create an action method that handles the api/questions/unanswered path and returns unanswered questions.
Let's implement an action method that provides this functionality:
[HttpGet("unanswered")]
public IEnumerable<QuestionGetManyResponse>
GetUnansweredQuestions()
{
return _dataRepository.GetUnansweredQuestions();
}
The implementation simply calls into the data repository GetUnansweredQuestions method and returns the results.
Notice that the HttpGet attribute contains the string "unanswered". This is an additional path to concatenate to the controller's root path. So, this action method will handle GET requests to the api/questions/unanswered path.

Figure 9.7 – Unanswered questions
We get the unanswered question about state management as expected.
That completes the implementation of the action method that handles GET requests to api/questions/unanswered. To handle a subpath in an action method, we pass the subpath in the HttpGet attribute parameter.
Let's move on to implementing the action method for getting a single question. To do that, follow these steps:
[HttpGet("{questionId}")]
public ActionResult<QuestionGetSingleResponse>
GetQuestion(int questionId)
{
// TODO - call the data repository to get the
// question
// TODO - return HTTP status code 404 if the
// question isn't found
// TODO - return question in response with status
// code 200
}
Note the HttpGet attribute parameter.
Important note
The curly brackets tell ASP.NET to put the endpoint subpath in a variable that can be referenced as a method parameter.
In this method, the questionId parameter will be set to the subpath on the endpoint. So, for the api/questions/3 path, questionId would be set to 3.
Notice that the return type is ActionResult<QuestionGetSingleResponse> rather than just QuestionGetSingleResponse. This is because our action method won't return QuestionGetSingleResponse in all cases—there will be a case that will return NotFoundResult when the question can't be found. ActionResult gives us the flexibility to return these different types.
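As an aside, the route template can also constrain the parameter type so that non-numeric subpaths don't match the route at all. A sketch using the standard ASP.NET route constraint syntax (our project keeps the unconstrained template):

```csharp
[HttpGet("{questionId:int}")] // only matches when the subpath is an integer
public ActionResult<QuestionGetSingleResponse> GetQuestion(int questionId)
{
    ...
}
```

With the `:int` constraint, a request to api/questions/abc would fail to match the route and return a 404 before the action method is ever invoked.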
[HttpGet("{questionId}")]
public ActionResult<QuestionGetSingleResponse>
GetQuestion(int questionId)
{
var question =
_dataRepository.GetQuestion(questionId);
// TODO - return HTTP status code 404 if the
// question isn't found
// TODO - return question in response with status
// code 200
}
[HttpGet("{questionId}")]
public ActionResult<QuestionGetSingleResponse> GetQuestion(int questionId)
{
var question =
_dataRepository.GetQuestion(questionId);
if (question == null)
{
return NotFound();
}
// TODO - return question in response with status
// code 200
}
If the question isn't found, the result from the repository call will be null. So, we check for null and return a call to the NotFound method in ControllerBase, which returns HTTP status code 404.
[HttpGet("{questionId}")]
public ActionResult<QuestionGetSingleResponse> GetQuestion(int questionId)
{
var question =
_dataRepository.GetQuestion(questionId);
if (question == null)
{
return NotFound();
}
return question;
}
This will result in HTTP status code 200 being returned in the response with the question in JSON format in the response body.

Figure 9.8 – Getting a question
The question is returned as expected.

Figure 9.9 – Requesting a question that doesn't exist
We can get confirmation that a 404 status code is returned by pressing F12 to open the DevTools and looking at the Network panel to see the status of the response.
That completes the action method for getting a question.
We now understand that endpoint subpath parameters can be implemented by putting the parameter name inside curly brackets inside the HTTP method attribute decorator. We have also learned that there is a handy NotFound method in ControllerBase, which returns an HTTP status code 404 that we can use for requested resources that don't exist.
We've implemented a range of action methods that handle GET requests. It's time to implement action methods for the other HTTP methods next.
Let's implement an action method for posting a question:
[HttpPost]
public ActionResult<QuestionGetSingleResponse>
PostQuestion(QuestionPostRequest questionPostRequest)
{
// TODO - call the data repository to save the
// question
// TODO - return HTTP status code 201
}
Note that we use an HttpPost attribute to tell ASP.NET that this method handles HTTP POST requests.
Note that the method parameter type for questionPostRequest is a class rather than a primitive type. Earlier, in the Extending the GetQuestions action method for searching section, we introduced model binding and explained how it maps data from an HTTP request to method parameters. Model binding can map data from the HTTP body as well as from query parameters, and it can also map to properties within parameters. This means that the data in the HTTP request body will be mapped to properties in the instance of the QuestionPostRequest class.
[HttpPost]
public ActionResult<QuestionGetSingleResponse>
PostQuestion(QuestionPostRequest questionPostRequest)
{
var savedQuestion =
_dataRepository.
PostQuestion(questionPostRequest);
// TODO - return HTTP status code 201
}
[HttpPost]
public ActionResult<QuestionGetSingleResponse>
PostQuestion(QuestionPostRequest questionPostRequest)
{
var savedQuestion =
_dataRepository.PostQuestion(questionPostRequest);
return CreatedAtAction(nameof(GetQuestion),
new { questionId = savedQuestion.QuestionId },
savedQuestion);
}
We return a call to CreatedAtAction from ControllerBase, which will return status code 201 with the question in the response. In addition, it also includes a Location HTTP header that contains the path to get the question.

Figure 9.10 – Creating a new request

Figure 9.11 – Setting the HTTP method and path in Postman

Figure 9.12 – Setting the request body type to JSON in Postman

Figure 9.13 – Adding the request body in Postman

Figure 9.14 – Response body from posting a question
The expected 201 HTTP status code is returned with the saved question in the response.
Note how the question in the response has the generated questionId, which will be useful for the consumer when interacting with the question.

Figure 9.15 – Response headers from posting a question
That's a nice touch.
That completes the implementation of the action method that will handle POST requests to api/questions.
We use the HttpPost attribute decorator to allow an action method to handle POST requests. Executing the CreatedAtAction method from ControllerBase in the action method's return statement will automatically add an HTTP Location header containing the path to get the resource, as well as adding HTTP status code 201 to the response.
Let's move on to updating a question. To do that, implement the following steps:
[HttpPut("{questionId}")]
public ActionResult<QuestionGetSingleResponse>
PutQuestion(int questionId,
QuestionPutRequest questionPutRequest)
{
// TODO - get the question from the data
// repository
// TODO - return HTTP status code 404 if the
// question isn't found
// TODO - update the question model
// TODO - call the data repository with the
// updated question model to update the question
// in the database
// TODO - return the saved question
}
We use the HttpPut attribute to tell ASP.NET that this method handles HTTP PUT requests. We are also putting the route parameter for the question ID in the questionId method parameter.
The ASP.NET model binding will populate the QuestionPutRequest class instance from the HTTP request body.
[HttpPut("{questionId}")]
public ActionResult<QuestionGetSingleResponse>
PutQuestion(int questionId,
QuestionPutRequest questionPutRequest)
{
var question = _dataRepository.
GetQuestion(questionId);
if (question == null)
{
return NotFound();
}
// TODO - update the question model
// TODO - call the data repository with the
// updated question
//model to update the question in the database
// TODO - return the saved question
}
[HttpPut("{questionId}")]
public ActionResult<QuestionGetSingleResponse>
PutQuestion(int questionId,
QuestionPutRequest questionPutRequest)
{
var question = _dataRepository.
GetQuestion(questionId);
if (question == null)
{
return NotFound();
}
questionPutRequest.Title =
string.IsNullOrEmpty(questionPutRequest.Title) ?
question.Title :
questionPutRequest.Title;
questionPutRequest.Content =
string.IsNullOrEmpty(questionPutRequest.Content) ?
question.Content :
questionPutRequest.Content;
// TODO - call the data repository with the
// updated question model to update the question
// in the database
// TODO - return the saved question
}
We use ternary expressions to update the request model with data from the existing question if it hasn't been supplied in the request.
Important note
Allowing the consumer of the API to submit just the information that needs to be updated (rather than the full record) makes our API easy to consume.
[HttpPut("{questionId}")]
public ActionResult<QuestionGetSingleResponse>
PutQuestion(int questionId,
QuestionPutRequest questionPutRequest)
{
var question =
_dataRepository.GetQuestion(questionId);
if (question == null)
{
return NotFound();
}
questionPutRequest.Title =
string.IsNullOrEmpty(questionPutRequest.Title) ?
question.Title :
questionPutRequest.Title;
questionPutRequest.Content =
string.IsNullOrEmpty(questionPutRequest.Content) ?
question.Content :
questionPutRequest.Content;
var savedQuestion =
_dataRepository.PutQuestion(questionId,
questionPutRequest);
return savedQuestion;
}

Figure 9.16 – PUT request path

Figure 9.17 – PUT request body
So, we are requesting that question 3 is updated with the new content we have provided.

Figure 9.18 – PUT response body
The question is updated just as we expect.
The PutQuestion action method we implemented is arguably a handler for PATCH requests because it doesn't require the full record to be submitted. To handle PATCH requests, the HttpPut attribute decorator can be changed to HttpPatch. Note that handling PATCH requests properly requires the Microsoft.AspNetCore.Mvc.NewtonsoftJson NuGet package and registering a special input formatter. More information can be found at https://docs.microsoft.com/en-us/aspnet/core/web-api/jsonpatch.
To handle both PUT and PATCH requests, the method can be decorated with both the HttpPut and HttpPatch attribute decorators. We will leave our implementation with just HttpPut.
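If we did decide to handle JSON Patch properly, the registration would look roughly like this in the Startup class — a sketch, assuming the Microsoft.AspNetCore.Mvc.NewtonsoftJson NuGet package is installed:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Swaps in Newtonsoft.Json-based formatters, which
    // JsonPatchDocument<T> requires for PATCH request bodies
    services.AddControllers().AddNewtonsoftJson();
    ...
}
```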
That completes the implementation of the action method that will handle PUT requests to api/questions.
Let's implement deleting a question. This follows a similar pattern to the previous methods:
[HttpDelete("{questionId}")]
public ActionResult DeleteQuestion(int questionId)
{
var question =
_dataRepository.GetQuestion(questionId);
if (question == null)
{
return NotFound();
}
_dataRepository.DeleteQuestion(questionId);
return NoContent();
}
We use the HttpDelete attribute to tell ASP.NET that this method handles HTTP DELETE requests. The method expects the question ID to be included at the end of the path.
The method checks that the question exists before deleting it, and returns an HTTP 404 status code if it doesn't exist.
The method returns HTTP status code 204 if the deletion is successful.

Figure 9.19 – DELETE request
A response with HTTP status code 204 is returned as expected.
That completes the implementation of the action method that will handle DELETE requests to api/questions.
The final action method we are going to implement is a method for posting an answer to a question:
[HttpPost("answer")]
public ActionResult<AnswerGetResponse>
PostAnswer(AnswerPostRequest answerPostRequest)
{
var questionExists =
_dataRepository.QuestionExists(
answerPostRequest.QuestionId);
if (!questionExists)
{
return NotFound();
}
var savedAnswer =
_dataRepository.PostAnswer(answerPostRequest);
return savedAnswer;
}
The method checks whether the question exists and returns a 404 HTTP status code if it doesn't. The answer is then passed to the data repository to insert into the database. The data repository returns the saved answer, which is then returned in the response.
An alternative approach would be to put the questionId into the URL (api/question/{questionId}/answer) and not in the body of the request. This could be achieved by changing the decorator and method signature to the following:
[HttpPost("{questionId}/answer")]
public ActionResult<AnswerGetResponse>
PostAnswer(int questionId, AnswerPostRequest answerPostRequest)
The QuestionId property could also be removed from the AnswerPostRequest model.

Figure 9.20 – Submitting an answer
The answer will be saved and returned in the response as expected.

Figure 9.21 – SQL error when adding an answer with no content
This is because the stored procedure expects the content parameter to be passed into it and will raise an error if it isn't.
An answer without any content is an invalid answer. Ideally, we should stop invalid requests being passed to the data repository and return HTTP status code 400 to the client with details about what is wrong with the request. How do we do this in ASP.NET? Let's find out in the next section.
In this section, we are going to add some validation checks on the request models. ASP.NET will then automatically send HTTP status code 400 (bad request) with details of the problem.
Validation is critical to preventing bad data from getting into the database, and to preventing unexpected database errors like the one we experienced in the previous section. Giving the client detailed information about bad requests also improves the development experience because it helps consumers correct their mistakes.
We can add validation to a model by adding validation attributes to properties in the model that specify rules that should be adhered to. Let's add validation to the request for posting a question:
using System.ComponentModel.DataAnnotations;
This namespace gives us access to the validation attributes.
[Required]
public string Title { get; set; }
The Required attribute will check that the Title property is not an empty string or null.

Figure 9.22 – Validation error when submitting a question with no title
We get a response with HTTP status code 400 as expected with great information about the problem in the response.
Notice also that the breakpoint wasn't reached. This is because ASP.NET checked the model, determined that it was invalid, and returned a bad request response before the action method was invoked.
[Required]
[StringLength(100)]
public string Title { get; set; }
This check will ensure the title doesn't have more than 100 characters. A title containing more than 100 characters would cause a database error, so this is a valuable check.
[Required]
public string Content { get; set; }
[Required(ErrorMessage =
"Please include some content for the question")]
public string Content { get; set; }

Figure 9.23 – Validation error when submitting a question with no content
The UserId, UserName, and Created properties should really be required properties as well. However, we aren't going to add validation attributes to them because we are going to work on them later in this chapter.
Let's add validation to the request for updating a question:
using System.ComponentModel.DataAnnotations;
public class QuestionPutRequest
{
[StringLength(100)]
public string Title { get; set; }
public string Content { get; set; }
}
We are making sure that a new title doesn't exceed 100 characters.

Figure 9.24 – Validation error when updating a question with a long title
That completes the implementation of model validation for PUT requests to api/questions.
Let's add validation to the request for posting an answer:
using System.ComponentModel.DataAnnotations;
public class AnswerPostRequest
{
[Required]
public int QuestionId { get; set; }
[Required]
public string Content { get; set; }
...
}
public class AnswerPostRequest
{
[Required]
public int? QuestionId { get; set; }
[Required]
public string Content { get; set; }
...
}
Important note
The ? allows the property to hold null as well as values of the declared type. T? is shorthand syntax for Nullable<T>.
So, why does QuestionId need to be able to hold a null value? An int type defaults to 0, so if there is no QuestionId in the request body, AnswerPostRequest will come out of the model binding process with QuestionId set to 0, which passes the required validation check. This means the Required attribute won't catch a request body with no QuestionId. If the QuestionId type is nullable, it will instead come out of the model binding process with a null value when it isn't in the request body, and so will fail the required validation check, which is what we want.
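The difference in default values can be seen in a couple of lines of C#:

```csharp
int missingInt = default;        // 0 — would pass a [Required] check
int? missingNullable = default;  // null — fails [Required], as we want

Console.WriteLine(missingInt);              // 0
Console.WriteLine(missingNullable == null); // True
```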
[HttpPost("answer")]
public ActionResult<AnswerGetResponse>
PostAnswer(AnswerPostRequest answerPostRequest)
{
var questionExists =
_dataRepository.QuestionExists(
answerPostRequest.QuestionId.Value);
if (!questionExists)
{
return NotFound();
}
var savedAnswer =
_dataRepository.PostAnswer(answerPostRequest);
return savedAnswer;
}
That completes the implementation of model validation for POST requests to api/questions/answer.
We have experienced that model validation is super easy to implement in our request models. We simply decorate the property in the model that needs validating with the appropriate attribute. We used the Required and StringLength attributes in our implementation, but there are others available in ASP.NET, some of which are as follows:
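A few of those additional attributes can be sketched as follows. This model is purely illustrative and isn't part of our project:

```csharp
using System.ComponentModel.DataAnnotations;

public class ExampleRequest
{
    [Range(1, 5)] // numeric value must fall within the given bounds
    public int Rating { get; set; }

    [MinLength(10)] // string must contain at least 10 characters
    public string Summary { get; set; }

    [RegularExpression(@"^\S+@\S+$",
        ErrorMessage = "Please supply a valid email address")]
    public string Email { get; set; }
}
```

As with Required and StringLength, ASP.NET checks these attributes during model binding and returns a 400 response automatically when any of them fail.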
We haven't added any validation to the UserId, UserName, or Created properties in our request models. In the next section, we are going to find out why and properly handle these properties.
At the moment, we are allowing the consumer to submit all the properties that our data repository requires, including userId, userName, and created. However, these properties can be set on the server. In fact, the client doesn't need to know or care about userId.
Exposing the client to more properties than it needs impacts the usability of the API and can also cause security issues. For example, a client can pretend to be any user submitting questions and answers with our current API.
In the following subsections, we are going to tighten up some requests so that they don't contain unnecessary information. We will start by removing the userId, userName, and created fields from posting questions before moving on to removing the userId and created fields from posting answers.
Our QuestionPostRequest model is used both in the data repository to pass the data to the stored procedure as well as in the API controller to capture the information in the request body. This single model can't properly cater to both these cases, so we are going to create and use separate models. Implement the following steps:
public class QuestionPostFullRequest
{
public string Title { get; set; }
public string Content { get; set; }
public string UserId { get; set; }
public string UserName { get; set; }
public DateTime Created { get; set; }
}
This contains all the properties that are needed by the data repository to save a question.
public class QuestionPostRequest
{
[Required]
[StringLength(100)]
public string Title { get; set; }
[Required(ErrorMessage =
"Please include some content for the question")]
public string Content { get; set; }
}
QuestionGetSingleResponse
PostQuestion(QuestionPostFullRequest question);
public QuestionGetSingleResponse
PostQuestion(QuestionPostFullRequest question)
{
...
}
[HttpPost]
public ActionResult<QuestionGetSingleResponse>
PostQuestion(QuestionPostRequest questionPostRequest)
{
var savedQuestion =
_dataRepository.PostQuestion(new
QuestionPostFullRequest
{
Title = questionPostRequest.Title,
Content = questionPostRequest.Content,
UserId = "1",
UserName = "bob.test@test.com",
Created = DateTime.UtcNow
});
return CreatedAtAction(nameof(GetQuestion),
new { questionId = savedQuestion.QuestionId },
savedQuestion);
}
We've hardcoded the UserId and UserName values for now. In Chapter 11, Securing the Backend, we'll get them from our identity provider.
We've also set the Created property to the current date and time.

Figure 9.25 – Submitting a question
The user and created date are set and returned in the response as expected.
That completes the separation of the models for the HTTP request and data repository for adding questions. This means we are only requesting the information that is necessary for POST requests to api/questions.
Let's tighten up posting an answer:
public class AnswerPostFullRequest
{
public int QuestionId { get; set; }
public string Content { get; set; }
public string UserId { get; set; }
public string UserName { get; set; }
public DateTime Created { get; set; }
}
This contains all the properties that are needed by the data repository to save an answer.
public class AnswerPostRequest
{
[Required]
public int? QuestionId { get; set; }
[Required]
public string Content { get; set; }
}
AnswerGetResponse PostAnswer(AnswerPostFullRequest answer);
public AnswerGetResponse
PostAnswer(AnswerPostFullRequest answer)
{
...
}
[HttpPost("answer")]
public ActionResult<AnswerGetResponse>
PostAnswer(AnswerPostRequest answerPostRequest)
{
var questionExists =
_dataRepository.QuestionExists(
answerPostRequest.QuestionId.Value);
if (!questionExists)
{
return NotFound();
}
var savedAnswer =
_dataRepository.PostAnswer(new
AnswerPostFullRequest
{
QuestionId = answerPostRequest.
QuestionId.Value,
Content = answerPostRequest.Content,
UserId = "1",
UserName = "bob.test@test.com",
Created = DateTime.UtcNow
}
);
return savedAnswer;
}
Figure 9.26 – Submitting an answer
The user and created date are set and returned in the response as expected.
So, that's our REST API tightened up a bit.
In this section, we manually mapped the request model to the model used in the data repository. For large models, it may be beneficial to use a mapping library such as AutoMapper to help us copy data from one object to another. More information on AutoMapper can be found at https://automapper.org/.
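As a sketch of what that could look like (this code isn't part of our project, and the profile class name is our own), AutoMapper lets us declare the mapping once in a profile and then copy the matching properties with a single call:

```csharp
using AutoMapper;

// Hypothetical AutoMapper profile declaring the request-to-repository mapping.
// Title and Content are copied automatically because the property names match.
public class RequestProfile : Profile
{
    public RequestProfile()
    {
        CreateMap<QuestionPostRequest, QuestionPostFullRequest>();
    }
}

// With an injected IMapper, the manual copy in the action method could become:
// var fullRequest = _mapper.Map<QuestionPostFullRequest>(questionPostRequest);
// fullRequest.UserId = "1";
// fullRequest.UserName = "bob.test@test.com";
// fullRequest.Created = DateTime.UtcNow;
```

The server-set properties are still assigned explicitly after the mapping because they don't come from the request.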
In this chapter, we learned how to implement an API controller to handle requests to a REST API endpoint. We discovered that inheriting from ControllerBase and decorating the controller class with the ApiController attribute gives us useful features such as automatic model validation and a handy set of methods for returning HTTP status codes.
We used AddScoped to register the data repository dependency so that ASP.NET uses a single instance of it in a request/response cycle. We were then able to inject a reference to the data repository in the API controller class in its constructor.
We learned about the powerful model binding process in ASP.NET and how it maps data from an HTTP request into action method parameters. We discovered that in some cases it is desirable to use separate models for the HTTP request and the data repository because some of the data can be set on the server, and requiring less data in the request helps usability and, sometimes, security.
We used ASP.NET validation attributes to validate models. This is a simple way of helping to ensure that bad data doesn't reach the database.
We are now equipped to build robust and developer-friendly REST APIs that work with all the common HTTP methods and return responses with appropriate HTTP status codes.
In the next chapter, we are going to focus on the performance and scalability of our REST API.
Answer the following questions to test the knowledge that you have gained in this chapter:
[HttpPost]
public ActionResult<BuildingResponse> PostBuilding(BuildingPostRequest buildingPostRequest)
{
var buildingExists =
_dataRepository.BuildingExists(buildingPostRequest.Code);
if (buildingExists)
{
// TODO - return status code 400
}
...
}
What method from ControllerBase can we use to return status code 400?
public class BuildingPostRequest
{
public string Code { get; set; }
public string Name { get; set; }
public string Description { get; set; }
}
We send an HTTP POST request to the resource with the following body:
{
"code": "BTOW",
"name": "Blackpool Tower",
"buildingDescription": "Blackpool Tower is a
tourist attraction in Blackpool"
}
The Description property in the model isn't getting populated during the request. What is the problem?
public class BuildingPostRequest
{
[Required]
public string Code { get; set; }
[Required]
public string Name { get; set; }
public string Description { get; set; }
}
[Range(1, 10)]
Here are some useful links for learning more about the topics covered in this chapter:
In this chapter, we are going to improve the performance and scalability of our REST API. When we make each improvement, we'll use a load testing tool to verify that there has been an improvement.
We'll start by focusing on database calls and how we can reduce the number of calls to improve performance. We'll then move on to requesting less data with data paging. We'll also look at the impact that caching data in memory has on performance.
Then, we'll learn how to make our API controllers and data repository asynchronous. We'll eventually understand whether this makes our REST API more performant or perhaps more scalable.
In this chapter, we'll cover the following topics:
At the end of this chapter, we'll have the knowledge to implement fast REST APIs that perform well under load.
We'll use the following tools in this chapter:
All of the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore the code from a chapter, download the source code repository and open the relevant folder in your editor. If the code is frontend code, npm install can be entered in the terminal to restore the dependencies.
Check out the following video to see the Code in Action: https://bit.ly/3piyUEx.
A database round trip is a request from the web API to the database. Database round trips are expensive. The greater the distance between the web API and the database, the more expensive the round trip is. So, we want to keep the trips from the web API to the database to a minimum in order to gain maximum performance.
We will start this section by understanding the N+1 problem and experiencing how it negatively impacts performance. We will then learn how to execute multiple queries in a single database round trip.
The N+1 problem is a classic query problem where there is a parent-child data model relationship. When data is retrieved for this model, the parent items are fetched in a query and then separate queries are executed to fetch the data for each child. So, there are N queries for the children and 1 additional query for the parent, hence the term N+1.
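In SQL terms, the pattern looks something like the following sketch (the column lists are illustrative, but the table names match our database):

```sql
-- 1 query fetches the parent rows
SELECT QuestionId, Title FROM dbo.Question;

-- ...followed by one query per question, giving N extra round trips
SELECT AnswerId, Content FROM dbo.Answer WHERE QuestionId = 1;
SELECT AnswerId, Content FROM dbo.Answer WHERE QuestionId = 2;
-- ...and so on, for every question returned by the first query
```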
We are going to add the ability to return answers as well as questions in a GET request to the questions REST API endpoint. We are going to fall into the N+1 trap with our first implementation. Let's open our backend project in Visual Studio and carry out the following steps:
public class QuestionGetManyResponse
{
public int QuestionId { get; set; }
public string Title { get; set; }
public string Content { get; set; }
public string UserName { get; set; }
public DateTime Created { get; set; }
public List<AnswerGetResponse> Answers { get; set; }
}
public interface IDataRepository
{
IEnumerable<QuestionGetManyResponse>
GetQuestions();
IEnumerable<QuestionGetManyResponse>
GetQuestionsWithAnswers();
...
}
This method will get all of the questions in the database, including the answers for each question.
public IEnumerable<QuestionGetManyResponse> GetQuestionsWithAnswers()
{
using (var connection = new
SqlConnection(_connectionString))
{
connection.Open();
var questions =
connection.Query<QuestionGetManyResponse>(
"EXEC dbo.Question_GetMany");
foreach (var question in questions)
{
question.Answers =
connection.Query<AnswerGetResponse>(
@"EXEC dbo.Answer_Get_ByQuestionId
@QuestionId = @QuestionId",
new { QuestionId = question.QuestionId })
.ToList();
}
return questions;
}
}
So, this makes a database call to get all of the questions and then an additional call to get the answers for each question. We have fallen into the classic N+1 trap!
[HttpGet]
public IEnumerable<QuestionGetManyResponse>
GetQuestions(string search, bool includeAnswers)
{
if (string.IsNullOrEmpty(search))
{
if (includeAnswers)
{
return
_dataRepository.GetQuestionsWithAnswers();
}
else
{
return _dataRepository.GetQuestions();
}
}
else
{
return
_dataRepository.GetQuestionsBySearch(search);
}
}
We've added the ability to have an includeAnswers query parameter that, if set, will call the GetQuestionsWithAnswers data repository method we just added. A fuller implementation would allow answers to be included if a search query parameter is defined, but this implementation will be enough for us to see the N+1 problem and how we can resolve it.
Figure 10.1 – Questions with answers in Postman
The answers are returned with each question, as we expected.
This doesn't seem like much of a problem though. The request took only 174 ms to complete. Well, we only have a couple of answers in our database at the moment. If we had more questions, the request would slow down a fair bit. Also, the test we have just done is for a single user. What happens when multiple users make this request? We'll find out in the next section.
We must load test our API endpoints to verify that they perform appropriately under load. It is far better to find a performance issue during development, before our users do. WebSurge is a simple load testing tool that we are going to use to test our questions endpoint with the N+1 problem. We are going to perform the load test in our development environment, which is fine for us to see the impact the N+1 problem has. Obviously, the results we are about to see would be better in a production environment. To use WebSurge for load testing, implement the following steps:

Figure 10.2 – New WebSurge request

Figure 10.3 – Setting the WebSurge request path

Figure 10.4 – WebSurge test response

Figure 10.5 – Load test duration and threads

Figure 10.6 – Load test output

Figure 10.7 – Load test results
So, we managed to get 975 requests per second with this implementation of getting questions with answers. Obviously, the result you get will be different.
Keep a note of the results—we'll use these in a comparison with an implementation that resolves the N+1 problem next.
Wouldn't it be great if we could get the questions and answers in a single database query and then map this data to the hierarchical structure that we require in our data repository? Well, this is exactly what we can do with a feature called multi-mapping in Dapper. Let's look at how we can use this. Follow these steps:
public IEnumerable<QuestionGetManyResponse>
GetQuestionsWithAnswers()
{
using (var connection = new
SqlConnection(_connectionString))
{
connection.Open();
return connection.Query<QuestionGetManyResponse>(
"EXEC dbo.Question_GetMany_WithAnswers");
}
}
This is a good start but the Question_GetMany_WithAnswers stored procedure returns tabular data and we require this to be mapped to the questions-and-answers hierarchical structure we have in our QuestionGetManyResponse model:

Figure 10.8 – Tabular data from a stored procedure
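The body of the stored procedure isn't shown here, but a query of roughly the following shape would produce this tabular result (a sketch; the exact column lists are assumptions based on our models):

```sql
-- Illustrative only: one row per question-answer pair, with the
-- question columns repeated for each of its answers
SELECT q.QuestionId, q.Title, q.Content, q.UserName, q.Created,
       a.QuestionId, a.AnswerId, a.Content, a.UserName, a.Created
FROM dbo.Question q
LEFT JOIN dbo.Answer a
    ON a.QuestionId = q.QuestionId;
```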
This is where Dapper's multi-mapping feature comes in handy.
public IEnumerable<QuestionGetManyResponse> GetQuestionsWithAnswers()
{
using (var connection = new
SqlConnection(_connectionString))
{
connection.Open();
var questionDictionary =
new Dictionary<int, QuestionGetManyResponse>();
return connection
.Query<
QuestionGetManyResponse,
AnswerGetResponse,
QuestionGetManyResponse>(
"EXEC dbo.Question_GetMany_WithAnswers",
map: (q, a) =>
{
QuestionGetManyResponse question;
if (!questionDictionary.TryGetValue
(q.QuestionId, out question))
{
question = q;
question.Answers =
new List<AnswerGetResponse>();
questionDictionary.Add(question.
QuestionId, question);
}
question.Answers.Add(a);
return question;
},
splitOn: "QuestionId"
)
.Distinct()
.ToList();
}
}
In the Dapper Query method, we provide a lambda function that helps Dapper map each question. The function takes in the question and answer that Dapper has mapped from each row of the stored procedure result, and we map them to the structure we require. We use a Dictionary called questionDictionary to keep track of the questions we've already created so that we can create a new List<AnswerGetResponse> instance for the answers of each new question.
We tell Dapper which models to map to with the first two generic parameters in the Query method, which are QuestionGetManyResponse and AnswerGetResponse. But how does Dapper know which fields returned from the stored procedure map to which properties in the models? The answer is that we tell Dapper with the splitOn parameter: everything before the QuestionId column goes into the QuestionGetManyResponse model, and everything after, including QuestionId, goes into the AnswerGetResponse model.
We tell Dapper what model the end result should map to with the last generic parameter in the Query method, which is QuestionGetManyResponse in this case.
We use the Distinct method on the results we get from Dapper to remove duplicate questions and then the ToList method to turn the results into a list.
Figure 10.9 – Load test results
This time, our REST API managed to handle 1,035 requests per second, which is a bit better than before.
So, Dapper's multi-mapping feature can be used to resolve the N+1 problem and generally achieve better performance. We do need to be careful with this approach, though, as we are requesting a lot of data from the database because the parent records are duplicated for every child row. Processing large amounts of data on the web server can be inefficient and put pressure on the garbage collector.
There is another feature in Dapper that helps us reduce the amount of database round trips called multi-results. We are going to use this feature to improve the performance of the endpoint that gets a single question, which, at the moment, is making two database calls. To do that, follow these steps:

Figure 10.10 – Option to edit a request

Figure 10.11 – Path to a single question

Figure 10.12 – Results of load testing getting a question
So, the current implementation can handle 1,092 requests per second.
using static Dapper.SqlMapper;
public QuestionGetSingleResponse GetQuestion(int questionId)
{
using (var connection = new
SqlConnection(_connectionString))
{
connection.Open();
using (GridReader results =
connection.QueryMultiple(
@"EXEC dbo.Question_GetSingle
@QuestionId = @QuestionId;
EXEC dbo.Answer_Get_ByQuestionId
@QuestionId = @QuestionId",
new { QuestionId = questionId }
)
)
{
var question = results.Read<
QuestionGetSingleResponse>().FirstOrDefault();
if (question != null)
{
question.Answers =
results.Read<AnswerGetResponse>().ToList();
}
return question;
}
}
}
We use the QueryMultiple method in Dapper to execute our two stored procedures in a single database round trip. The results are added into a results variable and can be retrieved using the Read method by passing the appropriate type in the generic parameter.
Figure 10.13 – Getting a question load test – improved results
Our improved API can now handle 1,205 requests per second.
In this section, we learned how to fetch parent-child data in a single round trip using the multi-mapping feature in Dapper. We've also learned how to execute multiple queries in a single round trip using the multi-results feature in Dapper. We've also learned how to load test REST API endpoints using WebSurge.
As we mentioned in the multi-mapping example, processing large amounts of data can be problematic. How can we reduce the amount of data we read from the database and process on the web server? We'll find out in the next section.
In this section, we are going to force the consumers of our questions endpoint to specify the page of data when executing the request with the search query parameter. So, we'll only be returning a portion of the data rather than all of it.
Paging helps with performance and scalability in the following ways:
This all adds up to a potentially significant positive impact—particularly for large collections of data.
We will start this section by load testing the current implementation of the questions endpoint. We will then implement paging and see the impact this has on a load test.
Let's carry out the following steps to add lots of questions to our database. This will allow us to see the impact of data paging:
EXEC Question_AddForLoadTest
This will execute a stored procedure that will add 10,000 questions to our database.
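The stored procedure already exists in our database, so we don't need to write it, but its body is likely something along these lines (a sketch; the title and content text are assumptions):

```sql
-- Illustrative sketch of what Question_AddForLoadTest might do
CREATE PROC dbo.Question_AddForLoadTest
AS
BEGIN
    DECLARE @i INT = 1
    WHILE @i <= 10000
    BEGIN
        INSERT INTO dbo.Question
            (Title, Content, UserId, UserName, Created)
        VALUES
            ('Question ' + CAST(@i AS NVARCHAR(10)),
             'Content for question ' + CAST(@i AS NVARCHAR(10)),
             '1', 'bob.test@test.com', GETUTCDATE())
        SET @i = @i + 1
    END
END
```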
Now that we have our questions in place, let's test out the current implementation.
Before we implement data paging, let's see how the current implementation performs under load. To check and verify that, implement the following steps:

Figure 10.14 – Response body error
This error can be resolved by changing the MaxResponseSize setting to 0 on the Session Options tab:

Figure 10.15 – Removing the maximum response size

Figure 10.16 – Searching questions load test result
So, the requests-per-second value to beat is 37.
Implementing data paging
Now, let's revise the implementation of the questions endpoint with the search query parameter so that we can use data paging. To implement that, let's work through the following steps:
public interface IDataRepository
{
...
IEnumerable<QuestionGetManyResponse>
GetQuestionsBySearch(string search);
IEnumerable<QuestionGetManyResponse>
GetQuestionsBySearchWithPaging(
string search,
int pageNumber,
int pageSize);
...
}
So, the method will take in the page number and size as parameters.
public IEnumerable<QuestionGetManyResponse>
GetQuestionsBySearchWithPaging(
string search,
int pageNumber,
int pageSize
)
{
using (var connection = new
SqlConnection(_connectionString))
{
connection.Open();
var parameters = new
{
Search = search,
PageNumber = pageNumber,
PageSize = pageSize
};
return connection.Query<QuestionGetManyResponse>(
@"EXEC dbo.Question_GetMany_BySearch_WithPaging
@Search = @Search,
@PageNumber = @PageNumber,
@PageSize = @PageSize", parameters
);
}
}
So, we are calling a stored procedure named Question_GetMany_BySearch_WithPaging to get the page of data, and passing in the search criteria, page number, and page size as parameters.
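The stored procedure body isn't shown here; the essential technique is SQL Server's OFFSET ... FETCH clause, sketched below (the search filter and column list are assumptions):

```sql
-- Illustrative sketch: paging with OFFSET/FETCH requires an ORDER BY
CREATE PROC dbo.Question_GetMany_BySearch_WithPaging
    @Search NVARCHAR(100),
    @PageNumber INT,
    @PageSize INT
AS
BEGIN
    SELECT QuestionId, Title, Content, UserName, Created
    FROM dbo.Question
    WHERE Title LIKE '%' + @Search + '%'
       OR Content LIKE '%' + @Search + '%'
    ORDER BY QuestionId
    OFFSET (@PageNumber - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY
END
```

Because the filtering and paging happen in the database, only a single page of rows ever travels to the web server.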
[HttpGet]
public IEnumerable<QuestionGetManyResponse>
GetQuestions(
string search,
bool includeAnswers,
int page = 1,
int pageSize = 20
)
{
if (string.IsNullOrEmpty(search))
{
if (includeAnswers)
{
return
_dataRepository.GetQuestionsWithAnswers();
}
else
{
return _dataRepository.GetQuestions();
}
}
else
{
return
_dataRepository.GetQuestionsBySearchWithPaging(
search,
page,
pageSize
);
}
}
Notice that we also accept query parameters for the page number and page size, which default to 1 and 20, respectively.
Figure 10.17 – Improved searching questions load test result
We get the performance improvement we hoped for, with the endpoint now able to handle 100 requests per second.
In summary, we implemented data paging by accepting query parameters on the endpoint for the page number and the page size. These parameters are passed to the database query so that it only fetches the relevant page of data. Data paging is well worth considering for APIs that return collections of data, particularly if the collection is large.
In the next section, we are going to tackle the subject of asynchronous code and how this can help with scalability.
In this section, we are going to make the unanswered questions endpoint asynchronous to make it more scalable.
At the moment, all of our API code is synchronous. For synchronous API code, when a request is made to the API, a thread from the thread pool handles the request. If the code makes an I/O call (such as a database call) synchronously, the thread blocks until the I/O call has finished. A blocked thread can't be used for any other work; it simply waits for the I/O task to finish. If other requests are made to our API while a thread is blocked, different threads from the thread pool are used to handle them. The following diagram is a visualization of synchronous requests in ASP.NET:
Figure 10.18 – Synchronous requests
There is some overhead in using a thread: a thread consumes memory, and it takes time to spin a new thread up. So, really, we want our API to use as few threads as possible.
If the API was to work in an asynchronous manner, when a request is made to our API, a thread from the thread pool would handle the request (as in the synchronous case). If the code makes an asynchronous I/O call, the thread will be returned to the thread pool at the start of the I/O call and can be used for other requests. The following diagram is a visualization of asynchronous requests in ASP.NET:
Figure 10.19 – Asynchronous requests
So, if we make our API asynchronous, it will be able to handle requests more efficiently and increase scalability. It is important to note that making an API asynchronous won't make it more performant because a single request will take roughly the same amount of time. The improvement we are about to make is so that our API can use the server's resources more efficiently.
In this section, we will convert the action method for unanswered questions to be asynchronous. We will profile the performance before and after this conversion to discover the impact. We will also discover what happens when asynchronous code makes I/O calls synchronously.
Before we change the unanswered questions endpoint, let's test on the current implementation and gather some data to compare against the asynchronous implementation. We will use the performance profiler in Visual Studio along with WebSurge. Carry out the following steps:

Figure 10.20 – Release configuration
This will make the test a little more realistic.

Figure 10.21 – Performance profiler

Figure 10.22 – Configuration for load testing unanswered questions

Figure 10.23 – Performance on the synchronous version of unanswered questions
Figure 10.24 – Load test results on the synchronous version of unanswered questions
So, we now have some performance metrics from the synchronous implementation of unanswered questions.
Next, we are going to discover how changing this code to be asynchronous impacts performance.
Now, we are going to change the implementation of the unanswered questions endpoint so that it's asynchronous:
public interface IDataRepository
{
...
IEnumerable<QuestionGetManyResponse>
GetUnansweredQuestions();
Task<IEnumerable<QuestionGetManyResponse>>
GetUnansweredQuestionsAsync();
...
}
The key difference with an asynchronous method is that it returns a Task of the type that will eventually be returned.
public async Task<IEnumerable<QuestionGetManyResponse>>
GetUnansweredQuestionsAsync()
{
using (var connection = new
SqlConnection(_connectionString))
{
await connection.OpenAsync();
return await
connection.QueryAsync<QuestionGetManyResponse>(
"EXEC dbo.Question_GetUnanswered");
}
}
The async keyword before the return type signifies that the method is asynchronous. The implementation is very similar to the synchronous version, except that we use the asynchronous Dapper version of opening the connection and executing the query with the await keyword.
Important note
When making code asynchronous, all the I/O calls in the calling stack must be asynchronous. If any I/O call is synchronous, then the thread will be blocked rather than returning to the thread pool and so threads won't be managed efficiently.
[HttpGet("unanswered")]
public async Task<IEnumerable<QuestionGetManyResponse>>
GetUnansweredQuestions()
{
return await _dataRepository.
GetUnansweredQuestionsAsync();
}
We mark the method as asynchronous with the async keyword and return a Task of the type we eventually want to return. We also call the asynchronous version of the data repository method with the await keyword.
Our unanswered questions endpoint is now asynchronous.

Figure 10.25 – Performance on the asynchronous version of unanswered questions
Compare the results to the synchronous implementation results. The asynchronous one is slightly faster in my test. Notice the extra activities that happen in order to handle asynchronous code, which take up a bit of execution time.
Figure 10.26 – Load test results on the asynchronous version of unanswered questions
The results show a marginal performance improvement. In fact, your results may show a marginal decrease in performance. Asynchronous code can be slower than synchronous because of the overhead required to handle asynchronous code.
The benefit of asynchronous code is that it uses the web server's resources more efficiently under load. So, an asynchronous REST API will scale better than a synchronous REST API.
What happens when we use a synchronous database call in an asynchronous API controller method? We'll find out next.
An easy mistake to make is to mix asynchronous code with synchronous code. Let's find out what happens when this happens by changing the GetUnansweredQuestions action method:
[HttpGet("unanswered")]
public async Task<IEnumerable<QuestionGetManyResponse>> GetUnansweredQuestions()
{
return _dataRepository.GetUnansweredQuestions();
}

Figure 10.27 – Results when sync and async code is mixed
The code functions as though it is synchronous code even though the action method is asynchronous.
[HttpGet("unanswered")]
public async Task<IEnumerable<QuestionGetManyResponse>> GetUnansweredQuestions()
{
return await
_dataRepository.GetUnansweredQuestionsAsync();
}
Figure 10.28 – Debug configuration
So, when synchronous and asynchronous code is mixed, it will behave like synchronous code and use the thread pool inefficiently. In asynchronous methods, it is important to check that all I/O calls are asynchronous, and that includes any I/O calls in child methods.
In the next section, we are going to look at how we can optimize requests for data by caching data.
In this section, we are going to cache requests for getting a question. At the moment, the database is queried for each request to get a question. If we cache a question and can get subsequent requests for the question from the cache, this should be faster and reduce the load on the database. We will prove this with load tests.
Before we implement caching, we are going to load test the current implementation of getting a single question using the following steps:
Figure 10.29 – Getting a question without cache – load test results
So, we get 1,113 requests per second without caching.
Stop the REST API from running so that we can implement and use a data cache.
We are going to implement a cache for the questions using the memory cache in ASP.NET:
using QandA.Data.Models;
namespace QandA.Data
{
public interface IQuestionCache
{
QuestionGetSingleResponse Get(int questionId);
void Remove(int questionId);
void Set(QuestionGetSingleResponse question);
}
}
So, we need the cache implementation to have methods for getting, removing, and updating an item in the cache.
using Microsoft.Extensions.Caching.Memory;
using QandA.Data.Models;
namespace QandA.Data
{
public class QuestionCache: IQuestionCache
{
// TODO - create a memory cache
// TODO - method to get a cached question
// TODO - method to add a cached question
// TODO - method to remove a cached question
}
}
Notice that we have referenced Microsoft.Extensions.Caching.Memory so that we can use the standard ASP.NET memory cache.
public class QuestionCache: IQuestionCache
{
private MemoryCache _cache { get; set; }
public QuestionCache()
{
_cache = new MemoryCache(new MemoryCacheOptions
{
SizeLimit = 100
});
}
// TODO - method to get a cached question
// TODO - method to add a cached question
// TODO - method to remove a cached question
}
Notice that we have set the cache limit to be 100 items. This is to limit the amount of memory the cache takes up on our web server.
public class QuestionCache: IQuestionCache
{
...
private string GetCacheKey(int questionId) =>
$"Question-{questionId}";
public QuestionGetSingleResponse Get(int questionId)
{
QuestionGetSingleResponse question;
_cache.TryGetValue(
GetCacheKey(questionId),
out question);
return question;
}
// TODO - method to add a cached question
// TODO - method to remove a cached question
}
We have created an expression-bodied method that gives us the key for a cache item, which is the word Question followed by a hyphen and the question ID.
We use the TryGetValue method within the memory cache to retrieve the cached question. So, null will be returned from our method if the question doesn't exist in the cache.
public class QuestionCache: IQuestionCache
{
...
public void Set(QuestionGetSingleResponse question)
{
var cacheEntryOptions =
new MemoryCacheEntryOptions().SetSize(1);
_cache.Set(
GetCacheKey(question.QuestionId),
question,
cacheEntryOptions);
}
// TODO - method to remove a cached question
}
Notice that we specify the size of the question in the options when setting the cache value. This ties in with the size limit we set on the cache, so the cache will start to evict questions when it contains 100 of them.
public class QuestionCache: IQuestionCache
{
...
public void Remove(int questionId)
{
_cache.Remove(GetCacheKey(questionId));
}
}
Note that if the question doesn't exist in the cache, nothing will happen and no exception will be thrown.
That completes the implementation of our question cache.
Now, we are going to make use of the questions cache in the GetQuestion method of our API controller. Work through the following steps:
public void ConfigureServices(IServiceCollection services)
{
...
services.AddMemoryCache();
services.AddSingleton<IQuestionCache,
QuestionCache>();
}
We register our cache as a singleton in the dependency injection system. This means that a single instance of our class will be created for the lifetime of the app. So, separate HTTP requests will access the same class instance and, therefore, the same cached data. This is exactly what we want for a cache.
...
private readonly IQuestionCache _cache;
public QuestionsController(..., IQuestionCache questionCache)
{
...
_cache = questionCache;
}
[HttpGet("{questionId}")]
public ActionResult<QuestionGetSingleResponse>
GetQuestion(int questionId)
{
var question = _cache.Get(questionId);
if (question == null)
{
question =
_dataRepository.GetQuestion(questionId);
if (question == null)
{
return NotFound();
}
_cache.Set(question);
}
return question;
}
If the question isn't in the cache, then we get it from the data repository and put it in the cache.
[HttpPut("{questionId}")]
public ActionResult<QuestionGetSingleResponse>
PutQuestion(int questionId, QuestionPutRequest
questionPutRequest)
{
...
_cache.Remove(savedQuestion.QuestionId);
return savedQuestion;
}
[HttpDelete("{questionId}")]
public ActionResult DeleteQuestion(int questionId)
{
...
_cache.Remove(questionId);
return NoContent();
}
[HttpPost("answer")]
public ActionResult<AnswerGetResponse>
PostAnswer(AnswerPostRequest answerPostRequest)
{
...
_cache.Remove(answerPostRequest.QuestionId.Value);
return savedAnswer;
}

Figure 10.30 – Getting a question with cache load test results
This completes our implementation of the question endpoint with data caching.
It is important to remember to invalidate the cache when the data changes. In our example, this was straightforward, but it can be more complex, particularly if there are other processes outside of the REST API that change the data. So, if we don't have full control of the data changes in the REST API, a cache may not be worth implementing.
Another consideration for whether to use a cache is if the data changes very frequently. In this case, the caching process can actually negatively impact performance because lots of the requests will result in database calls anyway and we have all of the overhead of managing the cache.
However, if the data behind an endpoint changes infrequently and we have control over these changes, then caching is a great way to positively impact performance.
What if the REST API is distributed across several servers? Well, because the memory cache is local to each web server, this could result in database calls where the data is cached on a different server. A solution to this is to implement a distributed cache with IDistributedCache in ASP.NET, which would have a very similar implementation to our memory cache. The complexity is that this needs to connect to a third-party cache such as Redis, which adds financial costs and complexity to the solution. For high-traffic REST APIs, a distributed cache is well worth considering, though.
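As a sketch of what that could look like (this class isn't part of our project; the class name and the JSON serialization approach are our own choices), a distributed version of IQuestionCache might serialize questions to bytes and delegate to IDistributedCache:

```csharp
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;
using QandA.Data.Models;

namespace QandA.Data
{
    // Hypothetical distributed implementation of our question cache.
    // IDistributedCache stores byte arrays, so the question is serialized
    // to JSON on the way in and deserialized on the way out.
    public class DistributedQuestionCache : IQuestionCache
    {
        private readonly IDistributedCache _cache;

        public DistributedQuestionCache(IDistributedCache cache) =>
            _cache = cache;

        private static string GetCacheKey(int questionId) =>
            $"Question-{questionId}";

        public QuestionGetSingleResponse Get(int questionId)
        {
            var bytes = _cache.Get(GetCacheKey(questionId));
            return bytes == null
                ? null
                : JsonSerializer
                    .Deserialize<QuestionGetSingleResponse>(bytes);
        }

        public void Set(QuestionGetSingleResponse question) =>
            _cache.Set(
                GetCacheKey(question.QuestionId),
                JsonSerializer.SerializeToUtf8Bytes(question));

        public void Remove(int questionId) =>
            _cache.Remove(GetCacheKey(questionId));
    }
}
```

The backing store is chosen at registration time, for example with services.AddStackExchangeRedisCache for Redis, without changing this class.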
In this chapter, we learned that we can use Dapper's multi-mapping and multi-result features to reduce database round trips, which positively impacts performance and allows our REST API to accept more requests per second. We also learned that forcing the client to page through the data it needs to consume helps with performance as well.
We learned how to make controller action methods asynchronous and how this positively impacts the scalability of a REST API built in ASP.NET. We also understood that all of the I/O calls in a method and its child methods need to be asynchronous to achieve scalability benefits.
We also learned how to cache data in memory to reduce the number of expensive database calls. We saw that data that is read often and changed rarely is a great candidate for a cache.
We will continue to focus on the REST API in the next chapter and turn our attention to the topic of security. We will require users to be authenticated in order to access some endpoints in the REST API.
Try to answer the following questions to check the knowledge that you have gained in this chapter:
using (var connection = new SqlConnection(_connectionString))
{
connection.Open();
using (GridReader results = connection.QueryMultiple(
@"EXEC dbo.Order_GetHeader @OrderId = @OrderId;
EXEC dbo.OrderDetails_Get_ByOrderId @OrderId =
@OrderId",
new { OrderId = orderId }))
{
// TODO - Read the order and details from the
// query result
return order;
}
}
What are the missing statements that will read the order and its details from the results, putting the details in the order model? The order model is of the OrderGetSingleResponse type and contains a Details property of the IEnumerable<OrderDetailGetResponse> type.
public async Task<AnswerGetResponse> GetAnswer(int answerId)
{
using (var connection = new SqlConnection(_connectionString))
{
connection.Open();
return await connection
.QueryFirstOrDefaultAsync<AnswerGetResponse>(
"EXEC dbo.Answer_Get_ByAnswerId @AnswerId =
@AnswerId",
new { AnswerId = answerId });
}
}
services.AddScoped<QuestionCache>();
using (var connection = new SqlConnection(_connectionString))
{
connection.Open();
using (GridReader results = connection.QueryMultiple(
@"EXEC dbo.Order_GetHeader @OrderId = @OrderId;
EXEC dbo.OrderDetails_Get_ByOrderId @OrderId = @OrderId",
new { OrderId = orderId }))
{
var order = results.Read<
OrderGetSingleResponse>().FirstOrDefault();
if (order != null)
{
order.Details = results.Read<
OrderDetailGetResponse>().ToList();
}
return order;
}
}
a) The number of page read I/Os is reduced when SQL Server grabs the data.
b) The amount of data transferred from the database server to the web server is reduced.
c) The amount of memory used to store the data on the web server in our model is reduced.
d) The amount of data transferred from the web server to the client is reduced.
Here is the corrected implementation:
public async Task<AnswerGetResponse> GetAnswer(int answerId)
{
using (var connection = new SqlConnection(_connectionString))
{
await connection.OpenAsync();
return await connection
.QueryFirstOrDefaultAsync<AnswerGetResponse>(
"EXEC dbo.Answer_Get_ByAnswerId @AnswerId = @AnswerId",
new { AnswerId = answerId });
}
}
public void Set(QuestionGetSingleResponse question)
{
var cacheEntryOptions =
new MemoryCacheEntryOptions()
.SetSize(1)
.SetSlidingExpiration(TimeSpan.FromMinutes(30));
_cache.Set(GetCacheKey(question.QuestionId),
question, cacheEntryOptions);
}
Here are some useful links if you want to learn more about the topics that were covered in this chapter:
In this chapter, we'll implement authentication and authorization in our Q&A app. We will use a popular service called Auth0, which implements OpenID Connect (OIDC), to help us to do this. We will start by understanding what OIDC is and why it is a good choice, before getting our app to interact with Auth0.
At the moment, our web API is accessible by unauthenticated users, which is a security vulnerability. We will resolve this vulnerability by protecting the necessary endpoints with simple authorization. This will mean that only authenticated users can access protected resources.
Authenticated users shouldn't have access to everything, though. We will learn how to ensure authenticated users only get access to what they are allowed to by using custom authorization policies.
We'll also learn how to get details about authenticated users so that we can include them when questions and answers are saved to the database.
We will end the chapter by enabling cross-origin requests in preparation for allowing our frontend to access the REST API.
In this chapter, we'll cover the following topics:
We'll use the following tools and services in this chapter:
All of the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore code from a chapter, the source code repository can be downloaded and the relevant folder opened in the relevant editor. If the code is frontend code, then npm install can be entered in the terminal to restore the dependencies.
Check out the following video to see the Code in Action: http://bit.ly/2EPQ8DY
Before we cover OIDC, let's make sure we understand authentication and authorization. Authentication verifies that the user is who they say they are. In our app, the user will enter their email and password to prove who they are. Authorization decides whether a user has permission to access a resource. In our app, some of the REST API endpoints, such as posting a question, will eventually be protected by authorization checks.
OIDC is an industry-standard way of handling both authentication and authorization as well as other user-related operations. This works well for a wide variety of architectures, including single-page applications (SPAs) such as ours where there is a JavaScript client and a server-side REST API that need to be secured.
The following diagram shows the high-level flow of a user of our app being authenticated and then gaining access to protected resources in the REST API:
Figure 11.1 – OIDC authentication flow
Here are some more details of the steps that take place:
Notice that our app never handles user credentials. When user authentication is required, the user is redirected to the identity provider to carry out this process. Our app only ever deals with a secure token, referred to as an access token, which is a long encoded string. This token is in the JSON Web Token (JWT) format, which again is an industry standard.
The content of a JWT can be inspected using the https://jwt.io/ website. We can paste a JWT into the Encoded box and then the site puts the decoded JWT in the Decoded box, as shown in the following screenshot:
Figure 11.2 – JWT in jwt.io
There are three parts to a JWT, separated by dots, and they appear as different colors in jwt.io:
The header usually contains the type of the token in a typ field and the signing algorithm being used in an alg field. So, the preceding token is a JWT that uses an RSA signature with the SHA-256 asymmetric algorithm. There is also a kid field in the header, which is an opaque identifier that can be used to identify the key that was used to sign the JWT.
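To see why the site can decode a token without knowing the signing key, note that the header and payload are just base64url-encoded JSON. Here is a short TypeScript sketch of the decoding, using Node's `Buffer`; it is for illustration only and performs no signature verification:

```typescript
// Decode one part of a JWT. Each of the first two parts is
// base64url-encoded JSON; this sketch does NOT verify the signature
// (the third part).
function decodeJwtPart(part: string): any {
  // base64url uses '-' and '_' in place of '+' and '/'
  const base64 = part.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(base64, "base64").toString("utf8"));
}

function decodeJwt(token: string): { header: any; payload: any } {
  const [header, payload] = token.split(".");
  return { header: decodeJwtPart(header), payload: decodeJwtPart(payload) };
}
```

Note that being able to read a JWT is not a security problem in itself; the signature is what stops a token from being tampered with, and validation of it happens on the server.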
The payload of a JWT varies, but the following fields are often included:
OIDC deals with securely storing passwords, authenticating users, generating access tokens, and much more. Being able to leverage an industry-standard technology such as OIDC not only saves us lots of time but also gives us the peace of mind that the implementation is very secure and will receive updates as attackers get smarter.
What we have just learned is implemented by Auth0. We'll start to use Auth0 in the next section.
We are going to use a ready-made identity service called Auth0 in our app. Auth0 implements OIDC and is also free for a low number of users. Using Auth0 will allow us to focus on integrating with an identity service rather than spending time building our own.
In this section, we are going to set up Auth0 and integrate it into our ASP.NET backend.
Let's carry out the following steps to set up Auth0 as our identity provider:

Figure 11.3 – Auth0 tenant settings option
The Default Audience option is in the API Authorization Settings section. Change this to https://qanda:

Figure 11.4 – Auth0 Default Audience setting
This tells Auth0 to add https://qanda to the aud payload field in the JWT it generates. This setting triggers Auth0 to generate access tokens in JWT format. Our ASP.NET backend will also check that access tokens contain this data before granting access to protected resources.

Figure 11.5 – Creating a SPA Auth0 client
Figure 11.6 – Creating an API Auth0 client
The name can be anything we choose, but the Identifier setting must match the default audience we set on the tenant. Make sure Signing Algorithm is set to RS256 and then click the CREATE button.
That completes the setup of Auth0.
Next, we will integrate our ASP.NET backend with Auth0.
We can now change our ASP.NET backend to authenticate with Auth0. Let's open the backend project in Visual Studio and carry out the following steps:
Microsoft.AspNetCore.Authentication.JwtBearer
Important Note
Make sure the version of the package you select is supported by the version of .NET you are using. So, for example, if you are targeting .NET 5.0, then select package version 5.0.*.
using Microsoft.AspNetCore.Authentication.JwtBearer;
Add the following lines to the ConfigureServices method in the Startup class:
public void ConfigureServices(IServiceCollection services)
{
...
services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme =
JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme =
JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(options =>
{
options.Authority =
Configuration["Auth0:Authority"];
options.Audience =
Configuration["Auth0:Audience"];
});
}
This adds JWT-based authentication, specifying the authority and expected audience from settings in appsettings.json.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
...
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
...
}
This will validate the access token in each request if one exists. If the check succeeds, the user on the request context will be set.
{
...,
"Auth0": {
"Authority": "https://your-tentant-id.auth0.com/",
"Audience": "https://qanda"
}
}
We will need to substitute our Auth0 tenant ID into the Authority field. The tenant ID can be found in Auth0 to the left of the user avatar:
Figure 11.7 – Auth0 user avatar
So, Authority for the preceding tenant is https://your-tenant-id.auth0.com/. The Audience field needs to match the audience we specified in Auth0.
Our web API is now validating access tokens in the requests.
Let's quickly recap what we have done in this section. We told our identity provider the path to our frontend and the paths for signing in and out. Identity providers often provide an administration page for us to supply this information. We also told ASP.NET to validate the bearer token in a request using the UseAuthentication method in the Configure method in the Startup class. The validation is configured using the AddAuthentication method in ConfigureServices.
We are going to start protecting some endpoints in the next section.
We are going to start this section by protecting the questions endpoint for adding, updating, and deleting questions as well as posting answers so that only authenticated users can do these operations. We will then move on to implement and use a custom authorization policy so that only the author of the question can update or delete it.
Let's protect the questions endpoint for the POST, PUT, and DELETE HTTP methods by carrying out these steps:
using Microsoft.AspNetCore.Authorization;
[Authorize]
[HttpPost]
public async ... PostQuestion(QuestionPostRequest questionPostRequest)
...
[Authorize]
[HttpPut("{questionId}")]
public async ... PutQuestion(int questionId, QuestionPutRequest questionPutRequest)
...
[Authorize]
[HttpDelete("{questionId}")]
public async ... DeleteQuestion(int questionId)
...
[Authorize]
[HttpPost("answer")]
public async ... PostAnswer(AnswerPostRequest answerPostRequest)
...

Figure 11.8 – Accessing a protected endpoint in Postman without being authenticated
We receive a response with status code 401 Unauthorized. This shows that this action method is now protected.

Figure 11.9 – Getting a test token from Auth0

Figure 11.10 – Adding the Auth0 bearer token to an Authorization HTTP header in Postman

Figure 11.11 – Successfully accessing a protected endpoint in Postman
So, once the authentication middleware is in place, the Authorize attribute protects action methods. If a whole controller needs to be protected, the Authorize attribute can decorate the controller class:
[Authorize]
[Route("api/[controller]")]
[ApiController]
public class QuestionsController : ControllerBase
All of the action methods in the controller will then be protected without having to specify the Authorize attribute. We can also unprotect action methods in a protected controller by using the AllowAnonymous attribute:
[AllowAnonymous]
[HttpGet]
public IEnumerable<QuestionGetManyResponse> GetQuestions(string search, bool includeAnswers, int page = 1, int pageSize = 20)
So, in our example, we could have protected the whole controller using the Authorize attribute and unprotected the GetQuestions, GetUnansweredQuestions, and GetQuestion action methods with the AllowAnonymous attribute to achieve the behavior we want.
Next, we are going to learn how to implement a policy check with endpoint authorization.
At the moment, any authenticated user can update or delete questions. We are going to implement and use a custom authorization policy and use it to enforce that only the author of the question can do these operations. Let's carry out the following steps:
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Authorization;
using QandA.Authorization;
Note that the reference to the QandA.Authorization namespace doesn't exist yet. We'll implement this in a later step.
public void ConfigureServices(IServiceCollection services)
{
...
services.AddHttpClient();
}
The authorization policy has its requirements defined in a class called MustBeQuestionAuthorRequirement, which we'll implement in a later step.
public void ConfigureServices(IServiceCollection services)
{
...
services.AddHttpClient();
services.AddAuthorization(options =>
options.AddPolicy("MustBeQuestionAuthor", policy
=>
policy.Requirements
.Add(new MustBeQuestionAuthorRequirement())));
}
public void ConfigureServices(IServiceCollection services)
{
...
services.AddHttpClient();
services.AddAuthorization(...);
services.AddScoped<
IAuthorizationHandler,
MustBeQuestionAuthorHandler>();
}
So, the handler for MustBeQuestionAuthorRequirement will be implemented in a class called MustBeQuestionAuthorHandler.
public void ConfigureServices(IServiceCollection services)
{
...
services.AddHttpClient();
services.AddAuthorization(...);
services.AddScoped<
IAuthorizationHandler,
MustBeQuestionAuthorHandler>();
services.AddHttpContextAccessor();
}
Note that AddHttpContextAccessor is a convenience method for AddSingleton<IHttpContextAccessor,HttpContextAccessor>.
using Microsoft.AspNetCore.Authorization;
namespace QandA.Authorization
{
public class MustBeQuestionAuthorRequirement:
IAuthorizationRequirement
{
public MustBeQuestionAuthorRequirement()
{
}
}
}
using System;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using QandA.Data;
namespace QandA.Authorization
{
public class MustBeQuestionAuthorHandler:
AuthorizationHandler<MustBeQuestionAuthorRequirement>
{
private readonly IDataRepository _dataRepository;
private readonly IHttpContextAccessor
_httpContextAccessor;
public MustBeQuestionAuthorHandler(
IDataRepository dataRepository,
IHttpContextAccessor httpContextAccessor)
{
_dataRepository = dataRepository;
_httpContextAccessor = httpContextAccessor;
}
protected async override Task
HandleRequirementAsync(
AuthorizationHandlerContext context,
MustBeQuestionAuthorRequirement requirement)
{
// TODO - check that the user is authenticated
// TODO - get the question id from the request
// TODO - get the user id from the name
// identifier claim
// TODO - get the question from the data
// repository
// TODO - if the question can't be found go to
// the next piece of middleware
// TODO - return failure if the user id in the
// question from the data repository is
// different to the user id in the request
// TODO - return success if we manage to get
// here
}
}
}
This inherits from the AuthorizationHandler class, which takes in the requirement it is handling as a generic parameter. We have injected the data repository and the HTTP context into the class.
protected async override Task
HandleRequirementAsync(
AuthorizationHandlerContext context,
MustBeQuestionAuthorRequirement requirement)
{
if (!context.User.Identity.IsAuthenticated)
{
context.Fail();
return;
}
// TODO - get the question id from the request
// TODO - get the user id from the name identifier
// claim
// TODO - get the question from the data repository
// TODO - if the question can't be found go to the
// next piece of middleware
// TODO - return failure if the user id in the
// question from the data repository is different
// to the user id in the request
// TODO - return success if we manage to get here
}
The context parameter in the method contains information about the user's identity in an Identity property. We use the IsAuthenticated property within the Identity object to determine whether the user is authenticated or not. We call the Fail method on the context argument to tell it that the requirement failed.
protected async override Task
HandleRequirementAsync(
AuthorizationHandlerContext context,
MustBeQuestionAuthorRequirement requirement)
{
if (!context.User.Identity.IsAuthenticated)
{
context.Fail();
return;
}
var questionId =
_httpContextAccessor.HttpContext.Request
.RouteValues["questionId"];
int questionIdAsInt = Convert.ToInt32(questionId);
// TODO - get the user id from the name identifier
// claim
// TODO - get the question from the data repository
// TODO - if the question can't be found go to the
//next piece of middleware
// TODO - return failure if the user id in the
//question from the data repository is different
// to the user id in the request
// TODO - return success if we manage to get here
}
We use the RouteValues dictionary within the HTTP context request to get the question ID. The RouteValues dictionary contains the controller name, the action method name, and the parameters for the action method.
protected async override Task
HandleRequirementAsync(
AuthorizationHandlerContext context,
MustBeQuestionAuthorRequirement requirement)
{
...
var questionId =
_httpContextAccessor.HttpContext.Request
.RouteValues["questionId"];
int questionIdAsInt = Convert.ToInt32(questionId);
var userId =
context.User.FindFirst(ClaimTypes.NameIdentifier).
Value;
// TODO - get the question from the data repository
// TODO - if the question can't be found go to the
// next piece of middleware
// TODO - return failure if the user id in the
//question from the data repository is different
// to the user id in the request
// TODO - return success if we manage to get here
}
The user ID is stored in the name identifier claim.
Important Note
A claim is information about a user from a trusted source. A claim represents what the subject is, not what the subject can do. The ASP.NET authentication middleware automatically puts the user ID in a name identifier claim for us.
We have used the FindFirst method on the User object from the context parameter to get the value of the name identifier claim. The User object is populated with the claims by the authentication middleware earlier in the request pipeline after it has read the access token.
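Under the hood, a claim is just a typed name-value pair, and the name identifier claim is populated from the sub (subject) field of the JWT by default. Here is a small TypeScript sketch of the lookup that FindFirst performs; the Claim shape is illustrative, not the .NET type:

```typescript
// Illustrative model of a claim: a typed name-value pair about the user.
interface Claim {
  type: string;
  value: string;
}

// The URI that ClaimTypes.NameIdentifier resolves to in .NET; the JWT
// middleware maps the token's "sub" claim to this type by default.
const NAME_IDENTIFIER =
  "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier";

// Equivalent of User.FindFirst: return the first claim of the given type
function findFirst(claims: Claim[], type: string): Claim | undefined {
  return claims.find((claim) => claim.type === type);
}
```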
protected async override Task
HandleRequirementAsync(
AuthorizationHandlerContext context,
MustBeQuestionAuthorRequirement requirement)
{
...
var userId =
context.User.FindFirst(ClaimTypes.NameIdentifier).Value;
var question =
await _dataRepository.GetQuestion(questionIdAsInt);
if (question == null)
{
// let it through so the controller can return a 404
context.Succeed(requirement);
return;
}
// TODO - return failure if the user id in the
//question from the data repository is different
// to the user id in the request
// TODO - return success if we manage to get here
}
protected async override Task
HandleRequirementAsync(
AuthorizationHandlerContext context,
MustBeQuestionAuthorRequirement requirement)
{
...
var question =
await _dataRepository.GetQuestion(questionIdAsInt);
if (question == null)
{
// let it through so the controller can return
// a 404
context.Succeed(requirement);
return;
}
if (question.UserId != userId)
{
context.Fail();
return;
}
context.Succeed(requirement);
}
[Authorize(Policy = "MustBeQuestionAuthor")]
[HttpPut("{questionId}")]
public ... PutQuestion(int questionId, QuestionPutRequest questionPutRequest)
...
[Authorize(Policy = "MustBeQuestionAuthor")]
[HttpDelete("{questionId}")]
public ... DeleteQuestion(int questionId)
...
We have now applied our authorization policy to updating and deleting a question.
Unfortunately, we can't use the test access token that Auth0 gives us to try this out but we will circle back to this and confirm that it works in Chapter 12, Interacting with RESTful APIs.
Custom authorization policies give us lots of flexibility and power to implement complex authorization rules. As we have just experienced in our example, a single policy can be implemented centrally and used on different action methods.
Let's quickly recap what we have learned in this section:
In the next section, we will learn how to reference information about the authenticated user in an API controller.
Now that our REST API knows about the user interacting with it, we can use this to post the correct user against questions and answers. Let's carry out the following steps:
using System.Security.Claims;
using Microsoft.Extensions.Configuration;
using System.Net.Http;
using System.Text.Json;
using System.Linq;
public async ...
PostQuestion(QuestionPostRequest
questionPostRequest)
{
var savedQuestion =
await _dataRepository.PostQuestion(new
QuestionPostFullRequest
{
Title = questionPostRequest.Title,
Content = questionPostRequest.Content,
UserId =
User.FindFirst(ClaimTypes.NameIdentifier).Value,
UserName = "bob.test@test.com",
Created = DateTime.UtcNow
});
...
}
ControllerBase contains a User property that gives us information about the authenticated user, including the claims. So, we use the FindFirst method to get the value of the name identifier claim.
namespace QandA.Data.Models
{
public class User
{
public string Name { get; set; }
}
}
Note that there is more user information that we can get from Auth0 but we are only interested in the username in our app.
...
private readonly IHttpClientFactory _clientFactory;
private readonly string _auth0UserInfo;
public QuestionsController(
...,
IHttpClientFactory clientFactory,
IConfiguration configuration)
{
...
_clientFactory = clientFactory;
_auth0UserInfo =
$"{configuration["Auth0:Authority"]}userinfo";
}
private async Task<string> GetUserName()
{
var request = new HttpRequestMessage(
HttpMethod.Get,
_auth0UserInfo);
request.Headers.Add(
"Authorization",
Request.Headers["Authorization"].First());
var client = _clientFactory.CreateClient();
var response = await client.SendAsync(request);
if (response.IsSuccessStatusCode)
{
var jsonContent =
await response.Content.ReadAsStringAsync();
var user =
JsonSerializer.Deserialize<User>(
jsonContent,
new JsonSerializerOptions
{
PropertyNameCaseInsensitive = true
});
return user.Name;
}
else
{
return "";
}
}
We make a GET HTTP request to the Auth0 user information endpoint with the Authorization HTTP header from the current request to the ASP.NET backend. This HTTP header will contain the access token that will give us access to the Auth0 endpoint.
If the request is successful, we parse the response body into our User model. Notice that we use the new JSON serializer in .NET. Notice also that we specify case-insensitive property mapping so that the camel case fields in the response map correctly to the title case properties in the class.
public async ... PostQuestion(QuestionPostRequest questionPostRequest)
{
var savedQuestion = await
_dataRepository.PostQuestion(new
QuestionPostFullRequest
{
Title = questionPostRequest.Title,
Content = questionPostRequest.Content,
UserId =
User.FindFirst(ClaimTypes.NameIdentifier).Value,
UserName = await GetUserName(),
Created = DateTime.UtcNow
});
...
}
[Authorize]
[HttpPost("answer")]
public ActionResult<AnswerGetResponse> PostAnswer(AnswerPostRequest answerPostRequest)
{
...
var savedAnswer = _dataRepository.PostAnswer(new
AnswerPostFullRequest
{
QuestionId = answerPostRequest.QuestionId.Value,
Content = answerPostRequest.Content,
UserId =
User.FindFirst(ClaimTypes.NameIdentifier).Value,
UserName = await GetUserName(),
Created = DateTime.UtcNow
});
...
}
Unfortunately, we can't use the test access token that Auth0 gives us to try this out because it doesn't have a user associated with it. However, we will circle back to this and confirm that it works in Chapter 12, Interacting with RESTful APIs.
Our question controller is interacting with the authenticated user nicely now.
To recap, information about the authenticated user is available in a User property within an API controller. The information in the User property is limited to the information contained in the JWT. Additional information can be obtained by requesting it from the relevant endpoint in the identity service provider.
CORS stands for Cross-Origin Resource Sharing and is a mechanism that uses HTTP headers to tell a browser to allow a web application running at one origin (domain) to access selected resources from a server at a different origin.
In this section, we will start by trying to access our REST API from a browser application and discover that it isn't accessible. We will then add and configure CORS in the REST API and verify that it is accessible from a browser application.
Let's carry out the following steps:

Figure 11.12 – CORS error when accessing the REST API from the browser
public void ConfigureServices(IServiceCollection services)
{
...
services.AddCors(options =>
options.AddPolicy("CorsPolicy", builder =>
builder
.AllowAnyMethod()
.AllowAnyHeader()
.WithOrigins(Configuration["Frontend"])));
}
This has defined a CORS policy that allows origins specified in appsettings.json to access the REST API. It also allows requests with any HTTP method and any HTTP header.
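As a rough illustration of what this policy does for each request, here is a TypeScript sketch of the origin check behind CORS; the function and its shapes are hypothetical, not the ASP.NET implementation:

```typescript
// Sketch of the server-side CORS decision: the browser sends an Origin
// header, and the server only includes the Access-Control-Allow-Origin
// response header for permitted origins.
function corsHeaders(
  requestOrigin: string,
  allowedOrigins: string[]
): Record<string, string> {
  if (allowedOrigins.includes(requestOrigin)) {
    return {
      "Access-Control-Allow-Origin": requestOrigin,
      "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE",
      "Access-Control-Allow-Headers": "Authorization,Content-Type",
    };
  }
  // No CORS headers: the browser will block the cross-origin response
  return {};
}
```

When the headers are absent, the server still responds, but the browser refuses to hand the response to the JavaScript code, which is exactly the error we saw in Figure 11.12.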
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
...
app.UseRouting();
app.UseCors("CorsPolicy");
app.UseAuthentication();
...
}
{
...,
"Frontend": "https://resttesttest.com"
}

Figure 11.13 – Successful cross-origin request
{
...,
"Frontend": "http://localhost:3000"
}
CORS is straightforward to add in ASP.NET. First, we create a policy and then use it in the request pipeline. It is important that the UseCors method is placed between the UseRouting and UseEndpoints methods in the Configure method for it to function correctly.
Auth0 is an OIDC identity provider that we can leverage to authenticate and authorize clients. An access token in JWT format is available from an identity provider when a successful sign-in has been made. An access token can be used in requests to access protected resources.
ASP.NET can validate JWTs by first using the AddAuthentication method in the ConfigureServices method in the Startup class and then UseAuthentication in the Configure method.
Once authentication has been added to the request pipeline, REST API resources can be protected by decorating the controller and action methods using the Authorize attribute. Protected action methods can then be unprotected by using the AllowAnonymous attribute. We can access information about a user, such as their claims, via a controller's User property.
Custom policies are a powerful way to allow a certain set of users to get access to protected resources. Requirement and handler classes must be implemented that define the policy logic. The policy can be applied to an endpoint using the Authorize attribute by passing in the policy name as a parameter.
ASP.NET disallows cross-origin requests out of the box. We are required to add and enable a CORS policy for the web clients that require access to the REST API.
Our backend is close to completion now. In the next chapter, we'll turn our attention back to the frontend and start to interact with the backend we have built.
Let's answer the following questions to practice what we have learned in this chapter:
public void Configure(...)
{
...
app.UseEndpoints(...);
app.UseAuthentication();
}
services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme =
JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme =
JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(options =>
{
...
options.Audience = "https://myapp";
});
When we try to access protected resources in our ASP.NET backend, we receive an HTTP 401 status code. What is the problem here?
{
"nbf": 1609671475,
"auth_time": 1609671475,
"exp": 1609757875,
...
}
Tip: You can decode the Unix dates using this website: https://www.unixtimestamp.com/index.php.
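These date claims are Unix timestamps: seconds since 1 January 1970 UTC. The conversion can also be done in code, as in this small TypeScript sketch (JavaScript's Date expects milliseconds, hence the multiplication):

```typescript
// Convert a JWT date claim (seconds since the Unix epoch) to an ISO string
function jwtDateToIso(unixSeconds: number): string {
  return new Date(unixSeconds * 1000).toISOString();
}

// e.g. jwtDateToIso(1609757875) → "2021-01-04T10:57:55.000Z"
```

Applying this to the payload above shows that the token became valid (nbf) on 3 January 2021 and expires (exp) exactly 24 hours later, on 4 January 2021.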
Authorisation: bearer some-access-token
We receive an HTTP 401 status code from the request, though. What is the problem?
private readonly IHttpContextAccessor _httpContextAccessor;
public MyClass(IHttpContextAccessor httpContextAccessor)
{
_httpContextAccessor = httpContextAccessor;
}
public SomeMethod()
{
var request = _httpContextAccessor.HttpContext.Request;
}
The HttpContextAccessor service must be added to the ConfigureServices method in the Startup class, as follows:
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
User.FindFirst(ClaimTypes.NameIdentifier).Value
Here are some useful links to learn more about the topics covered in this chapter:
Having completed our REST API, it's now time to interact with it in our React frontend app. We will start by interacting with the unauthenticated endpoints to get questions by using the browser's fetch function. We will deal with the situation when a user navigates away from a page before data is fetched, preventing state errors.
We will leverage the Auth0 tenant that we set up in the last chapter to securely sign users in and out of our app. We will then use the access token from Auth0 to access protected endpoints. We will also make sure that authenticated users are only able to see options that they have permission to perform.
By the end of this chapter, our frontend will be interacting fully with the backend, securely and robustly.
In this chapter, we'll cover the following topics:
We'll use the following tools and services in this chapter:
All of the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore code from a chapter, the source code repository can be downloaded and the relevant folder opened in the relevant editor. If the code is frontend code, then npm install can be entered in the Terminal to restore the dependencies.
Check out the following video to see the code in action: https://bit.ly/37CQqNx
In this section, we are going to use the native fetch function to get unanswered questions from our real REST API. We are then going to use a wrapper function over fetch to make interacting with our backend a little easier. This approach will also centralize the code that interacts with the REST API, which is beneficial when we want to make improvements to it. We'll then move on to using the real REST API to get a single question and search for questions.
We are going to start interacting with the REST API on the home page when displaying the list of unanswered questions. The HomePage component won't actually change, but the getUnansweredQuestions function in QuestionsData.ts will. In getUnansweredQuestions, we'll leverage the native browser fetch function to interact with our REST API. If you haven't already, let's open Visual Studio Code and carry out the following steps.
Open QuestionsData.ts, find the getUnansweredQuestions function, and replace the implementation with the following content:
export const getUnansweredQuestions = async (): Promise<
QuestionData[]
> => {
let unansweredQuestions: QuestionData[] = [];
// TODO - call api/questions/unanswered
// TODO - put response body in unansweredQuestions
return unansweredQuestions;
};
The function takes exactly the same parameters and returns the same type as before, so the components that consume this function shouldn't be impacted by the changes we are about to make. Follow the steps given here:
export const getUnansweredQuestions = async (): Promise<
QuestionData[]
> => {
let unansweredQuestions: QuestionData[] = [];
const response = await fetch(
'http://localhost:17525/api/questions/unanswered'
);
// TODO - put response body in unansweredQuestions
return unansweredQuestions;
};
So, for a GET request, we simply put the path we are requesting in the fetch argument. If your REST API is running on a different port, then don't forget to change the path so that it calls your REST API.
Notice the await keyword before the fetch call. This is because fetch is an asynchronous function, and we want to wait for the promise it returns to be resolved before the next statement is executed.
We have assigned the HTTP response object returned from the fetch function to a response variable. The response object exposes useful properties and methods, such as ok, status, statusText, and json, that we can interact with.
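To illustrate, here is a small sketch of those properties and methods, using the Response constructor (available in modern browsers and Node 18+) with hypothetical data rather than a real network call:

```typescript
// Construct a response like the one fetch would return for our endpoint
const response = new Response(JSON.stringify([{ questionId: 1 }]), {
  status: 200,
  headers: { 'content-type': 'application/json' },
});

console.log(response.ok); // true - status is in the 200-299 range
console.log(response.status); // 200
console.log(response.headers.get('content-type')); // application/json

// json() parses the body as JSON, returning a promise of the result
response.json().then((body) => console.log(body[0].questionId)); // 1
```

The ok property is the one we'll lean on most, because it gives a simple success/failure signal without inspecting individual status codes.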
export const getUnansweredQuestions = async (): Promise<
QuestionData[]
> => {
let unansweredQuestions: QuestionData[] = [];
const response = await fetch(
"http://localhost:17525/api/questions/unanswered"
);
unansweredQuestions = await response.json();
return unansweredQuestions;
};
We have already discovered where to find our Auth0 tenant in the last chapter but, as a reminder, it is to the left of our user avatar:

Figure 12.1 – Auth0 tenant ID

Figure 12.2 – Error on question created date
The problem here is that the created property is deserialized as a string rather than the Date object that the Question component expects.
export const getUnansweredQuestions = async (): Promise<
QuestionData[]
> => {
let unansweredQuestions: QuestionData[] = [];
const response = await fetch(
'http://localhost:17525/api/questions/unanswered',
);
unansweredQuestions = await response.json();
return unansweredQuestions.map((question) => ({
...question,
created: new Date(question.created),
}));
};
We use the array map function to iterate through all of the questions, returning a copy of the original question (using the spread syntax) and then overwriting the created property with a Date object from the string date.
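As a minimal illustration with hypothetical data, spreading copies every property, and the created property listed afterward overwrites the copied string:

```typescript
const fromServer = { questionId: 1, created: '2021-01-06T12:00:00.000Z' };

// Spread copies questionId and created; the created property written
// afterward overwrites the string with a Date object
const mapped = { ...fromServer, created: new Date(fromServer.created) };

console.log(mapped.questionId); // 1
console.log(mapped.created instanceof Date); // true
console.log(mapped.created.getUTCFullYear()); // 2021
```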
Figure 12.3 – Unanswered questions output correctly
Great stuff! Our React app is now interacting with our REST API!
We'll need to use the fetch function in every function that needs to interact with the REST API. So, we are going to create a generic http function that we'll use to make all of our HTTP requests. This will nicely centralize the code that calls the REST API. Let's carry out the following steps:
import { webAPIUrl } from './AppSettings';
export interface HttpRequest<REQB> {
path: string;
}
export interface HttpResponse<RESB> {
ok: boolean;
body?: RESB;
}
We've started by importing the root path to our REST API from AppSettings.ts, which was set up in our starter project. The AppSettings.ts file is where we will build all of the different paths that will vary between development and production. Make sure webAPIUrl contains the correct path for your REST API.
We have also defined interfaces for the request and response. Notice that the interfaces contain a generic parameter for the type of the body in the request and response.
export const http = async <
RESB,
REQB = undefined
>(
config: HttpRequest<REQB>,
): Promise<HttpResponse<RESB>> => {
};
We have defaulted the type for the request body to undefined so that the consumer of the function doesn't need to pass it.
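As a sketch of how a defaulted generic parameter behaves, here is a simplified, hypothetical describeRequest function (not our real http function) that uses the same pattern:

```typescript
interface HttpRequest<REQB> {
  path: string;
  body?: REQB;
}

// REQB defaults to undefined, so callers that don't send a body only
// need to supply the response body type
const describeRequest = <RESB, REQB = undefined>(
  config: HttpRequest<REQB>,
): string =>
  config.body === undefined ? `get ${config.path}` : `post ${config.path}`;

// One type argument: REQB falls back to its default
console.log(describeRequest<string[]>({ path: '/questions/unanswered' }));
// get /questions/unanswered

// Two type arguments when there is a request body
console.log(
  describeRequest<string, { name: string }>({
    path: '/questions',
    body: { name: 'TypeScript' },
  }),
); // post /questions
```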
export const http = async <
RESB,
REQB = undefined
>(
config: HttpRequest<REQB>,
): Promise<HttpResponse<RESB>> => {
const request = new Request(
`${webAPIUrl}${config.path}`
);
const response = await fetch(request);
};
Notice that we create a new instance of a Request object and pass that into fetch rather than just passing the request path into fetch. This will be useful later in this chapter as we expand this function for different HTTP methods and authentication.
export const http = async <
RESB,
REQB = undefined
>(
config: HttpRequest<REQB>,
): Promise<HttpResponse<RESB>> => {
const request = new Request(
`${webAPIUrl}${config.path}`,
);
const response = await fetch(request);
if (response.ok) {
} else {
}
};
export const http = async <
RESB,
REQB = undefined
>(
config: HttpRequest<REQB>,
): Promise<HttpResponse<RESB>> => {
const request = new Request(
`${webAPIUrl}${config.path}`,
);
const response = await fetch(request);
if (response.ok) {
const body = await response.json();
} else {
}
};
export const http = async <
RESB,
REQB = undefined
>(
config: HttpRequest<REQB>,
): Promise<HttpResponse<RESB>> => {
const request = new Request(
`${webAPIUrl}${config.path}`,
);
const response = await fetch(request);
if (response.ok) {
const body = await response.json();
return { ok: response.ok, body };
} else {
return { ok: response.ok };
}
};
If the response isn't successful, we are going to log the HTTP error.
export const http = async <
RESB,
REQB = undefined
>(
config: HttpRequest<REQB>,
): Promise<HttpResponse<RESB>> => {
const request = new Request(
`${webAPIUrl}${config.path}`,
);
const response = await fetch(request);
if (response.ok) {
const body = await response.json();
return { ok: response.ok, body };
} else {
logError(request, response);
return { ok: response.ok };
}
};
const logError = async (
request: Request,
response: Response,
) => {
const contentType = response.headers.get(
'content-type',
);
let body: any;
if (
contentType &&
contentType.indexOf('application/json') !== -1
) {
body = await response.json();
} else {
body = await response.text();
}
console.error(
`Error requesting ${request.method} ${request.url}`,
body,
);
};
The function checks whether the response is in JSON format and if so calls the json method on the response object to get the JSON body. If the response isn't in JSON format, then the body is retrieved using the text method on the response object. The body of the response is then output to the console along with the HTTP request method and path.
import { http } from './http';
export const getUnansweredQuestions = async (): Promise<
QuestionData[]
> => {
const result = await http<
QuestionDataFromServer[]
>({
path: '/questions/unanswered',
});
if (result.ok && result.body) {
return result.body.map(mapQuestionFromServer);
} else {
return [];
}
};
We pass QuestionDataFromServer[] into the http function as the expected response body type. QuestionDataFromServer is an interface that was added to our starter project for this chapter that has the created date as a string—exactly how it arrives from the REST API.
We use a mapping function to return the parsed response body with the created property set as a proper date if there is a response body. Otherwise, we return an empty array. The mapQuestionFromServer mapping function was added to our starter project for this chapter.
This renders the unanswered questions when we save these changes, as it did before:
Figure 12.4 – Unanswered questions output correctly
Our revised implementation of getUnansweredQuestions is a little better because the root path to our REST API isn't hardcoded within it, and HTTP errors are now handled and logged. We'll continue to use and expand our generic http function throughout this chapter.
In this sub-section, we are going to refactor our existing getQuestion function to use our http function to get a single question from our REST API. Carry out the following steps in QuestionsData.ts:
export const getQuestion = async (
questionId: number,
): Promise<QuestionData | null> => {
};
export const getQuestion = async (
questionId: number,
): Promise<QuestionData | null> => {
const result = await http<
QuestionDataFromServer
>({
path: `/questions/${questionId}`,
});
};
export const getQuestion = async (
questionId: number,
): Promise<QuestionData | null> => {
const result = await http<
QuestionDataFromServer
>({
path: `/questions/${questionId}`,
});
if (result.ok && result.body) {
return mapQuestionFromServer(result.body);
} else {
return null;
}
};
Figure 12.5 – Question page
We didn't have to make any changes to any of the frontend components. Nice!
In this sub-section, we are going to refactor our existing searchQuestions function to use our http function to search questions via our REST API. This is very similar to what we have just done, so we'll do this in one go:
export const searchQuestions = async (
criteria: string,
): Promise<QuestionData[]> => {
const result = await http<
QuestionDataFromServer[]
>({
path: `/questions?search=${criteria}`,
});
if (result.ok && result.body) {
return result.body.map(mapQuestionFromServer);
} else {
return [];
}
};
We make a request to the questions endpoint with the search query parameter containing the criteria. We return the response body with created Date objects if the request is successful or an empty array if the request fails.
The searchQuestions parameter and return type haven't changed. So, when we save the changes and search for a question in the running app, the matched questions will render correctly:
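Note that the criteria is interpolated straight into the query string. If the search text could contain characters with special meaning in URLs, encoding it first is safer. Here is a sketch of the idea:

```typescript
const criteria = 'C# & .NET';

// Without encoding, # would end the URL and & would start a new parameter
const path = `/questions?search=${encodeURIComponent(criteria)}`;

console.log(path); // /questions?search=C%23%20%26%20.NET
```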
Figure 12.6 – Search page
In the next section, we will take a break from implementing our generic http function and implement code to sign users in to our app via Auth0.
In this section, we will fully implement the sign-in and sign-out processes from our React frontend. We are going to interact with Auth0 as a part of these processes.
We will start by installing the Auth0 JavaScript client before creating React Router routes and logic to handle the Auth0 sign-in and sign-out processes.
We will also learn about React context in this section. We will use this React feature to centralize information and functions for authentication that components can easily access.
There is a standard Auth0 JavaScript library for single-page applications that we can leverage that will interact nicely with Auth0. The npm package for the library is called @auth0/auth0-spa-js. Let's install this by running the following command in the Visual Studio Code Terminal:
> npm install @auth0/auth0-spa-js
TypeScript types are included in this library, so we don't need to install them separately. The Auth0 client library for single-page applications is now installed in our project.
Let's quickly recap the sign-in flow between our app and Auth0. Our app redirects the user to Auth0, where they enter their credentials. After Auth0 has validated the credentials, it redirects the user back to our app with an authorization code, which our app exchanges for tokens.
The sign-out flow is simpler. Our app redirects the user to Auth0 to end their session, and Auth0 then redirects the user back to our app.
So, we will have the following routes in our frontend app: signin, signin-callback, signout, and signout-callback.
We now understand that we need four routes in our app to handle the sign-in and sign-out processes. The SignInPage component will handle both the signin and signin-callback routes. The SignOutPage component will handle both the signout and signout-callback routes.
Our app already knows about the SignInPage component with the route we have declared in App.tsx. However, it is not handling the sign-in callback from Auth0. Our app also isn't handling signing out. Let's implement all of this in App.tsx by following these steps:
import { SignOutPage } from './SignOutPage';
<Route
path="signin"
element={<SignInPage action="signin" />}
/>
The action prop doesn't exist yet on the SignInPage component; hence, our app will not compile at the moment. We'll implement the action prop later.
<Route
path="/signin-callback"
element={<SignInPage action="signin-callback" />}
/>
<Route
path="signout"
element={
<SignOutPage action="signout" />
}
/>
<Route
path="/signout-callback"
element={
<SignOutPage action="signout-callback" />
}
/>
All of the routes are in place now for the sign-in, sign-up, and sign-out processes.
We are going to put state and functions for authentication in a central place in our code. We could use Redux for this, but we are going to take this opportunity to use a context in React.
Important Note
React context provides a way to share data across the component tree without having to pass props down manually at every level. More information can be found at https://reactjs.org/docs/context.html.
We are going to put our authentication state and functions in a React context that we'll provide to all of the components in our app. Let's carry out the following steps:
import React from 'react';
import createAuth0Client from '@auth0/auth0-spa-js';
import Auth0Client from '@auth0/auth0-spa-js/dist/typings/Auth0Client';
import { authSettings } from './AppSettings';
interface Auth0User {
name: string;
email: string;
}
interface IAuth0Context {
isAuthenticated: boolean;
user?: Auth0User;
signIn: () => void;
signOut: () => void;
loading: boolean;
}
export const Auth0Context = React.createContext<IAuth0Context>({
isAuthenticated: false,
signIn: () => {},
signOut: () => {},
loading: true
});
So, our context provides properties for whether the user is authenticated, the user's profile information, functions for signing in and out, and whether the context is loading.
The createContext function requires a default value for the context, so we've passed in an object with appropriate initial property values and empty functions for signing in and out.
export const useAuth = () => React.useContext(Auth0Context);
This is a custom hook in React.
Important Note
Custom hooks are a mechanism for sharing logic between components. They allow React features such as useState, useEffect, and useContext to be used in reusable functions outside of a component definition. More information on custom hooks can be found at https://reactjs.org/docs/hooks-custom.html.
A common naming convention for custom hooks is to have a prefix of use. So, we've called our custom hook useAuth.
export const AuthProvider: React.FC = ({
children,
}) => {
const [
isAuthenticated,
setIsAuthenticated,
] = React.useState<boolean>(false);
const [user, setUser] = React.useState<
Auth0User | undefined
>(undefined);
const [
auth0Client,
setAuth0Client,
] = React.useState<Auth0Client>();
const [loading, setLoading] = React.useState<
boolean
>(true);
};
We have used a standard type, FC, from the React types to type the component props. This contains a type for the children prop that we are using.
We have declared a state to hold whether the user is authenticated, the user's profile information, a client object from Auth0, and whether the context is loading.
export const AuthProvider: React.FC = ({
children,
}) => {
...
return (
<Auth0Context.Provider
value={{
isAuthenticated,
user,
signIn: () =>
getAuth0ClientFromState().loginWithRedirect(),
signOut: () =>
getAuth0ClientFromState().logout({
client_id: authSettings.client_id,
returnTo:
window.location.origin +
'/signout-callback',
}),
loading,
}}
>
{children}
</Auth0Context.Provider>
);
};
This returns the context's Provider component from React. The object we pass in the value property will be available to consumers of the context we are creating. So, we are giving consumers of the context access to whether the user is authenticated, the user's profile, and functions for signing in and out.
export const AuthProvider: FC = ({ children }) => {
...
const getAuth0ClientFromState = () => {
if (auth0Client === undefined) {
throw new Error('Auth0 client not set');
}
return auth0Client;
};
return (
<Auth0Context.Provider
...
</Auth0Context.Provider>
);
};
So, this function returns the Auth0 client from the state but throws an error if it is undefined.
export const AuthProvider: FC = ({ children }) => {
...
React.useEffect(() => {
const initAuth0 = async () => {
setLoading(true);
const auth0FromHook = await createAuth0Client(authSettings);
setAuth0Client(auth0FromHook);
const isAuthenticatedFromHook = await auth0FromHook.isAuthenticated();
if (isAuthenticatedFromHook) {
const user = await auth0FromHook.getUser();
setUser(user);
}
setIsAuthenticated(isAuthenticatedFromHook);
setLoading(false);
};
initAuth0();
}, []);
...
return (
<Auth0Context.Provider
...
</Auth0Context.Provider>
);
};
We've put the logic in a nested initAuth0 function and invoked this because the logic is asynchronous.
We use the createAuth0Client function from Auth0 to create the Auth0 client instance. We pass in some settings using an authSettings variable, which is located in a file called AppSettings.ts. We'll change these settings later in this chapter to reference our specific Auth0 instance.
We call the isAuthenticated function in the Auth0 client to determine whether the user is authenticated and set our isAuthenticated state value. If the user is authenticated, we call the getUser function in the Auth0 client to get the user profile and set our user state.
const initAuth0 = async () => {
setLoading(true);
const auth0FromHook = await createAuth0Client(authSettings);
setAuth0Client(auth0FromHook);
if (
window.location.pathname === '/signin-callback' &&
window.location.search.indexOf('code=') > -1
) {
await auth0FromHook.handleRedirectCallback();
window.location.replace(window.location.origin);
}
const isAuthenticatedFromHook = await auth0FromHook.isAuthenticated();
if (isAuthenticatedFromHook) {
const user = await auth0FromHook.getUser();
setUser(user);
}
setIsAuthenticated(isAuthenticatedFromHook);
setLoading(false);
};
We call the Auth0 client handleRedirectCallback function, which will parse the URL, extract the code, and store it in a variable internally. We also redirect the user to the home page after this has been completed.
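The query-string check in initAuth0 uses indexOf. As an aside, URLSearchParams (widely supported in modern browsers) parses the query string properly and is a sketch of a more robust way to inspect it, shown here with a hypothetical callback query string:

```typescript
// A hypothetical callback query string from Auth0
const search = '?code=abc123&state=xyz';

const params = new URLSearchParams(search);
console.log(params.has('code')); // true
console.log(params.get('code')); // abc123
```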
export const getAccessToken = async () => {
const auth0FromHook = await createAuth0Client(authSettings);
const accessToken = await auth0FromHook.getTokenSilently();
return accessToken;
};
This calls the Auth0 client getTokenSilently function, which will, in turn, make a request to the Auth0 token endpoint to get the access token securely.
We will use our getAccessToken function later in this chapter to make REST API requests to protected resources.
import { AuthProvider } from './Auth';
function App() {
return (
<AuthProvider>
<BrowserRouter>
...
</BrowserRouter>
</AuthProvider>
);
};
That's our central authentication context complete. We'll use this extensively throughout this chapter.
The App component still isn't compiling because of the missing action prop on the SignInPage and SignOutPage components. We'll resolve these issues next.
Let's implement the sign-in page in SignInPage.tsx as follows:
import { StatusText } from './Styles';
import { useAuth } from './Auth';
StatusText is a shared style we are going to use when we inform the user that we are redirecting to and from Auth0. useAuth is the custom Hook we implemented earlier that will give us access to the authentication context.
type SigninAction = 'signin' | 'signin-callback';
interface Props {
action: SigninAction;
}
The component takes in an action prop that gives the current stage of the sign-in process.
export const SignInPage = ({ action }: Props) => {
};
export const SignInPage = ({ action }: Props) => {
const { signIn } = useAuth();
};
export const SignInPage = ({ action }: Props) => {
const { signIn } = useAuth();
if (action === 'signin') {
signIn();
}
};
Our final task is to return the JSX:
export const SignInPage = ({ action }: Props) => {
const { signIn } = useAuth();
if (action === 'signin') {
signIn();
}
return (
<Page title="Sign In">
<StatusText>Signing in ...</StatusText>
</Page>
);
};
We render the page informing the user that the sign-in process is taking place.
Let's implement the sign-out page in SignOutPage.tsx, which is similar in structure to the SignInPage component. Replace the current content in SignOutPage.tsx with the following code:
import React from 'react';
import { Page } from './Page';
import { StatusText } from './Styles';
import { useAuth } from './Auth';
type SignoutAction = 'signout' | 'signout-callback';
interface Props {
action: SignoutAction;
}
export const SignOutPage = ({ action }: Props) => {
let message = 'Signing out ...';
const { signOut } = useAuth();
switch (action) {
case 'signout':
signOut();
break;
case 'signout-callback':
message = 'You successfully signed out!';
break;
}
return (
<Page title="Sign out">
<StatusText>{message}</StatusText>
</Page>
);
};
A slight difference is that when the component receives the callback, it will stay in view with a message informing the user that they have been successfully signed out.
We are nearly ready to give the sign-in and sign-out processes a try. First, we need to configure our frontend to interact with the correct Auth0 tenant. These are configured in AppSettings.ts:
export const authSettings = {
domain: 'your-domain',
client_id: 'your-clientid',
redirect_uri: window.location.origin + '/signin-callback',
scope: 'openid profile QandAAPI email',
audience: 'https://qanda',
};
We need to substitute our specific Auth0 domain and client ID in this settings file. We discovered where to find these details from Auth0 in the last chapter but, as a reminder, here are the steps:

Figure 12.7 – Auth0 client ID

Figure 12.8 – Auth0 domain
Important Note
Figure 12.9 – Auth0 API audience
We are now ready to try the sign-in and sign-out processes.
All of the pieces are in place now to give the sign-in and sign-out processes a try. Let's carry out the following steps:

Figure 12.10 – Adding a new user in Auth0

Figure 12.11 – Auth0 login form

Figure 12.12 – App authorization in Auth0
This authorization process happens because this is the first login for this user.
Figure 12.13 – Sign-out confirmation message
That completes the sign-in and sign-out process implementations.
At the moment, all of the options in our app are visible regardless of whether the user is authenticated. However, certain options will only function correctly if the user is signed in. For example, if we try submitting a question while not signed in, it will fail. We'll clean this up in the next section.
In this section, we are going to only make relevant options visible for authenticated users. We will do this using the isAuthenticated flag from the useAuth Hook we created in the last section.
We will start by showing either the Sign In option or the Sign Out option in the Header component. We will then only allow authenticated users to ask questions in the HomePage component and answer a question in the QuestionPage component. As part of this work, we will create a reusable AuthorizedPage component that can be used on page components to ensure that they are only accessed by authenticated users.
At the moment, the Header component shows the Sign In and Sign Out options, but the Sign In option is only relevant if the user hasn't signed in. The Sign Out option is only relevant if the user is authenticated. Let's clean this up in Header.tsx in the following steps:
import { useAuth } from './Auth';
export const Header = () => {
...
const { isAuthenticated, user, loading } = useAuth();
return (
...
);
};
<div ...>
<Link ...>
Q & A
</Link>
<form onSubmit={handleSearchSubmit}>
...
</form>
<div>
{!loading &&
(isAuthenticated ? (
<div>
<span>{user!.name}</span>
<Link to="/signout" css={buttonStyle}>
<UserIcon />
<span>Sign Out</span>
</Link>
</div>
) : (
<Link to="/signin" css={buttonStyle}>
<UserIcon />
<span>Sign In</span>
</Link>
))}
</div>
</div>
We use a short circuit expression to ensure that the Sign In and Sign Out buttons can't be accessed while the context is loading. We use a ternary expression to show the username and the Sign Out button if the user is authenticated and the Sign In button if not.

Figure 12.14 – Header for an unauthenticated user
Figure 12.15 – Header for an authenticated user
That completes the changes needed in the Header component.
Next, we will use our useAuth Hook again to control whether users can ask questions.
Let's move to the HomePage component and only show the Ask a question button if the user is authenticated:
import { useAuth } from './Auth';
export const HomePage = () => {
...
const { isAuthenticated } = useAuth();
return (
...
);
};
<Page>
<div
...
>
<PageTitle>Unanswered Questions</PageTitle>
{isAuthenticated && (
<PrimaryButton onClick={handleAskQuestionClick}>
Ask a question
</PrimaryButton>
)}
</div>
...
</Page>
That completes the changes to the home page. However, the user could still get to the ask page by manually putting the relevant path in the browser. Let's protect it with a reusable AuthorizedPage component. Create a new file called AuthorizedPage.tsx with the following content:
import React from 'react';
import { Page } from './Page';
import { useAuth } from './Auth';
export const AuthorizedPage: React.FC = ({ children }) => {
const { isAuthenticated } = useAuth();
if (isAuthenticated) {
return <>{children}</>;
} else {
return (
<Page title="You do not have access to this page">
{null}
</Page>
);
}
};
We use our useAuth Hook and render the child components if the user is authenticated. If the user isn't authenticated, we inform them that they don't have access to the page.
import { AuthorizedPage } from './AuthorizedPage';
<Route
path="ask"
element={
<React.Suspense
...
>
<AuthorizedPage>
<AskPage />
</AuthorizedPage>
</React.Suspense>
}
/>

Figure 12.16 – No Ask button for the unauthenticated user
We'll see that there is no button to ask a question, as we expected.

Figure 12.17 – Protected page for the unauthenticated user
We are informed that we don't have permission to view the page, as we expected.
Figure 12.18 – Ask button for the authenticated user
The Ask a question button is now available, as we expected.
That concludes the changes we need to make for asking a question.
Let's focus on the QuestionPage component now and only allow an answer to be submitted if the user is authenticated:
import { useAuth } from './Auth';
export const QuestionPage: ... = ( ... ) => {
...
const { isAuthenticated } = useAuth();
return (
...
);
};
<AnswerList data={question.answers} />
{isAuthenticated && (
<form
...
>
...
</form>
)}

Figure 12.19 – No answer form for the unauthenticated user
There is no answer form, as we expected.
Figure 12.20 – Answer form for the authenticated user
The answer form is available, as we expected.
That completes the changes to the question page.
In the next section, we are going to interact with the REST API endpoints that require an authenticated user to perform tasks such as submitting a question.
In this section, we'll properly wire up posting questions and answers to our REST API. As part of this work, we will enhance our http function to use a bearer token from Auth0 in the HTTP request. This is because the endpoints for posting questions and answers are protected in the REST API and require a valid bearer token.
All of our changes will be in QuestionsData.ts—our user interface components will be unchanged.
We are going to change the implementation for posting a question to use an access token from Auth0:
import { getAccessToken } from './Auth';
export const postQuestion = async (
question: PostQuestionData,
): Promise<QuestionData | undefined> => {
const accessToken = await getAccessToken();
const result = await http<
QuestionDataFromServer,
PostQuestionData
>({
path: '/questions',
method: 'post',
body: question,
accessToken,
});
if (result.ok && result.body) {
return mapQuestionFromServer(
result.body,
);
} else {
return undefined;
}
};
We get the access token from Auth0 and pass it into the generic http function. If the request is successful, we return the question from the response body with the correct type for the created dates; otherwise, we return undefined.
export interface HttpRequest<REQB> {
path: string;
method?: string;
body?: REQB;
accessToken?: string;
}
We've started by adding the HTTP method, body, and access token to the request interface.
export const http = async <
RESB,
REQB = undefined
>(
config: HttpRequest<REQB>,
): Promise<HttpResponse<RESB>> => {
const request = new Request(
`${webAPIUrl}${config.path}`,
{
method: config.method || 'get',
headers: {
'Content-Type': 'application/json',
},
body: config.body
? JSON.stringify(config.body)
: undefined,
}
);
...
};
We are providing a second argument to the Request constructor that defines the HTTP request method, headers, and body.
Notice that we convert the request body into a string using JSON.stringify. This is because the fetch function doesn't convert the request body into a string for us.
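A quick sketch of this, with hypothetical data and using the global Request constructor:

```typescript
const payload = { firstName: 'Fred', surname: 'Smith' };

// Passing the object to fetch directly would send "[object Object]";
// JSON.stringify produces the JSON text the REST API expects
const serialized = JSON.stringify(payload);
console.log(serialized); // {"firstName":"Fred","surname":"Smith"}

const request = new Request('https://example.org/api/person', {
  method: 'post',
  headers: { 'Content-Type': 'application/json' },
});
console.log(request.method); // POST - known method names are normalized
console.log(request.headers.get('Content-Type')); // application/json
```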
export const http = async <
RESB,
REQB = undefined
>(
config: HttpRequest<REQB>,
): Promise<HttpResponse<RESB>> => {
const request = new Request(
...
);
if (config.accessToken) {
request.headers.set(
'authorization',
`bearer ${config.accessToken}`,
);
}
...
};
If the access token is provided, we add it to an HTTP request header called authorization, with the word bearer and a space before the token.
Important Note
authorization is a standard HTTP header that contains credentials to authenticate a user. The value is set to the type of authentication followed by a space, followed by the credentials. So, the word bearer in our case denotes the type of authentication.
Figure 12.21 – Bearer token included in the HTTP request
The question is saved successfully, as we expected. We can also see the access token sent in the HTTP authorization header with the request.
One of the things we couldn't check in the last chapter was whether the correct user was being saved against the question. If we have a look at the question in the database, we'll see the correct user ID and user name stored against the question:
Figure 12.22 – Correct user ID and username stored with the question
That completes posting a question. No changes are required to the AskPage component.
We are going to change the implementation for posting an answer to use the access token and our generic http function. Let's revise the implementation of the postAnswer function to the following:
export const postAnswer = async (
answer: PostAnswerData,
): Promise<AnswerData | undefined> => {
const accessToken = await getAccessToken();
const result = await http<
AnswerData,
PostAnswerData
>({
path: '/questions/answer',
method: 'post',
body: answer,
accessToken,
});
if (result.ok) {
return result.body;
} else {
return undefined;
}
};
This follows the same pattern as the postQuestion function, getting the access token from Auth0 and making the HTTP POST request with the JWT using the http function.
That completes the changes needed for posting an answer.
We can now remove the questions array mock data from QuestionsData.ts as this is no longer used. The wait function can also be removed.
This completes this section on interacting with protected REST API endpoints.
There is a slight problem in the page components at the moment when they request data and set it in the state. The problem is that if the user navigates away from the page while the data is still being fetched, the state will attempt to be set on a component that no longer exists. We are going to resolve this issue on the HomePage, QuestionPage, and SearchPage components by using a cancelled flag that is set when the components are unmounted. We will check this flag after the data is returned and the state is about to be set.
Let's carry out the following steps:
React.useEffect(() => {
let cancelled = false;
const doGetUnansweredQuestions = async () => {
const unansweredQuestions = await getUnansweredQuestions();
if (!cancelled) {
setQuestions(unansweredQuestions);
setQuestionsLoading(false);
}
};
doGetUnansweredQuestions();
return () => {
cancelled = true;
};
}, []);
We use a cancelled variable to track whether the user has navigated away from the page, and we don't set any state if this is true. We will know whether the user has navigated away from the page because the return function will be called, which sets the cancelled flag.
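The mechanics can be sketched outside React, with a counter standing in for the state setter:

```typescript
let cancelled = false;
let stateUpdates = 0;

// Stands in for the code that runs after the data has been fetched
const setStateIfStillMounted = () => {
  // The closure reads the latest value of cancelled, not the value
  // it had when the function was created
  if (!cancelled) {
    stateUpdates += 1;
  }
};

setStateIfStillMounted(); // component still mounted: state is set
cancelled = true; // the cleanup function ran because the user navigated away
setStateIfStillMounted(); // skipped: no state update after unmounting

console.log(stateUpdates); // 1
```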
React.useEffect(() => {
let cancelled = false;
const doGetQuestion = async (questionId: number) =>
{
const foundQuestion = await
getQuestion(questionId);
if (!cancelled) {
setQuestion(foundQuestion);
}
};
...
return () => {
cancelled = true;
};
}, [questionId]);
React.useEffect(() => {
let cancelled = false;
const doSearch = async (criteria: string) => {
const foundResults = await
searchQuestions(criteria);
if (!cancelled) {
setQuestions(foundResults);
}
};
doSearch(search);
return () => {
cancelled = true;
};
}, [search]);
This completes the changes to the page components. The data fetching process within the page components is now a little more robust.
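The cancelled-flag technique can also be packaged as a small reusable helper; the following is a sketch in TypeScript (the makeCancellable name and Cancellable interface are illustrative, not from the book's code):

```typescript
// Wraps a promise so that late results can be ignored after cancellation,
// mirroring the cancelled flag used in the useEffect cleanups above.
interface Cancellable<T> {
  promise: Promise<T | undefined>;
  cancel: () => void;
}

const makeCancellable = <T>(promise: Promise<T>): Cancellable<T> => {
  let cancelled = false;
  return {
    // Resolve to undefined instead of the value once cancelled,
    // so the caller can skip setting state
    promise: promise.then((value) => (cancelled ? undefined : value)),
    cancel: () => {
      cancelled = true;
    },
  };
};
```

A component's effect could create one of these per request and call cancel in its cleanup, checking for undefined before setting state.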
In this chapter, we learned that the browser has a handy fetch function that allows us to interact with REST APIs. This allows us to specify HTTP headers such as authorization, which we use to supply the user's access token in order to access the protected endpoints.
Leveraging the standard Auth0 JavaScript library allows single-page applications to interact with the Auth0 identity provider. It makes all of the required requests and redirects to Auth0 in a secure manner.
Using the React context to share information about the user to components allows them to render information and options that are only relevant to the user.
The AuthProvider and AuthorizedPage components we built in this chapter are generic components that could be used in other apps to help to implement frontend authorization logic.
Our app is very nearly complete now. In the next chapter, we are going to put the frontend and backend through their paces with some automated tests.
The following questions will test our knowledge of what we have just learned:
fetch('http://localhost:17525/api/person', {
method: 'post',
headers: {
'Content-Type': 'application/json',
},
body: {
firstName: 'Fred',
surname: 'Smith'
}
})
const res = await fetch('http://localhost:17525/api/person/1');
console.log('firstName', res.body.firstName);
fetch('http://localhost:17525/api/person/21312')
.then(res => res.json())
.catch(res => {
if (res.status === 404) {
console.log('person not found')
}
});
fetch('http://localhost:17525/api/person/1', {
method: 'delete',
headers: {
'Content-Type': 'application/json',
'authorization': jwt
}
});
fetch('http://localhost:17525/api/person', {
method: 'post',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
firstName: 'Fred',
surname: 'Smith'
})
})
const res = await fetch('http://localhost:17525/api/person/1');
const body = await res.json();
console.log('firstName', body.firstName);
fetch('http://localhost:17525/api/person/21312')
.then(res => {
if (res.status === 404) {
console.log('person not found')
} else {
return res.json();
}
});
fetch('http://localhost:17525/api/person/1', {
method: 'delete',
headers: {
'Content-Type': 'application/json',
'authorization': `bearer ${jwt}`
}
});
import React from 'react';
import { useAuth } from './Auth';
export const AuthorizedElement: React.FC = ({ children }) => {
const auth = useAuth();
if (auth.isAuthenticated) {
return <>{children}</>;
} else {
return null;
}
};
The component would be consumed as follows:
<AuthorizedElement>
<PrimaryButton ...>
Ask a question
</PrimaryButton>
</AuthorizedElement>
Here are some useful links to learn more about the topics covered in this chapter:
In this last section, we will add automated tests to both the ASP.NET Core and React apps. We will deploy the app to Azure using Visual Studio and Visual Studio Code before fully automating the deployment by implementing build and release pipelines in Azure DevOps.
This section comprises the following chapters:
Now, it's time to get our QandA app ready for production. In this chapter, we are going to add automated tests to the frontend and backend of our app, which will give us the confidence to take the next step: moving our app into production.
First, we will focus on the backend and use xUnit to implement unit tests on pure functions with no dependencies. Then, we'll move on to testing our QuestionsController, which does have dependencies. We will also learn how to use Moq to replace our real implementation of dependencies with a fake implementation.
Next, we will turn our attention to testing the frontend of our app with the popular Jest tool. We will learn how to implement unit tests on pure functions and integration tests on React components by leveraging the fantastic React Testing Library.
Then, we will learn how to implement end-to-end tests with Cypress. We'll use this to test a key path through the app where the frontend and backend will be working together.
By the end of this chapter, our tests will give us more confidence that we are not breaking existing functionality when developing and shipping new versions of our app.
In this chapter, we'll cover the following topics:
Let's get started!
We will need the following tools and services in this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore the code for a chapter, download the source code repository and open the relevant folder in your editor. If the code is frontend code, entering npm install in the Terminal will restore the dependencies. You will also need to substitute your Auth0 tenant ID and client ID in the appsettings.json file in the backend project, as well as the AppSettings.ts file in the frontend project.
Check out the following video to see the code in action: https://bit.ly/3h3Aib6.
A robust suite of automated tests helps us deliver software faster without sacrificing its quality. There are various types of test, and each type has its own benefits and challenges. In this section, we are going to understand the different types of test and the benefits they bring to a single-page application.
The following diagram shows the three different types of test:
Figure 13.1 – Types of test
In the following subsections, we will examine each type of test, along with their pros and cons.
Unit tests verify that individual and isolated parts of an app work as expected. These tests generally execute very fast, thus giving us a very tight feedback loop so that we know the part of the app that we are developing is working correctly.
These tests can be quick to implement, but this is not necessarily the case if we need to mock out the dependencies of the unit we are testing. This is often the case when unit testing a React frontend, since a true unit test on a component needs to mock out any child components that are referenced in its JSX.
Perhaps the biggest downside of these tests is that they give us the least amount of confidence that the app as a whole is working correctly. We can have a large unit test suite that covers all the different parts of our app, but this is no guarantee that all the parts work together as expected.
The following is an example of a unit test being performed on the Increment method of a Counter class:
[Fact]
public void Increment_WhenCurrentCountIs1_ShouldReturn2()
{
var counter = new Counter(1);
var result = counter.Increment();
Assert.Equal(2, result);
}
There are no external dependencies on the Counter class or the Increment method, so this is a great candidate for a unit test.
End-to-end tests verify that key paths work together as expected. No parts of the app are isolated and mocked away. These tests run a fully functioning app just like a user would, so this gives us the maximum amount of confidence that our app is functioning correctly.
These tests are slow to execute, though, which can delay the feedback loop during development; they're also the most expensive to write and maintain. This is because everything that the tests rely on, such as the data in the database, needs to be consistent each time the tests are executed, which is a challenge when we implement multiple tests that have different data requirements.
The following is a code snippet from an end-to-end test for capturing a subscription email address:
cy.findByLabelText('Email')
.type('carl.rippon@googlemail.com')
.should('have.value', 'carl.rippon@googlemail.com');
cy.get('form').submit();
cy.contains('Thanks for subscribing!');
The statements drive interactions on the web page and check the content of the elements on the page, which are updated along the way.
Integration tests verify that several parts of an app work together correctly. They give us more confidence than unit tests in terms of ensuring that the app as a whole is working as expected. These tests provide the most scope in terms of what is tested because of the many app part combinations that we can choose to test.
These tests are generally quick to execute because slow components such as database and network requests are often mocked out. The time it takes to write and maintain these tests is also short.
For single-page applications, the Return on Investment (ROI) of integration tests is arguably greater than the other two testing types if we choose our tests wisely. This is why the relevant box in the preceding diagram is bigger than other testing types.
The following is an example of an integration test being performed on a React Card component:
test('When the Card component is rendered with a title prop, it should contain the correct title', () => {
const { queryByText } = render(
<Card title="Title test" />
);
const titleText = queryByText('Title test');
expect(titleText).not.toBeNull();
});
The test verifies that passing the title prop results in the correct text being rendered. The Card component may contain child components, which will be executed and rendered in the test. This is why this is classed as an integration test rather than a unit test.
Now that we understand the different types of test, we are going to start implementing them on our QandA app. We'll start by unit testing the .NET backend.
In this section, we are going to implement some backend unit tests on our question controller using a library called xUnit. Before we do this, we are going to become familiar with xUnit by implementing some unit tests on a class with no dependencies.
In this section, we are going to create a new project in our backend Visual Studio solution and start to implement simple unit tests to get comfortable with xUnit, which is the tool we are going to use to run our backend tests. So, let's open our backend project and carry out the following steps:

Figure 13.2 – Creating a new xUnit project
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net5.0</TargetFramework>
...
</PropertyGroup>
...
</Project>
using System;
namespace BackendTests
{
public class Calc
{
public static decimal Add(decimal a, decimal b)
{
return a + b;
}
}
}
The class contains a method called Add, which simply adds two numbers together that are passed in its parameters. Add is a pure function, which means the return value is always consistent for a given set of parameters and it doesn't give off any side effects. Pure functions are super easy to test, as we'll see next.
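A TypeScript counterpart makes the same point: a pure function's output depends only on its inputs, so a test needs no setup or mocking. (The add and addToTotal functions below are illustrative, not from the book.)

```typescript
// A pure function: the result depends only on the inputs and there
// are no side effects, so no mocks or setup are needed to test it.
const add = (a: number, b: number): number => a + b;

// By contrast, a function that reads and writes shared mutable state is
// impure; a test would need to arrange that state before every run.
let total = 0;
const addToTotal = (a: number): number => (total += a);
```

Testing add is a single call and a single check, whereas each test of addToTotal must first reset total to a known value.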
using Xunit;
namespace BackendTests
{
public class CalcTests
{
[Fact]
public void
Add_When2Integers_ShouldReturnCorrectInteger()
{
// TODO - call the Calc.Add method with 2
// integers
// TODO - check the result is as expected
}
}
}
We have named our test method Add_When2Integers_ShouldReturnCorrectInteger.
Important Information
It is useful to have a good naming convention for tests. When we look at a failed test report, we can start to get an understanding of the problem immediately if the name of the test describes what is being tested. In this case, the name starts with the method we are testing, followed by a brief description of the conditions for the test and what we expect to happen.
Note that the test method is decorated with the Fact attribute.
Important Information
The Fact attribute denotes that the method is a unit test for xUnit. Another attribute that denotes a unit test is called Theory. This can be used to feed the method a range of parameter values.
[Fact]
public void Add_When2Integers_ShouldReturnCorrectInteger()
{
var result = Calc.Add(1, 1);
Assert.Equal(2, result);
}
We call the method we are testing and put the return value in a result variable. Then, we use the Assert class from xUnit and its Equal method to check that the result is equal to 2.

Figure 13.3 – Debugging a test
Figure 13.4 – Test result
As we expected, the test passes. Congratulations – you have just created your first unit test!
We used the Equal method in the Assert class in this test. The following are some other useful methods we can use in this class:
True: checks that the value passed in is true
False: checks that the value passed in is false
Null: checks that the object passed in is null
NotNull: checks that the object passed in is not null
Contains: checks that an item is contained within a collection
Throws: checks that the code passed in throws a specific exception
Now, we are starting to understand how to write unit tests. We haven't written any tests on our QandA app yet, but we will do so next.
In this section, we are going to create tests for some question controller actions.
Our API controller has dependencies on a cache and a data repository. We don't want our tests to execute the real cache and data repository because we need the data in them to be predictable. This helps us get predictable results that we can check. In addition, if the tests ran against the real database, test execution would be much slower. So, we are going to use a library called Moq to help us replace the real cache and data repository with fake implementations that give predictable results.
Let's get started:

Figure 13.5 – Adding a project reference

Figure 13.6 – Adding a reference to the QandA project
Figure 13.7 – Installing Moq
The BackendTests project is now set up, ready for our first test to be implemented.
Follow these steps to implement a couple of tests on the GetQuestions method:
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Xunit;
using Moq;
using QandA.Controllers;
using QandA.Data;
using QandA.Data.Models;
namespace BackendTests
{
public class QuestionsControllerTests
{
}
}
[Fact]
public async void GetQuestions_WhenNoParameters_ReturnsAllQuestions()
{
var mockQuestions = new
List<QuestionGetManyResponse>();
for (int i = 1; i <= 10; i++)
{
mockQuestions.Add(new QuestionGetManyResponse
{
QuestionId = i,
Title = $"Test title {i}",
Content = $"Test content {i}",
UserName = "User1",
Answers = new List<AnswerGetResponse>()
});
}
}
Notice that the method is flagged as asynchronous with the async keyword because the action method we are testing is asynchronous.
[Fact]
public async void GetQuestions_WhenNoParameters_ReturnsAllQuestions()
{
...
var mockDataRepository = new
Mock<IDataRepository>();
mockDataRepository
.Setup(repo => repo.GetQuestions())
.Returns(() => Task.FromResult(mockQuestions.
AsEnumerable()));
}
We can create a mock object from the IDataRepository interface using the Mock class from Moq. We can then use the Setup and Returns methods on the mock object to define that the GetQuestions method should return our mock questions. The method we are testing is asynchronous, so we need to wrap the mock questions with Task.FromResult in the mock result.
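Moq's Setup/Returns/Verify workflow corresponds to a pattern we could hand-roll; here is a TypeScript sketch of the same idea (the QuestionRepository and FakeQuestionRepository names are illustrative, not from the book's code):

```typescript
// A hand-rolled fake that mirrors Moq: canned return data (Setup/Returns)
// plus a call counter for later verification (Verify with Times.Once()).
interface QuestionSummary {
  questionId: number;
  title: string;
}

interface QuestionRepository {
  getQuestions(): Promise<QuestionSummary[]>;
}

class FakeQuestionRepository implements QuestionRepository {
  public callCount = 0; // inspected by the test, like mock.Verify

  constructor(private readonly cannedQuestions: QuestionSummary[]) {}

  getQuestions(): Promise<QuestionSummary[]> {
    this.callCount += 1;
    // Wrap the canned data in a resolved promise, analogous to Task.FromResult
    return Promise.resolve(this.cannedQuestions);
  }
}
```

A controller under test would receive the fake through its constructor; afterwards, the test asserts on callCount and on the returned data.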
[Fact]
public async void GetQuestions_WhenNoParameters_ReturnsAllQuestions()
{
...
var mockConfigurationRoot = new
Mock<IConfigurationRoot>();
mockConfigurationRoot.SetupGet(config =>
config[It.IsAny<string>()]).Returns("some setting");
}
The preceding code will return any string when appsettings.json is read, which is fine for our test.
[Fact]
public async void GetQuestions_WhenNoParameters_ReturnsAllQuestions()
{
...
var questionsController = new QuestionsController(
mockDataRepository.Object,
null,
null,
mockConfigurationRoot.Object
);
}
The Object property on the mock data repository definition gives us an instance of the mock data repository to use.
Notice that we can pass in null for cache and HTTP client factory dependencies. This is because they are not used in the action method implementation we are testing.
[Fact]
public async void GetQuestions_WhenNoParameters_ReturnsAllQuestions()
{
...
var result = await
questionsController.GetQuestions(null, false);
}
We pass null in as the search parameter and false as the includeAnswers parameter. The other parameters are optional, so we don't pass these in.
[Fact]
public async void GetQuestions_WhenNoParameters_ReturnsAllQuestions()
{
...
Assert.Equal(10, result.Count());
mockDataRepository.Verify(
mock => mock.GetQuestions(),
Times.Once()
);
}
Here, we checked that 10 items are returned.
We also checked that the GetQuestions method in the data repository is called exactly once.

Figure 13.8 – Running a test in Test Explorer
The test passes, as we expected.
[Fact]
public async void GetQuestions_WhenHaveSearchParameter_ReturnsCorrectQuestions()
{
var mockQuestions = new List<QuestionGetManyResponse>();
mockQuestions.Add(new QuestionGetManyResponse
{
QuestionId = 1,
Title = "Test",
Content = "Test content",
UserName = "User1",
Answers = new List<AnswerGetResponse>()
});
var mockDataRepository = new
Mock<IDataRepository>();
mockDataRepository
.Setup(repo =>
repo.GetQuestionsBySearchWithPaging("Test", 1,
20))
.Returns(() =>
Task.FromResult(mockQuestions.AsEnumerable()));
var mockConfigurationRoot = new
Mock<IConfigurationRoot>();
mockConfigurationRoot.SetupGet(config =>
config[It.IsAny<string>()]).Returns("some setting");
var questionsController = new QuestionsController(
mockDataRepository.Object,
null,
null,
mockConfigurationRoot.Object
);
var result = await questionsController.GetQuestions("Test", false);
Assert.Single(result);
mockDataRepository.Verify(mock =>
mock.GetQuestionsBySearchWithPaging("Test", 1,
20),
Times.Once());
}
This follows the same pattern as the previous test, but this time, we're mocking the GetQuestionsBySearchWithPaging method in the data repository and checking that this is called. If we run the test, it will pass as expected.
That completes the tests on the GetQuestions method.
Follow these steps to implement a couple of tests on the GetQuestion method:
[Fact]
public async void GetQuestion_WhenQuestionNotFound_Returns404()
{
var mockDataRepository = new
Mock<IDataRepository>();
mockDataRepository
.Setup(repo => repo.GetQuestion(1))
.Returns(() => Task.FromResult(default(QuestionGetSingleResponse)));
var mockQuestionCache = new Mock<IQuestionCache>();
mockQuestionCache
.Setup(cache => cache.Get(1))
.Returns(() => null);
var mockConfigurationRoot = new
Mock<IConfigurationRoot>();
mockConfigurationRoot.SetupGet(config =>
config[It.IsAny<string>()]).Returns("some setting");
var questionsController = new QuestionsController(
mockDataRepository.Object,
mockQuestionCache.Object,
null,
mockConfigurationRoot.Object
);
var result = await
questionsController.GetQuestion(1);
var actionResult =
Assert.IsType<
ActionResult<QuestionGetSingleResponse>
>(result);
Assert.IsType<NotFoundResult>(actionResult.Result);
}
This follows the same pattern as the previous tests. One difference is that we mock the cache, because it is used in the GetQuestion method. Our mock will return null from the fake cache, which is what we expect when the question isn't in the cache.
Here, we checked that the result is of the NotFoundResult type.
[Fact]
public async void GetQuestion_WhenQuestionIsFound_ReturnsQuestion()
{
var mockQuestion = new QuestionGetSingleResponse
{
QuestionId = 1,
Title = "test"
};
var mockDataRepository = new
Mock<IDataRepository>();
mockDataRepository
.Setup(repo => repo.GetQuestion(1))
.Returns(() => Task.FromResult(mockQuestion));
var mockQuestionCache = new Mock<IQuestionCache>();
mockQuestionCache
.Setup(cache => cache.Get(1))
.Returns(() => mockQuestion);
var mockConfigurationRoot = new
Mock<IConfigurationRoot>();
mockConfigurationRoot.SetupGet(config =>
config[It.IsAny<string>()]).Returns("some setting");
var questionsController = new QuestionsController(
mockDataRepository.Object,
mockQuestionCache.Object,
null,
mockConfigurationRoot.Object
);
var result = await
questionsController.GetQuestion(1);
var actionResult =
Assert.IsType<
ActionResult<QuestionGetSingleResponse>
>(result);
var questionResult =
Assert.IsType<QuestionGetSingleResponse>(actionResult.
Value);
Assert.Equal(1, questionResult.QuestionId);
}
This time, we checked that the result is of the QuestionGetSingleResponse type and that the correct question is returned by checking the question ID.
That completes the tests we are going to perform on our GetQuestion action method.
The same approach and pattern can be used to add tests for controller logic we haven't covered yet. We can do this using Moq, which mocks out any dependencies that the method relies on. In the next section, we'll start to implement tests on the frontend.
In this section, we are going to turn our attention to creating automated tests for the frontend with Jest. Jest is the de facto testing tool in the React community and is maintained by Facebook. Jest is included in Create React App (CRA) projects, which means that it has already been installed and configured in our project.
We are going to start by testing a simple function so that we can get familiar with Jest before moving on to testing a React component.
We'll start to get familiar with Jest by adding some unit tests to the mapQuestionFromServer function in QuestionsData.ts. So, let's open our frontend project in Visual Studio Code and carry out the following steps:
import { mapQuestionFromServer } from './QuestionsData';
test('When mapQuestionFromServer is called with question, created should be turned into a Date', () => {
});
Notice that the extension of the file is test.ts.
Important Information
The test.ts extension is important because Jest automatically looks for files with this extension when searching for tests to execute. Note that if our tests contained JSX, we would need to use the test.tsx extension.
The test function in Jest takes in two parameters: the first is the name of the test, and the second is an arrow function that performs the test and its assertions.
The test is going to check that mapQuestionFromServer functions correctly, converting the created string property on a question object into a Date.
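For reference, a plausible implementation of mapQuestionFromServer consistent with the tests below might look like this (the interfaces are assumptions inferred from context, not the book's exact code):

```typescript
// Sketch of a server-to-client mapper like mapQuestionFromServer.
// JSON carries dates as ISO strings, so the mapper parses them into Dates.
interface AnswerDataFromServer {
  answerId: number;
  content: string;
  userName: string;
  created: string; // ISO date string from JSON
}
interface QuestionDataFromServer {
  questionId: number;
  title: string;
  content: string;
  userName: string;
  created: string;
  answers: AnswerDataFromServer[];
}

const mapQuestionFromServer = (question: QuestionDataFromServer) => ({
  ...question,
  // Parse the question's created string into a real Date
  created: new Date(question.created),
  answers: question.answers.map((answer) => ({
    ...answer,
    created: new Date(answer.created),
  })),
});
```

The function has no dependencies on React, the DOM, or the network, which is exactly why it is the easiest place to start testing.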
test('When mapQuestionFromServer is called with question, created should be turned into a Date', () => {
const result = mapQuestionFromServer({
questionId: 1,
title: "test",
content: "test",
userName: "test",
created: "2021-01-01T00:00:00.000Z",
answers: []
});
});
test('When mapQuestionFromServer is called with question, created should be turned into a Date', () => {
const result = mapQuestionFromServer({
questionId: 1,
title: "test",
content: "test",
userName: "test",
created: "2021-01-01T00:00:00.000Z",
answers: []
});
expect(result).toEqual({
questionId: 1,
title: "test",
content: "test",
userName: "test",
created: new Date(Date.UTC(2021, 0, 1, 0, 0, 0,
0)),
answers: []
});
});
We pass the result variable we are checking into the Jest expect function. Then, we chain a toEqual matcher function onto this, which checks that the result object has the same property values as the object we passed into it.
toEqual is one of many Jest matcher functions we can use to check a variable's value. The full list of functions can be found at https://jestjs.io/docs/en/expect.
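The distinction between toBe and toEqual matters for our mapped object: toBe checks reference identity (Object.is), while toEqual compares structure recursively. A plain TypeScript illustration of the difference (the object shapes are illustrative):

```typescript
// Two structurally identical objects are not the same reference, which is
// why toEqual (structural comparison) is used rather than toBe (identity).
const expected = { questionId: 1, created: new Date(Date.UTC(2021, 0, 1)) };
const actual = { questionId: 1, created: new Date(Date.UTC(2021, 0, 1)) };

const sameReference = Object.is(actual, expected); // what toBe checks
const sameStructure =
  actual.questionId === expected.questionId &&
  actual.created.getTime() === expected.created.getTime(); // roughly what toEqual checks
```

So a toBe assertion would fail on our mapper's result even though every property value matches.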
test('When mapQuestionFromServer is called with question and answers, created should be turned into a Date', () => {
const result = mapQuestionFromServer({
questionId: 1,
title: "test",
content: "test",
userName: "test",
created: "2021-01-01T00:00:00.000Z",
answers: [{
answerId: 1,
content: "test",
userName: "test",
created: "2021-01-01T00:00:00.000Z"
}]
});
expect(result).toEqual({
questionId: 1,
title: "test",
content: "test",
userName: "test",
created: new Date(Date.UTC(2021, 0, 1, 0, 0, 0,
0)),
answers: [{
answerId: 1,
content: "test",
userName: "test",
created: new Date(Date.UTC(2021, 0, 1, 0, 0, 0,
0)),
}]
});
});
> npm test
Jest will run the tests that it finds in our project and output the results:
Figure 13.9 – Jest test results
So, Jest found our two tests and they both passed – that's great news!
The mapQuestionFromServer function is straightforward to test because it has no dependencies. But how do we test a React component that has lots of dependencies, such as the browser's DOM and React itself? We'll find out in the next section.
In this section, we are going to implement tests on the Page, Question, and HomePage components. React component tests can be challenging because they have dependencies, such as the browser's DOM and sometimes HTTP requests. Due to this, we are going to leverage the React Testing Library and Jest's mocking functionality to help us implement our tests.
Carry out the following steps to test that the Page component renders correctly:
import React from 'react';
import { render, cleanup } from '@testing-library/react';
import { Page } from './Page';
test('When the Page component is rendered, it should contain the correct title and content', () => {
});
We imported React with our Page component, along with some useful functions from the React Testing Library.
The React Testing Library was installed by Create React App when we created the frontend project. This library will help us select elements that we want to check, without using internal implementation details such as element IDs or CSS class names.
test('When the Page component is rendered, it should contain the correct title and content', () => {
const { queryByText } = render(
<Page title="Title test">
<span>Test content</span>
</Page>,
);
});
We use the render function from React Testing Library to render the Page component by passing in JSX.
The render function returns various useful items. One of these items is the queryByText function, which will help us select elements that we'll use and understand in the next step.
test('When the Page component is rendered, it should contain the correct title and content', () => {
const { queryByText } = render(
<Page title="Title test">
<span>Test content</span>
</Page>,
);
const title = queryByText('Title test');
expect(title).not.toBeNull();
});
Here, we used the queryByText function from the React Testing Library, which was returned from the render function, to find the element that has "Title test" in the text's content. Notice how we are using something that the user can see (the element text) to locate the element rather than any implementation details. This means that our test won't break if implementation details such as the DOM structure or DOM IDs change.
Having located the title element, we then used Jest's expect function to check that the element was found by asserting that it is not null.
test('When the Page component is rendered, it should contain the correct title and content', () => {
const { queryByText } = render(
<Page title="Title test">
<span>Test content</span>
</Page>,
);
const title = queryByText('Title test');
expect(title).not.toBeNull();
const content = queryByText('Test content');
expect(content).not.toBeNull();
});
afterEach(cleanup);
Figure 13.10 – Jest test results
Our tests pass as expected, which makes three passing tests in total.
Carry out the following steps to test that the Question component renders correctly:
import React from 'react';
import { render, cleanup } from '@testing-library/react';
import { QuestionData } from './QuestionsData';
import { Question } from './Question';
import { BrowserRouter } from 'react-router-dom';
afterEach(cleanup);
test('When the Question component is rendered, it should contain the correct data', () => {
});
This imports all the items we need for our test. We have also implemented the cleanup function, which will run after the test.
test('When the Question component is rendered, it should contain the correct data', () => {
const question: QuestionData = {
questionId: 1,
title: 'Title test',
content: 'Content test',
userName: 'User1',
created: new Date(2019, 1, 1),
answers: [],
};
const { queryByText } = render(
<Question data={question} />,
);
});
We render the Question component using the render function by passing in a mocked data prop value.
There's a problem, though. If we run the test, we will receive an error message stating Error: useHref() may be used only in the context of a <Router> component. The problem here is that the Question component uses a Link component, which expects the Router component to be higher up in the component tree. However, it isn't present in our test.
test('When the Question component is rendered, it should contain the correct data', () => {
...
const { queryByText } = render(
<BrowserRouter>
<Question data={question} />
</BrowserRouter>
);
});
test('When the Question component is rendered, it should contain the correct data', () => {
...
const titleText = queryByText('Title test');
expect(titleText).not.toBeNull();
const contentText = queryByText('Content test');
expect(contentText).not.toBeNull();
const userText = queryByText(/User1/);
expect(userText).not.toBeNull();
const dateText = queryByText(/2019/);
expect(dateText).not.toBeNull();
});
We are using the queryByText method again here to locate rendered elements and check that the element that's been found isn't null. Notice that, when finding the element that contains the username and date, we pass in a regular expression to do a partial match.
The final component we are going to implement tests for is the HomePage component. Carry out the following steps to do so:
import React from 'react';
import { render, cleanup } from '@testing-library/react';
import { HomePage } from './HomePage';
import { BrowserRouter } from 'react-router-dom';
afterEach(cleanup);
test('When HomePage first rendered, loading indicator should show', async () => {
const { findByText } = render(
<BrowserRouter>
<HomePage />
</BrowserRouter>,
);
const loading = await findByText('Loading...');
expect(loading).not.toBeNull();
});
The test verifies that a Loading... message appears in the HomePage component when it is first rendered. We use the findByText function to wait and find the element that contains the loading text.
test('When HomePage data returned, it should render questions', async () => {
const { findByText } = render(
<BrowserRouter>
<HomePage />
</BrowserRouter>,
);
expect(await findByText('Title1 test')).toBeInTheDocument();
expect(await findByText('Title2 test')).toBeInTheDocument();
});
We use the findByText function again to wait for the questions to be rendered. We then use the toBeInTheDocument function to check that the found elements are in the document.
However, the test fails. This is because the HomePage component is making an HTTP request to get the data but there is no REST API to handle the request.
jest.mock('./QuestionsData', () => ({
getUnansweredQuestions: () => {
return Promise.resolve([
{
questionId: 1,
title: 'Title1 test',
content: 'Content2 test',
userName: 'User1',
created: new Date(2019, 1, 1),
answers: [],
},
{
questionId: 2,
title: 'Title2 test',
content: 'Content2 test',
userName: 'User2',
created: new Date(2019, 1, 1),
answers: [],
},
]);
},
}));
test('When HomePage first rendered, loading indicator should show', async () => ...
The mock function returns two questions that we use in the test assertions.
Now, the test will pass when it runs.
That completes our component tests.
As we've seen, tests on components are more challenging to write than tests on pure functions, but the React Testing Library and Jest mocks make life fairly straightforward.
In the next section, we are going to complete our test suite by implementing an end-to-end test.
Cypress is an end-to-end testing tool that works really well for single-page applications (SPAs) like ours. It can run the whole app, simulate a user interacting with it, and check the state of the user interface along the way, which makes it ideal for producing end-to-end tests on a SPA.
In this section, we are going to implement an end-to-end test for signing in and asking a question.
Cypress executes in our frontend, so let's carry out the following steps to install and configure Cypress in our frontend project:
> npm install cypress --save-dev
"scripts": {
...,
"cy:open": "cypress open"
},
> npm run cy:open
After a few seconds, Cypress will open, showing a list of example test files that have just been installed:

Figure 13.11 – Cypress example tests
These examples can be found in the cypress/integration/examples folder in our project. If we open one of these test files, we'll see that they are written in JavaScript. These examples are a great source of reference as we learn and get up to speed with Cypress.

Figure 13.12 – Test output in Cypress
We can see the tests on the left and check whether they have passed or failed with the app that is being tested on the right.

Figure 13.13 – Cypress test result step details
> npm install @testing-library/cypress --save-dev
The Cypress Testing Library is similar to the React Testing Library in that it helps us select elements to check without using internal implementation details.
import '@testing-library/cypress/add-commands';
{
"baseUrl": "http://localhost:3000",
"chromeWebSecurity": false
}
The baseUrl setting is the root URL of the app we are testing.
Our test will be using Auth0 and our app, so it will be working on two different origins. We need to disable Chrome security using the chromeWebSecurity setting to allow the test to work across different origins.
Cypress runs our app and Auth0 in an IFrame. To prevent clickjacking attacks, running in an IFrame is disabled by default in Auth0.

Figure 13.14 – Disable clickjacking protection option in Auth0
{
...,
"globals": {
"cy": true
}
}
Now, Cypress has been installed and configured so that we can implement a test on our Q and A app.
In this section, we are going to implement a test on our app using Cypress; the test signs in and then asks a question. Carry out the following steps to do so:
describe('Ask question', () => {
beforeEach(() => {
cy.visit('/');
});
it('When signed in and ask a valid question, the question should successfully save', () => {
});
});
The describe function allows us to group a collection of tests on a feature. The first parameter is the title for the group, while the second parameter is a function that contains the tests in the group.
The it function allows us to define the actual test. The first parameter is the title for the test, while the second parameter is a function that contains the steps in the test.
The beforeEach function allows us to define steps to be executed before each test runs. In our case, we are using the visit command to navigate to the root of the app. Remember that the root URL for the app is defined in the baseUrl setting in the cypress.json file.
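The execution order of these functions can be sketched with a tiny stand-in runner. This is purely illustrative: the describe, it, and beforeEach names mirror Cypress's API, but the bodies below are a hypothetical simplification, not Cypress internals.

```typescript
// Minimal sketch of describe/it/beforeEach semantics (not Cypress code).
type TestFn = () => void;

const log: string[] = [];
const beforeEachFns: TestFn[] = [];
const tests: { title: string; fn: TestFn }[] = [];

function describe(title: string, body: () => void) {
  body(); // registering hooks and tests for this group
}
function beforeEach(fn: TestFn) {
  beforeEachFns.push(fn);
}
function it(title: string, fn: TestFn) {
  tests.push({ title, fn });
}
function run() {
  for (const t of tests) {
    beforeEachFns.forEach((fn) => fn()); // hooks run before EVERY test
    t.fn();
  }
}

describe('Ask question', () => {
  beforeEach(() => log.push('visit /'));
  it('test one', () => log.push('test one'));
  it('test two', () => log.push('test two'));
});
run();
// log order: visit /, test one, visit /, test two
```

The key point the sketch shows is that the beforeEach hook reruns before every it block, which is why each of our tests starts from a freshly visited root page.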
it('When signed in and ask a valid question, the question should successfully save', () => {
cy.contains('Q & A');
});
Here, we are checking that the page contains the Q & A text using the contains Cypress command. We can access Cypress commands from the global cy object.
Cypress commands are built to fail if they don't find what they expect to find. Due to this, we don't need to add an assert statement. Neat!
> npm run cy:open

Figure 13.15 – QandA test in Cypress

Figure 13.16 – Our test passing in Cypress
The test successfully executes and passes. We'll leave the test runner open because it will automatically rerun as we implement and save our test.
cy.contains('UNANSWERED QUESTIONS');
Here, we are checking that the page contains the correct title. If we save the test and look at the test runner, we'll see that the test has failed:

Figure 13.17 – Failing test in Cypress
This is because the title's text isn't actually in capitals – a CSS rule transformed the text into capitals.
Notice the message Cypress uses to inform us of the failing test: Timed out retrying. Cypress will keep trying commands until they pass or a timeout occurs. This behavior is really convenient for us because it allows us to write synchronous style code, even though the operations we are testing are asynchronous. Cypress abstracts this complexity from us.
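This retry-until-timeout behavior can be illustrated with a small sketch. The helper below is hypothetical (not how Cypress is implemented); the injected clock just makes the example deterministic.

```typescript
// Sketch of Cypress-style retrying: re-run a check until it passes
// or a timeout elapses ("Timed out retrying").
function retryUntil(
  check: () => boolean,
  timeoutMs: number,
  now: () => number // injected clock for determinism
): boolean {
  const start = now();
  while (true) {
    if (check()) return true; // the command eventually passed
    if (now() - start >= timeoutMs) return false; // timed out retrying
    // a real implementation would also wait between attempts
  }
}

// Simulate content that only appears on the third attempt,
// as when an asynchronous render completes.
let attempts = 0;
let fakeTime = 0;
const appeared = retryUntil(
  () => ++attempts >= 3,
  4000,
  () => (fakeTime += 50)
);
// appeared is true after 3 attempts
```

Because each command retries internally, the test code stays synchronous in style even though the page under test updates asynchronously.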
cy.contains('Unanswered Questions');
cy.contains('Sign In').click();
cy.url().should('include', 'auth0');
Here, we use the Cypress contains command to locate the Sign In button and chain a click command to this to click the button.
Then, we use the url command to get the browser's URL and chain a should command to this statement to verify that it contains the correct path.
If we look at the test runner, we'll see that the test managed to navigate to Auth0 correctly.
Let's think about these steps that Cypress is executing. The navigation to Auth0 is an asynchronous operation, but our test code doesn't appear to be asynchronous. We haven't added a special wait function to wait for the page navigation to complete. Cypress makes testing single-page apps that have asynchronous user interfaces a breeze because it deals with this complexity for us!
Next, we'll implement some steps so that we can fill in the sign-in form:
cy.findByLabelText('Email')
.type('your username')
.should('have.value', 'your username');
cy.findByLabelText('Password')
.type('your password')
.should('have.value', 'your password');
Here, we use the findByLabelText command from the Cypress Testing Library to locate our input. It does this by finding the label containing the text we specified and then finding the associated input (referenced in the label's for attribute). This is another neat function that frees the tests from implementation details such as element IDs and class names.
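Conceptually, the lookup works like the sketch below: find the label with the given text, then the input whose id matches the label's for attribute. This is a simplified data model for illustration only; the real library walks the DOM.

```typescript
// Simplified model of label-to-input association (illustrative only).
interface Label { labelFor: string; labelText: string }
interface Input { id: string; value: string }

function findByLabelText(
  labels: Label[],
  inputs: Input[],
  text: string
): Input | undefined {
  // 1. Locate the label containing the specified text.
  const label = labels.find((l) => l.labelText === text);
  // 2. Follow its "for" attribute to the associated input.
  return label && inputs.find((i) => i.id === label.labelFor);
}

const labels = [{ labelFor: 'email', labelText: 'Email' }];
const inputs = [{ id: 'email', value: 'user@example.com' }];
const emailInput = findByLabelText(labels, inputs, 'Email');
// emailInput is the input with id "email"
```

Because the test only depends on the label text that users see, renaming an element id or class doesn't break it.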
We chain the Cypress type command to enter characters into the input, and the should command to verify that the input's value property has been set correctly.
Important Information
Substitute your test username and password appropriately.
cy.get('form').submit();
cy.contains('Unanswered Questions');
We use the Cypress get command to locate the form and then submit it. Then, we check that the page contains the Unanswered Questions text to verify we are back in the Q and A app. Cypress takes care of the asynchronicity of these steps for us.
cy.contains('Ask a question').click();
cy.contains('Ask a question');
const title = 'title test';
const content = 'Lots and lots and lots and lots and lots of content test';
cy.findByLabelText('Title')
.type(title)
.should('have.value', title);
cy.findByLabelText('Content')
.type(content)
.should('have.value', content);
We fill in the title and content fields by using the same commands that we did on the sign-in form. The title must be at least 10 characters, and the content must be at least 50 characters, to satisfy the validation rules.
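The minimum-length rules mentioned here can be sketched as a standalone function. The helper and error messages below are hypothetical; the app's actual rules live in its form validation code.

```typescript
// Sketch of the question validation rules: title >= 10 characters,
// content >= 50 characters (hypothetical helper, illustrative only).
interface QuestionErrors {
  title?: string;
  content?: string;
}

function validateQuestion(title: string, content: string): QuestionErrors {
  const errors: QuestionErrors = {};
  if (title.trim().length < 10) {
    errors.title = 'Title must be at least 10 characters';
  }
  if (content.trim().length < 50) {
    errors.content = 'Content must be at least 50 characters';
  }
  return errors;
}

const bad = validateQuestion('short', 'too short');
const good = validateQuestion(
  'title test',
  'Lots and lots and lots and lots and lots of content test'
);
// bad has both errors; good has none
```

The test data we typed above ('title test' is exactly 10 characters, and the content string is well over 50) passes both checks, so the form will submit.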
cy.contains('Submit Your Question').click();
cy.contains('Your question was successfully submitted');
cy.contains('Sign Out').click();
cy.contains('You successfully signed out!');
If we look at the test runner, we'll discover that our test runs and passes successfully:
Figure 13.18 – Test run
If the test is failing, it may be because the user was signed into the browser session before the test started. If this is the case, click the Sign Out button and rerun the test.
That completes our end-to-end test and all the tests we are going to create in this chapter. Now that we've written the appropriate unit tests, integration tests, and end-to-end tests, we have a feel for the benefits and challenges of each type, as well as how to implement them.
End-to-end tests with Cypress allow us to quickly cover areas of our app. However, they require a fully operational frontend and backend, including a database. Cypress abstracts away the complexity of the asynchronous nature of single-page applications, making our tests nice and easy to write.
Unit tests can be written using xUnit in .NET and can be placed in an xUnit project, separate from the main app. xUnit test methods are decorated with the Fact attribute, and we can use the Assert class to carry out checks on the item that we are testing.
Unit tests can be written using Jest for React apps and are contained in files with test.ts or test.tsx extensions. Jest's expect function gives us many useful matcher functions, such as toBe, that we can use to make test assertions.
Unit tests often require dependencies to be mocked. Moq is a popular mocking tool in the .NET community and has a Mock class, which can be used to mock dependencies. On the frontend, Jest has a range of powerful mocking capabilities that we can use to mock out dependencies, such as REST API calls.
A page is often composed of several components and sometimes, it is convenient to just write integration tests on the page component without mocking the child components. We can implement these tests using Jest in exactly the same way as we can implement a unit test.
The React Testing Library and the Cypress Testing Library help us write robust tests by allowing us to locate elements in a way that doesn't depend on implementation details. This means that if the implementation changes while its features and the behavior remain the same, the test is unlikely to break. This approach reduces the maintenance cost of our test suite.
Now that our app has been built and we've covered automated tests, it's time to deploy it to Azure. We'll do this in the next chapter.
The following questions will test your knowledge of the topics that were covered in this chapter:
public void Minus_When2Integers_ShouldReturnCorrectInteger()
{
var result = Calc.Add(2, 1);
Assert.Equal(1, result);
}
interface Person {
id: number;
firstName: string;
surname: string
}
We want to check that the person variable is { id: 1, firstName: "Tom", surname: "Smith" }. What Jest matcher function can we use?
expect(result).not.toBeNull();
expect(person).toEqual({
id: 1,
firstName: "Tom",
surname: "Smith"
});
cy.contains('Sign In');
cy.contains('Loading...');
cy.contains('Loading...').should('not.exist');
The first command will check that the page renders Loading... on the initial render. The second command will wait until Loading... disappears – that is, the data has been fetched.
The following resources are useful if you want to find out more about testing with xUnit and Jest:
In this chapter, we'll deploy our app into production in Microsoft Azure so that all of our users can start to use it. We will focus on the backend to start with, making the necessary changes to our code so that it can work in production and staging environments in Azure. We will then deploy our backend application programming interfaces (APIs), along with the Structured Query Language (SQL) database, to both staging and production from within Visual Studio. After the first deployment, subsequent deployments can be done with the click of a button in Visual Studio.
We will then turn our attention to the frontend, again making changes to our code to support development, staging, and production environments. We will then deploy our frontend to Azure to both the staging and production environments.
In this chapter, we'll cover the following topics:
We'll use the following tools and services in this chapter:
All of the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. To restore code from a chapter, the source code repository can be downloaded and the relevant folder opened in your editor. If the code is frontend code, then npm install can be entered in the Terminal to restore the dependencies.
Check out the following video to see the Code in Action: https://bit.ly/34u28bd
In this section, we are going to sign up for Azure if we haven't already got an account. We'll then have a quick look around the Azure portal and understand the services we are going to use to run our app.
If you don't already have an Azure account, there's never been a better time to sign up and give Azure a try. At the time of writing this book, you can sign up to Azure and get 12 months of free services at the following link: https://azure.microsoft.com/en-us/free/.
We'll need a Microsoft account to sign up for Azure, which is free to create if you haven't already got one. You are then required to complete a sign-up form that contains the following personal information:
You then need to go through two different verification processes. The first is verification via a text message or a call on your phone. The second is to verify your credit card details.
Important Note
The last step in the sign-up process is to agree to the terms and conditions.
After we have an Azure account, we can sign in to the Azure portal using our Microsoft account. The Uniform Resource Locator (URL) for the portal is https://portal.azure.com.
When we log in to the Azure portal, we'll see that it contains a wide variety of services, as illustrated in the following screenshot:
Figure 14.1 – Azure home page
We are going to use just a couple of these fantastic services, as follows:
We are going to put all of these resources into what's called a resource group. Let's create the resource group now, as follows:

Figure 14.2 – Resource groups page

Figure 14.3 – Creating a resource group
Figure 14.4 – Resource groups list with our new resource group
Important Note
Our resource group is now ready for the other services to be provisioned. Before we provision any other services, we'll configure our backend for production in the next section.
In this section, we are going to create separate appsettings.json files for staging and production as well as for working locally in development. Let's open our backend project in Visual Studio and carry out the following steps:

Figure 14.5 – The appsettings files in Solution Explorer
Notice that two settings files start with the word appsettings.
Important Note
We can have different settings files for different environments. The appsettings.json file is the default settings file and can contain settings common to all environments. appsettings.Development.json is used during development when we run the backend in Visual Studio and overrides any duplicate settings that are in the appsettings.json file. The middle part of the filename needs to match an environment variable called ASPNETCORE_ENVIRONMENT, which is set to Development in Visual Studio by default and Production by default in Azure. So, appsettings.Production.json can be used for settings specific to the production environment in Azure.
{
"ConnectionStrings": {
"DefaultConnection": "Server=localhost\\SQLEXPRESS;Database=QandA;Trusted_Connection=True;"
},
"Frontend": "http://localhost:3000"
}
We will leave the Auth0 settings in the default appsettings.json file because these will apply to all environments.
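The layering of settings files can be illustrated with a simple merge sketch. This is conceptual TypeScript, not the actual .NET configuration code, and the Auth0 authority value is a placeholder.

```typescript
// Conceptual sketch: appsettings.{Environment}.json values override
// appsettings.json values, setting by setting.
const appsettings = {
  Auth0: { Authority: 'https://your-tenant.auth0.com' }, // common to all envs
  Frontend: 'http://localhost:3000',
};

const appsettingsProduction = {
  Frontend: 'https://your-frontend.azurewebsites.net', // overrides the default
};

function buildConfig(env: string) {
  // Later sources win, mirroring how ASP.NET Core layers
  // appsettings.json and appsettings.{Environment}.json.
  return env === 'Production'
    ? { ...appsettings, ...appsettingsProduction }
    : { ...appsettings };
}

const dev = buildConfig('Development');
const prod = buildConfig('Production');
// dev keeps the localhost frontend; prod overrides it but
// still inherits the Auth0 settings from the base file
```

This is why the Auth0 settings only need to appear once, in the default file, while Frontend varies per environment.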

Figure 14.6 – Adding an appsettings file for production
{
"Frontend": "https://your-frontend.azurewebsites.net"
}
So, this contains the production frontend URL that we will create in Azure. Take note of this setting because we will need it when we provision the frontend in Azure.
{
"Frontend": "https://your-frontend-staging.azurewebsites.net"
}
We haven't specified the production or staging connection strings because we will store these in Azure. This is because these connection strings store secret usernames and passwords, which are more secure in Azure than our source code.
We are now ready to start to create Azure services and deploy our backend. We'll do this in the next section.
In this section, we are going to deploy our database and backend API to Azure using Visual Studio. We will create publish profiles for deployment to a production environment as well as a staging environment. During the process of creating the profiles, we will create the required Azure app services and SQL databases. At the end of this section, we will have two profiles that we can use to quickly deploy to our staging and production environments.
Let's carry out the following steps to create a production deployment profile and use it to deploy our backend to production:

Figure 14.7 – Selecting Azure as the publish target

Figure 14.8 – Selecting Azure App Service as the publish specific target

Figure 14.9 – Selecting or creating a new app service

Figure 14.10 – Creating the app service

Figure 14.11 – Selecting the app service to deploy to

Figure 14.12 – Summary of publish configuration

Figure 14.13 – App Service URL with a site that isn't deployed

Figure 14.14 – Selecting Azure SQL Database

Figure 14.15 – Option to create a new SQL database

Figure 14.16 – New SQL Server dialog

Figure 14.17 – New SQL Database dialog

Figure 14.18 – SQL database list

Figure 14.19 – Connection string configuration
This connection string will now be stored in the Application Settings section in our Azure App Service.

Figure 14.20 – Summary of publish configuration

Figure 14.21 – Setting deployment mode to Self-Contained
Figure 14.22 – Our REST API in Azure
We will see the default questions from our database. Congratulations! We have just deployed our first SQL database and ASP.NET Core app in Azure!
Let's go to the Azure portal by navigating to https://portal.azure.com. Select the All resources option, which results in the following screen:
Figure 14.23 – Provisioned services in Azure
As expected, we see the services that we have just provisioned.
Excellent! We have just successfully deployed our backend in Azure!
As our backend is further developed, we can return to this profile and use the Publish button to quickly deploy our updated backend.
Next, let's follow a similar process to deploy to a staging environment.
Let's carry out the following steps to deploy our backend to a staging environment:
Figure 14.24 – Azure App Service application settings
That completes the deployment of our app to a staging environment.
That's great progress! Azure works beautifully with Visual Studio. In the next section, we are going to turn our attention to the frontend and make changes so that it will work in the Azure staging and production environments, as well as in development.
In this section, we are going to change our frontend so that it makes requests to the correct backend APIs in staging and production. At the moment, the REST API has a hardcoded path set to the localhost. We are going to make use of environment variables as we did in our backend, to differentiate between the different environments. Let's open our frontend project in Visual Studio Code and carry out the following steps:
> npm install cross-env --save-dev
"scripts": {
...,
"build": "react-scripts build",
"build:production": "cross-env REACT_APP_ENV=production npm run build",
"build:staging": "cross-env REACT_APP_ENV=staging npm run build",
...
},
So, npm run build:staging will execute a staging build and npm run build:production will execute a production build.
export const server =
process.env.REACT_APP_ENV === 'production'
? 'https://your-backend.azurewebsites.net'
: process.env.REACT_APP_ENV === 'staging'
? 'https://your-backend-staging.azurewebsites.net'
: 'http://localhost:17525';
We use a ternary expression to set the correct backend location, depending on the environment the app is running in. The production server is set to https://your-backend.azurewebsites.net, and the staging server is set to https://your-backend-staging.azurewebsites.net.
Make sure the staging and production locations you enter match the location of your deployed backends:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.webServer>
<rewrite>
<rules>
<rule name="React Routes"
stopProcessing="true">
<match url=".*" />
<conditions logicalGrouping="MatchAll">
<add input="{REQUEST_FILENAME}"
matchType="IsFile" negate="true" />
</conditions>
<action type="Rewrite" url="/"
appendQueryString="true" />
</rule>
</rules>
</rewrite>
</system.webServer>
</configuration>
<Link
to="/"
css={ ... }
>
Q & A
<span
css={css`
margin-left: 5px;
font-size: 14px;
font-weight: normal;
`}
>
{process.env.REACT_APP_ENV || 'dev'}
</span>
</Link>
If the environment variable isn't populated, we assume we are in the development environment.
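The resolution logic used in this section can be pulled together into a small sketch: REACT_APP_ENV selects the backend URL, and an unset value falls back to development. The URLs are the placeholder values from this chapter; substitute your own.

```typescript
// Sketch of the environment resolution from this section.
function resolveBackend(env: string | undefined): string {
  return env === 'production'
    ? 'https://your-backend.azurewebsites.net'
    : env === 'staging'
    ? 'https://your-backend-staging.azurewebsites.net'
    : 'http://localhost:17525'; // development default
}

function environmentLabel(env: string | undefined): string {
  return env || 'dev'; // what the header badge displays
}

const devServer = resolveBackend(undefined);
const stagingLabel = environmentLabel('staging');
// devServer is the localhost URL; stagingLabel is 'staging'
```

Because create-react-app inlines REACT_APP_* variables at build time, each npm build script produces a bundle permanently wired to one backend.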
That completes the changes we need to make to our frontend. In the next section, we are going to deploy the frontend to Azure.
In this section, we are going to deploy our React frontend to Azure, to both staging and production environments.
Let's carry out the following steps to publish our frontend to a production environment:
> npm run build:production
After the build has finished, the production build will consist of all of the files in the build folder.

Figure 14.25 – Azure App Service extension in Visual Studio Code

Figure 14.26 – Azure App Service panel

Figure 14.27 – Deploying an app to an Azure app service

Figure 14.28 – Deployment confirmation

Figure 14.29 – Confirmation of deployment completion
Figure 14.30 – Q&A app running in production
Our frontend is now deployed nicely to the production environment. We won't be able to sign in successfully yet—we'll resolve this after we have published our frontend to the staging environment.
Let's carry out the following steps to deploy our frontend to a staging environment:
> npm run build:staging
After the build has finished, the staging build will consist of all of the files in the build folder, overwriting the previous production build.

Figure 14.31 – Q&A app running in staging
Allowed Callback URLs—This is shown in the following screenshot:
Figure 14.32 – Auth0 allowed callback URLs
Allowed Web Origins—This is shown in the following screenshot:
Figure 14.33 – Auth0 allowed web origins
Allowed Logout URLs—This is shown in the following screenshot:
Figure 14.34 – Auth0 allowed logout URLs
We can find these settings by clicking on the Applications item in the left-hand navigation menu and then clicking on the Q and A application. We add the additional URLs for both the staging and production environments after the development environment URLs. The URLs for the different environments need to be separated by a comma.
We should now be able to sign in to our production and staging Q&A apps successfully.
That completes the deployment of our frontend to both production and staging environments.
Azure works beautifully with both React and ASP.NET Core apps. In ASP.NET Core, we can have different appsettings.json files to store the different settings for the different environments, such as the frontend location for CORS. In our React code, we can use an environment variable to make requests to the appropriate backend. We also need to include a web.config file in our React app so that deep links are redirected to the index.html page and then handled by React Router. The environment variable can be set in specific build npm scripts for each environment. We used three environments in this chapter, but both the frontend and backend could easily be configured to support more environments.
Azure has integration from both Visual Studio and Visual Studio Code that makes deploying React and ASP.NET Core apps a breeze. We use the built-in Publish... option in Visual Studio to provision the SQL database with app services and then perform the deployment. We can also provision app services in the Azure portal, which we did for our frontend. We can then use the Azure App Service Visual Studio Code extension to deploy the frontend to an app service.
Although deploying our app to Azure was super-easy, we can make it even easier by automating the deployment when we check code into source control. We'll do this in the next chapter.
The following questions will test what we have learned in this chapter:
"build:qa": "cross-env REACT_APP_ENV=qa npm run build"
Which npm command would we use to produce a QA build?
The following resources are useful for finding more information on deploying ASP.NET Core and React apps to Azure:
In this chapter, we are going to implement Continuous Integration (CI) and Continuous Delivery (CD) for our Q&A app using Azure DevOps. We'll start by understanding exactly what CI and CD are before getting into Azure DevOps.
In Azure DevOps, we'll implement CI for the frontend and backend using a build pipeline. The CI process will be triggered when developers push code to our source code repository. Then, we'll implement CD for the frontend and backend using a release pipeline that will be automatically triggered when a CI build completes successfully. The release pipeline will do a deployment to the staging environment automatically, run our backend integration tests, and then promote the staging deployment to production.
By the end of this chapter, we'll have a robust process of delivering features to our users incredibly quickly with a great level of reliability, thus making our team very productive.
In this chapter, we'll cover the following topics:
We'll use the following tools and services in this chapter:
All the code snippets in this chapter can be found online at https://github.com/PacktPublishing/ASP.NET-Core-5-and-React-Second-Edition. In order to restore code from a chapter, the source code repository can be downloaded and the relevant folder opened in your editor. If the code is frontend code, then npm install can be entered in the Terminal to restore the dependencies.
Check out the following video to see the code in action: https://bit.ly/3mE6Qta.
In this section, we'll start by understanding what CI and CD are before making a change in our frontend code to allow the frontend tests to work in CI. Then, we'll create our Azure DevOps project, which will host our build and release pipelines.
CI is the process of merging developers' working copies into a shared master branch of code in a source code system several times a day, with each merge automatically triggering what is called a build. A build is the process of automatically producing all the artifacts that are required to successfully deploy, test, and run our production software. The benefit of CI is that it automatically gives the team feedback on the quality of the changes that are being made.
CD is the process of getting changes that developers make to the software into production, regularly and safely, in a sustainable way. So, it is the process of taking the build from CI and getting that deployed to the production environment. The CI build may be deployed to a staging environment, where the end-to-end tests are executed and passed before deployment is made to the production environment. At its most extreme, the CD process is fully automated and triggered when a CI build finishes. Often, a member of the team has to approve the final step of deploying the software to production, which should have already passed a series of automated tests in staging. CD is also not always triggered automatically when a CI build finishes; sometimes, it is automatically triggered at a particular time of day. The benefit of CD is that the development team delivers value to users of the software faster and more reliably.
The following diagram shows the high-level CI and CD flow that we are going to set up:
Figure 15.1 – High-level CI and CD flow
When code is pushed to our source code repository, we are going to build all the backend and frontend artifacts and execute the xUnit and Jest tests. If the builds and tests are successful, this will automatically kick off a staging deployment. The Cypress tests will execute on the staging deployment and, if they pass, a production deployment will be triggered.
We need to make some changes to the configuration of the frontend tests and end-to-end tests so that they execute correctly in the build and deployment pipelines. Let's open the frontend project in Visual Studio Code and make the following changes:
...
"scripts": {
...
"test": "react-scripts test",
"test:ci": "cross-env CI=true react-scripts test",
...
},
...
This script sets an environment variable called CI to true before running the Jest tests.
{
"baseUrl": "https://your-frontend-staging.azurewebsites.net",
"integrationFolder": "integration",
"pluginsFile": "plugins/index.js",
"supportFile": "support/index.js",
"chromeWebSecurity": false
}
This is going to be the cypress.json file that runs the tests on the staging app after it has been deployed. Here's an explanation of the settings we have added:
{
"name": "cypress-app-tests",
"version": "0.1.0",
"private": true,
"scripts": {
"cy:run": "cypress run"
},
"devDependencies": {
"@testing-library/cypress": "^7.0.1",
"cypress": "^5.4.0"
}
}
The key items in this file are declaring Cypress and the Cypress Testing Library as development dependencies and the cy:run script, which we'll use later to run the Cypress tests.
Now, our Jest and Cypress tests will be able to execute during a build and deployment.
Azure DevOps can be found at https://dev.azure.com/. We can create an account for free if we haven't got one already.
To create a new project, click the New project button on the home page and enter a name for the project in the panel that appears. We can choose to make our project public or private before clicking the Create button, as illustrated in the following screenshot:
Figure 15.2 – Creating a new Azure DevOps project
That's our Azure DevOps project created. In the next section, we will create a build pipeline in our Azure DevOps project for our Q&A app.
In this section, we are going to implement CI for our Q&A app using a build pipeline in Azure DevOps. We will start by creating a build pipeline from a template and add extra steps to build all the artifacts of the Q&A app. We'll also observe the build trigger when code is pushed to our source code repository.
Let's carry out the following steps to create a build pipeline from a template:

Figure 15.3 – Selecting the code repository host for the new build pipeline

Figure 15.4 – Selecting code repository for the new build pipeline

Figure 15.5 – Build pipeline code review step

Figure 15.6 – Build pipeline list
Figure 15.7 – Build pipeline page
That's our basic build pipeline created. In the next section, we'll fully implement the build pipeline for our Q&A app.
We are now going to change the build pipeline so that it builds and publishes all the artifacts in our Q&A app. The published artifacts that we require are as follows:
Let's carry out the following steps:
Important Note
YAML is commonly used for configuration files because it is a little more compact than JavaScript Object Notation (JSON) and can contain comments.
The following YAML file was generated by the ASP.NET Core build pipeline template:
# ASP.NET Core
# Build and test ASP.NET Core projects targeting .NET Core.
# Add steps that run tests, create a NuGet package, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core
trigger:
- main
pool:
vmImage: 'ubuntu-latest'
variables:
buildConfiguration: 'Release'
steps:
- script: dotnet build --configuration $(buildConfiguration)
displayName: 'dotnet build $(buildConfiguration)'
Important Note
The steps in a build are defined after the steps: keyword. Each step is defined after a hyphen (-). The script: keyword allows a command to be executed, while the displayName: keyword is the description of the step that we'll see in the log file. The variables that are used in the steps are declared after the variables: keyword. The trigger: keyword determines when a build should be started.
So, the build contains a single step, which executes the dotnet build command with Release passed into the --configuration parameter.
steps:
- script: dotnet build --configuration $(buildConfiguration)
workingDirectory: backend
displayName: 'backend build'
We have specified that the working directory is the backend folder and changed the step name slightly.
steps:
- task: UseDotNet@2
inputs:
packageType: 'sdk'
version: '5.0.100'
- script: dotnet build --configuration $(buildConfiguration)
workingDirectory: backend
displayName: 'backend build'
If you are using a different version of .NET Core, then change the version as required.
trigger:
- master

Figure 15.8 – Build pipeline save confirmation

Figure 15.9 – Successful build pipeline execution
steps:
- task: UseDotNet@2
inputs:
packageType: 'sdk'
version: '5.0.100'
- script: dotnet build --configuration $(buildConfiguration)
workingDirectory: backend
displayName: 'backend build'
- script: dotnet test
workingDirectory: backend
displayName: 'backend tests'
Here, we use the dotnet test command to run the automated tests.
steps:
...
- script: dotnet publish -c $(buildConfiguration) --self-contained true -r win-x86
  workingDirectory: backend
  displayName: 'backend publish'
Here, we use the dotnet publish command to publish the code. What's the difference between dotnet build and dotnet publish? Well, the dotnet build command only outputs the artifacts from the code we have written, and not any third-party libraries such as Dapper. The dotnet publish command outputs everything the app needs to run, including third-party libraries and, for a self-contained deployment, the .NET runtime itself.
We are deploying the backend in self-contained mode for the win-x86 architecture, as we did in the last chapter with Visual Studio.
steps:
...
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'backend/bin/Release/net5.0/win-x86/publish'
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(Build.ArtifactStagingDirectory)/backend/$(Build.BuildId).zip'
    replaceExistingArchive: true
  displayName: 'backend zip files'
steps:
...
- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/backend'
    artifactName: 'backend'
  displayName: 'backend publish to pipeline'
Here, we use the PublishBuildArtifacts@1 task to publish the ZIP file to the pipeline. We named it backend.
This completes the build configuration for the backend. Let's move on to the frontend now.
steps:
...
- script: npm install
  workingDirectory: frontend
  displayName: 'frontend install dependencies'
Here, we use the npm install command to install the dependencies. Notice that we have set the working directory to frontend, which is where our frontend code is located.
steps:
...
- script: npm run test:ci
  workingDirectory: frontend
  displayName: 'frontend tests'
Here, we use the npm run test:ci command rather than npm run test. This sets the CI environment variable to true, which makes the test runner execute all the tests once and then exit, rather than entering interactive watch mode, so the tests run correctly in our build.
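As a reminder, the test:ci script is not something Create React App provides out of the box; it is assumed to have been added to the frontend's package.json along these lines (cross-env is an illustrative way of setting the variable across operating systems):

```json
{
  "scripts": {
    "test": "react-scripts test",
    "test:ci": "cross-env CI=true react-scripts test"
  }
}
```

On a Linux build agent such as ubuntu-latest, a plain `CI=true react-scripts test` script would also work.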
steps:
...
- script: npm run build:staging
  workingDirectory: frontend
  displayName: 'frontend staging build'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'frontend/build'
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(Build.ArtifactStagingDirectory)/frontend-staging/build.zip'
    replaceExistingArchive: true
  displayName: 'frontend staging zip files'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'frontend/cypress'
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(Build.ArtifactStagingDirectory)/frontend-staging/tests.zip'
    replaceExistingArchive: true
  displayName: 'frontend cypress zip files'
- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/frontend-staging'
    artifactName: 'frontend-staging'
  displayName: 'frontend staging publish to pipeline'
Here, we use the npm run build:staging command to produce the staging build, which sets the REACT_APP_ENV environment variable to staging. We use the ArchiveFiles@2 task we used previously to zip up the frontend build and Cypress tests, and then the PublishBuildArtifacts@1 task to publish the ZIP file to the pipeline.
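To illustrate why baking REACT_APP_ENV into the build matters, here is a hedged TypeScript sketch of how a frontend might branch on it. The function name and URLs are placeholders for illustration, not the book's actual code:

```typescript
// Illustrative sketch: choosing an API base URL from the environment
// name that Create React App bakes in at build time. The URLs below
// are hypothetical placeholders.
export type AppEnv = 'staging' | 'production' | 'development';

export function getApiBaseUrl(env: AppEnv): string {
  switch (env) {
    case 'staging':
      return 'https://qanda-staging.example.net/api';
    case 'production':
      return 'https://qanda.example.net/api';
    default:
      // Local development falls back to the locally running backend
      return 'http://localhost:5000/api';
  }
}

// At runtime this would typically be called as:
// getApiBaseUrl((process.env.REACT_APP_ENV as AppEnv) ?? 'development');
```

Because the variable is substituted at build time, the staging and production ZIP files contain different compiled values, which is why we produce two separate builds.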
steps:
...
- script: npm run build:production
  workingDirectory: frontend
  displayName: 'frontend production build'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'frontend/build'
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(Build.ArtifactStagingDirectory)/frontend-production/build.zip'
    replaceExistingArchive: true
  displayName: 'frontend production zip files'
- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/frontend-production'
    artifactName: 'frontend-production'
  displayName: 'frontend production publish to pipeline'
Here, we use the npm run build:production command to produce the build, which sets the REACT_APP_ENV environment variable to production. We use the ArchiveFiles@2 task we used previously to zip up the build and the PublishBuildArtifacts@1 task to publish the ZIP file to the pipeline.
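The build:staging and build:production scripts are assumed to have been defined in the frontend's package.json in an earlier chapter, roughly as follows (cross-env is illustrative; a plain environment-variable prefix would work on a Linux agent):

```json
{
  "scripts": {
    "build": "react-scripts build",
    "build:staging": "cross-env REACT_APP_ENV=staging react-scripts build",
    "build:production": "cross-env REACT_APP_ENV=production react-scripts build"
  }
}
```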

Figure 15.10 – Another successful pipeline execution
Figure 15.11 – Pipeline step execution details
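Assembled from the fragments in this section, the complete azure-pipelines.yml looks roughly like this. This is a sketch for reference; the SDK version, branch name, and folder paths may differ in your project:

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'

steps:
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '5.0.100'
- script: dotnet build --configuration $(buildConfiguration)
  workingDirectory: backend
  displayName: 'backend build'
- script: dotnet test
  workingDirectory: backend
  displayName: 'backend tests'
- script: dotnet publish -c $(buildConfiguration) --self-contained true -r win-x86
  workingDirectory: backend
  displayName: 'backend publish'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'backend/bin/Release/net5.0/win-x86/publish'
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(Build.ArtifactStagingDirectory)/backend/$(Build.BuildId).zip'
    replaceExistingArchive: true
  displayName: 'backend zip files'
- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/backend'
    artifactName: 'backend'
  displayName: 'backend publish to pipeline'
- script: npm install
  workingDirectory: frontend
  displayName: 'frontend install dependencies'
- script: npm run test:ci
  workingDirectory: frontend
  displayName: 'frontend tests'
- script: npm run build:staging
  workingDirectory: frontend
  displayName: 'frontend staging build'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'frontend/build'
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(Build.ArtifactStagingDirectory)/frontend-staging/build.zip'
    replaceExistingArchive: true
  displayName: 'frontend staging zip files'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'frontend/cypress'
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(Build.ArtifactStagingDirectory)/frontend-staging/tests.zip'
    replaceExistingArchive: true
  displayName: 'frontend cypress zip files'
- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/frontend-staging'
    artifactName: 'frontend-staging'
  displayName: 'frontend staging publish to pipeline'
- script: npm run build:production
  workingDirectory: frontend
  displayName: 'frontend production build'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: 'frontend/build'
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(Build.ArtifactStagingDirectory)/frontend-production/build.zip'
    replaceExistingArchive: true
  displayName: 'frontend production zip files'
- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/frontend-production'
    artifactName: 'frontend-production'
  displayName: 'frontend production publish to pipeline'
```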
That is our build pipeline complete. We will use the published build artifacts in the next section when we deploy these to Azure using a release pipeline.
In this section, we are going to implement a release pipeline in Azure DevOps by implementing a CD process for our app. This process will consist of deploying to the staging environment, followed by the Cypress end-to-end tests being executed before the deployment is promoted to production.
Carry out the following steps in the Azure DevOps portal to deploy a build to the staging environment:

Figure 15.12 – Release pipelines

Figure 15.13 – Release pipeline template selection

Figure 15.14 – Visual representation of release pipeline

Figure 15.15 – QandA pipeline

Figure 15.16 – Adding an artifact

Figure 15.17 – Unlinking parameters

Figure 15.18 – Backend staging release

Figure 15.19 – Adding the Azure App Service deploy task

Figure 15.20 – Frontend staging release

Figure 15.21 – Adding the Extract files task

Figure 15.22 – Extracting Cypress tests

Figure 15.23 – Adding the Command line task
> npm install
We need to set the working directory to be $(System.DefaultWorkingDirectory)/cypress, as illustrated in the following screenshot:

Figure 15.24 – Task for installing the Cypress tests
> npm run cy:run
We need to set the working directory to be $(System.DefaultWorkingDirectory)/cypress, as illustrated in the following screenshot:

Figure 15.25 – Task for running the Cypress tests
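The cy:run script is assumed to be defined in the Cypress folder's package.json, along these lines:

```json
{
  "scripts": {
    "cy:run": "cypress run"
  }
}
```

The cypress run command executes the tests headlessly, which is what we want on a build agent with no display.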
That completes the staging deployment configuration and execution of the end-to-end tests. Next, we will add tasks to carry out the production deployment.
Carry out the following steps in the release pipeline to deploy build artifacts to the production environment:

Figure 15.26 – Cloning a release pipeline stage

Figure 15.27 – Naming the production stage

Figure 15.28 – Selecting the production tasks

Figure 15.29 – Removing a task

Figure 15.30 – Changing the production backend app service

Figure 15.31 – Changing the production frontend app service and package

Figure 15.32 – Enabling continuous deployment
That completes the production deployment configuration. Next, we will test our automated deployment.
We are now going to make a code change and push it to our source code repository. This should trigger a build and deployment. Let's give this a try, as follows:
<Link ... >
Q & A!
…
</Link>

Figure 15.33 – Build in progress

Figure 15.34 – Release in progress
Figure 15.35 – Successfully completed release
That completes our CD pipeline.
In this final chapter, we learned that CI and CD are automated processes that take the code changes developers make all the way into production. Implementing these processes improves the quality of our software and helps us deliver value to users of the software extremely quickly.
Implementing CI and CD processes in Azure DevOps is ridiculously easy. CI is implemented using a build pipeline, and Azure DevOps has loads of great templates for different technologies to get us started. The CI process is scripted in a YAML file where we execute a series of steps, including command-line commands and other tasks such as zipping up files. The steps in the YAML file must include tasks that publish the build artifacts to the build pipeline so that they can be used in the CD process.
The CD process is implemented using a release pipeline and a visual editor. Again, there are lots of great templates to get us started. We define stages in the pipeline, which execute tasks on the artifacts that are published from the build pipeline. We can have multiple stages deploying to our different environments. We can make each stage automatically execute, or execute only when a trusted member of the team approves it. There are many task types that can be executed, including deploying to an Azure service such as an app service and running .NET tests.
So, we have reached the end of this book. We've created a performant and secure REpresentational State Transfer (REST) application programming interface (API) that interacts with a SQL Server database using Dapper. Our React frontend interacts beautifully with this API and has been structured so that it scales in complexity by using TypeScript throughout.
We've learned how to manage simple as well as complex frontend state requirements and learned how to build reusable components to help speed up the process of building frontends. We completed the development of our app by adding automated tests, and deployed it to Azure with CI and CD processes using Azure DevOps.
The following questions will test your knowledge of the topics that were covered in this chapter:
The following resource is useful if you want to find out more about implementing CI and CD with Azure DevOps: https://docs.microsoft.com/en-us/azure/devops/pipelines/?view=azure-devops.
If you enjoyed this book, you may be interested in these other books by Packt:
An Atypical ASP.NET Core 5 Design Patterns Guide Carl-Hugo Marcotte
ISBN: 978-1-78934-609-1
C# 9 and .NET 5 – Modern Cross-Platform Development - Fifth Edition Mark J. Price
ISBN: 978-1-80056-810-5
Please share your thoughts on this book with others by leaving a review on the site that you bought it from. If you purchased the book from Amazon, please leave us an honest review on this book's Amazon page. This is vital so that other potential readers can see and use your unbiased opinion to make purchasing decisions, we can understand what our customers think about our products, and our authors can see your feedback on the title that they have worked with Packt to create. It will only take a few minutes of your time, but is valuable to other potential customers, our authors, and Packt. Thank you!