
To my wife, who’s been an amazing rock throughout this entire process: You make me a better person.
To my kids: You managed to fill a place in my heart I didn’t know I had empty. I love you.
For the past decade or more, there’s only been one way of reliably working with JavaScript in the back end, and that’s been through Node.js.
In May of 2020, however, that changed: we (the development community) saw the birth of a new back-end development technology, one envisioned and created by none other than the father of Node.js: Ryan Dahl.
In this book, I want to cover everything known so far, both stable and experimental, about Deno, Ryan’s new brainchild, and how it was designed to overthrow the current reigning champion. Although new and still unstable in some aspects, my hope is that by the end and thanks to the follow-along examples I’ll provide, you’ll see how much potential this new runtime brings.
I’d like to thank the amazing technical reviewer involved in the project, Alexander Nnakwue, whose great feedback was a crucial contribution to the making of this book.
I’d also like to thank the rest of the Apress editorial team, whose guidance helped me through the process of writing this book in record time, thus allowing us to release the first book about this new programming language to the public.
Thank you!


For the past 10 years, when back-end developers heard the words “JavaScript in the back end,” they instantly thought about Node.js.
Maybe not immediately at the start of those 10 years, but eventually it got to a point where the name was known to everyone as yet another available back-end technology based on JavaScript. And with its async I/O capabilities out of the box (because while other technologies also supported this, Node was the first one to have it as a core mechanic), it carved a portion of the market for itself.
More specifically, Node.js became almost the de facto choice for writing APIs, given the insane performance a developer could get and the great results achievable with very little effort.
So, why, after 10 years of evolution of the Node.js core and its surrounding ecosystem of tools and libraries, are we getting a new JavaScript runtime that is not only very similar to Node but is also meant as a better approach to solving the same problem?
The answer to that and an overview of this new project are what await you in the following chapters, so buckle up and let’s talk about Deno, shall we?
Deno’s version 1.0 was officially released on May 13, 2020, but the idea for Deno wasn’t born in 2020. In fact, although it was originally presented by its creator Ryan Dahl1 (who also wrote the original version of Node, by the way) in 2018 during a conference talk called “10 Things I Regret About Node.js,”2 by that time, he had been working on a prototype of Deno for a while.
And the motivation for this was simple: he considered Node to have some fundamental flaws that couldn’t be solved from within the project, so a better solution would be to start over. Not to redesign the language, by any means; after all, the issues between Ryan and Node weren’t about JavaScript, but rather about the internal architecture of Node and how it went about solving some of its requirements.
The first thing he changed, however, was the tech stack. Instead of relying on his old and trusted set of tools such as C++ and libuv,3 he moved to a newer approach, using Rust4 as the main language (think of it as a modern take on C++, with memory safety and no garbage collector) and Tokio,5 an asynchronous runtime that works on top of Rust. This is, in fact, the portion of the architecture that provides Deno with its event-driven, asynchronous behavior. And although not really part of the tech stack, we should also mention Go, since it wasn’t just what Ryan used for the initial prototype (the one presented back in 2018); it’s also been a big inspiration for Deno regarding some of its mechanics (as we’ll see in the following sections).
Other than a potentially outdated tech stack, what else was Ryan trying to solve when he designed Deno?
In his mind, Node had several shortcomings that were not addressed in time and then became a permanent technical debt.
To him, Node was an insecure platform where an unaware developer could potentially leave an open security hole, either by running code with unnecessary privileges or by letting code access system services that aren’t correctly protected.
In other words, with Node.js you can write a script that sends requests over TCP uncontrollably to a specific URL causing potential problems on the receiving end. This is because there is nothing stopping you from using the network services of your host computer. At least, nothing on Node’s side.
Likewise, in 2018, a very popular Node.js module’s repo was socially hacked6 (i.e., its creator was duped into giving a hacker access to its code), and the hacker added code that would steal your bitcoin wallet if you had one. Because there is no inherent security in Node, this module was able to access a path on your computer that it was never meant to touch. This would’ve never been a threat had there been a way to notice the read access on that path and require the user to manually allow it.
The module system was also something he wasn’t happy with. In his own words,7 its internal design was an afterthought compared to the amount of consideration other sections, such as async I/O or event emitters, received. He regretted making npm the de facto standard for package management for the Node ecosystem. He didn’t appreciate it being a centralized and privately controlled repository. Ryan considered the way browsers import dependencies to be much cleaner and easier to maintain, instead of having to write a new entry into a manifest file (i.e., package.json) and then install it yourself (because let’s be honest, npm will install it, but you have to run the command at some point).
In fact, the entire package.json file was something he wasn’t very happy with. Back when it was defined, he actually changed the logic for the require function to make sure it would take its content into consideration. But the added “noise” provided by the file’s syntax (i.e., author information, licensing, repository URL, etc.) was something he thought could’ve been better handled.
On a similar note, the folder where the modules are saved (node_modules) is something he would get rid of if he could. This is probably the one most of the Node community agrees with, since everyone has complained at least once about the size of this folder, especially if they have several active projects at once. That being said, the original intent of having this folder living locally in your project was to avoid confusion as to what you were installing. Of course, that was a very naive take on the solution, and the end result proves it.
There were other, minor issues he had with Node, such as the ability to require local modules without having to specify their extension; this was meant to improve the developer’s experience, but it ended up creating overcomplicated logic that had to check several extensions in order to understand what exactly needed to be required.
Or the implicit behavior associated with the index.js files (the fact that you can require a folder and it’ll default to require the index.js file inside it). As stated in his presentation, this was a “cute” feature Ryan thought of adding in order to improve the experience by simulating the behavior of index.html files for the Web. In the end, the feature didn’t add that much to the experience and caused a pattern that I don’t think was intended by the creator.
All in all, these were all his decisions or decisions he was a part of, and in his mind, there was a better way to do it, which is what triggered the creation of Deno and the design direction that he took for this new runtime.
Next, we’re going to go into more detail about exactly that: the decisions he took and how they translated into a set of features that aim not only to differentiate Deno from Node but also to provide the secure runtime Ryan wanted to give developers originally with Node.
Now that we’ve covered the basics behind the reasons why Deno was created, it’s time to understand something very basic: how to install it and use it.
Lucky for you, if you’re interested in just dipping your toes into the Deno waters to understand what it looks like, but you don’t really want to get wet just yet, there are options available. And if you want to go all the way in, you can install Deno on all major operating systems very easily as well.
If all you need is a quick little REPL to test a language feature or simply get used to what using TypeScript with Deno feels like, you might want to check out one of the online playgrounds currently available to you (and everyone with an Internet connection) for free.
You can execute code samples in both JavaScript and TypeScript and see the results on the right side of the screen.
You can enable support for unstable features (see Figure 1-1 for an example).
You can auto-format your code, which is especially useful if you’re copy-pasting it from somewhere else.
And finally, you can share your code snippets with others using a permalink generated once you click the Share button.

Figure 1-1. Deno playground with unstable features enabled
Another important thing to note about this playground is that it’s already using the latest available version of Deno: version 1.0.1.
If you’re using another playground and want to make sure you’re using the latest version, you can simply use the following code snippet:
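A quick, hedged way to do that (Deno.version is a real global; the values shown are just an example):

```ts
// Prints the versions of Deno, V8, and TypeScript bundled with the runtime
console.log(Deno.version);
// e.g. { deno: "1.0.1", v8: "8.4.300", typescript: "3.9.2" }
```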
The other playground worth mentioning is Deno.town;10 although not as feature-rich as the previous one, it has a very simple interface and it works just as well.

Figure 1-2. Deno.town, online playground for Deno

Figure 1-3. IntelliSense at work while writing code using Deno.town
Figure 1-3 shows a very familiar sight, especially if you’re a VS Code11 user, since it resembles the default theme and overall behavior of that IDE. This is definitely a great feature, especially if you’re looking for help on a particular API. It’ll provide quick help on what your options are and how you can use it.
Finally, to close out this chapter, we’re going to do what you’ve probably been looking to do since you began reading it: we’re going to install Deno (about time, don’t you think?).
Installing Deno is actually very straightforward; depending on your operating system, you’ll need one of the following options. Either way, they’re just one-liners that pull the binaries from different places (or install from source code in some cases):
If you’re a Mac user:
If you’re a Windows user:
If you’re a Linux user:
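The one-liners themselves looked like this around the 1.0 release (assuming the official install scripts under deno.land/x/install; check deno.land for the current URLs):

```sh
# macOS (shell installer, or Homebrew if you prefer)
curl -fsSL https://deno.land/x/install/install.sh | sh
brew install deno

# Windows (PowerShell)
iwr https://deno.land/x/install/install.ps1 -useb | iex

# Linux (shell installer)
curl -fsSL https://deno.land/x/install/install.sh | sh
```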
In the end, the result should be the same: you should be able to execute the deno command from the terminal window of your OS, and that should open the CLI REPL as seen in Listing 1-1.
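That first interaction looks roughly like this (the banner will reflect whichever version you installed):

```sh
$ deno
Deno 1.0.1
exit using ctrl+d or close()
> 1 + 2
3
```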
When designing the new runtime, Ryan tried to solve as many of the original concerns he had with Node as possible, while, at the same time, taking advantage of the latest version of ECMAScript and TypeScript.
In the end, Deno ended up being a secure runtime that is not only compatible with JavaScript but also with TypeScript (that’s right, if you’re a TS fan, you’re in for a treat!).
Let’s cover the basic improvements introduced by Deno.
This is definitely one of the most liked features since the official release, mainly because TypeScript has been gaining more followers within the JavaScript community, especially with React developers, although as you probably know, it can be used with any framework.
So far, using TypeScript as part of your projects required you to set up a build process, which, before execution, turned TS code into JS code so the runtime could take it and interpret it. After all, we all know that JavaScript is the one being executed. With Deno, that is not exactly true; in fact, you have the ability to write either JavaScript or TypeScript code and just ask the interpreter to execute it. If you’re using TS, then internally Deno will load up the TypeScript compiler and turn that code into JavaScript.
The process is essentially the same, but it’s completely transparent for the developer; from your point of view, you’re simply executing TypeScript. And that is definitely a plus; there is no more need for that build process, and the compilation times are optimized inside the interpreter, which means your boot times are as fast as possible.
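As a minimal sketch of what that looks like (the file name and path are made up):

```ts
// greet.ts — no tsconfig, no build step; just run it with `deno run greet.ts`
const greet = (name: string): string => `Hello, ${name}!`;
console.log(greet("Deno"));

// First run (Deno 1.0 reports the compilation step before executing):
//   $ deno run greet.ts
//   Compile file:///path/to/greet.ts
//   Hello, Deno!
```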
Notice the first line shown on the previous code snippet; you can see how the first thing Deno is doing is compiling your code into JavaScript without you having to do anything.
Did you notice how sometimes when you install an app on your phone, you get asked for permissions when they try to access the camera or a particular folder inside the disk? That is to ensure you’re not installing an application that attempts to access sensitive information without you knowing about it.
With Node, the code you execute is not under your control. In fact, we normally tend to blindly trust the modules that are uploaded to npm, but how can you be sure they actually do what they say they do? You can’t! Unless, of course, you inspect their source code directly, which is not a realistic approach for big modules with tens of thousands of lines of code.
The only layer of security protecting your data right now is your OS; that would keep a regular user from accessing OS-sensitive data (like the /etc folder on a Linux box), but access to other resources, such as sending requests over the network or reading potentially sensitive information from environment variables, is completely allowed. So you could technically write a Node CLI tool to do something as basic as the cat command does (reading the content of a file and then outputting it to the standard output) and then add some extra code to read your AWS credentials file (if there is one) and send it over HTTP to another server where you receive it and store it.
Listing 1-4 shows exactly what’s happening and how you’re not just accessing the file you wanted but also a file you thought was private.
The code from Listing 1-5 does exactly the same as the Node code before; it shows you the content of the file you’re trying to see, and at the same time, it copies sensitive AWS credentials over to an external server.
As you can see, we can’t even open the file we’re actually trying to see if we don’t directly allow for access to the file.
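To make that concrete, here’s a minimal sketch of such a cat-like tool and the kind of failure you’d see (file names are illustrative):

```ts
// cat.ts — reads the file passed as the first argument and prints it
const [path] = Deno.args;
const contents = await Deno.readFile(path); // requires --allow-read
console.log(new TextDecoder().decode(contents));

// $ deno run cat.ts notes.txt
// error: Uncaught PermissionDenied: read access to "notes.txt" ...
// $ deno run --allow-read=notes.txt cat.ts notes.txt
// (prints the file; nothing else on disk can be touched)
```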
And that is just plain strange; we can get past it, again, by allowing network access, but as a user, why would you need the cat command to access the network interface? We’ll look at more examples like this one and cover all the security flags in Chapter 3.
From the moment Node added support for the async/await keywords, developers everywhere started switching their promise-based approaches to this new mechanic. The problem was every await clause had to be part of an async function. In other words, top-level awaits—the ability to await the result of an async function right there in the main file of your project—were not supported yet.
And to this day, even though V8 has already added support for it, we’re still waiting for Node to catch up, forcing developers to work around this limitation by using IIFEs (otherwise known as Immediately Invoked Function Expressions) declared as async.
But thanks to Deno, that is no longer true; you get top-level await right off the bat with version 1.0. What benefits can you get from this?
For starters, your start-up code can be cleaned up thanks to this. Ever had to connect to the database and start the web server at the same time? Having the ability to await those actions directly from the top level, instead of wrapping them into a function just to have it working, is a definitive plus.
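A hedged sketch of that kind of start-up code (the two helpers are stand-ins for your real database and server modules):

```ts
// Stand-ins for a real DB connection and server boot
async function connectToDb(url: string): Promise<string> {
  return `connected to ${url}`;
}
async function startServer(port: number): Promise<void> {
  console.log(`listening on :${port}`);
}

// Top-level await: no async IIFE wrapper needed
console.log(await connectToDb("postgres://localhost/app"));
await startServer(8080);
```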
That is a much easier syntax than either relying on the one provided by promises or somehow wrapping the entire thing into an async function and then somehow exporting that dependency back to the global space.
This is definitely not one of the big improvements brought to us by Deno, but it’s certainly one worth mentioning, since the Node community has been asking for the ability to do this for quite some time.
JavaScript’s standard library, or even Node’s, has never been something to really be proud of. A few years ago, this was even worse: we developers had to add external libraries that became near-standards, such as jQuery,14 which back then helped everyone understand how AJAX worked and provided several helper methods to iterate over objects, and later Underscore15 and, more recently, Lodash,16 which provide quite a handful of methods for dealing with arrays and objects. Over time, these methods (or similar versions of them) have been incorporated into the standard library of JavaScript.
That being said, there is still a long way to go before we can safely say that we can build something without having to start requiring external modules for the most basic of operations. After all, that has become the standard for Node: a basic building block that requires you to start adding modules in order to have the tools you need.
With that in mind, Deno’s standard library was designed to provide more than just basic building blocks; in fact, Deno’s core team made sure these functions were reviewed and deemed of sufficient quality before releasing them to the public. This is to ensure that you, as a Deno developer, get the proper tools you need, following the internal standards of the language and having the highest code quality possible. That being said, some of these libraries are being released while still under development (with the proper warning messages) in order to get feedback from the community. If you decide to go ahead and try them, you should use them with the proper care and understanding that those APIs might change based on the response they get.
Another interesting bit of information about this set of functions is that, just like the entire module system (which we’ll talk about next), it was highly influenced by Go’s standard library. Although there is no one-to-one relationship between both sides, you can see the influence by looking at the names of the packages or even the function names. And bear in mind, Deno’s standard library is constantly growing; version 1 only contains what the team was able to port over from Go, but the effort is ongoing and future versions will see this list growing.

Figure 1-4. Deno and Go documentation on the printf function
Figure 1-4 illustrates the similarities I’m talking about. Although the Deno function is still under development and its developer is actively requesting feedback, the source for concepts such as “verbs” can be already seen coming from Go’s side.
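To give you a feel for it, here’s what using that printf port looks like (URL left unversioned for brevity; in real code you’d pin a std version):

```ts
import { printf } from "https://deno.land/std/fmt/printf.ts";

// Go-style verbs: %s for strings, %d for integers
printf("%s has been public for %d days\n", "Deno", 42);
```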
The final and probably biggest change introduced by Deno in regard to the Node ecosystem is the fact that it’s dropping the de facto package manager every Node developer has come to hate and love at one point in their careers.
This is not to say Deno is bringing its own package manager; in fact, Deno is rethinking its entire package management strategy for a simpler (and yet again, Go-inspired) approach.
When discussing the problems Deno is trying to solve at the start of this chapter, I mentioned that Ryan considered npm and everything about it to be wrong. On one hand, because it ended up being too verbose to even use (considering how much boilerplate goes into the package.json file); on the other, because he didn’t like the fact that every module resided inside a privately controlled, centralized repository. So he took this opportunity to go in an entirely different direction.
Right out of the box, Deno allows you to import code from URLs just as if they were locally installed modules. In fact, you don’t need to think about installing the modules; Deno will take care of that when you execute your script.
But let’s back up for a second; the whole point behind this improvement over npm and others alike is to have a simpler approach, as I said before: one that resembles Go’s, yes, but also one that is very much like how browsers and front-end JavaScript work. If you’re working on some front-end JavaScript, you don’t manually install your modules; you just require them using the script tag, and the browser takes care of going out to look for them and not only downloading them but also caching them. Well, Deno is taking a very similar approach.
You can keep importing local modules. After all, your files are also considered modules; that hasn’t changed, but all third-party code, including the standard library officially provided by Deno, will be available online for you to import.
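For example, here’s a simple script doing exactly that (the URL points at Deno’s std library, but any reachable module URL works the same way):

```ts
// example.ts — no install step: Deno downloads and caches this on first run
import { green } from "https://deno.land/std/fmt/colors.ts";

console.log(green("imported straight from a URL"));
```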

Figure 1-5. Output from the first execution of a TypeScript file importing an external module
Figure 1-5 shows the output from executing that simple script. As you can appreciate, we’re seeing two actions taking place before the execution of our code happens. This is because Deno is first downloading and caching the script. As a second step, it is also compiling our TypeScript code into JavaScript (this step wouldn’t have happened if the example was a .js file), so it can finally execute it.
The best part about this process is that the next time you execute the code, it will run directly, without having to download or compile anything, since everything is cached. Of course, if you decide to change the code, then the compilation step needs to happen again.
As for the external module, since it’s now cached, you won’t have to download it again unless, of course, you specifically tell the CLI tool to do so. Not only that, but any other new project you start working on from now on that needs to import that same file will be able to pick it up from the shared cache. That is a huge improvement over having several huge node_modules folders throughout your hard drive.
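The flag for forcing that re-download is --reload; a couple of hedged examples:

```sh
# re-download every dependency of a script, ignoring the cache
deno cache --reload example.ts

# or only re-download modules coming from a specific origin
deno run --reload=https://deno.land example.ts
```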
I’ll go into more detail about this subject in Chapter 4, so just keep in mind we’re now free of the black hole that is the node_modules folder, and we no longer need to worry about package.json. I’ll also go over how to deal with the lack of a centralized module registry and the different patterns that have developed around it.
You’re now up to date in regard to why Deno was initially created and what kind of problems it is trying to solve. If you’re a Node developer, you should already have enough information to start playing around with the online REPLs or even with the CLI REPL that installs with Deno.
However, if you’re coming from other languages or even from the front end, hang tight and keep reading, since we’re going to be doing a quick intro to what TypeScript is and how you can use it with Deno for back-end development.
Given that TypeScript is the language of choice for Deno’s creator and that he took advantage of the fact that this was a brand-new project to add native support of it, I deemed it convenient to have a full chapter dedicated to it. If you’re new to TypeScript, here you’ll learn the basics of it, everything you’ll need to understand the code samples that will follow in the next chapters. If, on the other hand, you’re already versed in this language, then maybe just skip ahead to Chapter 3 or read through this one for a quick refresher on the key concepts before moving forward.
And that is where TypeScript comes into play; you’re essentially writing JavaScript code with an added layer giving you static type checking and an improved development experience, thanks to type definitions that can be picked up by code editors to provide you with full IntelliSense.
Up until this point, the way you’d work with TypeScript is you’d set up a build process using some automation tool, such as webpack1 or, back in the day, Gulp2 or Grunt.3 Either way, that tool would let you create a process to transpile your code (in other words, to translate it into JavaScript) before it could be executed. This is such a common task that there are already tools that automatically configure that process for you when you start a new project. For example, take the create-react-app4 application, designed to help you create a new React project; if you choose to use it with TypeScript, it’ll set up that transpilation step for you.
As I mentioned before, TypeScript tries to expand the notion of types that you carry from JavaScript into something more fleshed out, like what you’d get in a language such as C or C#; this is why, when it comes to picking the right type for your variables, it’s important to know the full extent of what TypeScript has to offer you.
Some of the types available to you are the ones coming from JavaScript; after all, if they’re already defined, what’s the point of reinventing the wheel?
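A minimal sketch of the kind of script being discussed (variable names are made up):

```ts
let total: number = 100;
let userName: string = "Fernando";

// This is the problem: a string assigned to a number
total = "one hundred"; // Error: Type 'string' is not assignable to type 'number'.
```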
Of course, you can’t execute the code from that script. TypeScript won’t let you compile it into JS, and it makes sense; you literally specified a type for your variable and then went on to assign it a value of another type.
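As for arrays, TypeScript gives you two equivalent syntaxes; a sketch:

```ts
const scores: number[] = [10, 20, 30];   // bracket syntax
const grades: Array<number> = [1, 2, 3]; // generic syntax

// scores.push("forty"); // Error: Argument of type 'string' is not
//                       // assignable to parameter of type 'number'.
```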
The code is clearly declaring an array of numbers, and if you try to add anything that’s not a number, you won’t be able to compile it.
The end result is the same, so it’s really up to you to decide which one to use.
Declaring any of the other types is just as straightforward and really nothing too complicated, so let’s go into the good stuff: the new types you get thanks to TS.
Other than the basic and already known types inherited from JavaScript, TypeScript provides other, more interesting types, such as tuples, enums, the any type (which we’ll cover in a second), and void.
These extra types, coupled with other constructs which we’ll see in a minute, help in providing a better experience for developers and a more robust structure to your code.
We’ve already covered arrays, and tuples are very similar, but unlike arrays, where you have the ability to add an unlimited number of elements of the same type, tuples allow you to predefine a limited number of elements, but you can pick their type.
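A quick sketch:

```ts
// Exactly two elements: a string first, then a number
let entry: [string, number] = ["age", 38];

// entry = [38, "age"]; // Error: both the order and the types are fixed
```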
This is where TypeScript shines because it provides checks in places you’re not inclined to mentally check.
While tuples are a rehash of an old concept (i.e., arrays), enums are a completely new concept to JavaScript. Even though you could’ve created them using vanilla JS code, TypeScript now gives you an added construct that you can use to define them.
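Here’s a sketch of the construct, using the Direction example discussed next:

```ts
enum Direction {
  Up,    // 0
  Down,  // 1
  Left,  // 2
  Right, // 3
}

let heading: Direction = Direction.Left;
```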
And now you can see how TS is helping you write code that makes sense and using fewer keywords. Basically, here Direction.Up has the value of “0”, Direction.Down a value of “1”, and the rest keep going up by one.
As you can see, TypeScript will auto-assign values to your constants, so it makes very little sense to force a custom value on them unless your logic requires it.
You can see how once the variable’s been defined with the enum as its type, you can only assign one of the values of that enum to it; otherwise, you’ll get an error like the one seen earlier.
You’ll be getting an error like that because the TS compiler is noticing your IF statement is covering all possible options of the value of variable x (this is because it was defined as an enum, so it has that information available).
One of the main things TypeScript adds to the language is static type checking along with several new types that enhance the experience of developers. And so far, I’ve shown you how to define variables of one particular type. This is great when you have control over that; when you know exactly the type of data you’ll be dealing with, defining types helps quite a lot.
The problem with that approach? With an inherently dynamic language, you don’t always know the data types you’ll have to handle, and one example of that is arrays. JavaScript by default allows you to define a dynamically sized array where you can add anything you need. And that is a very powerful tool. TypeScript, however, forces us to declare the type of the elements of every array we define, so how do we mix both worlds?
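The answer is TypeScript’s any type; a minimal sketch:

```ts
// `any` opts these variables out of static type checking
const mixedBag: any[] = [1, "two", { three: 3 }];
mixedBag.push(false); // no compiler complaint

let anything: any = 42;
anything = "now a string"; // also fine
```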
This type is great, but you have to be careful with how you use it, because if you abuse it, you’re literally ignoring one of the main benefits of the language: static type checking. One of the two main use cases for the any type is when you’re mixing JavaScript and TypeScript, because it allows you to take advantage of some of the benefits of the latter without having to rewrite the whole JS code (or potentially, the logic behind it). The other use case is when you honestly can’t tell the type of the data you’ll be using before you access it; otherwise, it’s advisable to actually declare the type and let TypeScript check it for you.
All the types I’ve mentioned so far will not allow you to assign null as a valid value to them—that is, of course, all of them except the any type; that one will let you assign anything to the variable.
So how do you tell TypeScript to let you assign null to your variables as well? (In other words, how can you make them nullable?) The answer is by using union types.
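A sketch of a nullable number and a nullable string:

```ts
let count: number | null = null;  // either a number OR null
let label: string | null = null; // either a string OR null

count = 10;    // fine
label = "ten"; // fine
// count = "10"; // Error: neither a number nor null
```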
The preceding code shows how you can make that union of types, by making use of the | character. You can also read it as an “OR” operator for types like “either number OR null” or “either string OR null.”
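Type aliases can also describe function signatures; a sketch (the alias name is made up):

```ts
// A function that takes a string and must return a string
type Formatter = (name: string) => string;

const shout: Formatter = (name) => name.toUpperCase(); // complies
// const bad: Formatter = (name) => 42;            // fails: wrong return type
// const worse: Formatter = (name, extra) => name; // fails: extra parameter
```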
Whether you return something other than a string or add extra parameters, the type definition is strict, so either will fail.
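Aliases can also name a union of literal values; a hypothetical example with three of them:

```ts
type Size = "small" | "medium" | "large";

let shirt: Size = "medium"; // only one of the three literals is allowed
// shirt = "huge"; // Error: not part of the union
// Size.small;     // Error: unlike a proper enum, there are no members to reference
```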
That means you can then assign that type alias to a variable, and that variable will only be able to have one of those three values assigned. This works as a literal enum of sorts: you can use the values directly, but you can’t reference its members the way you can with proper enums.
With the basics of types out of the way, we can get into other new constructs that will be making our development experience a breeze. In this case, we’ll cover classes and interfaces.
It’s important to note that by adding the concepts we’re about to see, TypeScript is proposing a more defined and mature version of the object-oriented paradigm that vanilla JS follows. That being said, there is nothing written anywhere that says you should follow it too when using TS; after all, this is just another flavor of JavaScript, and thus you can also take advantage of its innate functional programming capabilities.
In a similar way to types, interfaces allow us to define what I like to call “types for objects.” Of course, that is a one-line definition of the concept, which is leaving a lot of other important things out.
That being said, with interfaces in TypeScript, you can define the shape your objects are going to have without having to implement a single thing, and that is what TS is using to check and validate assignments.
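A minimal sketch (the interface and property names are made up):

```ts
interface IUser {
  name: string;
  age: number;
}

// The object literal must match the declared shape
const user: IUser = { name: "Ada", age: 36 };
// const bad: IUser = { name: "Ada" }; // Error: property 'age' is missing
```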

Figure 2-1. Autocomplete working for a function’s parameter thanks to the interface being defined
Figure 2-1 shows, in fact, one of the added benefits of having interfaces: IntelliSense is fully aware of the shape your objects will have without having to implement their classes (which, one would assume, would require you to also implement method logic).
When working on modules, be it for internal or public consumption, instead of just exporting the relevant data structures, you can also export interfaces to provide developers with shape information about parameters and return types for your methods, without having to overshare sensitive data structures that could potentially be modified and tampered with.
But this is not all that interfaces can do; in fact, that is just the beginning.
A very interesting behavior that interfaces in TypeScript allow you to define is the fact that some properties on your objects will always need to be present, while others might be optional.
This is a very JavaScript thing to do, since we never really had to care about the structure of our objects; it was completely dynamic. In fact, that was one of the beauties of the language, and TS can’t really ignore it, so instead, it provides us with a way to give structure to the chaos.
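The syntax for optional members is a question mark after the property name; a sketch using the IMyProps and city names referenced next:

```ts
interface IMyProps {
  name: string;
  city?: string; // optional: may be missing
}

const withCity: IMyProps = { name: "Fernando", city: "Madrid" };
const withoutCity: IMyProps = { name: "Fernando" }; // also valid
```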
That will tell TS’s compiler to not error out if the city property is missing whenever assigning an object to a variable you declared as IMyProps.
Another interesting definition you can add to your properties is the fact that they’re read-only; just like using const with variables, you can now have read-only properties whenever you need them.
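A sketch:

```ts
interface IPoint {
  readonly x: number;
  readonly y: number;
}

const origin: IPoint = { x: 0, y: 0 }; // initialization is fine
// origin.x = 1; // Error: Cannot assign to 'x' because it is a read-only property.
```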
The preceding example shows you how you can’t really modify the properties once initialized if they’re marked as “readonly.”
Interfaces are also known as contracts for functions: they not only allow you to define the shape an object can take, but you can also use them to define the contract a function needs to follow.
This is especially useful when dealing with callback functions, when you need to make sure the right function with the right parameters and the right return type is passed.
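Here’s a sketch of such a contract, using the Greeter and asyncOp names referenced next:

```ts
// The contract: any Greeter takes a string and returns a string
interface Greeter {
  (name: string): string;
}

function asyncOp(callback: Greeter): void {
  // simulate async work, then use the callback
  setTimeout(() => console.log(callback("world")), 100);
}

asyncOp((name) => `Hello, ${name}!`); // complies with the contract
// asyncOp((n: number) => n);         // Error: does not match Greeter
```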
Notice how the asyncOp function can only take a Greeter function as a parameter; there is no way for you to pass a valid callback that doesn’t comply with the contract specified by the interface.
Since the approval of ES6, JavaScript has incorporated the concept of classes into the language, although they’re not exactly your standard OOP classes with your method overrides, your private and public properties, abstract constructs, and what not. Instead, the current state of classes in JavaScript only allows you to group properties and functions into a single entity (a class) which you can instantiate later on. It’s more syntactic sugar over the good old prototypal inheritance model than an actual new way of handling and working with objects.
TypeScript, however, takes that to the next level, trying to provide a more robust model for you to actually build an object-oriented architecture with.
Now, thanks to TS, we can do more interesting things such as declaring private properties or private methods, implementing interfaces, and more; let me show you.
Like many OOP-based languages, TypeScript provides three different visibility modifiers for your class properties and methods. Let’s quickly review how you’d implement those here.
This is the classic one, the one everyone was requesting from JavaScript and the one ES6 failed to provide when it included classes, since there is no visibility modifier in the language at this point (that is, of course, in the currently released version of JavaScript, but a proposal has already been approved for the next version).
TypeScript, however, implemented two versions, one following the classic standards using the private keyword and one following the way it’ll be implemented in the next version of ECMAScript, using the # character.
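A sketch showing both flavors (class and member names are made up):

```ts
class Account {
  private balance = 0; // TypeScript's classic `private` keyword
  #pin = 1234;         // the ECMAScript-style private field

  deposit(amount: number): void {
    this.balance += amount; // both are accessible from inside the class
  }
}

const account = new Account();
account.deposit(10);
// account.balance; // Error: 'balance' is private
// account.#pin;    // Error: private fields aren't visible outside the class
// Note: neither one is accessible from a subclass either
```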
If you’re actually looking for that kind of behavior, then you’ll have to work with protected properties.
The protected modifier allows you to hide properties and methods from the outside world, just like the previous one, but you can still access them from within derived classes.
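A sketch:

```ts
class Base {
  protected shared = "visible to subclasses"; // hidden from the outside
}

class Derived extends Base {
  reveal(): string {
    return this.shared; // OK: derived classes can access protected members
  }
}

console.log(new Derived().reveal());
// new Derived().shared; // Error: 'shared' is protected
```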
The preceding code works, and you’re able to share properties between different classes without making them public and accessible to anyone.
You can also add a special kind of property called “accessors,” which allow you to wrap a property with functions and define different behaviors for assignment and retrieval operations. These accessors are also known as getters and setters in other languages.
Essentially, these are methods that give you the ability to work with them as if they were the actual property they’re wrapping. You can always create one or two methods to do the same thing, but you need to address them like normal methods.
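A sketch using the side and area properties discussed next:

```ts
class Square {
  private _side = 0;
  area = 0;

  get side(): number {
    return this._side;
  }

  set side(value: number) {
    this._side = value;
    this.area = value * value; // extra logic runs on every assignment
  }
}

const sq = new Square();
sq.side = 4;          // reads like a plain property assignment...
console.log(sq.area); // ...but area was kept in sync: 16
```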
Notice how I added extra logic around the assignment of the side property; now instead of just assigning a value to a property, I’m also updating the area value. This is the main benefit of using accessors; you’re keeping your syntax clean while adding extra logic around the action. They’re especially useful for adding validation logic on assignments or side effects, like I just showed you. For retrieval operations, you can also add a default behavior, like returning a 0 if your numeric property is not yet set. Imagination is the limit; just make sure you take advantage of them.
The last bit about classes I wanted to cover is these two: the static and abstract modifiers. They are what you would expect if you’re already familiar with these concepts from the OOP world, but just in case you’re not, we’ll do a quick overview of them here.
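Starting with static, here’s a sketch built around the origin property mentioned next:

```ts
class Point {
  // one shared value for the whole class, not one copy per instance
  static origin = { x: 0, y: 0 };

  constructor(public x: number, public y: number) {}

  static distanceFromOrigin(p: Point): number {
    // no `this` in static methods: reference the class name instead
    return Math.hypot(p.x - Point.origin.x, p.y - Point.origin.y);
  }
}

console.log(Point.distanceFromOrigin(new Point(3, 4))); // 5
```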
Notice how the origin property is used here; since the origin is the same for all instances, it makes no sense to have different instances of the same property being created every time you instantiate a new object. So instead, by declaring it as static, you’re making sure only one version of this property exists. The only catch is you need to reference it by using the class name; that’s all.
And the same reasoning applies to static methods; they contain logic that is of interest to all instances of the class, but the catch here is you can’t access the this keyword from within it, since there is no instance to reference.
Abstract classes, however, are a whole different animal; they are used to define behavior that has to be inherited by other classes but can’t be instantiated directly. That is actually quite close to the definition of interfaces, although these are limited to only defining the signature of all methods, while abstract classes actually provide implementations that can be inherited and used.
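A sketch with the Geometry and Square pair referenced next:

```ts
abstract class Geometry {
  abstract area(): number; // signature only: subclasses must implement it

  describe(): string {
    return `area: ${this.area()}`; // real, inheritable behavior
  }
}

class Square extends Geometry {
  constructor(private side: number) {
    super();
  }
  area(): number {
    return this.side ** 2;
  }
}

// new Geometry(); // Error: Cannot create an instance of an abstract class.
console.log(new Square(3).describe()); // "area: 9"
```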
This implementation leaves no doubt about the fact that if you’re using it in your project, you can’t really rely on directly instantiating Geometry; rather, you either rely on the Square class or you create your own and extend Geometry.
These are all optional constructs; of course, you can easily just rely on classes and do everything with the default visibility modifier (i.e., public), but if you take advantage of all these tools TS is providing, you’ll be adding an extra layer of security, making sure the compiler enforces that you and others use your code as originally intended.
The last thing I want to cover as an advanced topic about TypeScript, and something that might come in handy if you’ve decided to go all the way down the OOP rabbit hole, is mixins.
One of the limitations imposed by TypeScript when it comes to class inheritance is that you can only extend a single class at a time. In most normal cases, this is not a problem, but if you’re working with a complex enough architecture, you might find yourself a bit constrained by the language.
Let’s look at an example: pretend you have the need to encapsulate two different behaviors into two different abstract classes, Callable and Activable.
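A sketch of those two classes, plus the limitation we run into:

```ts
abstract class Callable {
  call(): void {
    console.log("calling...");
  }
}

abstract class Activable {
  active = false;
  activate(): void {
    this.active = true;
  }
}

// The limitation: a class can extend only one parent at a time.
// class MyClass extends Callable, Activable {} // Error: Classes can only extend a single class.
```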
In order to fix this, we can do a simple (and really nasty) workaround, which is to chain the inheritance. Or in other words, make Callable extend Activable and MyClass extend Callable. That would definitely solve our little problem, but at the same time, it would force Callable to always extend Activable. This is a very bad design pattern and one you should avoid at all costs; there is a reason why you wanted to have both behaviors separate, so it makes no sense to force them together like that.
Declaration merging: a very strange and implicit behavior you need to be aware of
Interface class extension: meaning interfaces in TS can extend several classes at the same time, unlike classes themselves
Add the method signatures from the parent classes to our derived class.
Iterate over the methods of the parent classes, and for every method that both the parent and the derived class have, manually link them together.
I know it sounds complicated, but actually it’s not THAT hard; all you have to remember is how to achieve both of those points and you’re done.
And in order to understand what’s going on, let’s start by number two first:
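The helper usually looks something like this (it’s essentially the mixin pattern from the official TypeScript documentation):

```ts
function applyMixins(derivedCtor: any, baseCtors: any[]): void {
  baseCtors.forEach((baseCtor) => {
    // walk each parent's prototype...
    Object.getOwnPropertyNames(baseCtor.prototype).forEach((name) => {
      // ...and copy every method definition onto the derived class
      derivedCtor.prototype[name] = baseCtor.prototype[name];
    });
  });
}
```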
The preceding function is just iterating over the parent classes and, for each one, iterating over its list of properties and defining those properties into the derived class. Essentially, we’re manually linking all methods and properties from the parents into the child.
Notice how when we have to deal with the inner workings of classes, we’re actually referencing the prototype chain directly. This is a clear sign that the new class model of JavaScript is, like I mentioned before, more syntactic sugar than anything else.
The MyClass definition is now only a single class definition that is not really extending anything.
I’ve added a new interface definition, with the exact same name as the class we’re creating. This is crucial because this interface is extending both abstract classes, thus merging their method definition into a single construct (the interface) which, at the same time, is getting merged into the class definition because they have the same name (i.e., declaration merging,6 meaning interfaces can be merged into classes—and other constructs—if they have the same name).
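Putting it all together with the Callable and Activable classes and the applyMixins helper from before, a sketch:

```ts
class MyClass {} // note: no extends clause at all

// Same name as the class: declaration merging folds the signatures
// of both parents into MyClass's type.
interface MyClass extends Callable, Activable {}

applyMixins(MyClass, [Callable, Activable]);

const instance = new MyClass();
instance.call();     // "calling..."
instance.activate();
console.log(instance.active); // true
```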
This code would result in our expected output. Remember, now that you’ve endured the process of understanding how mixins work, I’ve given you a completely reproducible formula you can use for all your classes. Just copy and paste the function, and remember to properly declare the interface and you’ll be done!
This is where I stop talking about TypeScript in this book. I've already covered everything you needed to know about it in order to continue moving forward learning about Deno. If you liked TypeScript and want to know more about it, I encourage you to check out their official documentation. After all, there are some aspects of the language I haven’t mentioned, not because they’re not really useful but, rather, because this chapter was meant as an introduction to the language, rather than as a full guide to it.
With the understanding of how types and the OOP model of TS work, you can keep reading without fear of not understanding what I’ll be talking about.
The next chapter is going to cover how security works on Deno and why so much effort was put into making it an explicit thing for devs to worry about. See you on the next page!
It’s time now to talk about one of the new features incorporated by Deno, something that Node.js never tried to tackle and that would’ve prevented some of the major issues npm has had: security.
Even though there haven’t been that many, we’ve seen a few security issues creep up over the years with npm, most of them related to the fact that any code executed with the Node runtime automatically has the same security privileges as the user executing the script.
In this chapter, we’ll see how Deno attempts to solve this problem by forcing users to specify which permissions to add.
Instead of leaving it up to the OS to care for the security of the scripts being executed, Deno is forcing users to directly specify which privileges they want their scripts to have.
This is not a new practice; in fact, if you own a mobile phone, you’ve probably seen an alert asking you for permission to access your contacts or your camera or other parts of your system when installing or executing a new app for the first time. This is done specifically so you, as a user, are aware of exactly what the application is trying to do, and this lets you decide if you want it to access it or not.
Here, Deno is doing exactly the same, forcefully asking you to allow (or deny) access to different features (such as reading from the disk or accessing the network interface).
There are currently seven subsystems you can allow or deny access to for your Deno scripts, and they range from something as basic as reading from the disk, through enabling access to the network interface in order to send outgoing requests, up to more complex features, such as getting a high-resolution time measurement.
As a back-end developer, I think I can already hear some of you asking: “wait, do I really need to remember to allow my back-end service to access the network interface? Isn’t that kind of basic at this moment?”
Well, yes and no, to be honest. While it is true that if you use Deno the same way you were using Node, developing back-end services will be a big part of your effort, you might also be using Deno for other tasks, and here is where Ryan and the team behind it decided to choose security over developer comfort.
And don’t get me wrong, I don’t say that in a bad way. The trade-off to me is a small one; all you have to do, as the developer of a microservice (to give an example), is remember to add a certain flag to the start-up line of your script. In return, you are fully aware you added that permission because you needed that access. Whoever is executing that same service somewhere else will see that flag and will automatically know it requires network access.
Now, take that same example, but think about a simple automation script someone else might’ve published—maybe something along the lines of what Grunt1 or webpack2 would do. But now you notice that in order to execute them, you also need to provide them with access to your network interface; wouldn’t that raise a flag in your head? If they’re tools that exclusively work on your local disk, why would they need that kind of access? And that’s exactly the type of question Deno is trying to have you ask yourself, to avoid security issues that can easily be prevented. Think about these flags as a type system for security. Just like TypeScript is able to prevent a lot of bugs simply by forcing you to use the right types all the time, these flags will help avoid a lot of security issues in the future.
Are they the ultimate security solution? Of course not, just like TypeScript isn’t the ultimate tool to get rid of bugs. But they both help in avoiding the simple mistakes that can lead to big problems.
It’s time now to have a closer look at the flags in question and understand what each of them does and when you should or shouldn’t use them. And although, like I said before, there are seven subsystems you can restrict or allow access to, there are, in fact, eight flags for you to use, and I’ll explain why in a second.
The first one I want to cover is that extra flag I mentioned. The point of this flag is not to allow access to one particular subsystem, but instead to basically disable all security measures.
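That flag is --allow-all, with -A as its shorthand:

```sh
# grants every permission at once: no security checks whatsoever
deno run --allow-all script.ts
deno run -A script.ts # shorthand
```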
As you can probably guess, this is not a flag you should be using unless you know exactly what you’re trying to do. Adding flags to the execution line of your scripts is not an expensive or time-consuming task, so think about the trade-offs before you decide to use this one.
This will effectively disable every ounce of security you could’ve expected to have been provided by the runtime, or to go back to my TypeScript analogy, this would be like using the any type everywhere. Just make sure that if you’re using it, you have a very good reason to.
The issue here is that literally anyone with access to a system can set environment variables, and there is a lot of information stored there that can potentially be misused. For example, the AWS CLI tool expects several environment variables to point to files with sensitive data, such as AWS_SHARED_CREDENTIALS_FILE, which should indicate where your secret AWS credentials are stored. Now think what an attacker would be able to do by adding a little bit of code to access those variables and read the files (or the data they contain). This is definitely information you don’t want others to know unless they have to, and that is why Deno is limiting access to it.
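The flag that guards this subsystem is --allow-env; a sketch using the variable named earlier:

```ts
// env-demo.ts — run with: deno run --allow-env env-demo.ts
const credsPath = Deno.env.get("AWS_SHARED_CREDENTIALS_FILE");
console.log(credsPath ?? "variable not set");
```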
This flag will allow your scripts to both read and write into environment variables, so make sure you’re giving that kind of access to code you trust.
High-resolution time measurement is actually something that can be used in several types of attacks, especially those dealing with cryptography in order to gain information about secured targets.
But at the same time, it’s a great tool to use when debugging or even trying to optimize your code, especially in critical systems where performance is a big issue. This is why you need to consider this flag, especially because its effects aren’t exactly like the others; let me explain.
With other flags, if you don’t allow a particular feature, you get an uncaught PermissionDenied exception and the execution ends. That is a very clear sign that you either need to add permissions to your script or that the script you’re executing is doing something you weren’t aware of.
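HRTime is the exception: without the flag you still get a time, just a coarser one. A sketch of such a measurement (the file name is made up, and the read itself also needs --allow-read):

```ts
// timing.ts — run with: deno run --allow-read [--allow-hrtime] timing.ts
const start = performance.now();
await Deno.readFile("some-file.txt");
const elapsed = performance.now() - start;
console.log(`Reading this file took: ${elapsed} ms`);
```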
Now, if you execute Listing 3-1 without the proper high-resolution flag, you’d get something like "Reading this file took: 10 ms"; however, if instead you add the --allow-hrtime flag, the result changes to "Reading this file took: 10.551857 ms".
The difference is considerable, but it only matters if you need a high level of detail; otherwise, you’re good with the default behavior.
This one is big, mainly because access to the network interface is both an often required feature and a very open security hole. With permissions to send requests out, an ill-intended script can send information out without you knowing anything about it, and yet again, what kind of microservice would you be able to create if you can’t send and receive HTTP requests?
Worry not, there is a way around that conundrum: allow-lists.
So far, the flags I’ve shown you have been direct boolean flags; you use them to either allow or disallow something. However, some of the flags we have yet to cover (this one included) also allow you to provide a list as part of the allow flag. This creates a whitelist of elements you allow the particular feature for, and anything that falls outside of it is automatically denied.
You can use these flags without the list of course, but given how basic of a resource some of these are, you’ll more than likely find yourself having to allow them almost all the time.
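For example (the domains are illustrative):

```sh
# only these two hosts can be reached; any other request is denied
deno run --allow-net=api.mydomain.com,deno.land my-service.ts
```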

Error while using a script that sends information to a non-whitelisted domain
Had we not created the whitelist for the flag, executing the script would’ve ended in a seemingly normal execution, while our data was quietly being sent out, so remember to always whitelist your domains if you can.
Although an experimental feature, plugins allow users to extend the interface of Deno using Rust. Right now, because this is not a finished feature, the interface is constantly changing, which is why there is not a lot of documentation available either. Plugins are definitely a very advanced topic at the moment, one only meant for developers interested in playing with experimental features.
However, if, by any chance, you’re one of those developers trying to play around with plugins, you’ll need a particular flag: --allow-plugin.
Without it, your code will not be able to use the external plugins, so remember it! The fact that by default you can’t really mess with the language is also a benefit; this means you can’t be fooled into using a third-party extension with malicious intent that imports an unwanted plugin without you knowing about it.
That’s right, two of the most basic operations you can perform in your code are reading a file and writing into one, and as you might’ve already gathered from the examples shown so far, by default, you’re not allowed to do either.
And it makes sense if you think about it; reading from the disk of the host computer can be a dangerous operation if you combine it with other permissions, such as reading environment variables (like I’ve shown already). And writing into its disk is even worse; you can do pretty much anything if you’re not restricted. You can overwrite important files, leave part of your malicious code inside the computer, and more; your imagination is the limit here, really.
The catch, though, is that since allowing your scripts to perform one of these actions wholesale is too wide a permission, you can provide a whitelist to enable reading and writing, but only from (and to) a predefined list of folders (or even files).
This is a particularly useful feature if your code, for example, reads configuration options from a config file. In a case like that, you can give specific read access to that file and nothing else, providing an extra level of comfort to whomever needs to use your code that it will not read anything it’s not supposed to.
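Such an execution line could look like this (paths are illustrative):

```sh
# read access to a single config file and nothing else on disk
deno run --allow-read=./config.json my-service.ts
```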
If you, as an external user, see that execution line, you can rest assured that whatever is the script doing, it’s not trying anything funny on your system.
Spawning subprocesses is a useful task if you intend to do things such as interacting with other OS commands; the problem, though, is that the concept itself is very dangerous from a security point of view.
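For reference, spawning looks like this in the Deno 1.x API (the command itself is illustrative, and running it requires --allow-run):

```ts
// spawn.ts — run with: deno run --allow-run spawn.ts
const proc = Deno.run({ cmd: ["whoami"], stdout: "piped" });
const output = await proc.output(); // collects stdout and closes it
console.log(new TextDecoder().decode(output));
proc.close();
```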
The script from Listing 3-2 is proof that you need to use the --allow-run flag with care; otherwise, you could potentially be allowing a privilege escalation incident inside your computer without knowing.
After reviewing all the security flags you can and need to use in order to make your scripts work, a potential new pattern for the back end can be seen: checking for available permissions or, as I like to call it, CAP.
The point of CAP is that if you keep working like you have so far for your back-end projects, the moment someone tries to execute your code without enough permissions, the entire application will collapse. With the exception of HRTime, Deno does not gracefully degrade when you lack the privileges to access one of the other features; it directly throws exceptions of type PermissionDenied.
What if, instead of just exploding, your code could check whether you’ve actually been granted the permissions before trying to execute the code that requires them? Of course, there might be cases where you won’t be able to do anything without them, and you’ll have to stop the execution, but in others you might be able to gracefully degrade the logic into something capable of still functioning. For example, maybe you haven’t been granted write permissions, so your logger module just outputs everything to STDOUT. Perhaps ENV access wasn’t provided, yet you can try to read those values from a default config location.
As it currently stands, the code required for this pattern to work is experimental, and it could potentially change in future updates, so you’ll have to use the --unstable flag to execute it. I’m referring, of course, to the API inside Deno.permissions, which I already briefly showed in Listing 3-2.
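A sketch of the pattern, degrading a logger to STDOUT when write access is missing (the file name is made up):

```ts
// logger.ts — run with: deno run --unstable [--allow-write] logger.ts
const { state } = await Deno.permissions.query({ name: "write" });

if (state === "granted") {
  await Deno.writeFile(
    "app.log",
    new TextEncoder().encode("logging to disk\n"),
  );
} else {
  console.log("no write access granted, logging to STDOUT instead");
}
```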

Requesting permission from the user
You can even add the extra parameter to verify if a particular place or resource within that group is accessible. Remember we’ve seen that the permissions that currently support whitelisting are read, write, and net.
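For example:

```ts
// is read access granted for this specific file?
const status = await Deno.permissions.query({
  name: "read",
  path: "./config.json",
});
console.log(status.state); // "granted", "prompt", or "denied"
```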
Although the --allow-net flag doesn’t require you to specify the protocol part of the URL when whitelisting domains, here, in order to request access to them, you’ll have to provide a full URL; otherwise, you’ll get an error.
Whatever you answer in the first question will later be returned for both resources.
Finally, the last thing the permissions API is letting you do is revoking your own access to a particular resource.
The same object can be provided as an argument, just like with the other two methods, and the result is the removal of access to a resource you could’ve been given access to. Although a bit contradictory, it might be of use if you’re building automation code that needs to react to configuration changes, or maybe some kind of process management system that needs to grant and revoke permissions to different services, or even to overwrite whatever permissions the script has been given from the command line.
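A sketch of revoking your own env access:

```ts
// after this call, env access is gone even if --allow-env was used
await Deno.permissions.revoke({ name: "env" });

try {
  Deno.env.get("HOME");
} catch (e) {
  console.log("as expected:", e.name); // PermissionDenied
}
```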
That last part is important, since both the request and the revoke method will override whatever flags were used during execution.
It doesn’t matter if you’re using the --allow-env flag when calling the script from Listing 3-8; you’re not going to access that environment variable.
Security is definitely a big issue when building software that others will use, in order to provide that extra layer of “peace of mind” to them, and also when you’re on the other side of the fence, using software built by others.
And although the security flag mechanics might seem a little clumsy or awkward to a back-end developer who’s never had to worry about that before, they provide a tried and tested approach that, combined with CAP (oh yes, I’m going with my name here), gives a fairly good user experience.
In the next chapter, we’ll see how Deno changes the game of dependency management by simply getting rid of everything and going back to the basics, so see you in the next chapter!
This is, arguably, the most controversial change that Deno has introduced into the JavaScript-for-the-back-end landscape: the lack of a package manager. Let’s be honest—this is not about them dropping support for NPM, which, if you don’t know about it, is the de facto package manager for Node.js. This is about them dropping the entire concept of a package manager altogether and letting back-end developers handle dependencies like browsers do.
Is this a good approach? Will it break the entire ecosystem and make the Deno community collapse? I’m not telling you now; you’ll have to read and see for yourself!
There is a lot to unpack in this chapter, so let’s get to it, shall we?
First things first: external modules are still a thing, and just because there is no package manager, it doesn't mean they'll go away; you still have to deal with them, somehow. This is not just about your own external modules; after all, any self-respecting language (or rather runtime, in this case) can't expect developers to reinvent the wheel every time they start a new project. Externally developed modules exist, and you should take advantage of that fact.
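The import statement itself looks just like it does on the front end; something along these lines (the URL here is purely hypothetical):

```typescript
import { functionname } from "https://example.com/package-url/mod.ts";
```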
with functionname being one of several things, depending on what you need to extract from the module, and package-url being a fully qualified URL or local path to one file, including its extension. That’s right; Ryan, the creator of Deno and Node, decided to drop that little syntactic sugar cube he had given us back in the Node days, because now you can actually directly import TypeScript modules.
You read that right. Thanks to the fact that TS is now a first-class citizen in Deno land, you no longer have to worry about compiling your modules in order to import them; you just link to them directly and Deno’s internals will take care of the rest.
As for the functionname being imported, there are several ways of writing it, again, depending on what you’re looking for and how the module is exporting its functions.
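Here are a couple of the common variants, sketched against the standard library's path module (the pinned version is just illustrative):

```typescript
// Namespace import: everything the module exports, under one local name
import * as path from "https://deno.land/std@0.65.0/path/mod.ts";

// Named imports: only the specific functions you plan to use
import { dirname, join } from "https://deno.land/std@0.65.0/path/mod.ts";

console.log(path.basename("/tmp/file.ts")); // "file.ts"
console.log(join(dirname("/tmp/file.ts"), "other.ts")); // "/tmp/other.ts"
```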
This allows you to keep the current namespace clean from who knows how many names you could be importing and not using. It is also a great way of letting others clearly understand what you’re expecting to get from the use of that external library.
Notice also how in my previous two examples, I am importing from an external URL. This is something crucial, since it’s the first time a JavaScript runtime for the back end is letting us do this. We’re not referencing a local module with those URLs, but rather something that’s potentially outside of our domain of control, something that someone else published somewhere and we’re now using.
This here is the key to Deno not needing a package manager. It not only allows you to import modules from any URL, but it also caches them locally during your first execution. This is all done automatically for you, so you don’t really need to worry about it.
Now, I know what you’re thinking: “Importing modules from the middle of nowhere? Who’s going to ensure I get the version I need? What happens if the URL goes down?”
They are all very valid questions actually and, in fact, questions we all asked ourselves when the announcement was made, but fear not, there are answers!
If you’re coming from Node.js, then the fact that there is no centralized package repository might sound a bit scary. But if you think about it, a decentralized repository is removing any chances of it being unavailable due to technical problems. And trust me, during the first days of npm, there were times where the entire registry would go down, and if you had to deploy something into production and depended on it, then you were in trouble.
Of course, that is not the case anymore, but it's also true that it is a private repository that could potentially be shut down one day, which would affect every single project out there. Deno, instead, tried to remove that potential problem from the get-go and decided to opt for the browser route. After all, if you've ever written some front-end code, or if you've ever inspected a website's code, you'll have noticed the script tags at the top of the page, essentially importing third-party code from different locations.
Now, so far, this sounds interesting at the least, but consider a big project with hundreds (if not more) of files, which import modules from different locations. What happens then if, for some reason, some of them suddenly change location (maybe they’re migrated into a different server)? Then you’d have to go file by file updating the URLs from the import statements. This is far from ideal, which is why Deno provides a solution.
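That solution is a convention rather than a tool: centralize all your external imports into a single file, usually called deps.ts, and re-export everything from there. A minimal sketch (URLs and the pinned version are purely illustrative):

```typescript
// deps.ts: the one place where external URLs live.
// If a module moves or gets bumped, this is the only file to touch.
export { serve } from "https://deno.land/std/http/server.ts";
export { red, green } from "https://deno.land/std@0.65.0/fmt/colors.ts";
```

Every other file in the project then imports from the local file instead of the remote URL, for example, import { serve } from "./deps.ts";.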
Versioning is also a valid concern here, since when importing you’re only specifying the URL of the file, not really its version. Or are you? Look again at Listing 4-3; in there, you can see the second export statement has a version as part of the URL.
This is how you’d handle versioning in this URL-based scheme. Of course, this is not some obscure feature from URLs or HTTP; this is just about publishing your modules under a URL that has the version as a part of it or using some form of load balancing rule to parse the version from the URL and redirecting the request to the correct file.
There is really no standard or hard requirement for you to implement while publishing Deno modules; all you have to be sure of is to provide some kind of versioning scheme. Otherwise, your users will not be able to lock to a particular one, and instead they’ll always download the latest version, whether it works for them or not.
As you can see, Deno’s packaging scheme is considerably simpler than Node’s, and it’s a very valid attempt at copying an approach that’s been used for years now on the front end. That being said, most back-end languages have a more explicit and arguably convoluted packaging system, so switching to Deno’s if you’re expecting to share your code with others, you’ll have to remember to include the version as part of the URL somehow, or you’ll provide a very poor service to your consumers.
Although that sounds understandable, the question now arises: do you really have to run your own web server and configure it in a way that allows you to add a versioning scheme right into the URL, just so you can serve your Deno modules in a reasonable fashion? No, you don't. In fact, there is already a platform that will do that for you if you allow it to: GitHub.1
In case you’re not familiar with it, GitHub allows you to publish your code and share it with others for free; it works with the version control system known as Git, and it’s pretty much an industry standard in many places. They even have an enterprise version, so you could even be using it for your company’s internal repositories already.
The interesting thing about GitHub is that it publishes your content using a URL scheme that includes the Git tag or the Git commit hash as part of it. And although the commit hash is not as "human friendly" as one would like (i.e., b265e725845805d0c6691abbe7169f1ada8c4645), you can definitely use the tag name as the package's version.

[Figure: List of tags for the sample module on GitHub]

[Figure: Selecting the version you want of the file]

[Figure: Getting the raw URL of our file on GitHub]
Doing this will open a URL similar to https://raw.githubusercontent.com/deleteman/versioned-deno-module/4.0/hello.ts (the 4.0 segment is where GitHub adds the tag name; you can change it to reference other versions without having to change anything else), and you can then use that URL in your code to import the module.
Notice how at the top of the code in Figure 4-3, I’m importing a local file. That file also gets versioned, and thus you don’t have to worry about any local dependencies you might have; they’ll all get correctly referenced if you link to the right version of the main module’s file.
With this process, you’re essentially publishing your Deno modules into a free-to-use CDN that is sure to be available all the time. No need to configure it or pay for anything, just worry about your code and nothing else. In fact, thanks to all other GitHub features, you also gain things like ticket management for when users want to report problems, Pull Request control for when others want to contribute to your modules, and a lot more. Although there are other alternatives out there and you might have your preferred CDN, going with GitHub in this case might be a great way of killing several birds with a single (free-to-use) stone.
A big part of understanding how Deno handles packages’ versions is understanding how to lock them. You see, with any packaging scheme, you’ll want to lock your dependencies’ versions in order to make sure no matter where you deploy your code, you’ll always be using the same code. Otherwise, you could potentially have problems by downloading a new version of a module that has breaking changes when deploying to production.
This is actually a very common case with inexperienced developers thinking that it’s always best to link to the latest version of a package; after all, latest always means “more bugs fixed and more features published.” Of course, this is a very naive and potentially dangerous approach; after all, who knows how the module in question might evolve over time and which features might get removed. A key aspect of a dependency tree is that it needs to be idempotent in the sense that no matter how many times you deploy it, the end result (i.e., the code you get) will always be the same.
In order to achieve this goal, Deno provides the --lock and --lock-write flags. The first flag lets you specify where the lockfile resides, while the second one tells the interpreter to also write all lock-related information to disk. Here is how you use them.
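A minimal sketch of both commands (file names are illustrative):

```bash
# First line: cache the dependencies and write their hashes into the lockfile
deno cache --lock=lock.json --lock-write deps.ts

# Second line: run the script, verifying every cached module against the lockfile
deno run --lock=lock.json my-script.ts
```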
The first thing to notice here is that on the first line, I’m just updating the cache without executing a single line of code. In fact, I’m not even referencing my script file; I’m referencing the dependencies file (deps.ts). The second detail here is that although I’ve already updated the cache, I’m still telling Deno to execute the script with the lockfile, but why?
This is because there is one more thing that can go wrong, and the team behind this lockfile feature also provided you with a way of checking for it: what if the code of the version of the module you’re trying to deploy changed since the last time you used it on your dev environment?
With a centralized module repository that controls everything for you (i.e., à la NPM), that wouldn't be a problem because versions get updated automatically, but that is not the case here. With Deno, we're giving module developers every single ounce of freedom to do whatever they want with their creations, including, of course, updating code without automatically bumping version numbers.
And that, mixed with a cache update operation without the lockfile provided (i.e., a deno cache --reload without using the --lock flag), would result in a local cache that is not exactly like the one you developed with. In other words, the code inside the local cache of the box you've just deployed to is not exactly the same as the one in your local cache, and it should be (at least for the modules you both share).
The error shown in Listing 4-7 clearly states there is an integrity issue with one of the dependencies in the lockfile, and then it gives you the URL for it. In this case, it’s showing a problem with the colors module.
Up to this point, everything shown works out of the box with the currently published version of Deno. But for this next feature, import maps, we'll have to use the --unstable flag, since it's experimental and not fully done yet.
Import maps allow you to redefine the way you handle imports. Remember the deps.ts file I mentioned earlier? Well, there is another way of simplifying the imports so you don't have to use URLs everywhere, and that is defining a mapping between these URLs and specific keywords you can then use.
If you're coming from Node, this approach must feel very familiar, since it's very similar to what you'd do with the package.json file. In practice, an import map gives you three things, with a sample map sketched right after this list:
A simplified way of shortening a URL into a simple prefix
A way of getting rid of extensions by mapping directly to the preferred version of the module
A way to simplify local folder structure by mapping a short prefix to a potentially long path inside our directory structure
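Here's what such a map could look like, as a minimal sketch (file name, URLs, and the pinned version are all illustrative):

```json
{
  "imports": {
    "http/": "https://deno.land/std@0.65.0/http/",
    "colors": "https://deno.land/std@0.65.0/fmt/colors.ts",
    "utils/": "./src/internal/tooling/utils/"
  }
}
```

With that file in place, import { red } from "colors"; just works, provided you enable the map when running the script, for example, deno run --unstable --importmap=import_map.json my-script.ts (the exact spelling of the flag has changed between Deno versions, so check your version's help output).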
The only downside to using import maps, aside from the obvious fact that they're not yet 100% stable, is that because of that same reason, IDEs such as VS Code and their plugins will not take them into account, thus showing errors about missing imports when in reality there are none.
This concludes Chapter 4; hopefully by now, you’ve gathered that the lack of a centralized module repository is not actually a bad thing. There are easy workarounds that provide many of the functionalities other systems such as NPM provided for Node developers with the added freedom of letting you do whatever you want with your modules.
That, of course, comes with the added risk of letting developers do whatever they want with their modules, so if you’re planning on sharing your work with the Deno community, please take that into account and take all the possible precautions before publishing your work.
The next chapter will cover the standard library of Deno, some of the most interesting modules, and how you can reuse the work of others from the Node community inside your Deno code without having to reinvent the wheel.
By now, we’ve covered every major change introduced by Deno into the JavaScript ecosystem, and now it’s time to review the things you can already do with it.
Don’t get me wrong; you can use it for pretty much anything that you’d use Node.js. This is not about the runtime but rather about the state of its surrounding module ecosystem. As you probably know, NPM has literally millions of modules published by almost as many users, and while that code is JavaScript, it’s not 100% compatible with Deno, so we can’t just reuse that work like we’re just starting with an 11 years head start.
That being said, Deno’s standard library is already pretty beefy, and from day one, there’s already been a contingency of users porting modules from Node into Deno, in order to make them compatible, so we do have a lot of tools to work with.
In this chapter, I’m going to cover some of them in order to show you that although this new runtime is not even a year old, you can use it to do some very interesting projects.
Let’s start with the modules that were provided to us from day one with the installation of Deno: the standard library . This is actually very important, because to Ryan, Node.js had a very poor standard library and lacked most of the basic tools anyone would need to start doing something relevant. And as someone who started using Node on version 0.10 back around 2012, I can confirm that after 3 years of existing, Node.js had no real standard library. Its focus had been on providing asynchronous I/O for back-end developers, but that was it; the entire developer experience was not great, especially if you compare it to today’s standards.
That wasn’t a problem though, because the more popular Node became, the more users were just compiling the basic building blocks they had available into more usable libraries that they started sharing through NPM. And although there is quite the community for Deno already and they’re starting to either write new libraries or port existing ones over to this side, the numbers can’t be compared, not yet at least.
| Module | Description |
|---|---|
| archive | Archiving functions. As of this writing, it provides the ability to tar and untar files. |
| async | A set of tools to deal with asynchronous behavior. Not promises or async/await, which are part of the language itself, but things such as a delay function to postpone your code's execution, or a way to attach the resolve and reject functions as methods on a promise. |
| bytes | Low-level functions to manipulate bytes. Binary objects require more work if you don't treat them as such; these functions help simplify that task. |
| datetime | A few helper and string-parsing functions to bridge the gap between strings and the Date object. |
| encoding | A very useful module for dealing with external data formats. Remember how for JSON you get the global JSON object? Here you can add support for YAML, CSV, and several others. |
| flags | A command-line argument parser. If you're building a CLI tool, there's no longer a need to import a third-party module for this; you already have it available. This shows the power of a well-thought-out standard library. |
| fmt | Text formatting functions. If console.log wasn't enough for you, this module has everything you need to add that extra drop of life to your console messages. |
| fs | Extra file system functionality. Not just reading and writing files, which can be done directly from the Deno namespace, but the ability to use wildcard characters as part of a path, copy files and folders, move them, and more. Note: this module is marked as unstable as of this writing, so you'll need the --unstable flag to access it. |
| hash | A library supporting more than ten hashing algorithms. |
| http | HTTP-related functions. This is where you get everything you'll need to create a web server (i.e., a microservice, a monolithic web app, or anything in between). |
| io | Deno's module for dealing with streams, including the one for the standard input. This is the module you'd use to request input from the user (among other things, of course). |
| log | Proof that Deno has a very complete standard library. How many times did you have to implement your own logger in Node, or hunt for the best one for your project? Deno already ships a complete and flexible logger, without you having to go out looking for anything. |
| mime | A set of functions for dealing with multipart form data, both reading and writing it. |
| node | A work-in-progress compatibility module with Node.js's standard library. It provides polyfills for some of the most common Node functions, such as require or events. Review this module if you're porting code from Node to Deno; otherwise, it's not really of use. |
| path | Classic path-handling functions, such as getting the folder name from a path or extracting the common path from a set of different ones. |
| permissions | A small module for granting permissions to your scripts. It requires the --unstable flag and is very similar to the Deno.permissions API described in the previous chapter. |
| signal | An API for dealing with process signaling. Quite low level, but it lets you handle signals such as SIGINT and SIGTSTP. |
| testing | Just like with the log module, Deno provides everything you'd need to create a test suite. |
| uuid | Ever needed a unique ID? This module helps you create one using the supported versions (1, 3, 4, and 5) of the UUID standard. |
| wasi | An implementation of the WebAssembly System Interface (WASI), which you can use to run WASM code. |
| ws | The thing we've been missing from the list: WebSocket support. |
Table 5-1 has a quick rundown of the standard modules; they’re all under constant development, but at the same time, because of the crucial role they play as part of Deno’s ecosystem, they’re reviewed by the core team directly. Just like with any open source project, you can definitely send your contributions; just understand that they can’t have any external dependencies.
As I’ve already mentioned, Deno’s ecosystem of user-made modules can’t yet compare to Node’s, given the amount of time it’s been out. That being said, there is quite a lot of work being done by the community to bridge that gap.
After all, every Node module out there is written in JavaScript, just in a slightly different flavor of it, so the translation is doable. It just takes time, especially if the module you're translating has dependencies, since you'd have to translate those as well.
Since the release of Deno, a few solutions have been deployed to provide some form of single place to browse and find modules (à la the NPM website), either by just keeping track of URLs or by directly storing everything.
I’m going to quickly cover two of the major repositories that have been deployed recently and which you can use to find out what’s already available for you to work with.
Deno’s site (http://deno.land) is providing a free-to-use URL rewrite service which you can contribute to and add your links to the list. Basically, they will list your modules on their site (currently, there are over 700 modules already being shown) and redirect to them. The database for this registry is currently a JSON file which you have to edit and send a Pull Request for.
Personally, I don’t see this being very scalable, so I’m assuming that in the near future they will provide another way of updating the list and adding your own modules to it.
But right now, the way of contributing to that list is by sending a Pull Request to this repository: https://github.com/denoland/deno_website2, specifically a modification to the file called database.json which can be found directly in the root folder of that repo.

Deno’s official module repository
This redirect rule also takes into account whether you add the branch name as part of the URL: if you do, it'll send traffic toward that branch; otherwise, it'll assume you're targeting the master branch. As an alternative to using tags as I mentioned in Chapter 4, you can also use branch names as your version number, which could work out in your favor thanks to this useful redirect. With it, you can write something like http://deno.land/x/your-module@1.4/, and this will redirect the traffic to your account at GitHub (assuming this is your module we're talking about), inside it to that module's folder, and within it to the specific branch called 1.4.
The cool part about this is that you can use this method to import modules from within your code. Remember, this is just a redirect service; the actual file is stored wherever you put it, and in this case it would be GitHub’s own servers.
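As a quick illustration (the module name is hypothetical), importing through the redirect service looks like this:

```typescript
// deno.land/x resolves this URL to the module's actual home on GitHub,
// targeting the branch (or tag) named 1.4.
import { greet } from "https://deno.land/x/your-module@1.4/mod.ts";
```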
Again, this is not a replacement for a centralized repository, but simply a great tool to search through the decentralized sea of modules that will be ever growing.
The second, and most promising, platform where you can find Deno modules is nest.land. Unlike the previous service, this one also stores your code, and instead of using a regular platform for that, it uses a blockchain network.
That is correct; by using the power of blockchain, this platform is not only creating a distributed storage for your modules but a permanent one at that. By publishing your modules through this platform, you're storing them in the Arweave permaweb,1 where they'll live, technically, forever. Removing modules is thus not possible, which already provides a great advantage over other options, since the fact that a module can be removed unexpectedly is one of the big risks of relying on external packages.

[Figure: The Gallery, listing published modules]
In order to import the modules stored in this platform, you’ll get a URL from the website which you can use from your code, and they all follow the same pattern: https://x.nest.land/<module-name>@<module-version>/mod.ts
For example, the module Drash,2 which is an HTTP microframework, can be imported using the following URL: https://x.nest.land/deno-drash@1.0.7/mod.ts.
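Publishing your own modules there happens through their CLI tool, eggs. At the time of writing, installing it looks roughly like this (the version in the URL is illustrative, so check their site for the current one):

```bash
deno install -A --unstable -n eggs https://x.nest.land/eggs@0.1.8/mod.ts
```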
Notice you’re providing all privileges (using the -A flag) and also you’re giving permission to use unstable features through the use of the --unstable flag .
Once that is installed, you’ll have to link your API key (which you should’ve gotten, downloaded, and stored in your local storage after signing up) using eggs link --key [your key].
That concludes the generic installation instructions; after that, you'll have to go to your module's folder and initialize it (just like you would've done with npm init) using eggs init.
During the init process, you'll get asked several questions about your project, such as the name, a description, whether it's an unstable version of the module, the list of files to publish, and the format for the config file (either JSON or YAML).
Although this might seem like a copy of the beloved package.json from Node, it, in fact, is not. This file is required to simplify the task of showing information and managing the packages, but it does not include extra information such as a list of dependencies nor project-wide configurations or commands. So although it’s still adding a configuration file, you’re not centralizing everything into a single file full of unrelated stuff.
With that file out of the way, all you have left to do is publish your module, and you can do that with the command eggs publish. After that, you'll be able to see your module in the library, where it'll live forever (or at least until the permaweb gets taken down).
In order to close this chapter, I would like to cover some modules that might be of interest to you depending on what you’re trying to achieve with Deno.
Of course, there is nothing preventing you from using other modules, but at least it’ll give you a starting point.
Probably one of the most common tasks for any runtime with async I/O on the back end is developing APIs, or any web-based project for that matter. This is why Node.js gained so much traction in microservice projects.
In the case of Deno, there are already some very interesting frameworks available.
With Drash, you can create either a straight API or a web application; you decide which one based on the generator script you choose. Essentially, Drash provides a generator script that creates all the basic boilerplate code required.

[Figure: Project structure after executing Drash's generator]
As for its documentation, their website3 contains a very detailed set of examples that take you from the most basic use case up to the most complex ones. As a developer coming from frameworks such as Express4 or Restify,5 the approach taken by Drash is fresh and interesting, considering it focuses heavily on TypeScript and several of the features we covered in Chapter 2.
If you’re looking to get some work done quickly and setting up an API using Deno, consider taking a look at this fresh attempt instead of going with a migrated Node module.
Whatever kind of application you’re working on, you’re most likely going to require the use of a database. Whether it’s a SQL-based one or a NoSQL one, if you need one, Deno has you covered.
If you’re thinking on using SQL (specifically SQLite, MySQL, or Postgre), then Cotton6 is your go-to; similar to what sequelize7 did for Node, this module is trying to provide a database-agnostic approach for the developer. You worry about using the right method and it’ll write the queries for you. And the best part is, if you need to, you can also write your own raw queries, which, granted, would break that ORM pattern, but it also gives you the flexibility you need for the most complex use cases.
If, on the other hand, you’re looking to interact with a NoSQL database, the task of recommending a module becomes a bit more complex, since due to the nature of NoSQL databases, you’ll be hard pressed to find a single module that works for all of them.
Instead, you’ll have to look for something designed specifically for your database. Here, I’m going to recommend something for MongoDB and Redis, since they’re two of the main NoSQL databases out there.
Document-based databases are a classic NoSQL go-to, and in particular, MongoDB , given its integration with JavaScript, is a great fit for our favorite runtime.
DenoDB8 is one of the few modules that provide support for MongoDB other than deno_mongo,9 which is a direct wrapper on top of the Mongo driver written in Rust. Interestingly enough, this module also supports some of the major SQL-based databases, so it covers all the basics.
The only drawback of this module is the apparent lack of support for raw queries. So if you find yourself in need of operations the module's API is not giving you, remember that internally it's simply using deno_mongo to handle the connection, so you can directly access that object through the getConnector method.
Redis is a completely different type of database, and since it deals with key-value pairs instead of actual document-like records, following the same ORM-based approach makes little sense.
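Instead, a thin client with direct commands is the natural fit. Here's a minimal sketch using the deno-redis module (pinned versions are omitted, and the raw-command escape hatch shown in the comment varies between releases):

```typescript
// Run with --allow-net (see below): the client talks to Redis over TCP.
import { connect } from "https://deno.land/x/redis/mod.ts";

const redis = await connect({ hostname: "127.0.0.1", port: 6379 });

await redis.set("greeting", "hello from Deno");
console.log(await redis.get("greeting")); // "hello from Deno"

// For commands the typed API doesn't cover yet, the module exposes a way
// to execute raw commands; its exact name/signature varies across versions:
// await redis.executor.exec("INCRBY", "counter", 10);
```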
Of course, you’ll normally want to use the methods provided by the API, but this allows you to access features that are not yet part of the stable API. Use this only for extreme cases; otherwise, stick to the standard methods .
In order to make your code work with this module, you’ll need to provide network privileges using the --allow-net flag.
As another classic use case for a runtime such as Deno, considering how dynamic JavaScript is, it’s very easy to use it for development tooling, which is where CLI tools come into place.
And although Deno as part of its standard library already provides a very comprehensive argument parsing module, there are other things to take care of when creating a command-line tool.
And for that, the module Cliffy10 provides a complete set of packages that deal with all aspects involved in the creation of one of these tools.
ansi-escape:11 Allows you to interact with the CLI cursor by moving it around or hiding when required.
command:12 You can use this module to create commands for your CLI tool. It provides a very easy-to-use API that autogenerates help messages and helps you parse the CLI arguments.
flags:13 Think of this module as Deno’s flag parsing package on steroids. It allows you to provide a very detailed schema for your flags, specifying things such as aliases, whether they’re mandatory or not, dependency with other flags, and a lot more. It helps you take your CLI tool from a basic version to a fully thought-out and professionally designed tool.
keycode:14 If you’re trying to request user input other than normal text (i.e., pressing the CTRL key), this module will help you parse those signals.
prompt:15 Requesting input from the user can be as simple as using a console.log with a message and then relying on Deno's Stdin reader, or you can use this package and give your user one heck of an experience. Other than just requesting free text input, you can also provide drop-downs, checkboxes, numeric inputs, and more.
table:16 If you need to display tabular data on the terminal, this module is your go-to. It allows you to set format options such as padding, border width, max cell width, and more.
As an example of what this library can do, I’ll show you how to display the content of a CSV file on a nicely formatted table, using the last of the modules I just mentioned.
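A minimal sketch of the idea (Cliffy's import path and both version pins are illustrative, so double-check them against the module's docs):

```typescript
// Run with: deno run --allow-read show-table.ts
import { parse } from "https://deno.land/std@0.65.0/encoding/csv.ts";
import { Table } from "https://deno.land/x/cliffy@v0.14.1/table/mod.ts";

// Read and parse the CSV file into a list of rows
const raw = await Deno.readTextFile("./people.csv");
const rows = (await parse(raw)) as string[][];

// First line becomes the header; the rest are the table's body
new Table()
  .header(rows[0])
  .body(rows.slice(1))
  .border(true)
  .render();
```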

[Figure: Basic CSV file]

[Figure: Output from the script showing the data inside a table]
There are a lot of other modules already out there ready for you to start writing quality software in Deno right now. The community is constantly publishing and porting packages either from Node or from Go or just taking the opportunity to bring fresh ideas to this new ecosystem, so it’s really up to you to start browsing and testing the ones that seem more interesting.
The aim of this chapter was to give you an idea of how mature Deno’s ecosystem is already, and as you can see, not only has the community responded to the lack of a package manager providing a way to browse and reliably store the code, but they’ve also been producing content like there is no tomorrow.
If you were wondering if there would be enough of a user base for this new runtime to be actually used in production, this chapter should give you the answer considering all the content that has been published in but a few months since its release.
And things are just getting started, so in the next and final chapter, I’ll show you a few examples of how to use some of the modules covered in this chapter and some new ones to create fully fledged applications.
This is the last chapter, and by now we’ve not only covered the language and the runtime but also the amazing work the community has been doing since the release date (and before that to be honest) of building tools and modules to help move the technology forward.
Throughout this chapter, I’ll showcase a few very different projects I’ve built using Deno in order to show you how everything covered so far fits together. They are all sample projects and of course not completely production ready, but they should cover all areas of interest, and if you take the GitHub project as a starting point (all these projects will be available on a GitHub account), you should be able to customize it and make it your own in no time.
So without further ado, let’s start going through the projects.
The first project we’ll tackle is a simple yet quite useful one. From what we’ve covered so far, every time you execute a Deno script, you need to specify the permission flags in order to provide those privileges to the script. That’s a fact and a design decision by the team behind this runtime.
However, there can be another way: if you create a tool that reads those permissions from a preset file and then executes the intended script as a subprocess, you can provide a better experience to your users. And that is the aim of this project: to simplify the user experience of executing a Deno script without having to worry about a very long command line that, although explicit, can also be convoluted and scary for novice users.
Much, much simpler, if you think about it: if the script you're trying to execute ships with its flags file, you can see how using this tool would be much friendlier, especially to a newcomer.
Build an entry point that receives the name of the script to execute as a parameter.
Make sure you can find the flags file (the one containing the security flags for the script).
Using the flags inside the flags file and the name of the script, create the command required to execute it.
And then, just execute it using Deno’s run method.
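That last step boils down to a single API call; here's a minimal sketch of it (the flags array is hardcoded here, while the real tool builds it from the flags file):

```typescript
// Needs --allow-run, since we're spawning a subprocess.
const flags = ["--allow-read", "--allow-net"]; // illustrative
const script = Deno.args[0];

const proc = Deno.run({ cmd: ["deno", "run", ...flags, script] });
const status = await proc.status(); // wait for the child to finish
proc.close();
Deno.exit(status.code); // mirror the child's exit code
```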
In order to make this work, we’ll only be using the standard library; in a way, this also serves as proof of the power promised by Deno’s creator regarding its standard library.
The main script, the so-called entry point, is the one that will be executed by the user, and it’s the one that will parse the CLI parameters.
All external dependencies will be imported from within the deps.ts file, following the already covered pattern in order to have easy access to any future updates or inclusions we might need.
The three functions we’ll be writing will live inside a utils.ts file, simply to separate the code of the entry point from these support functions.
Finally, the script required to bundle the code into a single file and make it executable in the end will be a simple bash script. This is due to the fact that we’ll need to run a few terminal commands, and using bash for that is much easier than doing it in JS.
The full source code for this small project is located here1 in case you need to go over any other detail or even clone the repository.
The script captures the command-line arguments located at Deno.args and parses them thanks to the parse method, which (as you'll see in the deps.ts file) comes from the flags module of the standard library. We then read the flags file, catching the error if the script can't find it. With that content, we parse it, turning it into a list of strings, and then simply ask Deno to run the target script.
Notice the detect function, used to understand which end-of-line character the file uses; we then pass that to the split method. The rest is just a matter of making sure each flag read from the file is a valid one, and if it's not, we simply ignore it.
In this function, we have an extra iteration over the list of flags, simply to notify the user which permissions are being granted to the script being executed. But the real meat of this code is how we can use the spread operator to merge one array into another.
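Here's a rough reconstruction of those support functions, under my own naming (the valid-flags list and file handling are simplified):

```typescript
// utils.ts (sketch): reading, validating, and merging the flags
import { detect } from "https://deno.land/std@0.65.0/fs/eol.ts";

const VALID_FLAGS = ["--allow-read", "--allow-write", "--allow-net", "--allow-env", "--allow-run"];

export async function readFlags(path: string): Promise<string[]> {
  const content = await Deno.readTextFile(path); // throws if the file is missing
  const eol = detect(content) ?? "\n"; // which end-of-line character is used?
  // keep only the flags we recognize; silently ignore everything else
  return content.split(eol).filter((f) => VALID_FLAGS.includes(f.trim()));
}

export function buildCommand(script: string, flags: string[]): string[] {
  flags.forEach((f) => console.log("Granting:", f)); // notify the user
  return ["deno", "run", ...flags, script]; // spread merges the arrays
}
```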
The very first line of this script is called a shebang, and in case you've never seen it, it tells the interpreter where the actual binary that will execute this script is located. It allows you to execute a script without having to explicitly call the interpreter from the command line; instead, the current shell will do it for you. It's important to understand that, because it can be done with any scripting language, not just bash, and as you're about to see in a second, we're trying to do the same thing for our script.
We will then proceed to use the deno bundle command, which will take all of our external and internal dependencies and create a single file. This is perfect for distributing our applications: instead of having to ask your users to download a potentially very large project, you only need them to download one file and use that instead.
Our problem, though, is that we need our final bundle to be a self-executing file, so we need to find out where the deno installation lives in order to create the proper shebang line. With our bundled code inside the CODE variable and our shebang line inside SHEBANG, we then output both strings into a single file (our final bundle) inside the bundle folder. We then give execution permissions to that file so you can call it directly from the command line, and the shebang will take effect.
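Put together, the bash script could look something like this sketch (paths are illustrative, and the runner's own permission flags are elided for brevity):

```bash
#!/bin/bash
# Bundle the entry point and all its dependencies into one blob of code
CODE=$(deno bundle ./runner.ts)

# Build a shebang pointing at the local deno binary
SHEBANG="#!$(which deno) run"

# Concatenate both into the final self-executing file
mkdir -p bundle
echo "$SHEBANG" > bundle/runner
echo "$CODE" >> bundle/runner

# Execution permissions, so the shebang can take effect
chmod +x bundle/runner
```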
The example from Listing 6-6 will only work on Linux and Mac systems; if you have a Windows box, you’ll have to do a search for how to update your PATH. It can be done; it’s not hard, but it’ll take a few clicks instead of a command line to do it. Also, the example assumes you’re using the default bash command line; if you’re using something else, such as Zsh,2 you’ll have to update the snippet accordingly.
For the next example of what you can achieve with Deno, I wanted to cover another powerful module from the standard library: testing.3
As I’ve already mentioned, Deno already provides a testing suite for you to work with. Granted, you’ll probably need extra juice if you intend to do more complex things like creating stubs or mocks, but for the basic setup, you have more than enough with Deno’s testing module.
And for that, we’ll go back to the first example, and we’ll add a few example tests so you can see how easy it actually is.
Adding a test is just as simple as creating a file that ends with either _test.ts or .test.ts (or change the extension if you’re directly writing JavaScript); with that, Deno should be able to pick it up and run the test when you execute it using the test command as follows: deno test.
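As a minimal sketch of the syntax (the function under test is a stand-in, not part of the actual project):

```typescript
// example.test.ts: picked up automatically by `deno test`
import { assertEquals } from "https://deno.land/std@0.65.0/testing/asserts.ts";

function add(a: number, b: number): number {
  return a + b; // stand-in for the module under test
}

Deno.test("add returns the sum of its arguments", () => {
  assertEquals(add(1, 2), 3);
});
```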
With a simple line, you're able to substitute the original method with one you have control over. In the example from Listing 6-10, you're controlling the output of the miniAdd method, thus helping you test the rest of the logic associated with the add method (i.e., making sure the returned value is the error object in this case).
Finally, building a chat server usually entails dealing with sockets, since they allow you to open a two-way connection that remains open until closed, unlike normal HTTP connections, which are only alive for a very short period of time and really only allow for a single request and its corresponding response to be sent between client and server.
If you’re coming from Node, you’ve probably seen similar examples of socket-based chat clients and servers, essentially working on top of events emitted by the socket library. With Deno, however, the architecture is a bit different, since instead of depending on event emitters, Deno is using streams to handle sockets.
This function is meant to be called once the socket connection is established (more on that in a second). As you can see, the gist of it is a main for loop, iterating over the elements of the socket (which essentially are the new messages arriving). Any and all text messages received will be sent back to the client and all other open sockets through the socket.send method inside the asynchronous for loop.
The server is started using the serve function, which in turn creates a stream of requests, one that we're also iterating over with the asynchronous for loop. On every new request received (i.e., a new socket connection is opened), we're calling the acceptWebSocket function. The full code for this server and the client (which I'll be covering in a minute) can be found on GitHub,6 so make sure to check it out to understand how everything fits together.
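If you want the gist of it in one place, here's a stripped-down sketch of such a server built on the standard library's ws module (the pinned version is illustrative; run it with --allow-net):

```typescript
import { serve } from "https://deno.land/std@0.65.0/http/server.ts";
import {
  acceptWebSocket,
  isWebSocketCloseEvent,
  WebSocket,
} from "https://deno.land/std@0.65.0/ws/mod.ts";

const sockets = new Set<WebSocket>();

async function handle(socket: WebSocket) {
  sockets.add(socket);
  // every element of the socket's async iterator is an incoming event
  for await (const ev of socket) {
    if (typeof ev === "string") {
      // broadcast text messages to every connected client
      for (const s of sockets) await s.send(ev);
    } else if (isWebSocketCloseEvent(ev)) {
      sockets.delete(socket);
    }
  }
}

// every new request is a client trying to open a socket connection
for await (const req of serve({ port: 8080 })) {
  const { conn, r: bufReader, w: bufWriter, headers } = req;
  acceptWebSocket({ conn, bufReader, bufWriter, headers }).then(handle);
}
```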
A server can’t do anything without a proper client, so just to close this example, I’ll show you how you can use the same module from the standard library to create a client application that will connect to the server from before and send (and receive) messages.
Notice the two asynchronous functions I mentioned before (messages and cli); they both return a promise, and because of that, we can use Promise.race to have both functions executing at the same time. With this method, the execution will end once either one of the promises resolves or fails. The cli function will read input from the standard input and send it over the socket connection using the socket.send method.
On the other hand, the messages function is, just like on the server side, iterating over the socket’s elements, essentially reacting to messages arriving over the connection.
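Here's a matching client sketch, using the same std version as before (again illustrative), with the yellow coloring left out for brevity:

```typescript
import { connectWebSocket } from "https://deno.land/std@0.65.0/ws/mod.ts";
import { readLines } from "https://deno.land/std@0.65.0/io/bufio.ts";

const socket = await connectWebSocket("ws://127.0.0.1:8080");

// print every message arriving over the connection
async function messages() {
  for await (const msg of socket) {
    if (typeof msg === "string") console.log(msg);
  }
}

// read lines from STDIN and push them through the socket
async function cli() {
  for await (const line of readLines(Deno.stdin)) {
    await socket.send(line);
  }
}

// whichever settles first ends the program
await Promise.race([messages(), cli()]);
```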
By connecting instances of this client to the server, you can send messages among them all. The server will take care of broadcasting the message to everyone, and clients will show in yellow the text received from the server. Please refer to the full code7 if you want to test this project.
This is not only the end of Chapter 6 but also the end of the book. Hopefully by now, you’ve managed to understand the motivation behind the creation of Deno, why the same person who came up with Node and left a mark in the back-end development industry decided to start over and try harder.
Deno is far from being done; in fact, when I started working on this book, its first version had just been released, and not even two months later, version 1.2.0 is already out, causing some issues due to breaking changes.
But fear not; in fact, that is the proof you need if you still had doubts about the team behind Deno. This is not just one person hoping to overthrow the JavaScript king on the back end; this is a full team working on addressing the needs of a growing community that is actively providing feedback and support to help the ecosystem grow every day.
If you’re just going to take one thing from this book, I hope you take away the curiosity of playing around with a brand-new piece of technology, and hopefully you’ll fall in love with it.
Thanks for reading up to this point; see you on the next one!