
This Apress imprint is published by the registered company APress Media, LLC, part of Springer Nature.
The registered company address is: 1 New York Plaza, New York, NY 10004, U.S.A.
To my wife, Clover, and our girls, Jean and Ruth, for completing my life.
—Kelvin Sung
To my family, for their eternal support throughout my life.
—Jebediah Pavleas
To my mom, Linda, for showing me the value of having fun at work.
—Jason Pace
Welcome to Build Your Own 2D Game Engine and Create Great Web Games. Because you have picked up this book, you are likely interested in the details of a game engine and the creation of your own games to be played over the Internet. This book teaches you how to build a 2D game engine by covering the involved technical concepts, demonstrating sample implementations, and showing you how to organize the large number of source code and asset files to support game development. This book also discusses how each covered technical topic area relates to elements of game design so that you can build, play, analyze, and learn about the development of 2D game engines and games. The sample implementations in this book are based on HTML5, JavaScript, and WebGL2, which are technologies that are freely available and supported by virtually all web browsers. After reading this book, the game engine you develop and the associated games will be playable through a web browser from anywhere on the Internet.
This book presents relevant concepts from software engineering, computer graphics, mathematics, physics, game development, and game design—all in the context of building a 2D game engine. The presentations are tightly integrated with the analysis and development of source code; you’ll spend much of the book building game-like concept projects that demonstrate the functionality of game engine components. By building on source code introduced early on, the book leads you on a journey through which you will master the basic concepts behind a 2D game engine while simultaneously gaining hands-on experience developing simple but working 2D games. Beginning with Chapter 4, a “Design Considerations” section is included at the end of each chapter to relate the covered technical concepts to elements of game design. By the end of the book, you will be familiar with the concepts and technical details of 2D game engines, competent in implementing game engine functionality to support commonly encountered 2D game requirements, and capable of considering game engine technical topics in the context of game design elements when building fun and engaging games.
The key additions to the second edition include updates to the JavaScript language and the WebGL API, as well as dedicated chapters with substantial detail on the physics and particle systems components.
All examples throughout the book are refined for the latest features of the JavaScript language. While some updates are mundane, such as prototype chain syntax replacements, the latest syntax allows significant improvements in overall presentation and code readability. The new and much cleaner asynchronous support facilitated a completely new resource-loading architecture with a single synchronization point for the entire engine (Chapter 4). The WebGL context is updated to connect to WebGL 2.0. The dedicated chapters allow a more elaborate and gradual introduction to the complex physics and particle systems components. Detailed mathematical derivations are included where appropriate.
This book is targeted toward programmers who are familiar with basic object-oriented programming concepts and have a basic to intermediate knowledge of an object-oriented programming language such as Java or C#. For example, if you are a student who has taken a few introductory programming courses, an experienced developer who is new to games and graphics programming, or a self-taught programming enthusiast, you will be able to follow the concepts and code presented in this book with little trouble. If you’re new to programming in general, it is suggested that you first become comfortable with the JavaScript programming language and concepts in object-oriented programming before tackling the content provided in this book.
You should be experienced with programming in an object-oriented programming language, such as Java or C#. Knowledge and expertise in JavaScript would be a plus but are not necessary. The examples in this book were created with the assumption that you understand data encapsulation and inheritance. In addition, you should be familiar with basic data structures such as linked lists and dictionaries and be comfortable working with the fundamentals of algebra and geometry, particularly linear equations and coordinate systems.
This book is not designed to teach readers how to program, nor does it attempt to explain the intricate details of HTML5, JavaScript, or WebGL2. If you have no prior experience developing software with an object-oriented programming language, you will probably find the examples in this book difficult to follow.
On the other hand, if you have an extensive background in game engine development on other platforms, the content in this book will be too basic; this is a book intended for developers without 2D game engine development experience. However, you might still pick up a few useful tips about 2D game engine and game development for the platforms covered in this book.
This book teaches how to develop a game engine by describing the foundational infrastructure, graphics system, game object behaviors, camera manipulations, and a sample game creation based on the engine.
Chapters 2–4 construct the foundational infrastructure of the game engine. Chapter 2 establishes the initial infrastructure by separating the source code system into folders and files that contain the following: JavaScript-specific core engine logic, WebGL2 GLSL–specific shader programs, and HTML5-specific web page contents. This organization supports ongoing expansion of engine functionality while keeping changes to the source code system localized. For example, only JavaScript source code files need to be modified when introducing enhancements to game object behaviors. Chapter 3 builds the drawing framework to encapsulate and hide the WebGL2 drawing specifics from the rest of the engine. This drawing framework allows the development of game object behaviors without being distracted by how they are drawn. Chapter 4 introduces and integrates core game engine functional components including the game loop, keyboard input, efficient resource and game-level loading, and audio support.
Chapters 5–7 present the basic functionality of a game engine: drawing system, behavior and interactions, and camera manipulation. Chapter 5 focuses on working with texture mapping, including sprite sheets, animation with sprite sheets, and the drawing of bitmap fonts. Chapter 6 puts forward abstractions for game objects and their behaviors including per-pixel-accurate collision detection. Chapter 7 details the manipulation and interactions with the camera including programming with multiple cameras and supporting mouse input.
Chapters 8–11 elevate the introduced functionality to more advanced levels. Chapter 8 covers the simulation of 3D illumination effects in 2D game scenes. Chapter 9 discusses physically based behavior simulations. Chapter 10 presents the basics of particle systems that are suitable for modeling explosions. Chapter 11 examines more advanced camera functionality including infinite scrolling through tiling and parallax.
Chapter 12 summarizes the book by leading you through the design of a complete game based on the game engine you have developed.
Every chapter in this book includes examples that let you interactively experiment with and learn the new materials. You can access the source code for all the projects, including the associated assets (images, audio clips, or fonts), by clicking the Download Source Code button located at www.apress.com/9781484273760. You should see a folder structure that is organized by chapter numbers. Within each folder are subfolders containing Visual Studio Code (VS Code) projects that correspond to sections of this book.
This book project was a direct result of the authors learning from building games for the Game-Themed CS1/2: Empowering the Faculty project, funded by the Transforming Undergraduate Education in Science, Technology, Engineering, and Mathematics (TUES) Program, National Science Foundation (NSF) (award number DUE-1140410). We would like to thank NSF officers Suzanne Westbrook for believing in our project and Jane Prey, Valerie Bar, and Paul Tymann for their encouragement.
This second edition is encouraged by many students and collaborators. In particular, students from CSS452: Game Engine Development (see https://myuwbclasses.github.io/CSS452/) at the University of Washington Bothell have been the most critical, demanding, and yet supportive. Through the many games and API extension projects (see https://html5gameenginegroup.github.io/GTCS-Engine-Student-Projects/), it became clear that updates were required to the JavaScript and WebGL (Web Graphics Library) versions, the bottom-line synchronization mechanism, and, most significantly, the coverage of the physics engine. Fernando Arnez, our co-author from the first edition, taught us JavaScript. Yaniv Schwartz pointed us toward JavaScript async/await and promise. The discussions and collaborations with Huaming Chen and Michael Tanaya contributed directly to the chapter on game engine physics. Akilas Mebrahtom and Donald Hawkins constructed the extra example at the end of Chapter 9 illustrating potential presets for commonly encountered physical materials. The audio volume control was first investigated and integrated by Kyla NeSmith. Nicholas Carpenetti and Kyla NeSmith developed a user interface API for the initial game engine, which unfortunately did not make it into this edition. These and countless other pieces of feedback have contributed to the quality and improvement of the book’s content.
The hero character Dye and many of the visual and audio assets used throughout the example projects of the book are based on the Dye Hard game, designed for teaching concepts of objects and object-oriented hierarchy. The original Dye Hard development team members included Matthew Kipps, Rodelle Ladia, Chuan Wang, Brian Hecox, Charles Chiou, John Louie, Emmett Scout, Daniel Ly, Elliott White, Christina Jugovic, Rachel Harris, Nathan Evers, Kasey Quevedo, Kaylin Norman-Slack, David Madden, Kyle Kraus, Suzi Zuber, Aina Braxton, Kelvin Sung, Jason Pace, and Rob Nash. Kyle Kraus composed the background music used in the Audio Support project from Chapter 4, originally for the Linx game, which was designed to teach loops. The background audio for the game in Chapter 12 was composed by David Madden and arranged by Aina Braxton. Thanks to Clover Wai for the figures and illustrations.
We also want to thank Spandana Chatterjee for believing in our ideas, her patience, and her continual, efficient, and effective support. A heartfelt thank-you to Mark Powers, for his diligence and lightning-fast email responses. Mark should learn about and consider the option of sleeping some of the time. Nirmal Selvaraj organized everything and ensured proper progress was ongoing.
Finally, we would like to thank Yusuf Pisan for his insightful, effective, and, above all, quick turnaround for the technical review.
All opinions, findings, conclusions, and recommendations in this work are those of the authors and do not necessarily reflect the views of the sponsors.
is a Professor with the Computing and Software Systems Division at the University of Washington Bothell (UWB). He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign. Kelvin’s background is in computer graphics, hardware, and machine architecture. He came to UWB from Alias|Wavefront (now part of Autodesk), where he played a key role in designing and implementing the Maya Renderer, an Academy Award–winning image generation system. At UWB, funded by Microsoft Research and the National Science Foundation, Kelvin’s work focuses on the intersection of video game mechanics, solutions to real-world problems, and support for remote collaboration. Together with his students and colleagues, Kelvin has co-authored five books: one on computer graphics (Essentials of Interactive Computer Graphics: Concepts and Implementation, A.K. Peters, 2008) and the others on 2D game engines (Learn 2D Game Development with C#, Apress, 2013; Build Your Own 2D Game Engine and Create Great Web Games, Apress, October 2015; Building a 2D Game Physics Engine, Apress, 2016; and Basic Math for Game Development with Unity 3D, Apress, 2019).

is a graduate student in the Computer Science and Software Engineering program at the University of Washington Bothell. He received undergraduate degrees in Computer Science and Software Engineering and Mechanical Engineering at the University of Washington Bothell in 2020. Matthew is interested in operating system development, networking, and embedded systems. As a research assistant, Matthew used cloud computing to analyze years of audio data recorded by hydrophones off the Oregon coast. This data was used to study the effects of climate change and shipping noise on marine mammals. Currently, Matthew is working on a networked augmented reality library that focuses on allowing users to view the same virtual scene from different perspectives.
contributed to a wide range of games as a producer, designer, and creative director over 15 years in the interactive entertainment industry, from ultra-casual puzzlers on mobile to Halo on Xbox. As a designer, Jason builds game mechanics and systems that start from a simple palette of thoughtful interactions (known as the core gameplay loop), progressively introducing variety and complexity to create interactive experiences that engage and delight players while maintaining focus on what makes each game uniquely fun.

His research interests include enabling technologies for computer games, the design of virtual environments that support collaborative work, and computer science education. He founded the Australasian Conference on Interactive Entertainment series and helped foster the Australian games community. His list of publications can be found on Google Scholar.
Yusuf has a Ph.D. in Artificial Intelligence from Northwestern University. Before moving to Seattle in 2017, Yusuf lived in the Chicago area for 10 years and Sydney for 20 years.
For more information, see https://pisanorg.github.io/yusuf/.

He also has hands-on experience in technologies such as AWS, IoT, Python, J2SE, J2EE, NodeJS, VueJs, Angular, MongoDB, and Docker.
He constantly explores technical novelties, and he is open-minded and eager to learn about new technologies and frameworks. He has reviewed several books and video courses published by Packt.
Video games are complex, interactive, multimedia software systems. They must, in real time, process player input, simulate the interactions of semiautonomous objects, and generate high-fidelity graphics and audio outputs, all while trying to keep players engaged. Attempts at building a video game can quickly become overwhelming with the need to be well versed in software development as well as in how to create appealing player experiences. The first challenge can be alleviated with a software library, or game engine, that contains a coherent collection of utilities and objects designed specifically for developing video games. The player engagement goal is typically achieved through careful gameplay design and fine-tuning throughout the video game development process. This book is about the design and development of a game engine; it will focus on implementing and hiding the mundane operations of the engine while supporting many complex simulations. Through the projects in this book, you will build a practical game engine for developing video games that are accessible across the Internet.
A game engine relieves game developers from having to implement simple routine tasks such as decoding specific key presses on the keyboard, designing complex algorithms for common operations such as mimicking shadows in a 2D world, and understanding nuances in implementations such as enforcing accuracy tolerance of a physics simulation. Commercial and well-established game engines such as Unity, Unreal Engine, and Panda3D present their systems through a graphical user interface (GUI). Not only does the friendly GUI simplify some of the tedious processes of game design such as creating and placing objects in a level, but more importantly, it ensures that these game engines are accessible to creative designers with diverse backgrounds who may find software development specifics distracting.
This book focuses on the core functionality of a game engine independent from a GUI. While a comprehensive GUI system can improve the end-user experience, the implementation requirements can also distract and complicate the fundamentals of a game engine. For example, issues concerning the enforcement of compatible data types in the user interface system, such as restricting objects from a specific class to be assigned as shadow receivers, are important to GUI design but are irrelevant to the core functionality of a game engine.
This book approaches game engine development from two important aspects: programmability and maintainability. As a software library, the interface of the game engine should facilitate programmability by game developers with well-abstracted utility methods and objects that hide simple routine tasks and support complex yet common operations. As a software system, the code base of the game engine should support maintainability with a well-designed infrastructure and well-organized source code systems that enable code reuse, ongoing system upkeep, improvement, and expansion.
This chapter describes the implementation technology and organization of this book. The discussion leads you through the steps of downloading, installing, and setting up the development environment, guides you to build your first HTML5 application, and uses this first application development experience to explain the best approach to reading and learning from this book.
The goal of building a game engine that allows games to be accessible across the World Wide Web is enabled by freely available technologies.
JavaScript is supported by virtually all web browsers, which means a JavaScript interpreter is installed on almost every personal computer in the world. As a programming language, JavaScript is dynamically typed, supports inheritance and functions as first-class objects, and is easy to learn, with well-established user and developer communities. With the strategic choice of this technology, video games developed in JavaScript can be accessed by anyone over the Internet through appropriate web browsers. Therefore, JavaScript is one of the best programming languages for developing video games for the masses.
While JavaScript serves as an excellent tool for implementing the game logic and algorithms, additional technologies in the form of software libraries, or application programming interfaces (APIs), are necessary to support the user input and media output requirements. With the goal of building games that are accessible across the Internet through web browsers, HTML5 and WebGL provide the ideal complementary input and output APIs.
HTML5 is designed to structure and present content across the Internet. It includes detailed processing models and the associated APIs to handle user input and multimedia outputs. These APIs are native to JavaScript and are perfect for implementing browser-based video games. While HTML5 offers a basic Scalable Vector Graphics (SVG) API, it does not support the sophistication demanded by video games for effects such as real-time lighting, explosions, or shadows. The Web Graphics Library (WebGL) is a JavaScript API designed specifically for the generation of 2D and 3D computer graphics through web browsers. With its support for OpenGL Shading Language (GLSL) and the ability to access the graphics processing unit (GPU) on client machines, WebGL has the capability of producing highly complex graphical effects in real time and is perfect as the graphics API for browser-based video games.
This book is about the concepts and development of a game engine where JavaScript, HTML5, and WebGL are simply tools for the implementation. The discussion in this book focuses on applying the technologies to realize the required implementations and does not try to cover the details of the technologies. For example, in the game engine, inheritance is implemented with the JavaScript class functionality which is based on object prototype chain; however, the merits of prototype-based scripting languages are not discussed. The engine audio cue and background music functionalities are based on the HTML5 AudioContext interface, and yet its range of capabilities is not described. The game engine objects are drawn based on WebGL texture maps, while the features of the WebGL texture subsystem are not presented. The specifics of the technologies would distract from the game engine discussion. The key learning outcomes of the book are the concepts and implementation strategies for a game engine and not the details of any of the technologies. In this way, after reading this book, you will be able to build a similar game engine based on any comparable set of technologies such as C# and MonoGame, Java and JOGL, C++ and Direct3D, and so on. If you want to learn more about or brush up on JavaScript, HTML5, or WebGL, please refer to the references in the “Technologies” section at the end of this chapter.
The game engine you are going to build will be accessible through web browsers that could be running on any operating system (OS). The development environment you are about to set up is also OS agnostic. For simplicity, the following instructions are based on a Windows 10 OS. You should be able to reproduce a similar environment with minor modifications in a Unix-based environment like MacOS or Ubuntu.
IDE: All projects in this book are based on the VS Code IDE. You can download and install the program from https://code.visualstudio.com/.
Runtime environment: You will execute your video game projects in the Google Chrome web browser. You can download and install this browser from www.google.com/chrome/browser/.
glMatrix math library: This is a library that implements the foundational mathematical operations. You can download this library from http://glMatrix.net/. You will integrate this library into your game engine in Chapter 3, so more details will be provided there.
Notice that there are no specific system requirements to support the JavaScript programming language, HTML5, or WebGL. All these technologies are embedded in the web browser runtime environment.
As mentioned, we chose the VS Code–based development environment because we found it to be the most convenient. There are many other free alternatives, including but not limited to NetBeans, IntelliJ IDEA, Eclipse, and Sublime.
Go to https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint and click install.
You will be prompted to open VS Code and may need to click install again within the application.
For instructions on how to work with ESLint, see https://eslint.org/docs/user-guide/.
For details on how ESLint works, see https://eslint.org/docs/developer-guide/.
Go to https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer and click install.
You will be prompted to open VS Code and may need to click install again within the application.
Explorer window: This window displays the source code files of the project. If you accidentally close this window, you can recall it by selecting View ➤ Explorer.
Editor window: This window displays and allows you to edit the source code of your project. You can select the source code file to work with by clicking the corresponding file name once in the Explorer window.
Output window: This window is not used in our projects; feel free to close it by clicking the “x” icon on the top right of the window.

The VS Code IDE
Using File Explorer, create a directory in the location where you would like to keep your projects. This directory will contain all source code files related to your projects. In VS Code, select File ➤ Open Folder and navigate to the directory you created.

Opening a project folder
VS Code will open the project folder. Your IDE should look similar to Figure 1-3; notice that the Explorer window is empty when your project folder is empty.

An empty VS Code project
You can now create your first HTML file, index.html. Select File ➤ New File and name the file index.html. This will serve as the home or landing page when your application is launched.

Creating the index.html file
In the Editor window, enter the following text into your index.html:
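The full listing ships with the chapter’s project download; the following is a minimal sketch consistent with the description below. The comment text and the page title are placeholders.

```html
<!DOCTYPE html>
<!--
    Placeholder comment block: typically identifies the project and its author.
-->
<html>
    <head>
        <title>My First HTML5 Page</title>
    </head>
    <body>
        <!-- page content goes here -->
    </body>
</html>
```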
The first line declares the file to be an HTML file. The block that follows, enclosed by the <!-- and --> markers, is a comment block. The complementary <html></html> tags contain all the HTML code. In this case, the template defines the head and body sections. The head sets the title of the web page, and the body is where all the content for the web page will be located.

Click the Go Live button to run a project
To run a project, the index.html file of that project must be opened in the editor when the “Go Live” button is clicked or when the Alt+L Alt+O keys are typed. This will become important in the subsequent chapters when there are other JavaScript source code files in the project.

Running the simple HTML5 project
To stop the program, simply close the web page. You have successfully run your first HTML5 project. Through the development of this very simple project, you have familiarized yourself with the IDE environment.
For debugging, we recommend the Chrome Developer tools. These tools can be accessed by typing Ctrl+Shift+I (or the F12 key) in the browser window when your project is running. To find out more about these tools, please refer to https://developer.chrome.com/docs/devtools/.
This book guides you through the development of a game engine by building projects similar to the one you have just experienced. Each chapter covers an essential component of a typical game engine, and the sections in each chapter describe the important concepts and implementation projects that construct the corresponding component. Throughout the text, the project from each section builds upon the results from the projects that precede it. While this makes it a little challenging to skip around in the book, it will give you practical experience and a solid understanding of how the different concepts relate. In addition, rather than always working with new and minimalistic projects, you gain experience with building larger and more interesting projects while integrating new functionality into your expanding game engine.
The projects start with demonstrating simple concepts, such as drawing a simple square, but evolve quickly into presenting more complex concepts, such as working with user-defined coordinate systems and implementing pixel-accurate collision detection. Initially, as you have experienced in building the first HTML5 application, you will be guided with detailed steps and complete source code listings. As you become familiar with the development environment and the technologies, the guides and source code listings accompanying each project will shift to highlighting the important implementation details. Eventually, as the complexity of the projects increases, the discussion will focus only on the vital and relevant issues, while straightforward source code changes will not be mentioned.
The final code base, which you will have developed incrementally over the course of the book, is a complete and practical game engine; it’s a great platform on which you can begin building your own 2D games. This is exactly what the last chapter of the book does, leading you from the conceptualization to design to implementation of a casual 2D game.
There are several ways for you to follow along with this book. The most obvious is to enter the code into your project as you follow each step in the book. From a learning perspective, this is the most effective way to absorb the information presented; however, we understand that it may not be the most realistic because of the amount of code or debugging this approach may require. Alternatively, we recommend that you run and examine the source code of the completed project when you begin a new section. Doing so lets you preview the current section’s project, gives you a clear idea of the end goal, and lets you see what the project is trying to achieve. You may also find the completed project code useful when you have problems while building the code yourself, because during difficult debugging situations, you can compare your code with the code of the completed project.
We have found the WinMerge program (http://winmerge.org/) to be an excellent tool for comparing source code files and folders. Mac users can check out the FileMerge utility for a similar purpose.
Finally, after completing a project, we recommend that you compare the behavior of your implementation with the completed implementation provided. By doing so, you can observe whether your code is behaving as expected.
While the focus of this book is on the design and implementation of a game engine, it is important to appreciate how different components can contribute to the creation of a fun and engaging video game. Beginning in Chapter 4, a “Game Design Considerations” section is included at the end of each chapter to relate the functionality of the engine component to elements of game design. This section presents the framework for these discussions.
It’s a complex question, and there’s no exact formula for making a video game that people will love to play, just as there’s no exact formula for making a movie that people will love to watch. We’ve all seen big-budget movies that look great and feature top acting, writing, and directing talent bomb at the box office, and we’ve all seen big-budget games from major studios that fail to capture the imaginations of players. By the same token, movies by unknown directors can grab the world’s attention, and games from small, unknown studios can take the market by storm.
Technical design : This includes all game code and the game platform and is generally not directly exposed to players; rather, it forms the foundation and scaffolding for all aspects of the game experience. This book is primarily focused on issues related to the technical design of games, including specific tasks such as the lines of code required to draw elements on the screen and more architectural considerations such as determining the strategy for how and when to load assets into memory. Technical design issues impact the player experience in many ways (e.g., the number of times a player experiences “loading” delays during play or how many frames per second the game displays), but the technical design is typically invisible to players because it runs under what’s referred to as the presentation layer or all of the audiovisual and/or haptic feedback the player encounters during play.
Game mechanic(s): The game mechanic is an abstract description of what can be referred to as the foundation of play for a given game experience. Types of game mechanics include puzzles, dexterity challenges such as jumping or aiming, timed events, combat encounters, and the like. The game mechanic is a framework; specific puzzles, encounters, and game interactions are implementations of the framework. A real-time strategy (RTS) game might include a resource-gathering mechanic, for example, where the mechanic might be described as “Players are required to gather specific types of resources and combine them to build units which they can use in combat.” The specific implementation of that mechanic (how players locate and extract the resources in the game, how they transport them from one place to another, and the rules for combining resources to produce units) is an aspect of systems design, level design, and the interaction model/game loop (described later in this section).
Systems design : The internal rules and logical relationships that provide structured challenge to the core game mechanic are referred to as the game’s systems design. Using the previous RTS example, a game might require players to gather a certain amount of metal ore and combine it with a certain amount of wood to make a game object; the specific rules for how many of each resource is required to make the objects and the unique process for creating the objects (e.g., objects can be produced only in certain structures on the player’s base and take x number of minutes to appear after the player starts the process) are aspects of systems design. Casual games may have basic systems designs. A simple puzzle game like Pull the Pin from Popcore Games, for example, is a game with few systems and low complexity, while major genres like RTS games may have deeply complex and interrelated systems designs created and balanced by entire teams of designers. Game systems designs are often where the most hidden complexity of game design exists; as designers go through the exercise of defining all variables that contribute to an implementation of a game mechanic, it’s easy to become lost in a sea of complexity and balance dependencies. Systems that appear fairly simple to players may require many components working together and balanced perfectly against each other, and underestimating system complexity is perhaps one of the biggest pitfalls encountered by new (and veteran!) game designers. Until you know what you’re getting into, always assume the systems you create will prove to be considerably more complex than you anticipate.
Level design : A game’s level design reflects the specific ways each of the other eight elements combines within the context of individual “chunks” of gameplay, where players must complete a certain chunk of objectives before continuing to the next section (some games may have only one level, while others will have dozens). Level designs within a single game can all be variations of the same core mechanic and systems design (games like Tetris and Bejeweled are examples of games with many levels all focusing on the same mechanic), while other games will mix and match mechanics and systems designs for variety among levels. Most games feature one primary mechanic and a game-spanning approach to systems design and will add minor variations between levels to keep things feeling fresh (changing environments, changing difficulty, adding time limits, increasing complexity, and the like), although occasionally games will introduce new levels that rely on completely separate mechanics and systems to surprise players and hold their interest. Great level design in games is a balance between creating “chunks” of play that showcase the mechanic and systems design and changing enough between these chunks to keep things interesting for players as they progress through the game (but not changing so much between chunks that the gameplay feels disjointed and disconnected).
Interaction model: The interaction model is the combination of keys, buttons, controller sticks, touch gestures, and so on, used to interact with the game to accomplish tasks and the graphical user interfaces that support those interactions within the game world. Some game theorists break the game’s user interface (UI) design into a separate category (game UI includes things such as menu designs, item inventories, heads-up displays [HUDs]), but the interaction model is deeply connected to UI design, and it’s a good practice to think of these two elements as inseparable. In the case of the RTS game referenced earlier, the interaction model includes the actions required to select objects in the game, to move those objects, to open menus and manage inventories, to save progress, to initiate combat, and to queue build tasks. The interaction model is completely independent of the mechanic and systems design and is concerned only with the physical actions the player must take to initiate behaviors (e.g., click mouse button, press key, move stick, scroll wheel); the UI is the audiovisual or haptic feedback connected to those actions (onscreen buttons, menus, statuses, audio cues, vibrations, and the like).
Game setting : Are you on an alien planet? In a fantasy world? In an abstract environment? The game setting is a critical part of the game experience and, in partnership with the audiovisual design, turns what would otherwise be a disconnected set of basic interactions into an engaging experience with context. Game settings need not be elaborate to be effective; the perennially popular puzzle game Tetris has a rather simple setting with no real narrative wrapper, but the combination of abstract setting, audiovisual design, and level design is uniquely well matched and contributes significantly to the millions of hours players invest in the experience year after year.
Visual design : Video games exist in a largely visual medium, so it’s not surprising that companies frequently spend as much or more on the visual design of their games as they spend on the technical execution of the code. Large games are aggregations of thousands of visual assets, including environments, characters, objects, animations, and cinematics; even small casual games generally ship with hundreds or thousands of individual visual elements. Each object a player interacts with in the game must be a unique asset, and if that asset includes more complex animation than just moving it from one location on the screen to another or changing the scale or opacity, the object most likely will need to be animated by an artist. Game graphics need not be photorealistic or stylistically elaborate to be visually excellent or to effectively represent the setting (many games intentionally utilize a simplistic visual style), but the best games consider art direction and visual style to be core to the player experience, and visual choices will be intentional and well matched to the game setting and mechanic.
Audio design : This includes music and sound effects, ambient background sounds, and all sounds connected to player actions (select/use/swap item, open inventory, invoke menu, and the like). Audio design functions hand in hand with visual design to convey and reinforce game setting, and many new designers significantly underestimate the impact of sound to immerse players into game worlds. Imagine Star Wars, for example, without the music, the light saber sound effect, Darth Vader’s breathing, or R2D2’s characteristic beeps; the audio effects and musical score are as fundamental to the experience as the visuals.
Meta-game: The meta-game centers on how individual objectives come together to propel players through the game experience (often via scoring, unlocking individual levels in sequence, playing through a narrative, and the like). In many modern games, the meta-game is the narrative arc or story; players often don’t receive a “score” per se but rather reveal a linear or semi-linear story as they progress through game levels, driving forward to complete the story. Other games (especially social and competitive games) involve players “leveling up” their characters, which can happen as a result of playing through a game-spanning narrative experience or by simply venturing into the game world and undertaking individual challenges that grant experience points to characters. Other games, of course, continue focusing on scoring points or winning rounds against other players.
The magic of video games typically arises from the interplay between these nine elements, and the most successful games finely balance each as part of a unified vision to ensure a harmonious experience; this balance will always be unique to each individual effort and is found in games ranging from Nintendo’s Animal Crossing to Rockstar’s Red Dead Redemption 2. The core game mechanic in many successful games is often a variation on one or more fairly simple, common themes (Pull the Pin, for example, is a game based entirely on pulling virtual pins from a container to release colored balls), but the visual design, narrative context, audio effects, interactions, and progression system work together with the game mechanic to create a unique experience that’s considerably more engaging than the sum of its individual parts, making players want to return to it again and again. Great games range from the simple to the complex, but they all feature an elegant balance of supporting design elements.
Marschner and Shirley. Fundamentals of Computer Graphics, 4th edition. CRC Press, 2016.
Angel and Shreiner. Interactive Computer Graphics: A Top-Down Approach with WebGL, 7th edition. Pearson Education, 2014.
Sung and Smith. Basic Math for Game Development with Unity 3D: A Beginner’s Guide to Mathematical Foundations. Apress, 2019.
Johnson, Riess, and Arnold. Introduction to Linear Algebra, 5th edition. Addison-Wesley, 2002.
Anton and Rorres. Elementary Linear Algebra: Applications Version, 11th edition. Wiley, 2013.
JavaScript: www.w3schools.com/js
WebGL: www.khronos.org/webgl
OpenGL: www.opengl.org
Visual Studio Code: https://code.visualstudio.com/
Chrome: www.google.com/chrome
glMatrix: http://glMatrix.net
ESLint: www.eslint.org
Create a new JavaScript source code file for your simple game engine
Draw a simple constant color square with WebGL
Define JavaScript modules and classes to encapsulate and implement core game engine functionality
Appreciate the importance of abstraction and the organization of your source code structure to support growth in complexity
Drawing is one of the most essential functionalities common to all video games. A game engine should offer a flexible and programmer-friendly interface to its drawing system. In this way, when building a game, the designers and developers can focus on the important aspects of the game itself, such as mechanics, logic, and aesthetics.
WebGL is a modern JavaScript graphical application programming interface (API) designed for web browser–based applications that offers quality and efficiency via direct access to the graphics hardware. For these reasons, WebGL serves as an excellent base to support drawing in a game engine, especially for video games that are designed to be played across the Internet.
This chapter examines the fundamentals of drawing with WebGL, designs abstractions to encapsulate irrelevant details to facilitate programming, and builds the foundational infrastructure to organize a complex source code system to support future expansion.
The game engine you will develop in this book is based on the latest version of the WebGL specification, version 2.0. For brevity, the term WebGL will be used to refer to this API.
To draw, you must first define and dedicate an area within the web page. You can achieve this easily by using the HTML canvas element to define an area for WebGL drawing. The canvas element is a container for drawing that you can access and manipulate with JavaScript.

Running the HTML5 Canvas project
To learn how to set up the HTML canvas element
To learn how to retrieve the canvas element from an HTML document for use in JavaScript
To learn how to create a reference context to WebGL from the retrieved canvas element and manipulate the canvas through the WebGL context
Create a new project by creating a new folder named html5_canvas in your chosen directory and copying and pasting the index.html file you created in the previous project in Chapter 1.
From this point on, when asked to create a new project, you should follow the process described previously. That is, create a new folder with the project’s name and copy/paste the previous project’s files. In this way, your new projects can expand upon your old ones while retaining the original functionality.
Open the index.html file in the editor by opening the html5_canvas folder, expanding it if needed and clicking the index.html file, as illustrated in Figure 2-2.

Editing the index.html file in your project
Create the HTML canvas for drawing by adding the following lines in the index.html file within the body element:
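A sketch of the canvas definition, consistent with the description that follows; the fallback message text is a placeholder, and the 640×480 dimensions match the drawing area discussed at the end of this section.

```html
<canvas id="GLCanvas" width="640" height="480">
    Your browser does not support the HTML5 canvas.
</canvas>
```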
The code defines a canvas element named GLCanvas with the specified width and height attributes. As you will experience later, you will retrieve the reference to the GLCanvas to draw into this area. The text inside the element will be displayed if your browser does not support drawing with WebGL.
The lines between the <body> and </body> tags are referred to as “within the body element.” For the rest of this book, “within the AnyTag element” will be used to refer to any line between the beginning (<AnyTag>) and end (</AnyTag>) of the element.
Create a script element for the inclusion of JavaScript programming code, once again within the body element:
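A sketch of the script element; placing the strict-mode directive here, rather than in a later step, is an assumption consistent with the notes that follow.

```html
<script type="text/javascript">
    "use strict";
    // JavaScript code from the following steps goes here
</script>
```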
Retrieve a reference to the GLCanvas in JavaScript code by adding the following line within the script element:
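The canvas lookup might look like this, using the GLCanvas id defined earlier:

```javascript
let canvas = document.getElementById("GLCanvas");
```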
The let JavaScript keyword defines variables.
The first line, “use strict”, is a JavaScript directive indicating that the code should be executed in “strict mode”, where the use of undeclared variables is a runtime error. The second line creates a new variable named canvas and references the variable to the GLCanvas drawing area.
All local variable names begin with a lowercase letter, as in canvas.
Retrieve and bind a reference to the WebGL context to the drawing area by adding the following code:
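A sketch of binding the WebGL 2 context; the variable name gl matches the calls referenced later in this section.

```javascript
let gl = canvas.getContext("webgl2");   // request a WebGL 2 drawing context from the canvas
```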
Clear the canvas drawing area to your favorite color through WebGL by adding the following:
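A sketch of the clear operation; the specific RGBA values are an assumption for a light green.

```javascript
gl.clearColor(0.0, 0.8, 0.0, 1.0);   // set the clear color (a light green; values assumed)
gl.clear(gl.COLOR_BUFFER_BIT);       // clear the canvas drawing area to that color
```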
Add a simple write command to the document to identify the canvas by inserting the following line:
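The write command might be as simple as the following; the message text is a placeholder.

```javascript
document.write("<b>This is a simple HTML5 canvas.</b>");
```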
You can refer to the final source code in the index.html file in the chapter2/2.1.html5_canvas project. Run the project, and you should see a light green area on your browser window as shown in Figure 2-1. This is the 640×480 canvas drawing area you defined.
You can try changing the cleared color to white by setting the RGBA values of gl.clearColor() to all 1s or to black by setting the RGB values to 0 while leaving the alpha value at 1. Notice that if you set the alpha channel to 0, the canvas color will disappear. This is because a 0 value in the alpha channel represents complete transparency, and thus, you will “see through” the canvas and observe the background color of the web page. You can also try altering the resolution of the canvas by changing the 640×480 values to any numbers you fancy. Notice that these two numbers refer to pixel counts and thus must always be integers.
In the previous project, you created an HTML canvas element and cleared the area defined by the canvas using WebGL. Notice that all the functionality is clustered in the index.html file. As the project complexity increases, this clustering of functionality can quickly become unmanageable and negatively impact the programmability of your system. For this reason, throughout the development process in this book, after a concept is introduced, efforts will be spent on separating the associated source code into either well-defined source code files or classes in an object-oriented programming style. To begin this process, the HTML and JavaScript source code from the previous project will be separated into different source code files.

Running the JavaScript Source File project
To learn how to separate source code into different files
To organize your code in a logical structure
Create a new HTML5 project titled javascript_source_file. Recall that a new project is created by creating a folder with the appropriate name, copying the files from the previous project, and editing the <title> element of index.html to reflect the new project.
Create a new folder named src inside the project folder by clicking the new folder icon while hovering over the project folder, as illustrated in Figure 2-4. This folder will contain all of your source code.

Creating a new source code folder
Create a new source code file within the src folder by right-clicking the src folder, as illustrated in Figure 2-5. Name the new source file core.js.

Adding a new JavaScript source code file
In VS Code, you can create/copy/rename folders and files by using the right-click menus in the Explorer window.
Open the new core.js source file for editing.
Define a variable for referencing the WebGL context, and add a function which allows you to access the variable:
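A sketch of the context variable and its accessor, following the naming convention described in the surrounding notes:

```javascript
let mGL = null;                    // the WebGL 2 context shared within this module
function getGL() { return mGL; }   // accessor used by the rest of the engine
```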
Variables that are accessible throughout a file, or a module, have names that begin with lowercase “m”, as in mGL.
Define the initWebGL() function to retrieve GLCanvas by passing in the proper canvas id as a parameter, bind the drawing area to the WebGL context, store the results in the defined mGL variable, and clear the drawing area:
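A sketch of initWebGL(); the clear-color values and the fallback message are assumptions (any light green works), and setting only the clear color here, leaving the actual clear to clearCanvas() in the next step, is likewise an assumption.

```javascript
function initWebGL(htmlCanvasID) {
    // Step A: retrieve the canvas defined in index.html
    let canvas = document.getElementById(htmlCanvasID);
    // Step B: bind the drawing area to the WebGL 2 context
    mGL = canvas.getContext("webgl2");
    if (mGL === null) {
        document.write("<br><b>WebGL 2 is not supported!</b>");
        return;
    }
    // Step C: set the color to be used when clearing the drawing area
    mGL.clearColor(0.0, 0.8, 0.0, 1.0);
}
```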
Notice that this function is similar to the JavaScript source code you typed in the previous project. This is because all you are doing differently, in this case, is separating JavaScript source code from HTML code.
All function names begin with a lowercase letter, as in initWebGL().
Define the clearCanvas() function to invoke the WebGL context to clear the canvas drawing area:
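A minimal sketch of clearCanvas():

```javascript
function clearCanvas() {
    mGL.clear(mGL.COLOR_BUFFER_BIT);   // clear the drawing area to the previously set color
}
```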
Define a function to carry out the initialization and clearing of the canvas area after the web browser has completed the loading of the index.html file:
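This might look like the following, binding the work to the window.onload event:

```javascript
window.onload = function() {
    initWebGL("GLCanvas");   // bind mGL to the canvas defined in index.html
    clearCanvas();           // clear the drawing area
};
```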
Open the index.html file for editing.
Create the HTML canvas, GLCanvas, as in the previous project.
Load the core.js source code by including the following code within the head element:
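A sketch of the include; type="module" is assumed here so that the import/export statements used in later projects work, and the path assumes core.js sits in the src folder.

```html
<script type="module" src="./src/core.js"></script>
```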
With this code, the core.js file will be loaded as part of the index.html defined web page. Recall that you have defined a function for window.onload and that function will be invoked when the loading of index.html is completed.
You can refer to the final source code in the core.js and index.html files in the chapter2/2.2.javascript_source_file project folder. Although the output from this project is identical to that from the previous project, the organization of your code will allow you to expand, debug, and understand the game engine as you continue to add new functionality.
Recall that to run a project, you click the “Go Live” button on the lower right of the VS Code window, or type Alt+L Alt+O keys, while the associated index.html file is opened in the Editor window. In this case, the project will not run if you click the “Go Live” button while the core.js file is opened in the Editor window.
Examine your index.html file closely and compare its content to the same file from the previous project. You will notice that the index.html file from the previous project contains two types of information (HTML and JavaScript code) and that the same file from this project contains only the former, with all JavaScript code being extracted to core.js. This clean separation of information allows for easy understanding of the source code and improves support for more complex systems. From this point on, all JavaScript source code will be added to separate source code files.
In general, drawing involves geometric data and the instructions for processing the data. In the case of WebGL, the instructions for processing the data are specified in the OpenGL Shading Language (GLSL) and are referred to as shaders. In order to draw with WebGL, programmers must define the geometric data and GLSL shaders in the CPU and load both to the drawing hardware, or the graphics processing unit (GPU). This process involves a significant number of WebGL function calls. This section presents the WebGL drawing steps in detail.
It is important to concentrate on learning these basic steps and to avoid being distracted by the less important WebGL configuration nuances so that you can continue to follow the overall concepts involved in building your game engine.
In the following project, you will learn about drawing with WebGL by focusing on the most elementary operations. This includes the loading of the simple geometry of a square from the CPU to the GPU, the creation of a constant color shader, and the basic instructions for drawing a simple square with two triangles.

Running the Draw One Square project
To understand how to load geometric data to the GPU
To learn about simple GLSL shaders for drawing with WebGL
To learn how to compile and load shaders to the GPU
To understand the steps required to draw with WebGL
To demonstrate the implementation of a singleton-like JavaScript module based on simple source code files
To draw efficiently with WebGL, the data associated with the geometry to be drawn, such as the vertex positions of a square, should be stored in the GPU hardware. In the following steps, you will create a contiguous buffer in the GPU, load the vertex positions of a unit square into the buffer, and store the reference to the GPU buffer in a variable. Following the practice from the previous project, the corresponding JavaScript code will be stored in a new source code file, vertex_buffer.js.
A unit square is a 1×1 square centered at the origin.
Create a new JavaScript source file in the src folder and name it vertex_buffer.js.
Import all the exported functionality from the core.js file as core with the JavaScript import statement:
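The import might look like this; the relative path assumes core.js and vertex_buffer.js share the src folder.

```javascript
import * as core from "./core.js";
```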
With the JavaScript import statement and the export statement (which you will encounter shortly), features and functionality defined in a file can be conveniently encapsulated and accessed. In this case, the functionality exported from core.js is imported into vertex_buffer.js and is accessible via the module identifier core. For example, as you will see, in this project, core.js defines and exports a getGL() function. With the given import statement, this function can be accessed as core.getGL() in the vertex_buffer.js file.
Declare the variable mGLVertexBuffer to store the reference to the WebGL buffer location. Remember to define a function for accessing this variable.
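A sketch of the buffer variable and its accessor:

```javascript
let mGLVertexBuffer = null;                  // reference to the buffer allocated in the GPU
function get() { return mGLVertexBuffer; }   // accessor exported later in this section
```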
Define the variable mVerticesOfSquare and initialize it with vertices of a unit square:
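The vertex ordering shown is an assumption suitable for drawing the square as a triangle strip (two triangles).

```javascript
// vertex positions (x, y, z) of a 1x1 square centered at the origin
let mVerticesOfSquare = [
     0.5,  0.5, 0.0,
    -0.5,  0.5, 0.0,
     0.5, -0.5, 0.0,
    -0.5, -0.5, 0.0
];
```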
Define the init() function to allocate a buffer in the GPU via the gl context, and load the vertices to the allocated buffer in the GPU:
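A sketch of init(), with the three steps labeled to match the description that follows:

```javascript
function init() {
    let gl = core.getGL();
    // Step A: create a buffer on the GPU and keep a reference to it
    mGLVertexBuffer = gl.createBuffer();
    // Step B: activate the newly created buffer
    gl.bindBuffer(gl.ARRAY_BUFFER, mGLVertexBuffer);
    // Step C: load the square's vertex positions into the activated buffer
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(mVerticesOfSquare), gl.STATIC_DRAW);
}
```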
This code first gets access to the WebGL drawing context through the core.getGL() function. Step A then creates a buffer on the GPU for storing the vertex positions of the square and stores the reference to the GPU buffer in the variable mGLVertexBuffer. Step B activates the newly created buffer, and Step C loads the vertex positions of the square into the activated buffer on the GPU. The keyword STATIC_DRAW informs the drawing hardware that the content of this buffer will not be changed.
Remember that the mGL variable accessed through the getGL() function is defined in the core.js file and initialized by the initWebGL() function. You will define an export statement in the core.js file to provide access to this function in the coming steps.
Provide access to the init() and get() functions to the rest of your engine by exporting them with the following code:
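```javascript
export { init, get };
```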
With the functionality of loading vertex positions defined, you are now ready to define and load the GLSL shaders.
The term shader refers to programs, or a collection of instructions, that run on the GPU. In the context of the game engine, shaders must always be defined in pairs consisting of a vertex shader and a corresponding fragment shader. The GPU will execute the vertex shader once per primitive vertex and the fragment shader once per pixel covered by the primitive. For example, you can define a square with four vertices and display this square to cover a 100×100 pixel area. To draw this square, WebGL will invoke the vertex shader 4 times (once for each vertex) and execute the fragment shader 10,000 times (once for each of the 100×100 pixels)!
In the case of WebGL, both the vertex and fragment shaders are implemented in the OpenGL Shading Language (GLSL). GLSL is a language with syntax that is similar to the C programming language and designed specifically for processing and displaying graphical primitives. You will learn sufficient GLSL to support the drawing for the game engine when required.
In the following steps, you will load into GPU memory the source code for both vertex and fragment shaders, compile and link them into a single shader program, and load the linked program into the GPU memory for drawing. In this project, the shader source code is defined in the index.html file, while the loading, compiling, and linking of the shaders are defined in the shader_support.js source file.
The WebGL context can be considered as an abstraction of the GPU hardware. To facilitate readability, the two terms WebGL and GPU are sometimes used interchangeably.
Define the vertex shader by opening the index.html file, and within the head element, add the following code:
Shader attribute variables have names that begin with a lowercase “a”, as in aVertexPosition.
The script element type is set to x-shader/x-vertex, a common convention for vertex shaders; because the browser does not recognize this type, it will not attempt to execute the content as JavaScript. As you will see, the id field with the value VertexShader allows you to identify and load this vertex shader into memory.
The GLSL attribute keyword identifies per-vertex data that will be passed to the vertex shader in the GPU. In this case, the aVertexPosition attribute is of data type vec3 or an array of three floating-point numbers. As you will see in later steps, aVertexPosition will be set to reference the vertex positions for the unit square.
Define the fragment shader in index.html by adding the following code within the head element:
Note the different type and id fields. Recall that the fragment shader is invoked once per pixel. The variable gl_FragColor is the built-in variable that determines the color of the pixel. In this case, a color of (1,1,1,1), or white, is returned. This means all pixels covered will be shaded to a constant white color.
With both the vertex and fragment shaders defined in the index.html file, you are now ready to implement the functionality to compile, link, and load the resulting shader program to the GPU.
Create a new JavaScript file, shader_support.js.
Import functionality from the core.js and vertex_buffer.js files:
Define two variables, mCompiledShader and mVertexPositionRef, for referencing the shader program and the vertex position attribute in the GPU:
Create a function to load and compile the shader you defined in the index.html:
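A sketch of this function within shader_support.js (assuming the imports described above, with core.js imported as core):

```javascript
function loadAndCompileShader(id, shaderType) {
    let gl = core.getGL();
    // Step A: retrieve the shader source from the script element in index.html
    let shaderSource = document.getElementById(id).firstChild.textContent;
    // Step B: create and compile the shader
    let compiledShader = gl.createShader(shaderType);
    gl.shaderSource(compiledShader, shaderSource);
    gl.compileShader(compiledShader);
    // Step C: report compilation errors, if any
    if (!gl.getShaderParameter(compiledShader, gl.COMPILE_STATUS)) {
        throw new Error("Shader compile error: " +
            gl.getShaderInfoLog(compiledShader));
    }
    return compiledShader;
}
```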
You are now ready to create, compile, and link a shader program by defining the init() function:
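The init() function might then look like the following sketch:

```javascript
function init(vertexShaderID, fragmentShaderID) {
    let gl = core.getGL();
    // Step A: load and compile both shaders
    let vertexShader = loadAndCompileShader(vertexShaderID, gl.VERTEX_SHADER);
    let fragmentShader = loadAndCompileShader(fragmentShaderID, gl.FRAGMENT_SHADER);
    // Step B: link the two shaders into a single program
    mCompiledShader = gl.createProgram();
    gl.attachShader(mCompiledShader, vertexShader);
    gl.attachShader(mCompiledShader, fragmentShader);
    gl.linkProgram(mCompiledShader);
    if (!gl.getProgramParameter(mCompiledShader, gl.LINK_STATUS)) {
        throw new Error("Error linking shader program");
    }
    // Step C: keep a reference to the aVertexPosition attribute
    mVertexPositionRef = gl.getAttribLocation(mCompiledShader, "aVertexPosition");
}
```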
Define a function to allow the activation of the shader so that it can be used for drawing the square:
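A sketch of activate(), assuming vertex_buffer.js was imported as vertexBuffer:

```javascript
function activate() {
    let gl = core.getGL();
    // select the compiled and linked shader program
    gl.useProgram(mCompiledShader);
    // bind the unit square's vertex buffer and describe its layout
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer.get());
    gl.vertexAttribPointer(mVertexPositionRef,
        3,          // three floats (x, y, z) per vertex
        gl.FLOAT,   // data type
        false,      // the data is not normalized
        0, 0);      // tightly packed, starting at offset 0
    gl.enableVertexAttribArray(mVertexPositionRef);
}
```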
Lastly, provide access to the init() and activate() functions to the rest of the game engine by exporting them with the export statement:
Notice that the loadAndCompileShader() function is excluded from the export statement. This function is not needed elsewhere and thus, following the good development practice of hiding local implementation details, should remain private to this file.
The shader loading and compiling functionality is now defined. You can now utilize and activate these functions to draw with WebGL.
Import the defined functionality from vertex_buffer.js and shader_support.js files:
Modify the initWebGL() function to include the initialization of the vertex buffer and the shader program:
Add a drawSquare() function for drawing the defined square:
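A sketch of drawSquare() in core.js, assuming shader_support.js was imported as simpleShader:

```javascript
function drawSquare() {
    simpleShader.activate();
    // the 4 vertices of the unit square form two triangles drawn as a strip
    mGL.drawArrays(mGL.TRIANGLE_STRIP, 0, 4);
}
```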
Now you just need to modify the window.onload function to call the newly defined drawSquare() function:
Finally, provide access to the WebGL context to the rest of the engine by exporting the getGL() function. Remember that this function is imported and has been called to access the WebGL context in both vertex_buffer.js and shader_support.js.
Recall that the function that is bound to window.onload will be invoked after index.html has been loaded by the web browser. For this reason, WebGL will be initialized, the canvas cleared to light green, and a white square will be drawn. You can refer to the source code in the chapter2/2.3.draw_one_square project for the entire system described.
Run the project and you will see a white rectangle on a green canvas. What happened to the square? Remember that the vertex positions of your 1×1 square were defined at locations (±0.5, ±0.5). Now observe the project output: the white rectangle is located in the middle of the green canvas, covering exactly half of the canvas' width and height. As it turns out, WebGL maps vertices within the ±1.0 range onto the entire defined drawing area. In this case, the ±1.0 in the x dimension is mapped to 640 pixels, while the ±1.0 in the y dimension is mapped to 480 pixels (the created canvas dimension is 640×480). The 1×1 square is drawn onto a 640×480 area, an area with an aspect ratio of 4:3. Since the 1:1 aspect ratio of the square does not match the 4:3 aspect ratio of the display area, the square shows up as a 4:3 rectangle. This problem will be resolved in the next chapter.
You can try editing the fragment shader in index.html by changing the color assigned to the gl_FragColor variable to alter the color of the white square. Notice that a value of less than 1 in the alpha channel does not result in the white square becoming transparent. Transparency of drawn primitives will be discussed in later chapters.
Finally, note that this project defines three separate files and hides information with the JavaScript import/export statements. The functionality defined in these files with the corresponding import and export statements is referred to as JavaScript modules. A module can be considered as a global singleton object and is excellent for hiding implementation details. The loadAndCompileShader() function in the shader_support module serves as a great example of this concept. However, modules are not well suited for supporting abstraction and specialization. In the next sections, you will begin to work with JavaScript classes to further encapsulate portions of this example to form the basis of the game engine framework.
The previous project decomposed the drawing of a square into logical modules and implemented the modules as files containing global functions. In software engineering, this process is referred to as functional decomposition, and the implementation is referred to as procedural programming. Procedural programming often results in solutions that are well structured and easy to understand. This is why functional decomposition and procedural programming are often used to prototype concepts or to learn new techniques.
This project enhances the Draw One Square project with object-oriented analysis and programming to introduce data abstraction. As additional concepts are introduced and as the game engine complexity grows, proper data abstraction supports straightforward design, behavior specialization, and code reuse through inheritance.

Running the JavaScript Objects project
To separate the code for the game engine from the code for the game logic
To understand how to build abstractions with JavaScript classes and objects
Create separate folders to organize the source code for the game engine and the logic of the game.
Define a JavaScript class to abstract the simple_shader and work with an instance of this class.
Define a JavaScript class to implement the drawing of one square, which is the logic of your simple game for now.

Creating engine and my_game under the src folder
The src/engine folder will contain all the source code to the game engine, and the src/my_game folder will contain the source for the logic of your game. It is important to organize source code diligently because the complexity of the system and the number of files will increase rapidly as more concepts are introduced. A well-organized source code structure facilitates understanding and expansion.
The source code in the my_game folder implements the game by relying on the functionality provided by the game engine defined in the engine folder. For this reason, in this book, the source code in the my_game folder is often referred to as the client of the game engine.
A completed game engine would include many self-contained subsystems to fulfill different responsibilities. For example, you may be familiar with or have heard of the geometry subsystem for managing the geometries to be drawn, the resource management subsystem for managing images and audio clips, the physics subsystem for managing object interactions, and so on. In most cases, the game engine would include one unique instance of each of these subsystems, that is, one instance of the geometry subsystem, of the resource management subsystem, of the physics subsystem, and so on.
These subsystems will be covered in later chapters of this book. This section focuses on establishing the mechanism and organization for implementing this single-instance or singleton-like functionality based on the JavaScript module you have worked with in the previous project.
All module and instance variable names begin with an “m” and are followed by a capital letter, as in mVariable. Though not enforced by JavaScript, you should never access a module or instance variable from outside the module/class. For example, you should never access core.mGL directly; instead, call the core.getGL() function to access the variable.
Although the code in the shader_support.js file from the previous project properly implements the required functionality, the variables and functions do not lend themselves well to behavior specialization and code reuse. For example, in the cases when different types of shaders are required, it can be challenging to modify the implementation while achieving behavior and code reuse. This section follows the object-oriented design principles and defines a SimpleShader class to abstract the behaviors and hide the internal representations of shaders. Besides the ability to create multiple instances of the SimpleShader object, the basic functionality remains largely unchanged.
Module identifiers begin with lower case, for example, core or vertexBuffer. Class names begin with upper case, for example, SimpleShader or MyGame.
Create a new source file in the src/engine folder and name the file simple_shader.js to implement the SimpleShader class.
Import both the core and vertex_buffer modules:
Declare the SimpleShader as a JavaScript class:
Define the constructor within the SimpleShader class to load, compile, and link the vertex and fragment shaders into a program and to create a reference to the aVertexPosition attribute in the vertex shader for loading the square vertex positions from the WebGL vertex buffer for drawing:
Notice that this constructor is essentially the same as the init() function in the shader_support.js module from the previous project.
The JavaScript constructor keyword defines the constructor of a class.
Add a method to the SimpleShader class to activate the shader for drawing. Once again, this is similar to the activate() function in shader_support.js from the previous project.
Add a private method, which cannot be accessed from outside the simple_shader.js file, by creating a function outside the SimpleShader class to perform the actual loading and compiling functionality:
Notice that this function is identical to the one you created in shader_support.js.
The JavaScript # prefix that defines private members is not used in this book because the lack of visibility from subclasses complicates specialization of behaviors in inheritance.
Finally, add an export for the SimpleShader class such that it can be accessed and instantiated outside of this file:
The default keyword designates SimpleShader as the single default export of this module; clients import it directly, without braces around the name.
Create a copy of your core.js under the new folder src/engine.
Define a function to create a new instance of the SimpleShader object:
Modify the initWebGL() function to focus on only initializing the WebGL as follows:
Create an init() function to perform engine-wide system initialization, which includes initializing of WebGL and the vertex buffer and creating an instance of the simple shader:
Modify the clear canvas function to parameterize the color to be cleared to:
Export the relevant functions for access by the rest of the game engine:
Finally, remove the window.onload function as the behavior of the actual game should be defined by the client of the game engine or, in this case, the MyGame class.
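Putting the previous steps together, the revised core.js might take the following shape (the FragmentShader element id is an assumption, and keeping the drawSquare() function from the previous project is also an assumption):

```javascript
import * as vertexBuffer from "./vertex_buffer.js";
import SimpleShader from "./simple_shader.js";

let mGL = null;
let mShader = null;
function getGL() { return mGL; }

function initWebGL(htmlCanvasID) {
    let canvas = document.getElementById(htmlCanvasID);
    // bind mGL to the WebGL 2.0 context of the canvas
    mGL = canvas.getContext("webgl2");
    if (mGL === null) {
        document.write("<br><b>WebGL 2 is not supported!</b>");
    }
}

function createShader() {
    // ids of the shader script elements in index.html
    mShader = new SimpleShader("VertexShader", "FragmentShader");
}

function init(htmlCanvasID) {
    initWebGL(htmlCanvasID);   // the WebGL context
    vertexBuffer.init();       // the unit square's vertex buffer
    createShader();            // the one SimpleShader instance
}

function clearCanvas(color) {
    mGL.clearColor(color[0], color[1], color[2], color[3]);
    mGL.clear(mGL.COLOR_BUFFER_BIT);
}

function drawSquare() {
    mShader.activate();
    mGL.drawArrays(mGL.TRIANGLE_STRIP, 0, 4);
}

export { getGL, init, clearCanvas, drawSquare };
```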
The src/engine folder now contains the basic source code for the entire game engine. With these structural changes, the game engine can now function as a simple library, or application programming interface (API), that provides functionality for creating games. For now, your game engine consists of three files that support the initialization of WebGL and the drawing of a unit square: the core module, the vertex_buffer module, and the SimpleShader class. New source files and functionality will continue to be added to this folder throughout the remaining projects. Eventually, this folder will contain a complete and sophisticated game engine. However, the core library-like framework defined here will persist.
The src/my_game folder will contain the actual source code for the game. As mentioned, the code in this folder will be referred to as the client of the game engine. For now, the source code in the my_game folder will focus on drawing a simple square by utilizing the functionality of the simple game engine you defined.
Create a new source file in the src/my_game folder, or the client folder, and name the file my_game.js.
Import the core module as follows:
Define MyGame as a JavaScript class and add a constructor to initialize the game engine, clear the canvas, and draw the square:
Bind the creation of a new instance of the MyGame object to the window.onload function:
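A sketch of my_game.js under these assumptions (the canvas id GLCanvas and the light green clear color are illustrative):

```javascript
import * as engine from "../engine/core.js";

class MyGame {
    constructor(htmlCanvasID) {
        // initialize the game engine
        engine.init(htmlCanvasID);
        // clear the canvas (e.g., to a light green)
        engine.clearCanvas([0.9, 0.9, 0.65, 1.0]);
        // draw the unit square
        engine.drawSquare();
    }
}

window.onload = function () {
    new MyGame("GLCanvas");   // id of the canvas element in index.html
};
```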
Finally, modify the index.html to load the game client rather than the engine core.js within the head element:
Although you’re accomplishing the same tasks as with the previous project, with this project, you have created an infrastructure that supports subsequent modifications and expansions of your game engine. You have organized your source code into separate and logical folders, organized the singleton-like modules to implement core functionality of the engine, and gained experience with abstracting the SimpleShader class that will support future design and code reuse. With the engine now comprised of well-defined modules and objects with clean interface methods, you can now focus on learning new concepts, abstracting the concepts, and integrating new implementation source code into your engine.
Thus far in your projects, the GLSL shader code is embedded in the HTML source code of index.html. This organization means that new shaders must be added through the editing of the index.html file. Logically, GLSL shaders should be organized separately from HTML source files; logistically, continuously adding to index.html will result in a cluttered and unmanageable file that would become difficult to work with. For these reasons, the GLSL shaders should be stored in separate source files.

Running the Shader Source File project
To separate the GLSL shaders from the HTML source code
To demonstrate how to load the shader source code files during runtime
Continue from the previous project, open the simple_shader.js file, and edit the loadAndCompileShader() function to receive a file path instead of an HTML element ID:
Within the loadAndCompileShader() function, replace the HTML element retrieval code in step A with the following XMLHttpRequest to load a file:
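A sketch of this replacement (filePath is now the parameter of loadAndCompileShader()):

```javascript
// Step A: request the content of the file at filePath
let xmlReq = new XMLHttpRequest();
xmlReq.open('GET', filePath, false);   // false: a synchronous request
try {
    xmlReq.send();
} catch (error) {
    throw new Error("Failed to load shader: " + filePath);
}
let shaderSource = xmlReq.responseText;
if (shaderSource === null) {
    throw new Error("WARNING: Loading of " + filePath + " failed!");
}
```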
Notice that the file loading occurs synchronously: the web browser stops and waits for the request to complete and return the content of the opened file before continuing. If the file is missing, the request will fail, and the response text will be null.
This synchronous "stop and wait" for the file request to complete is inefficient and may result in slow loading of the web page. This shortcoming will be addressed in Chapter 4 when you learn about the asynchronous loading of game resources.
The XMLHttpRequest() object requires a running web server to fulfill the HTTP GET request. This means you will be able to test this project from within VS Code with the installed "Go Live" extension. However, unless there is a web server running on your machine, you will not be able to run this project by double-clicking the index.html file directly, because there would be no server to fulfill the HTTP GET requests, and the GLSL shader loading would fail.
With this modification, the SimpleShader constructor can now be modified to receive and forward file paths to the loadAndCompileShader() function instead of the HTML element IDs.
Create a new folder that will contain all of the GLSL shader source code files in the src folder, and name it glsl_shaders, as illustrated in Figure 2-10.

Creating the glsl_shaders folder
Create two new text files within the glsl_shaders folder, and name them simple_vs.glsl and white_fs.glsl for simple vertex shader and white fragment shader.
All GLSL shader source code files will end with the .glsl extension. The vs in the shader file names signifies that the file contains a vertex shader, while fs signifies a fragment shader.
Create the GLSL vertex shader source code by editing simple_vs.glsl and pasting in the vertex shader code from the index.html file of the previous project:
Create the GLSL fragment shader source code by editing white_fs.glsl and pasting in the fragment shader code from the index.html file of the previous project:
Remove all the GLSL shader code from index.html, such that this file becomes as follows:
Modify the createShader() function in core.js to load the shader files instead of HTML element IDs:
index.html: This is the file that contains the HTML code that defines the canvas on the web page for the game and loads the source code for your game.
src/glsl_shaders: This is the folder that contains all the GLSL shader source code files that draw the elements of your game.
src/engine: This is the folder that contains all the source code files for your game engine.
src/my_game: This is the client folder that contains the source code for the actual game.
With GLSL shaders being stored in separate source code files, it is now possible to edit or replace the shaders with relatively minor changes to the rest of the source code. The next project demonstrates this convenience by replacing the restrictive constant white color fragment shader, white_fs.glsl, with a shader that can be parameterized to draw with any color.

Running the Parameterized Fragment Shader project
To gain experience with creating a GLSL shader in the source code structure
To learn about the uniform variable and define a fragment shader with the color parameter
Recall that the GLSL attribute keyword identifies per-vertex data that changes for every vertex position. In contrast, the uniform keyword denotes a variable that is constant for all the vertices. The uPixelColor variable can be set from JavaScript to control the eventual pixel color. The precision mediump keywords define the floating-point precision for computations.
Floating-point precision trades the accuracy of computation for performance. Please follow the references in Chapter 1 for more information on WebGL.
Edit simple_shader.js and add a new instance variable for referencing the uPixelColor in the constructor:
Add code to the end of the constructor to create the reference to the uPixelColor:
Modify the shader activation to allow the setting of the pixel color via the uniform4fv() function:
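A sketch of these changes in simple_shader.js; only the new lines are spelled out, and the elided portions are the existing constructor and activation code:

```javascript
class SimpleShader {
    constructor(vertexShaderID, fragmentShaderID) {
        // ... existing constructor code (compile, link, aVertexPosition) ...
        // new: keep a reference to the uPixelColor uniform
        this.mPixelColorRef = gl.getUniformLocation(this.mCompiledShader, "uPixelColor");
    }

    // activate() now receives the color to draw with
    activate(pixelColor) {
        let gl = core.getGL();
        // ... existing program, vertex buffer, and aVertexPosition setup ...
        gl.uniform4fv(this.mPixelColorRef, pixelColor);
    }
}
```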
The gl.uniform4fv() function copies four floating-point values from the pixelColor float array to the WebGL location referenced by mPixelColorRef or the uPixelColor in the simple_fs.glsl fragment shader.
Notice that a color value, an array of four floats, is now required with the new simple_fs.glsl (instead of white_fs) shader and that it is important to pass in the drawing color when activating the shader. With the new simple_fs, you can now experiment with drawing the squares with any desired color.
As you have experienced in this project, the source code structure supports simple and localized changes when the game engine is expanded or modified. In this case, only changes to the simple_shader.js file and minor modifications to core.js and the my_game.js were required. This demonstrates the benefit of proper encapsulation and source code organization.
By this point, the game engine is simple and supports only the initialization of WebGL and the drawing of one colored square. However, through the projects in this chapter, you have gained experience with the techniques required to build an excellent foundation for the game engine. You have also structured the source code to support further complexity with limited modification to the existing code base, and you are now ready to further encapsulate the functionality of the game engine to facilitate additional features. The next chapter will focus on building a proper framework in the game engine to support more flexible and configurable drawings.
Create and draw multiple rectangular objects
Control the position, size, rotation, and color of the created rectangular objects
Define a coordinate system to draw from
Define a target subarea on the canvas to draw to
Work with abstract representations of Renderable objects, transformation operators, and cameras
Ideally, a video game engine should provide proper abstractions to support designing and building games in meaningful contexts. For example, when designing a soccer game, instead of a single square with a fixed ±1.0 drawing range, a game engine should provide proper utilities to support designs in the context of players running on a soccer field. This high-level abstraction requires the encapsulation of basic operations with data hiding and meaningful functions for setting and receiving the desired results.
While this book is about building abstractions for a game engine, this chapter focuses on creating the fundamental abstractions to support drawing. Based on the soccer game example, the support for drawing in an effective game engine would likely include the ability to easily create the soccer players, control their size and orientations, and allow them to be moved and drawn on the soccer field. Additionally, to support proper presentation, the game engine must allow drawing to specific subregions on the canvas so that a distinct game status can be displayed at different subregions, such as the soccer field in one subregion and player statistics and scores in another subregion.
This chapter identifies proper abstraction entities for the basic drawing operations, introduces operators that are based on foundational mathematics to control the drawing, overviews the WebGL tools for configuring the canvas to support subregion drawing, defines JavaScript classes to implement these concepts, and integrates these implementations into the game engine while maintaining the organized structure of the source code.
Although the ability to draw is one of the most fundamental functionalities of a game engine, the details of how drawings are implemented are generally a distraction to gameplay programming. For example, it is important to create, control the locations of, and draw soccer players in a soccer game. However, exposing the details of how each player is actually defined (by a collection of vertices that form triangles) can quickly overwhelm and complicate the game development process. Thus, it is important for a game engine to provide a well-defined abstraction interface for drawing operations.
With a well-organized source code structure, it is possible to gradually and systematically increase the complexity of the game engine by implementing new concepts with localized changes to the corresponding folders. The first task is to expand the engine to support the encapsulation of drawing such that it becomes possible to manipulate drawing operations as a logical entity or as an object that can be rendered.
In the context of computer graphics and video games, the word render refers to the process of changing the color of pixels corresponding to an abstract representation. For example, in the previous chapter, you learned how to render a square.

Running the Renderable Objects project
To reorganize the source code structure in anticipation of increases in functionality
To support game engine internal resource sharing
To introduce a systematic interface for the game developer via the index.js file
To begin the process of building a class to encapsulate drawing operations by first abstracting the related drawing functionality
To demonstrate the ability to create multiple Renderable objects
The core.js source code file contains the WebGL interface, engine initialization, and drawing functionalities. These should be modularized to support the anticipated increase in system complexity.
A system should be defined to support the sharing of game engine internal resources. For example, SimpleShader is responsible for interfacing from the game engine to the GLSL shader compiled from the simple_vs.glsl and simple_fs.glsl source code files. Since there is only one copy of the compiled shader, there only needs to be a single instance of the SimpleShader object. The game engine should facilitate this by allowing the convenient creation and sharing of the object.
As you have experienced, the JavaScript export statement can be an excellent tool for hiding detailed implementations. However, it is also true that determining which classes or modules to import from a large number of files can be confusing and overwhelming in a large and complex system, such as the game engine you are about to develop. An easy-to-use and systematic interface should be provided so that the game developers, the users of the game engine, can be insulated from these details.
In the following section, the game engine source code will be reorganized to address these issues.
In your project, under the src/engine folder, create a new folder and name it core. From this point forward, this folder will contain all functionality that is internal to the game engine and will not be exported to the game developers.
You can cut and paste the vertex_buffer.js source code file from the previous project into the src/engine/core folder. The details of the primitive vertices are internal to the game engine and should not be visible or accessible to the clients of the game engine.
Create a new source code file in the src/engine/core folder, name it gl.js, and define WebGL’s initialization and access methods:
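A sketch of gl.js, following the module and variable conventions above:

```javascript
let mCanvas = null;
let mGL = null;

function get() { return mGL; }

function init(htmlCanvasID) {
    mCanvas = document.getElementById(htmlCanvasID);
    if (mCanvas === null) {
        throw new Error("Engine init [" + htmlCanvasID + "] HTML element id not found");
    }
    // bind to the WebGL 2.0 context of the canvas
    mGL = mCanvas.getContext("webgl2");
    if (mGL === null) {
        document.write("<br><b>WebGL 2 is not supported!</b>");
    }
}

export { init, get };
```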
Notice that the init() function is identical to the initWebGL() function in core.js from the previous project. Unlike the previous core.js source code file, the gl.js file contains only WebGL-specific functionality.
Since only a single copy of the GLSL shader is created and compiled from the simple_vs.glsl and simple_fs.glsl source code files, only a single copy of the SimpleShader object is required within the game engine to interface with the compiled shader. You will now create a simple resource-sharing system to support future additions of different types of shaders.
Create a new source code file in the src/engine/core folder, name it shader_resources.js, and define the creation and accessing methods for SimpleShader.
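A sketch of shader_resources.js; the accessor name getConstColorShader() and the simple_fs.glsl path are assumptions:

```javascript
import SimpleShader from "../simple_shader.js";

// file paths of the GLSL shader sources
const kSimpleVS = "src/glsl_shaders/simple_vs.glsl";
const kSimpleFS = "src/glsl_shaders/simple_fs.glsl";

// the single shared SimpleShader instance
let mConstColorShader = null;

function createShaders() {
    mConstColorShader = new SimpleShader(kSimpleVS, kSimpleFS);
}

function init() { createShaders(); }
function getConstColorShader() { return mConstColorShader; }

export { init, getConstColorShader };
```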
Recall from the previous chapter that the SimpleShader class is defined in the simple_shader.js file which is located in the src/engine folder. Remember to copy all relevant source code files from the previous project.
Variables referencing constant values have names that begin with lowercase “k”, as in kSimpleVS.
Since the shader_resources module is located in the src/engine/core folder, the defined shaders are shared within the engine and cannot be accessed by the clients of the game engine.
Create index.js file in the src/engine folder; import from gl.js, vertex_buffer.js, and shader_resources.js; and define the init() function to initialize the game engine by calling the corresponding init() functions of the three imported modules:
Define the clearCanvas() function to clear the drawing canvas:
Now, to properly expose the Renderable symbol to the clients of the game engine, make sure to import it so that the class can be properly exported. The Renderable class will be introduced in detail in the next section.
Finally, remember to export the proper symbols and functionality for the clients of the game engine:
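Putting the steps above together, index.js might look like the following sketch:

```javascript
// internal modules: used here but not re-exported
import * as glSys from "./core/gl.js";
import * as vertexBuffer from "./core/vertex_buffer.js";
import * as shaderResources from "./core/shader_resources.js";
// general engine utilities exposed to the clients
import Renderable from "./renderable.js";

function init(htmlCanvasID) {
    glSys.init(htmlCanvasID);
    vertexBuffer.init();
    shaderResources.init();
}

function clearCanvas(color) {
    let gl = glSys.get();
    gl.clearColor(color[0], color[1], color[2], color[3]);
    gl.clear(gl.COLOR_BUFFER_BIT);
}

export { init, clearCanvas, Renderable };
```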
With proper maintenance and updates of this index.js file, the clients of your game engine, the game developers, can simply import from the index.js file to gain access to the entire game engine functionality without any knowledge of the source code structure. Lastly, notice that the glSys, vertexBuffer, and shaderResources internal functionality defined in the src/engine/core folder is not exported by index.js and thus is not accessible to the game developers.
Define the Renderable class in the game engine by creating a new source code file in the src/engine folder, and name the file renderable.js.
Open renderable.js, import from gl.js and shader_resources.js, and define the Renderable class with a constructor to initialize a reference to a shader and a color instance variable. Notice that the shader is a reference to the shared SimpleShader instance defined in shader_resources.
Define a draw() function for Renderable:
Define the getter and setter functions for the color instance variable:
Export the Renderable symbol as the default export of the module:
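A sketch of renderable.js based on these steps (the shared-shader accessor name is the assumption carried over from the shader_resources sketch):

```javascript
import * as glSys from "./core/gl.js";
import * as shaderResources from "./core/shader_resources.js";

class Renderable {
    constructor() {
        // reference to the single shared SimpleShader instance
        this.mShader = shaderResources.getConstColorShader();
        this.mColor = [1, 1, 1, 1];    // default color: white
    }

    draw() {
        let gl = glSys.get();
        this.mShader.activate(this.mColor);
        gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);   // the shared unit square
    }

    setColor(color) { this.mColor = color; }
    getColor() { return this.mColor; }
}

export default Renderable;
```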
Though this example is simple, it is now possible to create and draw multiple instances of the Renderable objects with different colors.
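The corresponding my_game.js might look like this sketch (the canvas id and clear color are illustrative):

```javascript
import * as engine from "../engine/index.js";

class MyGame {
    constructor(htmlCanvasID) {
        // Step A: initialize the engine
        engine.init(htmlCanvasID);

        // Step B: create two Renderable objects and set their colors
        this.mWhiteSq = new engine.Renderable();
        this.mWhiteSq.setColor([1, 1, 1, 1]);
        this.mRedSq = new engine.Renderable();
        this.mRedSq.setColor([1, 0, 0, 1]);

        // Step C: clear the canvas, then draw the two squares
        engine.clearCanvas([0.9, 0.9, 0.65, 1]);
        this.mWhiteSq.draw();   // Step C1
        this.mRedSq.draw();     // Step C2
    }
}

window.onload = function () { new MyGame("GLCanvas"); };
```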
Step A initializes the engine.
Step B creates two instances of Renderable and sets the colors of the objects accordingly.
Step C clears the canvas; steps C1 and C2 simply call the respective draw() functions of the white and red squares. Although both of the squares are drawn, for now, you are only able to see the last of the drawn squares in the canvas. Please refer to the following discussion for the details.
Run the project and you will notice that only the red square is visible! What happens is that both of the squares are drawn to the same location. Being the same size, the two squares simply overlap perfectly. Since the red square is drawn last, it overwrites all the pixels of the white square. You can verify this by commenting out the drawing of the red square (comment out the line mRedSq.draw()) and rerunning the project. An interesting observation to make is that objects that appear in the front are drawn last (the red square). You will take advantage of this observation much later when working with transparency.
This simple observation leads to your next task—to allow multiple instances of Renderable to be visible at the same time. Each Renderable instance needs to support being drawn at a different location, with a different size and orientation, so that they do not overlap one another.
A mechanism is required to manipulate the position, size, and orientation of a Renderable object. Over the next few projects, you will learn about how matrix transformations can be used to translate or move an object’s position, scale the size of an object, and change the orientation or rotate an object on the canvas. These operations are the most intuitive ones for object manipulations. However, before the implementation of transformation matrices, a quick review of the operations and capabilities of matrices is required.
Before we begin, it is important to recognize that matrices and transformations are general topic areas in mathematics. The following discussion does not attempt to include a comprehensive coverage of these subjects. Instead, the focus is on a small collection of relevant concepts and operators from the perspective of what the game engine requires. In this way, the coverage is on how to utilize the operators and not the theories. If you are interested in the specifics of matrices and how they relate to computer graphics, please refer to the discussion in Chapter 1 where you can learn more about these topics by delving into relevant books on linear algebra and computer graphics.
The translation operator T(tx,ty), as illustrated in Figure 3-2, translates or moves a given vertex position from (x, y) to (x+tx, y+ty). Notice that T(0,0) does not change the value of a given vertex position and is a convenient initial value for accumulating translation operations.

Translating a square by T(tx, ty)
The scaling operator S(sx, sy), as illustrated by Figure 3-3, scales or resizes a given vertex position from (x, y) to (x×sx, y×sy). Notice that S(1, 1) does not change the value of a given vertex position and is a convenient initial value for accumulating scaling operations.

Scaling a square by S(sx, sy)
The rotation operator R(θ), as illustrated in Figure 3-4, rotates a given vertex position with respect to the origin.

Rotating a square by R(θ)
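For reference, the three operators above can be written as standard 4×4 matrices in homogeneous form (these are the conventional definitions; they act on a vertex position represented as the 4-element column vector shown with the identity operator next):

$$ T\left(t_x,t_y\right)=\begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad S\left(s_x,s_y\right)=\begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad R\left(\theta\right)=\begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$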
The identity operator I does not affect a given vertex position. This operator is mostly used for initialization.
$$ I=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad p=\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} $$
The z component is the third dimension, or the depth information, of a vertex position. In most cases, you should leave the z component to be 0.




The M operator is a convenient and efficient way to record and reapply the results of multiple operators.

Create a new folder under the src folder, and name the new folder lib.
Go to http://glMatrix.net, as shown in Figure 3-5, and download, unzip, and store the resulting glMatrix.js source file into the new lib folder.

Downloading the glMatrix library
As a library that must be accessible by both the game engine and the client game developer, you will load the source file in the main index.html by adding the following before the loading of my_game.js:

Running the Matrix Transform project
To introduce transformation matrices as operators for drawing a Renderable
To understand how to work with the transform operators to manipulate a Renderable
As discussed, matrix transform operators operate on vertices of geometries. The vertex shader is where all vertices are passed in from the WebGL context and is the most convenient location to apply the transform operations.
Edit simple_vs.glsl to declare a uniform 4×4 matrix:
Recall from the discussion in Chapter 2 that glsl files contain OpenGL Shading Language (GLSL) instructions that will be loaded to and executed by the GPU. You can find out more about GLSL by referring to the WebGL and OpenGL references provided at the end of Chapter 1.
Recall that the uniform keyword in a GLSL shader declares a variable with values that do not change for all the vertices within that shader. In this case, the uModelXformMatrix variable is the transform operator for all the vertices.
GLSL uniform variable names always begin with lowercase “u”, as in uModelXformMatrix.
In the main() function, apply the uModelXformMatrix to the currently referenced vertex position:
Notice that the operation follows directly from the discussion on matrix transformation operators. The reason for converting aVertexPosition to a vec4 is to support the matrix-vector multiplication.
With this simple modification, the vertex positions of the unit square will be operated on by the uModelXformMatrix operator, and thus the square can be drawn to different locations. The task now is to set up SimpleShader to load the appropriate transformation operator into uModelXformMatrix.
Edit simple_shader.js and add an instance variable to hold the reference to the uModelXformMatrix matrix in the vertex shader:
At the end of the SimpleShader constructor under step E, after setting the reference to uPixelColor, add the following code to initialize this reference:
Modify the activate() function to receive a second parameter, and load the value to uModelXformMatrix via mModelMatrixRef:
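A sketch of these changes, with the elided portions standing for the existing constructor and activation code (the gl.js accessor is assumed to be imported as glSys):

```javascript
class SimpleShader {
    constructor(vertexShaderFilePath, fragmentShaderFilePath) {
        // ... existing constructor code ...
        // new: keep a reference to the uModelXformMatrix uniform
        this.mModelMatrixRef = gl.getUniformLocation(this.mCompiledShader, "uModelXformMatrix");
    }

    // activate() now also receives the transform operator
    activate(pixelColor, trsMatrix) {
        let gl = glSys.get();
        // ... existing program, vertex buffer, and uPixelColor setup ...
        // copy trsMatrix into uModelXformMatrix (false: do not transpose)
        gl.uniformMatrix4fv(this.mModelMatrixRef, false, trsMatrix);
    }
}
```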
The gl.uniformMatrix4fv() function copies the values from trsMatrix to the vertex shader location identified by this.mModelMatrixRef, or the uModelXformMatrix operator in the vertex shader. The name of the variable, trsMatrix, signifies that it should be a matrix operator containing the concatenated result of translation (T), rotation (R), and scaling (S), or TRS.
In this way, when the vertices of the unit square are processed by the vertex shader, the uModelXformMatrix will contain the proper operator for transforming the vertices and thus drawing the square at the desired location, size, and rotation.
Edit my_game.js; after step C, instead of activating and drawing the two squares, replace steps C1 and C2 to create a new identity transform operator, trsMatrix:
Compute the concatenation of matrices to a single transform operator that implements translation (T), rotation (R), and scaling (S) or TRS:
Finally, step F defines the trsMatrix operator to draw a 0.4×0.4 square that is rotated by 45 degrees and located slightly toward the lower right from the center of the canvas, and step G draws the red square:
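A sketch of the red square's operator in step F using glMatrix (the translation values and rotation sign are illustrative; 45 degrees is about 0.785 radians, and the final draw call assumes the operator is forwarded to the shader's activate()):

```javascript
// Step F: a 0.4x0.4 square, rotated by 45 degrees, slightly toward the lower right
let trsMatrix = mat4.create();                                            // identity
mat4.translate(trsMatrix, trsMatrix, vec3.fromValues(0.25, -0.25, 0.0));  // T
mat4.rotateZ(trsMatrix, trsMatrix, -0.785);                               // R (in radians)
mat4.scale(trsMatrix, trsMatrix, vec3.fromValues(0.4, 0.4, 1.0));         // S
// Step G: draw the red square with this operator
this.mRedSq.draw(trsMatrix);
```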
Run the project, and you should see the corresponding white and red rectangles drawn on the canvas. You can gain some intuition about the operators by changing the values; for example, move and scale the squares to different locations with different sizes. You can also try changing the order of concatenation by moving the corresponding line of code; for example, move mat4.scale() to before mat4.translate(). You will notice that, in general, results produced by other orderings do not correspond to your intuition. In this book, you will always apply the transformation operators in the fixed TRS order, because this ordering corresponds to typical human intuition. The TRS operation order is followed by most, if not all, graphical APIs and applications that support transformation operations.
Now that you understand how to work with the matrix transformation operators, it is time to abstract them and hide their details.
In the previous project, the transformation operators were computed directly with matrix operations. While the results were as intended, the computation involved distracting details and repetitive code. This project guides you in following good coding practices to encapsulate the transformation operators by hiding the detailed computations within a class. In this way, you can maintain the modularity and accessibility of the game engine, supporting further expansion while maintaining programmability.

Running the Transform Objects project
To create the Transform class to encapsulate the matrix transformation functionality
To integrate the Transform class into the game engine
To demonstrate how to work with Transform objects
Define the Transform class in the game engine by creating a new source code file in the src/engine folder, and name the file transform.js.
Define the constructor to initialize instance variables that correspond to the operators: mPosition for translation, mScale for scaling, and mRotationInRad for rotation.
Add getters and setters for the values of each operator:
Define the getTRSMatrix() function to compute and return the concatenated transform operator, TRS:
Finally, remember to export the newly defined Transform class:
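Putting the steps together, transform.js might look like this sketch (the getter and setter names are assumptions, and glMatrix is assumed to be loaded globally as in the previous project):

```javascript
class Transform {
    constructor() {
        this.mPosition = vec2.fromValues(0, 0);  // translation
        this.mScale = vec2.fromValues(1, 1);     // width and height
        this.mRotationInRad = 0.0;               // rotation in radians
    }

    setPosition(x, y) { this.mPosition[0] = x; this.mPosition[1] = y; }
    getPosition() { return this.mPosition; }
    setSize(w, h) { this.mScale[0] = w; this.mScale[1] = h; }
    setRotationInRad(r) { this.mRotationInRad = r; }

    getTRSMatrix() {
        let matrix = mat4.create();
        // concatenate in the fixed T-R-S order
        mat4.translate(matrix, matrix,
            vec3.fromValues(this.mPosition[0], this.mPosition[1], 0.0));
        mat4.rotateZ(matrix, matrix, this.mRotationInRad);
        mat4.scale(matrix, matrix,
            vec3.fromValues(this.mScale[0], this.mScale[1], 1.0));
        return matrix;
    }
}

export default Transform;
```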
Edit renderable.js and add a new instance variable to reference a Transform object in the constructor:
Define an accessor for the transform operator:
Modify the draw() function to pass the trsMatrix operator of the mXform object to activate the shader before drawing the unit square:
With this simple modification, Renderable objects will be drawn with characteristics defined by the values of its own transformation operators.
Edit index.js; import from the newly defined transform.js file:
Export Transform for client access:
Run the project to observe identical output as from the previous project. You can now create and draw a Renderable at any location in the canvas, and the transform operator has now been properly encapsulated.
When designing and building a video game, the game designers and programmers must be able to focus on the intrinsic logic and presentation. To facilitate these aspects, it is important that the designers and programmers can formulate solutions in a convenient dimension and space.
For example, continuing with the soccer game idea, consider the task of creating a soccer field. How big is the field? What is the unit of measurement? In general, when building a game world, it is often easier to design a solution by referring to the real world. In the real world, soccer fields are around 100 meters long. However, in the game or graphics world, units are arbitrary. So, a simple solution may be to create a field that is 100 units long, where one unit represents one meter, and a coordinate space where the origin is located at the center of the soccer field. In this way, opposing sides of the field can simply be determined by the sign of the x value, and drawing a player at location (0, 1) would mean drawing the player 1 meter above the center of the soccer field.
A contrasting example would be when building a chess-like board game. It may be more convenient to design the solution based on a unitless n×n grid with the origin located at the lower-left corner of the board. In this scenario, drawing a piece at location (0, 1) would mean drawing the piece one cell, or unit, up from the lower-left corner of the board. As will be discussed, the ability to define specific coordinate systems is often accomplished by computing and working with a matrix representing the view from a camera.
In all cases, to support a proper presentation of the game, it is important to allow the programmer to control the drawing of the contents to any location on the canvas. For example, you may want to draw the soccer field and players to one subregion and draw a mini-map into another subregion. These axis-aligned rectangular drawing areas or subregions of the canvas are referred to as viewports.
In this section, you will learn about coordinate systems and how to use the matrix transformation as a tool to define a drawing area that conforms to the fixed ±1 drawing range of the WebGL.

Working with a 2D Cartesian coordinate system
So far in this book, you have worked with two distinct coordinate systems. The first is the coordinate system that defines the vertices of the 1×1 square in the vertex buffer. This is referred to as the Modeling Coordinate System, which defines the Model Space. The Model Space is unique for each geometric object, as in the case of the unit square; it is defined to describe the geometry of a single model. The second coordinate system you have worked with is the one that WebGL draws to, where the x- and y-axis ranges are bounded to ±1.0. This is known as the Normalized Device Coordinate (NDC) System. As you have experienced, WebGL always draws to the NDC space, and the contents in the ±1.0 range cover all the pixels in the canvas.

Transforming the square from Model to NDC space
Although it is possible to draw to any location with the Modeling transform, the disproportional scaling that draws squares as rectangles is still a problem. In addition, the fixed -1.0 and 1.0 NDC space is not a convenient coordinate space for designing games. The World Coordinate (WC) System describes a convenient World Space that resolves these issues. For convenience and readability, in the rest of this book, WC will also be used to refer to the World Space that is defined by a specific World Coordinate System.

Working with a World Coordinate (WC) System

In this case, (center.x, center.y) and WxH are the center and the dimension of the WC system.

Working with the WebGL viewport

Running the Camera Transform and Viewport project
To understand the different coordinate systems
To experience working with a WebGL viewport to define and draw to different subregions within the canvas
To understand the Camera transform
To begin drawing to the user-defined World Coordinate System
You are now ready to modify the game engine to support the Camera transform to define your own WC and the corresponding viewport for drawing. The first step is to modify the shaders to support a new transform operator.
Edit simple_vs.glsl to add a new uniform matrix operator to represent the Camera transform:
Make sure to apply the operator on the vertex positions in the vertex shader program:
Recall that the order of matrix operations is important. In this case, the uModelXformMatrix first transforms the vertex positions from Model Space to WC, and then the uCameraXformMatrix transforms from WC to NDC. The order of uModelXformMatrix and uCameraXformMatrix cannot be switched.
Edit simple_shader.js and, in the constructor, add an instance variable for storing the reference to the Camera transform operator in simple_vs.glsl:
At the end of the SimpleShader constructor, retrieve the reference to the Camera transform operator, uCameraXformMatrix, after retrieving those for the uModelXformMatrix and uPixelColor:
Modify the activate function to receive a Camera transform matrix and pass it to the shader:
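A sketch of these simple_shader.js changes (mCameraMatrixRef is an assumed name; elided portions stand for the existing code):

```javascript
class SimpleShader {
    constructor(vertexShaderFilePath, fragmentShaderFilePath) {
        // ... existing constructor code ...
        // new: keep a reference to the uCameraXformMatrix uniform
        this.mCameraMatrixRef = gl.getUniformLocation(this.mCompiledShader, "uCameraXformMatrix");
    }

    // activate() now also receives the Camera transform matrix
    activate(pixelColor, trsMatrix, cameraMatrix) {
        let gl = glSys.get();
        // ... existing uPixelColor and uModelXformMatrix setup ...
        gl.uniformMatrix4fv(this.mCameraMatrixRef, false, cameraMatrix);
    }
}
```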
As you have seen previously, the gl.uniformMatrix4fv() function copies the content of cameraMatrix to the uCameraXformMatrix operator.
It is now possible to set up a WC for drawing and define a subarea in the canvas to draw to.

Designing a WC to support drawing

Drawing the WC to the viewport
Note that the details of the WC, centered at (20, 60) with dimension 20x10, and the viewport, lower-left corner at (20, 40) and dimension of 600x300, are chosen rather randomly. These are simply reasonable values that can demonstrate the correctness of the implementation.
Edit my_game.js. In the constructor, perform step A to initialize the game engine and step B to create six Renderable objects (two to be drawn at the center of the WC and one at each of its four corners) with corresponding colors.
Steps C and D clear the entire canvas, set up the viewport, and clear the viewport to a different color:
Step E defines the WC with the Camera transform by concatenating the proper scaling and translation operators; a sketch of steps D and E follows the list of WC bounds below:
Center: (20,60)
Top-left corner: (10, 65)
Top-right corner: (30, 65)
Bottom-right corner: (30, 55)
Bottom-left corner: (10, 55)
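Given the WC above (center (20, 60), dimension 20×10) and the 600×300 viewport at (20, 40), steps D and E might look like the following sketch (the clear color is illustrative, and the scissor test is one way to restrict the clear to the viewport area):

```javascript
const gl = glSys.get();   // assumption: the WebGL context accessor from gl.js

// Step D: set up the viewport and clear it (the scissor test limits the clear)
gl.viewport(20, 40, 600, 300);   // lower-left corner (20, 40), 600x300 pixels
gl.scissor(20, 40, 600, 300);
gl.enable(gl.SCISSOR_TEST);
gl.clearColor(0.8, 0.8, 0.8, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.disable(gl.SCISSOR_TEST);

// Step E: Camera transform for the WC centered at (20, 60) with dimension 20x10:
// conceptually, translate the WC center to the origin and scale into the +/-1.0 NDC range
let cameraMatrix = mat4.create();
mat4.scale(cameraMatrix, cameraMatrix, vec3.fromValues(2.0 / 20, 2.0 / 10, 1.0));
mat4.translate(cameraMatrix, cameraMatrix, vec3.fromValues(-20, -60, 0));
```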
Set up the slightly rotated 5x5 blue square at the center of WC, and draw with the Camera transform operator, cameraMatrix:
Now draw the other five squares, first the 2x2 in the center and one each at a corner of the WC:
Run this project and observe the distinct colors at the four corners: the top left (mTLSq) in red, the top right (mTRSq) in green, the bottom right (mBRSq) in blue, and the bottom left (mBLSq) in dark gray. Change the locations of the corner squares to verify that the center positions of these squares are located at the bounds of the WC and, thus, that only one quarter of each square is actually visible. For example, set mBLSq to (12, 57) to observe that the dark gray square is actually four times the size of what was previously visible. This observation verifies that the areas of the squares outside of the viewport/scissor area are clipped by WebGL.
Although lacking proper abstraction, it is now possible to define any convenient WC system and any rectangular subregions of the canvas for drawing. With the Modeling and Camera transformations, a game programmer can now design a game solution based on the semantic needs of the game and ignore the irrelevant WebGL NDC drawing range. However, the code in the MyGame class is complicated and can be distracting. As you have seen so far, the important next step is to define an abstraction to hide the details of Camera transform matrix computation.
The Camera transform allows the definition of a WC. In the physical world, this is analogous to taking a photograph with the camera. The center of the viewfinder of your camera is the center of the WC, and the width and height of the world visible through the viewfinder are the dimensions of WC. With this analogy, the act of taking the photograph is equivalent to computing the drawing of each object in the WC. Lastly, the viewport describes the location to display the computed image.

Running the Camera Objects project
To define the Camera class to encapsulate the definition of WC and the viewport functionality
To integrate the Camera class into the game engine
To demonstrate how to work with a Camera object
Define the Camera class in the game engine by creating a new source file in the src/engine folder, and name the file camera.js.
Add the constructor for Camera:
The Camera defines the WC center and width, the viewport, the Camera transform operator, and a background color. Take note of the following:
The mWCCenter is a vec2 (vec2 is defined in the glMatrix library). It is a float array of two elements: the first element, at index position 0, is the x position, and the second element, at index position 1, is the y position.
The four elements of the viewportArray are the x and y positions of the lower-left corner and the width and height of the viewport, in that order. This compact representation of the viewport keeps the number of instance variables to a minimum and helps keep the Camera class manageable.
The mWCWidth is the width of the WC. To guarantee a matching aspect ratio between WC and the viewport, the height of the WC is always computed from the aspect ratio of the viewport and mWCWidth.
mBgColor is an array of four floats representing the red, green, blue, and alpha components of a color.
Outside of the Camera class definition, define enumerated indices for accessing the viewportArray:
Enumerated elements have names that begin with lowercase “e”, as in eViewport and eOrgX.
Define the function to compute the WC height based on the aspect ratio of the viewport:
Add getters and setters for the instance variables:
Create a function to set the viewport and compute the Camera transform operator for this Camera:
The code to configure the viewport under step A is as follows:
The code to set up the Camera transform operator under step B is as follows:
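A combined sketch of steps A and B (the getter names, the mViewport instance variable, and the remaining eViewport index names are assumptions consistent with the constructor described above):

```javascript
setViewAndCameraMatrix() {
    let gl = glSys.get();

    // Step A: configure the viewport and clear it to the background color
    gl.viewport(this.mViewport[eViewport.eOrgX], this.mViewport[eViewport.eOrgY],
                this.mViewport[eViewport.eWidth], this.mViewport[eViewport.eHeight]);
    gl.scissor(this.mViewport[eViewport.eOrgX], this.mViewport[eViewport.eOrgY],
               this.mViewport[eViewport.eWidth], this.mViewport[eViewport.eHeight]);
    gl.clearColor(this.mBgColor[0], this.mBgColor[1],
                  this.mBgColor[2], this.mBgColor[3]);
    gl.enable(gl.SCISSOR_TEST);
    gl.clear(gl.COLOR_BUFFER_BIT);
    gl.disable(gl.SCISSOR_TEST);

    // Step B: compute the Camera transform operator from the WC center and size
    let center = this.getWCCenter();
    mat4.scale(this.mCameraMatrix, mat4.create(),
        vec3.fromValues(2.0 / this.getWCWidth(), 2.0 / this.getWCHeight(), 1.0));
    mat4.translate(this.mCameraMatrix, this.mCameraMatrix,
        vec3.fromValues(-center[0], -center[1], 0));
}
```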
Define a function to access the computed camera matrix:
Finally, remember to export the newly defined Camera class:
Edit index.js; import from the newly defined camera.js file:
Export Camera for client access:
Edit my_game.js; after the initialization of the game engine in step A, create an instance of the Camera object with settings that define the WC and viewport from the previous project in step B:
Continue with the creation of the six Renderable objects and the clearing of the canvas in steps C and D:
Now, call the setViewAndCameraMatrix() function of the Camera object to configure the WebGL viewport and compute the camera matrix in step E, and draw all the Renderables using the Camera object in steps F and G.
The mCamera object is passed to the draw() function of the Renderable objects such that the Camera transform matrix operator can be retrieved and used to activate the shader.
In this chapter, you learned how to create a system that can support the drawing of many objects. The system is composed of three parts: the objects, the details of each object, and the display of the objects on the browser’s canvas. The objects are encapsulated by the Renderable, which uses a Transform to capture its details—the position, size, and rotation. The particulars of displaying the objects are defined by the Camera, where objects at specific locations can be displayed at desirable subregions on the canvas.
You also learned that objects are all drawn relative to a World Space or WC, a convenient coordinate system. A WC is defined for scene compositions based on coordinate transformations. Lastly, the Camera transform is used to select which portion of the WC to actually display on the canvas within a browser. This can be achieved by defining an area that is viewable by the Camera and using the viewport functionality provided by WebGL.
As you built the drawing system, the game engine source code structure has been consistently refactored into abstracted and encapsulated components. In this way, the source code structure continues to support further expansion including additional functionality which will be discussed in the next chapter.
Control the position, size, and rotation of Renderable objects to construct complex movements and animations
Receive keyboard input from the player to control and animate Renderable objects
Work with asynchronous loading and unloading of external assets
Define, load, and execute a simple game level from a scene file
Change game levels by loading a new scene
Work with sound clips for background music and audio cues
In the previous chapters, a skeletal game engine was constructed to support basic drawing operations. Drawing is the first step to constructing your game engine because it allows you to observe the output while continuing to expand the game engine functionality. In this chapter, the two important mechanisms, interactivity and resource support, will be examined and added to the game engine. Interactivity allows the engine to receive and interpret player input, while resource support refers to the functionality of working with external files like the GLSL shader source code files, audio clips, and images.
This chapter begins by introducing you to the game loop, a critical component that creates the sensation of real-time interaction and immediacy in nearly all video games. Based on the game loop foundation, player keyboard input will be supported via integrating the corresponding HTML5 functionality. A resource management infrastructure will be constructed from the ground up to support the efficient loading, storing, retrieving, and utilization of external files. Functionality for working with external text files (e.g., the GLSL shader source code files) and audio clips will be integrated with corresponding example projects. Additionally, game scene architecture will be derived to support the ability to work with multiple scenes and scene transitions, including scenes that are defined in external scene files. By the end of this chapter, your game engine will support player interaction via the keyboard, have the ability to provide audio feedback, and be able to transition between distinct game levels including loading a level from an external file.
One of the most basic operations of any video game is the support of seemingly instantaneous interactions between the players’ input and the graphical gaming elements. In reality, these interactions are implemented as a continuous running loop that receives and processes player input, updates the game state, and renders the game. This constantly running loop is referred to as the game loop.
To convey the proper sense of instantaneity, each cycle of the game loop must be completed within an average person’s reaction time. This is often referred to as real time, which is the amount of time that is too short for humans to detect visually. Typically, real time can be achieved when the game loop is running at a rate of higher than 40–60 cycles in a second. Since there is usually one drawing operation in each game loop cycle, the rate of this cycle is also referred to as frames per second (FPS) or the frame rate. An FPS of 60 is a good target for performance. This is to say, your game engine must receive player input, update the game world, and then draw the game world all within 1/60th of a second!
The game loop itself, including the implementation details, is the most fundamental control structure for a game. With the main goal of maintaining real-time performance, the details of a game loop’s operation are of no concern to the rest of the game engine. For this reason, the implementation of a game loop should be tightly encapsulated in the core of the game engine with its detailed operations hidden from other gaming elements.
In the previous pseudocode listing, UPDATE_TIME_RATE is the required real-time update rate. When the elapsed time between game loop cycles is greater than the UPDATE_TIME_RATE, update() will be called repeatedly until it has caught up. This means that the draw() operation is essentially skipped when the game loop is running too slowly. When this happens, the entire game will appear to run slowly, with lagging responses to gameplay input and skipped frames. However, the game logic will continue to function correctly.
Notice that the while loop that encompasses the update() function call simulates a fixed update time step of UPDATE_TIME_RATE. This fixed time step update allows for a straightforward implementation in maintaining a deterministic game state. This is an important component to make sure your game engine functions as expected whether running optimally or slowly.
To ensure the focus is solely on the understanding of the core game loop’s draw and update operations, input will be ignored until the next project.

Running the Game Loop project
To understand the internal operations of a game loop
To implement and encapsulate the operations of a game loop
To gain experience with continuous draw and update to create animation
Create a new file for the loop module in the src/engine/core folder and name the file loop.js.
Define the following instance variables to keep track of frame rate, processing time in milliseconds per frame, the game loop’s current run state, and a reference to the current scene as follows:
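A sketch of these declarations is shown below; kUPS and kMPF are the names referenced in the following discussion, while the remaining names (mPrevTime, mLagTime, mLoopRunning, mCurrentScene, and mFrameID) are assumed for illustration.

const kUPS = 60;                 // updates per second
const kMPF = 1000 / kUPS;        // milliseconds per update frame

// variables for timing the game loop
let mPrevTime;
let mLagTime;

// the current loop state and the scene (game) being run
let mLoopRunning = false;
let mCurrentScene = null;
let mFrameID = -1;               // id returned by requestAnimationFrame()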
Notice that kUPS is the number of updates per second, analogous to the FPS discussed previously, and it is set to 60, that is, 60 updates per second. The time available for each update is simply 1/60 of a second. Since there are 1000 milliseconds in a second, the available time for each update in milliseconds is 1000 * (1/60), or kMPF.
When the game is running optimally, frame drawing and updates are both maintained at the same rate; FPS and kUPS can be thought of interchangeably. However, when lag occurs, the loop skips frame drawing and prioritizes updates. In this case, FPS will decrease, while kUPS will be maintained.
Add a function to run the core loop as follows:
The performance.now() is a JavaScript function that returns a timestamp in milliseconds.
Notice the similarity between the pseudocode examined previously and steps B, C, and D of the loopOnce() function, that is, the drawing of the scene or game in step B, the calculation of the elapsed time since the last update in step C, and the prioritization of updates when the engine is lagging behind in step D.
The main difference is that the outermost while loop is implemented based on the HTML5 requestAnimationFrame() function call at step A. The requestAnimationFrame() function will, at an approximated rate of 60 times per second, invoke the function pointer that is passed in as its parameter. In this case, the loopOnce() function will be called continuously at approximately 60 times per second. Notice that each call to the requestAnimationFrame() function will result in exactly one execution of the corresponding loopOnce() function and thus draw only once. However, if the system is lagging, multiple updates can occur during this single frame.
The requestAnimationFrame() function is an HTML5 utility provided by the browser that hosts your game. The precise behavior of this function is browser implementation dependent.
The mLoopRunning condition of the while loop in step D is a redundant check for now. This condition will become important in later sections when update() can call stop() to stop the loop (e.g., for level transitions or the end of the game).
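A sketch of loopOnce(), consistent with steps A through D as described and assuming the module variables declared earlier:

function loopOnce() {
    if (mLoopRunning) {
        // Step A: request that loopOnce() be called again, at approximately 60 fps
        mFrameID = requestAnimationFrame(loopOnce);

        // Step B: draw the scene (exactly once per frame)
        mCurrentScene.draw();

        // Step C: compute the time elapsed since the last loopOnce() invocation
        let currentTime = performance.now();
        let elapsedTime = currentTime - mPrevTime;
        mPrevTime = currentTime;
        mLagTime += elapsedTime;

        // Step D: update the game the appropriate number of times (once per kMPF),
        //         repeating until the accumulated lag is consumed
        while ((mLagTime >= kMPF) && mLoopRunning) {
            mCurrentScene.update();
            mLagTime -= kMPF;
        }
    }
}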
Declare a function to start the game loop. This function initializes the game or scene, the frame time variables, and the loop running flag before calling the first requestAnimationFrame() with the loopOnce function as its parameter to begin the game loop.
Declare a function to stop the game loop. This function simply stops the loop by setting mLoopRunning to false and cancels the last requested animation frame.
Lastly, remember to export the desired functionality to the rest of the game engine, in this case just the start and stop functions:
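Sketches of these start(), stop(), and export steps, under the same naming assumptions as earlier; here the game object passed to start() is stored in mCurrentScene:

function start(game) {
    mCurrentScene = game;
    mCurrentScene.init();            // initialize the game or scene

    mPrevTime = performance.now();   // reset the frame time variables
    mLagTime = 0.0;
    mLoopRunning = true;
    mFrameID = requestAnimationFrame(loopOnce);
}

function stop() {
    mLoopRunning = false;
    cancelAnimationFrame(mFrameID);  // cancel the last requested animation frame
}

export { start, stop };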
Edit your my_game.js file to provide access to the loop by importing from the module. Allowing game developer access to the game loop module is a temporary measure and will be corrected in later sections.
Replace the MyGame constructor with the following:
Add an initialization function to set up a camera and two Renderable objects:
Draw the scene as before by clearing the canvas, setting up the camera, and drawing each square:
Add an update() function to animate a moving white square and a pulsing red square:
Recall that the update() function is called at about 60 times per second, and each time the following happens:
Step A for the white square: Increase the rotation by 1 degree, increase the x position by 0.05, and reset to 10 if the resulting x position is greater than 30.
Step B for the red square: Increase the size by 0.05 and reset it to 2 if the resulting size is greater than 5.
A white square rotating while moving toward the right and upon reaching the right boundary wrapping around to the left boundary
A red square increasing in size and reducing to a size of 2 when the size reaches 5, thus appearing to be pulsing
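A sketch of this update() function, assuming the Transform accessors (getXform(), incXPosBy(), incRotationByDegree(), incSizeBy(), and so on) from earlier chapters; the member names mWhiteSq and mRedSq and the y coordinate value used when wrapping are illustrative assumptions:

update() {
    // Step A: move and rotate the white square
    let whiteXform = this.mWhiteSq.getXform();
    if (whiteXform.getXPos() > 30) {      // right boundary reached
        whiteXform.setPosition(10, 60);   // wrap back to the left (y value assumed)
    }
    whiteXform.incXPosBy(0.05);
    whiteXform.incRotationByDegree(1);

    // Step B: pulse the red square
    let redXform = this.mRedSq.getXform();
    if (redXform.getWidth() > 5) {
        redXform.setSize(2, 2);
    }
    redXform.incSizeBy(0.05);
}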
Start the game loop from the window.onload function. Notice that a reference to an instance of MyGame is passed to the loop.
You can now run the project to observe the rightward-moving, rotating white square and the pulsing red square. You can control the rate of the movement, rotation, and pulsing by changing the corresponding values of the incXPosBy(), incRotationByDegree(), and incSizeBy() functions. In these cases, the positional, rotational, and size values are changed by a constant amount in a fixed time interval. In effect, the parameters to these functions are rates of change, or speeds: for example, incXPosBy(0.05) specifies a rightward speed of 0.05 units per 1/60th of a second, or 3 units per second. In this project, the width of the world is 20 units, and with the white square traveling at 3 units per second, you can verify that it takes slightly more than 6 seconds for the white square to travel from the left to the right boundary.
To keep the focus on clearly describing each component of the game engine and illustrating how these components interact, the game loop in this book does not support extrapolation of object state in the draw() function.
It is obvious that proper support to receive player input is important to interactive video games. For a typical personal computing device such as a PC or a Mac, the two common input devices are the keyboard and the mouse. While keyboard input is received in the form of a stream of characters, mouse input is packaged with positional information and is related to camera views. For this reason, keyboard input is more straightforward to support at this point in the development of the engine. This section will introduce and integrate keyboard support into your game engine. Mouse input will be examined in the Mouse Input project of Chapter 7, after the coverage of supporting multiple cameras in the same game.

Running the Keyboard Support project
Right-arrow key: Moves the white square toward the right and wraps it to the left of the game window
Up-arrow key: Rotates the white square
Down-arrow key: Increases the size of the red square and then resets the size at a threshold
To implement an engine component to receive keyboard input
To understand the difference between key state (if a key is released or pressed) and key event (when the key state changes)
To understand how to integrate the input component in the game loop
Create a new file in the src/engine folder and name it input.js.
Define a JavaScript dictionary to capture the key code mapping:
Key codes are unique numbers representing each keyboard character. Note that there are up to 222 unique keys. In the listing, only a small subset of the keys, those that are relevant to this project, are defined in the dictionary.
Key codes for the letters of the alphabet are contiguous, starting from 65 for A and ending with 90 for Z. You should feel free to add any characters needed for your own game engine. For a complete list of key codes, see www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes.
Create array instance variables for tracking the states of every key:
All three arrays define the state of every key as a boolean. The mKeyPreviousState records the key states from the previous update cycle, and the mIsKeyPressed records the current state of the keys. The key code entries of these two arrays are true when the corresponding keyboard keys are pressed, and false otherwise. The mIsKeyClicked array captures key click events. The key code entries of this array are true only when the corresponding keyboard key goes from being released to being pressed in two consecutive update cycles.
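A sketch of the key-code dictionary and the three state arrays described above; only a few keys are shown, and kKeyCount is an assumed name for the total number of key codes tracked:

const keys = {
    // arrow keys
    Left: 37, Up: 38, Right: 39, Down: 40,

    // a few commonly used keys (extend as needed)
    Space: 32,
    Q: 81, W: 87, A: 65, S: 83, D: 68
};

const kKeyCount = 222;          // number of key codes tracked

// previous state, current pressed state, and derived click events
let mKeyPreviousState = [];     // key states from the previous update cycle
let mIsKeyPressed = [];         // current key states
let mIsKeyClicked = [];         // true only on a released-to-pressed transition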
Define functions to capture the actual keyboard state changes:
Add a function to initialize all the key states, and register the key event handlers with the browser. The window.addEventListener() function registers the onKeyUp/Down() event handlers with the browser such that the corresponding functions will be called when the player presses or releases keys on the keyboard.
Add an update() function to derive the key click events. The update() function uses mIsKeyPressed and mKeyPreviousState to determine whether a key clicked event has occurred.
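Sketches of the onKeyDown()/onKeyUp() handlers, the init() registration, and the update() derivation of click events, consistent with the descriptions above:

function onKeyDown(event) {
    mIsKeyPressed[event.keyCode] = true;
}

function onKeyUp(event) {
    mIsKeyPressed[event.keyCode] = false;
}

function init() {
    // initialize all key states to released
    for (let i = 0; i < kKeyCount; i++) {
        mIsKeyPressed[i] = false;
        mKeyPreviousState[i] = false;
        mIsKeyClicked[i] = false;
    }
    // register the handlers with the browser
    window.addEventListener("keyup", onKeyUp);
    window.addEventListener("keydown", onKeyDown);
}

function update() {
    for (let i = 0; i < kKeyCount; i++) {
        // a click event occurs only when a key goes from released to pressed
        mIsKeyClicked[i] = (!mKeyPreviousState[i]) && mIsKeyPressed[i];
        mKeyPreviousState[i] = mIsKeyPressed[i];
    }
}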
Add public functions for inquiries into the current keyboard states to support the client game developer:
Finally, export the public functions and key constants:
Input state initialization: Modify index.js by importing the input.js module, adding the initialization of the input to the engine init() function, and adding the input module to the exported list to allow access from the client game developer.
To accurately capture keyboard state changes, the input component must be integrated with the core of the game loop. Include the input’s update() function in the core game loop by adding the following lines to loop.js. Notice the rest of the code is identical.
In the previous code, step A ensures that pressing and holding the right-arrow key will move the white square toward the right. Step B checks for the pressing and then the releasing of the up-arrow key event. The white square is rotated when such an event is detected. Notice that pressing and holding the up-arrow key generates only a single key-click event and thus will not cause the white square to rotate continuously. Step C tests for the pressing and holding of the down-arrow key to pulse the red square.
You can run the project and include additional controls for manipulating the squares. For example, include support for the WASD keys to control the location of the red square. Notice once again that by increasing/decreasing the position change amount, you are effectively controlling the speed of the object’s movement.
The term “WASD keys” is used to refer to the key binding of the popular game controls: key W to move upward, A leftward, S downward, and D rightward.
Video games typically utilize a multitude of artistic assets, or resources, including audio clips and images. The required resources to support a game can be large. Additionally, it is important to maintain the independence between the resources and the actual game such that they can be updated independently, for example, changing the background audio without changing the game itself. For these reasons, game resources are typically stored externally on a system hard drive or a server across the network. Being stored external to the game, the resources are sometimes referred to as external resources or assets.
After a game begins, external resources must be explicitly loaded. For efficient memory utilization, a game should load and unload resources dynamically based on necessity. However, loading external resources may involve input/output device operations or network packet latencies and thus can be time intensive and potentially affect real-time interactivity. For these reasons, at any instance in a game, only a portion of resources are kept in memory, where the loading operations are strategically executed to avoid interrupting the game. In most cases, resources required in each level are kept in memory during the gameplay of that level. With this approach, external resource loading can occur during level transitions where players are expecting a new game environment and are more likely to tolerate slight delays for loading.
Once loaded, a resource must be readily accessible to support interactivity. The efficient and effective management of resources is essential to any game engine. Take note of the clear differentiation between resource management, which is the responsibility of a game engine, and the actual ownerships of the resources. For example, a game engine must support the efficient loading and playing of the background music for a game, and it is the game (or client of the game engine) that actually owns and supplies the audio file for the background music. When implementing support for external resource management, it is important to remember that the actual resources are not part of the game engine.
At this point, the game engine you have been building handles only one type of resource—the GLSL shader files. Recall that the SimpleShader object loads and compiles the simple_vs.glsl and simple_fs.glsl files in its constructor. So far, the shader file loading has been accomplished via synchronous XMLHttpRequest.open(). This synchronous loading is an example of inefficient resource management because no operations can occur while the browser attempts to open and load a shader file. An efficient alternative would be to issue an asynchronous load command and allow additional operations to continue while the file is being opened and loaded.
This section builds an infrastructure to support asynchronous loading and efficient accessing of the loaded resources. Based on this infrastructure, over the next few projects, the game engine will be expanded to support batch resource loading during scene transitions.

Running the Resource Map and Shader Loader project
Right-arrow key: Moves the white square toward the right and wraps it to the left of the game window
Up-arrow key: Rotates the white square
Down-arrow key: Increases the size of the red square and then resets the size at a threshold
To understand the handling of asynchronous loading
To build an infrastructure that supports future resource loading and accessing
To experience asynchronous resource loading via loading of the GLSL shader files
For more information about asynchronous JavaScript operations, you can refer to many excellent resources online, for example, https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Asynchronous.
Create a new file in the src/engine/core folder and name it resource_map.js.
Define the MapEntry class to support reference counting of loaded resources. Reference counting is essential to avoid multiple loading or premature unloading of a resource.
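A sketch of a reference-counted MapEntry; the accessor names are assumptions:

class MapEntry {
    constructor(data) {
        this.mData = data;
        this.mRefCount = 1;     // one client is currently using this resource
    }

    decRef() { this.mRefCount -= 1; }
    incRef() { this.mRefCount += 1; }

    set(data) { this.mData = data; }
    data() { return this.mData; }

    canRemove() { return (this.mRefCount === 0); }
}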
Define a key-value pair map, mMap, for storing and retrieving resources and an array, mOutstandingPromises, to capture all outstanding asynchronous loading operations:
A JavaScript Map object holds a collection of key-value pairs.
Define functions for querying the existence of, retrieving, and setting a resource. Notice that as suggested by the variable name of the parameter, path, it is expected that the full path to the external resource file will be used as the key for accessing the corresponding resource, for example, using the path to the src/glsl_shaders/simple_vs.glsl file as the key for accessing the content of the file.
Define functions to indicate that loading has been requested, to increase the reference count of a loaded resource, and to properly unload a resource. Due to the asynchronous nature of the loading operation, a load request will result in an empty MapEntry that will be updated when the load operation completes sometime in the future. Note that each unload request decreases the reference count and may or may not result in the resource actually being unloaded.
Define a function to append an ongoing asynchronous loading operation to the mOutstandingPromises array:
Define a loading function, loadDecodeParse(). If the resource is already loaded, the corresponding reference count is incremented. Otherwise, the function first issues a loadRequest() to create an empty MapEntry in mMap. The function then creates an HTML5 fetch promise, using the path to the resource as key, to asynchronously fetch the external resource, decode the network packaging, parse the results into a proper format, and update the results into the created MapEntry. This created promise is then pushed into the mOutstandingPromises array.
Notice that the decoding and parsing functions are passed in as parameters and thus are dependent upon the actual resource type that is being fetched. For example, the decoding and parsing of simple text, XML (Extensible Markup Language)-formatted text, audio clips, and images all have distinct requirements. It is the responsibility of the actual resource loader to define these functions.
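A sketch of loadDecodeParse(), assuming the querying, setting, reference-counting, and promise-recording helpers described above are named has(), set(), incRef(), loadRequest(), and pushPromise():

function loadDecodeParse(path, decodeResource, parseResource) {
    let fetchPromise = null;
    if (!has(path)) {
        loadRequest(path);                     // create an empty MapEntry placeholder
        fetchPromise = fetch(path)
            .then(res => decodeResource(res))  // e.g., res.text() or res.arrayBuffer()
            .then(data => parseResource(data))
            .then(data => set(path, data))     // fill in the MapEntry with the result
            .catch(err => { throw err; });
        pushPromise(fetchPromise);             // record the outstanding promise
    } else {
        incRef(path);                          // already loaded: just count the reference
    }
    return fetchPromise;
}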
Define a JavaScript async function to block the execution and wait for all outstanding promises to be fulfilled, or wait for all ongoing asynchronous loading operations to be completed:
The JavaScript async/await keywords are paired where only async functions can await for a promise. The await statement blocks and returns the execution back to the caller of the async function. When the promise being waited on is fulfilled, execution will continue to the end of the async function.
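A minimal sketch of this function:

async function waitOnPromises() {
    await Promise.all(mOutstandingPromises);   // block until every outstanding load completes
    mOutstandingPromises = [];                 // all promises fulfilled; reset the array
}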
Finally, export functionality to the rest of the game engine:
Notice that although the storage-specific functionalities—query, get, and set—are well defined, resource_map is actually not capable of loading any specific resources. This module is designed to be utilized by resource type–specific modules where the decoding and parsing functions can be properly defined. In the next subsection, a text resource loader is defined to demonstrate this idea.
Create a new folder in src/engine/ and name it resources. This new folder is created in anticipation of the necessary support for many resource types and to maintain a clean source code organization.
Create a new file in the src/engine/resources folder and name it text.js.
Import the core resource management and reuse the relevant functionality from resource_map:
Define the text decoding and parsing functions for loadDecodeParse(). Notice that there are no requirements for parsing the loaded text, and thus, the text parsing function does not perform any useful operation.
Define the load() function to call the resource_map loadDecodeParse() function to trigger the asynchronous fetch() operation:
Export the functionality to provide access to the rest of the game engine:
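Putting these steps together, the entire text.js module can be sketched as follows (the relative import path is an assumption based on the folder structure described):

import * as map from "../core/resource_map.js";

// reuse storage-related functionality directly from resource_map
let has = map.has;
let get = map.get;
let unload = map.unload;

function decodeText(data) {
    return data.text();        // decode the fetched response as plain text
}

function parseText(text) {
    return text;               // no further parsing is needed for plain text
}

function load(path) {
    return map.loadDecodeParse(path, decodeText, parseText);
}

export { has, get, load, unload };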
Lastly, remember to update the defined functionality for the client in the index.js:
The text resource module can now be used to assist the loading of the shader files asynchronously as plain-text files. Since it is impossible to predict when an asynchronous loading operation will be completed, it is important to issue the load commands before the resources are needed and to ensure that the loading operations are completed before proceeding to retrieve the resources.
Edit shader_resources.js and import functionality from the text and resource_map modules:
Replace the content of the init() function. Define a JavaScript promise, loadPromise, to load the two GLSL shader files asynchronously, and when the loading is completed, trigger the calling of the createShaders() function. Store the loadPromise in the mOutstandingPromises array of the resource_map by calling the map.pushPromise() function:
Notice that after the shader_resources init() function, the loading of the two GLSL shader files will have begun. At that point, it is not guaranteed that the loading operations are completed, and the SimpleShader object may not have been created. However, the promise that is based on the completion of these operations is stored in the resource_map mOutstandingPromises array. For this reason, these operations are guaranteed to have completed by the end of the resource_map waitOnPromises() function.
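A sketch of the modified init() function; kSimpleVS and kSimpleFS are assumed to be constants holding the paths to the two GLSL shader files:

function init() {
    let loadPromise = new Promise(
        async function (resolve) {
            await Promise.all([
                text.load(kSimpleFS),   // load the fragment shader source
                text.load(kSimpleVS)    // load the vertex shader source
            ]);
            resolve();
        }).then(
            function resolve() { createShaders(); }
        );
    map.pushPromise(loadPromise);       // register with the engine-wide synchronization point
}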
Edit the simple_shader.js file and add an import from the text module for retrieving the content of the GLSL shaders:
Since no loading operations are required, you should change the loadAndCompileShader() function name to simply compileShader() and replace the file-loading commands by text resource retrievals. Notice that the synchronous loading operations are replaced by a single call to text.get() to retrieve the file content based on the filePath or the unique resource name for the shader file.
Remember that in the SimpleShader constructor, the calls to loadAndCompileShader() functions should be replaced by the newly modified compileShader() functions, as follows:
Edit the loop.js file and import from the resource_map module:
Modify the start() function to be an async function such that it is now possible to issue an await and hold the execution by calling map.waitOnPromises() to wait for the fulfillment of all outstanding promises:
You can now run the project with shaders being loaded asynchronously. Though the output and interaction experience are identical to the previous project, you now have a game engine that is much better equipped to manage the loading and accessing of external resources.
The rest of this chapter further develops and formalizes the interface between the client, MyGame, and the rest of the game engine. The goal is to define the interface to the client such that multiple game-level instances can be created and interchanged during runtime. With this new interface, you will be able to define what a game level is and allow the game engine to load any level in any order.
The operations involved in initiating a game level from a scene file can assist in the derivation and refinement of the formal interface between the game engine and its client. With a game level defined in a scene file, the game engine must first initiate asynchronous loading, wait for the load completion, and then initialize the client for the game loop. These steps present a complete functional interface between the game engine and the client. By examining and deriving the proper support for these steps, the interface between the game engine and its client can be refined.

Running the Scene File project
Right-arrow key: Moves the white square toward the right and wraps it to the left of the game window
Up-arrow key: Rotates the white square
Down-arrow key: Increases the size of the red square and then resets the size at a threshold
To introduce the protocol for supporting asynchronous loading of the resources of a game
To develop the proper game engine support for the protocol
To identify and define the public interface methods for a general game level
While the parsing and loading process of a scene file is interesting to a game engine designer, the client should never need to be concerned with these details. This project aims at developing a well-defined interface between the engine and the client. This interface will hide the complexity of the engine’s internal core from the client and thus avoid situations such as requiring access to the loop module from MyGame in the first project of this chapter.
Instead of hard-coding the creation of all objects of a game in the init() function, the information can be encoded in a file, and the file can be loaded and parsed during runtime. The advantage of encoding a scene in an external file is the flexibility to modify the scene without changing the game source code, while the disadvantages are the complexity and time required for loading and parsing. In general, the importance of flexibility dictates that most game engines support the loading of game scenes from a file.
Objects in a game scene can be defined in many ways. The key decision factors are that the format can properly describe the game objects and be easily parsed. Extensible Markup Language (XML) is well suited to serve as the encoding scheme for scene files.
Define a new file in the src/engine/resources folder and name it xml.js. Edit this file and import the core resource management functionality from the resource_map.
Instantiate an XML DOMParser, define the decoding and parsing functions, and call the loadDecodeParse() function of the resource_map with the corresponding parameters to initiate the loading of the XML file:
Remember to export the defined functionality:
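A sketch of the complete xml.js module, mirroring the structure of the text module:

import * as map from "../core/resource_map.js";

let has = map.has;
let get = map.get;
let unload = map.unload;

let mParser = new DOMParser();

function decodeXML(data) {
    return data.text();                               // decode the fetched response as text
}

function parseXML(text) {
    return mParser.parseFromString(text, "text/xml"); // parse the text into an XML document
}

function load(path) {
    return map.loadDecodeParse(path, decodeXML, parseXML);
}

export { has, get, load, unload };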
Lastly, remember to export the defined functionality for the client in the index.js :
The newly defined xml module can be conveniently accessed by the client and used in a similar fashion as the text module in loading external XML-encoded text files.
The JavaScript DOMParser provides the ability to parse XML or HTML text strings.
The scene file is an external resource that is being loaded by the client. With asynchronous operations, the game engine must stop and wait for the completion of the load process before it can initialize the game. This is because the game initialization will likely require the loaded resources.
Note that this function is exactly two lines different from the previous project—mCurrentScene is assigned a reference to the parameter, and the client’s load() function is called before the engine waits for the completion of all asynchronous loading operations.
Though slightly involved, the details of XML-parsing specifics are less important than the fact that XML files can now be loaded. It is now possible to use the asynchronous loading of an external resource to examine the required public methods for interfacing a game level to the game engine.
constructor(): For declaring variables and defining constants.
init(): For instantiating the variables and setting up the game scene. This is called from the loop.start() function before the first iteration of the game loop.
draw()/update(): For interfacing to the game loop with these two functions being called continuously from within the core of the game loop, in the loop.loopOnce() function.
load(): For initiating the asynchronous loading of external resources, in this case, the scene file. This is called from the loop.start() function before the engine waits for the completion of all asynchronous loading operations.
unload(): For unloading of external resources when the game has ended. Currently, the engine does not attempt to free up resources. This will be rectified in the next project.
You are now ready to create an XML-encoded scene file to test external resource loading by the client and to interface the client with the game engine based on the described public methods.
Create a new folder at the same level as the src folder and name it assets. This is the folder where all external resources, or assets, of a game will be stored including the scene files, audio clips, texture images, and fonts.
It is important to differentiate between the src/engine/resources folder that is created for organizing game engine source code files and the assets folder that you just created for storing client resources. Although GLSL shaders are also loaded at runtime, they are considered as source code and will continue to be stored in the src/glsl_shaders folder.
Create a new file in the assets folder and name it scene.xml. This file will store the client’s game scene. Add the following content. The listed XML content describes the same scene as defined in the init() functions from the previous MyGame class.
Note that the JavaScript XML parser does not support delimiting attributes with commas; attributes must be separated by whitespace.
Create a new folder in the src/my_game folder and name it util. Add a new file in the util folder and name it scene_file_parser.js. This file will contain the specific parsing logic to decode the listed scene file.
Define a new class, name it SceneFileParser, and add a constructor with code as follows:
Note that the xml parameter is the actual content of the loaded XML file.
The following XML parsing is based on JavaScript XML API. Please refer to https://www.w3schools.com/xml for more details.
Add a function to the SceneFileParser to parse the details of the Camera from the xml file you created:
Add a function to the SceneFileParser to parse the details of the squares from the xml file you created:
Add a function outside the SceneFileParser to parse the content of an XML element:
Finally, export the SceneFileParser:
Edit my_game.js file and import the SceneFileParser:
Modify the MyGame constructor to define the scene file path, the array mSqSet for storing the Renderable objects, and the camera:
Change the init() function to create objects based on the scene parser. Note the retrieval of the XML file content via the engine.xml.get() function where the file path to the scene file is used as the key.
The draw and update functions are similar to the previous examples with the exception of referencing the corresponding array elements.
Lastly, define the functions to load and unload the scene file.
You can now run the project and see that it behaves the same as the previous two projects. While this may not seem interesting, through this project, a simple and well-defined interface between the engine and the client has been derived where the complexities and details of each are hidden. Based on this interface, additional engine functionality can be introduced without the requirements of modifying any existing clients, and at the same time, complex games can be created and maintained independently from engine internals. The details of this interface will be introduced in the next project.
Before continuing, you may notice that the MyGame.unload() function is never called. This is because, in this example, the game loop never stops cycling and MyGame is never unloaded. This issue will be addressed in the next project.
The window.onload function initializes the game engine and calls the loop.start() function, passing in MyGame as a parameter.
The loop.start() function, through the resource_map, waits for the completion of all asynchronous loading operations before it initializes MyGame and starts the actual game loop cycle.
From this discussion, it is interesting to recognize that any object with the appropriately defined public methods can replace the MyGame object. Effectively, at any point, it is possible to call the loop.start() function to initiate the loading of a new scene. This section expands on this idea by introducing the Scene object for interfacing the game engine with its clients.

Running the Scene Objects project with both scenes
Left-/right-arrow key: Move the front rectangle left and right
Q key: Quits the game
Notice that on each level, moving the front rectangle toward the left to touch the left boundary will cause the loading of the other level. The MyGame level will cause BlueLevel to be loaded, and BlueLevel will cause the MyGame level to be loaded.
To define the abstract Scene class to interface to the game engine
To experience game engine support for scene transitions
To create scene-specific loading and unloading support
Create a new JavaScript file in the src/engine folder and name it scene.js, and import from the loop module and the engine access file index.js. These two modules are required because the Scene object must start and end the game loop when the game level begins and ends, and the engine must be cleaned up if a level should decide to terminate the game.
The game loop must not be running before a Scene has begun. This is because the required resources must be properly loaded before the update() function of the Scene can be called from the running game loop. Similarly, unloading of a level can only be performed after a game loop has stopped running.
Define JavaScript Error objects for warning the client in case of misuse:
Create a new class named Scene and export it:
Implement the constructor to ensure only subclasses of the Scene class are instantiated:
Define scene transition functions: start(), next(), and stop(). The start() function is an async function because it is responsible for starting the game loop, which in turn waits for all the asynchronous loading to complete. Both the next() and the stop() functions stop the game loop and call the unload() function to unload the loaded resources. The difference is that the next() function is expected to be overridden and called from a subclass, where after unloading the current scene, the subclass can proceed to advance to the next level. After unloading, the stop() function assumes the game has terminated and proceeds to clean up the game engine.
Define the rest of the derived interface functions. Notice that the Scene class is an abstract class because all of the interface functions are empty. While a subclass can choose to only implement a selective subset of the interface functions, the draw() and update() functions are not optional because together they form the central core of a level.
Together these functions present a protocol to interface with the game engine. It is expected that subclasses will override these functions to implement the actual game behaviors.
JavaScript does not support abstract classes. The language does not prevent a game programmer from instantiating a Scene object; however, the created instance will be completely useless, and the error message will provide them with a proper warning.
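A sketch of the Scene class that follows this description; the error constants, comments, and empty bodies are illustrative:

import * as loop from "./core/loop.js";
import * as engine from "./index.js";

const kAbstractClassError = new Error("Abstract Class");
const kAbstractMethodError = new Error("Abstract Method");

class Scene {
    constructor() {
        if (this.constructor === Scene) {
            throw kAbstractClassError;   // only subclasses should be instantiated
        }
    }

    async start() {
        await loop.start(this);          // begin the game loop for this scene
    }

    next() {
        loop.stop();                     // stop the loop, then release this scene's resources
        this.unload();
    }

    stop() {
        loop.stop();
        this.unload();
        engine.cleanUp();                // the game has ended; shut down the engine
    }

    // interface functions expected to be overridden by subclasses
    init()   { /* set up the scene */ }
    load()   { /* issue asynchronous resource loads */ }
    unload() { /* release loaded resources */ }
    draw()   { throw kAbstractMethodError; }   // not optional
    update() { throw kAbstractMethodError; }   // not optional
}

export default Scene;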
The game engine must be modified in two important ways. First, the game engine access file, index.js, must be modified to export the newly introduced symbols to the client as is done with all new functionality. Second, the Scene.stop() function introduces the possibility of stopping the game and handles the cleanup and resource deallocation required.
Edit index.js once again, this time to implement support for game engine cleanup. Import from the loop module, and then define and export the cleanup() function.
Similar to other core engine internal components, such as gl or vertex_buffer, the loop module should not be accessed by the client. For this reason, the loop module is imported but not exported by index.js: imported so that game loop cleanup can be invoked, and not exported so that the client is shielded from irrelevant complexity within the engine.
Edit loop.js to define and export a cleanUp() function to stop the game loop and unload the currently active scene:
Edit input.js to define and export a cleanUp() function. For now, no specific resources need to be released.
Edit shader_resources.js to define and export a cleanUp() function to clean up the created shader and unload its source code:
Edit simple_shader.js to define the cleanUp() function for the SimpleShader class to release the allocated WebGL resources:
Edit vertex_buffer.js to define and export a cleanUp() function to delete the allocated buffer memory:
Lastly, edit gl.js to define and export a cleanUp() function to inform the player that the engine is now shut down:
With the abstract Scene class definition and the resource management modifications to the game engine core components, it is now possible to stop an existing scene and load a new scene at will. This section cycles between two subclasses of the Scene class, MyGame and BlueLevel, to illustrate the loading and unloading of scenes.
For simplicity, the two test scenes are almost identical to the MyGame scene from the previous project. In this project, MyGame explicitly defines the scene in the init() function, while BlueLevel, in a manner identical to the previous project, loads the scene content from the blue_level.xml file located in the assets folder. The content and the parsing of the XML scene file are identical to those from the previous project and thus will not be repeated.
Edit my_game.js to import from index.js and the newly defined blue_level.js. Note that with the Scene class support, you no longer need to import from the loop module.
Define MyGame to be a subclass of the engine Scene class, and remember to export MyGame:
The JavaScript extends keyword defines the parent/child relationship.
Define the constructor(), init(), and draw() functions. Note that the scene content defined in the init() function, with the exception of the camera background color, is identical to that of the previous project.
Define the update() function; take note of the this.next() call when the mHero object crosses the x=11 boundary from the right and the this.stop() call when the Q key is pressed.
Define the next() function to transition to the BlueLevel scene:
The super.next() call, where the superclass stops the game loop and causes the unloading of this scene, is critical for causing the scene transition.
Lastly, modify the window.onload() function to replace access to the loop module with a client-friendly myGame.start() function:
Create and edit blue_level.js file in the my_game folder to import from the engine index.js, MyGame, and SceneFileParser. Define and export BlueLevel to be a subclass of the engine.Scene class.
Define the init(), draw(), load(), and unload() functions to be identical to those in the MyGame class from the previous project.
Define the update() function similar to that of the MyGame scene. Once again, note the this.next() call when the object crosses the x=11 boundary from the right and the this.stop() call when the Q key is pressed.
Lastly, define the next() function to transition to the MyGame scene. It is worth reiterating that the call to super.next() is necessary because it is critical to stop the game loop and unload the current scene before proceeding to the next scene.
constructor(): For declaring variables and defining constants.
start()/stop(): For starting a scene and stopping the game. These two methods are not meant to be overwritten by a subclass.
init(): For instantiating the variables and setting up the game scene.
load()/unload(): For initiating the asynchronous loading and unloading of external resources.
draw()/update(): For continuously displaying the game state and receiving player input and implementing the game logic.
next(): For instantiating and transitioning to the next scene. As a final reminder, it is critical for the subclass to call super.next() to stop the game loop and unload the scene.
Any objects that define these methods can be loaded and interacted with by your game engine. You can experiment with creating other levels.
Audio is an essential element of all video games. In general, audio effects in games fall into two categories. The first category is background audio. This includes background music or ambient effects and is often used to bring atmosphere or emotion to different portions of the game. The second category is sound effects. Sound effects are useful for all sorts of purposes, from notifying users of game actions to hearing the footfalls of your hero character. Usually, sound effects represent a specific action, triggered either by the user or by the game itself. Such sound effects are often thought of as an audio cue.
One important difference between these two types of audio is how you control them. Sound effects or cues cannot be stopped or have their volume adjusted once they have started; therefore, cues are generally short. On the other hand, background audio can be started and stopped at will. These capabilities are useful for stopping the background track completely and starting another one.

Running the Audio Support project with both scenes
Left-/right-arrow key: Moves the front rectangle left and right to increase and decrease the volume of the background music
Q key: Quits the game
To add audio support to the resource management system
To provide an interface to play audio for games
bg_clip.mp3
blue_level_cue.wav
my_game_cue.wav
Notice that the audio files are in two formats, mp3 and wav. While both are supported, audio files of these formats should be used with care. Files in .mp3 format are compressed and are suitable for storing longer durations of audio content, for example, background music. Files in .wav format are uncompressed and should contain only very short audio snippets, for example, cue effects.
While audio and text files are completely different, from the perspective of your game engine implementation, there are two important similarities. First, both are external resources and thus will be implemented similarly as engine components in the src/engine/resources folder. Second, both involve standardized file formats with well-defined API utilities. The Web Audio API will be used for the actual retrieving and playing of sound files. Even though this API offers vast capabilities, in the interest of focusing on the rest of the game engine development, only basic support for background audio and effect cues is discussed.
Interested readers can learn more about the Web Audio API from www.w3.org/TR/webaudio/.
The latest policy for some browsers, including Chrome, is that audio will not be allowed to play until the first interaction from the user. This means that the context creation will result in an initial warning from Chrome that is output to the runtime browser console. The audio will only be played after user input (e.g., mouse click or keyboard events).
In the src/engine/resources folder, create a new file and name it audio.js. This file will implement the module for the audio component. This component must support two types of functionality: loading and unloading of audio files and playing and controlling of the content of audio file for the game developer.
The loading and unloading are similar to the implementations of text and xml modules where the core resource management functionality is imported from resource_map:
Define the decoding and parsing functions, and call the resource_map loadDecodeParse() function to load an audio file. Notice that with the support from resource_map and the rest of the engine infrastructure, loading and unloading of external resources have become straightforward.
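A sketch of this loading functionality; decodeAudioData() is part of the Web Audio API and returns a promise that resolves to an AudioBuffer, and mAudioContext is the Web Audio context declared and initialized elsewhere in this module:

import * as map from "../core/resource_map.js";

let has = map.has;
let get = map.get;
let unload = map.unload;

function decodeResource(data) {
    return data.arrayBuffer();                    // decode the fetched response into raw bytes
}

function parseResource(data) {
    return mAudioContext.decodeAudioData(data);   // decode the bytes into an AudioBuffer
}

function load(path) {
    return map.loadDecodeParse(path, decodeResource, parseResource);
}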
With the loading functionality completed, you can now define the audio control and manipulation functions. Declare variables to maintain references to the Web Audio context and background music and to control volumes.
Define the init() function to create and store a reference to the Web Audio context in mAudioContext, and initialize the audio volume gain controls for the background, the cue, and a master that affects both. In all cases, a volume gain of 0 corresponds to no audio and 1 means maximum loudness.
Define the playCue() function to play the entire duration of an audio clip with proper volume control. This function uses the audio file path as a resource name to find the loaded asset from the resource_map and then invokes the Web Audio API to play the audio clip. Notice that no reference to the source variable is kept, and thus once started, there is no way to stop the corresponding audio clip. A game should call this function to play short snippets of audio clips as cues.
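A sketch of playCue(), assuming the gain nodes created in init() as described (mCueGain feeding mMasterGain) and the get() alias from the loading sketch above:

function playCue(path, volume) {
    // retrieve the decoded AudioBuffer using the file path as the resource name
    let source = mAudioContext.createBufferSource();
    source.buffer = get(path);

    // route the cue through the cue gain node and set its volume
    source.connect(mCueGain);
    mCueGain.gain.value = volume;

    // no reference to source is kept: once started, the cue plays to completion
    source.start(0);
}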
Define the functionality to play, stop, query, and control the volume of the background music. In this case, the mBackgroundAudio variable keeps a reference to the currently playing audio, and thus, it is possible to stop the clip or change its volume.
Define functions for controlling the master volume, which adjusts the volume of both the cue and the background music:
Define a cleanUp() function to release the allocated HTML5 resources:
Remember to export the functions from this module:
To test the audio component, you must copy the necessary audio files into your game project. Create a new folder in the assets folder and name it sounds. Copy the bg_clip.mp3, blue_level_cue.wav, and my_game_cue.wav files into the sounds folder. You will now need to update the MyGame and BlueLevel implementations to load and use these audio resources.
Declare constant file paths to the audio files in the constructor. Recall that these file paths are used as resource names for loading, storage, and retrieval. Declaring these as constants for later reference is a good software engineering practice.
Request the loading of audio clips in the load() function, and make sure to define the corresponding unload() function. Notice that the unloading of background music is preceded by stopping the music. In general, a resource’s operations must be halted prior to its unloading.
Start the background audio at the end of the init() function.
In the update() function, cue the players when the right- and left-arrow keys are pressed, and increase and decrease the volume of the background music:
In the BlueLevel constructor, add the following path names to the audio resources:
Modify the load() and unload() functions for the audio clips:
In the same manner as MyGame, start the background audio in the init() function and cue the player when the left and right keys are pressed in the update() function. Notice that in this case, the audio cues are played with different volume settings.
You can now run the project and listen to the wonderful audio feedback. If you press and hold the arrow keys, there will be many cues repeatedly played. In fact, there are so many cues echoed that the sound effects are blurred into an annoying blast. This serves as an excellent example illustrating the importance of using audio cues with care and ensuring each individual cue is nice and short. You can try tapping the arrow keys to listen to more distinct and pleasant-sounding cues, or you can simply replace the isKeyPressed() function with the isKeyClicked() function and listen to each individual cue.
In this chapter, you learned how several common components of a game engine come together. Starting with the ever-important game loop, you learned how it implements an input, update, and draw pattern in order to surpass human perception or trick our senses into believing that the system is continuous and running in real time. This pattern is at the heart of any game engine. You learned how full keyboard support can be implemented with flexibility and reusability to provide the engine with a reliable input component. Furthermore, you saw how a resource manager can be implemented to load files asynchronously and how scenes can be abstracted to support scenes being loaded from a file, which can drastically reduce duplication in the code. Lastly, you learned how audio support supplies the client with an interface to load and play both ambient background audio and audio cues.
These components separately have little in common but together make up the core fundamentals of nearly every game. As you implement these core components into the game engine, the games that are created with the engine will not need to worry about the specifics of each component. Instead, the game programmer can focus on utilizing the functionality to hasten and streamline the development process. In the next chapter, you will learn how to create the illusion of an animation with external images.
In this chapter, we discussed the game loop and the technical foundation contributing to the connection between what the player does and how the game responds. If a player selects a square that’s drawn on the screen and moves it from location A to location B by using the arrow keys, for example, you’d typically want that action to appear as a smooth motion beginning as soon as the arrow key is pressed, without stutters, delays, or noticeable lag. The game loop contributes significantly to what’s known as presence in game design; presence is the player’s ability to feel as if they’re connected to the game world, and responsiveness plays a key role in making players feel connected. Presence is reinforced when actions in the real world (such as pressing arrow keys) seamlessly translate to responses in the game world (such as moving objects, flipping switches, jumping, and so on); presence is compromised when actions in the real world suffer translation errors such as delays and lag.
As mentioned in Chapter 1, effective game mechanic design can begin with just a few simple elements. By the time you’ve completed the Keyboard Support project in this chapter, for example, many of the pieces will already be in place to begin constructing game levels: you’ve provided players with the ability to manipulate two individual elements on the screen (the red and white squares), and all that remains in order to create a basic game loop is to design a causal chain using those elements that results in a new event when completed. Imagine the Keyboard Support project is your game: how might you use what’s available to create a causal chain? You might choose to play with the relationship between the squares, perhaps requiring that the red square be moved completely within the white square in order to unlock the next challenge; once the player successfully placed the red square in the white square, the level would complete. This basic mechanic may not be quite enough on its own to create an engaging experience, but by including just a few of the other eight elements of game design (systems design, setting, visual design, music and audio, and the like), it’s possible to turn this one basic interaction into an almost infinite number of engaging experiences and create that sense of presence for players. You’ll add more game design elements to these exercises as you continue through subsequent chapters.
The Resource Map and Shader Loader project, the Scene File project, and the Scene Objects project are designed to help you begin thinking about architecting game designs from the ground up for maximum efficiency so that problems such as asset loading delays that detract from the player’s sense of presence are minimized. As you begin designing games with multiple stages and levels and many assets, a resource management plan becomes essential. Understanding the limits of available memory and how to smartly load and unload assets can mean the difference between a great experience and a frustrating experience.
We experience the world through our senses, and our feeling of presence in games tends to be magnified as we include additional sensory inputs. The Audio Support project adds basic audio to our simple state-changing exercise from the Scene Objects project in the form of a constant background score to provide ambient mood and includes a distinct movement sound for each of the two areas. Compare the two experiences and consider how different they feel because of the presence of sound cues; although the visual and interaction experience is identical between the two, the Audio Support project begins to add some emotional cues because of the beat of the background score and the individual tones the rectangle makes as it moves. Audio is a powerful enhancement to interactive experiences and can dramatically increase a player’s sense of presence in game environments, and as you continue through the chapters, you’ll explore how audio contributes to game design in more detail.
Use any image or photograph as a texture representing characters or objects in your game
Understand and use texture coordinates to identify a location on an image
Optimize texture memory utilization by combining multiple characters and objects into one image
Produce and control animations using sprite sheets
Display text of different fonts and sizes anywhere in your game
Custom-composed images are used to represent almost all objects including characters, backgrounds, and even animations in most 2D games. For this reason, the proper support of image operations is core to 2D game engines. A game typically works with an image in three distinct stages: loading, rendering, and unloading.
Loading is the reading of the image from the hard drive of the web server into the client’s system main memory, where it is processed and stored in the graphics subsystem. Rendering occurs during gameplay when the loaded image is drawn continuously to represent the respective game objects. Unloading happens when an image is no longer required by the game and the associated resources are reclaimed for future uses. Because of the slower response time of the hard drive and the potentially large amount of data that must be transferred and processed, loading images can take a noticeable amount of time. This, together with the fact that an image, just like the objects it represents, is usually only useful within an individual game level, means that image loading and unloading operations typically occur during game-level transitions. To optimize the number of loading and unloading operations, it is a common practice to combine multiple lower-resolution images into a single larger image. This larger image is referred to as a sprite sheet.
To represent objects, images with meaningful drawings are pasted, or mapped, on simple geometries. For example, a horse in a game can be represented by a square that is mapped with an image of a horse. In this way, a game developer can manipulate the transformation of the square to control the horse. This mapping of images on geometries is referred to as texture mapping in computer graphics.
The illusion of movement, or animation, can be created by cycling through strategically mapping selected images on the same geometry. For example, during subsequent game loop updates, different images of the same horse with strategically drawn leg positions can be mapped on the same square to create the illusion that the horse is galloping. Usually, these images of different animated positions are stored in one sprite sheet or an animated sprite sheet. The process of sequencing through these images to create animation is referred to as sprite animation or sprite sheet animation.
This chapter first introduces you to the concept of texture coordinates such that you can understand and program with the WebGL texture mapping interface. You will then build a core texture component and the associated classes to support mapping with simple textures, working with sprite sheets that contain multiple objects, creating and controlling motions with animated sprite sheets, and extracting alphabet characters from a sprite sheet to display text messages.
A texture is an image that is loaded into the graphics system and ready to be mapped onto a geometry. When discussing the process of texture mapping, “an image” and “a texture” are often used interchangeably. A pixel is a color location in an image and a texel is a color location in a texture.
As discussed, texture mapping is the process of pasting an image on a geometry, just like putting a sticker on an object. In the case of your game engine, instead of drawing a constant color for each pixel occupied by the unit square, you will create GLSL shaders to strategically select texels from the texture and display the corresponding texel colors at the screen pixel locations covered by the unit square. The process of selecting a texel, or converting a group of texels into a single color, to be displayed to a screen pixel location is referred to as texture sampling. To render a texture-mapped pixel, the texture must be sampled to extract a corresponding texel color.

The Texture Coordinate System and the corresponding uv values defined for all images
There are conventions that define the v axis increasing either upward or downward. In all examples of this book, you will program WebGL to follow the convention in Figure 5-1, with the v axis increasing upward.

Defining Texture Space uv values to map the entire image onto the geometry in Model Space

Running the Texture Shaders project with both scenes
The controls of the project are as follows, for both scenes:
Right-arrow key: Moves the middle rectangle toward the right. If this rectangle passes the right window boundary, it will be wrapped to the left side of the window.
Left-arrow key: Moves the middle rectangle toward the left. If this rectangle crosses the left window boundary, the game will transition to the next scene.
To demonstrate how to define uv coordinates for geometries with WebGL
To create a texture coordinate buffer in the graphics system with WebGL
To build GLSL shaders to render the textured geometry
To define the Texture core engine component to load and process an image into a texture and to unload a texture
To implement simple texture tinting, a modification of all texels with a programmer-specified color
You can find the following external resource files in the assets folder: a scene-level file (blue_level.xml) and four images (minion_collector.jpg, minion_collector.png, minion_portal.jpg, and minion_portal.png).
texture_vs.glsl and texture_fs.glsl: These are new files created to define GLSL shaders for supporting drawing with uv coordinates. Recall that the GLSL shaders must be loaded into WebGL and compiled during the initialization of the game engine.
vertex_buffer.js: This file is modified to create a corresponding uv coordinate buffer to define the texture coordinate for the vertices of the unit square.
texture_shader.js: This is a new file that defines TextureShader as a subclass of SimpleShader to interface the game engine with the corresponding GLSL shaders (TextureVS and TextureFS).
texture_renderable.js: This is a new file that defines TextureRenderable as a subclass of Renderable to facilitate the creation, manipulation, and drawing of multiple instances of textured objects.
shader_resources.js: Recall that this file defines a single instance of SimpleShader to wrap over the corresponding GLSL shaders to be shared system wide by all instances of Renderable objects. In a similar manner, this file is modified to define an instance of TextureShader to be shared by all instances of TextureRenderable objects.
gl.js: This file is modified to configure WebGL to support drawing with texture maps.
texture.js: This is a new file that defines the core engine component that is capable of loading, activating (for rendering), and unloading texture images.
my_game.js and blue_level.js: These game engine client files are modified to test the new texture mapping functionality.
Two new source code folders, src/engine/shaders and src/engine/renderables, are created for organizing the engine source code. These folders are created in anticipation of the many new shader and renderer types required to support the corresponding texture-related functionality. Once again, continuous source code reorganization is important in supporting the corresponding increase in complexity. A systematic and logical source code structure is critical in maintaining and expanding the functionality of large software systems.

The SimpleShader and Renderable architecture

The TextureVS/FS GLSL shaders and the corresponding TextureShader/TextureRenderable object pair
Create a new file in the src/glsl_shaders folder and name it texture_vs.glsl.
Add the following code to the texture_vs.glsl file:
The first additional line adds the aTextureCoordinate attribute. This defines a vertex to include a vec3 (aVertexPosition, the xyz position of the vertex) and a vec2 (aTextureCoordinate, the uv coordinate of the vertex).
The second declares the varying vTexCoord variable. The varying keyword in GLSL signifies that the associated variable will be linearly interpolated and passed to the fragment shader. As explained earlier and illustrated in Figure 5-2, uv values are defined only at vertex positions. In this case, the varying vTexCoord variable instructs the graphics hardware to linearly interpolate the uv values to compute the texture coordinate for each invocation of the fragment shader.
The third and final line assigns the vertex uv coordinate values to the varying variable for interpolation and forwarding to the fragment shader.
Create a new file in the src/glsl_shaders folder and name it texture_fs.glsl.
Add the following code to the texture_fs.glsl file to declare the variables. The sampler2D data type is a GLSL utility that is capable of reading texel values from a 2D texture. In this case, the uSampler object will be bound to a GLSL texture such that texel values can be sampled for every pixel rendered. The uPixelColor is the same as the one from SimpleFS. The vTexCoord is the interpolated uv coordinate value for each pixel.
Add the following code to compute the color for each pixel:
The texture2D() function samples and reads the texel value from the texture that is associated with uSampler using the interpolated uv values from vTexCoord. In this example, the texel color is modified, or tinted, by a weighted sum of the color value defined in uPixelColor according to the transparency or the value of the corresponding alpha channel. In general, there is no agreed-upon definition for tinting texture colors. You are free to experiment with different ways to combine uPixelColor and the sampled texel color. For example, you can try multiplying the two. In the provided source code file, a few alternatives are suggested. Please do experiment with them.
Modify vertex_buffer.js to define both xy and uv coordinates for the unit square. As illustrated in Figure 5-2, the mTextureCoordinates variable defines the uv values for the corresponding four xy values of the unit square defined sequentially in mVerticesOfSquare. For example, (1, 1) are the uv values associated with the (0.5, 0.5, 0) xy position, (0, 1) for (-0.5, 0.5, 0), and so on.
Define the variable, mGLTextureCoordBuffer, to keep a reference to the WebGL buffer storage for the texture coordinate values of mTextureCoordinates and the corresponding getter function:
Modify the init() function to include a step D to initialize the texture coordinates as a WebGL buffer. Notice the initialization process is identical to that of the vertex xy coordinates except that the reference to the new buffer is stored in mGLTextureCoordBuffer and the transferred data are the uv coordinate values.
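As a rough sketch, the relevant parts of vertex_buffer.js might look like the following. Only mTextureCoordinates and mGLTextureCoordBuffer are named in the text; the glSys.get() accessor, the getter name, and the ordering of the last two uv pairs are assumptions based on the surrounding description.

// uv values listed in the same order as the xy vertices of the unit square
let mTextureCoordinates = [
    1.0, 1.0,    // pairs with ( 0.5,  0.5, 0)
    0.0, 1.0,    // pairs with (-0.5,  0.5, 0)
    1.0, 0.0,    // pairs with ( 0.5, -0.5, 0), assumed sequential order
    0.0, 0.0     // pairs with (-0.5, -0.5, 0), assumed sequential order
];
let mGLTextureCoordBuffer = null;
function getTexCoordBuffer() { return mGLTextureCoordBuffer; }

function init() {
    let gl = glSys.get();   // assumed accessor to the WebGL context
    // ... steps A to C: create and load the xy vertex position buffer ...

    // Step D: create, bind, and load the texture coordinate buffer
    mGLTextureCoordBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, mGLTextureCoordBuffer);
    gl.bufferData(gl.ARRAY_BUFFER,
        new Float32Array(mTextureCoordinates), gl.STATIC_DRAW);
}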
Remember to release the allocated buffer during final cleanup:
Finally, remember to export the changes:
Create a new folder called shaders in src/engine. Move the simple_shader.js file into this folder, and do not forget to update the reference path in index.js.
Create a new file in the src/engine/shaders folder and name it texture_shader.js .
In the listed code, take note of the following:
The defined TextureShader class is an extension, or subclass, to the SimpleShader class.
The constructor implementation first calls super(), the constructor of SimpleShader. Recall that the SimpleShader constructor will load and compile the GLSL shaders defined by the vertexShaderPath and fragmentShaderPath parameters and set mVertexPositionRef to reference the aVertexPosition attribute defined in the shader.
In the rest of the constructor, the mTextureCoordinateRef keeps a reference to the aTextureCoordinate attribute defined in the texture_vs.glsl.
In this way, both the vertex position (aVertexPosition) and texture coordinate (aTextureCoordinate) attributes are referenced by a JavaScript TextureShader object.
Override the activate() function to enable the texture coordinate data. The superclass super.activate() function sets up the xy vertex position and passes the values of pixelColor, trsMatrix, and cameraMatrix to the shader. The rest of the code binds mTextureCoordinateRef, the texture coordinate buffer defined in the vertex_buffer module, to the aTextureCoordinate attribute in the GLSL shader and mSampler to texture unit 0 (to be detailed later).
With the combined functionality of SimpleShader and TextureShader, after the activate() function call, both of the attribute variables (aVertexPosition and aTextureCoordinate) in the GLSL texture_vs shader are connected to the corresponding buffers in the WebGL memory.
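A sketch of the overridden activate() follows. The glSys.get() accessor, the _getTexCoordBuffer() helper (which here simply returns the shared buffer from the vertex_buffer module), and the mSamplerRef name are assumptions; the WebGL calls themselves are standard.

activate(pixelColor, trsMatrix, cameraMatrix) {
    // let SimpleShader bind aVertexPosition and load pixelColor and the matrices
    super.activate(pixelColor, trsMatrix, cameraMatrix);

    let gl = glSys.get();   // assumed accessor to the WebGL context

    // bind the texture coordinate buffer to the aTextureCoordinate attribute
    gl.bindBuffer(gl.ARRAY_BUFFER, this._getTexCoordBuffer());
    gl.vertexAttribPointer(this.mTextureCoordinateRef, 2, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(this.mTextureCoordinateRef);

    // bind uSampler (referenced here by the assumed mSamplerRef) to texture unit 0
    gl.uniform1i(this.mSamplerRef, 0);
}

Routing the buffer access through _getTexCoordBuffer() is what later allows SpriteShader to substitute its own per-object buffer without changing this function.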
In shader_resources.js, add the variables to hold a texture shader:
Define a function to retrieve the texture shader:
Create the instance of texture shader in the createShaders() function:
Modify the init() function to append the loadPromise to include the loading of the texture shader source files:
Remember to release newly allocated resources during cleanup:
Lastly, remember to export the newly defined functionality:
Just as the Renderable class encapsulates and facilitates the definition and drawing of multiple instances of SimpleShader objects, a corresponding TextureRenderable class needs to be defined to support the drawing of multiple instances of TextureShader objects.
Create the src/engine/renderables folder and move renderable.js into this folder. Remember to update index.js to reflect the file location change.
Define the _setShader() function to set the shader for the Renderable. This is a protected function which allows subclasses to modify the mShader variable to refer to the appropriate shaders for each corresponding subclass.
Functions with names that begin with “_” are either private or protected and should not be called from outside of the class. This is a convention followed in this book and not enforced by JavaScript.
Create a new file in the src/engine/renderables folder and name it texture_renderable.js. Add the constructor. Recall that super() is a call to the superclass (Renderable) constructor; similarly, the super.setColor() and super._setShader() are calls to the superclass functions. As will be detailed when discussing the engine texture resource module, the myTexture parameter is the path to the file that contains the texture image.
Define a draw() function to append the function defined in the Renderable class to support textures. The texture.activate() function activates and allows drawing with the specific texture. The details of this function will be discussed in the following section.
Define a getter and setter for the texture reference:
Finally, remember to export the class:
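Putting these steps together, a minimal sketch of the class might look as follows; the getTextureShader() name from shader_resources and the default no-tint color are assumptions based on the descriptions above.

class TextureRenderable extends Renderable {
    constructor(myTexture) {
        super();
        super.setColor([1, 1, 1, 0]);   // assumed default: alpha of 0 means no tinting
        super._setShader(shaderResources.getTextureShader());
        this.mTexture = myTexture;      // path to the texture image file
    }

    draw(camera) {
        texture.activate(this.mTexture);   // activate the texture before drawing
        super.draw(camera);
    }

    getTexture() { return this.mTexture; }
    setTexture(newTexture) { this.mTexture = newTexture; }
}

export default TextureRenderable;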
To support drawing with textures, the rest of the game engine requires two main modifications: WebGL context configuration and a dedicated engine component to support operations associated with textures.
The parameter passed to mCanvas.getContext() informs the browser that the canvas should be opaque. This can speed up the drawing of transparent content and images. The blendFunc() function enables transparencies when drawing images with the alpha channel. The pixelStorei() function defines the origin of the uv coordinate to be at the lower-left corner.
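For reference, the configuration described above might look similar to the following sketch; the mCanvas and mGL variable names are assumed from earlier chapters, while the WebGL calls are standard.

mGL = mCanvas.getContext("webgl2", { alpha: false });   // opaque canvas

// blend incoming colors with what is already drawn, using the alpha channel
mGL.blendFunc(mGL.SRC_ALPHA, mGL.ONE_MINUS_SRC_ALPHA);
mGL.enable(mGL.BLEND);

// flip the image y axis so that the uv origin is at the lower-left corner
mGL.pixelStorei(mGL.UNPACK_FLIP_Y_WEBGL, true);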
Create a new file in the src/engine/resources folder and name it texture.js. This file will implement the Texture engine component.
Define the TextureInfo class to represent a texture in the game engine. The mWidth and mHeight are the pixel resolution of the texture image, and mGLTexID is a reference to the WebGL texture storage.
For efficiency reasons, much graphics hardware only supports textures with image resolutions that are powers of 2, such as 2x4 (2^1 x 2^2), 4x16 (2^2 x 2^4), or 64x256 (2^6 x 2^8). This is also the case for WebGL. All examples in this book only work with textures with resolutions that are powers of 2.
Import the core resource management functionality from the resource_map:
Define a function to load an image asynchronously as a promise and push the promise to be part of the pending promises in the map. Distinct from the text and audio resources, the JavaScript Image API supports straightforward image file loading, so map.loadDecodeParse() is not required in this case. Once an image is loaded, it is passed to the processLoadedImage() function with its file path as the name.
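A sketch of such a load() function is shown below. The resource_map helpers (has(), incRef(), loadRequested(), and pushPromise()) are assumed names based on how the map has been described; only processLoadedImage() is named in the text.

function load(textureName) {
    let texturePromise = null;
    if (map.has(textureName)) {             // already loaded: only count the reference
        map.incRef(textureName);
    } else {
        map.loadRequested(textureName);
        let image = new Image();
        texturePromise = new Promise(
            function (resolve) {
                image.onload = resolve;
                image.src = textureName;    // begins the asynchronous load
            }).then(
                function resolve() {
                    processLoadedImage(textureName, image);
                }
            );
        map.pushPromise(texturePromise);    // the engine waits on this before the game loop starts
    }
    return texturePromise;
}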
Add an unload() function to clean up the engine and release WebGL resources:
Now define the processLoadedImage() function to convert the format of an image and store it to the WebGL context. The gl.createTexture() function creates a WebGL texture buffer and returns a unique ID. The texImage2D() function stores the image into the WebGL texture buffer, and generateMipmap() computes a mipmap for the texture. Lastly, a TextureInfo object is instantiated to refer to the WebGL texture and stored into the resource_map according to the file path to the texture image file.
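The WebGL portion of processLoadedImage() might look like this sketch; the TextureInfo parameter order and the map.set() storage call are assumptions.

function processLoadedImage(path, image) {
    let gl = glSys.get();                  // assumed accessor to the WebGL context

    let textureID = gl.createTexture();    // create the WebGL texture storage
    gl.bindTexture(gl.TEXTURE_2D, textureID);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    gl.generateMipmap(gl.TEXTURE_2D);      // compute the mipmap chain
    gl.bindTexture(gl.TEXTURE_2D, null);   // leave the texture unbound

    let texInfo = new TextureInfo(image.naturalWidth, image.naturalHeight, textureID);
    map.set(path, texInfo);                // assumed resource_map storage call
}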
A mipmap is a representation of the texture image that facilitates high-quality rendering. Please consult a computer graphics reference book to learn more about mipmap representation and the associated texture mapping algorithms.
Define a function to activate a WebGL texture for drawing:
The get() function locates the TextureInfo object from the resource_map based on the textureName. The located mGLTexID is used in the bindTexture() function to activate the corresponding WebGL texture buffer for rendering.
The texParameteri() function defines the rendering behavior for the texture. The TEXTURE_WRAP_S/T parameters ensure that the texel values will not wrap around at the texture boundaries. The TEXTURE_MAG_FILTER parameter defines how to magnify a texture, in other words, when a low-resolution texture is rendered to many pixels in the game window. The TEXTURE_MIN_FILTER parameter defines how to minify a texture, in other words, when a high-resolution texture is rendered to a small number of pixels.
The LINEAR and LINEAR_MIPMAP_LINEAR configurations generate smooth textures by blurring the details of the original images, while the commented-out NEAREST option results in unprocessed textures best suited for pixelated effects. Notice that in this case, color boundaries of the texture image may appear jagged.
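In code, the settings described above amount to a handful of texParameteri() calls applied while the texture is bound; for example:

// do not wrap uv values that fall outside the [0, 1] range
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// smooth sampling for magnification and minification
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);

// for pixelated effects, use gl.NEAREST instead:
// gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
// gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);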
In general, it is best to use texture images with similar resolution as the number of pixels occupied by the objects in the game. For example, a square that occupies a 64x64 pixel space should ideally use a 64x64 texel texture.
Define a function to deactivate a texture as follows. This function sets the WebGL context to a state of not working with any texture.
Finally, remember to export the functionality:
With the described modifications, the game engine can now render constant color objects as well as objects with interesting and different types of textures. The following testing code is similar to that from the previous example where two scenes, MyGame and BlueLevel, are used to demonstrate the newly added texture mapping functionality. The main modifications include the loading and unloading of texture images and the creation and drawing of TextureRenderable objects. In addition, the MyGame scene highlights transparent texture maps with an alpha channel using PNG images, and the BlueLevel scene shows the corresponding textures with images in the JPEG format.
As in all cases of building a game, it is essential to ensure that all external resources are properly organized. Recall that the assets folder is created specifically for the organization of external resources. Take note of the four new texture files located in the assets folder: minion_collector.jpg, minion_collector.png, minion_portal.jpg, and minion_portal.png.
The TextureSquare element is similar to Square with the addition of a Texture attribute that specifies which image file should be used as a texture map for the square. Note that as implemented in texture_fs.glsl, the alpha value of the Color element is used for tinting the texture map. The XML scene description is meant to support slight tinting of the minion_portal.jpg texture and no tinting of the minion_collector.jpg texture. This texture tinting effect can be observed in the right image of Figure 5-3. In addition, notice that both images specified are in the JPEG format. Since the JPEG format does not support storing an alpha channel, the unused regions of the two images show up as white areas outside the portal and collector minions in the right image of Figure 5-3.
The scene file parser, scene_file_parser.js, is modified to support the parsing of the updated blue_scene.xml, in particular, to parse Square elements into Renderable objects and TextureSquare elements into TextureRenderable objects. For details of the changes, please refer to the source code file in the src/my_game/util folder.
Edit blue_level.js and modify the constructor to define constants to represent the texture images:
Initiate loading of the textures in the load() function:
Likewise, add code to clean up by unloading the textures in the unload() function:
Support loading of the next scene with the next() function:
Parse the textured squares in the init() function:
Include appropriate code in the update() function to continuously change the tinting of the portal TextureRenderable, as follows:
Index 1 of mSqSet is the portal TextureRenderable object, and index 3 of the color array is the alpha channel.
The listed code continuously increases and wraps the alpha value of the mColor variable in the TextureRenderable object. Recall that the values of this variable are passed to TextureShader and then loaded to the uPixelColor of TextureFS for tinting the texture map results.
As defined in the first TextureSquare element in the blue_scene.xml file, the color defined for the portal object is red. For this reason, when running this project, in the blue level, the portal object appears to be blinking in red.
Edit my_game.js; modify the MyGame constructor to define texture image files and the variables for referencing the TextureRenderable objects:
Initiate the loading of the textures in the load() function:
Make sure you remember to unload the textures in unload():
Define the next() function to start the blue level:
Create and initialize the TextureRenderable objects in the init() function:
The modification to the draw() function draws the two new TextureRenderable objects by calling their corresponding draw() functions, while the modification to the update() function is similar to that of the BlueLevel discussed earlier. Please refer to the my_game.js source code file in the src/my_game folder for details.
When running the example for this project in the chapter5/5.1.texture_shaders folder, once again take note of the results of continuously changing the texture tinting—the blinking of the portal minion in red. In addition, notice the differences between the PNG-based textures in the MyGame level and the corresponding JPEG ones with white borders in the BlueLevel. It is visually more pleasing and accurate to represent objects using textures with the alpha (or transparency) channel. PNG is one of the most popular image formats that supports the alpha channel.
This project has been the longest and most complicated one that you have worked with. This is because working with texture mapping requires you to understand texture coordinates, the implementation cuts across many of the files in the engine, and the fact that actual images must be loaded, converted into textures, and stored/accessed via WebGL. To help summarize the changes, Figure 5-6 shows the game engine states in relation to the states of an image used for texture mapping and some of the main game engine operations.

Overview of the states of an image file and the corresponding WebGL texture

Example sprite sheet: minion_sprite.png composed of lower-resolution images of different objects
Sprite sheets are defined to optimize both memory and processing requirements. For example, recall that WebGL only supports textures that are defined by images with 2^x × 2^y resolutions. This requirement means that the Dye character at a resolution of 120x180 must be stored in a 128x256 (2^7 × 2^8) image in order for it to be created as a WebGL texture. Additionally, if the 13 elements of Figure 5-7 were stored as separate image files, then it would mean 13 slow file system accesses would be required to load all the images, instead of one single system access to load the sprite sheet.
Pixel positions: The lower-left corner is (315, 0), and the upper-right corner is (495, 180).
UV values: The lower-left corner is (0.308, 0.0), and the upper-right corner is (0.483, 0.352).
Use in Model Space: Texture mapping of the element is accomplished by associating the corresponding uv values with the xy values at each vertex position.
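For example, assuming the sprite sheet in Figure 5-7 has a resolution of 1024x512 texels (a value consistent with the uv numbers listed above), the conversion is a simple division by the image dimensions:

let uLeft = 315 / 1024;    // ≈ 0.308, left edge in Texture Space
let uRight = 495 / 1024;   // ≈ 0.483, right edge
let vTop = 180 / 512;      // ≈ 0.352, top edge (v origin at the lower-left corner)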

A conversion of coordinate from pixel position to uv values and used for mapping on geometry

Running the Sprite Shaders project
Right-arrow key: Moves the Dye character (the hero) right and loops to the left boundary when the right boundary is reached
Left-arrow key: Moves the hero left and resets the position to the middle of the window when the left boundary is reached
To gain a deeper understanding of texture coordinates
To experience defining subregions within an image for texture mapping
To draw squares by mapping from sprite sheet elements
To prepare for working with sprite animation and bitmap fonts
You can find the following external resource files in the assets folder: consolas-72.png and minion_sprite.png . Notice that minion_sprite.png is the image shown in Figure 5-7.

Defining a texture coordinate buffer in the SpriteShader
Create a new file in the src/engine/shaders folder and name it sprite_shader.js.
Define the SpriteShader class and its constructor to extend the TextureShader class:
Define a function to set the WebGL texture coordinate buffer:
Override the texture coordinate accessing function, _getTexCoordBuffer(), such that when the shader is activated, the locally allocated dynamic buffer is returned and not the global static buffer. Note that the activate() function is inherited from TextureShader.
Remember to export the class:
Create a new file in the src/engine/renderables folder and name it sprite_renderable.js.
Define the SpriteRenderable class and constructor to extend from the TextureRenderable class. Notice that the four instance variables, mElmLeft, mElmRight, mElmTop, and mElmBottom, together identify a subregion within the Texture Space. These are the bounds of a sprite sheet element.
Define an enumerated data type with values that identify corresponding offset positions of a WebGL texture coordinate specification array:
An enumerated data type has a name that begins with an “e”, as in eTexCoordArrayIndex.
Define functions to allow the specification of uv values for a sprite sheet element in both texture coordinate space (normalized between 0 and 1) and with pixel positions (which are converted to uv values):
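A sketch of the pixel-position variant follows; it retrieves the image resolution through the texture.get() function of the Texture component and divides. The parameter order of setElementUVCoordinate() is an assumption.

setElementPixelPositions(left, right, bottom, top) {
    let texInfo = texture.get(this.mTexture);   // TextureInfo: mWidth, mHeight, mGLTexID
    let imageW = texInfo.mWidth;
    let imageH = texInfo.mHeight;
    this.setElementUVCoordinate(
        left / imageW, right / imageW,
        bottom / imageH, top / imageH);
}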
Add a function to construct the texture coordinate specification array that is appropriate for passing to the WebGL context:
Override the draw() function to load the specific texture coordinate values to WebGL context before the actual drawing:
Finally, remember to export the class and the defined enumerated type:
In the engine/core/shader_resources.js file, import SpriteShader, add a variable for storing, and define the corresponding getter function to access the shared SpriteShader instance:
Modify the createShaders() function to create the SpriteShader:
Update the cleanUp() function for proper release of resources:
Make sure to export the new functionality:
The constructing, loading, unloading, and drawing of MyGame are similar to previous examples, so the details will not be repeated here. Please refer to the source code in the src/my_game folder for details.
Modify the init() function as follows.
After the camera is set up in step A, notice that in step B both mPortal and mCollector are created based on the same image, kMinionSprite, with the respective setElementPixelPositions() and setElementUVCoordinate() calls to specify the actual sprite element to use for rendering.
Step C creates two additional SpriteRenderable objects: mFontImage and mMinion. The sprite element uv coordinate settings are the defaults where the texture image will cover the entire geometry.
Similar to step B, step D creates the hero character as a SpriteRenderable object based on the same kMinionSprite image. The sprite sheet element that corresponds to the hero is identified with the setElementPixelPositions() call.
Notice that in this example, four of the five SpriteRenderable objects created are based on the same kMinionSprite image.
The update() function is modified to support the controlling of the hero object and changes to the uv values.
Observe that the keyboard control and the drawing of the hero object are identical to previous projects.
Notice the calls to setElementUVCoordinate() for mFontImage and mMinion. These calls continuously decrease and reset the V values that correspond to the bottom and the U values that correspond to the right for mFontImage, and the V values that correspond to the top and the U values that correspond to the left for mMinion. The end result is a continuously changing texture and the appearance of a zooming animation on these two objects.
In games, you often want to create animations that reflect the movements or actions of your characters. In the previous chapter, you learned about moving the geometries of these objects with transformation operators. However, as you have observed when controlling the hero character in the previous example, if the textures on these objects do not change in ways that correspond to the control, the interaction conveys the sensation of moving a static image rather than setting a character in motion. What is needed is the ability to create the illusion of animations on geometries when desired.
In the previous example, you observed from the mFontImage and mMinion objects that the appearance of an animation can be created by constantly changing the uv values on a texture-mapped geometry. As discussed at the beginning of this chapter, one way to control this type of animation is by working with an animated sprite sheet.

An animated sprite sheet organized into two rows representing two animated sequences of the same object

A sprite animation sequence that loops

Running the Sprite Animate Shaders project
Right-arrow key: Moves the hero right; when crossing the right boundary, the hero is wrapped back to the left boundary
Left-arrow key: Opposite movements of the right arrow key
Number 1 key: Animates by showing sprite elements continuously from right to left
Number 2 key: Animates by showing sprite elements moving back and forth continuously from left to right and right to left
Number 3 key: Animates by showing sprite elements continuously from left to right
Number 4 key: Increases the animation speed
Number 5 key: Decreases the animation speed
To understand animated sprite sheets
To experience the creation of sprite animations
To define abstractions for implementing sprite animations
You can find the same files as in the previous project in the assets folder.
Sprite animation can be implemented by strategically controlling the uv values of a SpriteRenderable to display the appropriate sprite element at desired time periods. For this reason, only a single class, SpriteAnimateRenderable, needs to be defined to support sprite animations.
Create a new file in the src/engine/renderables folder and name it sprite_animate_renderable.js.
Define an enumerated data type for the three different sequences to animate:
The eAnimationType enum defines three modes for animation:
eRight starts at the leftmost element and animates by iterating toward the right along the same row. When the last element is reached, the animation continues by starting from the leftmost element again.
eLeft is the reverse of eRight; it starts from the right, animates toward the left, and continues by starting from the rightmost element after reaching the leftmost element.
eSwing is a continuous loop from left to right and then from right to left.
Define the SpriteAnimateRenderable class to extend from SpriteRenderable and define the constructor:
The first set, including mFirstElmLeft, mElmTop, and so on, defines the location and dimensions of each sprite element and the number of elements in the animation. This information can be used to accurately compute the texture coordinate for each sprite element when the elements are ordered by rows and columns. Note that all coordinates are in texture coordinate space (0 to 1).
The second set stores information on how to animate, the mAnimationType of left, right, or swing, and how much time, mUpdateInterval, to wait before advancing to the next sprite element. This information can be changed during runtime to reverse, loop, or control the speed of a character’s movement.
The third set, mCurrentAnimAdvance and mCurrentElm, describes the offset for advancing the animation and the current frame number. Both of these variables are in units of element counts and are not designed to be accessed by the game programmer because they are used internally to compute the next sprite element for display.
Define the _initAnimation() function to compute the proper values for mCurrentAnimAdvance and mCurrentElm according to the current animation type:
Define the _setSpriteElement() function to compute and load the uv values of the currently identified sprite element for rendering:
Define a function to set the animation type. Note that the animation is always reset to start from the beginning when the animation type (left, right, or swing) is changed.
Define a function for specifying a sprite animation sequence. The inputs to the function are in pixels and are converted to texture coordinates by dividing by the width and height of the image.
Define functions to change animation speed, either directly or by an offset:
Define a function to advance the animation for each game loop update:
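The advancing logic might look like the following sketch, where mCurrentTick (a counter of update calls) and mNumElems (the number of elements in the sequence) are assumed instance variables not named in the text above.

updateAnimation() {
    this.mCurrentTick++;
    if (this.mCurrentTick >= this.mUpdateInterval) {
        this.mCurrentTick = 0;
        this.mCurrentElm += this.mCurrentAnimAdvance;   // advance to the next element
        if ((this.mCurrentElm >= 0) && (this.mCurrentElm < this.mNumElems)) {
            this._setSpriteElement();    // still inside the sequence
        } else {
            this._initAnimation();       // wrap around (or swing back) and restart
        }
    }
}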
Finally, remember to export the defined class and enumerated animation type:
The constructing, loading, unloading, and drawing of MyGame are similar to the previous example and the details are not repeated.
In the init() function, add code to create and initialize the SpriteAnimateRenderable objects between steps C and D:
The update() function must invoke the SpriteAnimateRenderable object’s updateAnimation() function to advance the sprite animation:
The keys 1, 2, and 3 change the animation type, and keys 4 and 5 change the animation speed. Note that the limit of the animation speed is the update rate of the game loop.
A valuable tool that many games use for a variety of tasks is text output. Drawing text messages is an efficient way to communicate with the user as well as with you, the developer. For example, text messages can be used to communicate the game's story, the player's score, or debugging information during development. Unfortunately, WebGL does not support the drawing of text. This section briefly introduces bitmap fonts and presents the FontRenderable object to support the drawing of text.

An example bitmap font sprite image

A snippet of the XML file with the decoding information for the bitmap font image shown in Figure 5-14
Notice that the decoding information as shown in Figure 5-15 uniquely defines the uv coordinate positions for each character in the image, as shown in Figure 5-14. In this way, displaying individual characters from a bitmap font sprite image can be performed in a straightforward manner by the SpriteRenderable objects.
There are many bitmap font file formats. The format used in this book is the AngelCode BMFont–compatible font in XML form. BMFont is an open source program that converts vector fonts, such as TrueType and OpenType, into bitmap fonts. See www.angelcode.com/products/bmfont/ for more information.

Running the Font Support project
Number keys 0, 1, 2, and 3: Select the consolas-16, consolas-24, consolas-32, or consolas-72 font, respectively, for size modification.
Up-/down-arrow key while holding down the X or Y key: Increases or decreases the width (X key) or the height (Y key) of the selected font.
Left- and right-arrow keys: Move the hero left or right. The hero wraps if it exits the bounds.
To understand bitmap fonts
To gain a basic understanding of drawing text strings in a game
To implement text drawing support in your game engine
You can find the following external resource files in the assets folder: consolas-72.png and minion_sprite.png. In the assets/fonts folder are the bitmap font sprite image files and the associated XML files that contain the decoding information: consolas-16.fnt, consolas-16.png, consolas-24.fnt, consolas-24.png, consolas-32.fnt, consolas-32.png, consolas-72.fnt, consolas-72.png, segment7-96.fnt, segment7-96.png, system-default-font.fnt, and system-default-font.png.
Notice that the .fnt and .png files are paired. The former contains decoding information for the latter. These file pairs must be included in the same folder for the engine to load the font properly. system-default-font is the default font for the game engine, and it is assumed that this font is always present in the assets/fonts folder.
The actions of parsing, decoding, and extracting character information from the .fnt files are independent of the foundational operations of a game engine. For this reason, the details of these operations are not presented. If you are interested, you should consult the source code.
Create a new file in the src/engine/resources folder and name it font.js.
Import the resource management functionality from the xml module for loading the .fnt file and the texture module for the .png sprite image file, and define local constants for these file extensions:
Define a class for storing uv coordinate locations and the size associated with a character. This information can be computed based on the contents from the .fnt file.
Define two functions to return the proper extensions based on a path with no file extension. Note that fontName is a path to the font files but without any file extension. For example, given the string assets/fonts/system-default-font, the two functions identify the two associated .fnt and .png files.
Define the load() and unload() functions. Notice that two file operations are actually invoked in each: one for the .fnt and the second for the .png files.
Define a function to inquire the loading status of a given font:
Define a function to compute CharacterInfo based on the information presented in the .fnt file:
Details of decoding and extracting information for a given character are omitted because they are unrelated to the rest of the game engine implementation.
For details of the .fnt format information, please refer to www.angelcode.com/products/bmfont/doc/file_format.html.
Finally, remember to export the functions from this module:
Create a file in the src/engine/resources folder and name it default_resources.js, import functionality from the font and resource_map modules, and define a constant string and its getter function for the path to the default system font:
Define an init() function to issue the default system font loading request in a JavaScript Promise and append the Promise to the array of outstanding load requests in the resource_map. Recall that the loop.start() function in the loop module waits for the fulfillment of all resource_map loading promises before starting the game loop. For this reason, as in the case of all other asynchronously loaded resources, by the time the game loop begins, the default system font will have been properly loaded.
Define the cleanUp() function to release all allocated resources, in this case, unload the font:
Lastly, remember to export all defined functionality:
Create a new file in the src/engine/renderables folder and name it font_renderable.js.
Define the FontRenderable class and its constructor to accept a string as its parameter:
The aString variable is the message to be drawn.
Notice that FontRenderable does not customize the behaviors of SpriteRenderable. Rather, it relies on a SpriteRenderable object to draw each character in the string. For this reason, FontRenderable is not a subclass of SpriteRenderable but instead contains an instance of it, the mOneChar variable.
Define the draw() function to parse and draw each character in the string using the mOneChar variable :
The dimension of each character is defined by widthOfOneChar and heightOfOneChar, where the width is simply the total FontRenderable width divided by the number of characters in the string. The for loop then performs the following operations:
Extracts each character in the string
Calls the getCharInfo() function to receive the character’s uv values and size information in charInfo
Uses the uv values from charInfo to identify the sprite element location for mOneChar (by calling and passing the information to the mOneChar.setElementUVCoordinate() function )
Uses the size information from charInfo to compute the actual size (xSize and ySize) and location offset for the character (xOffset and yOffset) and draws the character mOneChar with the appropriate settings
Implement the getters and setters for the transform, the text message to be drawn, the font to use for drawing, and the color:
Define the setTextHeight() function to define the height of the message to be output:
Finally, remember to export the defined class:
FontRenderable does not support the rotation of the entire message. Text messages are always drawn horizontally from left to right.
Edit index.js to import functionality from the font and default_resources modules and the FontRenderable class:
Add default resources initialization and cleanup in the engine init() and cleanUp() functions:
Remember to export the newly defined functionality:
In the my_game.js file, modify the constructor to define corresponding variables for printing the messages, and modify the draw() function to draw all objects accordingly. Please refer to the src/my_game/my_game.js file for the details of the code.
Modify the load() function to load the textures and fonts. Once again, notice that the font paths, for example, assets/fonts/consolas-16, do not include file name extensions. Recall that this path will be appended with .fnt and .png, where two separate files will be loaded to support the drawing of fonts.
Modify the unload() function to unload the textures and fonts:
Define a private _initText() function to set the color, location, and height of a FontRenderable object. Modify the init() function to set up the proper WC system and initialize the fonts. Notice the calls to setFont() function to change the font type for each message.
Modify the update() function with the following:
Select which FontRenderable object to work with based on keyboard 0 to 4 input.
Control the width and height of the selected FontRenderable object when both the left/right arrow and X/Y keys are pressed.
You can now interact with the Font Support project to modify the size of each of the displayed font message and to move the hero toward the left and right.
In this chapter, you learned how to paste, or texture map, images on unit squares to better represent objects in your games. You also learned how to identify a selected subregion of an image and texture map it to the unit square based on the normalized Texture Coordinate System. The chapter then explained how sprite sheets can reduce the time required for loading texture images while facilitating the creation of animations. This knowledge was then generalized and applied to the drawing of bitmap fonts.
The implementation of texture mapping and sprite sheet rendering takes advantage of an important aspect of game engine architecture: the SimpleShader/Renderable object pair where JavaScript SimpleShader objects are defined to interface with corresponding GLSL shaders and Renderable objects to facilitate the creation and interaction with multiple object instances. For example, you created TextureShader to interface with TextureVS and TextureFS GLSL shaders and created TextureRenderable for the game programmers to work with. This same pattern is repeated for SpriteShader and SpriteRenderable. The experience from SpriteShader objects paired with SpriteAnimateRenderable shows that, when appropriate, the same shader object can support multiple renderable object types in the game engine. This SimpleShader/Renderable pair implementation pattern will appear again in Chapter 8, when you learn to create 3D illumination effects.
At the beginning of this chapter, your game engine supports the player manipulating objects with the keyboard and the drawing of these objects in various sizes and orientations. With the functionality from this chapter, you can now represent these objects with interesting images and create animations of these objects when desired. In the next chapter, you will learn about defining and supporting behaviors for these objects including pseudo autonomous behaviors such as chasing and collision detections.
In Chapter 4, you learned how responsive game feedback is essential for making players feel connected to a game world and that this sense of connection is known as presence in game design. As you move through future chapters in this book, you’ll notice that most game design is ultimately focused on enhancing the sense of presence in one way or another, and you’ll discover that visual design is one of the most important contributors to presence. Imagine, for example, a game where an object controlled by the player (also known as the hero object) must maneuver through a 2D platformer-style game world; the player’s goal might be to use the mouse and keyboard to jump the hero between individual surfaces rendered in the game without falling through gaps that exist between those surfaces. The visual representation of the hero and other objects in the environment determines how the player identifies with the game setting, which in turn determines how effectively the game creates presence: Is the hero represented as a living creature or just an abstract shape like a square or circle? Are the surfaces represented as building rooftops, as floating rocks on an alien planet, or simply as abstract rectangles? There is no right or wrong answer when it comes to selecting a visual representation or game setting, but it is important to design a visual style for all game elements that feels unified and integrated into whatever game setting you choose (e.g., abstract rectangle platforms may negatively impact presence if your game setting is a tropical rainforest).
The Texture Shaders project demonstrated how .png images with transparency more effectively integrate game elements into the game environment than formats like .jpg that don't support transparency. If you move the hero (represented here as simply a rectangle) to the right, nothing on the screen changes, but if you move the hero to the left, you'll eventually trigger a state change that alters the displayed visual elements, as you did in the Scene Objects project from Chapter 4. Notice how much more effectively the robot sprites are integrated into the game scene when they're .png files with transparency on the gray background compared to when they're .jpg images without transparency on the blue background.
The Sprite Shaders project introduces a hero that more closely matches other elements in the game setting: you’ve replaced the rectangle from the Texture Shaders project with a humanoid figure stylistically matched to the flying robots on the screen, and the area of the rectangular hero image not occupied by the humanoid figure is transparent. If you were to combine the hero from the Sprite Shaders project with the screen-altering action in the Texture Shaders project, imagine that as the hero moves toward the robot on the right side of the screen, the robot might turn red when the hero gets too close. The coded events are still simple at this point, but you can see how the visual design and a few simple triggered actions can already begin to convey a game setting and enhance presence.
Note that as game designers we often become enamored with highly detailed and elaborate visual designs, and we begin to believe that higher fidelity and more elaborate visual elements are required to make the best games; this drive for ever-more powerful graphics is the familiar race that many AAA games engage in with their competition. While it's true that game experiences and the sense of presence can be considerably enhanced when paired with excellent art direction, excellence does not always require the elaborate and complex. Great art direction relies on developing a unified visual language where all elements harmonize with each other and contribute to driving the game forward, and that harmony can be achieved with anything from simple shapes and colors in a 2D plane to hyperreal 3D environments and every combination in between.
Adding animated motion to the game’s visual elements can further enhance game presence because animation brings a sense of cinematic dynamism to gameplay that further connects players to the game world. We typically experience motion in our world as interconnected systems; when you walk across the room, for example, you don’t just glide without moving your body but move different parts of your body together in different ways. By adding targeted animations to objects onscreen that cause those objects to behave in ways you might expect complex systems to move or act, you connect players in a more immersive and convincing way to what’s going on in the game world. The Sprite Animation project demonstrates how animation increases presence by allowing you to articulate the flying robot’s spikes, controlling direction and speed. Imagine again combining the Sprite Animation project with the earlier projects in this chapter; as the hero moves closer to the robot, it might first turn red, eventually triggering the robot’s animations and moving it either toward or away from the player. Animations often come fairly late in the game design process because it’s usually necessary to first have the game mechanic and other systems well defined to avoid time-consuming changes that may be required as environments and level designs are updated. Designers typically use simple placeholder assets in the early stages of development, adding polished and animated final assets only when all of the other elements of gameplay have been finalized to minimize the need for rework.
As was the case with visual design, the animation approach need not be complex to be effective. While animation needs to be intentional and unified and should feel smooth and stutter-free unless it’s intentionally designed to be otherwise, a wide degree of artistic license can be employed in how movement is represented onscreen.
The Font Support project introduced you to game fonts. While fonts rarely have a direct impact on gameplay, they can have a dramatic impact on presence. Fonts are a form of visual communication, and the style of the font is often as important as the words it conveys in setting tone and mood and can either support or detract from the game setting and visual style. Pay particular attention to the fonts displayed in this project, and note how the yellow font conveys a digital feeling that’s matched to the science fiction–inspired visual style of the hero and robots, while the Consolas font family with its round letterforms feels a bit out of place with this game setting (sparse though the game setting may still be). As a more extreme example, imagine how disconnected a flowing calligraphic script font (the type typically used in high-fantasy games) would appear in a futuristic game that takes place on a spaceship.
There are as many visual style possibilities for games as there are people and ideas, and great games can feature extremely simple graphics. Remember that excellent game design is a combination of the nine contributing elements (return to the introduction if you need to refresh your memory), and the most important thing to keep in mind as a game designer is maintaining focus on how each of those elements harmonizes with and elevates the others to create something greater than the sum of its parts.
Implement autonomous behaviors such as target-locked chasing and gradual turning
Collide textured objects accurately
Understand the efficiency concerns of pixel-accurate collision
Program with pixel-accurate collision effectively and efficiently
By this point, your game engine is capable of implementing games in convenient coordinate systems as well as presenting and animating objects that are visually appealing. However, there is a lack of abstraction support for the behaviors of objects. You can see the direct results of this shortcoming in the init() and update() functions of the MyGame objects in all the previous projects: the init() function is often crowded with mundane per-game object settings, while the update() function is often crowded with conditional statements for controlling objects, such as checking for key presses for moving the hero.
A well-designed system should hide the initialization and controls of individual objects with proper object-oriented abstractions or classes. An abstract GameObject class should be introduced to encapsulate and hide the specifics of its initialization and behaviors. There are two main advantages to this approach. First, the init() and update() functions of a game level can focus on managing individual game objects and the interactions of these objects without being cluttered with details specific to different types of objects. Second, as you have experienced with the Renderable and SimpleShader class hierarchies, proper object-oriented abstraction creates a standardized interface and facilitates code sharing and reuse.
As you transition from working with the mere drawing of objects (in other words, Renderable) to programming with the behavior of objects (in other words, GameObject), you will immediately notice that for the game to be entertaining or fun, the objects need to interact. Interesting behaviors of objects, such as facing or evading enemies, often require the knowledge of the relative positions of other objects in the game. In general, resolving relative positions of all objects in a 2D world is nontrivial. Fortunately, typical video games require the knowledge of only those objects that are in close proximity to each other or are about to collide or have collided.
An efficient but somewhat crude approximation to detect collision is to compute the bounds of an object and approximate object collisions based on colliding bounding boxes. In the simplest cases, bounding boxes are rectangular boxes with edges that are aligned with the x/y axes. These are referred to as axis-aligned bounding boxes or AABBs. Because of the axis alignments, it is computationally efficient to detect when two AABBs overlap or when collision is about to occur.
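For example, representing each AABB by its center and size, two boxes overlap only when they overlap along both the x and y axes; a minimal sketch of such a test might look like this:

// (cx, cy) is the center and (w, h) the width/height of each bounding box
function aabbOverlap(cx1, cy1, w1, h1, cx2, cy2, w2, h2) {
    return (Math.abs(cx1 - cx2) * 2 < (w1 + w2)) &&
           (Math.abs(cy1 - cy2) * 2 < (h1 + h2));
}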
Many 2D game engines can also detect the actual collision between two textured objects by comparing the location of pixels from both objects and detecting the situation when at least one of the nontransparent pixels overlaps. This computationally intensive process is known as per-pixel-accurate collision detection, pixel-accurate collision, or per-pixel collision.
This chapter begins by introducing the GameObject class to provide a platform for abstracting game object behaviors. The GameObject class is then generalized to introduce common behavior attributes including speed, movement direction, and target-locked chasing. The rest of the chapter focuses on deriving an efficient per-pixel accurate collision implementation that supports both textured and animated sprite objects.
As mentioned, an abstraction that encapsulates the intrinsic behavior of typical game objects should be introduced to minimize the cluttering of code in the init() and update() functions of a game level and to facilitate reuse. This section introduces the simple GameObject class to illustrate how the cleaner and uncluttered init() and update() functions clearly reflect the in-game logic and to demonstrate how the basic platform for abstracting object behaviors facilitates design and code reuse.

Running the Game Objects project
WASD keys : Move the hero up, left, down, and right
To begin defining the GameObject class to encapsulate object behaviors in games
To demonstrate the creation of subclasses to the GameObject class to maintain the simplicity of the MyGame level update() function
To introduce the GameObjectSet class, demonstrating support for a set of homogeneous objects with an identical interface

The new sprite elements of the minion_sprite.png image
Add a new folder src/engine/game_objects for storing GameObject-related files.
Create a new file in this folder, name it game_object.js, and add the following code:
With the accessors to the Renderable and Transform objects defined, all GameObject instances can be drawn and have defined locations and sizes. Note that the update() function is designed for subclasses to override with per object–specific behaviors, and thus, it is left empty.
Create a new file in the src/engine/game_objects folder and name it game_object_set.js. Define the GameObjectSet class and the constructor to initialize an array for holding GameObject instances.
Define functions for managing the set membership:
Define functions to update and draw each of the GameObject instances in the set:
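Taken together, a sketch of the complete class might look as follows; the individual method names are assumptions, as only the overall responsibilities are described above.

class GameObjectSet {
    constructor() {
        this.mSet = [];   // the GameObject instances in this set
    }

    size() { return this.mSet.length; }
    getObjectAt(index) { return this.mSet[index]; }
    addToSet(obj) { this.mSet.push(obj); }
    removeFromSet(obj) {
        let index = this.mSet.indexOf(obj);
        if (index > -1)
            this.mSet.splice(index, 1);
    }

    update() {
        for (let i = 0; i < this.mSet.length; i++)
            this.mSet[i].update();
    }

    draw(aCamera) {
        for (let i = 0; i < this.mSet.length; i++)
            this.mSet[i].draw(aCamera);
    }
}

export default GameObjectSet;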
This process of importing and exporting classes via the engine access file, index.js, must be repeated for every newly defined functionality. Henceforth, only a reminder will be provided and the straightforward code change will not be shown again.
The goals of this project are to ensure proper functioning of the new GameObject class, to demonstrate customization of behaviors by individual object types, and to observe a cleaner MyGame implementation clearly reflecting the in-game logic. To accomplish these goals, three object types are defined: DyePack, Hero, and Minion. Before you begin to examine the detailed implementation of these objects, follow good source code organization practice and create a new folder src/my_game/objects for storing the new object types.
The DyePack class derives from the GameObject class to demonstrate the most basic example of a GameObject: an object that has no behavior and is simply drawn to the screen.
Notice that even without specific behaviors, the DyePack is implementing code that used to be found in the init() function of the MyGame level. In this way, the DyePack object hides specific geometric information and simplifies the MyGame level .
The need to import from the engine access file, index.js, is true for almost all client source code files and will not be repeated.
Create a new file in the src/my_game/objects folder and name it hero.js. Define Hero as a subclass of GameObject, and implement the constructor to initialize the sprite UV values, size, and position. Make sure to export and share this class.
Add a function to support the update of this object by user keyboard control. The Hero object moves at a kDelta rate based on WASD input from the keyboard.
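Assuming the engine input module exposes an isKeyPressed() function with a keys table (as used for keyboard control in earlier projects), and that the Transform accessor and increment functions are named as sketched here, the update() function might look like this:

update() {
    const kDelta = 0.3;              // WC units moved per update (assumed value)
    let xform = this.getXform();     // assumed Transform accessor from GameObject
    if (engine.input.isKeyPressed(engine.input.keys.W))
        xform.incYPosBy(kDelta);
    if (engine.input.isKeyPressed(engine.input.keys.S))
        xform.incYPosBy(-kDelta);
    if (engine.input.isKeyPressed(engine.input.keys.A))
        xform.incXPosBy(-kDelta);
    if (engine.input.isKeyPressed(engine.input.keys.D))
        xform.incXPosBy(kDelta);
}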
Create a new file in the src/my_game/objects folder and name it minion.js. Define Minion as a subclass of GameObject, and implement the constructor to initialize the sprite UV values, sprite animation parameters, size, and position as follows:
Add a function to update the sprite animation, support the simple right-to-left movements, and provide the wrapping functionality:
In addition to importing from the engine access file, index.js, the corresponding source code files must be imported in order to gain access to the newly defined objects:
As is the case for other import/export statements, unless there are other specific reasons, this reminder will not be shown again.
The constructor and the load(), unload(), and draw() functions are similar to those in previous projects, so the details are not shown here.
Edit the init() function and add the following code:
Edit the update() function to update the game state :
With the well-defined behaviors for each object type abstracted, the clean update() function clearly shows that the game consists of three noninteracting objects.
You can now run the project and notice that the slightly more complex movements of six minions are accomplished with much cleaner init() and update() functions. The init() function consists of only logic and controls for placing created objects in the game world and does not include any specific settings for different object types. With the Minion object defining its motion behaviors in its own update() function, the logic in the MyGame update() function can focus on the details of the level. Note that the structure of this function clearly shows that the three objects are updated independently and do not interact with each other.
Throughout this book, in almost all cases, MyGame classes are designed to showcase the engine functionality. As a result, the source code organization in most MyGame classes may not represent the best practices for implementing games.
A closer examination of the previous project reveals that though there are quite a few minions moving on the screen, their motions are simple and boring. Even though there are variations in speed and direction, the motions are without purpose or awareness of other game objects in the scene. To support more sophisticated or interesting movements, a GameObject needs to be aware of the locations of other objects and determine motion based on that information.
Chasing behavior is one such example. The goal of a chasing object is usually to catch the game object that it is targeting. This requires programmatic manipulation of the front direction and speed of the chaser such that it can home in on its target. However, it is generally important to avoid implementing a chaser that has perfect aim and always hits its target; if the player is unable to avoid being hit, the game becomes impossibly difficult. Nonetheless, this does not mean you should not implement a perfect chaser if your game design requires it. You will implement a chaser in the next project.
Vectors and the associated operations are the foundation for implementing object movements and behaviors. Before programming with vectors, a quick review is provided. As in the case of matrices and transform operators, the following discussion is not meant to be a comprehensive coverage of vectors. Instead, the focus is on the application of a small collection of concepts that are relevant to the implementation of the game engine. This is not a study of the theories behind the mathematics. If you are interested in the specifics of vectors and how they relate to games, please refer to the discussion in Chapter 1 where you can learn more about these topics in depth by delving into relevant books on linear algebra and games.
Vectors are used across many fields of study, including mathematics, physics, computer science, and engineering. They are particularly important in games; nearly every game uses vectors in one way or another. Because they are used so extensively, this section is devoted to understanding and utilizing vectors in games.
For an introductory and comprehensive coverage of vectors, you can refer to www.storyofmathematics.com/vectors. For more detailed coverage of vector applications in games, you can refer to Basic Math for Game Development with Unity 3D: A Beginner’s Guide to Mathematical Foundations, Apress, 2019.
One of the most common uses for vectors is to represent an object’s displacement and direction or velocity. This can be done easily because a vector is defined by its size and direction. Using only this small amount of information, you can represent attributes such as the velocity or acceleration of an object. If you have the position of an object, its direction, and its velocity, then you have sufficient information to move it around the game world without user input.
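As a minimal sketch of this idea (using the gl-matrix vec2 type and hypothetical variable names, not the engine's own code), a position can be advanced each update by the front direction scaled by the speed:

// Hypothetical names for illustration only.
let position = vec2.fromValues(20, 30);  // where the object is, in WC
let frontDir = vec2.fromValues(0, 1);    // unit-length direction of travel
let speed = 0.5;                         // WC units moved per update

function moveOneUpdate() {
    // position = position + (speed * frontDir)
    let displacement = vec2.scale(vec2.create(), frontDir, speed);
    vec2.add(position, position, displacement);
}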
A vector from a position Pa to a position Pb can be defined as Pb − Pa. You can see this represented in the following equations and Figure 6-3:

Pa = (xa, ya)
Pb = (xb, yb)
Pb − Pa = (xb − xa, yb − ya)

A vector defined by two points

Now that you have a vector, you can easily determine its length (or size) and direction. A vector's length is equal to the distance between the two points that created it. In this example, the length of the vector is equal to the distance between Pa and Pb, while its direction goes from Pa toward Pb.
The size of a vector is often referred to as its length or magnitude.
In the gl-matrix library, the vec2 object implements the functionality of a 2D vector. Conveniently, you can also use the vec2 object to represent 2D points or positions in space. In the preceding example, Pa, Pb, and the vector defined between them can all be implemented as instances of the vec2 object. However, only the vector from Pa to Pb is a mathematically defined vector; Pa and Pb represent positions or points used to create the vector.
A vector can be normalized, that is, scaled to a length of exactly 1 while keeping its direction. The gl-matrix library provides this operation directly:

vec2.normalize(): Normalizes a given vector and stores the result to an output vec2 object

A vector being normalized
If a vector represents the direction from the origin to the position (xv, yv) and you want to rotate it by θ, then, as illustrated in Figure 6-5, you can use the following equations to derive xr and yr:

xr = xv cos θ − yv sin θ
yr = xv sin θ + yv cos θ

A vector from the origin to the position (xv, yv) being rotated by the angle theta

JavaScript trigonometric functions, including the Math.sin() and Math.cos() functions, assume input to be in radians and not degrees. Recall that 1 degree is equal to π/180 radians.
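For example, a quick sketch of these two equations in plain JavaScript (not engine code) looks like the following; the angle must be given in radians:

// Rotate the vector (xv, yv) by theta radians about the origin.
function rotateVec2(xv, yv, theta) {
    let xr = xv * Math.cos(theta) - yv * Math.sin(theta);
    let yr = xv * Math.sin(theta) + yv * Math.cos(theta);
    return [xr, yr];
}

// Rotating (1, 0) by 90 degrees yields approximately (0, 1).
let rotated = rotateVec2(1, 0, 90 * Math.PI / 180);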
Two vectors that are located at different positions but have the same direction and magnitude are equal to each other. In contrast, a third vector is not the same as the other two when its direction and magnitude are different from theirs.

Three vectors represented in 2D space with two vectors equal to each other


The dot product of two vectors V1 = (x1, y1) and V2 = (x2, y2) is defined as V1 · V2 = x1x2 + y1y2. If V1 and V2 are normalized, then V1 · V2 = cos θ, where θ is the angle between the two vectors. It is also important to recognize that if V1 · V2 = 0, then the two vectors are perpendicular. The figure below illustrates two vectors with an angle θ in between them.

The angle between two vectors, which can be found through the dot product
If you need to review or refresh the concept of a dot product, please refer to www.mathsisfun.com/algebra/vectors-dot-product.html.
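As a small illustrative sketch (using gl-matrix vec2 calls; the variable names are assumptions, not engine code), the angle between two vectors can be recovered from the dot product of their normalized forms:

// Angle, in radians, between two 2D vectors.
function angleBetween(v1, v2) {
    let n1 = vec2.normalize(vec2.create(), v1);
    let n2 = vec2.normalize(vec2.create(), v2);
    let cosTheta = vec2.dot(n1, n2);
    // Guard against floating-point results slightly outside [-1, 1].
    cosTheta = Math.min(1, Math.max(-1, cosTheta));
    return Math.acos(cosTheta);
}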


The cross product of two vectors, V1 × V2, is a vector perpendicular to both V1 and V2; for 2D vectors, it points along the z axis with a z component of x1y2 − y1x2. When this z component is negative, you know that V2 is in the clockwise direction from V1; similarly, when it is positive, you know that V2 is in the counterclockwise direction. Figure 6-8 should help clarify this concept.
The cross product of two vectors
If you need to review or refresh the concept of a cross product, please refer to www.mathsisfun.com/algebra/vectors-cross-product.html.
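In code, the direction test can be sketched with the gl-matrix vec3.cross() function by treating the 2D vectors as 3D vectors with a zero z component; the function name here is illustrative only:

// Positive result: v2 is counterclockwise from v1.
// Negative result: v2 is clockwise from v1. Zero: the vectors are parallel.
function turnDirection(v1, v2) {
    let a = vec3.fromValues(v1[0], v1[1], 0);
    let b = vec3.fromValues(v2[0], v2[1], 0);
    let cross = vec3.cross(vec3.create(), a, b);
    return cross[2];   // the z component: x1*y2 - y1*x2
}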

Running the Front and Chase project
WASD keys: Moves the Hero object
Left-/right-arrow keys: Change the front direction of the Brain object when it is under user control
Up-/down-arrow keys: Increase/decrease the speed of the Brain object
H key: Switches the Brain object to be under user arrow keys control
J key: Switches the Brain object to always point at and move toward the current Hero object position
K key: Switches the Brain object to turn and move gradually toward the current Hero object position
To experience working with speed and direction
To practice traveling along a predefined direction
To implement algorithms with vector dot and cross products
To examine and implement chasing behavior
You can find the same external resource files as in the previous project in the assets folder.
This modification to the gl-matrix library must be present in all projects from this point forward.
Edit the game_object.js file and modify the GameObject constructor to define visibility, front direction, and speed:
Add accessor and setter functions for the instance variables:
Implement a function to rotate the front direction toward a position, p:
Step A computes the distance between the current object and the destination position p. If this value is small, the current object and the target position are close, and the function returns without further processing.
Step B, as illustrated in Figure 6-10, computes the dot product to determine the angle θ between the current front direction of the object (fdir) and the direction toward the destination position p (dir). If these two vectors are pointing in the same direction (cosθ is almost 1 or θ almost zero), the function returns.

A GameObject (Brain) chasing a target (Hero)
Step C checks the range of cosTheta. This step must be performed because floating-point inaccuracy in JavaScript can produce a cosTheta value slightly outside the valid [-1, 1] range for an inverse cosine computation.
Step D uses the results of the cross product to determine whether the current GameObject should be turning clockwise or counterclockwise to face toward the destination position p.
Step E rotates mCurrentFrontDir and sets the rotation in the Transform of the Renderable object. It is important to recognize the two separate object rotation controls. The Transform controls the rotation of what is being drawn, and mCurrentFrontDir controls the direction of travel. In this case, the two are synchronized and thus must be updated with the new value simultaneously.
Add a function to update the object’s position with its direction and speed. Notice that if the mCurrentFrontDir is modified by the rotateObjPointTo() function, then this update() function will move the object toward the target position p, and the object will behave as though it is chasing the target.
Add a function to draw the object based on the visibility setting:
The strategy and goals of this test case are to create a steerable Brain object to demonstrate traveling along a predefined front direction and to direct the Brain to chase after the Hero to demonstrate the chasing functionality.
Create a new file in the src/my_game/objects folder and name it brain.js. Define Brain as a subclass of GameObject, and implement the constructor to initialize the appearance and behavior parameters.
Override the update() function to support the user steering and controlling the speed. Notice that the default update() function in the GameObject must be called to support the basic traveling of the object along the front direction according to its speed.
In the update() function, the switch statement uses mMode to determine how to update the Brain object. In the cases of J and K modes, the Brain object turns toward the Hero object position with the rotateObjPointTo() function call. While in the H mode, the Brain object’s update() function is called for the user to steer the object with the arrow keys. The final three if statements simply set the mMode variable according to user input.
Note that in the cases of J and K modes, in order to bypass the user control logic after the rotateObjPointTo(), the update() function being called is the one defined by the GameObject and not by the Brain.
The JavaScript syntax, ClassName.prototype.FunctionName.call(anObj), calls FunctionName defined by ClassName, where anObj is a subclass of ClassName.
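For example, the pattern looks like the following sketch with made-up class names (not classes from the engine):

class Base {
    update() { console.log("Base update"); }
}
class Derived extends Base {
    update() { console.log("Derived update"); }
}

let obj = new Derived();
obj.update();                     // prints "Derived update"
Base.prototype.update.call(obj);  // prints "Base update", bypassing the override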
You can now try running the project. Initially, the Brain object is under the user’s control. You can use the left- and right-arrow keys to change the front direction of the Brain object and experience steering the object. Pressing the J key causes the Brain object to immediately point and move toward the Hero object. This is a result of the default turn rate value of 1.0. The K key causes a more natural behavior, where the Brain object continues to move forward and gradually turns to move toward the Hero object. Feel free to change the values of the rate variable or modify the control value of the Brain object. For example, change the kDeltaRad or kDeltaSpeed to experiment with different settings for the behavior.
In the previous project, the Brain object would never stop traveling. Notice that under the J and K modes, the Brain object would orbit or rapidly flip directions when it reaches the target position. The Brain object is missing the critical ability to detect that it has collided with the Hero object, and as a result, it never stops moving. This section describes axis-aligned bounding boxes (AABBs), one of the most straightforward tools for approximating object collisions, and demonstrates the implementation of collision detection based on AABB.

The lower-left corner and size of the bounds for an object
It is interesting to note that in addition to representing the bounds of an object, bounding boxes can be used to represent the bounds of any given rectangular area. For example, recall that the WC visible through the Camera is a rectangular area with the camera’s position located at the center and the WC width/height defined by the game developer. An AABB can be defined to represent the visible WC rectangular area, or the WC window, and used for detecting collision between the WC window and GameObject instances in the game world.
In this book, AABB and “bounding box” are used interchangeably.

Running the Bounding Box and Collisions project
WASD keys: Moves the Hero object
Left-/right-arrow keys: Change the front direction of the Brain object when it is under user control
Up-/down-arrow keys: Increase/decrease the speed of the Brain object
H key: Switches the Brain object to be under user arrow keys control
J key: Switches the Brain object to always point at and move toward the current Hero object position
K key: Switches the Brain object to turn and move gradually toward the current Hero object position
To understand the implementation of the bounding box class
To experience working with the bounding box of a GameObject instance
To compute and work with the bounds of a Camera WC window
To program with object collisions and object and camera WC window collisions
You can find the same external resource files as in the previous project in the assets folder.
Create a new file in the src/engine folder; name it bounding_box.js. First, define an enumerated data type with values that identify the colliding sides of a bounding box.
Now, define the BoundingBox class and the constructor with instance variables to represent a bound, as illustrated in Figure 6-11. Notice that the eBoundCollideStatus must also be exported such that the rest of the engine, including the client, can also have access.
The setBounds() function computes and sets the instance variables of the bounding box:
Define a function to determine whether a given position, (x, y), is within the bounds of the box:
Define a function to determine whether a given bound intersects with the current one:
Define a function to compute the intersection status between a given bound and the current one:
Implement the functions that return the min and max X/Y values of the bounding box:
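Taken together, a minimal bounding box along these lines could be sketched as follows; this sketch assumes the lower-left corner plus width/height representation of Figure 6-11 and is not the engine's actual listing:

// A simplified axis-aligned bounding box (sketch only).
class SimpleBBox {
    constructor(centerPos, w, h) {
        this.mLL = vec2.fromValues(centerPos[0] - w / 2, centerPos[1] - h / 2);
        this.mWidth = w;
        this.mHeight = h;
    }
    minX() { return this.mLL[0]; }
    maxX() { return this.mLL[0] + this.mWidth; }
    minY() { return this.mLL[1]; }
    maxY() { return this.mLL[1] + this.mHeight; }
    containsPoint(x, y) {
        return (x > this.minX()) && (x < this.maxX()) &&
               (y > this.minY()) && (y < this.maxY());
    }
    intersectsBound(other) {
        return (this.minX() < other.maxX()) && (this.maxX() > other.minX()) &&
               (this.minY() < other.maxY()) && (this.maxY() > other.minY());
    }
}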
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
Edit game_object.js to import the newly defined functionality and modify the GameObject class; implement the getBBox() function to return the bounding box of the unrotated Renderable object:
Edit camera.js to import from bounding box, and modify the Camera class to compute the collision status between the bounds of a Transform object (typically defined in a Renderable object) and that of the WC window:

Camera WC bounds colliding with the bounds defining a Transform object
In the switch statement’s J and K cases, the modification tests for bounding box collision between the Brain and Hero objects before invoking Brain.rotateObjPointTo() and update() to cause the chasing behavior. In this way, the Brain object will stop moving as soon as it touches the bound of the Hero object. In addition, the collision results between the Hero object and 80 percent of the camera WC window are computed and displayed.
You can now run the project and observe that the Brain object, when in autonomous mode (J or K keys), stops moving as soon as it touches the Hero object. When you move the Hero object around, observe that the Hero bound output message begins to echo WC window collisions before the Hero object actually touches the WC window bounds. This is a result of the 0.8, or 80 percent, parameter passed to the mCamera.collideWCBound() function, configuring the collision computation to 80 percent of the current WC window size. When the Hero object is completely within 80 percent of the WC window bounds, the output Hero bound value is 16, or the value of eBoundCollideStatus.eInside. Try moving the Hero object to touch the top 20 percent of the window bound, and observe the Hero bound value of 4, or the value of eBoundCollideStatus.eCollideTop. Now move the Hero object toward the top-left corner of the window, and observe the Hero bound value of 5, or eBoundCollideStatus.eCollideTop | eBoundCollideStatus.eCollideLeft. In this way, the collision status is a bitwise-or result of all the colliding bounds.
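Because the status is a bit flag, client code can test individual sides with a bitwise-and. The following sketch assumes flag values consistent with the outputs described above (1, 2, 4, 8, 16); the engine's actual enumeration may differ:

// Hypothetical flag values consistent with the reported outputs.
const eBoundCollideStatus = Object.freeze({
    eCollideLeft: 1, eCollideRight: 2, eCollideTop: 4,
    eCollideBottom: 8, eInside: 16, eOutside: 0
});

function describeStatus(status) {
    if (status === eBoundCollideStatus.eInside) return "completely inside";
    let sides = [];
    if (status & eBoundCollideStatus.eCollideLeft) sides.push("left");
    if (status & eBoundCollideStatus.eCollideRight) sides.push("right");
    if (status & eBoundCollideStatus.eCollideTop) sides.push("top");
    if (status & eBoundCollideStatus.eCollideBottom) sides.push("bottom");
    return (sides.length === 0) ? "outside" : "colliding: " + sides.join(", ");
}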
In the previous example, you saw the results of bounding box collision approximation. Namely, the Brain object’s motion stops as soon as its bounds overlap that of the Hero object. This is much improved over the original situation where the Brain object never stops moving. However, as illustrated in Figure 6-14 , there are two serious limitations to the bounding box–based collisions.
The BoundingBox object introduced in the previous example does not account for rotation. This is a well-known limitation for AABB: although the approach is computationally efficient, it does not support rotated objects.
The two objects do not actually collide. The fact that the bounds of two objects overlap does not automatically equate to the two objects colliding.

Limitation with bounding box–based collision
In this project, you will implement per-pixel-accurate collision detection, pixel-accurate collision detection, or per-pixel collision detection, to detect the overlapping of nontransparent pixels of two colliding objects. However, keep in mind that this is not an end-all solution. While the per-pixel collision detection is precise, the trade-off is potential performance costs. As an image becomes larger and more complex, it also has more pixels that need to be checked for collisions. This is in contrast to the constant computation cost required for bounding box collision detection.

Running the Per-Pixel Collisions project
Arrow keys: Move the small textured object, the Portal minion
WASD keys: Move the large textured object, the Collector minion
To demonstrate how to detect nontransparent pixel overlap
To understand the pros and cons of using per-pixel-accurate collision detection
A “transparent” pixel is one you can see through completely and, in the case of this engine, has an alpha value of 0. A “nontransparent” pixel has an alpha value greater than 0; it may or may not completely block what is behind it. An “opaque” pixel is a nontransparent pixel with an alpha value of 1 that fully occludes what is behind it. For example, notice that you can “see through” the top region of the Portal object. These pixels are nontransparent but not opaque and should cause a collision when an overlap occurs based on the parameters defined by the project.
You can find the following external resources in the assets folder: the fonts folder that contains the default system fonts, minion_collector.png, minion_portal.png, and minion_sprite.png. Note that minion_collector.png is a large, 1024x1024 image, while minion_portal.png is a small, 64x64 image; minion_sprite.png defines the DyePack sprite element.

Overlapping bounding boxes without actual collision

Pixel collision occurring between the large texture and the small texture
The per-pixel transformation to Image-B space from pixelCameraSpace is required because collision checking must be carried out within the same coordinate space.
Notice that in the algorithm, Image-A and Image-B are interchangeable. That is, when testing for collision between two images, it does not matter which image is Image-A or Image-B; the collision result will be the same. Either the two images overlap, or they do not. Additionally, pay attention to the runtime of this algorithm. Each pixel within Image-A must be processed; thus, the runtime is O(N), where N is the number of pixels in Image-A, that is, Image-A's resolution. For performance reasons, it is therefore important to choose the smaller of the two images (the Portal minion in this case) as Image-A.
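The structure of the per-pixel test can be sketched as follows; getAlphaAt(), indexToWC(), and wcToIndex() are hypothetical helper names standing in for the engine functions developed in the following subsections:

// Sketch only: imageA should be the smaller of the two images.
function pixelsTouch(imageA, imageB) {
    for (let i = 0; i < imageA.widthInPixels; i++) {
        for (let j = 0; j < imageA.heightInPixels; j++) {
            if (imageA.getAlphaAt(i, j) > 0) {            // nontransparent pixel of A
                let wcPos = imageA.indexToWC(i, j);       // to World Coordinate space
                let [x, y] = imageB.wcToIndex(wcPos);     // to Image-B pixel space
                if (x >= 0 && x < imageB.widthInPixels &&
                    y >= 0 && y < imageB.heightInPixels &&
                    imageB.getAlphaAt(x, y) > 0) {
                    return true;                          // overlapping nontransparent pixels
                }
            }
        }
    }
    return false;
}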
At this point, you can probably see why the performance of pixel-accurate collision detection is concerning. Checking for these collisions during every update with many high-resolution textures can quickly bog down performance. You are now ready to examine the implementation of per-pixel-accurate collision.
In the texture.js file, expand the TextureInfo object to include a new variable for storing the color array of a file texture:
Define and export a function to retrieve the color array from the GPU memory:
The getColorArray() function creates a WebGL FRAMEBUFFER, fills the buffer with the desired texture, and retrieves the buffer content into the CPU memory referenced by texInfo.mColorArray.
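Conceptually, the retrieval can be sketched with standard WebGL calls as follows; this is a simplified outline with assumed parameter names, not the engine's actual implementation:

// gl: the WebGL 2.0 context; textureID: a previously loaded WebGLTexture.
function readTexturePixels(gl, textureID, width, height) {
    let fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, textureID, 0);
    let pixels = null;
    if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE) {
        pixels = new Uint8Array(width * height * 4);  // RGBA, one byte per channel
        gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    }
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.deleteFramebuffer(fb);
    return pixels;   // null if the texture could not be attached
}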
The TextureRenderable is the most appropriate class for implementing the per-pixel collision functionality. This is because TextureRenderable is the base class for all classes that render textures. Implementation in this base class means all subclasses can inherit the functionality with minimal additional changes.
As the functionality of the TextureRenderable class increases, so will the complexity and size of the implementation source code. For readability and expandability, it is important to maintain the size of source code files. An effective approach is to separate the source code of a class into multiple files according to their functionality.
Rename texture_renderable.js to texture_renderable_main.js. This file defines the basic functionality of the TextureRenderable class.
Create a new file in src/engine/renderables and name it texture_renderable_pixel_collision.js. This file will be used to extend the TextureRenderable class functionality in supporting per-pixel-accurate collision. Add in the following code to import from the Texture module and the basic TextureRenderable class, and reexport the TextureRenderable class. For now, this file does not serve any purpose; you will add in the appropriate extending functions in the following subsection.
Create a new texture_renderable.js file to serve as the TextureRenderable access point by adding the following code:
With this structure, the texture_renderable_main.js file implements all the basic functionality and exports to texture_renderable_pixel_collision.js, which appends additional functionality to the TextureRenderable class. Finally, texture_renderable.js imports the extended functions from texture_renderable_pixel_collision.js. The users of the TextureRenderable class can simply import from texture_renderable.js and will have access to all of the defined functionality.
In this way, from the perspective of the game developer, texture_renderable.js serves as the access point to the TextureRenderable class and hides the details of the implementation source code structure. At the same time, from the perspective of you as the engine developer, complex implementations are separated into source code files with names indicating their content, keeping each individual file readable.
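The pattern can be sketched as follows, assuming default exports (the engine's actual export style may differ); only the structural lines are shown:

// texture_renderable_pixel_collision.js (sketch): extend, then re-export
import TextureRenderable from "./texture_renderable_main.js";

TextureRenderable.prototype.setColorArray = function () {
    // ... per-pixel collision support is appended to the class here ...
};

export default TextureRenderable;

// texture_renderable.js (sketch): the single access point for game code
//   import TextureRenderable from "./texture_renderable_pixel_collision.js";
//   export default TextureRenderable;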
Edit the texture_renderable_main.js file, and modify the constructor to add instance variables to hold texture information, including a reference to the retrieved color array, for supporting per-pixel collision detection and for later subclass overrides:
Modify the setTexture() function to initialize the instance variables accordingly:
Note that by default, the mColorArray is initialized to null. For CPU memory optimization, the color array is fetched from the GPU only for textures that participate in per-pixel collision. The mElmWidthPixels and mElmHeightPixels variables are the width and height of the texture. These variables are defined for later subclass overrides such that the algorithm can support the collision of sprite elements.
Edit the texture_renderable_pixel_collision.js file, and define a new function for the TextureRenderable class to set the mColorArray:
JavaScript classes are implemented based on prototype chains. After class construction, instance methods can be accessed and defined via the prototype of the class or aClass.prototype.method. For more information on JavaScript classes and prototypes, please refer to https://developer.mozilla.org/en-US/docs/Web/JavaScript/Inheritance_and_the_prototype_chain.
Define a new function to return the alpha value, or the transparency, of any given pixel (x, y):
Define a function to compute the WC position (returnWCPos) of a given pixel (i, j):
Now, implement the inverse of the previous function, and use a WC position (wcPos) to compute the texture pixel indices (returnIndex):
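For the axes-aligned case, the underlying arithmetic can be sketched as follows with hypothetical parameter names; the engine's versions are generalized to handle rotation later in this chapter:

// (i, j): pixel indices; (cx, cy): object center in WC; (wcW, wcH): object WC size;
// (texW, texH): texture resolution in pixels. Assumes no rotation.
function indexToWC(i, j, cx, cy, wcW, wcH, texW, texH) {
    let x = cx - wcW / 2 + (i + 0.5) * (wcW / texW);
    let y = cy - wcH / 2 + (j + 0.5) * (wcH / texH);
    return [x, y];
}

// The inverse: from a WC position back to pixel indices.
function wcToIndex(x, y, cx, cy, wcW, wcH, texW, texH) {
    let i = Math.floor((x - (cx - wcW / 2)) / (wcW / texW));
    let j = Math.floor((y - (cy - wcH / 2)) / (wcH / texH));
    return [i, j];   // may fall outside the texture; callers must range-check
}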
Now it is possible to implement the outlined per-pixel collision algorithm:
The parameter other is a reference to the other TextureRenderable object that is being tested for collision. If pixels do overlap between the objects, the returned value of wcTouchPos is the first detected colliding position in WC space. Notice that the nested loops terminate as soon as a single overlapping pixel is detected, that is, when pixelTouch becomes true. This early termination is important for efficiency. However, it also means that the returned wcTouchPos is simply one of the many potentially colliding points.
This function checks to ensure that the objects are colliding and delegates the actual per-pixel collision to the TextureRenderable objects. Notice that the intersectsBound() function performs a bounding box intersection check before the potentially expensive TextureRenderable.pixelTouches() function is invoked.
As illustrated in Figure 6-15, the testing of per-pixel collision is rather straightforward, involving three instances of GameObject: the large Collector minion, the small Portal minion, and the DyePack. The Collector and Portal minions are controlled by the arrow and WASD keys, respectively. The details of the implementation of MyGame are similar to the previous projects and are not shown.
You can now test the collision accuracy by moving the two minions and intersecting them at different locations (e.g., top colliding with the bottom, left colliding with the right) or moving them such that there are large overlapping areas. Notice that it is rather difficult, if not impossible, to predict the actual reported intersection position (position of the DyePack). It is important to remember that the per-pixel collision function is mainly a function that returns true or false indicating whether there is a collision. You cannot rely on this function to compute the actual collision positions.
Lastly, try switching to calling the Collector.pixelTouches() function to detect collisions. Notice the less-than-real-time performance! In this case, the computation cost of the Collector.pixelTouches() function is 16×16=256 times that of the Portal.pixelTouches() function.
In the previous section, you saw the basic operations required to achieve per-pixel-accurate collision detection. However, as you may have noticed, the previous project applies only when the textures are aligned along the x/y axes. This means that your implementation does not support collisions between rotated objects.
This section explains how you can achieve per-pixel-accurate collision detection when objects are rotated. The fundamental concepts of this project are the same as in the previous project; however, this version involves working with vector decomposition, and a quick review can be helpful.
Recall the idea of vector decomposition: two normalized component vectors that are perpendicular to each other decompose a given vector into its components along those two directions. Given such a pair of normalized component vectors and any vector, the following formulae are always true: the dot product of the vector with each component vector gives the size of its component along that direction, and adding the two component vectors scaled by those sizes reproduces the original vector.
Decomposing a vector by two normalized component vectors

An axes-aligned texture
When the texture is rotated, its two component vectors are rotated along with it, as shown in Figure 6-21.

A rotated texture and its component vectors

Running the General Pixel Collisions project
Arrow keys: Move the small textured object, the Portal minion
P key: Rotates the small textured object, the Portal minion
WASD keys: Move the large textured object, the Collector minion
E key: Rotates the large textured object, the Collector minion
To access pixels of a rotated image via vector decomposition
To support per-pixel-accurate collision detection between two rotated textured objects
You can find the same external resource files as in the previous project in the assets folder.
Edit the texture_renderable_pixel_collision.js file and modify the _indexToWCPosition() function:
The parameters xDir and yDir are the rotated normalized component vectors of the object. The variables xDisp and yDisp are the displacements to be offset along xDir and yDir, respectively. The returned value of returnWCPos is a simple displacement from the object's center position along the xDirDisp and yDirDisp vectors. Note that xDirDisp and yDirDisp are the scaled xDir and yDir vectors.

In a similar fashion, modify the _wcPositionToIndex() function to support the rotated normalized vector components:
The pixelTouches() function needs to be modified to compute the rotated normalized component vectors:
The variables xDir and yDir are the rotated normalized component vectors of this TextureRenderable object, while otherXDir and otherYDir are those of the colliding object. These vectors are used as references for computing transformations from texture index to WC and from WC to texture index.
The listed code shows that if either of the colliding objects is rotated, then two encompassing circles are used to determine whether the objects are sufficiently close for the expensive per-pixel collision computation. The two circles are defined with radii equal to the hypotenuse of the x/y size of the corresponding TextureRenderable objects. The per-pixel collision detection is invoked only if the distance between these two circles is less than the sum of the radii.
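The two supporting pieces of math can be sketched as follows: the rotated normalized component vectors derived from the object's rotation angle, and the encompassing-circle test that gates the per-pixel work (names are illustrative, not the engine's):

// Rotated normalized component vectors for an object rotated by rad radians.
function componentVectors(rad) {
    let xDir = vec2.fromValues(Math.cos(rad), Math.sin(rad));   // rotated x direction
    let yDir = vec2.fromValues(-Math.sin(rad), Math.cos(rad));  // rotated y direction
    return [xDir, yDir];
}

// Broad-phase test with radii based on the hypotenuse of each object's WC size.
function circlesTouch(centerA, sizeA, centerB, sizeB) {
    let rA = Math.hypot(sizeA[0], sizeA[1]);
    let rB = Math.hypot(sizeB[0], sizeB[1]);
    return vec2.distance(centerA, centerB) < (rA + rB);
}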
The code for testing the rotated TextureRenderable objects is essentially identical to that from the previous project, with the exception of the two added controls for rotations. The details of the implementation are not shown. You can now run the project, rotate the two objects, and observe the accurate collision results.
The previous project implicitly assumes that the Renderable object is covered by the entire texture map. This assumption means that the per-pixel collision implementation does not support sprite or animated sprite objects. In this section, you will remedy this deficiency.

Running the Sprite Pixel Collisions project
Arrow and P keys: Move and rotate the Portal minion
WASD keys: Move the Hero
L, R, H, B keys: Select the target for colliding with the Portal minion
To generalize the per-pixel collision implementation for sprite and animated sprite objects
You can find the following external resource files in the assets folder: the fonts folder that contains the default system fonts, minion_sprite.png, and minion_portal.png.
Modify the SpriteRenderable constructor to call the _setTexInfo() function to initialize per-pixel collision parameters; this function is defined in the next step:
Define the _setTexInfo() function to override instance variables defined in the TextureRenderable superclass. Instead of the entire texture image, the instance variables now identify the currently active sprite element.
Remember to call the _setTexInfo() function when the current sprite element is updated in the setElementUVCoordinate() and setElementPixelPositions() functions:
Portal minion: A simple TextureRenderable object
Hero and Brain: SpriteRenderable objects where the textures shown on the geometries are sprite elements defined in the minion_sprite.png sprite sheet
Left and Right minions: SpriteAnimateRenderable objects with sprite elements defined in the top two rows of the minion_sprite.png animated sprite sheet
Try moving the Hero object and observe how the Brain object constantly seeks out and collides with it. This is the case of collision between two SpriteRenderable objects.
Press the L/R keys and then move the Portal minion with the arrow keys to collide with the Left or Right minions. Remember that you can rotate the Portal minion with the P key. This is the case of collision between TextureRenderable and SpriteAnimateRenderable objects.
Press the H key and then move the Portal minion to collide with the Hero object. This is the case of collision between TextureRenderable and SpriteRenderable objects.
Press the B key and then move the Portal minion to collide with the Brain object. This is the case of collision between rotated TextureRenderable and SpriteRenderable objects.
This chapter showed you how to encapsulate common behaviors of objects in games and demonstrated the benefits of the encapsulation in the forms of a simpler and better organized control logic in the client’s MyGame test levels. You reviewed vectors in 2D space. A vector is defined by its direction and magnitude. Vectors are convenient for describing displacements (velocities). You reviewed some foundational vector operations, including normalization of a vector and how to calculate both dot and cross products. You worked with these operators to implement the front-facing direction capability and create simple autonomous behaviors such as pointing toward a specific object and chasing.
The need for detecting object collisions became a prominent omission as the behaviors of objects increased in sophistication. The axis-aligned bounding boxes, or AABBs, were introduced as a crude, yet computationally efficient solution for approximating object collisions. You learned the algorithm for per-pixel-accurate collision detection and that its accuracy comes at the cost of performance. You now understand how to mitigate the computational cost in two ways. First, you invoke the pixel-accurate procedure only when the objects are sufficiently close to each other, such as when their bounding boxes collide. Second, you invoke the pixel iteration process based on the texture with a lower resolution.
When implementing pixel-accurate collision, you began with tackling the basic case of working with axis-aligned textures. After that implementation, you went back and added support for collision detection between rotated textures. Finally, you generalized the implementation to support collisions between sprite elements. Solving the easiest case first lets you test and observe the results and helps define what you might need for the more advanced problems (rotation and subregions of a texture in this case).
At the beginning of this chapter, your game engine already supported sophisticated drawing: the ability to define the WC space, to view the WC space with the Camera object, and to draw visually pleasing textures and animations on objects. However, there was no infrastructure for supporting the behaviors of the objects, and this shortcoming resulted in the clustering of initialization and control logic in the client-level implementations. With the object behavior abstraction, mathematics, and collision algorithms introduced and implemented in this chapter, your game engine functionality is now better balanced. The clients of your game engine now have tools for encapsulating specific behaviors and detecting collisions. The next chapter reexamines and enhances the functionality of the Camera object. You will learn to control and manipulate the Camera object and work with multiple Camera objects in the same game.
Chapters 1–5 introduced foundation techniques for drawing, moving, and animating objects on the screen. The Scene Objects project from Chapter 4 described a simple interaction behavior and showed you how to change the game screen based on the location of a rectangle: recall that moving the rectangle to the left boundary caused the level to visually change, while the Audio Support project added contextual sound to reinforce the overall sense of presence. Although it’s possible to build an intriguing (albeit simple) puzzle game using only the elements from Chapters 1 to 5, things get much more interesting when you can integrate object detection and collision triggers; these behaviors form the basis for many common game mechanics and provide opportunities to design a wide range of interesting gameplay scenarios.
Starting with the Game Objects project, you can see how the screen elements start working together to convey the game setting; even with the interaction in this project limited to character movement, the setting is beginning to resolve into something that conveys a sense of place. The hero character appears to be flying through a moving scene populated by a number of mechanized robots, and there’s a small object in the center of the screen that you might imagine could become some kind of special pickup.
Even at this basic stage of development it’s possible to brainstorm game mechanics that could potentially form the foundation for a full game. If you were designing a simple game mechanic based on only the screen elements found in the Game Objects project, what kind of behaviors would you choose and what kind of actions would you require the player to perform? As one example, imagine that the hero character must avoid colliding with the flying robots and that perhaps some of the robots will detect and pursue the hero in an attempt to stop the player’s progress; maybe the hero is also penalized in some way if they come into contact with a robot. Imagine perhaps that the small object in the center of the screen allows the hero to be invincible for a fixed period of time and that we’ve designed the level to require temporary invincibility to reach the goal, thus creating a more complex and interesting game loop (e.g., avoid the pursuing robots to reach the power up, activate the power up and become temporarily invincible, use invincibility to reach the goal). With these few basic interactions, we’ve opened opportunities to explore mechanics and level designs that will feel very familiar from many different kinds of games, all with just the inclusion of the object detection, chase, and collision behaviors covered in Chapter 6. Try this design exercise yourself using just the elements shown in the Game Objects project: What kinds of simple conditions and behaviors might you design to make your experience unique? How many ways can you think of to use the small object in the center of the screen? The final design project in Chapter 12 will explore these themes in greater detail.
This is also a good opportunity to brainstorm some of the other nine elements of game design discussed in Chapter 1. What if the game wasn’t set in space with robots? Perhaps the setting is in a forest, or under water, or even something completely abstract. How might you incorporate audio to enhance the sense of presence and reinforce the game setting? You’ll probably be surprised by the variety of settings and scenarios you come up with. Limiting yourself to just the elements and interactions covered through Chapter 6 is actually a beneficial exercise as design constraints often help the creative process by shaping and guiding your ideas. Even the most advanced video games typically have a fairly basic set of core game loops as their foundation.
The Vectors: Front and Chase project is interesting from both a game mechanic and presence perspective. Many games, of course, require objects in the game world to detect the hero character and will either chase or try to avoid the player (or both if the object has multiple states). The project also demonstrates two different approaches to chase behavior, instant and smooth pursuit, and the game setting will typically influence which behavior you choose to implement. The choice between instant and smooth pursuit is a great example of subtle behaviors that can significantly influence the sense of presence. If you were designing a game where ships were interacting on the ocean, for example, you would likely want their pursuit behavior to take real-world inertia and momentum into consideration because ships can’t instantly turn and respond to changes in movement; rather, they move smoothly and gradually, demonstrating a noticeable delay in how quickly they can respond to a moving target. Most objects in the physical world will display the same inertial and momentum constraint to some degree, but there are also situations where you may want game objects to respond directly to path changes (or, perhaps, you want to intentionally flout real-world physics and create a behavior that isn’t based on the limitations of physical objects). The key is to always be intentional about your design choices, and it’s good to remember that virtually no implementation details are too small to be noticed by players.
The Bounding Box and Collisions project introduces the key element of detection to your design arsenal, allowing you to begin including more robust cause-and-effect mechanics that form the basis for many game interactions. Chapter 6 discusses the trade-offs of choosing between the less precise but more performant bounding box collision detection method and the precise but resource-intensive per-pixel detection method. There are many situations where the bounding-box approach is sufficient, but if players perceive collisions to be arbitrary because the bounding boxes are too different from the actual visual objects, it can negatively impact the sense of presence. Detection and collision are even more powerful design tools when coupled with the result from the Per-Pixel Collisions project. Although the dye pack in this example was used to indicate the first point of collision, you can imagine building interesting causal chains around a new object being produced as the result of two objects colliding (e.g., player pursues object, player collides with object, object “drops” a new object that enables the player to do something they couldn’t do before). Game objects that move around the game screen will typically be animated, of course, so the Sprite Pixel Collisions project describes how to implement collision detection when the object boundaries aren’t stationary.
With the addition of the techniques in Chapter 6, you now have a critical mass of behaviors that can be combined to create truly interesting game mechanics covering the spectrum from action games to puzzlers. Of course, game mechanic behaviors are only one of the nine elements of game design and typically aren't sufficient on their own to create a magical gameplay experience: the setting, visual style, meta-game elements, and the like all have something important to contribute. The good news is that creating a memorable game experience need not be as elaborate as you might believe, and great games continue to be produced from relatively basic combinations of the behaviors and techniques covered in Chapters 1–6. The games that shine the brightest aren't always the most complex; rather, they're often the games where every aspect of each of the nine elements of design is intentional and working together in harmony. If you give appropriate attention and focus to all aspects of the game design, you're on track to produce something great whether you're working on your own or as part of a large team.
Implement operations that are commonly employed in manipulating a camera
Interpolate values between old and new to create a smooth transition
Understand how some motions or behaviors can be described by simple mathematical formulations
Build games with multiple camera views
Transform positions from the mouse-clicked pixel to the World Coordinate (WC) position
Program with mouse input in a game environment with multiple cameras
Your game engine is now capable of representing and drawing objects. With the basic abstraction mechanism introduced in the previous chapter, the engine can also support the interactions and behaviors of these objects. This chapter refocuses the attention on controlling and interacting with the Camera object that abstracts and facilitates the presentation of the game objects on the canvas. In this way, your game engine will be able to control and manipulate the presentation of visually appealing game objects with well-structured behaviors.

Review of WC parameters that define a Camera object
In this book, the WC window or WC bounds are used to refer to the WC window bounds.
The Camera object abstraction allows the game programmer to ignore the details of WC bounds and the HTML canvas and focus on designing a fun and entertaining gameplay experience. Programming with a Camera object in a game level should reflect the use of a physical video camera in the real world. For example, you may want to pan the camera to show your audience the environment, you may want to attach the camera to an actress and share her journey with your audience, or you may want to play the role of a director and instruct the actors in your scene to stay within the visual range of the camera. The distinct characteristics of these examples, such as panning or following a character's view, are the high-level functional specifications. Notice that in the real world you do not specify coordinate positions or bounds of windows.
This chapter introduces some of the most commonly encountered camera manipulation operations including clamping, panning, and zooming. Solutions in the form of interpolation will be derived to alleviate annoying or confusing abrupt transitions resulting from the manipulation of cameras. You will also learn about supporting multiple camera views in the same game level and working with mouse input.
In a 2D world, you may want to clamp or restrict the movements of objects to be within the bounds of a camera, to pan or move the camera, or to zoom the camera into or away from specific areas. These high-level functional specifications can be realized by strategically changing the parameters of the Camera object: the WC center and the Wwc × Hwc dimensions of the WC window. The key is to create convenient functions for the game developers to manipulate these values in the context of the game. For example, instead of increasing/decreasing the width/height of the WC window, zoom functions can be defined for the programmer.

Running the Camera Manipulations project
WASD keys: Move the Dye character (the Hero object). Notice that the camera WC window updates to follow the Hero object when it attempts to move beyond 90 percent of the WC bounds.
Arrow keys: Move the Portal object. Notice that the Portal object cannot move beyond 80 percent of the WC bounds.
L/R/P/H keys: Select the Left minion, Right minion, Portal object, or Hero object to be the object in focus; the L/R keys also set the camera to center on the Left or Right minion.
N/M keys: Zoom into or away from the center of the camera.
J/K keys: Zoom into or away while ensuring the constant relative position of the currently in-focus object. In other words, as the camera zooms, the positions of all objects will change except that of the in-focus object.
To experience some of the common camera manipulation operations
To understand the mapping from manipulation operations to the corresponding camera parameter values that must be altered
To implement camera manipulation operations
You can find the following external resources in the assets folder: the fonts folder that contains the default system fonts and three texture images (minion_portal.png, minion_sprite.png, and bg.png). The Portal object is represented by the first texture image, the remaining objects are sprite elements of minion_sprite.png, and the background is a large TextureRenderable object texture mapped with bg.png.
camera_main.js for implementing the basic functionality from previous projects
camera_manipulation.js for supporting the newly introduced manipulation operations
camera.js for serving as the class access point
Create a new folder called cameras in src/engine. Move the camera.js file into this folder and rename it to camera_main.js.
Create a new file in src/engine/cameras and name it camera_manipulation.js. This file will be used to extend the Camera class functionality in supporting manipulations. Add in the following code to import and export the basic Camera class functionality. For now, this file does not contain any useful source code and thus does not serve any purpose. You will define the appropriate extension functions in the following subsection.
Create a new camera.js to serve as the Camera access point by adding the following code:
With this structure of the source code files, camera_main.js implements all the basic functionality and exports to camera_manipulation.js that defines additional functionality for the Camera class. Finally, camera.js imports the extended functions from camera_manipulation.js. The users of the Camera class can simply import from camera.js and will have access to all of the defined functionality. This allows camera.js to serve as the access point to the Camera class while hiding the details of the implementation source code structure.
The aXform object can be the Transform of a GameObject or Renderable object. The clampAtBoundary() function ensures that the bounds of the aXform remain inside the WC bounds of the camera by clamping the aXform position. The zone variable defines the percentage of the WC bounds to clamp against. For example, a 1.0 means clamping to the exact WC bounds, while a 0.9 means clamping to a bound that is 90 percent of the current WC window size. It is important to note that the clampAtBoundary() function operates only on bounds that collide with the camera WC bounds. For example, if the aXform object's bounds are completely outside of the camera WC bounds, it will remain outside.
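The clamping itself reduces to keeping the transform's bounds inside a shrunken WC window. A rough one-dimensional sketch, with assumed accessor names (the y direction is handled the same way), looks like this:

// Clamp aXform horizontally so its bounds stay within zone percent of the WC window.
function clampX(aXform, camCenterX, camWidth, zone) {
    let left = camCenterX - (zone * camWidth) / 2;
    let right = camCenterX + (zone * camWidth) / 2;
    let halfW = aXform.getWidth() / 2;
    let x = aXform.getXPos();
    if (x - halfW < left) x = left + halfW;     // pushed back in from the left
    if (x + halfW > right) x = right - halfW;   // pushed back in from the right
    aXform.setXPos(x);
}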
Edit camera_manipulation.js. Ensure you are adding code between the initial import and final export of the Camera class functionality.
Import the bounding box collision status, and define the panWidth() function to pan the camera based on the bounds of a Transform object. This function is complementary to the clampAtBoundary() function, where instead of changing the aXform position, the camera is moved to ensure the proper inclusion of the aXform bounds. As in the case of the clampAtBoundary() function, the camera will not be changed if the aXform bounds are completely outside the tested WC bounds area.
Define camera panning functions panBy() and panTo() by appending to the Camera class prototype. These two functions change the camera WC center by adding a delta to it or moving it to a new location.
Define functions to zoom the camera with respect to the center or a target position:
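Conceptually, zooming scales the WC window width, and zooming toward a position additionally moves the WC center so that the target keeps its relative location on screen. A sketch of the math, using assumed Camera accessors rather than the actual prototype functions, follows:

// zoom > 1 zooms away (larger WC window); 0 < zoom < 1 zooms in.
function zoomBy(camera, zoom) {
    if (zoom > 0) camera.setWCWidth(camera.getWCWidth() * zoom);
}

// Keep the WC position pos at the same relative location while zooming:
// the new center is pos + zoom * (oldCenter - pos).
function zoomTowards(camera, pos, zoom) {
    let delta = vec2.sub(vec2.create(), camera.getWCCenter(), pos);
    vec2.scale(delta, delta, zoom);
    let newCenter = vec2.add(vec2.create(), pos, delta);
    zoomBy(camera, zoom);
    camera.setWCCenter(newCenter[0], newCenter[1]);
}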

Zooming toward the WC Center and toward a target position
In the listed code, the first four if statements select the in-focus object, where the L and R keys also re-center the camera by calling the panTo() function with the appropriate WC positions. The second set of four if statements controls the zoom, either toward the WC center or toward the current in-focus object. Then the function clamps the Brain and Portal objects to within 90 percent and 80 percent of the WC bounds, respectively. The function finally ends by panning the camera based on the transform (or position) of the Hero object.
You can now run the project and move the Hero object with the WASD keys. Move the Hero object toward the WC bounds to observe the camera being pushed. Continue pushing the camera with the Hero object; notice that because of the clampAtBoundary() function call, the Portal object will in turn be pushed such that it never leaves the camera WC bounds. Now press the L/R key to observe the camera center switching to the center on the Left or Right minion. The N/M keys demonstrate straightforward zooming with respect to the center. To experience zooming with respect to a target, move the Hero object toward the top left of the canvas and then press the H key to select it as the zoom focus. Now, with your mouse pointer pointing at the head of the Hero object, you can press the K key to zoom out first and then the J key to zoom back in. Notice that as you zoom, all objects in the scene change positions except the areas around the Hero object. Zooming into a desired region of a world is a useful feature for game developers with many applications. You can experience moving the Hero object around while zooming into/away from it.
It is now possible to manipulate the camera based on high-level functions such as pan or zoom. However, the results are often sudden or visually incoherent changes to the rendered image, which may result in annoyance or confusion. For example, in the previous project, the L or R key causes the camera to re-center with a simple assignment of new WC center values. The abrupt change in camera position results in the sudden appearance of a seemingly new game world. This is not only visually distracting but can also confuse the player as to what has happened.

Interpolating values based on linear and exponential functions
Figure 7-4 shows that there are multiple ways to interpolate values over time. For example, linear interpolation computes intermediate results according to the slope of the line connecting the old and new values. In contrast, an exponential function may compute intermediate results based on percentages of previous values. In this way, with linear interpolation, a camera position would move from an old to a new position at a constant speed, similar to moving (or panning) a camera at some constant speed. In comparison, interpolation based on the given exponential function would move the camera position rapidly at first and then slow down quickly over time, giving a sensation of moving and focusing the camera on a new target.
Human motions and movements typically follow the exponential interpolation function. For example, try turning your head from facing the front to facing the right, or moving your hand to pick up an object on your desk. Notice that in both cases, you begin with a relatively quick motion and slow down significantly as the destination gets close. That is, you probably start by turning your head quickly and slow down rapidly as your view approaches your right side, and your hand likely starts moving quickly toward the object and slows down significantly as it nears the object. In both of these examples, your displacements follow the exponential interpolation function depicted in Figure 7-4: quick changes followed by a rapid slowdown as the destination approaches. This is the function you will implement in the game engine because it mimics human movements and is likely to seem natural to human players.
Linear interpolation is often referred to as LERP or lerp. The result of lerp is the linear combination of an initial and a final value. In this chapter, and in almost all cases, the exponential interpolation depicted in Figure 7-4 is approximated by repeatedly applying the lerp function where in each invocation, the initial value is the result of the previous lerp invocation. In this way, the exponential function is approximated with a piecewise linear function.
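As a small sketch of the idea (plain JavaScript with illustrative names), each update moves the current value by a fixed fraction of the remaining distance toward the final value, tracing out the exponential-looking curve of Figure 7-4:

// One lerp step: a linear combination of the current and final values.
function lerp(from, to, rate) {
    return from + rate * (to - from);
}

// Repeatedly applying lerp approximates the exponential interpolation.
let current = 0, target = 100, rate = 0.1;
for (let cycle = 0; cycle < 40; cycle++) {
    current = lerp(current, target, rate);   // large steps at first, smaller later
}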
This section introduces the Lerp and LerpVec2 utility classes to support smooth and gradual camera movements resulting from camera manipulation operations.

Running the Camera Interpolations project
WASD keys: Move the Dye character (the Hero object). Notice that the camera WC window updates to follow the Hero object when it attempts to move beyond 90 percent of the WC bounds.
Arrow keys: Move the Portal object. Notice that the Portal object cannot move beyond 80 percent of the WC bounds.
L/R/P/H keys: Select the Left minion, Right minion, Portal object, or Hero object to be the object in focus. The L/R keys also set the camera to focus on the Left or Right minion.
N/M keys: Zoom into or away from the center of the camera.
J/K keys: Zoom into or away while ensuring constant relative position of the currently in-focus object. In other words, as the camera zooms, the positions of all objects will change except that of the in-focus object.
To understand the concept of interpolation between given values
To implement interpolation supporting gradual camera parameter changes
To experience interpolated changes in camera parameters
As in previous projects, you can find external resource files in the assets folder.
Similar to the Transform class supporting transformation functionality and the BoundingBox class supporting collision detection, a Lerp class can be defined to support interpolation of values. To keep the source code organized, a new folder should be defined to store these utilities.
Create the src/engine/utils folder and move the transform.js and bounding_box.js files into this folder.
Create a new file in the src/engine/utils folder, name it lerp.js, and define the constructor. This class is designed to interpolate values from mCurrentValue to mFinalValue in the duration of mCycles. During each update, intermediate results are computed based on the mRate increment on the difference between mCurrentValue and mFinalValue, as shown next.
Define the function that computes the intermediate results:
Define a function to configure the interpolation. The mRate variable defines how quickly the interpolated result approaches the final value. A mRate of 0.0 will result in no change at all, while 1.0 causes instantaneous change. The mCycle variable defines the duration of the interpolation process.
Define relevant getter and setter functions. Note that the setFinal() function both sets the final value and triggers a new round of interpolation computation.
Define the function to trigger the computation of each intermediate result:
Finally, make sure to export the defined class:
Create a new file in the src/engine/utils folder, name it lerp_vec2.js, and define its constructor:
Override the _interpolateValue() function to compute intermediate results for vec2:
The vec2.lerp() function defined in the gl-matrix.js file computes the vec2 components for x and y. The computation involved is identical to the _interpolateValue() function in the Lerp class.
Lastly, remember to update the engine access file, index.js, to forward the newly defined Lerp and LerpVec2 functionality to the client.
Create a new file in the src/engine/cameras folder, name it camera_state.js, import the defined Lerp functionality, and define the constructor:
Define the getter and setter functions:
Define the update function to trigger the interpolation computation:
Define a function to configure the interpolation:
The stiffness variable is the mRate of Lerp. It defines how quickly the interpolated intermediate results should converge to the final value. As discussed in the Lerp class definition, this is a number between 0 and 1, where 0 means the convergence will never happen and a 1 means instantaneous convergence. The duration variable is the mCycle of Lerp. It defines the number of update cycles it takes for the results to converge. This must be a positive integer value.
Note that as the sophistication of the engine increases, so does the complexity of the supporting code. In this case, you have designed an internal utility class, CameraState, for storing the internal state of a Camera object to support interpolation. This is an internal engine operation. There is no reason for the game programmer to access this class, and thus, the engine access file, index.js, should not be modified to forward the definition.
Edit the camera_main.js file and import the newly defined CameraState class:
Modify the Camera constructor to replace the center and width variables with an instance of CameraState:
Now, edit the camera_manipulation.js file to define the functions to update and configure the interpolation functionality of the CameraState object:
Modify the panBy() camera manipulation function to support the CameraState object as follows:
Update panWith() and zoomTowards() functions to receive and set WC center to the newly defined CameraState object:
The call to update the camera for computing interpolated intermediate results is the only change in the my_game.js file. You can now run the project and experiment with the smooth and gradual changes resulting from camera manipulation operations. Notice that the interpolated results do not change the rendered image abruptly and thus maintain the sense of continuity in space from before and after the manipulation commands. You can try changing the stiffness and duration variables to better appreciate the different rates of interpolation convergence.
In video games, shaking the camera can be a convenient way to convey the significance or mightiness of events, such as the appearance of an enemy boss or the collisions between large objects. Similar to the interpolation of values, the camera shake movement can also be modeled by straightforward mathematical formulations.
Consider how a camera shake may occur in a real-life situation. For instance, while shooting with a video camera, say you are surprised or startled by someone, or something collides with you. Your reaction will probably be slight disorientation followed by quickly refocusing on the original targets. From the perspective of the camera, this reaction can be described as an initial large displacement from the original camera center followed by quick adjustments to re-center the camera. Mathematically, as illustrated in Figure 7-6, damped simple harmonic motions, which can be represented with the damping of trigonometric functions, can be used to describe these types of displacements.

The displacements of a damped simple harmonic motion

Running the Camera Shake and Object Oscillate project
Q key: Initiates the positional oscillation of the Dye character and the camera shake effects.
WASD keys: Move the Dye character (the Hero object). Notice that the camera WC window updates to follow the Hero object when it attempts to move beyond 90 percent of the WC bounds.
Arrow keys: Move the Portal object. Notice that the Portal object cannot move beyond 80 percent of the WC bounds.
L/R/P/H keys: Select the Left minion, Right minion, Portal object, or Hero object to be the object in focus. The L/R keys also set the camera to focus on the Left or Right minion.
N/M keys: Zoom into or away from the center of the camera.
J/K keys: Zoom into or away while ensuring constant relative position of the currently in-focus object. In other words, as the camera zooms, the positions of all objects will change except that of the in-focus object.
To gain some insight into modeling displacements with simple mathematical functions
To experience the oscillate effect on an object
To experience the shake effect on a camera
To implement oscillations as damped simple harmonic motion and to introduce pseudo-randomness to create the camera shake effect
As in previous projects, you can find external resource files in the assets folder.
Oscillate: The base class that implements simple harmonic oscillation of a value over time
Shake: An extension of the Oscillate class that introduces randomness to the magnitudes of the oscillations to simulate slight chaos of the shake effect on a value
ShakeVec2: An extension of the Shake class that expands the Shake behavior to two values such as a position
Create a new file in the src/engine/utils folder and name it oscillate.js. Define a class named Oscillate and add the following code to construct the object:
Define the damped simple harmonic motion:
With frac denoting the fraction of cycles remaining, the motion can be expressed as frac² × cos((1 − frac) × 2π × frequency), where the frac² term is the damping factor. This function returns a value between -1 and 1 and can be scaled as needed.
The damped simple harmonic motion that specifies value oscillation
Define a protected function to retrieve the value of the next damped harmonic motion. This function may seem trivial and unnecessary. However, as you will observe in the next subsection, this function allows the Shake subclass to override it and inject randomness.
Define functions to check for the end of the oscillation and for restarting the oscillation:
Lastly, define a public function to trigger the calculation of oscillation. Notice that the computed oscillation result must be scaled by the desired magnitude, mMag:
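Assembled from the steps above, a sketch of oscillate.js could look like the following; the damping uses the square of the fraction of cycles remaining, consistent with the formulation described earlier, and the instance variable names are assumptions where the text does not give them:

// oscillate.js: hedged sketch of the Oscillate utility described above
class Oscillate {
    constructor(delta, frequency, duration) {
        this.mMag = delta;                      // maximum displacement magnitude
        this.mCycles = duration;                // how many update cycles the effect lasts
        this.mOmega = frequency * 2 * Math.PI;  // frequency in number of cosine periods
        this.mCyclesLeft = duration;
    }

    // damped simple harmonic motion: a cosine scaled by a decaying factor,
    // returning a value between -1 and 1
    _nextDampedHarmonic() {
        let frac = this.mCyclesLeft / this.mCycles;
        return frac * frac * Math.cos((1 - frac) * this.mOmega);
    }

    // protected: a subclass (Shake) can override this to inject randomness
    _nextValue() {
        return this._nextDampedHarmonic();
    }

    done() { return (this.mCyclesLeft <= 0); }

    reStart() { this.mCyclesLeft = this.mCycles; }

    // public: trigger the next oscillation computation, scaled by mMag
    getNext() {
        this.mCyclesLeft--;
        let v = 0;
        if (!this.done()) {
            v = this._nextValue();
        }
        return (v * this.mMag);
    }
}

export default Oscillate;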
You can now extend the oscillation behavior to convey a sense of shaking by introducing pseudo-randomness into the effect.
Create a new file, shake.js, in the src/engine/utils folder. Define the Shake class to extend Oscillate and add the following code to construct the object:
Override the _nextValue() function to randomize the sign of the oscillation results as follows. Recall that the _nextValue() function is called from the public getNext() function to retrieve the oscillating value. While the results from the damped simple harmonic oscillation continuously and predictably decrease in magnitude, the associated signs of the values are randomized, causing sudden and unexpected discontinuities that convey a sense of chaos in the shake results.
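A corresponding sketch of shake.js is short; the only change from Oscillate is the randomized sign:

// shake.js: hedged sketch; randomizes the sign of the oscillation results
import Oscillate from "./oscillate.js";

class Shake extends Oscillate {
    constructor(delta, frequency, duration) {
        super(delta, frequency, duration);
    }

    // override: same decaying magnitude, but with a pseudo-random sign
    _nextValue() {
        let v = this._nextDampedHarmonic();
        return (Math.random() > 0.5) ? -v : v;
    }
}

export default Shake;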
You can now generalize the shake effect to support the shaking of two values simultaneously. This is a useful utility because positions in 2D games are two-value entities and positions are convenient targets for shake effects. For example, in this project, the shaking of the camera position, a two-value entity, simulates the camera shake effect.
Create a new file, shake_vec2.js, in the src/engine/utils folder. Define the ShakeVec2 class to extend the Shake class. Similar to the constructor parameters of the Shake superclass, the deltas and freqs parameters are 2D, or vec2, versions of magnitude and frequency for shaking in the x and y dimensions. In the constructor, the xShake instance variable keeps track of the shaking effect in the x dimension. Note the y-component parameters, array indices of 1, in the super() constructor invocation. The Shake superclass keeps track of the shaking effect in the y dimension.
Extend the reStart() and getNext() functions to support the second dimension:
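A minimal sketch of the complete shake_vec2.js, with the superclass tracking the y dimension and xShake tracking the x dimension, might look like this:

// shake_vec2.js: hedged sketch extending Shake to two dimensions
import Shake from "./shake.js";

class ShakeVec2 extends Shake {
    constructor(deltas, freqs, duration) {
        super(deltas[1], freqs[1], duration);   // the superclass tracks the y dimension
        this.xShake = new Shake(deltas[0], freqs[0], duration);  // x dimension
    }

    reStart() {
        super.reStart();
        this.xShake.reStart();
    }

    // returns the [x, y] shake results as an array
    getNext() {
        let x = this.xShake.getNext();
        let y = super.getNext();
        return [x, y];
    }
}

export default ShakeVec2;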
Lastly, remember to update the engine access file, index.js, to forward the newly defined Oscillate, Shake, and ShakeVec2 functionality to the client.
Create a new file, camera_shake.js, in the src/engine/cameras folder, and define the constructor to receive the camera state, the state parameter, and shake configurations: deltas, freqs, and shakeDuration. The parameter state is of datatype CameraState, consisting of the camera center position and width.
Define the function that triggers the displacement computation for accomplishing the shaking effect. Notice that the shake results are offsets from the original position. The given code adds this offset to the original camera center position.
Define utility functions: inquire if shaking is done, restart the shaking, and getter/setter functions.
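Collected into one place, a minimal sketch of camera_shake.js might look like the following; it assumes the gl-matrix vec2 symbol is globally available, and the getter/setter names follow the spirit of the description rather than an exact listing:

// camera_shake.js: hedged sketch of the internal CameraShake utility
import ShakeVec2 from "../utils/shake_vec2.js";

class CameraShake {
    // state is a CameraState (camera center position and width)
    constructor(state, deltas, freqs, shakeDuration) {
        this.mOrgCenter = vec2.clone(state.getCenter());
        this.mShakeCenter = vec2.clone(this.mOrgCenter);
        this.mShake = new ShakeVec2(deltas, freqs, shakeDuration);
    }

    // the shake results are offsets that are added to the original center
    update() {
        let delta = this.mShake.getNext();
        vec2.add(this.mShakeCenter, this.mOrgCenter, delta);
    }

    done() { return this.mShake.done(); }
    reShake() { this.mShake.reStart(); }

    getCenter() { return this.mShakeCenter; }
    setRefCenter(c) {
        this.mOrgCenter[0] = c[0];
        this.mOrgCenter[1] = c[1];
    }
}

export default CameraShake;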
Similar to CameraState, CameraShake is also a game engine internal utility and should not be exported to the client game programmer. The engine access file, index.js, should not be updated to export this class.
Modify camera_main.js and camera_manipulation.js to import camera_shake.js as shown:
In camera_main.js, modify the Camera constructor to initialize a CameraShake object:
Modify step B of the setViewAndCameraMatrix() function to use the CameraShake object’s center if it is defined:
Modify the camera_manipulation.js file to add support to initiate and restart the shake effect:
Continue working with the camera_manipulation.js file, and modify the update() function to trigger a camera shake update if one is defined:
Define a new instance variable for creating oscillation or bouncing effect on the Dye character:
Modify the update() function to trigger the bouncing and camera shake effects with the Q key. In the following code, note the advantage of well-designed abstraction. For example, the camera shake effect is opaque where the only information a programmer needs to specify is the actual shake behavior, that is, the shake magnitude, frequency, and duration. In contrast, the oscillating or bouncing effect of the Dye character position is accomplished by explicitly inquiring and using the mBounce results.
You can now run the project and experience the pseudo-random damped simple harmonic motion that simulates the camera shake effect. You can also observe the oscillation of the Dye character’s x position. Notice that the displacement of the camera center position will undergo interpolation and thus result in a smoother final shake effect. You can try changing the parameters when creating the mBounce object or when calling the mCamera.shake() function to experiment with different oscillation and shake configurations. Recall that in both cases the first two parameters control the initial displacements and the frequency (number of cosine periods), and the third parameter is how long the effects should last.
Video games often present the players with multiple views into the game world to communicate vital or interesting gameplay information, such as showing a mini-map to help the player navigate the world or providing a view of the enemy boss to warn the player of what is to come.
In your game engine, the Camera class abstracts the graphical presentation of the game world according to the source and destination areas of drawing. The source area of the drawing is the WC window of the game world, and the destination area is the viewport region on the canvas. This abstraction already effectively encapsulates and supports the multiple view idea with multiple Camera instances. Each view in the game can be handled with a separate instance of the Camera object with distinct WC window and viewport configurations.

Running the Multiple Cameras project
Q key: Initiates the positional oscillation of the Dye character and the camera shake effects.
WASD keys: Move the Dye character (the Hero object). Notice that the camera WC window updates to follow the Hero object when it attempts to move beyond 90 percent of the WC bounds.
Arrow keys: Move the Portal object. Notice that the Portal object cannot move beyond 80 percent of the WC bounds.
L/R/P/H keys: Select the Left minion, Right minion, Portal object, or Hero object to be the object in focus. The L/R keys also set the camera to focus on the Left or Right minion.
N/M keys: Zoom into or away from the center of the camera.
J/K keys: Zoom into or away while ensuring the constant relative position of the currently in-focus object. In other words, as the camera zooms, the positions of all objects will change except that of the in-focus object.
To understand the camera abstraction for presenting views into the game world
To experience working with multiple cameras in the same game level
To appreciate the importance of interpolation configuration for cameras with specific purposes
As in previous projects, you can find external resource files in the assets folder.
Edit camera_main.js and modify the Camera constructor to allow programmers to define a bound number of pixels to surround the viewport of the camera:
Define the setViewport() function:
Define the getViewport() function to return the actual bounds that are reserved for this camera. In this case, it is the mScissorBound instead of the potentially smaller viewport bounds.
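An excerpt-style sketch of these two functions might look like the following; numeric indices follow the [x, y, width, height] viewport convention that the engine accesses through its eViewport constants, and the default bound variable name is an assumption:

// camera_main.js (excerpt): hedged sketch of the viewport-with-bound support
class Camera {
    // ... constructor stores this.mViewport, this.mScissorBound, and the
    //     bound value passed in as this.mViewportBound ...

    setViewport(viewportArray, bound) {
        if (bound === undefined) { bound = this.mViewportBound; }
        // the drawing viewport is inset by bound pixels on every side
        this.mViewport = [
            viewportArray[0] + bound,           // x origin
            viewportArray[1] + bound,           // y origin
            viewportArray[2] - (2 * bound),     // width
            viewportArray[3] - (2 * bound)      // height
        ];
        // the scissor bound covers the entire area reserved for this camera
        this.mScissorBound = [...viewportArray];
    }

    getViewport() {
        // return the full reserved bounds, not the (possibly smaller) viewport
        return [...this.mScissorBound];
    }
}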
Modify the setViewAndCameraMatrix() function to bind scissor bounds with mScissorBound instead of the viewport bounds:
Modify the init() function to define three Camera objects. Both the mHeroCam and mBrainCam define a two-pixel boundary for their viewports, with the mHeroCam boundary defined to be gray (the background color) and with mBrainCam white. Notice the mBrainCam object’s stiff interpolation setting informing the camera interpolation to converge to new values in ten cycles.
Define a helper function to draw the world that is common to all three cameras:
Modify the MyGame object draw() function to draw all three cameras. Take note of the mMsg object only being drawn to mCamera, the main camera. For this reason, the echo message will appear only in the viewport of the main camera.
Modify the update() function to pan the mHeroCam and mBrainCam with the corresponding objects and to move the mHeroCam viewport continuously:
Viewports typically do not change their positions during gameplay. For testing purposes, the following code moves the mHeroCam viewport continuously from left to right in the canvas.
You can now run the project and notice the three different viewports displayed on the HTML canvas. The two-pixel-wide bounds around the mHeroCam and mBrainCam viewports allow easy visual parsing of the three views. Observe that the mBrainCam viewport is drawn on top of the mHeroCam. This is because in the MyGame.draw() function, the mBrainCam is drawn last. The last drawn object always appears on top. You can move the Hero object to observe that mHeroCam follows the hero and experience the smooth interpolated results of panning the camera.
Now try changing the parameters to the mBrainCam.configLerp() function to generate smoother interpolated results, such as by setting the stiffness to 0.1 and the duration to 100 cycles. Note how it appears as though the camera is constantly trying to catch up to the Brain object. In this case, the camera needs a stiff interpolation setting to ensure the main object remains in the center of the camera view. For a much more drastic and fun effect, you can try setting mBrainCam to have much smoother interpolated results, such as with a stiffness value of 0.01 and a duration of 200 cycles. With these values, the camera can never catch up to the Brain object and will appear as though it is wandering aimlessly around the game world.
The mouse is a pointing input device that reports position information in the Canvas Coordinate space. Recall from the discussion in Chapter 3 that the Canvas Coordinate space is simply a measurement of pixel offsets along the x/y axes with respect to the lower-left corner of the canvas. Remember that the game engine defines and works with the WC space where all objects and measurements are specified in WC. For the game engine to work with the reported mouse position, this position must be transformed from Canvas Coordinate space to WC.
With (Vx, Vy) denoting the lower-left corner of the camera viewport on the canvas, the mouse position in the viewport's Device Coordinate (DC) space is

mouseDCX = mouseX − Vx
mouseDCY = mouseY − Vy

Mouse position on canvas and viewport
Mouse position in viewport DC space and WC space
With the knowledge of how to transform positions from the Canvas Coordinate space to the WC space, it is now possible to implement mouse input support in the game engine.

Running the Mouse Input project
Left mouse button clicked in the main Camera view: Drags the Portal object
Middle mouse button clicked in the HeroCam view: Drags the Hero object
Right/middle mouse button clicked in any view: Hides/shows the Portal object
Q key: Initiates the positional oscillation of the Dye character and the camera shake effects
WASD keys: Move the Dye character (the Hero object) and push the camera WC bounds
Arrow keys: Move the Portal object
L/R/P/H keys: Select the in-focus object with L/R keys refocusing the camera to the Left or Right minion
N/M and J/K keys: Zoom into or away from the center of the camera or the in-focus object
To understand the Canvas Coordinate space to WC space transform
To appreciate the importance of differentiating between viewports for mouse events
To implement transformation between coordinate spaces
To support and experience working with mouse input
As in previous projects, you can find external resource files in the assets folder.
Edit input.js and define the constants to represent the three mouse buttons:
Define the variables to support mouse input. Similar to keyboard input, mouse button states are arrays of three boolean elements, each representing the state of the three mouse buttons.
Define the mouse movement event handler:
Define the mouse button click handler to record the button event:
Define the mouse button release handler to facilitate the detection of a mouse button click event. Recall from the keyboard input discussion in Chapter 4 that in order to detect the button up event, you should test for a button state that was previously released and currently clicked. The mouseUp() handler records the released state of a mouse button.
Modify the init() function to receive the canvasID parameter and initialize mouse event handlers:
Modify the update() function to process mouse button state changes in a similar fashion to the keyboard. Take note of the mouse-click condition that a button that was previously not clicked is now clicked.
Define the functions to retrieve mouse position and mouse button states:
Lastly, remember to export the newly defined functionality:
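Taken together, the mouse portion of input.js might look roughly like the following sketch; keyboard support from Chapter 4 is omitted, and the handler and accessor names follow the descriptions above:

// input.js (mouse portion only): hedged sketch of the support described above
const eMouseButton = Object.freeze({
    eLeft: 0, eMiddle: 1, eRight: 2
});

let mCanvas = null;
let mMouseX = -1, mMouseY = -1;
let mButtonPreviousState = [false, false, false];
let mIsButtonPressed = [false, false, false];
let mIsButtonClicked = [false, false, false];

// record the mouse position in Canvas Coordinate space (origin at the lower-left)
function onMouseMove(event) {
    let bBox = mCanvas.getBoundingClientRect();
    // scale from the CSS size of the canvas to its pixel resolution
    let x = Math.round((event.clientX - bBox.left) * (mCanvas.width / bBox.width));
    let y = Math.round((event.clientY - bBox.top) * (mCanvas.height / bBox.height));
    if ((x >= 0) && (x < mCanvas.width) && (y >= 0) && (y < mCanvas.height)) {
        mMouseX = x;
        mMouseY = mCanvas.height - 1 - y;   // flip y: the browser origin is at the top-left
    }
}

function onMouseDown(event) {
    onMouseMove(event);
    mIsButtonPressed[event.button] = true;
}

function onMouseUp(event) {
    onMouseMove(event);
    mIsButtonPressed[event.button] = false;
}

function init(canvasID) {
    // ... keyboard handler registration from Chapter 4 is omitted here ...
    mCanvas = document.getElementById(canvasID);
    window.addEventListener("mousedown", onMouseDown);
    window.addEventListener("mouseup", onMouseUp);
    window.addEventListener("mousemove", onMouseMove);
}

// a click is a button that was not pressed in the previous update but is now
function update() {
    for (let i = 0; i < 3; i++) {
        mIsButtonClicked[i] = (!mButtonPreviousState[i]) && mIsButtonPressed[i];
        mButtonPreviousState[i] = mIsButtonPressed[i];
    }
}

function getMousePosX() { return mMouseX; }
function getMousePosY() { return mMouseY; }
function isButtonPressed(button) { return mIsButtonPressed[button]; }
function isButtonClicked(button) { return mIsButtonClicked[button]; }

export { eMouseButton, init, update,
         getMousePosX, getMousePosY, isButtonPressed, isButtonClicked };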
The Camera class encapsulates the WC window and viewport and thus should be responsible for transforming mouse positions. Recall that to maintain readability, the Camera class source code files are separated according to functionality. The basic functions of the class are defined in camera_main.js. The camera_manipulation.js file imports from camera_main.js and defines additional manipulation functions. Lastly, the camera.js file imports from camera_manipulation.js to include all the defined functions and exports the Camera class for external access.
In the src/engine/cameras folder, create a new file, camera_input.js, and import the following:
camera_manipulation.js for all the defined functions for the Camera class
eViewport constants for accessing the viewport array
input module to access the mouse-related functions
Define functions to transform mouse positions from Canvas Coordinate space to the DC space, as illustrated in Figure 7-10:
Define a function to determine whether a given mouse position is within the viewport bounds of the camera:
Define the functions to transform the mouse position into the WC space, as illustrated in Figure 7-11:
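A sketch of these transform functions, gathered in camera_input.js, might look like the following; one way to organize this, following the file layering described above, is to extend the Camera prototype, and numeric viewport indices are used here in place of the engine's eViewport constants:

// camera_input.js: hedged sketch of the mouse transform support described above
import Camera from "./camera_manipulation.js";
import * as input from "../input.js";   // assumed import path for the input module

// Canvas Coordinate space to viewport DC space: offset by the viewport origin
Camera.prototype._mouseDCX = function () {
    return input.getMousePosX() - this.mViewport[0];
};
Camera.prototype._mouseDCY = function () {
    return input.getMousePosY() - this.mViewport[1];
};

// true when the mouse position falls inside this camera's viewport
Camera.prototype.isMouseInViewport = function () {
    let dcX = this._mouseDCX();
    let dcY = this._mouseDCY();
    return ((dcX >= 0) && (dcX < this.mViewport[2]) &&
            (dcY >= 0) && (dcY < this.mViewport[3]));
};

// DC space to WC space: scale by (WC size / viewport size) and offset by the
// lower-left corner of the WC window
Camera.prototype.mouseWCX = function () {
    let minWCX = this.getWCCenter()[0] - this.getWCWidth() / 2;
    return minWCX + (this._mouseDCX() * (this.getWCWidth() / this.mViewport[2]));
};
Camera.prototype.mouseWCY = function () {
    let minWCY = this.getWCCenter()[1] - this.getWCHeight() / 2;
    return minWCY + (this._mouseDCY() * (this.getWCHeight() / this.mViewport[3]));
};

export default Camera;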
The camera.isMouseInViewport() condition is checked when the viewport context is important, as in the case of a left mouse button click in the main camera view or a middle mouse button click in the mHeroCam view. This is in contrast to a right or middle mouse button click for setting the visibility of the Portal object. These two mouse clicks take effect regardless of where the mouse pointer is located.
You can now run the project and verify the correctness of the transformation to WC space. Click and drag with the left mouse button in the main view, or the middle mouse button in the mHeroCam view, to observe the accurate movement of the corresponding object as it follows the changing mouse position. The left or middle mouse button drag actions in the wrong views have no effect on the corresponding objects. For example, a left mouse button drag in the mHeroCam or mBrainCam view has no effect on the Portal object. However, notice that the right or middle mouse button click controls the visibility of the Portal object, independent of the location of the mouse pointer. Be aware that the browser maps the right mouse button click to a default pop-up menu. For this reason, you should avoid working with right mouse button clicks in your games.
This chapter was about controlling and interacting with the Camera object. You have learned about the most common camera manipulation operations including clamping, panning, and zooming. These operations are implemented in the game engine with utility functions that map the high-level specifications to actual WC window bound parameters. The sudden, often annoying, and potentially confusing movements from camera manipulations are mitigated with the introduction of interpolation. Through the implementation of the camera shake effect, you have discovered that some movements can be modeled by simple mathematical formulations. You have also experienced the importance of effective Camera object abstraction in supporting multiple camera views. The last section guided you through the implementation of transforming a mouse position from the Canvas Coordinate space to the WC space.
In Chapter 5, you found out how to represent and draw an object with a visually appealing image and control the animation of this object. In Chapter 6, you read about how to define an abstraction to encapsulate the behaviors of an object and the fundamental support required to detect collisions between objects. This chapter was about the “directing” of these objects: what should be visible, where the focus should be, how much of the world to show, how to ensure smooth transition between foci, and how to receive input from the mouse. With these capabilities, you now have a well-rounded game engine framework that can represent and draw objects, model and manage the behaviors of the objects, and control how, where, and what objects are shown.
The following chapters will continue to examine object appearance and behavior at more advanced levels, including creating lighting and illumination effects in a 2D world and simulating and integrating behaviors based on simple classical mechanics.
You’ve learned the basics of object interaction, and it’s a good time to start thinking about creating your first simple game mechanic and experimenting with the logical conditions and rules that constitute well-formed gameplay experiences. Many designers approach game creation from the top-down (meaning they start with an idea for an implementation of a specific genre like a real-time strategy, tower defense, or role-playing game), which we might expect in an industry like video games where the creators typically spend quite a bit of time as content consumers before transitioning into content makers. Game studios often reinforce this top-down design approach, assigning new staff to work under seasoned leads to learn best practices for whatever genre that particular studio works in. This has proven effective for training designers who can competently iterate on known genres, but it’s not always the best path to develop well-rounded creators who can design entirely new systems and mechanics from the ground-up.
The aforementioned might lead us to ask, “What makes gameplay well formed?” At a fundamental level, a game is an interactive experience where rules must be learned and applied to achieve a specified outcome; all games must meet this minimum criterion, including card, board, physical, video, and other game types. Taking it a step further, a good game is an interactive experience with rules people enjoy learning and applying to achieve an outcome they feel invested in. There’s quite a bit to unpack in this brief definition, of course, but as a general rule, players will enjoy a game more when the rules are discoverable, consistent, and make logical sense and when the outcome feels like a satisfactory reward for mastering those rules. This definition applies to both individual game mechanics and entire game experiences. To use a metaphor, it can be helpful to think of game designs as being built with letters (interactions) that form words (mechanics) that form sentences (levels) that ultimately form readable content (genres). Most new designers attempt to write novels before they know the alphabet, and everyone has played games where the mechanics and levels felt at best like sentences written with poor grammar and at worst like unsatisfying, random jumbles of unintelligible letters.
Over the next several chapters, you’ll learn about more advanced features in 2D game engines, including simulations of illumination and physical behaviors. You’ll also be introduced to a set of design techniques enabling you to deliver a complete and well-formed game level, integrating these techniques and utilizing more of the nine elements of game design discussed in Chapter 4 in an intentional way and working from the ground-up to deliver a unified experience. In the earliest stages of design exploration, it’s often helpful to focus only on creating and refining the basic game mechanics and interaction model; at this stage, try to avoid thinking about setting, meta-game, systems design, and the like (these will be folded into the design as it progresses).

The image represents a single game screen divided into three areas. A playable area on the left with a hero character (the circle marked with a P), an impassable barrier marked with a lock icon, and a reward area on the right
The screen represented in Figure 7-13 is a useful starting place when exploring new mechanics. The goal for this exercise is to create one logical challenge that a player must complete to unlock the barrier and reach the reward. The specific nature of the task can be based on a wide range of elemental mechanics: it might involve jumping or shooting, puzzle solving, narrative situations, or the like. The key is to keep this first iteration simple (this first challenge should have a limited number of components contributing to the solution) and discoverable (players must be able to experiment and learn the rules of engagement so they can intentionally solve the challenge). You’ll add complexity and interest to the mechanic in later iterations, and you’ll see how elemental mechanics can be evolved to support many kinds of game types.

The game screen is populated with an assortment of individual objects

As the player moves the hero character around the game screen, the shapes “activate” with a highlight (#1); activating certain shapes causes a section of the lock and one-third of the surrounding ring to glow (#2)

Activating some shapes (#3) will not cause the lock and ring to glow (#4)

After the first object was activated (the circle in the upper right) and caused the top section of the lock and first third of the ring to glow, as shown in Figure 7-15, the second object in the correct sequence (#5) caused the middle section of the lock and second third of the ring to glow (#6)
You (and players) should now have all required clues to learn the rules of this mechanic and solve the puzzle. There are three shapes the player can interact with and only one instance of each shape per row; the shapes are representations of the top, middle, and bottom of the lock icon, and as shown in Figure 7-15, activating the circle shape caused the corresponding section of the lock to glow. The activation shown in Figure 7-16, however, did not cause the corresponding section of the lock to glow, and the difference is the “hook” for this mechanic: sections of the lock must be activated in the correct relative position: top in the top row, middle in the middle row, and bottom in the bottom row (you might also choose to require that players activate them in the correct sequence starting with the top section, although that requirement is not discoverable just from looking at Figures 7-15 to 7-17).
Congratulations, you’ve now created a well-formed and logically consistent (if simple) puzzle, with all of the elements needed to build a larger and more ambitious level! This unlocking sequence is a game mechanic without narrative context: the game screen is intentionally devoid of game setting, visual style, or genre alignment at this stage of design because we don’t want to burden our exploration yet with any preconceived expectations. It can benefit you as a designer to spend time exploring game mechanics in their purest form before adding higher-level game elements like narrative and genre, and you’ll likely be surprised at the unexpected directions these simple mechanics will take you as you build them out.
Simple mechanics like the one in this example can be described as “complete a multistage task in the correct sequence to achieve a goal” and are featured in many kinds of games; any game that requires players to collect parts of an object and combine them in an inventory to complete a challenge, for example, utilizes this mechanic. Individual mechanics can also be combined with other mechanics and game features to form compound elements that add complexity and flavor to your game experience.
The camera exercises in this chapter provide good examples for how you might add interest to a single mechanic; the simple Camera Manipulations project, for example, demonstrates a method for advancing game action. Imagine in the previous example that after a player receives a reward for unlocking the barrier, they move the hero object to the right side of the screen and advance to a new “room” or area. Now imagine how gameplay would change if the camera advanced the screen at a fixed rate when the level started; the addition of autoscrolling changes this mechanic considerably because the player must solve the puzzle and unlock the barrier before the scrolling screen pushes the player off. The first instance creates a leisurely puzzle-solving game experience, while the latter increases the tension considerably by giving the player a limited amount of time to complete each screen. In an autoscrolling implementation, how might you lay out the game screen to ensure the player has sufficient time to learn the rules and solve the puzzle?
The Multiple Cameras project can be especially useful as a mini-map that provides information about places in the game world not currently displayed on the game screen; in the case of the previous exercise, imagine that the locked barrier appeared somewhere else in the game world other than the player’s current screen and that a secondary camera acting as a mini-map displayed a zoomed out view of the entire game world map. As the game designer, you might want to let the player know when they complete a task that allows them to advance and provide information about where they need to go next, so in this case, you might flash a beacon on the mini-map calling attention to the barrier that just unlocked and showing the player where to go. In the context of our “game design is like a written language” metaphor, adding additional elements like camera behavior to enhance or extend a simple mechanic is one way to begin forming “adjectives” that add interest to the basic nouns and verbs we’ve been creating from the letters in the game design alphabet.
A game designer’s primary challenge is typically to create scenarios that require clever experimentation while maintaining logical consistency; it’s perfectly fine to frustrate players by creating devious scenarios requiring creative problem solving (we call this “good” frustration), but it’s generally considered poor design to frustrate players by creating scenarios that are logically inconsistent and make players feel that they succeeded in a challenge only by random luck (“bad” frustration). Think back to the games you’ve played that have resulted in bad frustration: where did they go wrong, and what might the designers have done to improve the experience?
The locked room scenario is a useful design tool because it forces you to construct basic mechanics, but you might be surprised at the variety of scenarios that can result from this exercise. Try a few different approaches to the locked room puzzle and see where the design process takes you, but keep it simple. For now, stay focused on one-step events to unlock the room that require players to learn only one rule. You’ll revisit this exercise in the next chapter and begin creating more ambitious mechanics that add additional challenges.
Understand the parameters of simple illumination models
Define infrastructure supports for working with multiple light sources
Understand the basics of diffuse reflection and normal mapping
Understand the basics of specular reflection and the Phong illumination model
Implement GLSL shaders to simulate diffuse and specular reflection and the Phong illumination model
Create and manipulate point, directional, and spotlights
Simulate shadows with the WebGL stencil buffer
Up until now in the game engine, you have implemented mostly functional modules in order to provide the fundamentals required for many types of 2D games. That is, you have developed engine components and utility classes that are designed to support the actual gameplay directly. This is a great approach because it allows you to systematically expand the capabilities of your engine to allow more types of games and gameplay. For instance, with the topics covered thus far, you can implement a variety of different games including puzzle games, top-down space shooters, and even simple platform games.
An illumination model, or a lighting model, is a mathematical formulation that describes the color and brightness of a scene based on approximating light energy reflecting off the surfaces in the scene. In this chapter, you will implement an illumination model that indirectly affects the types of gameplay your game engine can support and the visual fidelity that can be achieved. This is because illumination support from a game engine can be more than a simple aesthetic effect. When applied creatively, illumination can enhance gameplay or provide a dramatic setting for your game. For example, you could have a scene with a torch light that illuminates an otherwise dark pathway for the hero, with the torch flickering to communicate a sense of unease or danger to the player. Additionally, while the lighting model is based on light behaviors in the physical world, in your game implementation, the lighting model allows surreal or physically impossible settings, such as an oversaturated light source that displays bright or iridescent colors or even a negative light source that absorbs visible energy around it.
When implementing illumination models commonly present in game engines, you will need to venture into concepts in 3D space to properly simulate light. As such, the third dimension, or depth, must be specified for the light sources to cast light energy upon the game objects, or the Renderable objects, which are flat 2D geometries. Once you consider concepts in 3D, the task of implementing a lighting model becomes much more straightforward, and you can apply knowledge from computer graphics to properly illuminate a scene.
A simplified variation of the Phong illumination model that caters specifically to the 2D aspect of your game engine will be derived and implemented. However, the principles of the illumination model remain the same. If you desire more information or a further in-depth analysis of the Phong illumination model, please refer to the recommended reference books from Chapter 1.
Ambient light: Reviews the effects of lights in the absence of explicit light sources
Light source: Examines the effect of illumination from a single light source
Multiple light sources: Develops game engine infrastructure to support multiple light sources
Diffuse reflection and normal maps: Simulates light reflection from matte or diffuse surfaces
Specular light and material: Models light reflecting off shiny surfaces and reaching the camera
Light source types: Introduces illumination based on different types of light sources
Shadow: Approximates the results from light being blocked
Together, the projects in this chapter build a powerful tool for adding visual intricacy into your games. In order to properly render and display the results of illumination, the associated computation must be performed for each affected pixel. Recall that the GLSL fragment shader is responsible for computing the color of each pixel. In this way, each fundamental element of the Phong illumination model can be implemented as additional functionality to existing or new GLSL fragment shaders. In all projects of this chapter, you will begin by working with the GLSL fragment shader.
Ambient light, often referred to as background light, allows you to see objects in the environment when there are no explicit light sources. For example, in the dark of night, you can see objects in a room even though all lights are switched off. In the real world, light coming from the window, from underneath the door, or from the background illuminates the room for you. A realistic simulation of the background light illumination, often referred to as indirect illumination, is algorithmically complex and can be computationally expensive. Instead, in computer graphics and most 2D games, ambient lighting is approximated by adding a constant color, or the ambient light, to every object within the current scene or world. It is important to note that while ambient lighting can provide the desired results, it is only a rough approximation and does not mimic real-world indirect lighting.

Running the Global Ambient project
Left mouse button: Increases the global red ambient
Middle mouse button: Decreases the global red ambient
Left-/right-arrow keys: Decrease/increase the global ambient intensity
To experience the effects of ambient lighting
To understand how to implement a simple global ambient illumination across a scene
To refamiliarize yourself with the SimpleShader/Renderable pair structure to interface to GLSL shaders and the game engine
You can find the following external resources in the assets folder: the fonts folder, which contains the default system fonts, and two texture images, minion_sprite.png, which defines the sprite elements for the hero and the minions, and bg.png, which defines the background.
Modify the fragment shader simple_fs.glsl by defining two new uniform variables uGlobalAmbientColor and uGlobalAmbientIntensity and multiplying these variables with the uPixelColor when computing the final color for each pixel:
Similarly, modify the texture fragment shader texture_fs.glsl by adding the uniform variables uGlobalAmbientColor and uGlobalAmbientIntensity. Multiply these two variables with the sampled texture color to create the background lighting effect.
Modify the simple_shader.js file in the src/engine/shaders folder to import from the defaultResources module for accessing the global ambient light effects variables:
Define two new instance variables in the constructor for storing the references or locations of the ambient color and intensity variables in the GLSL shader:
In step E of the SimpleShader constructor, call the WebGL getUniformLocation() function to query and store the locations of the uniform variables for ambient color and intensity in the GLSL shader:
In the activate() function, retrieve the global ambient color and intensity values from the defaultResources module and pass them to the corresponding uniform variables in the GLSL shader. Notice the data type-specific WebGL function names for setting uniform variables. As you can probably guess, uniform4fv corresponds to vec4, which is the color storage, and uniform1f corresponds to a float, which is the intensity.
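As an excerpt-style sketch, the ambient-light additions to SimpleShader might look like the following; the glSys module name and the defaultResources accessor names are assumptions based on the descriptions above, and the elided existing steps are marked with comments:

// simple_shader.js (excerpt): hedged sketch of the global ambient additions
import * as glSys from "../core/gl.js";                                   // assumed path
import * as defaultResources from "../resources/default_resources.js";   // assumed path

class SimpleShader {
    constructor(vertexShaderPath, fragmentShaderPath) {
        // ... existing steps A-D: load sources, compile, link, query vertex attributes ...
        let gl = glSys.get();
        // step E: query and store the locations of the new GLSL uniform variables
        this.mGlobalAmbientColorRef = gl.getUniformLocation(
            this.mCompiledShader, "uGlobalAmbientColor");
        this.mGlobalAmbientIntensityRef = gl.getUniformLocation(
            this.mCompiledShader, "uGlobalAmbientIntensity");
    }

    activate(pixelColor, trsMatrix, cameraMatrix) {
        let gl = glSys.get();
        // ... existing activation of the vertex buffer, pixel color, and matrices ...
        // uniform4fv for the vec4 color, uniform1f for the float intensity
        gl.uniform4fv(this.mGlobalAmbientColorRef,
            defaultResources.getGlobalAmbientColor());
        gl.uniform1f(this.mGlobalAmbientIntensityRef,
            defaultResources.getGlobalAmbientIntensity());
    }
}

export default SimpleShader;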
Create the MyGame class access file in src/my_game. For now, the MyGame functionality should be imported from the basic class implementation file, my_game_main.js. With full access to the MyGame class, it is convenient to define the webpage onload() function in this file.
Create my_game_main.js; import from the engine access file, index.js, and from Hero and Minion; and remember to export the MyGame functionality. Now, as in all previous cases, define MyGame as a subclass of engine.Scene with the constructor that initializes instance variables to null.
Load and unload the background and the minions:
Initialize the camera and scene objects with corresponding values to ensure proper scene view at startup. Note the simple elements in the scene, the camera, the large background, a Hero, the left and right Minion objects, and the status message.
Define the draw() function. As always, draw the status message last such that it will not be covered by any other object.
Lastly, implement the update() function to update all objects and receive controls over global ambient color and intensity:
You can now run the project and observe the results. Notice that the initial scene is dark. This is because the RGB values for the global ambient color were all initialized to 0.3. Since the ambient color is multiplied by the color sampled from the textures, the results are similar to applying a dark tint across the entire scene. The same effect can be accomplished if the RGB values were set to 1.0 and the intensity was set to 0.3 because the two sets of values are simply multiplied.
Before moving on to the next project, try fiddling with the ambient red channel and the ambient intensity to observe their effects on the scene. By pressing the right-arrow key, you can increase the intensity of the entire scene and make all objects more visible. Continue increasing the intensity and observe that when it reaches values beyond 15.0, all colors in the scene converge toward white, or oversaturate. Without proper context, oversaturation can be a distraction. However, it is also true that strategically creating oversaturation on selective objects can be used to indicate significant events, for example, triggering a trap. The next section describes how to create and direct a light source to illuminate selected objects.
Examine your surroundings and you can observe many types of light sources, for example, your table lamp, light rays from the sun, or an isolated light bulb. The isolated light bulb can be described as a point that emits light uniformly in all directions or a point light. The point light is where you will begin to analyze light sources.
Fundamentally, a point light illuminates an area or radius around a specified point. In 3D space, this region of illumination is simply a sphere, referred to as a volume of illumination. The volume of illumination of a point light is defined by the position of the light, or the center of the sphere, and the distance that the light illuminates, or the radius of the sphere. To observe the effects of a light source, objects must be present and within the volume of illumination.
As mentioned in the introduction of this chapter, the 2D engine will need to venture into the third dimension to properly simulate the propagation of light energy. Now, consider your 2D engine; thus far, you have implemented a system in which everything is in 2D. An alternative interpretation is that the engine defines and renders everything on a single plane where z = 0, with objects layered by drawing order. Onto this system, you are going to add light sources that reside in 3D.

Point light and the corresponding volume of illumination in 3D

LightShader/LightRenderable pair and the corresponding GLSL LightShader
Finally, it is important to remember that the GLSL fragment shader is invoked once for every pixel covered by the corresponding geometry. This means that the GLSL fragment shaders you are about to create will be invoked many times per frame, probably in the range of hundreds of thousands or even millions. Considering the fact that the game loop initiates redrawing at a real-time rate, or around 60 frame redraws per second, the GLSL fragment shaders will be invoked many millions of times per second! The efficiency of the implementation is important for a smooth experience.

Running the Simple Light Shader project
WASD keys: Move the hero character on the screen
WASD keys + left mouse button: Move the hero character and the light source around the screen
Left-/right-arrow key: Decreases/increases the light intensity
Z/X key: Increases/decreases the light Z position
C/V key: Increases/decreases the light radius
To understand how to simulate the illumination effects from a point light
To observe point light illumination
To implement a GLSL shader that supports point light illumination
In the src/glsl_shaders folder, create a new file and name it light_fs.glsl.
Refer to texture_fs.glsl and copy all uniform and varying variables. This is an important step because the light_fs fragment shader will interface to the game engine via the LightShader class. The LightShader class, in turn, will be implemented as a subclass of TextureShader, where the existence of these variables is assumed.
Now, define the variables to support a point light: on/off switch, color, position, and radius. It is important to note that the position and radius are in units of pixels.
Step A, sample the texture color and apply the ambient color and intensity.
Step B, perform the light source illumination. This is accomplished by first determining if the computation is required, that is, testing if the light is switched on and if the pixel is nontransparent. If both are favorable, the distance between the light position and the current pixel is compared with the light radius to determine if the pixel is inside the volume of illumination. Note that gl_FragCoord.xyz is the GLSL-defined variable for the current pixel position and that this computation assumes pixel-space units. When all conditions are favorable, the color of the light is accumulated into the final result.
The last step is to apply the tint and to set the final color via gl_FragColor.
Create a new lights folder in the src/engine folder. In the lights folder, add a new file and name it lights.js.
Edit lights.js to create the Light class, and define the constructor to initialize the light color, position, radius, and on/off status. Remember to export the class.
Define the getters and setters for the instance variables:
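A minimal sketch of lights.js might look like the following; the default values are illustrative, the accessor names are assumptions based on the description, and vec3 and vec4 from gl-matrix.js are assumed to be globally available:

// lights.js: hedged sketch of the Light class described above
class Light {
    constructor() {
        this.mColor = vec4.fromValues(0.1, 0.1, 0.1, 1);  // light color
        this.mPosition = vec3.fromValues(0, 0, 5);        // WC position, with a z value
        this.mRadius = 10;                                // illumination radius in WC
        this.mIsOn = true;
    }

    setColor(c) { this.mColor = vec4.clone(c); }
    getColor() { return this.mColor; }

    set2DPosition(p) { this.mPosition = vec3.fromValues(p[0], p[1], this.mPosition[2]); }
    setXPos(x) { this.mPosition[0] = x; }
    setYPos(y) { this.mPosition[1] = y; }
    setZPos(z) { this.mPosition[2] = z; }
    getPosition() { return this.mPosition; }

    setRadius(r) { this.mRadius = r; }
    getRadius() { return this.mRadius; }

    setLightTo(on) { this.mIsOn = on; }
    isLightOn() { return this.mIsOn; }
}

export default Light;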
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
In the src/engine/shaders folder, create a new file and name it light_shader.js.
Define the LightShader class to be a subclass of SpriteShader. In the constructor, define the necessary variables to support sending the information associated with a point light to the light_fs fragment shader. The point light information in the engine is stored in mLight, while the reference to the Camera is important to convert all information from WC to pixel space. The last four lines of the constructor query to obtain the reference locations to the uniform variables in light_fs. Don’t forget to export the class.
Define a simple setter function to associate a light and camera with the shader:
Override the activate() function to append the new functionality of loading the point light information in mLight when the light is present. Notice that you still call the activate() function of the super class to communicate the rest of the values to the uniform variables of the light_fs fragment shader.
Implement the _loadToShader() function to communicate the values of the point light to the uniform variables in the shader. Recall that this communication is performed via the references created in the constructor and the set uniform functions. It is important to note that the camera provides the new coordinate space transformation functionality of wcPosToPixel() and wcSizeToPixel(). These two functions ensure corresponding values in the light_fs are in pixel space such that relevant computations such as distances between positions can be performed. The implementation of these functions will be examined shortly.
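Putting the pieces together, a sketch of light_shader.js might look like the following; the GLSL uniform variable names (uLightColor, uLightPosition, uLightRadius, uLightOn) and the glSys module are assumptions based on the text, and vec3 from gl-matrix.js is assumed to be global:

// light_shader.js: hedged sketch of the single-light shader interface
import SpriteShader from "./sprite_shader.js";
import * as glSys from "../core/gl.js";   // assumed path

class LightShader extends SpriteShader {
    constructor(vertexShaderPath, fragmentShaderPath) {
        super(vertexShaderPath, fragmentShaderPath);
        this.mLight = null;    // the point light illuminating this shader's pixels
        this.mCamera = null;   // needed to transform WC values into pixel space

        // references to the point-light uniform variables in light_fs.glsl
        let gl = glSys.get();
        this.mColorRef = gl.getUniformLocation(this.mCompiledShader, "uLightColor");
        this.mPosRef = gl.getUniformLocation(this.mCompiledShader, "uLightPosition");
        this.mRadiusRef = gl.getUniformLocation(this.mCompiledShader, "uLightRadius");
        this.mIsOnRef = gl.getUniformLocation(this.mCompiledShader, "uLightOn");
    }

    setCameraAndLight(c, l) {
        this.mCamera = c;
        this.mLight = l;
    }

    activate(pixelColor, trsMatrix, cameraMatrix) {
        super.activate(pixelColor, trsMatrix, cameraMatrix);
        if (this.mLight !== null) {
            this._loadToShader(this.mCamera);
        } else {
            glSys.get().uniform1i(this.mIsOnRef, false);  // ensure the light stays off
        }
    }

    _loadToShader(aCamera) {
        let gl = glSys.get();
        gl.uniform1i(this.mIsOnRef, this.mLight.isLightOn());
        if (this.mLight.isLightOn()) {
            // transform the light position and radius from WC to pixel space
            let p = aCamera.wcPosToPixel(this.mLight.getPosition());
            let r = aCamera.wcSizeToPixel(this.mLight.getRadius());
            gl.uniform4fv(this.mColorRef, this.mLight.getColor());
            gl.uniform3fv(this.mPosRef, vec3.fromValues(p[0], p[1], p[2]));
            gl.uniform1f(this.mRadiusRef, r);
        }
    }
}

export default LightShader;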
Create a new file in the src/engine/renderables folder and name it light_renderable.js.
Define the LightRenderable class to extend SpriteAnimateRenderable, set the shader to reference the new LightShader, and initialize a Light reference in the constructor. This is the light that shines and illuminates the SpriteAnimateRenderable. Don’t forget to export the class.
Define a draw function to pass the camera and illuminating light source to the LightShader before invoking the superclass draw() function to complete the drawing:
Lastly, simply add the support to get and set the light:
Before moving on, remember to update the engine access file, index.js, to forward the newly defined functionality to the client .
As discussed when you first defined the TextureShader in Chapter 5, only a single instance is required for each shader type, and all the shaders are always hidden from the game programmer by a corresponding Renderable type. Each instance of the shader type is created during engine initialization by the shaderResources module in the src/engine/core folder.
Edit shader_resources.js in the src/engine/core folder to import LightShader; define the path to the GLSL source code, a corresponding variable and access function for the shader:
Create a new instance of light shader in the createShaders() function:
Load the light shader GLSL source code in the init() function:
Remember to release GLSL resources and unload the source code during cleanup:
Lastly, export the access function to allow sharing of the created instance in the engine:
The Camera utility functions, such as wcPosToPixel(), are invoked multiple times while rendering the LightShader object. These functions compute the transformation between WC and pixel space. This transformation requires the computation of intermediate values, for example, the lower-left corner of the WC window, that do not change during each rendering invocation. To avoid repeatedly computing these values, a per-render invocation cache should be defined for the Camera object.
Edit camera_main.js and define a PerRenderCache class; in the constructor, define variables to hold the ratio between the WC space and the pixel space as well as the origin of the Camera. These are intermediate values required for computing the transformation from WC to pixel space, and these values do not change once rendering begins.
Modify the Camera class to instantiate a new PerRenderCache object. It is important to note that this variable represents local caching of information and should be hidden from the rest of the engine.
Initialize the per-render cache in the setViewAndCameraMatrix() function by adding a step B3 to calculate and set the cache based on the Camera viewport width, world width, and world height:
Notice that the PerRenderCache class is completely local to the camera_main.js file. It is important to hide and carefully handle complex local caching functionality.
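A sketch of the cache and of the step B3 computation, with the mRenderCache variable name assumed from the description above, might look like this:

// camera_main.js (excerpt): hedged sketch of the per-render cache;
// this class stays local to camera_main.js and is not exported
class PerRenderCache {
    constructor() {
        this.mWCToPixelRatio = 1;  // WC-to-pixel transformation ratio
        this.mCameraOrgX = 1;      // lower-left corner of the WC window
        this.mCameraOrgY = 1;
    }
}

// In the Camera constructor:
//     this.mRenderCache = new PerRenderCache();
//
// Step B3 of setViewAndCameraMatrix() refreshes the cache once per render,
// based on the viewport width and the WC window dimensions:
//     this.mRenderCache.mWCToPixelRatio =
//         this.mViewport[2] / this.getWCWidth();      // viewport width / WC width
//     this.mRenderCache.mCameraOrgX = center[0] - (this.getWCWidth() / 2);
//     this.mRenderCache.mCameraOrgY = center[1] - (this.getWCHeight() / 2);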
Edit the Camera access file, camera.js, to import from the file, camera_xform.js, which will contain the latest functionality additions, the WC to pixel space transform support:
In the src/engine/cameras folder, create a new file and name it camera_xform.js. Import from camera_input.js such that you can continue to add new functionality to the Camera class, and do not forget to export.
Create a function to approximate a fake pixel space z value by scaling the input parameter according to the mWCToPixelRatio variable:
Define a function to convert from WC to pixel space by subtracting the camera origin followed by scaling with the mWCToPixelRatio. The 0.5 offset at the end of the x and y conversion ensures that you are working with the center of the pixel rather than a corner.
Lastly, define a function for converting a length from WC to pixel space by scaling with the mWCToPixelRatio variable:
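Collected into camera_xform.js, a sketch of these transforms might look like the following; the per-render cache field names match the sketch above, numeric viewport indices stand in for the engine's eViewport constants, and vec3 from gl-matrix.js is assumed to be global:

// camera_xform.js: hedged sketch of the WC-to-pixel-space transforms
import Camera from "./camera_input.js";

// approximate a pixel-space z value by scaling with the cached ratio
Camera.prototype.fakeZInPixelSpace = function (z) {
    return z * this.mRenderCache.mWCToPixelRatio;
};

// WC position (vec3) to pixel space: subtract the camera origin, scale by the
// ratio, and add 0.5 to address the center of the pixel; the viewport origin
// offset keeps the result consistent with gl_FragCoord, which is canvas based
Camera.prototype.wcPosToPixel = function (p) {
    let x = this.mViewport[0] + ((p[0] - this.mRenderCache.mCameraOrgX) *
            this.mRenderCache.mWCToPixelRatio) + 0.5;
    let y = this.mViewport[1] + ((p[1] - this.mRenderCache.mCameraOrgY) *
            this.mRenderCache.mWCToPixelRatio) + 0.5;
    let z = this.fakeZInPixelSpace(p[2]);
    return vec3.fromValues(x, y, z);
};

// WC length to pixel space
Camera.prototype.wcSizeToPixel = function (s) {
    return (s * this.mRenderCache.mWCToPixelRatio) + 0.5;
};

export default Camera;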
The MyGame level must be modified to utilize and test the newly defined light functionality.
Edit the hero.js file in the src/my_game/objects folder; in the constructor, replace the SpriteRenderable with a LightRenderable instantiation:
Edit the minion.js file in the src/my_game/objects folder; in the constructor, replace the SpriteRenderable with a LightRenderable instantiation:
With the implementation of the light completed and the game objects properly updated, you can now modify the MyGame level to display and test the light source. Because of the simplistic and repetitive nature of the code changes in the my_game_main.js file of adding variables for the new objects, initializing the objects, drawing the objects, and updating the objects, the details will not be shown here.
With the project now complete, you can run it and examine the results. There are a few observations to take note of. First is the fact that the illuminated results from the light source look like a circle. As depicted in Figure 8-2, this is the illuminated circle of the point light on the z = 0 plane where your objects are located. Press the Z or X key to increase or decrease the light’s z position to observe that the illuminated circle decreases and increases in size as a result of intersection area changes. The sphere/plane intersection result can be verified when you continue to increase/decrease the z position. The illuminated circle will eventually begin to decrease in size and ultimately disappear completely when the sphere is moved more than its radius away from the z=0 plane.
You can also press the C or V key to increase or decrease the point light radius to increase or decrease the volume of illumination, and observe the corresponding changes in the illuminated circle radius.
Now, press the WASD keys along with the left mouse button to move the Hero and observe that the point light always follows the Hero and properly illuminates the background. Notice that the light source illuminates the left minion, the hero, and the background but not the other three objects in the scene. This is because the right minion and the red and green blocks are not LightRenderable objects and thus cannot be illuminated by the defined light source.

Support for multiple light sources
The point light illumination results from the previous project can be improved. You have observed that at its boundary, the illuminated circle disappears abruptly with a sharp brightness transition. This sudden disappearance of illumination results does not reflect real life, where effects from a given light source decrease gradually over distance instead of switching off abruptly. A more visually pleasing light illumination result should show an illuminated circle where the illumination results at the boundary disappear gradually. This gradual decrease of light illumination effect over distance is referred to as distance attenuation. It is a common practice to approximate distance attenuation with quadratic functions because they produce effects that resemble the real world. In general, distance attenuation can be approximated in many ways, and it is often refined to suit the needs of the game.
In the following, you will implement a near and far cutoff distance, that is, two distances from the light source at which the distance attenuation effect will begin and end. These two values give you control over a light source to show a fully illuminated center area with illumination drop-off occurring only at a specified distance. Lastly, a light intensity will be defined to allow the dimming of light without changing its color. With these additional parameters, it becomes possible to define dramatically different effects. For example, you can have a soft, barely noticeable light that covers a wide area or an oversaturated glowing light that is concentrated over a small area in the scene.

Running the Multiple Lights project
WASD keys: Move the hero character on the screen
Number keys 0, 1, 2, and 3: Select the corresponding light source
Arrow keys: Move the currently selected light
Z/X keys: Increase/decrease the light z position
C/V and B/N keys: Increase/decrease the near and far cutoff distances of the selected light
K/L keys: Increase/decrease the intensity of the selected light
H key: Toggles the selected light on/off
To build the infrastructure for supporting multiple light sources in the engine and GLSL shaders
To understand and examine the distance attenuation effects of light
To experience controlling and manipulating multiple light sources in a scene
In the light_fs.glsl file, remove the light variables that were added for a single light and add a struct for light information that holds the position, color, near-distance, far-distance, intensity, and on/off variables. With the struct defined, add a uniform array of lights to the fragment shader. Notice that a #define has been added to hold the number of light sources to be used.
GLSL requires array sizes and the number of loop iterations to be constants. The kGLSLuLightArraySize is the constant for light array size and the corresponding loop iteration control. Feel free to change this value to define as many lights as the hardware can support. For example, you can try increasing the number of lights to 50 and then test and measure the performance.
Define LightEffect() function to compute the illumination results from a light source. This function uses the distance between the light and the current pixel to determine whether the pixel lies within the near radius, in between near and far radii, or farther than the far radius. If the pixel position lies within the near radius, there is no attenuation, so the strength is set to 1. If the position is in between the near and far radii, then the strength is modulated by a quadratic function. A distance of greater than the far radius will result in no illumination from the corresponding light source, or a strength of 0.
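For illustration only, the attenuation logic just described can be expressed in JavaScript as follows; the actual implementation belongs in the light_fs.glsl fragment shader, and the function name here is purely illustrative:

// Illustration: per-light strength based on near and far cutoff distances
function lightStrength(dist, near, far) {
    if (dist > far) return 0.0;    // beyond the far cutoff: no illumination
    if (dist <= near) return 1.0;  // within the near cutoff: full strength
    // between near and far: quadratic drop-off from 1 toward 0
    let n = dist - near;
    let d = far - near;
    return 1.0 - (n * n) / (d * d);
}

// each light's contribution is then its color scaled by intensity and strength,
// for example: contribution = lightColor * intensity * lightStrength(dist, near, far)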
Modify the main function to iterate through all the defined light sources and call the LightEffect() function to calculate and accumulate the contribution from the corresponding light in the array:
Modify the constructor in lights.js to define variables for the new properties:
Define the corresponding get and set accessors for the variables. Note that the radius variable has been generalized and replaced by the near and far cutoff distances.
Lastly, don’t forget to export the class and remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
In the src/engine/shaders folder, create a new file and name it shader_light_at.js; define the ShaderLightAt class and the constructor to receive a shader and an index to the uLight array. Don’t forget to export the class.
Implement the _setShaderReferences() function to set the light property references to a specific index in the uLights array in the light_fs fragment shader:
Implement the loadToShader() function to push the properties of a light to the light_fs fragment shader. Notice that this function is similar to the _loadToShader() function defined in the light_shader.js file from previous project. The important difference is that in this case, light information is loaded to a specific array index.
Define a simple function to update the on/off status of the light in the array of the light_fs fragment shader:
Note that the ShaderLightAt class is defined for loading a light to a specific array element in the GLSL fragment shader. This is an internal engine operation. There is no reason for the game programmer to access this class, and thus, the engine access file, index.js, should not be modified to forward the definition of this class.
Begin by editing the light_shader.js file, importing ShaderLightAt, and removing the _loadToShader() function. The actual loading of light information to the light_fs fragment shader is now handled by the newly defined ShaderLightAt objects.
Modify the constructor to define mLights, which is an array of ShaderLightAt objects to correspond to the uLights array defined in the light_fs fragment shader. It is important to note that the mLights and uLights arrays must be the exact same size.
Modify the activate() function to iterate and load the contents of each ShaderLightAt object to the light_fs shader by calling the corresponding loadToShader() function. Recall that the GLSL fragment shader requires the for-loop control variable to be a constant. This implies that all elements of the uLights array will be processed on each light_fs invocation. For this reason, it is important to ensure all unused lights are switched off. This is ensured by the last while loop in the following code:
Rename the setCameraAndLight() function to setCameraAndLights(); in addition to setting the corresponding variables, check to ensure that the light array size is not greater than the defined array size in the light_fs fragment shader. Lastly, remember to update the corresponding function name in sprite_shader.js.
In the LightRenderable constructor, replace the single light reference variable with an array:
Make sure to update the draw function to reflect the change to multiple light sources:
Define the corresponding accessor functions for the light array:
Modify the my_game_main.js file in the src/my_game folder to reflect the changes to the constructor, initialize function, draw function, and update function. All these changes revolve around handling multiple lights through a light set.
In the src/my_game folder, create the new file my_game_lights.js to import MyGame class from my_game_main.js and to add functionality to instantiate and initialize the lights.
In the src/my_game folder, create the new file my_game_light_control.js to import from my_game_lights.js and to continue to add controls of the lights to MyGame.
Modify my_game.js to import from my_game_light_control.js ensuring access to all of the newly defined functionality.
Run the project to examine the implementation. Try selecting the lights with the 0, 1, 2, and 3 keys and toggling the selected light on/off. Notice that the game programmer has control over which light illuminates which of the objects: all lights illuminate the background, while the hero is illuminated only by lights 0 and 3, the left minion is illuminated only by lights 1 and 3, and the right minion is illuminated only by lights 2 and 3.
Move the Hero object with the WASD keys to observe how the illumination changes as it is moved through the near and far radii of light source 0. With light source 0 selected (type 0), press the C key to increase the near radius of the light. Notice that as the near radius approaches the value of the far radius, the illuminated circle's boundary edge becomes sharper. Eventually, when the near radius is greater than the far radius, you can once again observe the sudden brightness change at the boundary. You are observing a violation of the implicit assumption of the underlying illumination model that the near radius is always less than the far radius. The same situation can be created by decreasing the far radius with the N key.
You can move the light sources with the arrow keys to observe the additive property of lights. Experiment with changing the light source’s z position and its near/far values to observe how similar illumination effects can be accomplished with different z/near/far settings. In particular, try adjusting light intensities with the K/L keys to observe the effects of oversaturation and barely noticeable lighting. You can continue to press the L key till the intensity becomes negative to create a source that removes color from the scene. The two constant color squares are in the scene to confirm that nonilluminated objects can still be rendered.
You can now place or move many light sources and control the illumination or shading at targeted regions. However, if you run the previous project and move one of the light sources around, you may notice some peculiar effects. Figure 8-7 highlights these effects by comparing the illumination results from the previous project on the left to an illumination that you probably expect on the right. Now, refer to the image on the left. First, take note of the general uniform lighting within the near cutoff region where the expected brighter spot around the position of the point light source cannot be observed. Second, examine the vertical faces of the geometric block and take note of the bright illumination on the bottom face that is clearly behind, or pointing away from, the light source. Both of these peculiarities are absent from the right image in Figure 8-7.

Left: results from the previous project. Right: the expected illumination

Surface normal vectors of an object
A human's observation of light illumination is the result of visible energy from light sources reflecting off object surfaces and reaching the eyes. A diffuse, matte, or Lambertian surface reflects light energy uniformly in all directions. Examples of diffuse surfaces include typical printer papers or matte-painted surfaces. Figure 8-9 shows a light source illuminating three diffuse surface element positions, A, B, and C. First, notice that the direction from the position being illuminated toward the light source is defined as the light vector, L, at the position. It is important to note that the direction of the L vector is always toward the light source and that this is a normalized vector with a magnitude of 1. Position A cannot receive any light energy because its normal vector, N, is perpendicular to its light vector, or N · L = 0. Position B can receive all the energy because its normal vector is pointing in the same direction as its light vector, or N · L = 1. In general, as exemplified by position C, the proportion of light energy received and reflected by a diffuse surface is proportional to the cosine of the angle between its normal and the light vector, or N · L. In an illumination model, the term with the N · L computation is referred to as the diffuse, or Lambertian, component.
Normal and light vectors and diffuse illumination
The N · L term, or the diffuse component, is essential for conveying the 3D contour of an object. For example, Figure 8-10 shows a sphere and a torus (doughnut-shaped object) with (the left images) and without (the right images) the corresponding diffuse components. Clearly, in both cases, the 3D contour of the objects is captured by the left versions of the image with the diffuse component.
Examples of 3D objects with and without diffuse component
In a 2D world, as in the case of your game engine, all objects are represented as 2D images, or textures. Since all objects are 2D textured images defined on the xy plane, the normal vectors for all the objects are the same: a vector in the z direction. This lack of distinct normal vectors for objects implies that it is not possible to compute a distinct diffuse component for objects. Fortunately, similar to how texture mapping addresses the limitation of each geometry having only a single color, normal mapping can resolve the issue of each geometry having only a single normal vector.

Normal mapping with two texture images: the normal and the color texture
Even though the geometry being rendered is a simple flat square, when the normal vectors sampled from a normal map are used and the N · L term is properly computed and displayed, the human vision system will perceive a sloped contour.
In summary, a normal texture map or a normal map is a texture map that stores normal vector information rather than the usual color information. Each texel of a normal map encodes the xyz values of a normal vector in the RGB channels. In lieu of displaying the normal map texels as you would with a color texture, the texels are used purely for calculating how the surface would interact with light. In this way, instead of a constant normal vector pointing in the z direction, when a square is normal mapped, the normal vector of each pixel being rendered will be defined by texels from the normal map and can be used for computing the diffuse component. For this reason, the rendered image will display contours that resemble the shapes encoded in the normal map.
In the previous project, you expanded the engine to support multiple light sources. In this section, you will define the IllumShader class to generalize a LightShader to support the computation of the diffuse component based on normal mapping.

Running the Normal Maps and Illumination Shaders project
WASD keys: Move the hero character on the screen
Number keys 0, 1, 2, and 3: Select the corresponding light source
Arrow keys: Move the currently selected light
Z/X key: Increases/decreases the light z position
C/V and B/N keys: Increases/decreases the near and far cutoff distances of the selected light
K/L key: Increases/decreases the intensity of the selected light
H key: Toggles the selected light on/off
To understand and work with normal maps
To implement normal maps as textures in the game engine
To implement GLSL shaders that support diffuse component illumination
To examine the diffuse, or N · L, component in an illumination model
You can find the following external resource files in the assets folder: the fonts folder containing the default system fonts, two texture images (minion_sprite.png and bg.png), and the two corresponding normal maps (minion_sprite_normal.png and bg_normal.png). As in previous projects, the objects are sprite elements of minion_sprite.png, and the background is represented by bg.png.
The minion_sprite_normal.png normal map is generated algorithmically from http://cpetry.github.io/NormalMap-Online/ based on the minion_sprite.png image.
Begin by copying from light_fs.glsl and pasting to a new file, illum_fs.glsl, in the src/glsl_shaders folder.
Edit the illum_fs.glsl file and add a sampler2D object, uNormalSampler, to sample the normal map:
Modify the LightEffect() function to receive a normal vector parameter, N. This normal vector N is assumed to be normalized with a magnitude of 1 and will be used in the diffuse component, N · L, computation. Enter the code to compute the L vector, remember to normalize the vector, and use the result of N · L to scale the light strength accordingly.
Edit the main() function to sample from both the color texture with uSampler and the normal texture with uNormalSampler. Remember that the normal map provides you with a vector that represents the normal vector of the surface element at that given position. Because the xyz normal vector values are stored in the 0 to 1 RGB color format, the sampled normal map results must be scaled and offset to the -1 to 1 range. In addition, recall that texture uv coordinates can be defined with the v direction increasing upward or downward. In this case, depending on the v direction of the normal map, you may also have to flip the y direction of the sampled normal map values. The normalized normal vector, N, is then passed on to the LightEffect() function for the illumination calculations.
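The decoding itself is simple arithmetic. The GLSL performs it per fragment; the JavaScript sketch below shows the same math on a single sampled texel (the flipY flag is an illustrative parameter).

// Decode one normal-map texel: RGB in [0, 1] maps to a normal in [-1, 1], then normalize.
function decodeNormal(r, g, b, flipY) {
    let x = r * 2.0 - 1.0;
    let y = g * 2.0 - 1.0;
    let z = b * 2.0 - 1.0;
    if (flipY) {
        y = -y;                                // depends on the v direction of the normal map
    }
    let len = Math.sqrt(x * x + y * y + z * z);
    return [x / len, y / len, z / len];        // the normalized normal vector N
}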
Normal maps can be created in a variety of different layouts where x or y might need to be flipped in order to properly represent the desired surface geometries. It depends entirely upon the tool or artist that created the map.
In the src/engine/shaders folder, create illum_shader.js, and define IllumShader to be a subclass of LightShader to take advantage of the functionality related to light sources. In the constructor, define a variable, mNormalSamplerRef, to maintain the reference to the normal sampler in the illum_fs fragment shader. Don’t forget to export the class.
Override and extend the activate() function to bind the normal texture sampler reference to WebGL texture unit 1. You may recall from Chapter 5 that TextureShader binds the color texture sampler to texture unit 0. By binding normal mapping to texture unit 1, the WebGL texture system can work concurrently with two active textures: units 0 and 1. As will be discussed in the next subsection, it is important to configure WebGL, via the texture module, to activate the appropriate texture units for the corresponding purpose: color vs. normal texture mapping.
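A minimal sketch of the extended activate() is shown below; the glSys.get() accessor and the exact superclass signature are assumptions based on the surrounding text.

// Sketch: bind uNormalSampler to texture unit 1 (the color sampler stays on unit 0).
activate(pixelColor, trsMatrix, cameraMatrix) {
    super.activate(pixelColor, trsMatrix, cameraMatrix);  // LightShader setup
    let gl = glSys.get();                                 // WebGL2 context (assumption)
    gl.uniform1i(this.mNormalSamplerRef, 1);              // normal map samples unit 1
}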
WebGL supports simultaneous activation of multiple texture units during rendering. Depending on the GPU, a minimum of eight texture units can be active simultaneously during a single rendering pass. In this book, you will activate only two of the texture units during rendering: one for color texture and the other for normal texture.
So far, you have been binding the color texture map to WebGL texture unit 0. With the addition of the normal texture, the binding to a specific unit of the WebGL texture system must now be parameterized. Fortunately, this is a straightforward change.
Begin by creating illum_renderable.js in the src/engine/renderables folder, defining the IllumRenderable class to subclass from LightRenderable, and initializing a mNormalMap instance variable to record the normal map ID. The IllumRenderable object works with two texture maps: mTexture for the color texture map and mNormalMap for normal mapping. Note that these two texture maps share the same texture coordinates defined in mTexCoordBuffer in the SpriteShader. This sharing of texture coordinates implicitly assumes that the geometry of the object is depicted in the color texture map and that the normal texture map is derived to capture the contours of the object, which is almost always the case. Lastly, don't forget to export the class.
Once again, it is important to reiterate that the normal texture map is an image that must be created explicitly by an artist or algorithmically by an appropriate program. Using a regular color texture map image as a normal texture map will not work in general.
Next, override the draw() function to activate the normal map before calling the draw() method of the super class. Notice the second argument of the texture.activate() function call where the WebGL texture unit 1 is explicitly specified. In this way, with IllumShader linking uNormalSampler to WebGL texture unit 1 and illum_fs sampling the uNormalSampler as a normal map, your engine now supports proper normal mapping.
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
Similar to all other shaders in the engine, a default instance of the IllumShader must be defined to be shared. The code involved in defining the default IllumShader instance is identical to that of LightShader presented earlier in this chapter, with the straightforward exception of substituting the corresponding variable names and data type. Please refer to the “Defining a Default LightShader Instance” subsection and the shader_resources.js source code file in the src/engine/core folder for details.
Testing the newly integrated normal map functionality must include the verification that the non-normal mapped simple color texture is working correctly. To accomplish this, the background, hero, and left minion will be created as the newly defined IllumRenderable object, while the right minion will remain a LightRenderable object.
Edit hero.js in src/my_game/objects to modify the constructor of the Hero class to instantiate the game object with an IllumRenderable:
In the same folder, edit minion.js to modify the constructor of Minion class to conditionally instantiate the game object with either a LightRenderable or an IllumRenderable when the normal texture map is present:
You can now modify MyGame to test and display your implementation of the illumination shader. Modify the my_game_main.js file in the src/my_game folder to load and unload the new normal maps and to create the Hero and Minion objects with the normal map files. As previously, the involved changes are straightforward and relatively minimal; as such, the details are not shown here.
With the project now completed, you can run it and check your results to observe the effects of diffuse illumination. Notice that the Hero, left Minion, and the background objects are illuminated with a diffuse computation and appear to provide more depth from the lights. There is much more variation of colors and shades across these objects.
You can verify that the peculiar effects observed on the left image of Figure 8-7 are resolved. For a clearer observation, switch off all other lights (type the light number followed by the H key), leaving only light 2 switched on. Now, move the light position (with the arrow keys) to illuminate the geometric block behind the Hero character; you can move the Hero away with the WASD keys. Verify that you are viewing results similar to those in the right image of Figure 8-7. You should be able to clearly observe the brightest spot that corresponds to the point light position. Additionally, take note that the bottom face of the block is only illuminated when the light position is in front of the face, or when the diffuse term, N · L, is positive.
In general, as you move the light source, observe faces with vertical orientations, for example, the side faces of the geometric block or gaps. As the light position moves across such a boundary, the sign of the N · L term flips, and the corresponding surface illumination undergoes drastic changes (from dark to lit, or vice versa). For a more dramatic result, lower the z height of the light (with the X key) to a value lower than 5. With the normal map and diffuse computation, you have turned a static background image into a background that is defined by complex 3D geometric shapes. Try moving the other light sources and observe the illumination changes on all the objects as the light sources move across them.
Lastly, the slightly pixelated and rough appearances of the Hero and left Minion attest to the fact that the normal maps for these objects are generated algorithmically from the corresponding color images and that the maps are not created by an artist.

Specularity and shininess of objects

Specularity: the reflection of the light source
The R vector is the reflection direction of the light vector L. Because real object surfaces are not perfect mirrors, the specular highlight on an object is visible even when the viewing direction, V, is not perfectly aligned with the R vector. Real-life experience also informs you that the further away V is from R, or the larger the angle α is, the less likely you are to observe the light reflection. In fact, you know that when α is zero, you would observe the maximum light reflection, and when α is 90°, or when V and R are perpendicular, you would observe zero light reflection.
The Phong specularity model
The Phong illumination model simulates the characteristics of specularity with a (V · R)^n term. When V and R are aligned, or when α = 0°, the specularity term evaluates to 1, and the term drops off to 0 according to the cosine function when the separation between V and R increases to 90°, or when α = 90°. The power n, referred to as shininess, describes how rapidly the specular highlight rolls off as α increases. The larger the n value, the faster the cosine function decreases as α increases, the faster the specular highlight drops off, and the glossier the surface appears. For example, in Figure 8-14, the left, middle, and right spheres have corresponding n values of 0, 5, and 30.
While the (V · R)^n term models specular highlights effectively, the cost involved in computing the R vector for every shaded pixel can be significant. As illustrated in Figure 8-17, H, the halfway vector, is defined as the average of the L and V vectors. It is observed that β, the angle between H and the normal vector N, can also be used to characterize specular reflection. Though slightly different, (N · H)^n produces results similar to (V · R)^n with less per-pixel computation cost. The halfway vector will be used to approximate specularity in your implementation.
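In vector terms, the approximation replaces (V · R)^n, where R is L reflected about N, with (N · H)^n, where H = normalize(L + V). The JavaScript sketch below is only an illustration of this math with plain arrays, not engine code.

// Compare the classic reflection-based term with the halfway-vector approximation.
function normalize3(v) {
    let len = Math.hypot(v[0], v[1], v[2]);
    return [v[0] / len, v[1] / len, v[2] / len];
}
function dot3(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Both L and V point away from the shaded position, toward the light and the camera.
function specularPhong(N, L, V, n) {           // (V . R)^n with R = 2(N.L)N - L
    let nDotL = dot3(N, L);
    let R = normalize3([2 * nDotL * N[0] - L[0],
                        2 * nDotL * N[1] - L[1],
                        2 * nDotL * N[2] - L[2]]);
    return Math.pow(Math.max(0, dot3(V, R)), n);
}
function specularHalfway(N, L, V, n) {         // cheaper (N . H)^n approximation
    let H = normalize3([L[0] + V[0], L[1] + V[1], L[2] + V[2]]);
    return Math.pow(Math.max(0, dot3(N, H)), n);
}

For the same inputs, the two functions produce similar, though not identical, falloffs; the halfway version avoids recomputing the reflection vector for every shaded pixel.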
The halfway vector
The ambient term: Ia Ca Ka
The diffuse term: IL CL Kd (N · L)
The specular term: IL CL Ks (N · H)^n

The Phong illumination model

Support for material

Running the Material and Specularity project
WASD keys: Move the hero character on the screen
Number keys 0, 1, 2, and 3: Select the corresponding light source
Arrow keys: Move the currently selected light
Z/X key: Increases/decreases the light z position
C/V and B/N keys: Increase/decrease the near and far cutoff distances of the selected light
K/L key: Increases/decreases the intensity of the selected light
H key: Toggles the selected light on/off
Number keys 5 and 6: Select the left minion and the hero
Number keys 7, 8, and 9: Select the Ka, Kd, and Ks material properties of the selected character (left minion or the hero)
E/R, T/Y, and U/I keys: Increase/decrease the red, green, and blue channels of the selected material property
O/P keys: Increase/decrease the shininess of the selected material property
To understand specular reflection and the Phong specular term
To implement specular highlight illumination in GLSL fragment shaders
To understand and experience controlling the Material of an illuminated object
To examine specular highlights in illuminated images
Edit the illum_fs.glsl file and define a variable, uCameraPosition, for storing the camera position. This position is used to compute the V vector, the viewing direction. Now, create a material struct and a corresponding variable, uMaterial, for storing the per-object material properties. Note the correspondence between the variable names Ka, Kd, Ks, and n and the terms in the Phong illumination model in Figure 8-18.
To support readability, mathematical terms in the illumination model will be defined into separate functions. You will begin by defining the DistanceDropOff() function to perform the exact same near/far cutoff computation as in the previous project.
Define the function to compute the diffuse term. Notice that the texture map color is applied to the diffuse term.
Define the function to compute the specular term. The V vector is computed by normalizing the result of subtracting uCameraPosition from the current pixel position, gl_FragCoord. It is important to observe that this operation is performed in pixel space, and the IllumShader/IllumRenderable object pair must transform the WC camera position to pixel space before sending over the information.
Now you can implement the Phong illumination model to accumulate the diffuse and specular terms. Notice that lgt.Intensity, IL, and lgt.Color, CL, in Figure 8-18 are factored out and multiplied to the sum of diffuse and specular results. The scaling by the light strength based on near/far cutoff computation, strength, is the only difference between this implementation and the diffuse/specular terms listed in Figure 8-18.
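The real accumulation is the GLSL code in illum_fs.glsl; the JavaScript sketch below restates the per-light term with plain arrays so the correspondence to Figure 8-18 is explicit. Function and parameter names here are illustrative.

// Per-light contribution: strength * IL * CL * (Kd * texColor * (N.L) + Ks * (N.H)^n).
function dot3(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

function perLightResult(strength, intensity, lightColor, N, L, H, texColor, kd, ks, n) {
    let nDotL = Math.max(0, dot3(N, L));                 // Lambertian (diffuse) factor
    let nDotH = Math.pow(Math.max(0, dot3(N, H)), n);    // halfway-vector specular factor
    let result = [0, 0, 0];
    for (let c = 0; c < 3; c++) {
        let diffuse = kd[c] * texColor[c] * nDotL;       // texture color modulates diffuse
        let specular = ks[c] * nDotH;
        result[c] = strength * intensity * lightColor[c] * (diffuse + specular);
    }
    return result;   // the ambient term Ia*Ca*Ka is added once in main(), outside this sum
}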
Complete the implementation in the main() function by accounting for the ambient term and looping over all defined light sources to accumulate for ShadedResults(). The bulk of the main function is similar to the one in the illum_fs.glsl file from the previous project. The only important differences are highlighted in bold.
Create material.js in the src/engine folder, define the Material class, and in the constructor, initialize the variables as defined in the surface material property in Figure 8-18. Notice that ambient, diffuse, and specular (Ka, Kd, and Ks) are colors, while shininess is a floating-point number.
Provide straightforward get and set accessors to the variables:
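As a point of reference, a minimal sketch of the class follows; the default values are illustrative, and vec4 is the gl-matrix type the engine already loads.

// Minimal sketch of the Material class (defaults are illustrative, not the book's values).
class Material {
    constructor() {
        this.mKa = vec4.fromValues(0.0, 0.0, 0.0, 0);  // ambient reflectance
        this.mKd = vec4.fromValues(1.0, 1.0, 1.0, 1);  // diffuse reflectance
        this.mKs = vec4.fromValues(0.2, 0.2, 0.2, 1);  // specular reflectance
        this.mShininess = 20;                          // the n exponent
    }
    setAmbient(a) { this.mKa = vec4.clone(a); }
    getAmbient() { return this.mKa; }
    setDiffuse(d) { this.mKd = vec4.clone(d); }
    getDiffuse() { return this.mKd; }
    setSpecular(s) { this.mKs = vec4.clone(s); }
    getSpecular() { return this.mKs; }
    setShininess(n) { this.mShininess = n; }
    getShininess() { return this.mShininess; }
}
export default Material;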
Note that the Material class is designed to represent material properties of Renderable objects and must be accessible to the game programmer. As such, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
Create shader_material.js in the src/engine/shaders folder, define ShaderMaterial class, and in the constructor, initialize the variables as references to the ambient, diffuse, specular, and shininess in the illum_fs GLSL shader.
Define the loadToShader() function to push the content of a Material to the GLSL shader:
Similar to the ShaderLightAt class, the ShaderMaterial class is defined for loading a material to the GLSL fragment shader. This is an internal engine operation. There is no reason for the game programmer to access this class, and thus, the engine access file, index.js, should not be modified to forward the definition of this class.
Edit illum_shader.js in src/engine/shaders to import ShaderMaterial, and modify the constructor to define new variables, mMaterial and mCameraPos, to support Phong illumination computation. Then define the variables mMaterialLoader and mCameraPosRef for keeping references and for loading the corresponding contents to the uniform variables in the shader.
Modify the activate() function to load the material and camera position to the illum_fs fragment shader:
Define the setMaterialAndCameraPos() function to set the corresponding variables for Phong illumination computation:
Edit illum_renderable.js in the src/engine/renderables folder, and modify the constructor to instantiate a new Material object:
Update the draw() function to set the material and camera position to the shader before the actual rendering. Notice that in the call to camera.getWCCenterInPixelSpace(), the camera position is properly transformed into pixel space.
Define a simple accessor for the material object:
As you have seen in the illum_fs fragment shader implementation, the camera position required for computing the V vector must be in pixel space. The Camera object must be modified to provide such information. Since the Camera object stores its position in WC space, this position must be transformed to pixel space for each IllumRenderable object rendered.
Edit the camera_main.js file and add a vec3 to the PerRenderCache to cache the camera’s position in pixel space:
In the Camera constructor, define a z variable to simulate the distance between the Camera object and the rest of the Renderable objects. This third piece of information represents depth and is required for the illumination computation.
In step B4 of the setViewAndCameraMatrix() function, call the wcPosToPixel() function to transform the camera’s position to 3D pixel space and cache the computed results:
Define the accessor for the camera position in pixel space:
You can now test your implementation of the Phong illumination model and observe the effects of altering an object’s material property and specularity. Since the background, Hero, and left Minion are already instances of the IllumRenderable object, these three objects will now exhibit specularity. To ensure prominence of specular reflection, the specular material property, Ks, of the background object is set to bright red in the init() function.
A new function, _selectCharacter(), is defined to allow the user to work with the material property of either the Hero or the left Minion object. The file my_game_material_control.js implements the actual user interaction for controlling the selected material property.
You can run the project and interactively control the material property of the currently selected object (type keys 5 to select the left Minion and 6 for the Hero). By default, the material property of the Hero object is selected. You can try changing the diffuse RGB components by pressing the E/R, T/Y, or U/I keys. Notice that you can press multiple keys simultaneously to change multiple color channels at the same time.
The normal map of the background image is carefully generated and thus is best for examining specularity effects. You can observe red highlights along vertical boundaries in the background image. If you are unsure, pay attention to the top right region of the background image, select light 3 (type the 3 key), and toggle the on/off switch (type the H key). Notice that as the light toggles from off to on, the entire top right region becomes brighter, with a red highlight along the vertical boundary. This red highlight is the reflection of light 3 toward the camera. Now, with light 3 switched on, move it toward the left and right (the left-/right-arrow keys). Observe how the highlight intensifies and then fades as the angle between the halfway vector, H, and the face normal vector, N, changes.
You can also adjust the material to observe specularity on the Hero. Now, select the Hero object (type the 6 key), decrease its diffuse material property (press the R, Y, and I keys at the same time) to around 0.2, and increase the specularity property (type 9 to select Specular, and then press the E, T, and U keys at the same time) to values beyond 1. With this setting, the diffuse term is reduced and the specular highlight emphasized, so you can observe a dark Hero figure with bright highlight spots. If you are unsure, try toggling light 0 (type the 0 key) on/off (type the H key). At this point, you can press and hold the P key to decrease the value of the shininess, n. As the n value decreases, you can observe an increase in the size of the highlighted spots coupled with a decrease in their brightness. As depicted by the middle sphere of Figure 8-14, a smaller n value corresponds to a less polished surface, which typically exhibits highlights that are larger in area but lower in intensity.
Relatively small objects, such as the Hero, do not occupy many pixels; the associated highlight is likely to span an even smaller number of pixels and can be challenging to observe. Specular highlights can convey subtle and important effects; however, their use can also be challenging to master.
At this point, your game engine supports the illumination by many instances of a single type of light, a point light. A point light behaves much like a lightbulb in the real world. It illuminates from a single position with near and far radii where objects can be fully, partially, or not lit at all by the light. There are two other light types that are popular in most game engines: the directional light and the spotlight.
A directional light, in contrast to the point light, does not have a light position or a range. Rather, it illuminates everything in a specific direction. While these characteristics may not seem intuitive, they are perfect for general background lighting, and they mirror the real world. During the day, the general environment is illuminated by the sun, whose rays can conveniently be modeled as a directional light. The light rays from the sun, from the perspective of the earth, are practically parallel, coming from a fixed direction, and these rays illuminate everything. A directional light is a simple light type that requires only a direction variable and has no distance drop-off. Directional lights are typically used as global lights that illuminate the entire scene.

A spotlight and its parameters
In illustrative diagrams, like Figure 8-21, for clarity purposes, light directions are usually represented by lines extending from the light position toward the environment. These lines are usually for illustrative purposes and do not carry mathematical meanings. These illustrative diagrams are contrasted with vector diagrams that explain illumination computations, like Figures 8-15 and 8-16. In vector diagrams, all vectors always point away from the position being illuminated and are assumed to be normalized with a magnitude of 1.

Running the Directional and Spotlights project
WASD keys: Move the hero character on the screen
Number keys 0, 1, 2, and 3: Select the corresponding light source.
Arrow keys: Move the currently selected light; note that this has no effect on the directional light (light 1).
Arrow keys with spacebar pressed: Change the direction of the currently selected light; note that this has no effect on the point light (light 0).
Z/X key: Increases/decreases the light z position; note that this has no effect on the directional light (light 1).
C/V and B/N keys: Increase/decrease the inner and outer cone angles of the selected light; note that these only affect the two spotlights in the scene (lights 2 and 3).
K/L key: Increases/decreases the intensity of the selected light.
H key: Toggles the selected light on/off.
Number keys 5 and 6: Select the left minion and the hero
Number keys 7, 8, and 9: Select the Ka, Kd, and Ks material properties of the selected character (left minion or the hero)
E/R, T/Y, and U/I keys: Increase/decrease the red, green, and blue channels of the selected material property
O/P keys: Increase/decrease the shininess of the selected material property
To understand the two additional light types: directional lights and spotlights
To examine the illumination results from all three different light types
To experience controlling the parameters of all three light types
To support the three different light types in the engine and GLSL shaders
As with the previous projects, the integration of the new functionality will begin with the GLSL shader. You must modify the GLSL IllumShader and LightShader fragment shaders to support the two new light types.
Begin by editing illum_fs.glsl and defining constants for the three light types. Notice that to support proper communications between the GLSL shader and the engine, these constants must have identical values as the corresponding enumerated data defined in the light.js file.
Expand the light struct to accommodate the new light types. While the directional light requires only a Direction variable, a spotlight requires a Direction, inner and outer angles, and a DropOff variable. As will be detailed next, instead of the actual angle values, the cosines of the inner and outer angles are stored in the struct to facilitate efficient implementation. The DropOff variable controls how rapidly light drops off between the inner and outer angles of the spotlight. The LightType variable identifies the type of light that is being represented in the struct.
Define an AngularDropOff() function to compute the angular attenuation for the spotlight:
The parameter lgt is a spotlight in the Light struct, lgtDir is the direction of the spotlight (or Light.Direction normalized), and L is the light vector of the current position to be illuminated. Note that since the dot product of normalized vectors is the cosine of the angle between the vectors, it is convenient to represent all angular displacements by their corresponding cosine values and to perform the computations based on cosines of the angular displacements. Figure 8-23 illustrates the parameters involved in angular attenuation computation.
The lgtDir is the direction of the spotlight, while the light vector, L, is the vector from the position being illuminated to the position of the spotlight.

Computing the angular attenuation of a spotlight
The following code is based on the cosines of angular displacements; a JavaScript sketch of the same logic appears after this list. It is important to remember that given two angles α and β, where both are between 0 and 180 degrees, if α > β, then cos α < cos β.
The cosL is the dot product of L with lgtDir; it records the angular displacement of the position currently being illuminated.
The num variable stores the difference between cosL and cosOuter. A negative num would mean that the position currently being illuminated is outside the outer cone where the position will not be lit and thus no further computation is required.
If the point to be illuminated is within the inner cone, cosL would be greater than lgt.CosInner, and full strength of the light, 1.0, will be returned.
If the point to be illuminated is in between the inner and outer cone angles, use the smoothstep() function to compute the effective strength from the light.
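Putting the list together, here is a JavaScript sketch of the described logic; the exact way the book's GLSL combines smoothstep() with DropOff is an assumption here.

// Angular attenuation of a spotlight, expressed with cosines of the half angles.
function smoothstep(edge0, edge1, x) {
    let t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
    return t * t * (3 - 2 * t);
}

// cosL: cosine of the angle between the spotlight direction and the light vector L.
function angularDropOff(cosL, cosInner, cosOuter, dropOff) {
    let strength = 0.0;
    if (cosL > cosInner) {
        strength = 1.0;                                       // inside the inner cone
    } else if (cosL > cosOuter) {                             // between the two cones
        let t = (cosL - cosOuter) / (cosInner - cosOuter);    // 0 at outer, 1 at inner
        strength = smoothstep(0.0, 1.0, Math.pow(t, dropOff));
    }                                                         // outside the outer cone: 0
    return strength;
}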
Modify the ShadedResults() function to handle each separate case of light source type before combining the results into a color:
You can now modify the GLSL light_fs fragment shader to support the two new light types. The modifications involved are remarkably similar to the changes made for illum_fs, where constant values that correspond to light types are defined, the Light struct is extended to support directional and spotlights, and the angular and distant attenuation functions are defined to properly compute the strengths from the light. Please refer to the light_fs.glsl source code file for details of the implementation.
Edit light.js in the src/engine/lights folder to define and export an enumerated data type for the different light types. It is important that the enumerated values correspond to the constant values defined in the GLSL illum_fs and light_fs shaders.
Modify the constructor to define and initialize the new variables that correspond to the parameters of directional light and spotlight.
Define the get and set accessors for the new variables. The exhaustive listing of these functions is not shown here. Please refer to the light.js source code file for details.
Edit shader_light_at.js to import the eLightType enumerated type from light.js:
Modify the _setShaderReferences() function to set the references to the newly added light properties:
Modify the loadToShader() function to load the newly added light variables for the directional light and spotlight. Notice that depending upon the light type, the values of some variables may not be transferred to the GLSL shader. For example, the parameters associated with angular attenuation, the inner and outer angles, and the drop-off will be transferred only for spotlights.
Note that for mInnerRef and mOuterRef, the cosines of half the angles are actually computed and passed. The inner and outer angles are the total angular spreads of the spotlight, where half of each angle describes the angular displacement from the light direction. For this reason, the cosines of the half angles are what the computations actually use. This optimization relieves the GLSL fragment shaders from recomputing the cosine of these angles on every invocation.
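A sketch of how this conversion might look when loading a spotlight is shown below; the _loadSpotlightAngles() helper and the getInner()/getOuter()/getDropOff() accessors (returning full cone angles in radians) are illustrative assumptions.

// Sketch: push cosines of the half angles so the shader can compare them with dot products.
_loadSpotlightAngles(gl, aLight) {
    gl.uniform1f(this.mInnerRef, Math.cos(0.5 * aLight.getInner()));   // cos(inner / 2)
    gl.uniform1f(this.mOuterRef, Math.cos(0.5 * aLight.getOuter()));   // cos(outer / 2)
    gl.uniform1f(this.mDropOffRef, aLight.getDropOff());
}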
The main goals of the MyGame level are to test and provide functionality for manipulating the new light types. The modifications involved are straightforward; my_game_lights.js is modified to create all three light types, and my_game_light_control.js is modified to support the manipulation of the direction of the selected light when the arrow and space keys are pressed simultaneously. The implementation of these simple changes is not shown here. Please refer to the source code files for details.
You can run the project and interactively control the lights to examine the corresponding effects. There are four light sources defined, each illuminating all objects in the scene. Light source 0 is a point light, 1 is a directional light, and 2 and 3 are spotlights.
You can examine the effect of a directional light by typing the 1 key to select the light. Now hold the spacebar while taking turns pressing the left/right or up/down keys to swing the direction of the directional light. You will notice drastic illumination changes on the boundary edges of the 3D geometric shapes in the background image, together with occasional prominent red spots of specular reflection. Now, type the H key to switch off the directional light and observe that the entire scene becomes darker. Without any kind of attenuation, directional lights can be used as effective tools for brightening the entire scene.
Type the 2 or 3 key to select one of the spotlights; once again, hold the spacebar while taking turns pressing the left/right or up/down keys to swing the direction of the spotlight. With the spotlight, you will observe the illuminated region swinging and changing shape between a circle (when the spotlight points perpendicularly toward the background image) and various elongated ellipses. The arrow keys will move the illuminated region around. Try experimenting with the C/V and B/N keys to increase/decrease the inner and outer cone angles. Notice that if you set the inner cone angle to be larger than the outer one, the boundary of the illuminated region becomes sharp, and lighting effects from the spotlight drop off abruptly. You can consider switching off the directional light, light 1, for a clearer observation of the spotlight effects.
Try experimenting with the different light settings, including overlapping the light illumination regions and setting the light intensities, the K and L keys, to negative numbers. While impossible in the physical world, negative intensity lights are completely valid options in a game world.
Shadow is the result of light being obstructed or occluded. As an everyday phenomenon, shadow is something you observe but probably do not give much thought to. However, shadow plays a vital role in the visual perception system of humans. For example, the shadows of objects convey important cues of relative sizes, depths, distances, orderings, and so on. In video games, proper simulation of shadows can increase the quality of appearance and fidelity. For example, you can use shadows to properly convey the distance between two game objects or the height that the hero is jumping.
Shadows can be simulated by determining the visibility between the position to be illuminated and each of the light source positions in the environment. A position is in shadow with respect to a light source if something occludes it from the light source or the position is not visible from the light source. Computationally, this is an expensive operation because general visibility determination is an O(n) operation, where n is the number of objects in the scene, and this operation must be performed for every pixel being illuminated. Algorithmically, this is a challenging problem because the solutions for visibility must be available within the fragment shader during illumination computation, again, for every pixel being illuminated.
Because of these computational and algorithmic challenges, instead of simulating shadows according to the physical world, many video games approximate or create shadow-like effects for only selected objects based on dedicated hardware resources. In this section, you will learn about approximating shadows by selecting dedicated shadow casters and receivers based on the WebGL stencil buffer.

Hero casting shadow on the minion but not on the background
Shadow caster : This is the object that causes the shadow. In the Figure 8-24 example, the Hero object is the shadow caster.
Shadow receiver : This is the object that the shadow appears on. In the Figure 8-24 example, the Minion object is the shadow receiver.
Shadow caster geometry : This is the actual shadow, in other words, the darkness on the shadow receiver because of the occlusion of light. In the Figure 8-24 example, the dark imprint of the hero appearing on the minion behind the actual hero object is the shadow caster geometry.

The three participating elements of shadow simulation: the caster, the caster geometry, and the receiver
Given the three participating elements, the shadow simulation algorithm is rather straightforward: compute the shadow caster geometry, render the shadow receiver as usual, render the shadow caster geometry as a dark shadow caster object over the receiver, and, finally, render the shadow caster as usual. For example, to render the shadow in Figure 8-24, the dark hero shadow caster geometry is first computed based on the positions of the light source, the Hero object (shadow caster), and the Minion object (shadow receiver). After that, the Minion object (shadow receiver) is first rendered as usual, followed by rendering the shadow caster geometry as the Hero object with a dark constant color, and lastly the Hero object (shadow caster) is rendered as usual.
Take note that shadow is actually a visual effect where colors on objects appear darker because light energy is obstructed. The important point to note is that when a human observes shadows, there are no new objects or geometries involved. This is in stark contrast to the described algorithm, where shadows are simulated by the shadow caster geometry, a dark color object. This dark color object does not actually exist in the scene. It is algorithmically created to approximate the visual perception of light being occluded. This creation and rendering of extra geometry to simulate the results of human visual perception, while interesting, has its own challenges.

Shadow caster extends beyond the bounds of shadow receiver
Fortunately, the WebGL stencil buffer is designed specifically to resolve these types of situations. The WebGL stencil buffer can be configured as a 2D array of on/off switches with the same pixel resolution as the canvas that is displayed on the web browser. With this configuration, when stencil buffer checking is enabled, the pixels in the canvas that can be drawn on will be only those with corresponding stencil buffer pixels that are switched on.

The WebGL stencil buffer
With the support of the WebGL stencil buffer, shadow simulation can now be specified accordingly by identifying all shadow receivers and by grouping corresponding shadow casters with each receiver. In the Figure 8-24 example, the Hero object is grouped as the shadow caster of the minion shadow receiver. In this example, for the background object to receive a shadow from the hero, it must be explicitly identified as a shadow receiver, and the Hero object must be grouped with it as a shadow caster. Notice that without explicitly grouping the minion object as a shadow caster of the background shadow receiver, the minion will not cast a shadow on the background.
As will be detailed in the following implementation discussion, the transparencies of the shadow casters and receivers and the intensity of the casting light source can all affect the generation of shadows. It is important to recognize that this shadow simulation is actually an algorithmic creation with effects that can be used to approximate human perception. This procedure does not describe how shadows are formed in the real world, and it is entirely possible to create unrealistic dramatic effects such as casting transparent or blue-colored shadows.
The listed code renders the shadow receiver and all the shadow caster geometries without rendering the actual shadow caster objects. The B1, B2, and B3 steps switch on the stencil buffer pixels that correspond to the shadow receiver. This is similar to switching on the pixels that are associated with the white triangle in Figure 8-27, enabling the region that can be drawn. The loops of steps C and D point out that a separate geometry must be computed for each shadow casting light source. By the time step D1 draws the shadow caster geometry, with the stencil buffer containing the shadow receiver imprint and checking enabled, only pixels occupied by the shadow receiver will be enabled to be drawn on in the canvas.

Running the Shadow Shaders project
WASD keys: Move the hero character on the screen
Number keys 0, 1, 2, and 3: Select the corresponding light source
Arrow keys: Move the currently selected light; note that this has no effect on the directional light (light 1).
Arrow keys with spacebar pressed: Change the direction of the currently selected light; note that this has no effect on the point light (light 0).
Z/X key: Increases/decreases the light z position; note that this has no effect on the directional light (light 1).
C/V and B/N keys: Increase/decrease the inner and outer cone angles of the selected light; note that these only affect the two spotlights in the scene (lights 2 and 3).
K/L key: Increases/decreases the intensity of the selected light.
H key: Toggles the selected light on/off.
Number keys 5 and 6: Select the left minion and the hero
Number keys 7, 8, and 9: Select the Ka, Kd, and Ks material properties of the selected character (left minion or the hero)
E/R, T/Y, and U/I keys: Increase/decrease the red, green, and blue channels of the selected material property
O/P keys: Increase/decrease the shininess of the selected material property
Understand shadows can be approximated by algorithmically defining and rendering explicit geometries
Appreciate the basic operations of the WebGL stencil buffer
Understand the simulation of shadows with shadow caster and receiver
Implement the shadow simulation algorithm based on the WebGL stencil buffer
Two separate GLSL fragment shaders are required to support the rendering of shadow, one for drawing the shadow caster geometry onto the canvas and one for drawing the shadow receiver into the stencil buffer.
The GLSL shadow_caster_fs fragment shader supports the drawing of the shadow caster geometries. Refer to Figure 8-25; the shadow caster geometry is the piece of geometry that fakes being the shadow of the shadow caster. This geometry is typically scaled by the engine according to its distance from the shadow caster; the further from the caster, the larger this geometry.
In the src/glsl_shaders folder, create a file shadow_caster_fs.glsl. Since all light types can cast shadows, the existing light structures must be supported. Now, copy the Light struct and light type constants from light_fs (not shown). This data structure and these constants must be exactly the same so that the corresponding interfacing shader in the engine can reuse existing utilities that support LightShader. The only difference is that, since a shadow caster geometry must be defined for each light source, the uLight array size is exactly 1 in this case.
Define constants for shadow rendering. The kMaxShadowOpacity is how opaque shadows should be, and kLightStrengthCutOff is a cutoff threshold where a light with intensity less than this value will not cast shadows.
To properly support shadow casting from the three different light types, the AngularDropOff() and DistanceDropOff() functions must also be defined in exactly the same manner as those in light_fs (and illum_fs). You can copy these functions from light_fs. Note that since there is only one light source in the uLight array, you can remove the light parameter from these functions and refer directly to uLight[0] in the computation. This parameter replacement is the only modification required, and thus, the code is not shown here.
Remember that shadow is observed because of light occlusion and is independent from the color of the light source. Now, modify the LightStrength() function to compute the light strength arriving at the position to be illuminated instead of a shaded color.
Compute the color of the shadow in the main() function based on the strength of the light source. Notice that no shadows will be cast if the light intensity is less than kLightStrengthCutOff and that the actual color of the shadow is not exactly black or opaque. Instead, it is a blend of the programmer-defined uPixelColor and the sampled transparency from the texture map.
Under the src/glsl_shaders folder, create shadow_receiver_fs.glsl, and define a sampler2D object to sample the color texture map of the shadow receiver object. In addition, define the constant kSufficientlyOpaque to be the threshold where fragments with less opacity will be treated as transparent and discarded. Stencil buffer pixels that correspond to discarded fragments will remain off and thus will not be able to receive shadow geometries.
Implement the main() function to sample the texture of shadow receiver object and test for opacity threshold in determining if shadow could be received:
First, only one new engine shader type is required for supporting shadow_caster_fs. With the strategic variable naming in the shadow_receiver_fs shader, the existing SpriteShader object can be used to communicate with the shadow_receiver_fs GLSL fragment shader.
Second, no new Renderable classes are required. The Renderable classes are designed to support the drawing and manipulation of game objects with the corresponding shaders. In this way, Renderable objects are visible to the players. In the case of shadow shaders, shadow_caster_fs draws shadow caster geometries, and shadow_receiver_fs draws the shadow receiver geometry into the stencil buffer. Notice that neither of the shaders is designed to support drawing of objects that are visible to the players. For these reasons, there is no need for corresponding Renderable objects.
Under the src/engine/shaders folder, create shadow_caster_shader.js; define the ShadowCasterShader class to inherit from SpriteShader. Since each shadow caster geometry is created by one casting light source, define a single light source for the shader.
Override the activate() function to ensure the single light source is properly loaded to the shader:
Define a function to set the current camera and light source for this shader:
Modify shader_resources.js in the src/engine/core folder to import ShadowCasterShader, and define the constants and variables for the two new shadow-related shaders.
Edit the createShaders() function to define engine shaders to interface to the new GLSL fragment shaders. Notice that both of the engine shaders are based on the texture_vs GLSL vertex shader. In addition, as discussed, a new instance of the engine SpriteShader is created to interface to the shadow_receiver_fs GLSL fragment shader.
The rest of the modifications to the shader_resources.js file are routine, including defining accessors, loading and unloading the GLSL source code files, cleaning up the shaders, and exporting the accessors. The detailed listings of these are not included here because you saw similar changes on many occasions. Please refer to the source code file for the actual implementations.
Edit the gl.js file in the src/engine/core folder to enable and configure WebGL stencil buffer during engine initialization. In the init() function, add the request for the allocation and configuration of stencil and depth buffers during WebGL initialization. Notice that the depth buffer, or z buffer, is also allocated and configured. This is necessary for proper shadow caster support, where a shadow caster must be in front of a receiver, or with a larger z depth in order to cast shadow on the receiver.
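For reference, the stencil and depth buffers are requested through the standard context attributes of getContext(); a sketch of the call follows (the initWebGL() wrapper is an illustrative stand-in for the engine's init()).

// Sketch: request depth and stencil buffers when creating the WebGL2 context.
function initWebGL(htmlCanvasID) {
    let canvas = document.getElementById(htmlCanvasID);
    // The depth and stencil attributes are what the shadow system relies on.
    return canvas.getContext("webgl2", { alpha: false, depth: true, stencil: true });
}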
Continue working with gl.js; define functions to begin, end, and disable drawing with the stencil buffer. Remember to export these new stencil buffer support functions.
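A sketch of what these helpers might look like is given below; the function names follow the text's description, and mGL stands for the module's WebGL2 context. The particular stencil configuration (NEVER/REPLACE while imprinting, EQUAL while clipping) is one standard way to realize the described behavior.

// Sketch of stencil helpers in gl.js; mGL is the module-level WebGL2 context.
function beginDrawToStencil(bit, mask) {
    mGL.clear(mGL.STENCIL_BUFFER_BIT);              // start from an empty stencil
    mGL.enable(mGL.STENCIL_TEST);
    mGL.colorMask(false, false, false, false);      // do not touch the color buffer
    mGL.stencilFunc(mGL.NEVER, bit, mask);          // fail every fragment ...
    mGL.stencilOp(mGL.REPLACE, mGL.KEEP, mGL.KEEP); // ... but write 'bit' where it lands
}

function endDrawToStencil(bit, mask) {
    mGL.colorMask(true, true, true, true);          // resume normal color writes
    mGL.stencilOp(mGL.KEEP, mGL.KEEP, mGL.KEEP);
    mGL.stencilFunc(mGL.EQUAL, bit, mask);          // draw only where the stencil matches
}

function disableDrawToStencil() {
    mGL.disable(mGL.STENCIL_TEST);
}
export { beginDrawToStencil, endDrawToStencil, disableDrawToStencil };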
Edit the engine access file, index.js, in the src/engine folder to clear the stencil and depth buffers when clearing the canvas in the clearCanvas() function:
As described when defining ShadowCasterShader, Renderable classes should not be defined to pair with the shadow caster and receiver shaders as that would allow game developers the capabilities to manipulate the algorithmically created objects as regular game objects. Instead, the ShadowCaster and ShadowReceiver classes are introduced to allow the game developers to create shadows without granting access to manipulate the underlying geometries.
Instead of the familiar Renderable class hierarchy, the ShadowCaster class is defined to encapsulate the functionality of the implicitly defined shadow caster geometry. Recall from Figure 8-25 that the shadow caster geometry is derived algorithmically for each shadow-casting light source based on the positions of the shadow caster (a Renderable) and the shadow receiver (another Renderable).
Create the src/engine/shadows folder for organizing shadow-related support files and the shadow_caster.js file in the folder.
Define the ShadowCaster class and the constructor to initialize the instance variables and constants required for caster geometry computations:
Implement the draw() function to compute and draw a shadow caster geometry for each of the light sources that illuminates the Renderable object of mShadowCaster:
The casterRenderable is the Renderable object that is actually casting the shadow. The following are the four main steps of the draw() function:
Step A saves the caster Renderable state, its transform, shader, and color, and sets it up as a shadow caster geometry by setting its shader to the ShadowCasterShader (mCasterShader) and its color to the shadow color.
Step B iterates through all light sources illuminating the casterRenderable and looks for lights that are switched on and casting shadow.
Step C, for each shadow-producing light, calls the _computeShadowGeometry() function to compute an appropriately sized and positioned shadow caster geometry and renders it as a SpriteRenderable. With the replaced ShadowCasterShader and shadow color, the rendered geometry appears as the shadow of the actual casterRenderable.
Step D restores the state of the casterRenderable.
Define the _computeShadowGeometry() function to compute the shadow caster geometry based on the mShadowCaster, the mShadowReceiver, and a casting light source. Although slightly intimidating in length, the following function can be logically separated into four regions. The first region declares and initializes the variables. The second and third regions are the two cases of the if statement that handle the computation of transform parameters for directional and point/spotlights. The last region sets the computed parameters to the transform of the caster geometry, cxf.

Computing the shadow caster geometry
Region 2: Computes the parallel projection according to the directional light. The if statement within this region ensures that no shadow is computed when the light direction is parallel to the xy plane or when the light points from the shadow receiver toward the shadow caster. Notice that, for dramatic effect, the shadow caster geometry is moderately scaled.
Region 3: Computes the projection from the point or spotlight position. The two if statements within this region ensure that the shadow caster and receiver are on the same side of the light position and, to maintain numerical stability, that neither is very close to the light source.
Region 4: Uses the computed distToReceiver and scale to set the transform of the shadow caster or cxf.
The ShadowCaster object is meant for game developers to define and work with shadows. So, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
Create a new file, shadow_receiver.js, in the src/engine/shadows folder; define the ShadowReceiver class. In the constructor, initialize the constants and variables necessary for receiving shadows. As discussed, the mReceiver is a GameObject with at least a SpriteRenderable reference and is the actual receiver of the shadow. Notice that mShadowCaster is an array of ShadowCaster objects. These objects will cast shadows on the mReceiver.
Define the addShadowCaster() function to add a game object as a shadow caster for this receiver:
Define the draw() function to draw the receiver and all the shadow caster geometries:
This function implements the outlined shadow simulation algorithm and does not draw the actual shadow caster. Notice that the mReceiver object is drawn twice, in steps A and B2. Step A, the first draw() call, renders the mReceiver to the canvas as usual. Step B enables the stencil buffer for drawing, so that all subsequent drawing switches on stencil buffer pixels instead of writing colors. For this reason, the draw() call at step B2 uses the ShadowReceiverShader and switches on all pixels in the stencil buffer that correspond to the mReceiver object. With the proper stencil buffer setup, in step C, the draw() calls to the mShadowCaster objects will draw the corresponding shadow caster geometries only into the pixels that are covered by the receiver.
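A sketch of the draw() function following these steps is shown below; the kShadowStencilBit and kShadowStencilMask constants, the mReceiverShader variable, and the glSys stencil functions are assumed to be those introduced with this project:

draw(aCamera) {
    // Step A: draw the receiver as a regular GameObject
    this.mReceiver.draw(aCamera);
    // Step B: direct subsequent drawing into the stencil buffer only
    glSys.beginDrawToStencil(kShadowStencilBit, kShadowStencilMask);
    // Step B2: switch on the stencil pixels covered by the receiver
    let s = this.mReceiver.getRenderable().swapShader(this.mReceiverShader);
    this.mReceiver.draw(aCamera);
    this.mReceiver.getRenderable().swapShader(s);
    glSys.endDrawToStencil(kShadowStencilBit, kShadowStencilMask);
    // Step C: draw the caster geometries; they are clipped to the stencil-enabled pixels
    for (let i = 0; i < this.mShadowCaster.length; i++) {
        this.mShadowCaster[i].draw(aCamera);
    }
    glSys.disableDrawToStencil();
}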
Lastly, once again, the ShadowReceiver object is designed for the client game developers to create shadows. So, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
renderable.js: Both the ShadowCaster and ShadowReceiver objects require the ability to swap the shaders used to render the objects for shadow simulation purposes. This swapShader() function is best realized in Renderable, the root of the Renderable hierarchy.
light.js: The Light source now defines mCastShadow, a boolean variable, and the associated getter and setter, indicating if the light should cast shadows.
camera_main.js: The Camera WC center must now be located at some z distance away. A kCameraZ constant is defined for this purpose and used in the mCameraMatrix computation in the setViewAndCameraMatrix() function.
transform.js: The Transform class must be modified to support the cloneTo() operation and the manipulation of a z-depth value.
There are two important aspects to testing the shadow simulation. First, you must understand how to program and create shadow effects based on the implementation. Second, you must verify that Renderable objects can serve as shadow casters and receivers. The MyGame level test case is similar to the previous project with the exception of the shadow setup and drawing.
LightRenderable: mLgtMinionShadow is created with mLgtMinion as a receiver, which has a reference to a LightRenderable object.
IllumRenderable: mBgShadow and mMinionShadow are created with mBg and mIllumMinion being the receivers, where both have references to IllumRenderable objects.
Note that in order to observe a shadow on an object, an explicit corresponding ShadowReceiver must be created, followed by explicitly adding ShadowCaster objects to the receiver. For example, mLgtMinionShadow defines the mLgtMinion object to be a receiver, where only the mIllumHero and mLgtHero will cast shadows on this object. Lastly, notice that mLgtMinion and mIllumMinion are both receivers and casters of shadows.
It is important to note the draw ordering. All three shadow receivers are drawn first. Additionally, among the three receivers, the mBgShadow object corresponds to the actual background and thus is drawn first. Recall that in the definition of the ShadowReceiver class, the draw() function also draws the receiver object. For this reason, there is no need to call the draw() function of the mLgtMinion, mIllumMinion, and mBg objects.
The rest of the MyGame level is largely similar to previous projects and is not listed here. Please refer to the source code for the details.
You can now run the project and observe the shadows. Notice the effect of the stencil buffer where the shadow from the mIllumHero object is cast onto the minion and yet not on the background. Press the WASD keys to move both of the Hero objects. Observe how the shadows offer depth and distance cues as they move with the two hero objects. The mLgtHero on the right is illuminated by all four lights and thus casts many shadows. Try selecting and manipulating each of the lights, such as moving or changing the direction or switching the light on/off to observe the effects on the shadows. You can even try changing the color of the shadow (in shadow_caster.js) to something dramatic, such as to bright blue [0, 0, 5, 1], and observe shadows that could never exist in the real world.
This chapter guided you to develop a variation of the simple yet complete Phong illumination model for the game engine. The examples were organized to follow the three terms of the Phong illumination model: ambient, diffuse, and specular. The coverage of light sources was strategically intermixed to ensure proper illumination can be observed for every topic discussed.
The first example in this chapter on ambient illumination introduced the idea of interactively controlling and fine-tuning the color of the scene. The following two examples on light sources presented the notion that illumination, an algorithmic approach to color manipulation, can be localized and developed in the engine infrastructure for supporting the eventual Phong illumination model. The example on diffuse reflection and normal mapping was critical because it enabled illumination computation based on simple physical models and simulation of an environment in 3D.
The Phong illumination model and the need for a per-object material property were presented in the specular reflection example. The halfway vector version of the Phong illumination model was implemented to avoid computing the light source reflection vector for each pixel. The light source type project demonstrated how subtle but important illumination variations can be accomplished by simulating different light sources in the real world. Finally, the last example explained that accurate shadow computation is nontrivial and introduced an approximation algorithm. The resulting shadow simulation, though inaccurate from a real-world perspective and with limitations, can be aesthetically appealing and is able to convey many of the same vital visual cues.
The first four chapters of this book introduced the basic foundations and components of a game engine. Chapters 5, 6, and 7 extended the core engine functionality to support drawing, game object behaviors, and camera controls, respectively. This chapter complements Chapter 5 by bringing the engine’s capability in rendering higher-fidelity scenes to a new level. Over the next three chapters, this complementary pattern will be repeated. Chapter 9 will introduce physical behavior simulation, Chapter 10 will discuss particle effects, and Chapter 11 will complete the engine development with more advanced support for the camera including tiling and parallax.
The work you did in the “Game Design Considerations” section of Chapter 7 to create a basic well-formed game mechanic will ultimately need to be paired with the other elements of game design to create something that feels satisfying for players. In addition to the basic game loop, you’ll need to think about your game’s systems, setting, and meta-game and how they’ll help determine the kinds of levels you design. As you begin to define the setting, you’ll begin exploring ideas for visual and audio design.

Playdead and Double Eleven’s Limbo, a 2D side-scrolling game making clever use of background lighting and chiaroscuro techniques to convey tension and horror. Lighting can be both programmatically generated and designed into the color palettes of the images themselves by the visual artist and is frequently a combination of the two (image copyright Playdead media; please see www.playdead.com/limbo for more information)
Lighting is also often used as a core element of the game loop in addition to setting the mood; a game where the player is perhaps navigating in the dark with a virtual flashlight is an obvious example, but lights can also indirectly support game mechanics by providing important information about the game environment. Red pulsing lights often signal dangerous areas, certain kinds of green environment lights might signal either safe areas or areas with deadly gas, flashing lights on a map can help direct players to important locations, and the like.
In the Simple Global Ambient project, you saw the impact that colored environment lighting has on the game setting. In that project, the hero character moves in front of a background of metallic panels, tubes, and machinery, perhaps the exterior of a space ship. The environment light is red and can be pulsed—notice the effect on mood when the intensity is set to a comparatively low 1.5 vs. when it’s set to something like a supersaturated 3.5, and imagine how the pulsing between the two values might convey a story or increase tension. In the Simple Light Shader project, a light was attached to the hero character (a point light in this case), and you can imagine that the hero must navigate the environment to collect objects to complete the level that are visible only when illuminated by the light (or perhaps activate objects that switch on only when illuminated).
The Multiple Lights project illustrated how various light sources and colors can add considerable visual interest to an environment (sometimes referred to as localized environment lighting). Varying the types, intensities, and color values of lights often makes environments appear more alive and engaging because the light you encounter in the real world typically originates from many different sources. The other projects in this chapter all served to similarly enhance the sense of presence in the game level; as you work with diffuse shaders, normal maps, specularity, different light types, and shadows, consider how you might integrate some or all of these techniques into a level’s visual design to make game objects and environments feel more vibrant and interesting.

The simple game mechanic project, without lighting. Recall that the player controls the circle labeled with a P and must activate each of the three sections of the lock in proper sequence to disengage the barrier and reach the reward

The addition of a movable “flashlight” that shines a special beam

The player is able to directly activate the objects as in the first iteration of the mechanic, but the corresponding section of the lock now remains inactive

The player moves the flashlight under one of the shapes to reveal a hidden clue (#1)
From the gameplay point of view, any object in a game environment can be used as a tool; your job as a designer is to ensure the tool follows consistent, logical rules the player can first understand and then predictively apply to achieve their goal. In this case, it’s reasonable to assume that players will explore the game environment looking for tools or clues; if the flashlight is an active object, players will attempt to learn how it functions in the context of the level.

With the flashlight revealing the hidden symbol, the player can now activate the object (#2), and a progress bar (#3) on the lock indicates the player is on the right track to complete a sequence

The player activates the second of the three top sections in the correct order (#4), and the progress bar confirms the correct sequence by lighting another section (#5). In this implementation, the player would not be able to activate the object with two dots before activating the object with one dot (the rules require activating like objects in order from one to three dots)

The third of the three top sections is revealed with the flashlight beam and activated by the player (#6), thereby activating the top section of the lock (#7). Once the middle and lower sections of the lock have been similarly activated, the barrier is disabled and players can claim the reward
Note that you’ve changed the feedback players receive slightly from the first iteration of the game loop: you originally used the progress bar to signal overall progress toward unlocking the barrier, but you’re now using it to signal overall progress toward unlocking each section of the lock. The flashlight introduces an extra step into the causal chain leading to the level solution, and you’ve now taken a one-step elemental game loop and made something considerably more complex and challenging while maintaining logical consistency and following a set of rules that players can first learn and then predictively apply. In fact, the level is beginning to typify the kind of puzzle found in many adventure games: the game screen was a complex environment filled with a number of movable objects; finding the flashlight and learning that its beam reveals hidden information about objects in the game world would become part of the game setting itself.
It’s important to be aware that as gameplay complexity increases, so does the complexity of the interaction model, along with the importance of providing players with proper audiovisual feedback to help them make sense of their actions (recall from Chapter 1 that the interaction model is the combination of keys, buttons, controller sticks, touch gestures, and the like that the player uses to accomplish game tasks). In the current example, the player is now capable of controlling not just the hero character but also the flashlight. Creating intuitive interaction models is a critical component of game design and often much more complex than designers realize; as one example, consider the difficulty in porting many PC games designed for a mouse and keyboard to a game console using buttons and thumb sticks or a mobile device using only touch. Development teams often pour thousands of hours of research and testing into control schemes, yet they still frequently miss the mark; interaction design is tricky to get right even for fairly basic actions, so when possible, you should make use of established conventions. Keep the two golden rules in mind when you design interactions: first, use known and tested patterns when possible unless you have a compelling reason to ask players to learn something new; second, keep the number of unique actions players must remember to a minimum. Decades of user testing have clearly shown that players don’t enjoy relearning basic key combinations for tasks that are similar across titles (which is why so many games have standardized on WASD for movement, e.g.), and similar data is available showing how easily players can become overwhelmed when you ask them to remember more than a few simple unique button combinations. There are exceptions, of course; many classic arcade fighting games, for example, use dozens of complex combinations, but those genres are targeted to a specific kind of player who considers mastering button combinations to be a fundamental component of what makes an experience fun. As a general rule, most players prefer to keep interaction complexity as streamlined and simple as possible if it’s not an intentional component of play.
There are a number of ways to deal with controlling multiple objects. The most common pattern for our flashlight would likely be for the player to “equip” it; perhaps if the player moves over the flashlight and clicks the left mouse button, it becomes a new ability for the player that can be activated by pressing one of the keyboard keys or by clicking the right mouse button. Alternately, perhaps the hero character can move around the game screen freely with the WASD keys, while other active objects like the flashlight are first selected with a left-mouse-click and then moved by holding the left mouse button and dragging them into position. There are similarly a variety of ways to provide the player with contextual feedback that will help teach the puzzle logic and rules (in this case, we’re using the ring around the lock as a progress bar to confirm players are following the correct sequence). As you experiment with various interaction and feedback models, it’s always a good idea to review how other games have handled similar tasks, paying particular attention to things you believe to work especially well.
In the next chapter, you’ll investigate how your game loop can evolve once again by applying simple physics to objects in the game world.
Recognize the significant computational complexity and cost of simulating real-world physical interactions
Understand that typical game engine physics components approximate physical interaction based on simple geometries such as circles and rectangles
Implement accurate collisions of circle and rectangular geometric shapes
Approximate Newtonian motion formulation with Symplectic Euler Integration
Resolve interpenetrating collisions based on a numerically stable relaxation method
Compute and implement responses to collisions that resemble the behavior of rigid bodies in the real world
In a game engine, the functionality of simulating energy transfer is often referred to as the physics, physics system, physics component, or physics engine. Game engine physics components play an important role in many types of games. The range of topics within physics for games is broad and includes but is not limited to areas such as rigid body, soft body, fluid dynamics, and vehicle physics. Believable physical behaviors and interactions of game objects, for example, the bouncing of a ball, the wiggling of a jelly block, the ripples on a lake, or the skidding of a car, have become key elements of many modern PC and console games as well as, more recently, browser and smartphone games. The proper simulation and realistic renditions of these behaviors are becoming common expectations.
Unfortunately, accurate simulations of the real world can involve details that are overwhelming and require in-depth disciplinary knowledge where the underlying mathematical models can be complicated and the associated computational costs prohibitive. For example, the skid of a car depends on its speed, the tire properties, etc.; the ripples on a lake depend on its cause, the size of the lake, etc.; the wiggle of a jelly block depends on its density, the initial deformation, etc. Even in the very simple case, the bounce of a ball depends on its material, the state of inflation, and, theoretically, even on the particle concentrations of the surrounding air. Modern game engine physics components address these complexities by restricting the types of physical interaction and simplifying the requirements for the simulation computation.
Physics engines typically restrict and simulate isolated types of physical interaction and do not support general combinations of interaction types. For example, the proper simulation of a ball bouncing (rigid body) often will not support the ball colliding and jiggling a jelly block (soft body) or accurately simulate the ripple effects caused by the ball interaction with fluid (fluid dynamics). That is, typically a rigid body physics engine does not support interactions with soft body objects, fluids, or vehicles. In the same manner, a soft body physics engine usually does not allow interactions with rigid body or other types of physical objects.
Assumes objects are continuous geometries with uniformly distributed mass where the center of mass is located at the center of the geometric shape
Approximates object material properties with straightforward bounciness and friction
Dictates that objects do not change shape during interactions
Limits the simulation to a selective subset of objects in the game scene
Based on this set of assumptions, a rigid body physics simulation, or a rigid body simulation, is capable of capturing and reproducing many familiar real-world physical interactions such as objects bouncing, falling, and colliding, for example, a fully inflated bouncing ball or a simple Lego block bouncing off of a desk and landing on a hardwood floor. These types of rigid body physical interactions can be reliably simulated in real time as long as deformation does not occur during collisions.
Objects consisting of multiple geometric parts, for example, an arrow
Objects with nontrivial material properties, for example, magnetism
Objects with nonuniform mass distribution, for example, a baseball bat
Objects that change shapes during collision, for example, rubber balls
Of all real-world physical object interaction types, rigid body interaction is the best understood, most straightforward to approximate solutions for, and least challenging to implement. This chapter focuses only on rigid body simulation.
Rigid shape and bounds: Define the RigidShape class to support an optimized simulation by performing computation on separate and simple geometries instead of the potentially complex Renderable objects. This topic will be covered by the first project, the Rigid Shapes and Bounds project.
The collisions between circle shapes: the Circle Collisions and CollisionInfo project
The collisions between rectangle shapes: the Rectangle Collisions project
The collisions between rectangle and circle shapes: the Rectangle and Circle Collisions project
Movement: Approximates integrals that describe motions in a world that is updated at fixed intervals. The topic on motion will be covered by the Rigid Shape Movements project.
Interpenetration of colliding objects: Addresses the interpenetration between colliding rigid shapes with a numerically stable solution to incrementally correct the situation. This topic is presented in the Collision Position Correction project.
Collision resolution: Models the responses to collision with the Impulse Method. The Impulse Method will be covered in two projects, first the simpler case without rotations in the Collision Resolution project and finally with considerations for rotation in the Collision Angular Resolution project.
The computation involved in simulating the interactions between arbitrary rigid shapes can be algorithmically complicated and computationally costly. For these reasons, rigid body simulations are typically based on a limited set of simple geometric shapes, for example, rigid circles and rectangles. In typical game engines, these simple rigid shapes can be attached to geometrically complex game objects for an approximated simulation of the physical interactions between those game objects, for example, attaching rigid circles on spaceships and performing rigid body physics simulations on the rigid circles to approximate the physical interactions between the spaceships.
From real-world experience, you know that simple rigid shapes can interact with one another only when they come into physical contact. Algorithmically, this observation is translated into detecting collisions between rigid shapes. For a proper simulation, every shape must be tested for collision with every other shape. In this way, the collision testing is an O(n²) operation, where n is the number of shapes that participate in the simulation. As an optimization for this costly operation, rigid shapes are often bounded by a simple geometry, for example, a circle, where the potentially expensive collision computation is only invoked when the bounds of the shapes overlap.

Running the Rigid Shapes and Bounds project
The controls of the project are as follows:
G key: Randomly create a new rigid circle or rectangle
T key: Toggles textures on all objects
R key: Toggles the drawing of RigidShape
B key: Toggles the drawing of the bound on each RigidShape
Left-/right-arrow key: Sequences through and selects an object
WASD keys: Move the selected object.
Z/X key: Rotates the selected object.
Y/U key: Increases/decreases RigidShape size of the selected object; this does not change the size of the corresponding Renderable object.
To define the RigidShape classes and integrate with GameObject
To demonstrate that a RigidShape represents a corresponding Renderable geometry on the same GameObject
To lay the foundation for building a rigid shape physics simulator
To define an initial scene for testing the physics component
minion_sprite.png is for the minion and hero objects.
platform.png and wall.png are the horizontal and vertical border objects in the test scene.
target.png is displayed over the currently selected object.
You will begin building this project by setting up implementation support. First, organize the engine source code structure with new folders in anticipation of increases in complexity. Second, define debugging utilities for visualization and verification of correctness. Third, extend library support for rotating rigid shapes.
In anticipation of the new components, create the components folder in the src/engine folder and move the input.js component source code file into this folder. This folder will contain the source code for physics and other components to be introduced in later chapters. You will have to edit camera_input.js, loop.js, and index.js to reflect the new location of input.js.
In the src/engine/core folder, create debug_draw.js, import from LineRenderable, and define supporting constants and variables for drawing simple shapes as line segments:
Define the init() function to initialize the objects for drawing. The mUnitCirclePos array holds positions on the circumference of a unit circle, and the mLine variable is the line object that will be used for drawing.
Define the drawLine(), drawCrossMarker(), drawRectangle(), and drawCircle() functions to draw the corresponding shape based on the defined mLine object. The source code for these functions is not relevant to the physics simulation and is not shown. Please refer to the project source code folder for details.
Remember to export the defined functions:
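The following is a minimal sketch of the portions of debug_draw.js discussed above, showing only drawLine() and drawCircle(); the LineRenderable interface used here (setColor(), setFirstVertex(), setSecondVertex(), draw()) and the import path are assumptions of this sketch:

import LineRenderable from "../renderables/line_renderable.js";

let kDrawNumCircleSides = 16;   // number of segments used to approximate a circle
let mUnitCirclePos = [];        // positions on the circumference of a unit circle
let mLine = null;

function init() {
    mLine = new LineRenderable(0, 0, 0, 0);
    let deltaTheta = (Math.PI * 2.0) / kDrawNumCircleSides;
    for (let i = 0; i <= kDrawNumCircleSides; i++) {
        let theta = i * deltaTheta;
        mUnitCirclePos.push([Math.cos(theta), Math.sin(theta)]);
    }
}

function drawLine(camera, p1, p2, color) {
    mLine.setColor(color);
    mLine.setFirstVertex(p1[0], p1[1]);
    mLine.setSecondVertex(p2[0], p2[1]);
    mLine.draw(camera);
}

function drawCircle(camera, center, radius, color) {
    // connect consecutive unit-circle positions, scaled by radius and offset by center
    let prev = [center[0] + radius * mUnitCirclePos[0][0],
                center[1] + radius * mUnitCirclePos[0][1]];
    for (let i = 1; i < mUnitCirclePos.length; i++) {
        let cur = [center[0] + radius * mUnitCirclePos[i][0],
                   center[1] + radius * mUnitCirclePos[i][1]];
        drawLine(camera, prev, cur, color);
        prev = cur;
    }
}

export { init, drawLine, drawCircle };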
A valid alternative for initializing debug drawing is in the createShaders() function of the shader_resources module after all the shaders are created. However, importing from debug_draw.js in shader_resources.js would create a circular import: debug_draw imports from LineRenderable that attempts to import from shader_resources.
You are now ready to define RigidShape to be the base class for the rectangle and circle rigid shapes. This base class will encapsulate all the functionality that is common to the two shapes.
Start by creating a new subfolder, rigid_shapes, in src/engine. In this folder, create rigid_shape.js, import from debug_draw, and define drawing colors and the RigidShape class.
Define the constructor to include instance variables shared by all subclasses. The xf parameter is typically a reference to the Transform of the Renderable represented by this RigidShape. The mType variable will be initialized by subclasses to differentiate between shape types, for example, circle vs. rectangle. The mBoundRadius is the radius of the circular bound for collision optimization, and mDrawBounds indicates if the circular bound should be drawn.
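A sketch of the constructor with these variables, assuming the folder layout used in this chapter for the debugDraw import:

import * as debugDraw from "../core/debug_draw.js";

class RigidShape {
    constructor(xf) {
        this.mXform = xf;          // typically the Transform of the associated Renderable
        this.mType = "";           // set by subclasses: circle or rectangle
        this.mBoundRadius = 0;     // radius of the circular bound for collision optimization
        this.mDrawBounds = false;  // whether the circular bound should be drawn
    }
    // getters/setters, boundTest(), update(), and draw() follow
}
export default RigidShape;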
Define appropriate getter and setter functions for the instance variables:
Define the boundTest() function to determine if the circular bounds of two shapes have overlapped. As illustrated in Figure 9-2, a collision between two circles can be determined by comparing the sum of the two radii, rSum, with the distance, dist, between the centers of the circles. Once again, this is a relatively efficient operation designed to precede the costlier accurate collision computation between two shapes.

Circle collision detection: (a) No collision. (b) Collision detected
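A sketch of boundTest() implementing this comparison; getCenter() is an assumed accessor that returns the position of mXform, and vec2 is the gl-matrix vector type used throughout the engine:

boundTest(otherShape) {
    let vFrom1to2 = [0, 0];
    vec2.subtract(vFrom1to2, otherShape.getCenter(), this.getCenter());
    let rSum = this.mBoundRadius + otherShape.mBoundRadius;
    let dist = vec2.length(vFrom1to2);
    if (dist > rSum) {
        return false;   // bounds do not overlap: no need for the detailed collision test
    }
    return true;        // bounds overlap: proceed with narrow phase collision detection
}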
Define the update() and draw() functions. For now, update() is empty. When enabled, the draw() function draws the circular bound and an “X” marker at the center of the bound.
Renderable objects encode geometric information of a shape based on a Transform operator being applied on the unit square. For example, a rotated rectangle is encoded as a scaled and rotated unit square. As you have experienced, this representation, where vertices of the unit square remain constant together with the matrix transformation support from the GLSL vertex shader, is effective and efficient for supporting the drawing of transformed shapes.
In contrast, RigidShape objects are designed for interactions where the underlying representation must support extensive mathematical computations. In this case, it is more efficient to explicitly represent and update the vertices of the underlying geometric shape. For example, instead of a scaled and rotated square, the vertex positions of the rectangle can be explicitly computed and stored. In this way, the actual vertex positions are always readily available for mathematical computations. For this reason, RigidRectangle will define and maintain the vertices of a rectangle explicitly.
In the src/engine/rigid_shapes folder, create rigid_rectangle.js to import from rigid_rectangle_main.js and to export the RigidRectangle class. This is the RigidRectangle class access file where users of this class should import from.
Now, create rigid_rectangle_main.js in the src/engine/rigid_shapes folder to import RigidShape and debugDraw, and define RigidRectangle to be a subclass of RigidShape.
Define the constructor to initialize the rectangle dimension, mWidth by mHeight, and mType. It is important to recognize that the vertex positions of the rigid rectangle are controlled by the Transform referenced by mXform. In contrast, the width and height dimensions are defined independently by mWidth and mHeight. This dimension separation allows the designer to determine how tightly a RigidRectangle should wrap the corresponding Renderable. Notice that the actual vertices and face normals of the shape are computed in the setVertices() and computeFaceNormals() functions. The definition of face normals will be detailed in the following steps:
Define the setVertices() function to set the vertex positions based on the center position defined by mXform and the mWidth and mHeight dimensions. As illustrated in Figure 9-3, the vertices of the rectangle are defined with index 0 being the top-left, 1 the top-right, 2 the bottom-right, and 3 the bottom-left vertex position.

The vertices and face normals of a rectangle
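A sketch of setVertices() following the indexing of Figure 9-3, assuming a y-up world coordinate system and the getXPos()/getYPos() Transform accessors:

setVertices() {
    this.mVertex = [];
    let cx = this.mXform.getXPos();
    let cy = this.mXform.getYPos();
    let hw = this.mWidth / 2;
    let hh = this.mHeight / 2;
    // 0: top-left, 1: top-right, 2: bottom-right, 3: bottom-left
    this.mVertex[0] = vec2.fromValues(cx - hw, cy + hh);
    this.mVertex[1] = vec2.fromValues(cx + hw, cy + hh);
    this.mVertex[2] = vec2.fromValues(cx + hw, cy - hh);
    this.mVertex[3] = vec2.fromValues(cx - hw, cy - hh);
}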
Define the computeFaceNormals() function. Figure 9-3 shows that the face normals of a rectangle are vectors that are perpendicular to the edges and point away from the center of the rectangle. In addition, notice the relationship between the indices of the face normals and the corresponding vertices. Face normal index 0 points in the same direction as the vector from vertex 2 to 1. This direction is perpendicular to the edge formed by vertices 0 and 1. In this way, the face normal of index 0 is perpendicular to the first edge, and so on. Notice that the face normal vectors are normalized to a length of 1. The face normal vectors will be used later for determining collisions.
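A sketch of computeFaceNormals() following this index relationship, where face normal i is the normalized vector from vertex (i+2)%4 to vertex (i+1)%4:

computeFaceNormals() {
    this.mFaceNormal = [];
    for (let i = 0; i < 4; i++) {
        let v = (i + 1) % 4;
        let nv = (i + 2) % 4;
        this.mFaceNormal[i] = vec2.fromValues(0, 0);
        // e.g., face normal 0 is the direction from vertex 2 to vertex 1
        vec2.subtract(this.mFaceNormal[i], this.mVertex[v], this.mVertex[nv]);
        vec2.normalize(this.mFaceNormal[i], this.mFaceNormal[i]);
    }
}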
Define the dimension and position manipulation functions. In all cases, the vertices and face normals must be recomputed (rotateVertices() calls computeFaceNormals()), and it is critical to ensure that the vertex positions and the state of mXform remain consistent.
Now, define the draw() function to draw the edges of the rectangle as line segments, and the update() function to update the vertices of the rectangle. The vertices and face normals must be recomputed because, as you may recall from the RigidShape base class constructor discussion, mXform is a reference to the Transform of a Renderable object; the game may have manipulated the position or the rotation of the Transform. To ensure the RigidRectangle consistently reflects potential Transform changes, the vertices and face normals must be recomputed at each update.
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
In the src/engine/rigid_shapes folder, create rigid_circle.js to import from rigid_circle_main.js and to export the RigidCircle class. This is the RigidCircle class access file where users of this class should import from.
Now, create rigid_circle_main.js in the src/engine/rigid_shapes folder to import RigidShape and debugDraw, and define RigidCircle to be a subclass of RigidShape:
Define the constructor to initialize the circle radius, mRadius, and mType. Similar to the dimension of a RigidRectangle, the radius of a RigidCircle is defined by mRadius and is independent from the size defined by the mXform. Note that the radius of the RigidCircle, mRadius, and that of the circular bound, mBoundRadius, are defined separately; this leaves room for future alternatives where the two may differ.
Define the getter and setter of the dimension:
Define the function to draw the circle as a collection of line segments along the circumference. To properly visualize the rotation of the circle, a bar is drawn from the center to the rotated vertical circumference position.
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
Edit game_object.js to remove the support for speed, mSpeed, as well as the corresponding setter and getter functions and the rotateObjPointTo() function. Through the changes in the rest of this chapter, the game object behaviors will be supported by the rigid body physics simulation. Make sure to leave the other variables and functions alone; they are defined to support appearance and to detect texture overlaps, pixelTouches().
In the constructor, define new instance variables to reference to a RigidShape and to provide drawing options:
Define getter and setter for mRigidBody and functions for toggling drawing options:
Refine the draw() and update() functions to respect the drawing options and to delegate GameObject behavior update to the RigidShape class:
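The following sketch illustrates the delegation, assuming the mRigidBody, mDrawRenderable, and mDrawRigidShape variables introduced above and the existing isVisible() and mRenderComponent members of GameObject:

draw(aCamera) {
    if (this.isVisible()) {
        if (this.mDrawRenderable)
            this.mRenderComponent.draw(aCamera);
        if ((this.mRigidBody !== null) && this.mDrawRigidShape)
            this.mRigidBody.draw(aCamera);
    }
}
update() {
    // delegate behavior (movement, rotation, and so on) to the rigid body
    if (this.mRigidBody !== null)
        this.mRigidBody.update();
}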
Edit the game_object_set.js file to modify the GameObjectSet class to support the toggling of different drawing options for the entire set:
RigidShape is designed to approximate a Renderable object and to participate on its behalf in the rigid shape simulation. For this reason, it is essential to create and test different combinations of RigidShape types, which include circles and rectangles, with all combinations of Renderable types, more specifically, TextureRenderable, SpriteRenderable, and SpriteAnimateRenderable. The proper functioning of these combinations can demonstrate the correctness of the RigidShape implementation and allow you to visually examine the suitability as well as the limitations of approximating Renderable objects with simple circles and rectangles.
The overall structure of the test program, MyGame, is largely similar to previous projects, and the details of the source code, which can be distracting, are not listed here. Instead, the following describes the tested objects and how these objects fulfill the specified requirements. As always, the source code files are located in the src/my_game folder, and the supporting object classes are located in the src/my_game/objects folder.
The testing of imminent collisions requires the manipulation of the positions and rotations of each object. The WASDObj class, implemented in wasd_obj.js, defines the WASD keys movement and Z/X keys rotation control of a GameObject. The Hero class, a subclass of WASDObj implemented in hero.js, is a GameObject with a SpriteRenderable and a RigidRectangle. The Minion class, also a subclass of WASDObj in minion.js, is a GameObject with SpriteAnimateRenderable and is wrapped by either a RigidCircle or a RigidRectangle. Based on these supporting classes, the created Hero and Minion objects encompass different combinations of Renderable and RigidShape types allowing you to visually inspect the accuracy of representing complex textures with different RigidShapes.
The vertical and horizontal bounds in the game scene are GameObject instances with TextureRenderable and RigidRectangle created by the wallAt() and platformAt() functions defined in my_game_bounds.js file. The constructor, init(), draw(), update(), etc., of MyGame are defined in the my_game_main.js file with largely identical functionality as in previous testing projects.
You can now run the project and observe the created RigidShape objects. Notice that by default, only RigidShape objects are drawn. You can verify this by typing the T key to toggle on the drawing of the Renderable objects. Notice how the textures of the Renderable objects are bounded by the corresponding RigidShape instances. You can type the R key to toggle off the drawing of the RigidShape objects. Normally, this is what the players of a game will observe, with only the Renderable and without the RigidShape objects being drawn. Since the focus of this chapter is on the rigid shapes and the simulation of their interactions, the default is to show the RigidShape and not the Renderable objects.
Now type the T and R keys again to toggle back the drawing of RigidShape objects. The B key shows the circular bounds of the shapes. The more accurate and costly collision computations to be discussed in the next few sections will only be incurred between objects when these bounds overlap.
You can try using the WASD keys to move the currently selected object around, by default the Hero in the center. The Z/X and Y/U keys allow you to rotate and change the dimension of the Hero. Toggle on the texture, with the T key, to verify that rotation and movement are applied to both the Renderable and its corresponding RigidShape and that the Y/U keys only change the dimension of the RigidShape. This gives the designer control over how tightly to wrap the Renderable with the corresponding RigidShape. You can type the left-/right-arrow keys to select and work with any of the objects in the scene. Finally, the G key creates new Minion objects with either a RigidCircle or a RigidRectangle.
Lastly, notice that you can move any selected object to any location, including overlapping with another RigidShape object. In the real world, the overlapping, or interpenetration, of rigid shape objects can never occur, while in the simulated digital world, this is an issue that must be addressed. With the functionality of the RigidShape classes verified, you can now examine how to compute the collision between these shapes.
In order to simulate the interactions of rigid shapes, you must first detect which of the shapes are in physical contact with one another or which are the shapes that have collided. In general, there are two important issues to be addressed when working with rigid shape collisions: computation cost and the situations when the shapes overlap, or interpenetrate. In the following, the broad and narrow phase methods are explained as an approach to alleviate the computational cost, and collision information is introduced to record interpenetration conditions such that they can be resolved. This and the next two subsections detail the collision detection algorithms and implementations of circle-circle, rectangle-rectangle, and circle-rectangle collisions.
As discussed when introducing the circular bounds for RigidShape objects, in general, every object must be tested for collision with every other object in the game scene. For example, if you want to detect the collisions between five objects, A, B, C, D, and E, you must perform four detection computations for the first object, A, against objects B, C, D, and E. With A's tests complete, you must next perform three collision detections for the second object, B, against objects C, D, and E; followed by two collisions for the third object, C; and then, finally, one for the fourth object, D. The fifth object, E, has already been tested against the other four. This testing process, while thorough, has its drawbacks. Without dedicated optimizations, you must perform O(n²) operations to detect the collisions between n objects.
In rigid shape simulation, a detailed collision detection algorithm involving intensive computations is required. This is because accurate results must be computed to support effective interpenetration resolution and realistic collision response simulation. A broad phase method optimizes this computation by exploiting the proximity of objects to rule out those that are physically far apart from each other and thus, clearly, cannot possibly collide. This allows the detailed and computationally intensive algorithm, or the narrow phase method, to be deployed for objects that are physically close to each other.
A popular broad phase method uses axis-aligned bounding boxes (AABBs) or bounding circles to approximate the proximity of objects. As detailed in Chapter 6, AABBs are excellent for approximating objects that are aligned with the major axes, but have limitations when objects are rotated. As you have observed from running the previous project with the B key typed, a bounding circle is a circle that centers around and completely bounds an object. By performing the straightforward bounding box/circle intersection computations, it becomes possible to focus only on objects with overlapping bounds as the candidates for narrow phase collision detection operations.
There are other broad phase methods that organize objects either with a spatial structure such as a uniform grid or a quad-tree or into coherent groups such as hierarchies of bounding colliders. Results from broad phase methods are typically fed into mid phase and finally narrow phase collision detection methods. Each phase narrows down candidates for the eventual collision computation, and each subsequent phase is incrementally more accurate and more expensive.
In addition to reporting if objects have collided, a collision detection algorithm should also compute and return information that can be used to resolve and respond to the collision. As you have observed when testing the previous project, it is possible for RigidShape objects to overlap in space, or interpenetrate. Since real-world rigid shape objects cannot interpenetrate, recording the details and resolving RigidShape overlaps is of key importance.

Collision information

Running the CollisionInfo and Circle Collisions project
G key: Randomly create a new rigid circle or rectangle
C key: Toggles the drawing of all CollisionInfo
T key: Toggles textures on all objects
R key: Toggles the drawing of RigidShape
B key: Toggles the drawing of the bound on each RigidShape
Left-/right-arrow key: Sequences through and selects an object.
WASD keys: Move the selected object.
Z/X key: Rotates the selected object.
Y/U key: Increases/decreases RigidShape size of the selected object; this does not change the size of the corresponding Renderable object.
To understand the strengths and weaknesses of broad phase collision detection
To build the infrastructure for computing inter-circle collisions
To define and work with collision conditions via the CollisionInfo class
To understand and implement the circle collision detection algorithm
In the src/engine/rigid_shapes folder, create the collision_info.js file, import from debugDraw, declare the drawing color to be magenta, and define the CollisionInfo class:
Define the constructor with instance variables that correspond to those illustrated in Figure 9-4 for collision depth, normal, and start and end positions:
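A sketch of the constructor with these variables; vec2 is from the gl-matrix library:

class CollisionInfo {
    constructor() {
        this.mDepth = 0;                       // how far the two shapes interpenetrate
        this.mNormal = vec2.fromValues(0, 0);  // direction along which the shapes should be separated
        this.mStart = vec2.fromValues(0, 0);   // start position of the collision span
        this.mEnd = vec2.fromValues(0, 0);     // end position: mStart + mNormal * mDepth
    }
    // getters, setters, normal flipping, and draw() follow
}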
Define the getter and setter for the variables:
Create a function to flip the direction of the collision normal. This function will be used to ensure that the normal is always pointing toward the object that is being tested for collision.
Define a draw() function to visualize the start, end, and collision normal in magenta:
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
RigidShape classes must be updated to support collisions. Since the abstract base shape, RigidShape, does not contain actual geometric information, the actual collision functions must be implemented in the rectangle and circle classes.
Modify rigid_rectangle.js to import from the new source code file:
In the src/engine/rigid_shapes folder, create the rigid_rectangle_collision.js file, import CollisionInfo and RigidRectangle, and define the collisionTest() function to always return a collision failed status. Collisions with RigidRectangle shapes will always fail until the next subsection.
Remember to export the extended RigidRectangle class for the clients:
In the src/engine/rigid_shapes folder, create the rigid_circle_collision.js file, import RigidCircle, and define the collisionTest() function to always return a collision failed status if the otherShape is not a RigidCircle; otherwise, call and return the status of collideCircCirc(). For now, a RigidCircle does not know how to collide with a RigidRectangle.
Define the collideCircCirc() function to detect the collision between two circles and to compute the corresponding collision information when a collision is detected. There are three cases to the collision detection: no collision (step 1), collision with centers of the two circles located at different positions (step 2), and collision with the two centers located at exactly the same position (step 3). The following code shows step 1, the detection of no collision; notice that this code also corresponds to the cases as illustrated in Figure 9-2.
When a collision is detected, if the two circle centers are located at different positions (step 2), the collision depth and normal can be computed as illustrated in Figure 9-6. Since c2 is the reference to the other RigidShape, the collision normal is a vector pointing from c1 toward c2, or in the same direction as vFrom1to2. The collision depth is the difference between rSum and dist, and the start position is simply the c2 radius distance away from the center of c2 along the negative vFrom1to2 direction.

Details of a circle-circle collision
The last case for two colliding circles is when both circle centers are located at exactly the same position (step 3). In this case, the collision normal is defined to be the negative y direction, and the collision depth is simply the larger of the two radii.
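The following sketch covers the three steps; getCenter(), getRadius(), and the CollisionInfo setInfo(depth, normal, start) setter are assumed interfaces of this sketch:

collideCircCirc(c1, c2, collisionInfo) {
    let vFrom1to2 = [0, 0];
    vec2.subtract(vFrom1to2, c2.getCenter(), c1.getCenter());
    let rSum = c1.getRadius() + c2.getRadius();
    let dist = vec2.length(vFrom1to2);

    // Step 1: centers are farther apart than the sum of the radii: no collision
    if (dist > rSum)
        return false;

    if (dist !== 0) {
        // Step 2: centers at different positions
        let normalFrom2to1 = [0, 0];
        vec2.scale(normalFrom2to1, vFrom1to2, -1);
        vec2.normalize(normalFrom2to1, normalFrom2to1);
        let start = [0, 0];
        vec2.scale(start, normalFrom2to1, c2.getRadius());
        vec2.add(start, c2.getCenter(), start);
        let normal = [0, 0];
        vec2.normalize(normal, vFrom1to2);      // normal points from c1 toward c2
        collisionInfo.setInfo(rSum - dist, normal, start);
    } else {
        // Step 3: the two centers coincide: use the negative y direction as the normal
        let depth = Math.max(c1.getRadius(), c2.getRadius());
        let bigC = (c1.getRadius() > c2.getRadius()) ? c1 : c2;
        let start = vec2.fromValues(bigC.getCenter()[0],
                                    bigC.getCenter()[1] + bigC.getRadius());
        collisionInfo.setInfo(depth, vec2.fromValues(0, -1), start);
    }
    return true;
}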
In the src/engine/components folder, create the physics.js file, import CollisionInfo and declare variables to support computations that are local to this file.
Define the collideShape() function to trigger the collision detection computation. Take note of the two tests prior to the actual call to the shape's collisionTest(). First, check to ensure that the two shapes are not actually the same object. Second, call the broad phase boundTest() method to determine the proximity of the shapes. Notice that the last parameter, infoSet, when defined, will collect the CollisionInfo objects for all successful collisions. This parameter is defined to support visualizing the CollisionInfo objects for verification and debugging purposes.
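A sketch of collideShape(), where mInfo is a module-level CollisionInfo that is reused across calls:

function collideShape(s1, s2, infoSet = null) {
    let hasCollision = false;
    if (s1 !== s2) {                    // a shape cannot collide with itself
        if (s1.boundTest(s2)) {         // broad phase: circular bounds overlap
            hasCollision = s1.collisionTest(s2, mInfo);   // narrow phase
            if (hasCollision && (infoSet !== null)) {
                infoSet.push(mInfo);            // record for visualization/debugging
                mInfo = new CollisionInfo();    // do not reuse a recorded object
            }
        }
    }
    return hasCollision;
}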
Define utility functions to support the game developer: processSet() to perform collision determination between all objects in the same GameObjectSet, processObjToSet() to check between a given GameObject and objects of a GameObjectSet, and processSetToSet() to check between all objects in two different GameObjectSets.
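A sketch of processSet() and processObjToSet() is shown below; GameObjectSet is assumed to provide size() and getObjectAt(), and each GameObject the getRigidBody() accessor defined earlier. Note that the nested loop in processSet() tests each pair of objects exactly once:

function processSet(set, infoSet = null) {
    for (let i = 0; i < set.size(); i++) {
        let s1 = set.getObjectAt(i).getRigidBody();
        for (let j = i + 1; j < set.size(); j++) {
            let s2 = set.getObjectAt(j).getRigidBody();
            collideShape(s1, s2, infoSet);
        }
    }
}

function processObjToSet(obj, set, infoSet = null) {
    let s1 = obj.getRigidBody();
    for (let j = 0; j < set.size(); j++) {
        collideShape(s1, set.getObjectAt(j).getRigidBody(), infoSet);
    }
}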
Now, export all the defined functionality:
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
Edit my_game_main.js; in the constructor, define the array for storing CollisionInfo and a new flag indicating if CollisionInfo should be drawn:
Modify the update() function to trigger the collision tests:
Modify the draw() function to draw the created CollisionInfo array when defined:
Remember to update the drawControlUpdate() function to support the C key for toggling of the drawing of the CollisionInfo objects:
You can now run the project to examine your collision implementation between RigidCircle shapes in the form of the resulting CollisionInfo objects. Remember that you have only implemented circle-circle collisions. Now, use the left-/right-arrow keys to select and work with a RigidCircle object. Use the WASD keys to move this object around to observe the magenta line segment representing the collision normal and depth when it overlaps with another RigidCircle. Try typing the Y/U keys to verify the correctness of CollisionInfo for shapes with different radii. Now, type the G key to create a few more RigidCircle objects. Try moving the selected object and increase its size such that it collides with multiple RigidCircle objects simultaneously and observe that a proper CollisionInfo is computed for every collision. Finally, note that you can toggle the drawing of CollisionInfo with the C key.
You have now implemented circle collision detection, built the required engine infrastructure to support collisions, and verified the correctness of the system. You are now ready to learn about Separating Axis Theorem (SAT) and implement a derived algorithm to detect collisions between rectangles.
Two convex polygons are not colliding if there exists a line (or axis), perpendicular to one of the edges of the two polygons, such that projecting all edges of the two polygons onto this axis results in no overlap of the projected edges.
In other words, given two convex shapes in 2D space, iterate through all of the edges of the convex shapes, one at a time. For each of the edges, derive a line (or axis) that is perpendicular to the edge, project all the edges of the two convex shapes onto this line, and check for overlaps of the projected edges. If you can find one such perpendicular line where none of the projected edges overlap, then the two convex shapes do not collide.

A line where projected edges do not overlap
When projecting all of the edges of the shapes onto these two lines/axes, note that the projection results on the Y axis overlap, while there is no overlap on the X axis. Since there exists one line that is perpendicular to one of the rectangle edges where the projected edges do not overlap, the SAT concludes that the two given rectangles do not collide.
The main strength of algorithms derived from the SAT is that for non-colliding shapes it has an early exit capability. As soon as an axis with no overlapping projected edge is detected, an algorithm can report no collision and does not need to continue with the testing for other axes. In the case of Figure 9-7, if the algorithm began with processing the X axis, there would be no need to perform the computation for the Y axis.
Step 1. Compute face normals: Compute the perpendicular axes or face normals for projecting the edges. Using rectangles as an example, Figure 9-8 illustrates that there are four edges, and each edge has a corresponding perpendicular axis. For example, A1 is the corresponding axis for, and thus is perpendicular to, the edge eA1. Note that in your RigidRectangle class, mFaceNormal, or the face normals, are the perpendicular axes A1, A2, A3, and A4.

Rectangle edges and face normals

Project each vertex onto face normals (shows A1)
Step 3. Identify bounds: Identify the min and max bounds for the projected vertices of each convex shape. Continuing with the rectangle example, Figure 9-10 shows the min and max positions for each of the two rectangles. Notice that the min/max positions are defined with respect to the direction of the given axis.

Identify the min and max bound positions for each rectangle
Step 4. Determine overlaps: Determine if the two min/max bounds overlap. Figure 9-11 shows that the two projected bounds do indeed overlap. In this case, the algorithm cannot conclude and must proceed to process the next face normal. Notice that, as illustrated in Figure 9-8, processing of face normal B1 or B3 will result in a deterministic conclusion of no collision.

Test for overlaps of projected edges (shows A1)
The given algorithm is capable of determining if a collision has occurred with no additional information. Recall that after detecting a collision, the physics engine must also resolve potential interpenetration and derive a response for the colliding shapes. Both of these computations require additional information—the collision information as introduced in Figure 9-4. The next section introduces an efficient SAT-based algorithm that computes support points to both inform the true/false outcome of the collision detection and serve as the basis for deriving collision information.

Support points of face normals
In general, the support point for a given face normal may be different during every update cycle and thus must be recomputed during each collision invocation. In addition, and very importantly, it is entirely possible for a face normal to not have a defined support point.
A support point is defined only when the measured distance along the face normal has a negative value. For example, in Figure 9-12, the face normal B1 of shape B does not have a corresponding support point on shape A. This is because all vertices on shape A are positive distances away from the corresponding edge eB1 when measured along B1. The positive distances signify that all vertices of shape A are in front of the edge eB1. In other words, the entire shape A is in front of the edge eB1 of shape B; the two shapes are therefore not physically touching and thus are not colliding.
It follows that, when computing the collision between two shapes, if any of the face normals does not have a corresponding support point, then the two shapes are not colliding. Once again, the early exit capability is an important advantage—the algorithm can return a decision as soon as the first case of undefined support point is detected.
For convenience of discussion and implementation, the distance between a support point and the corresponding edge is referred to as the support point distance, and this distance is computed as a positive number. In this way, the support point distance is actually measured along the negative face normal direction. This will be the convention followed in the rest of the discussions in this book.

Axis of least penetration and the corresponding collision information
The collision information is simply the smaller collision depth from the earlier two results. You are now ready to implement the support point SAT algorithm.

Running the Rectangle Collisions project
G key: Randomly create a new rigid circle or rectangle
C key: Toggles the drawing of all CollisionInfo
T key: Toggles textures on all objects
R key: Toggles the drawing of RigidShape
B key: Toggles the drawing of the bound on each RigidShape
Left-/right-arrow key: Sequences through and selects an object.
WASD keys: Move the selected object.
Z/X key: Rotates the selected object.
Y/U key: Increases/decreases RigidShape size of the selected object; this does not change the size of the corresponding Renderable object.
To gain insights into and implement the support point SAT algorithm
To continue completing the narrow phase collision detection implementation
After this project, your game engine will be able to collide between circle shapes and between rectangle shapes while still not supporting collisions between circle and rectangle shapes. This will be one step closer to completing the implementation of narrow phase collision detection for rigid shapes. The remaining functionality, detecting circle-rectangle collisions, will be covered in the next subsection.
In the src/engine/rigid_shapes folder, edit rigid_rectangle_collision.js to define local variables. These are temporary storage during computations; they are statically allocated and reused to avoid the cost of repeated dynamic allocation during each invocation.
Create a new function findSupportPoint() to compute a support point based on dir, the negated face normal direction, and ptOnEdge, a position on the given edge (e.g., a vertex). The listed code marches through all the vertices, computes vToEdge, the vector from ptOnEdge to each vertex, projects this vector onto the input dir, and records the largest positive projected distance. Recall that dir is the negated face normal direction, and thus the largest positive distance corresponds to the farthest vertex position behind the edge. Note that it is entirely possible for all of the projected distances to be negative. In such cases, all vertices are in front of the corresponding edge, a support point does not exist for the given edge, and thus the two rectangles do not collide.
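A sketch of findSupportPoint(); mTmpSupport, with mSupportPoint and mSupportPointDist fields, stands in for the module-level temporary described above:

findSupportPoint(dir, ptOnEdge) {
    let vToEdge = [0, 0];
    mTmpSupport.mSupportPoint = null;
    mTmpSupport.mSupportPointDist = -Number.MAX_VALUE;
    for (let i = 0; i < this.mVertex.length; i++) {
        // vector from ptOnEdge to the vertex, projected onto the negated face normal
        vec2.subtract(vToEdge, this.mVertex[i], ptOnEdge);
        let projection = vec2.dot(vToEdge, dir);
        // keep the vertex with the largest positive projection
        if ((projection > 0) && (projection > mTmpSupport.mSupportPointDist)) {
            mTmpSupport.mSupportPoint = this.mVertex[i];
            mTmpSupport.mSupportPointDist = projection;
        }
    }
    // if mSupportPoint remains null, no support point exists for this edge
}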
With the ability to locate a support point for any face normal, the next step is to find the axis of least penetration with the findAxisLeastPenetration() function. Recall that the axis of least penetration is the one associated with the support point that has the least support point distance. The listed code loops over the four face normals, finds the corresponding support point and support point distance, and records the shortest distance. The while loop signifies that if a support point is not defined for any of the face normals, then the two rectangles do not collide.
You can now implement the collideRectRect() function by computing the axis of least penetration with respect to each of the two rectangles and choosing the smaller of the two results:
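The following is a hedged sketch of one way collideRectRect() can be structured; the CollisionInfo accessors (getDepth(), getNormal(), getStart(), setInfo()) and the statically allocated temporaries collisionInfoR1 and collisionInfoR2 are assumptions rather than the engine's actual interface.

collideRectRect(r1, r2, collisionInfo) {
    // axis of least penetration measured with r1's face normals, then r2's;
    // either call failing (no support point) means the rectangles do not collide
    let status1 = r1.findAxisLeastPenetration(r2, collisionInfoR1);
    let status2 = false;
    if (status1) {
        status2 = r2.findAxisLeastPenetration(r1, collisionInfoR2);
        if (status2) {
            // report the result with the smaller collision depth; flip the normal
            // of the second result so both candidates use a consistent direction
            if (collisionInfoR1.getDepth() < collisionInfoR2.getDepth()) {
                collisionInfo.setInfo(collisionInfoR1.getDepth(),
                    collisionInfoR1.getNormal(), collisionInfoR1.getStart());
            } else {
                let n = [0, 0];
                vec2.scale(n, collisionInfoR2.getNormal(), -1);
                collisionInfo.setInfo(collisionInfoR2.getDepth(), n,
                    collisionInfoR2.getStart());
            }
        }
    }
    return status1 && status2;
}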
Complete the implementation by modifying the collisionTest() function to call the newly defined collideRectRect() function to compute the collision between two rectangles:
You can now run the project to test your implementation. You can use the left-/right-arrow keys to select any rigid shape and use the WASD keys to move the selected object. Once again, you can observe the magenta collision information between overlapping rectangles or overlapping circles. Remember that this line shows the least amount of positional correction needed to ensure that there is no overlap between the shapes. Type the Z/X keys to rotate and the Y/U keys to change the size of the selected object, and observe how the collision information changes accordingly.
At this point, only circle-circle and rectangle-rectangle collisions are supported, so when circles and rectangles overlap, no collision information is shown. This will be resolved in the next project.
The support point algorithm does not work with circles because a circle does not have identifiable vertex positions. Instead, you will implement an algorithm that detects collisions between a rectangle and a circle according to the relative position of the circle’s center with respect to the rectangle.
Before discussing the actual algorithm, as illustrated in Figure 9-15, it is convenient to recognize that the area outside an edge of a rectangle can be categorized into three distinct regions by extending the connecting edges. In this case, the dotted lines separate the area outside the given edge into RG1, the region to the left/top; RG2, the region to the right/bottom; and RG3, the region immediately outside of the given edge.
Step A: Compute the edge on the rectangle that is closest to the circle center.
Step B: If the circle center is inside the rectangle, collision is detected.
Step C1: If in Region RG1, distance between the circle center and top vertex determines if a collision has occurred.
Step C2: If in Region RG2, distance between the circle center and bottom vertex determines if a collision has occurred.
Step C3: If in Region RG3, perpendicular distance between the center and the edge determines if a collision has occurred.

The three regions outside a given edge of a rectangle

Running the Rectangle and Circle Collisions project
G key: Randomly creates a new rigid circle or rectangle
C key: Toggles the drawing of all CollisionInfo
T key: Toggles textures on all objects
R key: Toggles the drawing of RigidShape
B key: Toggles the drawing of the bound on each RigidShape
Left-/right-arrow key: Sequences through and selects an object.
WASD keys: Move the selected object.
Z/X key: Rotates the selected object.
Y/U key: Increases/decreases RigidShape size of the selected object; this does not change the size of the corresponding Renderable object.
To understand and implement the rectangle circle collision detection algorithm
To complete the narrow phase collision detection implementation for circle and rectangle shapes
Update the RigidRectangle access file to import from the latest source code file. In the src/engine/rigid_shapes folder, edit rigid_rectangle.js to replace the import to be from the latest source code file.
In the same folder, create the rigid_rectangle_circle_collision.js file to import from rigid_rectangle_collision.js such that the new collision function can be appended to the class:
Define a new function, checkCircRectVertex() , to process regions RG1 and RG2. As illustrated in the left diagram of Figure 9-17, the parameter v1 is the vector from vertex position to circle center. The right diagram of Figure 9-17 shows that a collision occurs when dist, the length of v1, is less than r, the radius. In this case, the collision depth is simply the difference between r and dist.
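A hedged sketch of checkCircRectVertex() follows; it assumes gl-matrix vec2 and a CollisionInfo setInfo(depth, normal, start) accessor, and the parameter names mirror the discussion rather than the engine's actual listing.

checkCircRectVertex(v1, cirCenter, r, info) {
    // v1 is the vector from the rectangle vertex to the circle center;
    // the circle touches the vertex region only when the center is within one radius
    let dist = vec2.length(v1);
    if (dist > r)
        return false;                  // no collision in region RG1/RG2

    // normal of the collision points from the vertex toward the circle center
    vec2.normalize(v1, v1);
    let radiusVec = [0, 0];
    vec2.scale(radiusVec, v1, -r);
    let start = [0, 0];
    vec2.add(start, cirCenter, radiusVec);     // point on the circle closest to the vertex

    // collision depth is the amount of overlap between circle and vertex
    info.setInfo(r - dist, v1, start);
    return true;
}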

Left: condition when center is in region RG1. Right: the corresponding collision information
Define the collideRectCirc() function to detect the collision between a rectangle and a circle. The following code shows the declaration of local variables and the five major steps, A to C3, that must be performed. The details of each step are discussed in the rest of this subsection.
Step A, compute the nearest edge. The nearest edge can be found by computing the perpendicular distances between the circle center and each edge of the rectangle. This distance is simply the projection of the vector, from each vertex to the circle center, onto the corresponding face normal. The listed code iterates through all of the vertices, computing the vector from each vertex to the circle center, and projects the computed vector onto the corresponding face normal.

Left: center inside the rectangle will result in all negative projected length. Right: center outside the rectangle will result in at least one positive projected length
Step B, if the circle center is inside the rectangle, then collision is detected and the corresponding collision information can be computed and returned:
Step C1, determine and process if the circle center is in Region RG1. As illustrated in the left diagram of Figure 9-17, Region RG1 can be detected when v1, the vector between the center and vertex, is in the opposite direction of v2, the direction of the edge. This condition is computed in the following listed code:
Steps C2 and C3, differentiate and process for Regions RG2 and RG3. The listed code performs complementary computation for the other vertex on the same rectangle edge for Region RG2. The last region for the circle center to be located in would be the area immediately outside the nearest edge. In this case, the bestDistance computed previously in step A is the distance between the circle center and the given edge. If this distance is less than the circle radius, then a collision has occurred.
In the src/engine/rigid_shapes folder, edit rigid_rectangle_collision.js, and modify the collisionTest() function to call the newly defined collideRectCirc() when the parameter is a circle shape:
In the same folder, edit rigid_circle_collision.js, and modify the collisionTest() function to call the newly defined collideRectCirc() when the parameter is a rectangle shape:
You can now run the project to test your implementation. You can create new rectangles and circles, move, and rotate them to observe the corresponding collision information.
You have finally completed the narrow phase collision detection implementation and can begin to examine the motions of these rigid shapes.
pnew = pcurrent + displacement

Movement based on constant displacements
vnew = vcurrent + ∫ a(t)dt
pnew = pcurrent + ∫ v(t)dt
These two equations represent Newtonian-based movements where v(t) is the velocity that describes the change in position over time and a(t) is the acceleration that describes the change in velocity over time.
Notice that both velocity and acceleration are vector quantities encoding both the magnitude and direction. The magnitude of a velocity vector defines the speed, and the normalized velocity vector identifies the direction that the object is traveling. An acceleration vector lets you know whether an object is speeding up or slowing down as well as the changes in the object’s traveling directions. Acceleration is changed by the forces acting upon an object. For example, if you were to throw a ball into the air, the gravitational force would affect the object’s acceleration over time, which in turn would change the object’s velocity.
vnew = vcurrent + acurrent ∗ dt
pnew = pcurrent + vcurrent ∗ dt

Explicit (left) and Symplectic (right) Euler Integration
vnew = vcurrent + acurrent ∗ dt
pnew = pcurrent + vnew ∗ dt
The right diagram of Figure 9-20 illustrates that with the Symplectic Euler Integration, the new position pnew is computed based on the newly computed velocity, vnew.
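To make the distinction concrete, here is a tiny, self-contained comparison of the two integrators over one time step dt for a scalar state; this is illustrative code, not part of the engine.

function explicitEuler(p, v, a, dt) {
    // position is advanced with the old velocity
    return { p: p + v * dt, v: v + a * dt };
}

function symplecticEuler(p, v, a, dt) {
    // velocity is updated first, and the new velocity advances the position
    let vNew = v + a * dt;
    return { p: p + vNew * dt, v: vNew };
}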

Running the Rigid Shape Movements project
V key: Toggles motion of all objects
H key: Injects random velocity to all objects
G key: Randomly creates a new rigid circle or rectangle
C key: Toggles the drawing of all CollisionInfo
T key: Toggles textures on all objects
R key: Toggles the drawing of RigidShape
B key: Toggles the drawing of the bound on each RigidShape
Left-/right-arrow key: Sequences through and selects an object.
WASD keys: Move the selected object.
Z/X key: Rotates the selected object.
Y/U key: Increases/decreases RigidShape size of the selected object; this does not change the size of the corresponding Renderable object.
Up-/down-arrow key + M: Increase/decrease the mass of the selected object.
To complete the implementation of RigidShape classes to include relevant physical attributes
To implement movement approximation based on Symplectic Euler Integration
In addition to implementing Symplectic Euler Integration, this project also guides you to define attributes required for collision simulation and response, such as mass, inertia, friction, etc. As will be explained, each of these attributes will play a part in the simulation of object collision responses. This straightforward information is presented here so that it does not distract from the discussion of the more complex concepts covered in the subsequent projects.
In the rest of this section, you will first define relevant physical attributes to complete the RigidShape implementation. After that, you will focus on building Symplectic Euler Integration support for approximating movements.
As mentioned, in order to allow focused discussions of the more complex concepts in the later sections, the attributes for supporting collisions and the corresponding supporting functions are introduced in this project. These attributes are defined in the rigid shape classes.
In the constructor of the RigidShape class, define variables representing acceleration, velocity, angular velocity, mass, rotational inertia, restitution (bounciness), and friction. Notice that the inverse of the mass value is actually stored for computation efficiency (by avoiding an extra division during each update). Additionally, notice that a mass of zero is used to represent a stationary object.
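The following sketch shows roughly what these constructor additions can look like; the default values are illustrative, and physics.getSystemAcceleration() is assumed to be a shared, system-wide acceleration (e.g., gravity) exposed by the physics component.

constructor(xf) {
    // ... existing transform and shape setup ...
    this.mAcceleration = physics.getSystemAcceleration();  // shared gravity-like acceleration
    this.mVelocity = vec2.fromValues(0, 0);
    this.mAngularVelocity = 0;       // scalar: rotation about the z axis
    this.mInvMass = 1;               // inverse mass is stored; 0 marks a stationary object
    this.mInertia = 0;               // rotational inertia (angular mass)
    this.mRestitution = 0.8;         // bounciness
    this.mFriction = 0.3;            // simple damping applied during collisions
}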
Define the setMass() function to set the mass of the object. Once again, for computational efficiency, the inverse of the mass is stored. Setting the mass of an object to zero or negative is a signal that the object is stationary with zero acceleration and will not participate in any movement computation. Notice that when the mass of an object is changed, you would need to call updateInertia() to update its rotational inertia, mInertia. Rotational inertia is geometric shape specific, and the implementation of updateInertia() is subclass specific.
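A minimal sketch of setMass() under these assumptions (the shared system acceleration and a subclass-defined updateInertia()):

setMass(m) {
    if (m > 0) {
        this.mInvMass = 1 / m;                            // store the inverse for efficiency
        this.mAcceleration = physics.getSystemAcceleration();
    } else {
        // zero (or negative) mass marks a stationary object with no acceleration
        this.mInvMass = 0;
        this.mAcceleration = vec2.fromValues(0, 0);
    }
    this.updateInertia();                                  // inertia depends on mass and shape
}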
Define getter and setter functions for all of the other corresponding variables. These functions are straightforward and are not listed here.
For the convenience of debugging, define a function getCurrentState() to retrieve variable values as text and a function userSetsState() to allow interactive manipulations of the variables:
Edit rigid_circle_main.js in the src/engine/rigid_shapes folder to modify the RigidCircle class to define the updateInertia() function. This function calculates the rotational inertia of a circle when its mass has changed.
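A hedged sketch of the circle version follows. The standard rotational inertia of a solid disc is (1/2) * mass * radius^2; the engine's actual listing may scale this with a tuning constant.

updateInertia() {
    if (this.mInvMass === 0) {
        this.mInertia = 0;                    // stationary objects do not rotate
    } else {
        let mass = 1 / this.mInvMass;         // mInvMass stores the inverse of the mass
        this.mInertia = 0.5 * mass * this.mRadius * this.mRadius;
    }
}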
Update the RigidCircle constructor and incShapeSize() function to call the updateInertia() function:
Edit rigid_rectangle_main.js in the src/engine/rigid_shapes folder to define the updateInertia() function:
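A hedged sketch of the rectangle version follows. The standard rotational inertia of a solid rectangle about its center is mass * (width^2 + height^2) / 12; depending on the engine, the inverse of this value may be stored instead.

updateInertia() {
    if (this.mInvMass === 0) {
        this.mInertia = 0;
    } else {
        let mass = 1 / this.mInvMass;
        this.mInertia = mass *
            (this.mWidth * this.mWidth + this.mHeight * this.mHeight) / 12;
    }
}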
Similar to the RigidCircle class, update the constructor and incShapeSize() function to call the updateInertia() function:
With the RigidShape implementation completed, you are now ready to define the support for movement approximation.
In the src/engine/rigid_shapes folder, edit rigid_shape.js to define the travel() function to implement Symplectic Euler Integration for movement. Notice how the implementation closely follows the listed equations where the updated velocity is used for computing the new position. Additionally, notice the similarity between linear and angular motion where the location (either a position or an angle) is updated by a displacement that is derived from the velocity and time step. Rotation will be examined in detail in the last section of this chapter.
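A sketch of travel() under these assumptions: loop.getUpdateIntervalInSeconds() returns the fixed update step, and the Transform methods getPosition() and incRotationByRad() are available on this.mXform (names assumed from the engine).

travel() {
    let dt = loop.getUpdateIntervalInSeconds();

    // Symplectic Euler: update the velocity first ...
    vec2.scaleAndAdd(this.mVelocity, this.mVelocity, this.mAcceleration, dt);

    // ... then displace the position with the newly computed velocity
    let p = this.mXform.getPosition();
    vec2.scaleAndAdd(p, p, this.mVelocity, dt);

    // angular motion follows the same pattern: angle += angularVelocity * dt
    this.mXform.incRotationByRad(this.mAngularVelocity * dt);
    return this;
}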
Modify the update() function to invoke travel() when the object is not stationary (that is, when mInvMass is not 0) and when motion of the physics component is switched on:
The modification to the MyGame class involves supporting new user commands for toggling system-wide motion and injecting random velocity, and setting the stationary scene boundary objects to rigid shapes with zero mass. The injection of random velocity is implemented by the randomizeVelocity() function defined in the my_game_bounds.js file.
All updates to the MyGame class are straightforward. To avoid unnecessary distraction, the details are not shown. As always, you can refer to the source code files in the src/my_game folder for implementation details.
You can now run the project to test your implementation. In order to properly observe and track movements of objects, initially motion is switched off. You can type the V key to enable motion when you are ready. When motion is toggled on, you can observe a natural-looking free-falling movement for all objects. You can type G to create more objects and observe similar free-fall movements of the created objects.
Notice that when the objects fall below the lower platform, they are regenerated in the central region of the scene with a random initial upward velocity. Observe the objects move upward until the y component of the velocity reaches zero, and then they begin to fall downward as a result of gravitational acceleration. Typing the H key injects new random upward velocities to all objects resulting in objects decelerating while moving upward.
Try typing the C key to observe the computed collision information when objects overlap or interpenetrate. Pay attention and note that interpenetration occurs frequently as objects travel through the scene. You are now ready to examine and implement how to resolve object interpenetration in the next section.

A rigid rectangle in continuous motion
You can see one such challenge in Figure 9-22. Imagine a thin wall existed in the space between the current and the next update. You would expect the object to collide with and be stopped by the wall in the next update. However, if the wall were sufficiently thin, the object would appear to pass right through the wall as it jumped from one position to the next. This is a common problem faced in many game engines. A general solution for these types of problems can be algorithmically complex and computationally intensive. It is typically the job of the game designer to mitigate and avoid this problem with well-designed (e.g., appropriate size) and well-behaved (e.g., appropriate traveling speed) game objects.

The interpenetration of colliding objects
In the context of game engines, collision resolution refers to the process that determines object responses after a collision, including strategies to resolve the potential interpenetration situations that may have occurred. Notice that in the real world, interpenetration of rigid objects can never occur since collisions are strictly governed by the laws of physics. As such, resolutions of interpenetrations are relevant only in a simulated virtual world where movements are approximated and impossible situations may occur. These situations must be resolved algorithmically where both the computational cost and resulting visual appearance should be acceptable.
In general, there are three common methods for responding to interpenetrating collisions. The first is to simply displace the objects from one another by the depth of penetration. This is known as the Projection Method since you simply move positions of objects such that they no longer overlap. While this is simple to calculate and implement, it lacks stability when many objects are in proximity and overlap with each other. In this case, the simple resolution of one pair of interpenetrating objects can result in new penetrations with other nearby objects. However, the Projection Method is still often implemented in simple engines or games with simple object interaction rules. For example, in a game of Pong, the ball never comes to rest on the paddles or walls and remains in continuous motion by bouncing off any object it collides with. The Projection Method is perfect for resolving collisions for these types of simple object interactions.
The second method, the Impulse Method , uses object velocities to compute and apply impulses to cause the objects to move in the opposite directions at the point of collision. This method tends to slow down colliding objects rapidly and converges to relatively stable solutions. This is because impulses are computed based on the transfer of momentum, which in turn has a damping effect on the velocities of the colliding objects.
The third method, the Penalty Method , models the depth of object interpenetration as the degree of compression of a spring and approximates an acceleration to apply forces to separate the objects. This last method is the most complex and challenging to implement.
For your engine, you will be combining the strengths of the Projection and Impulse Methods. The Projection Method will be used to separate the interpenetrating objects, while the Impulse Method will be used to compute impulses to reduce the object velocities in the direction that caused the interpenetration. As described, the simple Projection Method can result in an unstable system, such as objects that sink into each other when stacked. You will overcome this instability by implementing a relaxation loop where, in a single update cycle, interpenetrated objects are separated incrementally via repeated applications of the Projection Method.
With a relaxation loop, each application of the Projection Method is referred to as a relaxation iteration. During each relaxation iteration, the Projection Method reduces the interpenetration incrementally by a fixed percentage of the total penetration depth. For example, by default, the engine sets relaxation iterations to 15, and each relaxation iteration reduces the interpenetration by 80 percent. This means that within one update function call, after the movement integration approximation, the collision detection and resolution procedures will be executed 15 times. While costly, the repeated incremental separation ensures a stable system.
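The relaxation idea can be summarized by the small sketch below, which assumes processSet() runs one pass of collision detection and positional correction over all rigid shapes in a set; the variable names are illustrative.

let mRelaxationCount = 15;        // relaxation iterations per update
let mPosCorrectionRate = 0.8;     // fraction of the remaining penetration removed per pass

function relaxedCollision(set) {
    for (let r = 0; r < mRelaxationCount; r++) {
        // each pass removes mPosCorrectionRate of the remaining interpenetration,
        // so repeated passes converge toward a penetration-free configuration
        processSet(set);
    }
}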

Running the Collision Position Correction project
P key: Toggles penetration resolution for all objects
V key: Toggles motion of all objects
H key: Injects random velocity to all objects
G key: Randomly creates a new rigid circle or rectangle
C key: Toggles the drawing of all CollisionInfo
T key: Toggles textures on all objects
R key: Toggles the drawing of RigidShape
B key: Toggles the drawing of the bound on each RigidShape
Left-/right-arrow key: Sequences through and selects an object.
WASD keys: Move the selected object.
Z/X key: Rotates the selected object.
Y/U key: Increases/decreases RigidShape size of the selected object; this does not change the size of the corresponding Renderable object.
Up-/down-arrow key + M: Increase/decrease the mass of the selected object.
To implement positional correction with relaxation iteration
To work with the computed collision information and appreciate its importance
To understand and experience implementing interpenetration resolution
Edit physics.js to define variables and the associated getters and setters for the positional correction rate, the relaxation loop count, and toggling of the positional correction computation. Make sure to export the newly defined functions.
Define the positionalCorrection() function to move and reduce the overlaps between objects by the predefined rate, mPosCorrectionRate. To properly support object momentum in the simulation, the amount by which each object moves is inversely proportional to its mass. That is, upon collision, an object with a larger mass will be moved by an amount that is less than the object with a smaller mass. Notice that the direction of movement is along the collision normal as defined by the collisionInfo object.
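A hedged sketch of positionalCorrection() is shown below. It assumes getInvMass(), the CollisionInfo accessors, and an adjustPositionBy(vector, scale) helper that displaces a shape by the scaled vector, and it assumes the collision normal points from s2 toward s1.

function positionalCorrection(s1, s2, collisionInfo) {
    if (!mCorrectPosition)
        return;

    let s1InvMass = s1.getInvMass();
    let s2InvMass = s2.getInvMass();

    // share the penetration depth in proportion to the inverse masses
    // (a heavier object moves less), then move along the collision normal
    let num = collisionInfo.getDepth() / (s1InvMass + s2InvMass) * mPosCorrectionRate;
    let correctionAmount = [0, 0];
    vec2.scale(correctionAmount, collisionInfo.getNormal(), num);
    s1.adjustPositionBy(correctionAmount, s1InvMass);
    s2.adjustPositionBy(correctionAmount, -s2InvMass);
}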
Modify the collideShape() function to perform positional correction when a collision is detected. Notice that collision detection is performed only when at least one of the objects has a nonzero mass.
Integrate a loop in all three utility functions, processObjToSet(), processSetToSet(), and processSet(), to execute relaxation iterations when performing the positional corrections:
The MyGame class must be modified to support the new P key command, to toggle off initial motion and positional correction, and to spawn initial objects in the central region of the game scene to guarantee initial collisions. These modifications are straightforward, and details are not shown. As always, you can refer to the source code files in the src/my_game folder for implementation details.
You can now run the project to test your implementation. Notice that by default, motion is off, positional correction is off, and showing of collision information is on. For these reasons, you will observe the created rigid shapes clumping in the central region of the game scene with many associated magenta collision information.
Now, type the P key and observe all of the shapes being pushed apart with all overlaps resolved. You can type the G key to create additional shapes and observe the shapes continuously push each other aside to ensure no overlaps. A fun experiment to perform is to toggle off positional correction, followed by typing the G key to create a large number of overlapping shapes and then to type the P key to observe the shapes pushing each other apart.
If you switch on motion with the V key, you will first observe all objects free falling as a result of the gravitational force. These objects will eventually come to a rest on one of the stationary platforms. Next, you will observe the magenta collision depth increasing continuously in the vertical direction. This increase in size is a result of the continuously increasing downward velocity as a result of the downward gravitational acceleration. Eventually, the downward velocity will grow so large that in one update the object will move past and appear to fall right through the platform. What you are observing is precisely the situation discussed in Figure 9-22. The next subsection will discuss responses to collision and address this ever-increasing velocity.
Lastly, notice that the utility functions defined in the physics component, processSet(), processObjToSet(), and processSetToSet(), are designed to detect and resolve collisions. While useful, these functions are not designed to report whether a collision has occurred, a common operation supported by typical physics engines. To avoid distraction from the rigid shape simulation discussion, functions to support simple collision detection without responses are not presented. At this point, you have the necessary knowledge to define such functions, and it is left as an exercise for you to complete.
With a proper positional correction system, you can now begin implementing collision resolution and support behaviors that resemble real-world situations. In order to focus on the core functionality of a collision resolution system, including understanding and implementing the Impulse Method and ensuring system stability, you will begin by examining collision responses without rotations. After the mechanics behind simple impulse resolution are fully understood and implemented, the complications associated with angular impulse resolutions will be examined in the next section.
In the following discussion, the rectangles and circles will not rotate as a response to collisions. However, the concepts and implementation described can be generalized in a straightforward manner to support rotational collision responses. This project is designed to help you understand the basic concepts of impulse-based collision resolutions.
Mass: The amount of matter in an object.
Force: Any interaction or energy imparted on an object that changes the motion of that object.
Relative velocity: The difference in velocity between two traveling shapes.
Coefficient of restitution: The ratio of relative velocity after and before a collision; a measurement of how much kinetic energy remains after an object bounces off another, or its bounciness.
Coefficient of friction: The ratio of the force of friction between two bodies. In your very simplistic implementation, friction is applied directly to slow down linear motion or rotation.
Impulse: Accumulated force over time that can cause a change in velocity, for example, one resulting from a collision.
Object rotations are described by their angular velocities and will be examined in the next section. In the rest of this section, the term velocity is used to refer to the movements of objects or their linear velocity.
Consider the simplest case of a collision, illustrated in three stages: at stage 1, a circle travels with velocity V1 toward the wall on its right. At stage 2, the circle is colliding with the wall, and at stage 3, the circle has been reflected and is traveling away from the wall with velocity V2.

Collision between a circle and a wall in a perfect world

The analysis begins by decomposing the incoming velocity, V1, into the components that are perpendicular and parallel to the colliding wall. In general, the perpendicular direction to a collision is referred to as the collision normal, N, and the direction that is tangential to the collision position is the collision tangent, T. This decomposition can be seen in the following equation:

V1 = (V1 · N)N + (V1 · T)T

The reflected velocity, V2, can be expressed as a linear combination of the normal and tangent components of V1 as follows:

V2 = -(V1 · N)N + (V1 · T)T

Notice the negative sign in front of the N component. You can see in Figure 9-25 that the N component for vector V2 points in the opposite direction to that of V1 as a result of the collision. Additionally, notice that in the tangent direction T, V2 continues to point in the same direction. This is because the tangent component is parallel to the wall and is unaffected by the collision. This analysis is true in general for any collisions in a perfect world with no friction and no loss of kinetic energy.

Collision between two traveling circles
In the case of Figure 9-26, before the collision, object A is traveling with velocity VA, while object B travels with velocity VB. The normal direction of the collision, N, is defined to be the vector between the two circle centers, and the tangent direction of the collision, T, is the vector that is tangential to both of the circles at the point of collision. To resolve this collision, the velocities for objects A and B after the collision, V'A and V'B, must be computed.

The conditions after the collision can be expressed in terms of the relative velocity between the two objects before the collision, VAB = VA - VB, and after the collision, V'AB = V'A - V'B:

V'AB · N = -e (VAB · N)    (1)
V'AB · T = f (VAB · T)    (2)

where the goal is to derive a solution for V'A and V'B, the individual velocities of the colliding objects after a collision. You are now ready to model a solution to approximate V'A and V'B.
The restitution coefficient, e, describes bounciness, or the proportion of the velocity that is retained after a collision. A restitution value of 1.0 would mean that speeds are the same before and after a collision. In contrast, friction is intuitively associated with the proportion lost, or the slowdown, after a collision. For example, a friction coefficient of 1.0 would mean infinite friction where a velocity of zero results from a collision. For consistency of the formulae, the coefficient f in Equation (2) is actually 1 minus the intuitive friction coefficient.
What remains is to compute how the velocities of the objects change, from VA to V'A and from VB to V'B, after contacting with another object. Conveniently, this is the definition of an impulse, as can be seen in the following:

V'A = VA + J/mA    (3)

V'B = VB - J/mB    (4)
Take a step back from the math and think about what this formula states. It makes intuitive sense. The equation states that the change in velocity is inversely proportional to the mass of an object. In other words, the more mass an object has, the less its velocity will change after a collision. The Impulse Method implements this observation.
Recall that Equations (1) and (2) describe the relative velocity after collision according to the collision normal and tangent directions independently. The impulse, being a vector, can also be expressed as a linear combination of components in the collision normal and tangent directions, jN and jT:
V'A = VA + (jN N + jT T)/mA    (5)
V'B = VB - (jN N + jT T)/mB    (6)
Note that jN and jT are the only unknowns in these two equations where the rest of the terms are either defined by the user or can be computed based on the geometric shapes. That is, the quantities e, f, mA, and mB are defined by the user, and N and T can be computed.
The N and T vectors are normalized and perpendicular to each other. For this reason, the vectors have a value of 1 when dotted with themselves and a value of 0 when dotted with each other.
To solve for jN, dot the N vector on both sides of Equations (5) and (6):

V'A · N = VA · N + jN (N · N)/mA + jT (T · N)/mA
V'B · N = VB · N - jN (N · N)/mB - jT (T · N)/mB

Recall that N · N is simply 1 and that T · N is 0; subtracting the second equation from the first and substituting Equation (1) for V'AB · N, this simplifies to the following:

jN = -(1 + e)(VAB · N) / (1/mA + 1/mB)    (7)
Similarly, to solve for jT, dot the T vector on both sides of Equations (5) and (6):

V'A · T = VA · T + jN (N · T)/mA + jT (T · T)/mA
V'B · T = VB · T - jN (N · T)/mB - jT (T · T)/mB

Recall that N · T is 0 and that T · T is 1; subtracting and substituting Equation (2) derives the following equation:

jT = (f - 1)(VAB · T) / (1/mA + 1/mB)    (8)

Running the Collision Resolution project
P key: Toggles penetration resolution for all objects
V key: Toggles motion of all objects
H key: Injects random velocity to all objects
G key: Randomly creates a new rigid circle or rectangle
C key: Toggles the drawing of all CollisionInfo
T key: Toggles textures on all objects
R key: Toggles the drawing of RigidShape
B key: Toggles the drawing of the bound on each RigidShape
Left-/right-arrow key: Sequences through and selects an object.
WASD keys: Move the selected object.
Z/X key: Rotates the selected object.
Y/U key: Increases/decreases RigidShape size of the selected object; this does not change the size of the corresponding Renderable object.
Up-/down-arrow key + M/N/F: Increase/decrease the mass/restitution/friction of the selected object.
To understand the details of the Impulse Method
To implement the Impulse Method in resolving collisions
Edit physics.js and define the resolveCollision() function to resolve the collision between RigidShape objects, a and b, with collision information recorded in the collisionInfo object:
The listed code follows the solution derivation closely; a hedged sketch is provided after the following step summary:
Steps A and B: Compute the relative velocity and its normal component. When this normal component is positive, it signifies that two objects are moving away from each other and thus collision resolution is not necessary.
Step C: Computes the collision tangent direction and the tangent component of the relative velocity.
Step D: Uses the averages of the coefficients for impulse derivation. Notice that the averaged friction is subtracted from one when computing newFriction, maintaining consistency with Equation (2).
Step E: Follows the listed Equations (7) and (8) to compute the normal and tangent components of the impulse.
Step F: Solves for the resulting velocities by following Equations (5) and (6).
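Under these assumptions (gl-matrix vec2, the accessor names shown, and a collision normal that points from b toward a), a minimal sketch of resolveCollision() without rotation might look as follows; it mirrors steps A through F but is not the engine's actual listing.

function resolveCollision(a, b, collisionInfo) {
    if ((a.getInvMass() === 0) && (b.getInvMass() === 0))
        return;                                      // two stationary objects

    let n = collisionInfo.getNormal();

    // Steps A and B: relative velocity (VAB = VA - VB) and its normal component;
    // a positive component means the objects are already separating
    let relativeVelocity = [0, 0];
    vec2.subtract(relativeVelocity, a.getVelocity(), b.getVelocity());
    let rVelocityInNormal = vec2.dot(relativeVelocity, n);
    if (rVelocityInNormal > 0)
        return;

    // Step C: collision tangent direction and tangential relative velocity
    let tangent = [0, 0];
    vec2.scale(tangent, n, rVelocityInNormal);
    vec2.subtract(tangent, relativeVelocity, tangent);
    vec2.normalize(tangent, tangent);

    // Step D: effective coefficients (friction is one minus the intuitive value)
    let newRestitution = (a.getRestitution() + b.getRestitution()) * 0.5;
    let newFriction = 1 - (a.getFriction() + b.getFriction()) * 0.5;

    // Step E: impulse components along the normal and tangent (Equations 7 and 8)
    let invMassSum = a.getInvMass() + b.getInvMass();
    let jN = -(1 + newRestitution) * rVelocityInNormal / invMassSum;
    let jT = (newFriction - 1) * vec2.dot(relativeVelocity, tangent) / invMassSum;

    // Step F: apply the impulse, scaled by the inverse masses (Equations 5 and 6)
    let impulse = [0, 0];
    vec2.scale(impulse, n, jN);
    vec2.scaleAndAdd(impulse, impulse, tangent, jT);

    let newVA = [0, 0], newVB = [0, 0];
    vec2.scaleAndAdd(newVA, a.getVelocity(), impulse, a.getInvMass());
    vec2.scaleAndAdd(newVB, b.getVelocity(), impulse, -b.getInvMass());
    a.setVelocity(newVA);
    b.setVelocity(newVB);
}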
Edit collideShape() to invoke the resolveCollision() function when a collision is detected and position corrected:
The modifications to the MyGame class are trivial, mainly to toggle both motion and positional correction to be active by default. Additionally, initial random rotations of the created RigidShape objects are disabled because at this point, collision response does not support rotation. As always, you can refer to the source code files in the src/my_game folder for implementation details.
You should test your implementation in three ways. First, ensure that moving shapes collide and behave naturally. Second, try changing the physical properties of the objects. Third, observe the collision resolution between shapes that are in motion and shapes that are stationary with infinite mass (the surrounding walls and stationary platforms). Remember that only linear velocities are considered and rotations will not result from collisions.
Now, run the project and notice that the shapes fall gradually to the platforms and floor with their motions coming to a halt after slight rebounds. This is a clear indication that the base case for Euler Integration, collision detection, positional correction, and resolution all are operating as expected. Press the H key to excite all shapes and the C key to display the collision information. Notice the wandering shapes and the walls/platforms interact properly with soft bounces and no apparent interpenetrations.
Use the left/right arrow to select an object and adjust its restitution/friction coefficients with the N/F and up-/down-arrow keys. For example, adjust the restitution to 1 and friction to 0. Now inject velocity with the H key. Notice how the object seems extra bouncy and, with a friction coefficient of 0, seems to skid along platforms/floors. You can try different coefficient settings and observe corresponding bounciness and slipperiness.
The stability of the system can be tested by increasing the number of shapes in the scene with the G key. The relaxation loop count of 15 continuously and incrementally pushes interpenetrating shapes apart during each iteration. For example, you can toggle off movement and positional corrections with the V and P keys and create multiple, for example, 10 to 20, overlapping shapes. Now toggle on motion and positional corrections and observe a properly functioning system.
In the next project, you will improve the resolution solution to consider angular velocity changes as a result of collisions.
Now that you have a concrete understanding and have successfully implemented the Impulse Method for collision responses with linear velocities, it is time to integrate the support for the more general case of rotations. Before discussing the details, it is helpful to relate the correspondences of Newtonian linear mechanics to that of rotational mechanics. That is, linear displacement corresponds to rotation, velocity to angular velocity, force to torque, and mass to rotational inertia or angular mass. Rotational inertia determines the torque required for a desired angular acceleration about a rotational axis.
The following discussion focuses on integrating rotation in the Impulse Method formulation and does not attempt to present a review on Newtonian mechanics for rotation. Conveniently, integrating proper rotation into the Impulse Method does not involve the derivation of any new algorithm. All that is required is the formulation of impulse responses with proper consideration of rotational attributes.
Until now, the velocity of an object, for example, VA of object A, is actually the velocity of the shape at its center location. In the absence of rotation, this velocity is constant throughout the object and can be applied to any position. However, as illustrated in Figure 9-28, when the movement of an object includes angular velocity, ωA, its linear velocity at a position P, VAP, is actually a function of the relative position between the point and the center of rotation of the shape, or the positional vector RA.

Linear velocity at a position in the presence of rotation
Angular velocity is a vector that is perpendicular to the linear velocity. In this case, as linear velocity is defined on the X/Y plane, ω is a vector in the z direction. Recall from discussions in the “Introduction” section of this chapter, the very first assumption made was that rigid shape objects are continuous geometries with uniformly distributed mass where the center of mass is located at the center of the geometric shape. This center of mass is the location of the axis of rotation. For simplicity, in your implementation, ω will be stored as a simple scalar representing the z-component magnitude of the vector.

Figure 9-29 shows object B, traveling with linear and angular velocities VB and ωB, colliding with object A at position P. By now, you know that the linear velocities at point P before the collision for the two objects are as follows, where RA and RB are the positional vectors from the centers of A and B to P:

VAP = VA + ωA × RA    (9)
VBP = VB + ωB × RB    (10)

Colliding shapes with angular velocities
Similarly, the linear velocities at point P after the collision are as follows:

V'AP = V'A + ω'A × RA    (11)
V'BP = V'B + ω'B × RB    (12)

where V'A and ω'A, and V'B and ω'B, are the linear and angular velocities for objects A and B after the collision, and the derivation of a solution for these quantities is precisely the goal of this section.


The conditions that govern the resolution are the same as those of Equations (1) and (2); the only difference is that the relative velocities are now measured at the collision position P:

V'AB · N = -e (VAB · N)    (13)
V'AB · T = f (VAB · T)    (14)

Here, VAB and V'AB are relative velocities at collision position P from before and after the collision. It is still true that these vectors are defined by the difference in velocities for objects A and B from before, VAP and VBP, and after, V'AP and V'BP, the collision at the collision position P on each object:

VAB = VAP - VBP    (15)
V'AB = V'AP - V'BP    (16)
You are now ready to generalize the Impulse Method to support rotation and to derive a solution to approximate the linear and angular velocities:
V'A, ω'A, V'B, and ω'B.

Just as before, the changes in the linear velocities are the impulse, J, scaled by the inverse of the corresponding masses, mA and mB. This change in linear velocities is described in Equations (3) and (4), relisted as follows:

V'A = VA + J/mA    (3)
V'B = VB - J/mB    (4)

The changes in the angular velocities, ω'A and ω'B, can be described as follows, where RA and RB are the positional vectors of each object:

ω'A = ωA + (RA × J)/IA    (17)
ω'B = ωB - (RB × J)/IB    (18)
As before, the impulse J can be expressed as a linear combination of its normal and tangent components, jN and jT, or as shown:

J = jN N + jT T

Substituting this expression into Equation (17) results in the following, with the corresponding substitution into Equation (18) listed as Equation (20):

ω'A = ωA + RA × (jN N + jT T)/IA    (19)
ω'B = ωB - RB × (jN N + jT T)/IB    (20)

The same substitution turns Equations (3) and (4) into Equations (5) and (6) from the previous section. Substituting Equations (5), (6), (19), and (20) into Equations (11) and (12) expresses the after-collision velocities at the collision position P in terms of jN and jT:

V'AP = VA + (jN N + jT T)/mA + (ωA + RA × (jN N + jT T)/IA) × RA    (21)
V'BP = VB - (jN N + jT T)/mB + (ωB - RB × (jN N + jT T)/IB) × RB    (22)
It is important to reiterate that the changes to both linear and angular velocities are described by the same impulse, J. In other words, the normal and tangent impulse components jN and jT in Equations (21) and (22) are the same quantities, and these two are the only unknowns in these equations where the rest of the terms are values either defined by the user or can be computed based on the geometric shapes. That is, quantities such as e, f, mA, mB, IA, and IB are defined by the user, and N, T, RA, and RB can be computed. You are now ready to derive the solutions for jN and jT.
In the following derivation, it is important to remember the definition of the triple scalar product identity; this identity states that given vectors A, B, and C, the following is always true:

A · (B × C) = B · (C × A) = C · (A × B)

The normal component of the impulse, jN, can be approximated by assuming that the contribution from the angular velocity tangent component is minimal and can be ignored and isolating the normal components from Equations (21) and (22). For clarity, you will work with one equation at a time and begin with Equation (21) for object A.
Dot the N vector on both sides of Equation (21) to isolate the normal components. Recall that N is a unit vector and is perpendicular to T, so N · N is 1 and T · N is 0; with the triple scalar product identity, the remaining rotational terms can be rewritten in terms of the scalar cross products RA × N and RB × N (the z components of the corresponding cross products). Performing the same operation on Equation (22), subtracting the two results, and substituting Equations (13) and (15) for the relative velocities allow jN to be isolated:

jN = -(1 + e)(VAB · N) / (1/mA + 1/mB + (RA × N)²/IA + (RB × N)²/IB)    (26)

Similarly, the tangent component of the impulse, jT, can be derived by dotting the T vector to both sides of the equations:

jT = (f - 1)(VAB · T) / (1/mA + 1/mB + (RA × T)²/IA + (RB × T)²/IB)    (27)

Running the Collision Angular Resolution project
P key: Toggles penetration resolution for all objects
V key: Toggles motion of all objects
H key: Injects random velocity to all objects
G key: Randomly creates a new rigid circle or rectangle
C key: Toggles the drawing of all CollisionInfo
T key: Toggles textures on all objects
R key: Toggles the drawing of RigidShape
B key: Toggles the drawing of the bound on each RigidShape
Left-/right-arrow key: Sequences through and selects an object.
WASD keys: Move the selected object.
Z/X key: Rotates the selected object.
Y/U key: Increases/decreases RigidShape size of the selected object; this does not change the size of the corresponding Renderable object.
Up-/down-arrow key + M/N/F: Increase/decrease the mass/restitution/friction of the selected object.
To understand the details of angular impulse
To integrate rotation into your collision resolution
To complete the physics component
The cross product between a linear velocity on the x-y plane, V, and an angular velocity along the z axis, ω, is a vector on the x-y plane.
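The two flavors of 2D cross product used in this math can be captured by a pair of small helpers; these are illustrative (hypothetical names), not the engine's listing. The first returns the scalar z component of crossing two planar vectors, and the second crosses a z-axis angular velocity with a planar vector to obtain a planar vector.

function crossZ(r, v) {
    // z component of r x v for two vectors in the x-y plane
    return r[0] * v[1] - r[1] * v[0];
}

function crossScalarVec(w, r) {
    // (0, 0, w) x (rx, ry, 0) = (-w * ry, w * rx, 0), a vector in the x-y plane
    return [-w * r[1], w * r[0]];
}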
Step A: Compute relative velocity. As highlighted in Figure 9-29 and Equations (9) and (10), in the presence of angular velocity, it is important to determine the collision position (Step A1) and compute the linear velocities VAP and VBP at the collision position (Step A2).
Step B: Determine relative velocity in the normal direction. A positive normal direction component signifies that the objects are moving apart and the collision is resolved.
Step C: Compute the collision tangent direction and the tangent direction component of the relative velocity.
Step D: Determine the effective coefficients by using the average of the colliding objects' values. As in the previous project, for consistency, the friction coefficient is one minus the average of the values from the RigidShape objects.
Step E: Compute the impulse in the normal and tangent directions by following Equations (26) and (27) exactly.
Step F: Update linear and angular velocities. These updates follow Equations (5), (6), (19), and (20) exactly; a hedged sketch is shown after this list.
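A minimal sketch of Step F as a helper follows; it assumes the accessors shown, that rA and rB are the vectors from each center to the collision position P, that movable shapes have nonzero rotational inertia, and it reuses crossZ() from the earlier sketch.

function applyImpulse(a, b, rA, rB, n, tangent, jN, jT) {
    let impulse = [0, 0];
    vec2.scale(impulse, n, jN);
    vec2.scaleAndAdd(impulse, impulse, tangent, jT);

    // linear velocities change by the impulse scaled by the inverse masses
    // (Equations 5 and 6)
    let vA = [0, 0], vB = [0, 0];
    vec2.scaleAndAdd(vA, a.getVelocity(), impulse, a.getInvMass());
    vec2.scaleAndAdd(vB, b.getVelocity(), impulse, -b.getInvMass());
    a.setVelocity(vA);
    b.setVelocity(vB);

    // angular velocities change by (R x impulse) scaled by the inverse inertias
    // (Equations 19 and 20); stationary shapes are skipped
    if (a.getInvMass() !== 0)
        a.setAngularVelocity(a.getAngularVelocity() + crossZ(rA, impulse) / a.getInertia());
    if (b.getInvMass() !== 0)
        b.setAngularVelocity(b.getAngularVelocity() - crossZ(rB, impulse) / b.getInertia());
}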
Run the project to test your implementation. The shapes that you insert into the scene now rotate, collide, and respond in fashions that are similar to the real world. A circle shape rolls around when other shapes collide with it, while a rectangle shape rotates naturally upon collision. The interpenetration between shapes should not be visible under normal circumstances. However, two situations can still cause observable interpenetrations: first, a relaxation iteration count that is too small, or second, a CPU that struggles with the number of shapes. In the first case, you can try increasing the relaxation iteration count to prevent interpenetration.
With the rotational support, you can now examine the effects of mass differences in collisions. With their ability to roll, collisions between circles are the most straightforward to observe. Wait for all objects to become stationary and use the left-/right-arrow keys to select one of the created circles; type the M key with the up arrow to increase its mass to a large value, for example, 20. Now select another object and use the WASD keys to move and drop the selected object on the high-mass circle. Notice that the high-mass circle does not move much in response to the collision. For example, chances are the collision does not even cause the high-mass circle to roll. Now, type the H key to inject random velocities to all objects and observe the collisions. Notice that collisions with the high-mass circle are almost like collisions with the stationary walls/platforms. The inverse mass and rotational inertia modeled by the Impulse Method successfully capture the collision effects of objects with different masses.
Now your 2D physics engine implementation is complete. You can continue testing by creating additional shapes to observe when your CPU begins to struggle to keep up with real-time performance.
This chapter has guided you through understanding the foundation behind a working physics engine. The complicated physical interactions of objects in the real world are greatly simplified by focusing only on rigid body interactions or rigid shape simulations. The simulation process assumes that objects are continuous geometries with uniformly distributed mass where their shapes do not change during collisions. The computationally costly simulation is performed only on a selected subset of objects that are approximated by simple circles and rectangles.
A step-by-step derivation of the relevant formulae for the simulations is followed by a detailed guide to the building of a functioning system. You have learned to extract collision information between shapes, formulate and compute shape collisions based on the Separating Axis Theorem, approximate Newtonian motion integrals with the Symplectic Euler Integration, resolve interpenetrations of colliding objects based on numerically stable gradual relaxations, and derive and implement collision resolution based on the Impulse Method.
Now that you have completed your physics engine, you can carefully examine the system and identify potential optimizations and further abstractions. Many improvements to the physics engine are still possible. This is especially true from the perspective of supporting game developers with the newly defined and powerful functionality. For example, most physics engines also support straightforward collision detection without any responses. This is an important functionality missing from your physics component. While your engine is capable of simulating collision results as is, it does not support answering the simple, and computationally much cheaper, question of whether objects have collided. As mentioned, this can be an excellent exercise.
Though simple and missing some convenient interface functions, your physics component is functionally complete and capable of simulating rigid shape interactions with visually pleasant and realistic results. Your system supports intuitive parameters including object mass, acceleration, velocity, restitution, and friction that can be related to behaviors of objects in the real world. Though computationally demanding, your system is capable of supporting a nontrivial number of rigid shape interactions. This is especially the case if the game genre requires only one object or a small set of objects, for example, the hero and friendly characters, to interact with the rest of the objects, for example, the props, platforms, and enemies.
The puzzle level in the examples to this point has focused entirely on creating an understandable and consistent logical challenge; we’ve avoided burdening the exercise with any kind of visual design, narrative, or fictional setting (design elements traditionally associated with enhancing player presence) to ensure we’re thinking only about the rules of play without introducing distractions. However, as you create core game mechanics, it’s important to understand how certain elements of gameplay can contribute directly to presence; the logical rules and requirements of core game mechanics often have a limited effect on presence until they’re paired with an interaction model, sound and visual design, and a setting. As discussed in Chapter 8, lighting is an example of a presence-enhancing visual design element that can also be used directly as a core game mechanic, and introducing physics to game world objects is similarly a presence-enhancing technique that’s perhaps even more often directly connected to gameplay.

Rovio’s Angry Birds requires players to launch projectiles from a slingshot in a virtual world that models gravity, mass, momentum, and object collision detection. The game physics are a fundamental component of the game mechanic and enhance the sense of presence by assigning physical world traits to virtual objects
The projects in Chapter 9 introduce you to the powerful ability of physics to bring players into the game world. Instead of simply moving the hero character like a screen cursor, the player can now experience simulated inertia, momentum, and gravity requiring the same kind of predictive assessments around aiming, timing, and forward trajectory that would exist when manipulating objects in the physical world, and game objects are now capable of colliding in a manner familiar to our physical world experience. Even though specific values might take a detour from the real world in a simulated game space (e.g., lower or higher gravity, more or less inertia, and the like), as long as the relationships are consistent and reasonably analogous to our physical experience, presence will typically increase when these effects are added to game objects. Imagine, for example, a game level where the hero character was required to push all the robots into a specific area within a specified time limit while avoiding being hit by projectiles. Imagine the same level without physics and it would of course be a very different experience.

The level as it currently stands includes a two-step puzzle first requiring players to move a flashlight and reveal hidden symbols; the player must then activate the shapes in the correct sequence to unlock the barrier and claim the reward
There is, of course, some sense of presence conveyed by the current level design: the barrier preventing players from accessing the reward is “impenetrable” and represented by a virtual wall, and the flashlight object is “shining” a virtual light beam that reveals hidden clues in the manner perhaps that a UV light in the real world might reveal special ink. Presence is frankly weak at this stage of development, however, as we have yet to place the game experience in a setting and the intentionally generic shapes don’t provide much to help a player build their own internal narrative. Our current prototype uses a flashlight-like game object to reveal hidden symbols, but it’s now possible to decouple the game mechanic’s logical rules from the current implementation and describe the core game mechanic as “the player must explore the environment to find tools required to assemble a sequence in the correct order.”

The game screen now shows just one instance of each part of the lock (top, middle, bottom), and the hero character moves in the manner of a traditional jumping 2D platformer. The six platforms on the left and right are stationary, and the middle platform moves up and down, allowing the player to ascend to higher levels. (This image assumes the player is able to “jump” the hero character between platforms on the same level but cannot reach higher levels without using the moving platform.)
We’re now evolving gameplay to include a dexterity challenge—in this case, timing the jumps—yet it retains the same logical rules from the earlier iteration: the shapes must be activated in the correct order to unlock the barrier blocking the reward. Imagine the player experiences this screen for the first time; they’ll begin exploring the screen to learn the rules of engagement for the level, including the interaction model (the keys and/or mouse buttons used to move and jump the hero character), whether missing a jump results in a penalty (e.g., the loss of a “life” if the hero character misses a jump and falls off the game screen), and what it means to “activate” a shape and begin the sequence to unlock the barrier.

The introduction of a force field blocking access to the upper platforms (#1) can significantly increase the challenge of the platformer component. In this design, the player must activate the switch (represented with a lightbulb in #2) to disable the force field and reach the first and third shapes
The introduction of a force field opens a variety of interesting possibilities to increase the challenge. The player must time the jump from the moving platform to the switch before hitting the force field, and the shapes must be activated in order (requiring the player to first activate top right, then the bottom right, and then the top left). Imagine a time limit is placed on the deactivation when the switch is flipped and that the puzzle will reset if all shapes aren’t activated before the force field is reengaged.
We’ve now taken an elemental mechanic based on a logical sequence and adapted it to support an action platformer experience. At this stage of development, the mechanic is becoming more interesting and beginning to feel more like a playable level, but it’s still lacking setting and context; this is a good opportunity to explore the kind of story we might want to tell with this game. Are we interested in a sci-fi adventure, perhaps a survival horror experience, or maybe a series of puzzle levels with no connected narrative? The setting will not only help inform the visual identity of the game but can also guide decisions on the kinds of challenges we create for players (e.g., are “enemies” in the game working against the player, will the gameplay continue focusing on solving logic puzzles, or perhaps both?). A good exercise to practice connecting a game mechanic to a setting is to pick a place (e.g., the interior of a space ship) and begin exploring gameplay in that fictional space and defining the elements of the challenge in a way that makes sense for the setting. For a game on a spaceship, perhaps, something has gone wrong and the player must make their way from one end of the ship to the other while neutralizing security lasers through the clever use of environment objects. Experiment with applying the spaceship setting to the current game mechanic and adjusting the elements in the level to fit that theme: lasers are just one option, but can you think of other uses of our game mechanic that don’t involve an unlocking sequence? Try applying the game mechanic to a range of different environments to begin building your comfort for applying abstract gameplay to specific settings.
Remember also that including object physics in level designs isn’t always necessary to create a great game; sometimes you may want to subvert or completely ignore the laws of physics in the game worlds you create. The final quality of your game experience is the result of how effectively you harmonize and balance the nine elements of game design; it’s not about the mandatory implementation of any one design option. Your game might be completely abstract and involve shapes and forms shifting in space in a way that has no bearing on the physical world, but your use of color, audio, and narrative might still combine to create an experience with a strong presence for players. However, if you find yourself with a game environment that seeks to convey a sense of physicality by making use of objects that people will associate with things found in the physical world, it’s worth exploring how object physics might enhance the experience.
Understand the fundamentals of a particle, a particle emitter, and a particle system
Appreciate that many interesting physical effects can be modeled based on a collection of dedicated particles
Approximate the basic behavior of a particle such that the rendition of a collection of these particles resemble a simple explosion-like effect
Implement a straightforward particle system that is integrated with the RigidShape system of the physics component
So far in your game engine, it is assumed that the game world can be described by a collection of geometries where all objects are Renderable instances with textures or animated sprites, potentially illuminated by light sources. This game engine is powerful and capable of describing a significant portion of objects in the real world. However, it is also true that it can be challenging for your game engine to describe many everyday encounters, for example, sparks, fire, explosions, dirt, dust, etc. Many of these observations are transient effects resulting from matter changing physical states or from a collection of very small entities reacting to physical disturbances. Collectively, these observations are often referred to as special effects and in general do not lend themselves well to being represented by fixed-shape geometries with textures.
Particle systems describe special effects by emitting a collection of particles with properties that may include position, size, color, lifetime, and strategically selected texture maps. These particles are defined with specific behaviors where, once emitted, their properties are updated to simulate a physical effect. For example, a fire particle may be emitted to move in an upward direction with reddish color. As time progresses, the particle may decrease in size, slow the upward motion, change its color toward yellow, and eventually disappear after a certain number of updates. With strategically designed update functions, the rendition of a collection of such particles can resemble a fire burning.
In this chapter, you will study, design, and create a simple and flexible particle system that includes the basic functionality required to achieve common effects, such as explosions and magical spell effects. Additionally, you will implement a particle shader to properly integrate your particles within your scenes. The particles will collide and interact accordingly with the RigidShape objects. You will also discover the need for and define particle emitters to generate particles over a period of time such as a campfire or torch.
The main goal of this chapter is to understand the fundamentals of a particle system: attributes and behaviors of simple particles, details of a particle emitter, and the integration with the rest of the game engine. This chapter does not lead you to create any specific types of special effects. This is analogous to learning an illumination model in Chapter 8 without the details of creating any lighting effects. The manipulation of light source parameters and material properties to create engaging lighting conditions and the modeling of particle behaviors that resemble specific physical effects are the responsibilities of the game developers. The basic responsibility of the game engine is to define sufficient fundamental functionality to ensure that the game developers can accomplish their job.
A particle is a textured position without dimensions. This description may seem contradictory because you have learned that a texture is an image and images are always defined by a width and height and will definitely occupy an area. The important clarification is that the game engine logic processes a particle as a position with no area, while the drawing system displays the particle as a texture with proper dimensions. In this way, even though an actual displayed area is shown, the width and height dimensions of the texture are ignored by the underlying logic.
In addition to a position, a particle also has properties such as size (for scaling the texture), color (for tinting the texture), and life span. Similar to a typical game object, each particle is defined with behaviors that modify its properties during each update. It is the responsibility of this update function to ensure that the rendition of a collection of particles resembles a familiar physical effect. A particle system is the entity that controls the spawning, updating, and removal of each individual particle. In your game engine, particle systems will be defined as a separate component, just like the physics component.
In the following project, you will first learn about the support required for drawing a particle object. After that, you will examine the details of how to create an actual particle object and define its behaviors. A particle is a new type of object for your game engine and requires the support of the entire drawing system, including custom GLSL shaders, default sharable shader instance, and a new Renderable pair.

Running the Particles project
Q key: To spawn particles at the current mouse position
E key: To toggle the drawing of particle bounds
To understand the details of how to draw a particle and define its behavior
To implement a simple particle system
minion_sprite.png defines the sprite elements for the hero and the minions.
platform.png defines the platforms, floor, and ceiling tiles.
wall.png defines the walls.
target.png identifies the currently selected object.
Particles are textured positions with no area. However, as discussed in the introduction, your engine will draw each particle as a textured rectangle. For this reason, you can simply reuse the existing texture vertex shader texture_vs.glsl.
Under the src/glsl_shaders folder, create a new file and name it particle_fs.glsl.
Similar to the texture fragment shader defined in texture_fs.glsl, you need to declare uPixelColor and vTexCoord to receive these values from the game engine and define the uSampler to sample the texture:
Now implement the main function to accumulate colors without considering the global ambient effect. This serves as one approach for computing the colors of the particles. The function can be modified to support different kinds of particle effects.
Begin by editing the shader_resources.js file in the src/engine/core folder to define the constant, variable, and accessing function for the default particle shader:
In the init() function, make sure to load the newly defined particle_fs GLSL fragment shader:
With the new GLSL fragment shader, particle_fs, properly loaded, you can instantiate a new particle shader when the createShaders() function is called:
In the cleanUp() function, remember to perform the proper cleanup and unload operations:
Lastly, do not forget to export the newly defined function:
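Taken together, the shader_resources.js additions might look like the following sketch. The import paths, the text/resource_map helpers, and the promise-based init() structure follow the resource-loading architecture from Chapter 4; reusing the existing TextureShader class with the new fragment shader is one reasonable choice, so treat these names as assumptions if your engine defines a dedicated particle shader class or organizes its files differently.

```js
// Sketch of the shader_resources.js additions; only the particle-related lines
// are shown, with existing content elided in comments.
import TextureShader from "../shaders/texture_shader.js";
import * as text from "../resources/text.js";
import * as map from "./resource_map.js";

let kTextureVS = "src/glsl_shaders/texture_vs.glsl";     // reused vertex shader (already in the file)
let kParticleFS = "src/glsl_shaders/particle_fs.glsl";   // new fragment shader
let mParticleShader = null;
function getParticleShader() { return mParticleShader; }

function createShaders() {
    // ... existing shader creations ...
    // one reasonable choice: reuse TextureShader with the particle fragment shader
    mParticleShader = new TextureShader(kTextureVS, kParticleFS);
}

function init() {
    let loadPromise = new Promise(
        async function (resolve) {
            await Promise.all([
                // ... existing GLSL source file loads ...
                text.load(kParticleFS)
            ]);
            resolve();
        }).then(() => createShaders());
    map.pushPromise(loadPromise);
}

function cleanUp() {
    mParticleShader.cleanUp();
    // ... existing shader cleanup and source unloads ...
    text.unload(kParticleFS);
}

export { getParticleShader /* plus the existing exports */ };
```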
With the default particle shader class defined to interface to the GLSL particle_fs shader, you can now create a new Renderable object type to support the drawing of particles. Fortunately, the detailed behaviors of a particle, or a textured position, are identical to that of a TextureRenderable with the exception of the different shader. As such, the definition of the ParticleRenderable object is trivial.
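Assuming the base Renderable class exposes the protected _setShader() helper used by the other Renderable subclasses, a minimal sketch of particle_renderable.js could be as short as this:

```js
// Sketch of particle_renderable.js: a TextureRenderable that swaps in the
// default particle shader (import paths are assumptions).
import TextureRenderable from "./texture_renderable.js";
import * as shaderResources from "../core/shader_resources.js";

class ParticleRenderable extends TextureRenderable {
    constructor(myTexture) {
        super(myTexture);
        // the only difference: draw with the default particle shader
        super._setShader(shaderResources.getParticleShader());
    }
}

export default ParticleRenderable;
```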
Edit default_resources.js in the src/engine/resources folder, add an import from texture.js to access the texture loading functionality, and define a constant string for the location of the particle texture map and an accessor for this string:
In the init() function, call the texture.load() function to load the default particle texture map:
In the cleanUp() function, make sure to unload the default texture:
Finally, remember to export the accessor:
With this integration, the default particle texture file will be loaded into the resource_map during system initialization. This default texture map can be readily accessed with the value returned by the getDefaultPSTexture() function.
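For reference, the default_resources.js additions might resemble the following sketch; the texture file path and import locations are assumptions based on the structure described above.

```js
// Sketch of the default_resources.js additions for the default particle texture.
import * as texture from "./texture.js";
import * as map from "../core/resource_map.js";

let kDefaultPSTexture = "assets/particle.png";       // assumed default particle texture path
function getDefaultPSTexture() { return kDefaultPSTexture; }

function init() {
    let loadPromise = new Promise(
        async function (resolve) {
            await Promise.all([
                // ... existing default resource loads (fonts, etc.) ...
                texture.load(kDefaultPSTexture)
            ]);
            resolve();
        });
    map.pushPromise(loadPromise);
}

function cleanUp() {
    // ... existing unloads ...
    texture.unload(kDefaultPSTexture);
}

export { getDefaultPSTexture /* plus the existing exports */ };
```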
With the drawing infrastructure defined, you can now define the engine component to manage the behavior of the particle system. For now, the only functionality required is to include a default system acceleration for all particles.
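A minimal sketch of this initial particle_system.js component follows; the numeric values are placeholders, not required settings.

```js
// Sketch of the initial particle_system.js component: a single shared default
// acceleration that newly created particles can adopt.
let mSystemAcceleration = [30, -50.0];   // placeholder default, roughly "gravity plus drift"

function getSystemAcceleration() {
    return [mSystemAcceleration[0], mSystemAcceleration[1]];  // return a copy
}
function setSystemAcceleration(x, y) {
    mSystemAcceleration[0] = x;
    mSystemAcceleration[1] = y;
}

export { getSystemAcceleration, setSystemAcceleration };
```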
Before continuing, make sure to update the engine access file, index.js, to allow game developer access to the newly defined functionality.
You are now ready to define the actual particle, its default behaviors, and the class for a collection of particles.
Particles are lightweight game objects with simple properties that wrap around a ParticleRenderable for drawing. To properly support motion, particles also implement movement approximation with Symplectic Euler Integration.
Begin by creating the particles subfolder in the src/engine folder. This folder will contain particle-specific implementation files.
In the src/engine/particles folder, create particle.js, and define the constructor to include variables for position, velocity, acceleration, drag, and drawing parameters for debugging:
Define the draw() function to draw the particle as a TextureRenderable and a drawMarker() debug function to draw an X marker at the position of the particle:
You can now implement the update() function to compute the position of the particle based on Symplectic Euler Integration, where the scaling with the mDrag variable simulates drags on the particles. Notice that this function also performs incremental changes to the other parameters including color and size. The mCyclesToLive variable informs the particle system when it is appropriate to remove this particle.
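As a concrete reference, the heart of update() might look like the sketch below, shown as it would appear inside the Particle class. The gl-matrix vec2/vec4 utilities, a 60-updates-per-second loop, and accessor names such as getPosition(), getXform(), and setSize() are assumptions consistent with the engine built in earlier chapters.

```js
// Sketch of Particle.update(): a Symplectic Euler step with drag, plus
// incremental color and multiplicative size changes and a life countdown.
update() {
    this.mCyclesToLive--;                  // ParticleSet removes this particle at expiry

    let dt = 1 / 60;                       // assumes a 60-updates-per-second loop
    // Symplectic Euler: update velocity first, then position with the new velocity
    vec2.scaleAndAdd(this.mVelocity, this.mVelocity, this.mAcceleration, dt);
    vec2.scale(this.mVelocity, this.mVelocity, this.mDrag);   // simulated drag
    let p = this.getPosition();
    vec2.scaleAndAdd(p, p, this.mVelocity, dt);

    // incremental appearance changes
    let c = this.mRenderComponent.getColor();
    vec4.add(c, c, this.mDeltaColor);                         // fade toward the final color
    let xf = this.mRenderComponent.getXform();                // multiplicative size change
    xf.setSize(xf.getWidth() * this.mSizeDelta, xf.getHeight() * this.mSizeDelta);
}
```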
Define simple get and set accessors. These functions are straightforward and are not listed here.
In the src/engine/particles folder, create particle_set.js, and define ParticleSet to be a subclass of GameObjectSet:
Override the draw() function of GameObjectSet to ensure particles are drawn with additive blending:
Recall from Chapter 5 that the default gl.blendFunc() setting implements transparency by blending according to the alpha channel values. This is referred to as alpha blending. In this case, the gl.blendFunc() setting simply accumulates colors without considering the alpha channel. This is referred to as additive blending. Additive blending often results in oversaturation of pixel colors, that is, RGB components with values greater than the maximum displayable value of 1.0. The oversaturation of pixel color is often desirable when simulating the intense brightness of fire and explosions.
Override the update() function to ensure expired particles are removed:
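Putting the two overrides together, a sketch of the ParticleSet class might look like the following; the gl accessor module, the GameObjectSet methods, and a hasExpired() accessor on Particle (based on mCyclesToLive) are assumptions.

```js
// Sketch of particle_set.js: additive blending for draw(), removal of expired
// particles in update().
import GameObjectSet from "../game_objects/game_object_set.js";
import * as glSys from "../core/gl.js";

class ParticleSet extends GameObjectSet {
    draw(aCamera) {
        let gl = glSys.get();
        gl.blendFunc(gl.ONE, gl.ONE);        // additive blending: simply accumulate colors
        super.draw(aCamera);
        gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);  // restore default alpha blending
    }

    update() {
        super.update();
        // iterate backward so removal does not disturb the loop
        let i;
        for (i = this.size() - 1; i >= 0; i--) {
            let p = this.getObjectAt(i);
            if (p.hasExpired()) {            // assumed accessor: mCyclesToLive has run out
                this.removeFromSet(p);
            }
        }
    }
}

export default ParticleSet;
```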
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
There are two important observations to be made on the _createParticle() function. First, the random() function is used many times to configure each created Particle. Particle systems utilize large numbers of similar particles with slight differences to build and convey the desired visual effect. It is important to avoid any patterns by using randomness. Second, there are many seemingly arbitrary numbers used in the configuration, such as setting the life of the particle to be between 30 and 230 or setting the final red component to a number between 3.5 and 4.5. This is unfortunately the nature of working with particle systems. There is often quite a bit of ad hoc experimentation. Commercial game engines typically alleviate this difficulty by releasing a collection of preset values for their particle systems. In this way, game designers can fine-tune specific desired effects by adjusting the provided presets.
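For illustration, a client-side _createParticle() callback along these lines might look like the following sketch in my_game.js. Every numeric range is an ad hoc tuning value, and the Particle constructor signature and accessor names (setFinalColor(), setSizeDelta(), and so on) stand in for the simple get and set accessors mentioned earlier; adjust them to match your Particle implementation.

```js
import engine from "../engine/index.js";   // client-side import convention assumed

// All numeric ranges below are ad hoc tuning values; the constructor signature
// (texture, x, y, life) is an assumption.
function _createParticle(atX, atY) {
    let life = 30 + Math.random() * 200;   // between 30 and 230 updates
    let p = new engine.Particle(
        engine.defaultResources.getDefaultPSTexture(), atX, atY, life);

    p.setColor([1, 0, 0, 1]);              // start as a red, fully opaque particle

    let r = 5.5 + Math.random() * 0.5;     // size between 5.5 and 6
    p.setSize(r, r);

    // final color: an oversaturated reddish-yellow reached over the lifetime
    let fr = 3.5 + Math.random();          // red between 3.5 and 4.5
    p.setFinalColor([fr, 0.4, 0.3, 0.6]);

    // initial velocity: mostly upward with a little horizontal spread
    p.setVelocity(10 - 20 * Math.random(), 10 * Math.random());

    p.setSizeDelta(0.98);                  // shrink slightly every update
    return p;
}
```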
Run the project and press the Q key to observe the generated particles. It appears as though there is combustion occurring underneath the mouse pointer. Hold the Q key and move the mouse pointer around slowly to observe the combustion as though there is an engine generating flames beneath the mouse. Type the E key to toggle the drawing of individual particle positions. Now you can observe a green X marking the position of each of the generated particles.
If you move the mouse pointer rapidly, you can observe individual pink circles with green X centers changing color while dropping toward the floor. Although all particles are created by the _createParticle() function and share the similar behaviors of falling toward the floor while changing color, every particle appears slightly different and does not exhibit any behavior patterns. You can now clearly observe the importance of integrating randomness in the created particles.
There are limitless variations to how you can modify the _createParticle() function. For example, you can change the explosion-like effect to steam or smoke simply by changing the initial and final color to different shades of gray and transparencies. Additionally, you can modify the default particle texture by inverting the color to create black smoke effects. You could also modify the size change delta to be greater than 1 to increase the size of the particles over time. There are literally no limits to how particles can be created. The particle system you have implemented allows the game developer to create particles with customized behaviors that are most suitable to the game that they are building.
Lastly, notice that the generated particles do not interact with the RigidShape objects, and it appears as though the particles are drawn over the rest of the objects in the game scene. This issue will be examined and resolved in the next project.
One approach to integrating particles into a game scene is to have the particles follow the implied rules of the scene and interact with the non-particle objects accordingly. The ability to detect collisions is the foundation for interactions between objects. For this reason, it is sometimes important to support particle collisions with the other, non-particle game objects.
Since particles are defined only by their positions with no dimensions, the actual collision computations can be relatively straightforward. However, there are typically a large number of particles; as such, the number of collisions to be performed can also be large. As a compromise to reduce computational cost, particle collisions can be based on RigidShape objects instead of the actual Renderable objects. This is similar to the physics component, where the actual simulation is based on simple rigid shapes that approximate the potentially geometrically complicated Renderable objects.

Running the Particle Collisions project
Q key: To spawn particles at the current mouse position
E key: To toggle the drawing of particle bounds
1 key: To toggle Particle/RigidShape collisions
To understand and resolve collisions between individual particle positions and RigidShape objects
To build a particle engine component that supports interaction with RigidShape
Edit particle_system.js to define and initialize temporary local variables for resolving collisions with RigidShape objects. The mCircleCollider object will be used to represent individual particles in collisions.
Define the resolveCirclePos() function to resolve the collision between a RigidCircle and a position by pushing the position outside of the circle shape:
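The geometry involved is simple enough to sketch directly: if the position lies inside the circle, push it out along the direction from the circle center to the position. The RigidCircle accessors, the global vec2 utilities from gl-matrix, and the module-level temporary are assumptions that follow the physics component from the previous chapter.

```js
// Sketch of resolveCirclePos(circShape, particle) in particle_system.js.
let mFrom1to2 = [0, 0];   // one of the module-level temporaries mentioned above

function resolveCirclePos(circShape, particle) {
    let collided = false;
    let pos = particle.getPosition();          // reference to the particle's position
    let cPos = circShape.getCenter();

    vec2.subtract(mFrom1to2, pos, cPos);       // vector from circle center to particle
    let dist = vec2.length(mFrom1to2);
    if (dist < circShape.getRadius()) {
        // push the position to the circle boundary along the center-to-position direction
        vec2.scale(mFrom1to2, mFrom1to2, 1 / dist);   // normalize (zero length not handled here)
        vec2.scaleAndAdd(pos, cPos, mFrom1to2, circShape.getRadius());
        collided = true;
    }
    return collided;
}
```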
Define the resolveRectPos() function to resolve the collision between a RigidRectangle and a position by wrapping the mCircleCollider local variable around the position and invoking the RigidCircle to RigidRectangle collision function. When interpenetration is detected, the position is pushed outside of the rectangle shape according to the computed mCollisionInfo.
Implement resolveRigidShapeCollision() and resolveRigidShapeSetCollision() to allow convenient invocation by client game developers. These functions resolve collisions between a single or a set of RigidShape objects and a ParticleSet object.
Lastly, remember to export the newly defined functions:
As in previous projects, you can run the project and create particles with the Q and E keys. However, notice that the generated particles do not overlap with any of the objects. You can even try moving your mouse pointer to within the bounds of one of the RigidShape objects and then type the Q key. Notice that in all cases, the particles are generated outside of the shapes.
You can try typing the 1 key to toggle collisions with the rigid shapes. Note that with collisions enabled, the particles somewhat resemble the amber particles from a fire or an explosion where they bounce off the surfaces of RigidShape objects in the scene. When collision is toggled off, as you have observed from the previous project, the particles appear to be burning or exploding in front of the other objects. In this way, collision is simply another parameter for controlling the integration of the particle system with the rest of the game engine.
You may find it troublesome to continue to press the Q key to generate particles. In the next project, you will learn about generation of particles over a fixed period of time.
With your current particle system implementation, you can create particles at a specific point and time. These particles can move and change based on their properties. However, particles can be created only when there is an explicit state change such as a key click. This becomes restrictive when it is desirable for the generation of particles to persist after the state change, such as an explosion or firework that continues for a short while after the creation of a new RigidShape object. A particle emitter addresses this issue by defining the functionality of generating particles over a time period.

Running the Particle Emitters project
Q key: To spawn particles at the current mouse position
E key: To toggle the drawing of particle bounds
1 key: To toggle Particle/RigidShape collisions
To understand the need for particle emitters
To experience implementing particle emitters
In the src/engine/particles folder, create particle_emitter.js; define the ParticleEmitter class with a constructor that receives the location, the number of particles to emit, and a callback defining how to create new particles. Note that the mParticleCreator variable expects a callback function. When required, this function will be invoked to create a particle.
Define a function to return the current status of the emitter. When there are no more particles to emit, the emitters should be removed.
Create a function to actually create or emit particles. Take note of the randomness in the number of particles that are actually emitted and the invocation of the mParticleCreator() callback function. With this design, it is unlikely to encounter patterns in the number of particles that are created over time. In addition, the emitter defines only the mechanisms of how, when, and where particles will be emitted and does not define the characteristics of the created particles. The function pointed to by mParticleCreator is responsible for defining the actual behavior of each particle.
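A sketch of particle_emitter.js along these lines is shown below; the kMinToEmit floor and the use of Math.random() to vary the per-call emission count are design choices rather than requirements.

```js
// Sketch of particle_emitter.js.
let kMinToEmit = 5;   // smallest number of particles to emit in one call

class ParticleEmitter {
    constructor(px, py, num, createrFunc) {
        this.mEmitPosition = [px, py];        // where to emit
        this.mNumRemains = num;               // how many particles are left to emit
        this.mParticleCreator = createrFunc;  // callback that creates one particle
    }

    expired() { return (this.mNumRemains <= 0); }

    emitParticles(pSet) {
        let numToEmit = 0;
        if (this.mNumRemains < kMinToEmit) {
            numToEmit = this.mNumRemains;     // drain the remaining particles
        } else {
            // emit a random portion of what remains to avoid visible patterns
            numToEmit = Math.trunc(Math.random() * 0.2 * this.mNumRemains);
        }
        this.mNumRemains -= numToEmit;

        let i, p;
        for (i = 0; i < numToEmit; i++) {
            p = this.mParticleCreator(this.mEmitPosition[0], this.mEmitPosition[1]);
            pSet.addToSet(p);
        }
    }
}

export default ParticleEmitter;
```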
Lastly, remember to update the engine access file, index.js, to allow game developer access to the ParticleEmitter class.
Edit particle_set.js in the src/engine/particles folder, and define a new variable for maintaining emitters:
Define a function for instantiating a new emitter. Take note of the func parameter. This is the callback function that is responsible for the actual creation of individual Particle objects.
Modify the update function to loop through the emitter set to generate new particles and to remove expired emitters:
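The emitter-related additions to ParticleSet might look like the following sketch; only the new and changed members are shown, and the removal of expired particles remains as in the previous project.

```js
// Sketch of the emitter support added to ParticleSet.
import GameObjectSet from "../game_objects/game_object_set.js";
import ParticleEmitter from "./particle_emitter.js";

class ParticleSet extends GameObjectSet {
    constructor() {
        super();
        this.mEmitterSet = [];                       // active ParticleEmitter instances
    }

    addEmitterAt(x, y, n, func) {
        // func is the callback that actually creates each Particle
        this.mEmitterSet.push(new ParticleEmitter(x, y, n, func));
    }

    update() {
        super.update();
        // ... remove expired particles as in the previous project ...

        // let each emitter spawn particles, then drop emitters that are done
        let i;
        for (i = this.mEmitterSet.length - 1; i >= 0; i--) {
            this.mEmitterSet[i].emitParticles(this);
            if (this.mEmitterSet[i].expired()) {
                this.mEmitterSet.splice(i, 1);
            }
        }
    }
}

export default ParticleSet;
```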
This is a straightforward test of the correct functioning of the ParticleEmitter object. The MyGame class update() function is modified to create a new ParticleEmitter at the position of the RigidShape object when the G or H key is pressed. In this way, it will appear as though an explosion has occurred when a new RigidShape object is created or when RigidShape objects are assigned new velocities.
In both cases, the _createParticle() function discussed in the first project of this chapter is passed as the argument for the createrFunc callback function parameter in the ParticleEmitter constructor.
Run the project and observe the initial firework-like explosions at the locations where the initial RigidShape objects are created. Type the G key to observe the accompanied explosion in the general vicinity of the newly created RigidShape object. Alternatively, you can type the H key to apply velocities to all the shapes and observe explosion-like effects next to each RigidShape object. For a very rough sense of what this particle system may look like in a game, you can try enabling texturing (with the T key), disabling RigidShape drawing (with the R key), and typing the H key to apply velocities. Observe that it appears as though the Renderable objects are being blasted by the explosions.
Notice how each explosion persists for a short while before disappearing gradually. Compare this effect with the one resulting from a short tapping of the Q key, and observe that without a dedicated particle emitter, the explosion seems to have fizzled before it begins.
Allowing the position of the emitter to change over time, for example, attaching the emitter to the end of a rocket
Allowing the emitter to affect the properties of the created particles, for example, changing the acceleration or velocity of all created particles to simulate wind effects
Based on the simple and yet flexible particle system you have implemented, you can now experiment with all these ideas in a straightforward manner.
There are three simple takeaways from this chapter. First, you have learned that particles, positions with an appropriate texture and no dimensions, can be useful in describing interesting physical effects. Second, the capability to collide and interact with other objects assists with the integration and placement of particles in game scenes. Lastly, in order to achieve the appearance of familiar physical effects, the emitting of particles should persist over some period of time.
You have developed a simple and yet flexible particle system to support the consistent management of individual particles and their emitters. Your system is simple because it consists of a single component, defined in particle_system.js, with only three simple supporting classes defined in the src/engine/particles folder. The system is flexible because of the callback mechanism for the actual creation of particles where the game developers are free to define and generate particles with any arbitrary behaviors.
The particle system you have built serves to demonstrate the fundamentals. To increase the sophistication of particle behaviors, you can subclass from the simple Particle class, define additional parameters, and amend the update() function accordingly. To support additional physical effects, you can consider modifying or subclassing from the ParticleEmitter class and emit particles according to your desired formulations.

Visual techniques like those shown in this graphic are often used in graphic novels to represent various fast-moving or high-impact actions like explosions, punches, crashes, and the like; similar visual techniques have also been used quite effectively in film and video games
Particle effects can also be used either in realistic ways that mimic how we’d expect them to behave in the real world or in more creative ways that have no connection to real-world physics. Try using what you’ve learned from the examples in this chapter and experiment with particles in your current game prototype as we left it in Chapter 9: can you think of some uses for particles in the current level that might support and reinforce the presence of existing game elements (e.g., sparks flying if the player character touches the force field)? What about introducing particle effects that might not directly relate to gameplay but enhance and add interest to the game setting?
Implement background tiling with any image in any given camera WC bounds
Understand parallax and simulate motion parallax with parallax scrolling
Appreciate the need for layering objects in 2D games and support layered drawing
By this point, your game engine is capable of illuminating 2D images to generate highlights and shadows and simulating basic physical behaviors. To conclude the engine development for this book, this chapter focuses on the general support for creating the game world environment with background tiling and parallax as well as relieving the game programmers from having to manage draw order.
Background images or objects are included to decorate the game world and further engage the players. This often requires the image to be vast in scale with subtle visual complexities. For example, in a side-scrolling game, the background must always be present, and simple motion parallax can create a sense of depth and further capture the players' interest.

Tiling of a strategically drawn background image

Parallax: objects appearing at different positions when observed from different viewpoints
This chapter presents a general algorithm for tiling the camera WC bounds and describes an abstraction for hiding the details of parallax scrolling. With the increase in visual complexity of the background, this chapter discusses the importance of and creates a layer manager to alleviate game programmers from the details of draw ordering.

Generating tiled background for camera WC bounds
There are many ways to compute the required tiling for a given background object and the camera WC bounds. A simple approach is to determine the tile position that covers the lower-left corner of the WC bound and tile in the positive x and y directions.

Running the Tiled Objects project
WASD keys: Move the Dye character (the hero) to pan the WC window bounds
To experience working with multiple layers of background
To implement the tiling of background objects for camera WC window bounds
You can find the following external resources in the assets folder. The fonts folder contains the default system fonts and six texture images: minion_sprite.png, minion_sprite_normal.png, bg.png, bg_normal.png, bg_layer.png, and bg_layer_normal.png. The Hero and Minion objects are represented by sprite elements in the minion_sprite.png image, and bg.png and bg_layer.png are two layers of background images. The corresponding _normal files are the normal maps.
Create a new file in the src/engine/game_objects folder and name it tiled_game_object.js. Add the following code to construct the object. The mShouldTile variable provides the option to stop the tiling process.
Define the getter and setter functions for mShouldTile:
Define the function to tile and draw the Renderable object to cover the WC bounds of the aCamera object:
The _drawTile() function computes and repositions the Renderable object to cover the lower-left corner of the camera WC bounds and tiles the object in the positive x and y directions. Note the following:
Steps A and B compute the position and dimension of the tiling object and the camera WC bounds.
Step C computes the dx and dy offsets that will translate the Renderable object with bounds that cover the lower-left corner of the aCamera WC bounds. The calls to the Math.ceil() function ensure that the computed dx and dy are integer multiples of the Renderable width and height. This is important to ensure there are no overlaps or gaps during tiling.
Step D saves the original position of the Renderable object before offsetting and drawing it. Step E offsets the Renderable object to cover the lower-left corner of the camera WC bounds.
Step F computes the number of repeats required, and step G tiles the Renderable object in the positive x and y directions until the results cover the entire camera WC bounds. The calls to the Math.ceil() function ensure that the computed nx and ny, the number of times to tile in the x and y directions, are integers.
Step H resets the position of the tiled object to the original location.
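A sketch of _drawTile() that follows steps A through H is shown below. The Camera accessors (getWCCenter(), getWCWidth(), getWCHeight()) and Transform accessors (getPosition(), incXPosBy(), setPosition(), and so on) are assumed to match those from earlier chapters.

```js
// Sketch of TiledGameObject._drawTile(aCamera), following steps A through H.
_drawTile(aCamera) {
    // Step A: object position and dimensions
    let xf = this.getXform();
    let w = xf.getWidth(), h = xf.getHeight();
    let pos = xf.getPosition();
    let left = pos[0] - w / 2, bottom = pos[1] - h / 2;

    // Step B: camera WC bounds
    let wcCenter = aCamera.getWCCenter();
    let wcLeft = wcCenter[0] - aCamera.getWCWidth() / 2;
    let wcBottom = wcCenter[1] - aCamera.getWCHeight() / 2;

    // Step C: offsets in integer multiples of (w, h) so tiles never overlap or gap
    let dx = 0, dy = 0;
    if (left > wcLeft) dx = -Math.ceil((left - wcLeft) / w) * w;
    else dx = Math.floor((wcLeft - left) / w) * w;
    if (bottom > wcBottom) dy = -Math.ceil((bottom - wcBottom) / h) * h;
    else dy = Math.floor((wcBottom - bottom) / h) * h;

    // Step D: save the original position; Step E: cover the lower-left corner
    let sX = pos[0], sY = pos[1];
    xf.incXPosBy(dx);
    xf.incYPosBy(dy);

    // Step F: number of repeats needed to cover the WC bounds
    let nx = Math.ceil((wcLeft + aCamera.getWCWidth() - (left + dx)) / w);
    let ny = Math.ceil((wcBottom + aCamera.getWCHeight() - (bottom + dy)) / h);

    // Step G: tile in the positive x and y directions
    let i, j;
    for (i = 0; i < ny; i++) {
        for (j = 0; j < nx; j++) {
            this.mRenderComponent.draw(aCamera);
            xf.incXPosBy(w);
        }
        xf.incXPosBy(-nx * w);  // return to the start of the row
        xf.incYPosBy(h);
    }

    // Step H: restore the original position
    xf.setPosition(sX, sY);
}
```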
Override the draw() function to call the _drawTile() function when tiling is enabled:
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
MyGame should test for the correctness of object tiling. To test multiple layers of tiling, two separate instances of TiledGameObject and Camera are created. The two TiledGameObject instances are located at different distances from the cameras (z depth) and are illuminated by different combinations of light sources. The second camera is added and focused on one of the Hero objects.
In the listed code, the two cameras are first created in step A, followed by the creation and initialization of all the light sources in the _initializeLights() function. Step C defines bgR as a TiledGameObject with an IllumRenderable that is illuminated by one light source. Step D defines the second TiledGameObject based on another IllumRenderable that is illuminated by four light sources. Since the mShouldTile variable of the TiledGameObject class defaults to true, both of the tiled objects will tile the camera that they are drawn to.
You can now run the project and move the Hero object with the WASD keys. As expected, the two layers of tiled backgrounds are clearly visible. You can switch off the illumination to the farther background by selecting and turning off light source 1 (type the 1 key followed by the H key). Move the Hero object to pan the cameras to verify that the tiling and the background movement behaviors are correct in both of the cameras.
An interesting observation is that while the two layers of backgrounds are located at different distances from the camera, when the camera pans, the two background images scroll synchronously. If not for the differences in light source illumination, it would appear as though the background is actually a single image. This example illustrates the importance of simulating motion parallax.

Top view of a scene with two background objects at different distances

Top view of parallax scrolling with stationary camera

Top view of parallax scrolling with the camera in motion
It is important to note that in the described approach to implement parallax scrolling for a moving camera, stationary background objects are displaced. There are two limitations to this approach. First, the object locations are changed for the purpose of conveying visual cues and do not reflect any specific game state logic. This can create challenging conflicts if the game logic requires the precise control of the movements of the background objects. Fortunately, background objects are usually designed to serve the purposes of decorating the environment and engaging the players. Background objects typically do not participate in the actual gameplay logic. The second limitation is that the stationary background objects are actually in motion and will appear so when viewed from cameras other than the one causing the motion parallax. When views from multiple cameras are necessary in the presence of motion parallax, it is important to carefully coordinate them to avoid player confusion.

Running the Parallax Objects project
P key: Toggles the drawing of a second camera that is not in motion to highlight background object movements in simulating parallax scrolling
WASD keys: Move the Dye character (the hero) to pan the WC window bounds
To understand and appreciate motion parallax
To simulate motion parallax with parallax scrolling
Create parallax_game_object.js in the src/engine/game_objects folder, and add the following code to construct the object:
Define the getter and setter functions for mParallaxScale. Notice the clamping of negative values; this variable must be positive.
Override the update() function to implement parallax scrolling:
Define the _refPosUpdate() function:
Define the function to translate the object to implement parallax scrolling. The negative delta is designed to move the object in the same direction as that of the camera. Notice the variable f is 1 minus the inverse of mParallaxScale.
When mParallaxScale is less than 1, the inverse is greater than 1 and f becomes a negative number. In this case, when the camera moves, the object will move in the opposite direction and thus create the sensation that the object is in front of the default distance.
Conversely, when mParallaxScale is greater than 1, its inverse will be less than 1, resulting in a positive f with a value of less than 1. In this case, the object will move in the same direction as the camera, only slower. A larger mParallaxScale corresponds to an f value closer to 1, so the movement of the object will be closer to that of the camera, and the object will appear to be at a greater distance from the camera.
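In code, the translation described above might be sketched as follows; the incoming delta is assumed to be the previous camera reference position minus the current one, which is why negating it moves the object along with the camera.

```js
// Sketch of the translation inside ParallaxGameObject, where f = 1 - 1/mParallaxScale.
_setWCTranslationBy(aCameraWCDelta) {
    let f = (1 - (1 / this.mParallaxScale));   // 0 < f < 1 when mParallaxScale > 1
    this.getXform().incXPosBy(-aCameraWCDelta[0] * f);
    this.getXform().incYPosBy(-aCameraWCDelta[1] * f);
}
```

With the scales used in the MyGame level that follows, a mParallaxScale of 5 gives f = 0.8, so that background tracks 80 percent of the camera's motion, while a scale of 0.9 gives an f of roughly -0.11, so the front layer drifts slightly against the camera and appears closer than the default distance.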
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
The mBg object is created as a ParallaxGameObject with a scale of 5, mBgL1 with a scale of 3, and mFront with a scale of 0.9. Recall that scale is the second parameter of the ParallaxGameObject constructor. This parameter signifies the object distance from the camera, with values greater than 1 being farther from and values less than 1 being closer to the default distance. In this case, mBg is the furthest away from the camera while mBgL1 is closer. Regardless, both are still behind the default distance. The mFront object is the closest to the camera and in front of the default distance or in front of the Hero object.
You can now run the project and observe the darker foreground layer partially blocking the Hero and Minion objects. You can move the Hero object to pan the camera and observe the two background layers scrolling at different speeds. The mBg object is farther away and thus scrolls slower than the mBgL1 object. You will also notice the front-layer parallax scrolls at a faster speed than all other objects, and as a result, panning the camera reveals different parts of the stationary Minion objects.
Press the P key to enable the drawing of the second camera. Notice that when the Hero is stationary, the view in this camera is as expected, not moving. Now, if you move the Hero object to pan the main camera, note the foreground and background objects in the second camera view are also moving and exhibit motion parallax even though the second camera is not moving! As game designers, it is important to ensure this side effect does not confuse the player.
Heads-up display (HUD) layer: Typically the closest layer to the camera, displaying essential user interface information
Foreground or front layer: The layer in front of the game objects for decorative or partial occlusion of the game objects
Actor layer: The default distance layer in Figure 11-5, where all game objects reside
Shadow receiver layer: The layer behind the actor layer to receive potential shadows
Background layer: The decorative background
Each layer will reference all objects defined for that layer, and these objects will be drawn in the order they were inserted into the layer, with the last inserted drawn last and covering objects before it. This section presents the Layer engine component to support the described five layers and relieve game programmers from the details of managing the updating and drawing of the objects. Note that the number of layers a game engine should support is determined by the kinds of games that the engine is designed to build. The five layers presented are logical and convenient for simple games. You may choose to expand the number of layers in your own game engine.

Running the Layer Manager project
P key: Toggles the drawing of a second camera that is not in motion to highlight background object movements in simulating parallax scrolling
WASD keys: Move the Dye character (the hero) to pan the WC window bounds
To appreciate the importance of layering in 2D games
To develop a layer manager engine component
Create a new file in the src/engine/components folder and name it layer.js. This file will implement the Layer engine component.
Define enumerators for the layers:
Define appropriate constants and instance variables to keep track of the layers. The mAllLayers variable is an array of GameObjectSet instances representing each of the five layers.
Define an init() function to create the array of GameObjectSet instances:
Define a cleanUp() function to reset the mAllLayers array:
Define functions to add to, remove from, and query the layers. Note the addAsShadowCaster() function assumes that the shadow receiver objects are already inserted into the eShadowReceiver layer and adds the casting object to all receivers in the layer.
Define functions to draw a specific layer or all the layers, from the furthest to the nearest to the camera:
Define a function to move a specific object such that it will be drawn last (on top):
Define functions to update a specific layer or all the layers:
Remember to export all the defined functionality:
Lastly, remember to update the engine access file, index.js, to forward the newly defined functionality to the client.
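Pulling the pieces together, layer.js might look like the following sketch. The enumeration order (background drawn first, HUD last) matches the drawing order described above; the addAsShadowCaster() and move-to-front helpers are omitted here for brevity, and the import path is an assumption.

```js
// Sketch of layer.js: five GameObjectSet instances drawn from furthest to nearest.
import GameObjectSet from "../game_objects/game_object_set.js";

// enumerated layers, indexed from the furthest to the nearest
const eBackground = 0;
const eShadowReceiver = 1;
const eActors = 2;
const eFront = 3;
const eHUD = 4;
const kNumLayers = 5;

let mAllLayers = [];

function init() {
    mAllLayers[eBackground] = new GameObjectSet();
    mAllLayers[eShadowReceiver] = new GameObjectSet();
    mAllLayers[eActors] = new GameObjectSet();
    mAllLayers[eFront] = new GameObjectSet();
    mAllLayers[eHUD] = new GameObjectSet();
}
function cleanUp() { mAllLayers = []; }

// add, remove, and query
function addToLayer(layerEnum, obj) { mAllLayers[layerEnum].addToSet(obj); }
function removeFromLayer(layerEnum, obj) { mAllLayers[layerEnum].removeFromSet(obj); }
function layerSize(layerEnum) { return mAllLayers[layerEnum].size(); }

// draw a single layer, or all layers from the furthest to the nearest
function drawLayer(layerEnum, aCamera) { mAllLayers[layerEnum].draw(aCamera); }
function drawAllLayers(aCamera) {
    let i;
    for (i = 0; i < kNumLayers; i++) mAllLayers[i].draw(aCamera);
}

// update a single layer, or all layers
function updateLayer(layerEnum) { mAllLayers[layerEnum].update(); }
function updateAllLayers() {
    let i;
    for (i = 0; i < kNumLayers; i++) mAllLayers[i].update();
}

export {
    eBackground, eShadowReceiver, eActors, eFront, eHUD,
    init, cleanUp,
    addToLayer, removeFromLayer, layerSize,
    drawLayer, drawAllLayers,
    updateLayer, updateAllLayers
};
```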
You must modify the rest of the game engine slightly to integrate the new Layer component.
Define update functions for objects that may appear as members in Layer: Renderable and ShadowReceiver.
Modify the unload() function to clean up the Layer:
Modify the init() function to add the game objects to the corresponding layers in the Layer component:
Modify the draw() function to rely on the Layer component for the actual drawings:
Modify the update() function to rely on the Layer component for the actual update of all game objects:
You can now run the project and observe the same output and interactions as the previous project. The important observation for this project is in the implementation. By inserting game objects to the proper layers of the Layer component during init(), the draw() and update() functions of a game level can be much cleaner. The simpler and cleaner update() function is of special importance. Instead of being crowded with mundane game object update() function calls, this function can now focus on implementing the game logic and controlling the interactions between game objects.
This chapter explained the need for tiling and introduced the TiledGameObject to implement a simple algorithm that tiles and covers a given camera WC bound. The basics of parallax and approaches to simulate motion parallax with parallax scrolling were introduced. Motion parallax with stationary and moving cameras was examined, and solutions were derived and implemented. You learned that displacing background objects by movements computed relative to the camera motion results in visually pleasing motion parallax but may cause player confusion when viewed from different cameras. With shadow computations introduced earlier and now parallax scrolling, game programmers must dedicate code and attention to coordinating the drawing order of different types of objects. To facilitate the programmability of the game engine, the Layer engine component was presented as a utility tool to relieve game programmers from managing the drawing of the layers.
The game engine proposed by this book is now complete. It can draw objects with texture maps and sprite animations and even supports illumination from various light sources. The engine defines proper abstractions for simple behaviors, implements mechanisms to approximate and accurately compute collisions, and simulates physical behaviors. Views from multiple cameras can be conveniently displayed over the same game screens with manipulation functionality that is smoothly interpolated. Keyboard/mouse input is supported, and background objects can now scroll without bounds, with motion parallax simulated.
The important next step, to properly test your engine, is to go through a simple game design process and implement a game based on your newly completed game engine.
In previous sections, you’ve explored how developing one simple game mechanic from the ground-up can lead in many directions and be applied to a variety of game types. Creative teams in game design studios frequently debate which elements of game design take the lead in the creative process: writers often believe story comes first, while many designers believe that story and everything else must be secondary to gameplay. There’s no right or wrong answer, of course; the creative process is a chaotic system and every team and studio is unique. Some creative directors want to tell a particular story and will search for mechanics and genres that are best suited to supporting specific narratives, while others are gameplay purists and completely devoted to a culture of “gameplay first, next, and last.” The decision often comes down to understanding your audience; if you’re creating competitive multiplayer first-person shooter experience, for example, consumers will have specific expectations for many of the core elements of play, and it’s usually a smart move to ensure that gameplay drives the design. If you’re creating an adventure game designed to tell a story and provide players with new experiences and unexpected twists, however, story and setting might lead the way.
Many game designers (including seasoned veterans as well as those new to the discipline) begin new projects by designing experiences that are relatively minor variations on existing well-understood mechanics; while there are sound reasons for this approach (as in the case of AAA studios developing content for particularly demanding audiences or a desire to work with mechanics that have proven to be successful across many titles), it tends to significantly limit exploration into new territory and is one reason why many gamers complain about creative stagnation and a lack of gameplay diversity between games within the same genre. Many professional game designers grew up enjoying certain kinds of games and dreamed about creating new experiences based on the mechanics we know and love, and several decades of that culture have focused much of the industry around a comparatively small number of similar mechanics and conventions. That said, a rapidly growing independent and small studio community has boldly begun throwing long-standing genre convention to the wind in recent years, and easily accessible distribution platforms like mobile app stores and Valve's Steam have opened opportunities for a wide range of new game mechanics and experiences to flourish.
If you continue exploring game design, you’ll realize there are relatively few completely unique core mechanics but endless opportunities for innovating as you build those elemental interactions into more complex causal chains and add unique flavor and texture through elegant integration with the other elements of game design. Some of the most groundbreaking and successful games were created through exercises very much like the mechanic exploration you’ve done in these “Game Design Considerations” sections; Valve’s Portal, for example, is based on the same kind of “escape the room” sandbox you have been exploring and is designed around a similarly simple base mechanic. What made Portal such a breakthrough hit? While many things need to come together to create a hit game, Portal benefitted from a design team that started building the experience from the most basic mechanic and smartly increased complexity as they became increasingly fluent in its unique structure and characteristics, instead of starting at the 10,000-foot level with a codified genre and a predetermined set of design rules.
Of course, nobody talks about Portal without also mentioning the rogue artificial intelligence character GLaDOS and her Aperture Laboratories playground: setting, narrative, and audiovisual design are as important to the Portal experience as the portal-launching game mechanic, and it’s hard to separate the gameplay from the narrative given how skillfully intertwined they are. The projects in this chapter provide a good opportunity to begin similarly situating the game mechanic from the “Game Design Considerations” sections in a unique setting and context: you’ve probably noticed many of the projects throughout this book are building toward a sci-fi visual theme, with a spacesuit-wearing hero character, a variety of flying robots, and now in Chapter 11 the introduction of parallax environments. While you’re not building a game with the same degree of environment and interaction complexity as Portal, that doesn’t mean you don’t have the same opportunity to develop a highly engaging game setting, context, and cast of characters.
The first thing you should notice about the Tiled Objects project is the dramatic impact on environment experience and scale compared to earlier projects. The factors enhancing presence in this project are the three independently moving layers (hero character, moving wall, and stationary wall) and the seamless tiling of the two background layers. Compare the Tiled Objects project to the Shadow Shaders project from Chapter 8, and notice the difference in presence when the environment is broken into multiple layers that appear to move in an analogous (if not physically accurate) way to how you experience movement in the physical world. The sense of presence is further strengthened when you add multiple background layers of parallax movement in the Parallax Objects project; as you move through the physical world, the environment appears to move at different speeds, with closer objects seeming to pass by quickly while objects toward the horizon appear to move slowly. Parallax environment objects simulate this effect, adding considerable depth and interest to game environments. The Layer Manager project pulls things together and begins to show the potential for a game setting to immediately engage the imaginations of players. With just a few techniques, you’re able to create the impression of a massive environment that might be the interior of an ancient alien machine, the outside of a large spacecraft, or anything else you might care to create. Try using different kinds of image assets with this technique: exterior landscapes, underwater locations, abstract shapes, and the like would all be interesting to explore. You’ll often find inspiration for game settings by experimenting with just a few basic elements, as you did in Chapter 11.
Pairing environment design (both audio and visual) with interaction design (and occasionally the inclusion of haptic feedback-like controller vibrations) is an approach you can use to create and enhance presence, and the relationship that environments and interactions have with the game mechanic contributes the majority of what players experience in games. Environment design and narrative context create the game setting, and as previously mentioned, the most successful and memorable games achieve an excellent harmony between game setting and player experience. At this point, the game mechanic from the “Game Design Considerations” section in Chapter 9 has been intentionally devoid of any game setting context, and you’ve only briefly considered the interaction design, leaving you free to explore any setting that captures your interest. In Chapter 12, you’ll further evolve the sci-fi setting and image assets used in the main chapter projects with the unlocking mechanic from the “Game Design Considerations” section to create a fairly advanced 2D platformer game-level prototype.
The projects included in the main sections of Chapters 1 to 11 began with simple shapes and slowly introduced characters and environments to illustrate the concepts of each chapter; those projects focused on individual behaviors and techniques (such as collision detection, object physics, lighting, and the like) but lacked the kind of structured challenges necessary to deliver a full gameplay experience. The projects in the “Design Considerations” sections demonstrate how to introduce the types of logical rules and challenges required to turn basic behaviors into well-formed game mechanics. This chapter now changes the focus to emphasize the design process from an early concept through a functional prototype, bringing together and extending the work done in earlier projects by using some characters and environments from prior chapters along with the basic idea for the unlocking platform game from the “Design Considerations” section of Chapter 11. As with earlier chapters, the design framework utilized here begins with a simple and flexible starting template and adds complexity incrementally and intentionally to allow the game to grow in a controlled manner.
The design exercises have until now avoided consideration of most of the nine elements of game design described in the “How Do You Make a Great Video Game?” section of Chapter 1 and instead focused on crafting the basic game mechanic in order to clearly define and refine the core characteristics of the game itself. The design approach used in this book is a ground-up framework that emphasizes first working with an isolated game mechanic prior to the consideration of the game’s genre or setting; when you begin incorporating a setting and building out levels that include additional design elements on top of the core mechanic, the gameplay will grow and evolve in unique directions as you grow the game world. There are endless potential variations for a game’s mechanic and the associated gameplay loops you design. You’ll be surprised by how differently the same foundational elements of gameplay develop and evolve based on the kind of creative choices you make.

The 2D implementation from Chapter 11
The player must jump the hero character (the circle with the letter p in the center of Figure 12-1) on the moving elevator platform (#1 in Figure 12-1) and jump off to the middle platform of the right column before touching the energy field.
The player activates the off switch for the energy field by colliding the hero character with it (#2, represented by the lightbulb icon in Figure 12-1).
When the energy field is switched off, the player rides the elevator platform to the top (#3) and jumps the hero to the top platform in the right column.
The player collides the hero with the small circle that represents the top third of the lock icon (#4), activating the corresponding part of the lock icon and making it glow.
The player jumps the hero back on the elevator platform (#5) and then jumps the hero to the bottom platform in the right column.
The player collides the hero with the shape corresponding to the middle section of the lock icon (#6), activating the corresponding part of the lock icon and making it glow. Two-thirds of the lock icon now glows, signaling progress.
The player jumps the hero on the elevator platform once again (#7) and then jumps the hero to the top platform in the left column.
The player collides the hero with the shape corresponding to the bottom section of the lock icon (#8), activating the final section of the icon and unlocking the barrier.
Writing out this sequence (or game flow diagram) may seem unnecessary given the mock-up screens you’ve created. It’s important, however, for designers to understand everything the player must do in exact order and detail to ensure you’re able to tune, balance, and evolve the gameplay without becoming mired in complexity or losing sight of how the player makes their way through the level. It’s clear from diagramming the previous game flow, for example, that the elevator platform is the centerpiece of this level and is required to complete every action; this is great information to have available in a schematic representation and game flow description because it provides an opportunity to intelligently refine the gameplay logic in a way that allows you to visualize the effect of each change on the overall flow of the level.
You could continue building out the mechanic to make the level more interesting and challenging (e.g., you might include a timer on the energy field’s off switch requiring players to collide with all the lock parts within a limited amount of time). However, at this stage of concept development, it’s often helpful to take a step back from gameplay and begin considering game setting and genre, using those elements to help inform how the game mechanic evolves from here.

Concepts from Chapter 11
Note there isn’t anything specific about the game mechanic you’ve been creating that would necessarily lead you in a sci-fi direction; game mechanics are abstract interactive structures and can typically integrate with any kind of setting or visual style. In this case, the authors chose a setting that takes place on a spaceship, so this chapter will use that motif as the setting for the game prototype. As you proceed through the design process, consider exploring alternate settings: how might the game mechanic from Chapter 11 be adapted to a jungle setting, a contemporary urban location, a medieval fantasy world, or an underwater metropolis?
Now is a good time to begin assigning some basic fictional background to evolve and extend the game mechanic in unique ways that enhance the setting you choose (don’t worry if this is unclear at the moment; the mechanism will become more apparent as you proceed with the level design). Imagine, for example, that the hero character is a member of the crew on a large spaceship and that she must complete a number of objectives to save the ship from exploding. Again, there is nothing about the current state of the game mechanic driving this narrative; the design task at this stage includes brainstorming some fictional context that propels the player through the game and captures their imagination. Using the few concept art assets already created/provided, the hero could just as easily be participating in a race, looking for something that was lost, exploring an abandoned alien vessel, or any of a million other possibilities.

The introduction of several visual design elements supporting the game setting and evolving narrative
Although you’ve made only a few minor substitutions and don’t yet have the visual elements anchored in an environment, Figure 12-3 conveys quite a bit more fictional context and contributes significantly more to presence than the abstract shapes of Figure 12-1. The hero character now suggests a scale that will naturally be contextualized as players benchmark relative sizes against the human figure, which brings the relative size of the entire game environment into focus for players. The implementation of object physics for the hero character as described in Chapter 10 also becomes an important component of play: simulated gravity, momentum, and the like connect players viscerally to the hero character as they move them through the game world. By implementing the design as described in Figure 12-3, you’ve already accomplished some impressive cognitive feats that support presence simply by adding a few visual elements and some object physics.
At this point in the design process, you've sufficiently described the game's core mechanic and setting to begin expanding the single screen into a full-level concept. It's not critical at this stage to have a final visual style defined, but including some concept art will help guide how the level grows. (Figure 12-3 provides a good visual representation for the amount of gameplay that will take place on a single screen given the scale of objects.) This is also a good stage to "block in" the elements from Figure 12-3 in a working prototype to begin getting a sense for how movement feels (e.g., the speed the hero character runs, the height the hero can jump, and so on), the scale of objects in the environment, the zoom level of the camera, and the like. There's no need to include interactions and behaviors such as the lock components or the energy field at this stage because you haven't yet designed how the level will play. For now you're experimenting with basic hero character movement, object placement, and collision. The next set of tasks includes laying out the full level and tuning all the interactions.

The level design grows to include an additional playable area
Recall from the Simple Camera Manipulations project in Chapter 7 that you can “push” the game screen forward by moving the character close to the edge of the bound region, which allows you to design a level that extends far beyond the dimensions of a single static screen. You might choose to keep this level contained to the original game screen size, of course, and increase the complexity of the timing-based agility and logical sequence challenges (and indeed it’s a good design exercise to challenge yourself to work within space constraints), but for the purposes of this design a horizontal scrolling presentation adds interest and challenge.

The layout expands to use the additional screen real estate; the diagram represents the entire length of stage 1, with the player able to see approximately 50 percent of the full level at any time. The camera scrolls the screen forward or backward as the player moves the hero character toward the screen bound regions. Note: for the moving platforms shown, darker arrows represent direction, and lighter arrows represent the range of the platform's movement
Now that the level has some additional space to work with, there are several factors to evaluate and tune. For example, the scale of the hero character in Figure 12-5 has been reduced to increase the number of vertical jumps that can be executed on a single screen. Note that at this point you also have the opportunity to include additional vertical gameplay in the design if desired, implementing the same mechanism for moving the camera up and down that you used to move it left and right; many 2D platformer games allow players to move through the game world both horizontally and vertically. This level prototype will limit movement to the x axis (left and right) for simplicity, although you can easily extend the level design to include vertical play in future iterations and/or subsequent levels.

The most efficient sequence to unlock the barrier
Did your sequence match Figure 12-6, or did you have extra steps? There are many potential paths players can take to complete this level, and it’s likely that no two players will take the same route (the only requirement from the mechanic design is that the lock sections be activated in order from top to bottom).
This stage of the design is when the puzzle-making process really begins to open up; Figure 12-6 shows the potential to create highly engaging gameplay with only the few basic elements you've been working with. The authors use the previous template and similar variations in brainstorming sessions for many kinds of games—introducing one or two novel elements to a well-understood mechanic and exploring the impact new additions have on gameplay—and the results often open exciting new directions. As an example, you might introduce platforms that appear and disappear, platforms that rotate after a switch is activated, a moving energy field, teleporting stations, and so on. The list of ways you can build out this mechanic is of course limitless, but there is enough definition with the current template that adding a single new element is fairly easy to experiment with and test, even on paper.
There are two factors with the newly expanded level design that increase the challenge. First, the addition of the horizontally moving platform requires players to time the jump to the "elevator" platform more precisely (if they jump while the platform is ascending, there is little time to deactivate the energy field before it zaps them). The second factor is less immediately evident but equally challenging: only a portion of the level is visible at any time, so the player is not able to easily create a mental model of the entire level sequence, like they can when the entire layout is visible on a single screen. It's important for designers to understand both explicit challenges (such as requiring players to time jumps between two moving platforms) and less obvious (and often unintentional) challenges such as being able to see only part of the level at any given time. Think back to a game you've played where it felt like the designers expected you to remember too many elements; that kind of frustration is often the result of unintentional challenges overburdening what the player can reasonably hold in short-term memory.
As a designer you need to be aware of hidden challenges and areas of unintentional frustration or difficulty; these are key reasons why it’s vital to observe people playing your game as early and often as possible. As a general rule, any time you’re 100 percent certain you’ve designed something that makes perfect sense, at least half the people who play your game will tell you exactly the opposite. Although a detailed discussion of the benefits of user testing is outside the scope of this book, you should plan to observe people playing your game from the earliest proof of concept all the way to final release. There is no substitute for the insights you’ll gain from watching different people play what you’ve designed.

The addition of a “floor” to the game world significantly changes the level challenge

Two new object types are introduced to the level: a shooting robot (#1) that moves vertically and fires at a constant rate and a patrolling robot (#2) that moves back and forth in a specific range
You’ve now reached a turning point in the design of this level, where the setting is starting to exert a significant influence on the evolution of the mechanic and game loop. The core of the mechanic hasn’t changed from Chapter 11, and this level is still fundamentally about activating sections of a lock in the proper sequence to remove a barrier, but the moving platforms and attacking enemies are additional obstacles and are strongly influenced by the particular setting you’ve chosen.
Of course, you certainly could have added the attacking enemy behavior while still working with abstract shapes and pure mechanics. It’s worth noting, however, that the more complex and multistaged a mechanic becomes, the more the setting will need to conform to the implementation; this is why it’s helpful to transition from purely abstract mechanic design to laying out a level (or part of a level) in the context of a particular setting while the mechanic is still fairly elemental. Designers typically want the game mechanic to feel deeply integrated with the game setting, so it’s beneficial to allow both to develop in tandem. Finding that sweet spot can be challenging: sometimes the mechanic leads the design, but as the setting evolves, it often moves into the driver’s seat. Bring in the setting too soon and you lose focus on refining pure gameplay; bring in the setting too late and the game world may feel like an afterthought or something that’s bolted on.
Returning to the current design as represented in Figure 12-8, you now have all the elements required to create a truly engaging sequence situated in an emerging setting. You can also fairly easily tune the movement and placement of individual units to make things more or less challenging. Players will need to observe patterns of movement for both platforms and enemies to time their jumps so they can navigate the level without getting zapped or rammed, all while discovering and solving the unlocking puzzle. Note how quickly the level went from trivially easy to complete to potentially quite challenging: working with multiple moving platforms adds an element of complexity, and the need to use timing for jumps and to avoid attacking enemies—even the simple enemies from Figure 12-8 that are locked into basic movement patterns—opens nearly unlimited possibilities to create devious puzzles in a controlled and intentional way.
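To illustrate just how simple those pattern-locked behaviors can be, the following JavaScript sketch (illustrative only; the object fields and the 60 fps assumption are not taken from the prototype) implements the two enemies from Figure 12-8: a shooter that oscillates vertically and fires at a constant rate, and a patroller that moves back and forth within a fixed range.

const kFrameTime = 1 / 60;  // assume a 60 fps update loop

function updateShooter(bot, projectiles) {
    bot.y += bot.speed * bot.dir * kFrameTime;
    if (bot.y > bot.maxY || bot.y < bot.minY) bot.dir *= -1;  // reverse at the range limits

    bot.fireTimer -= kFrameTime;
    if (bot.fireTimer <= 0) {                                 // constant fire rate
        projectiles.push({ x: bot.x, y: bot.y, vx: -bot.projectileSpeed });
        bot.fireTimer = bot.firePeriod;
    }
}

function updatePatroller(bot) {
    bot.x += bot.speed * bot.dir * kFrameTime;
    if (bot.x > bot.maxX || bot.x < bot.minX) bot.dir *= -1;  // patrol within the range
}

Even with behaviors this basic, tuning a handful of numbers (speed, range, fire period) is enough to move the level between forgiving and punishing.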
If you haven’t already, now is a good time to prototype your level design (including interactions) in code to validate gameplay. For this early-level prototype, it’s only important that major behaviors (running, jumping, projectile firing, moving platforms, object activations, and the like) and steps required to complete the level (puzzle sequences) are properly implemented. Some designers insist at this stage that players who have never encountered the level before should be able to play through the entire experience and fully understand what they need to do with little or no assistance, while others are willing to provide direction and fill in gaps around missing onscreen UI and incomplete puzzle sequences. It’s common practice to playtest and validate major sections of gameplay at this stage and provide playtesters with additional guidance to compensate for incomplete UI or unimplemented parts of a sequence. As a general rule, the less you need to rely on over-the-shoulder guidance for players at this stage, the better your insights into the overall design will be. The amount of the early-level prototype you’ll implement at this stage also depends on the size and complexity of your design. Large and highly complex levels may be implemented and tested in several (or many) pieces before the entire level can be played through at once, but even in the case of large and complex levels, the goal is to have the full experience playable as early as possible.
If you’ve been exploring the working prototype included with this book, you’ll discover some minor variations between the design concepts in this chapter and the playable level (for example, the energy field was not included in the working prototype). Consider exploring alternate design implementations with the included assets; exploration and improvisation are key elements of the creative level design process. How many extensions of the current mechanic can you create?
The prototype you’ve been building in this chapter would serve as an effective proof of concept for a full game at its current level of development, but it’s still missing many elements typically required for a complete game experience (including visual detail and animations, sounds, scoring systems, win conditions, menus and user interface [UI] elements, and the like). In game parlance, the prototype level is now at the blockout-plus stage (blockout is a term used to describe a prototype that includes layout and functional gameplay but lacks other design elements; the inclusion of some additional concept art is the “plus” here). It’s now a good time to begin exploring audio, scoring systems, menu and onscreen UI, and the like. If this prototype were in production at a game studio, a small group might take the current level to a final production level of polish and completeness while another team worked to design and prototype additional levels. A single level or a part of a level that’s taken to final production is referred to as a vertical slice, meaning that one small section of the game includes everything that will ship with the final product. Creating a vertical slice is helpful to focus the team on what the final experience will look, feel, and sound like and can be used to validate the creative direction with playtesters.
Although you’ve begun integrating some visual design assets that align with the setting and narrative, the game typically will have few (if any) final production assets at this time and any animations will be either rough or not yet implemented (the same is true for game audio). While it’s good practice to have gameplay evolve in parallel with the game setting, studios don’t want to burn time and resources creating production assets until the team is confident that the level design is locked and they know what objects are needed and where they’ll be placed.
You should now have a fairly well-described layout and sequence for your level design (if you’ve been experimenting with a layout different from what’s shown in the examples, make sure you have a complete game flow described as in Figures 12-1 and 12-6). At this point in the project, you can confidently begin “rezzing in” production assets (rezzing in is a term used by game studios to mean increasing the resolution—in this case, the visual polish and overall production quality of the level—over time). Rezzing in is typically a multistage process that begins when the major elements of the level design are locked, and it can continue for most of the active production schedule. There are often hundreds (or thousands) of individual assets, animations, icons, and the like, and they will typically need to be adjusted multiple times based on the differences between how they appear outside the game build and inside it. Elements that appear to harmonize well in isolation and in mockups often look quite different after being integrated into the game.
The process of rezzing in assets can be tedious and frustrating (there always seems to be an order of magnitude more assets than you think there will be). It can also be challenging to make things look as awesome in the game as they do in an artist’s mockups. However, it’s typically a satisfying experience when it all starts to come together: something magical happens to a level design as it transitions from blockout to polished production level, and there will usually be one build where a few key visual assets have come in that make the team remark, “Wow, now this feels like our game!” For AAA 3D games, these “wow” moments frequently happen as high-resolution textures are added to 3D models and as complex animations, lighting, and shadows bring the world to life; for the current prototype level, adding a parallaxing background and some localized lighting effects should really make the spaceship setting pop.
The working prototype included with this book represents a build of the final game that would typically be midway between blockout and production polish. The hero character includes several animation states (idle, run, jump), localized lighting on the hero and robots adds visual interest and drama, the level features a two-layer parallaxing background with normal maps that respond to the lighting, and major game behaviors are in place. You can build upon this prototype and continue to polish the game or modify it how you see fit.
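If you'd like to experiment with the parallaxing background yourself, the core idea is small: layers that are farther from the viewer scroll by a smaller fraction of the camera's movement. The following is a minimal JavaScript sketch under that assumption; the layer objects and scroll factors are illustrative and are not the engine's actual parallax implementation.

function updateParallax(layers, cameraDeltaX) {
    for (const layer of layers) {
        // scrollFactor < 1: a distant layer moves more slowly than the camera
        layer.offsetX -= cameraDeltaX * layer.scrollFactor;
    }
}

// Example two-layer setup for the spaceship setting
const backgroundLayers = [
    { offsetX: 0, scrollFactor: 0.2 },  // distant starfield
    { offsetX: 0, scrollFactor: 0.6 },  // ship interior silhouette
];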
Many new game designers (and even some veteran designers) make the mistake of treating audio as less important than the visual design, but as every gamer knows, bad audio can, in some cases, mean the difference between a game you love and a game you stop playing after a short time. As with visual design, audio often contributes directly to the game mechanic (e.g., countdown timers, warning sirens, positional audio that signals enemy location), and background scores enhance drama and emotion in the same way that directors use musical scores to support the action on film. Audio in mobile games is often considered optional because many players mute the sound on their mobile devices; well-designed audio, however, can have a dramatic impact on presence even for mobile games. In addition to sounds corresponding to game objects (walking sounds for characters who walk, shooting sounds for enemies who fire, popping sounds for things that pop, and the like), contextual audio attached to in-game actions is an important feedback mechanism for players. Menu selections, activating in-game switches, and the like should all be evaluated for potential audio support. As a general rule, if an in-game object responds to player interaction, it should be evaluated for contextual audio.
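One lightweight way to act on that rule is to let every interactive object declare an optional sound cue that plays whenever its interaction fires. The sketch below is hypothetical JavaScript (the playCue() helper and the object fields are assumptions, not the engine's audio API), but it captures the pattern of pairing audio feedback with the interaction event itself.

function onPlayerInteraction(object, audio) {
    object.activate();                      // game-side response (switch, pickup, and so on)
    if (object.interactCue) {
        audio.playCue(object.interactCue);  // contextual audio tied to the same event
    }
}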
Audio designers work with level designers to create a comprehensive review of game objects and events that require sounds, and as the visuals rez in, the associated sounds will typically follow. Game sounds often lag behind visual design because audio designers want to see what they’re creating sounds for; it’s difficult to create a “robot walking” sound, for example, if you can’t see what the robot looks like or how it moves. In much the same way that designers want to tightly integrate the game setting and mechanic, audio engineers want to ensure that the visual and audio design work well together.
The current prototype uses a common interaction model: the A and D keys on the keyboard move the character left and right, and the spacebar is used to jump. Object activations in the world happen simply by colliding the hero character with the object, and the design complexity of those interactions is fairly low. Imagine, however, that as you continue building out the mechanic (perhaps in later levels), you include the ability for the character to launch projectiles and collect game objects to store in inventory. As the range of possible interactions in the game expands, complexity can increase dramatically and unintentional challenges (as mentioned previously) can begin to accumulate, which can lead to bad player frustration (as opposed to “good” player frustration, which, as discussed earlier, results from intentionally designed challenges).
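The interaction model itself maps to only a few lines of input handling. The following is a minimal sketch using standard browser keyboard events rather than the engine's input component; the hero object, its jump() method, and the collides() helper are assumptions introduced for illustration.

const keys = {};
window.addEventListener("keydown", (e) => { keys[e.code] = true; });
window.addEventListener("keyup", (e) => { keys[e.code] = false; });

function updateHero(hero, objects) {
    if (keys["KeyA"]) hero.x -= hero.speed;           // move left
    if (keys["KeyD"]) hero.x += hero.speed;           // move right
    if (keys["Space"] && hero.onGround) hero.jump();  // jump only from the ground

    for (const obj of objects) {
        if (collides(hero, obj)) obj.activate();      // collision-based activation
    }
}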
It’s also important to be aware of the challenges encountered when adapting interaction models between different platforms. Interactions designed initially for mouse and keyboard often face considerable difficulty when moving to a game console or touch-based mobile device. Mouse and keyboard interaction schemes allow for extreme precision and speed of movement compared to the imprecise thumbsticks of game controllers, and although touch interactions can be precise, mobile screens tend to be significantly smaller and obscured by fingers covering the play area. The industry took many years and iterations to adapt the first-person shooter (FPS) genre from mice and keyboards to game consoles, and FPS conventions for touch devices remain highly variable more than a decade after the first mobile FPS experiences launched (driven in part by the differences in processing capabilities and screen sizes of the many phones and tablets on the market). If you plan to deliver a game across platforms, make sure you consider the unique requirements of each as you’re developing the game.
The current prototype has few systems to balance and does not yet incorporate a meta-game, but imagine adding elements that require balancing, such as variable-length timers for object activations or the energy field. If you’re unsure what this means, consider the following scenario: the hero character has two potential ways to deactivate the energy field, and each option is a trade-off. The first option deactivates the energy field permanently but spawns more enemy robots and considerably increases the difficulty of reaching the target object, while the second option does not spawn additional robots but deactivates the energy field only for a short time, requiring players to choose the most efficient path and execute nearly perfect timing. To balance effectively between the two options, you need to understand the design and degree of challenge associated with each system (unlimited vs. limited time). Similarly, if you added hit points to the hero character and made the firing robot cause x amount of damage while the charging minion causes y amount of damage per hit, you’d want to understand the relative trade-offs between paths to objectives, perhaps making some paths less dangerous but more complex to navigate, while others might be faster to navigate but more dangerous.
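When systems like these start to accumulate, it helps to gather the tunable numbers into a single balance table so that playtest adjustments happen in one place rather than being scattered through the code. The values below are a purely illustrative JavaScript sketch, not numbers from the prototype.

const balance = {
    energyField: {
        permanentDisable: { extraRobotsSpawned: 3, durationSec: Infinity },
        timedDisable:     { extraRobotsSpawned: 0, durationSec: 6 },
    },
    hero:   { hitPoints: 10 },
    damage: { firingRobot: 2, chargingMinion: 3 },  // damage per hit
};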
As with most other aspects of the current design, there are many directions you could choose to pursue in the development of a meta-game; what might you provide to players for additional positive reinforcement or overarching context as they play through a full game created in the style of the prototype level? As one example, imagine that players must collect a certain number of objects to access the final area and prevent the ship from exploding. Perhaps each level has one object that requires players to solve a puzzle of some kind before they can access it, and only after collecting the object are they able to solve the door-unlocking component of the level. Alternatively, perhaps each level has an object players can access to unlock cinematics and learn more about what happened on the ship for it to reach such a dire state. Or perhaps players are able to disable enemy robots in some way and collect points, with a goal to collect as many points as possible by the end of the game. Perhaps you’ll choose to forgo traditional win and loss conditions entirely. Games don’t always focus on explicit win and loss conditions as a core component of the meta-game, and for a growing number of contemporary titles, especially indie games, it’s more about the journey than the competitive experience (or the competitive element becomes optional). Perhaps you can find a way to incorporate both a competitive aspect (e.g., scoring the most points or completing each level in the shortest time) and meta-game elements that focus more on enhancing play.
A final note on systems and meta-game: player education (frequently achieved by in-game tutorials) is often an important component of these processes. Designers become intimately acquainted with how the mechanics they design function and how the controls work, and it’s easy (and common) to lose awareness of how the game will appear to someone who encounters it for the first time. Early and frequent playtests help provide information about how much explanation players will require in order to understand what they need to do, but most games require some level of tutorial support to help teach the rules of the game world. Tutorial design techniques are outside the scope of this book, but it’s often most effective to teach players the logical rules and interactions of the game as they play through an introductory level or levels. It’s also more effective to show players what you want them to do rather than making them read long blocks of text (research shows that many players never access optional tutorials and will dismiss tutorials with excessive text without reading them; one or two very short sentences per tutorial event are a reasonable target). If you were creating an in-level tutorial system for your prototype, how would you implement it? What do you think players would reasonably discover on their own vs. what you might need to surface for them in a tutorial experience?
Game UI design is important not just from a functionality perspective (in-game menus, tutorials, and contextually important information such as health, score, and the like) but also as a contributor to the overall setting and visual design of the experience. Game UI is a core component of visual game design that’s frequently overlooked by new designers and can mean the difference between a game people love and a game nobody plays. Think back to games you’ve played that make use of complex inventory systems or that have many levels of menus you must navigate through before you can access common functions or items; can you recall games where you were frequently required to navigate through multiple sublevels to complete often-used tasks? Or perhaps games that required you to remember elaborate button combinations to access common game objects?

Most UI elements in Visceral Games’ Dead Space 3 are represented completely within the game setting and fiction, with menus appearing as holographic projections invoked by the hero character or on objects in the game world (image copyright Electronic Arts)
Many games choose to house their UI elements in reserved areas of the game screen (typically around the outer edges) that don’t directly interact with the game world; however, integrating the visual aesthetic with the game setting is another way to contribute directly to the presence of the game. Imagine the current sci-fi prototype example with a fantasy-themed UI and menu system, using the kind of medieval aesthetic design and calligraphic fonts used for a game like BioWare’s Dragon Age, for example; the resulting mismatch would be jarring and likely to pull players out of the game setting. User interface design is a complex discipline that can be challenging to master; you’ll be well served, however, by spending focused time to ensure intuitive, usable, and aesthetically appropriate UI integration into the game worlds you create.
At this stage, you’ve added just a basic narrative wrapper to the prototype example: a hero character must complete a number of objectives to prevent their spaceship from exploding. At the moment, you haven’t explicitly shared this narrative with players at all, and they have no way of knowing the environment is a spaceship or what the objective might be other than perhaps eventually unlocking the door at the far right of the screen. Designers have a number of options for exposing the game narrative to players: you might create an introductory cinematic or animated sequence that introduces players to the hero character, their ship, and the crisis, or you might choose something simpler, such as a pop-up window at the start of the level with brief introductory text that provides players with the required information. Alternatively, you might not provide any information about what’s happening when the game starts but instead choose to slowly reveal the dire situation of the ship and the objectives over time as the player proceeds through the game world. You could even choose to keep any narrative elements implied, allowing players to overlay their own interpretation. As with many other aspects of game design, there’s no single way to introduce players to a narrative and no universal guidance for how much (or how little) narrative might be required for a satisfying experience.
Narrative can also be used by designers to influence the way levels are evolved and built out, even if those elements are never exposed to players. In the case of this prototype, it’s helpful as the designer to visualize the threat of an exploding ship propelling the hero character through a series of challenges with a sense of urgency; players, however, might simply experience a well-constructed side-scrolling action platformer with a series of devilishly clever levels. You might create additional fiction around robots that have been infected with a virus, causing them to turn against the hero, as a reason for their attack behavior (to give just one example). By creating a narrative framework for the action to unfold within, you’re able to make informed decisions about ways to extend the mechanic that feel nicely integrated into the setting, even if you don’t share all the background with players.
Of course, some game experiences have virtually no narrative elements, whether exposed to players or not, and are simply implementations of novel mechanics. Games like Zynga’s Words with Friends and Gabriele Cirulli’s hyper-casual 2048 are examples of game experiences based purely on a mechanic with no narrative wrapper.
If you continue developing this prototype, how much narrative would you choose to include, and how much would you want to expose to players to make the game come alive?
If you’ve completed playing through stage 1 of the included prototype, you’ll enter a second room with a large moving unit; this is a sandbox with a set of assets for you to explore. The prototype implementation includes just some basic behaviors to spark your imagination: a large, animated level boss unit hovers in the chamber and produces a new kind of enemy robot that seeks out the hero character, spawning a new unit every few seconds.
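The sandbox behaviors are also simple to reason about in code. The following JavaScript sketch (illustrative only; the field names and timing values are not taken from the prototype) spawns a new seeker every few seconds and steers each seeker toward the hero's current position.

function updateBoss(boss, seekers, dt) {
    boss.spawnTimer -= dt;
    if (boss.spawnTimer <= 0) {
        seekers.push({ x: boss.x, y: boss.y, speed: 8 });  // produce a new seeking robot
        boss.spawnTimer = boss.spawnPeriod;                // e.g., every few seconds
    }
}

function updateSeeker(seeker, hero, dt) {
    const dx = hero.x - seeker.x;
    const dy = hero.y - seeker.y;
    const dist = Math.hypot(dx, dy) || 1;
    seeker.x += (dx / dist) * seeker.speed * dt;  // move toward the hero
    seeker.y += (dy / dist) * seeker.speed * dt;
}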

A possible second stage the hero character can enter after unlocking the door in stage 1. This concept includes a large “boss” unit with three nodes; one objective for this stage might be to disable each of the nodes to shut the boss down
It’s a bit of a shortcut to begin the mechanic exploration with the diagram in Figure 12-10, but because you’ve already identified the setting and a number of visual elements, it can be helpful to continue developing new stages with some of the visual assets already in place. The diagram includes the same kind of platforms used in stage 1, but what if, for example, this area had no gravity and the hero character was able to fly freely? Compare this area with stage 1, and think about how you might slightly alter the experience to mix things up a bit without fundamentally changing the game; you’ve ideally become fairly fluent with the sequencing mechanic from stage 1, and the experience in stage 2 can be a greater or lesser evolution of that mechanic.
If you choose to include the hero-seeking flying robot units, the game flow diagram will become more complex than the model used in stage 1 because of the unpredictable movement of the new robot types. You may also want to consider a mechanism for the hero character to eliminate the robot units (perhaps even working the removal of robot units into the mechanic for disabling the nodes on the boss). If you find your designs becoming difficult to describe as part of an explicit and repeatable game flow, it may signal that you’re working with more complex systems and may need to evaluate them in a playable prototype before you can effectively balance their integration with other components of the level. Of course, you can also reuse conventions and units from stage 1; you might choose to combine patrolling robots with hero-seeking robots and an energy field, for example, creating a challenging web of potential risks for the player to navigate as they work to disable the boss nodes.
You might also decide that the main objective for the level is to enable the boss nodes in order to unlock the next stage or level of the game. You can extend the narrative in any direction you like, so units can be helpful or harmful, objectives can involve disabling or enabling, the hero character can be running toward something or away from something, or any other possible scenario you can imagine. Remember, narrative development and the level design will play on each other to drive the experience forward, so stay alert for inspiration as you become increasingly fluent with the level designs for this prototype.
Game design is unique among the creative arts in the ways it requires players to become active partners in the experience, which can change dramatically depending on who the player is. Although some games share quite a bit in common with cinema (especially as story-driven games have become more popular), there’s always an unpredictable element when the player controls the on-screen action to a greater or lesser extent. Unlike movies and books, video games are interactive experiences that demand constant two-way engagement with players, and poorly designed mechanics or levels with unclear rules can block players from enjoying the experience you’ve created.
The design methodology presented in this book focuses first on teaching you the letters of the design alphabet (basic interactions), leading into the creation of words (game mechanics and gameplay), followed by sentences (levels); we hope you’ll take the next step and begin writing the next great novel (full-game experiences in existing or entirely new genres). The “escape the room” design template featured here can be used to quickly prototype a wide range of mechanics for many kinds of game experiences, from the included 2D side-scroller to isometric games to first-person experiences and more. Remember, game mechanics are fundamentally well-formed abstract puzzles that can be adapted as needed. If you find yourself having difficulty brainstorming new mechanics in the beginning, borrow some simple existing mechanics from common casual games (“match 3” variants are a great source of inspiration) and start there, adding one or two simple variations as you go. As with any creative discipline, the more you practice the basics, the more fluent you’ll become with the process, and after you’ve gained some experience with simple mechanics and systems, you’ll likely be surprised by the number of interesting variations you can quickly create. Some of those variations might just contribute to the next breakthrough title.
This book demonstrates the relationship between the technical and experiential aspects of game design. Designers, developers, artists, and audio engineers must work in close partnership to deliver the best experiences, taking issues such as performance/responsiveness, user inputs, system stability, and the like into consideration throughout production. The game engine you’ve developed in this book is well-matched for the type of game described in this chapter (and many others). You should now be ready to explore your own game designs with a strong technical foundation to build upon and a global understanding of how the nine elements of game design work together to create experiences that players love.