Game AI: Breathing Life into Digital Denizens

The outlaw Arthur Morgan has waylaid a rich-looking man, who is sprawled on the grass. Morgan fires a warning shot in the air to assert his dominance. A moment later, a bird flops to the ground, felled by his bullet. 

A gamer inadvertently fired this one-in-a-million shot in Red Dead Redemption 2 (RDR 2), and his clip of the scene went viral, with fans wondering if the bird’s death at the hands of RDR 2’s protagonist was a scripted event. It is unlikely to have been scripted, but is rather the result of the interplay between complex AI systems in RDR 2 (2018). 

When attacked, the AI-driven NPC responds realistically, trying to fend off the player, who responds with a warning shot. As a result, another AI-driven NPC – a hapless bird flying directly overhead – meets an untimely demise. The bird’s flight path was not scripted so that it would be shot down by the player; the bird was simply following its own routine. RDR 2 endows both human and animal NPCs with complex behaviours and schedules, and the bird’s death is just one possible outcome when such complex AI systems intersect.

In this blog we will explore key aspects of game AI, and the development of seemingly intelligent behaviours in NPCs across various games and franchises. Game AI has evolved from the simple computer-controlled opponents of Pong and early arcade games to the increasingly complex NPC agents of games such as RDR 2, the Halo franchise and Bethesda’s Elder Scrolls games. Developers have contributed significantly to defining and redefining game AI, and games such as RDR 2 have pushed it to the limits, creating NPCs so convincing that they seem to have a life of their own.

What is Game AI?

Artificial intelligence in games – or game AI – is used to generate apparently intelligent and responsive behaviours mostly in non-player characters (NPCs, including human, humanoid and animal agents), allowing them to behave naturally in various game contexts, with human or humanoid characters even performing human-like actions. 

AI has been integral to gaming from the arcade age – AI opponents became prominent during this period with the introduction of difficulty scaling, discernible enemy movement patterns and the triggering of in-game events based on the player’s input. 

Game AI is distinct from the sort of AI we have become familiar with today, which is powered by machine learning and uses artificial neural networks. A key reason why in-game AI has remained distinct from today’s AI constructs is that game AI needs to be predictable to some degree. A deep learning AI can rapidly become unpredictable as it learns and evolves, whereas game AI should be controlled by algorithms that give the player a clear sense of how to interact with NPCs to achieve their in-game goals.  

According to Tanya Short, game designer and co-founder of Kitfox Games, game AI is to some extent “smoke and mirrors” – complex enough to make players think they are interacting with a responsive intelligence, yet controlled and predictable enough that gameplay doesn’t go awry.

Within this relatively narrow scope, however, in-game AI can be quite complex, and game developers expertly create the illusion of intelligence with various clever tricks. Some developers have even experimented with giving more freedom to game AI, leading to interesting and unforeseen results.

What Types of AI are Used in Gaming?

Arcade games were the first to use stored patterns to direct enemy movements, and advances in microprocessor technology allowed for more randomness and variability, as seen in the iconic Space Invaders (1978). The game’s stored patterns randomised alien movements, so that each new game had the potential to play out differently. In Pac-Man (1980), the ghosts’ distinct movement patterns made players think they had unique traits, and made them feel they were up against four distinct entities.

Space Invaders Used Stored Patterns to Randomise Enemy Movements (Courtesy Taito)


Over the years, certain key game AI techniques, such as pathfinding, finite state machines and behaviour trees, have been crucial in making games more playable and NPCs more responsive and intelligent. We delve into these below.

Pathfinding

A relatively simple problem for humans – getting from point A to B – can be quite challenging for AI-driven agents.

The answer to this problem is the pathfinding algorithm, which directs NPCs through the shortest and most efficient path between two parts of the game world. The game map itself is turned into a machine-readable scene graph with waypoints, the best route is calculated, and NPCs are set along this path. 

Such an algorithm is particularly important and prevalent in real-time strategy (RTS) games, where player-controlled units need pathfinding to follow commands, and enemy-controlled units need the algorithm to respond to the player.
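
Under the hood, such pathfinders are typically variants of Dijkstra’s algorithm or A*, which search a waypoint graph by combining the cost travelled so far with a heuristic estimate of the remaining distance. Below is a minimal Python sketch of the idea – the waypoint names and edge costs are invented for illustration, not taken from any particular game.

    import heapq

    def a_star(graph, start, goal, heuristic):
        """graph maps node -> list of (neighbour, edge_cost) pairs."""
        # Each frontier entry: (estimated total cost, cost so far, node, path).
        frontier = [(heuristic(start), 0, start, [start])]
        visited = set()
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, cost
            if node in visited:
                continue
            visited.add(node)
            for neighbour, step in graph.get(node, []):
                if neighbour not in visited:
                    new_cost = cost + step
                    estimate = new_cost + heuristic(neighbour)
                    heapq.heappush(frontier, (estimate, new_cost, neighbour, path + [neighbour]))
        return None, float("inf")

    # A toy waypoint graph; with a zero heuristic, A* reduces to Dijkstra.
    waypoints = {
        "barracks": [("bridge", 2), ("ford", 3)],
        "bridge":   [("tower", 4)],
        "ford":     [("tower", 1)],
    }
    print(a_star(waypoints, "barracks", "tower", heuristic=lambda n: 0))
    # (['barracks', 'ford', 'tower'], 4)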

Early pathfinding algorithms used in games such as StarCraft (1998) ran into a problem – every unit queued up and took the same path, slowing down the movement of the entire cohort. Games have since used various methods to solve this problem – Age of Empires (1997) simply arranged cohorts into an actual formation to navigate the best route, and StarCraft II (2010) used ‘flocking’ movement, or ‘swarming’, an algorithm devised by AI pioneer Craig Reynolds in 1986. Flocking simulates the movement of real-life groups such as flocks of birds, human crowds moving through a city, military units and even schools of fish.
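
Reynolds’ ‘boids’ model needs only three steering rules per unit – cohesion, alignment and separation – each computed against nearby flockmates. Here is a bare-bones Python sketch of those rules; the weights and neighbourhood radius are invented tuning values, not figures from any shipped game.

    import random

    class Boid:
        def __init__(self):
            self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
            self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(flock, radius=20, coh=0.01, align=0.05, sep=0.05):
        for b in flock:
            near = [o for o in flock if o is not b
                    and abs(o.x - b.x) + abs(o.y - b.y) < radius]
            if not near:
                continue
            n = len(near)
            # Cohesion: steer toward the local centre of mass.
            b.vx += (sum(o.x for o in near) / n - b.x) * coh
            b.vy += (sum(o.y for o in near) / n - b.y) * coh
            # Alignment: match the neighbours' average velocity.
            b.vx += (sum(o.vx for o in near) / n - b.vx) * align
            b.vy += (sum(o.vy for o in near) / n - b.vy) * align
            # Separation: push away from flockmates that crowd too close.
            for o in near:
                if abs(o.x - b.x) + abs(o.y - b.y) < radius / 4:
                    b.vx += (b.x - o.x) * sep
                    b.vy += (b.y - o.y) * sep
        for b in flock:
            b.x += b.vx
            b.y += b.vy

    flock = [Boid() for _ in range(30)]
    for _ in range(200):   # units drift into loose, coordinated groups
        step(flock)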

StarCraft II Used the Flocking Algorithm to Refine Pathfinding During Gameplay (Courtesy Blizzard)

Finite State Machines

Finite state machines (FSM) are algorithms that determine how NPCs react to various player actions and environmental contexts. At its simplest, a finite state machine defines various ‘states’ for the NPC AI to inhabit, based on in-game events. NPCs can transition from one state to another based on the context and act accordingly. If the NPC is designed to be hostile to the player character, seeing the player may lead it to run toward them and attack; defeating this NPC may impel it to run away, go into a submissive mode, or simply enter the death state (i.e., die).
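
In code, an FSM can be as simple as a lookup table from (current state, event) to the next state. The Python sketch below models the hostile NPC just described; the state and event names are invented for illustration.

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        ATTACK = auto()
        FLEE = auto()
        DEAD = auto()

    # (current state, event) -> next state; unlisted pairs leave the state unchanged.
    TRANSITIONS = {
        (State.IDLE,   "player_spotted"): State.ATTACK,
        (State.ATTACK, "low_health"):     State.FLEE,
        (State.ATTACK, "defeated"):       State.DEAD,
        (State.FLEE,   "player_lost"):    State.IDLE,
        (State.FLEE,   "defeated"):       State.DEAD,
    }

    class HostileNPC:
        def __init__(self):
            self.state = State.IDLE

        def handle(self, event):
            self.state = TRANSITIONS.get((self.state, event), self.state)

    npc = HostileNPC()
    npc.handle("player_spotted")   # IDLE -> ATTACK: run toward the player
    npc.handle("low_health")       # ATTACK -> FLEE: run away instead
    print(npc.state)               # State.FLEE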

In fact, FSMs can ‘tell’ an NPC to ‘hunt’ players based on cues like audible or visible disturbances to the environment – this is a staple of stealth games, and the Metal Gear Solid franchise has used the hunting mechanic to create tense situations between the player and the NPC. Finite state machines can also tell NPCs how to survive – under attack, they can take cover to recover health, reload their weapons or search for better ones, and generally take action to evade death at the player’s hands.

The Metal Gear Solid Franchise Uses ‘Hunting’ Behaviours to Lend Realism to Stealth Gameplay (Courtesy Konami)

Behaviour Trees

Unlike finite state machines, a behaviour tree controls the flow of decisions made by an AI agent rather than the states it inhabits. The tree comprises nodes arranged in a hierarchy. At the far ends of this hierarchy are ‘leaves’ that constitute commands that NPCs follow. Other nodes form the tree’s branches, which the AI selects and traverses based on the game context to give NPCs the best sequence of commands in any particular situation. 

Behaviour trees can be extremely complex, with nodes attached to entire sub-trees that perform specific functions, and such nested trees enable the developer to create whole collections of actions that can be daisy-chained together to simulate very believable AI behaviour. As such, they are more powerful than finite state machines, which can become unmanageably complex as the number of possible states grows.
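
A behaviour tree boils down to a few node types composed recursively. The Python sketch below implements the two workhorse composites – a sequence (all children must succeed) and a selector (the first child to succeed wins) – with an invented soldier NPC as the example; it is a toy model, not any engine’s actual API.

    class Leaf:
        """A leaf ticks a function that returns True (success) or False (failure)."""
        def __init__(self, fn):
            self.fn = fn
        def tick(self, npc):
            return self.fn(npc)

    class Sequence:
        """Succeeds only if every child succeeds, evaluated left to right."""
        def __init__(self, *children):
            self.children = children
        def tick(self, npc):
            return all(child.tick(npc) for child in self.children)

    class Selector:
        """Tries children in priority order; the first success wins."""
        def __init__(self, *children):
            self.children = children
        def tick(self, npc):
            return any(child.tick(npc) for child in self.children)

    def act(name):
        # Record the chosen action on the NPC and report success.
        return Leaf(lambda npc: npc.update(action=name) or True)

    tree = Selector(
        Sequence(Leaf(lambda npc: npc["health"] < 30), act("take_cover")),
        Sequence(Leaf(lambda npc: npc["enemy_visible"]), act("attack")),
        act("patrol"),
    )

    npc = {"health": 20, "enemy_visible": True, "action": None}
    tree.tick(npc)
    print(npc["action"])   # take_cover: the wounded branch outranks attacking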

Finite State Machines Can Grow Increasingly Tangled as the Number of States Grows (Courtesy Unreal Engine)

Behaviour trees are also easier to fine-tune and can often be altered visually – Unreal Engine, for instance, provides a visual editor for building behaviour trees.

Notably used in the Halo franchise, behaviour trees have been part of developers’ AI toolkit for a while, and were used effectively in Alien: Isolation (2014). We will discuss their implementation both in Halo 2 and Alien: Isolation below.

How does Game AI Make NPCs Act Intelligently?

Various developers largely make use of the same fundamental concepts and techniques in creating game AI, but now employ them at much larger scales thanks to greater processing power. According to Julian Togelius, a New York University computer science professor, modern game AI systems are far more complex than the models discussed above, but they are essentially variations on such core principles. In this section, we discuss some games that used AI inventively to create immersive encounters with responsive, intelligent NPCs.

Tactical Communications in F.E.A.R

First Encounter Assault Recon, or F.E.A.R (2005), created the illusion of tactical AI combatants using mainly finite state machines, with a twist – developers gave enemies combat dialogue that broadcast their strategy, which changed based on the game context. These ‘communications’ made players think they were up against situationally-aware enemies working together to defeat them.

F.E.A.R Verbalised its AI to Make it Seem Like Enemies were Working as a Team (Courtesy Vivendi Games)

The dialogue merely ‘verbalised’ the algorithms that directed NPC behaviour, but it added realism to enemy encounters – in real-life combat, soldiers do call out to their comrades to coordinate tactics, and in F.E.A.R, NPC soldiers would tell others to flank the player when possible, and even call for backup if the player was slaughtering them with ease. No real communication was taking place, but the NPC dialogue during combat gave the impression that the enemies were acting in concert. 
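
Mechanically, such barks can be as simple as a table mapping whatever action the combat AI just picked to a spoken line. The sketch below illustrates the principle; the action names and dialogue are invented, not F.E.A.R’s actual data.

    # Invented mapping from an AI decision to a combat 'bark'.
    BARKS = {
        "flank_left":  "Flank left! I'll cover you!",
        "take_cover":  "Suppressing fire, get down!",
        "call_backup": "We need reinforcements, now!",
    }

    def execute(npc, action):
        npc["current_action"] = action
        # The line changes nothing about the AI's behaviour; it merely
        # verbalises the decision so the player infers coordination.
        return BARKS.get(action, "")

    soldier = {"current_action": None}
    print(execute(soldier, "flank_left"))   # Flank left! I'll cover you!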

F.E.A.R helped pioneer ‘lightweight AI’ and added nuance by giving voice to the AI’s ‘inner thoughts’. Snippets of combat dialogue beguiled players into thinking they were up against organised, tactical squads.

Halo 2: Aliens that Behave Sensibly

A key feature of the Halo franchise is the enemy – alien NPCs who have formed an alliance to defeat humankind. These visually unique NPCs give cues to the player about how to take them down. Grunts are small and awkward, and may flee from the player, but elites and larger NPCs may take on even the Master Chief in direct combat. 

Halo’s Masterful Use of Behaviour Trees Made Each Alien Enemy Behave Uniquely in Context (Courtesy Bungie)

Rather than using finite state machines, Bungie used behaviour trees in Halo 2 to direct the actions of enemy aliens, because of the range of tactics made possible by sufficiently detailed behaviour trees. 

At a very abstract level, Bungie’s behaviour trees have various conditional nodes that determine NPC actions. But a lot happens at any point in the game and the ‘conditions’ for many nodes may be satisfied, leading to game-breaking ‘dithering’, when an NPC rapidly alternates between various actions that are all deemed relevant. To prevent this, Bungie used ‘smart systems’ that enabled game AI to think in context. 

Based on contextual cues (like the NPC type, its proximity to the Master Chief, whether it is on foot or in a vehicle), a system blocks off whole sections of the behaviour tree, restricting the NPC to a relatively small but relevant range of actions. Some of these remaining actions are prioritised over others, fostering sensible behaviour.

‘Stimulus behaviours’ then shift these priorities based on in-game triggers. If the Master Chief gets into a vehicle, an enemy will seek a vehicle of its own, attempting to level the playing field. Grunts will flee in the middle of combat if their captain is killed by the player – nodes that tell them to attack or take cover are simply overridden.

This can lead to repeated and predictable behaviour – the player might see the grunts fleeing and choose to target their captain the next time. But that strategy won’t work either: after a high-priority action such as fleeing is executed, a delay is injected into that behaviour to stop the NPC from repeating it immediately – the next time you take out the captain, the grunts may choose to stand their ground.
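
Put together, the masking, prioritisation and post-execution delay can be sketched as a single selection function. The priorities, cooldown values and context flags below are invented stand-ins for Bungie’s actual systems.

    # (action, priority, allowed-in-context predicate, cooldown in seconds)
    ACTIONS = [
        ("flee",        3, lambda c: c["leader_dead"] and c["type"] == "grunt", 10.0),
        ("get_vehicle", 2, lambda c: c["player_in_vehicle"],                     5.0),
        ("take_cover",  1, lambda c: c["under_fire"],                            2.0),
        ("attack",      0, lambda c: True,                                       0.0),
    ]

    last_run = {}

    def choose_action(ctx, now):
        # Mask off branches that do not apply in this context, then pick the
        # highest-priority action whose cooldown has elapsed.
        candidates = [(prio, name) for name, prio, allowed, cooldown in ACTIONS
                      if allowed(ctx) and now - last_run.get(name, float("-inf")) >= cooldown]
        prio, name = max(candidates)
        last_run[name] = now
        return name

    ctx = {"type": "grunt", "leader_dead": True,
           "player_in_vehicle": False, "under_fire": True}
    print(choose_action(ctx, now=0.0))   # flee
    print(choose_action(ctx, now=1.0))   # take_cover: fleeing is on cooldown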

Bungie’s expert use of behaviour trees has led developers to adapt this game AI technique and it has since been used in several games, such as Bioshock Infinite (2013), Far Cry 4 (2014), Far Cry Primal (2016) and Alien: Isolation (2014).

Bethesda’s Radiant and Murderous NPCs

Halo’s enemy NPCs were essentially reacting to the player, but what if a developer wants to give the impression that NPCs are living lives of their own? Such an AI system would be especially useful in an open-world game, where a lot of NPCs may never engage in direct combat with the player, and exist to flesh out the world. 

In Bethesda’s The Elder Scrolls III: Morrowind (2002), NPCs would pretty much ‘roam on rails’, lacking even the semblance of a routine. For The Elder Scrolls IV: Oblivion (2006), Bethesda attempted to create complex NPCs with daily habits using Radiant AI, which had to be dumbed down considerably due to its unexpected in-game results.

In Oblivion, Radiant AI prescribes various daily tasks for the NPC, such as sleeping, eating and doing an in-game job. These tasks comprise the NPC’s daily routine, and the AI allows the NPC to decide how to perform its tasks.

Oblivion’s NPCs Murdered Each Other to Complete their Tasks Before their AI was Fine-Tuned (Courtesy Bethesda)

Most games do feature NPCs with schedules; the key difference in Oblivion was the ‘free choice’ NPCs were given in deciding how to do what they had to do. Playtesting revealed a rather big problem with this AI system – NPCs prone to murder.

As part of a certain quest, the player character needs to meet a dealer of Skooma – a highly addictive in-game narcotic. But the player would find this Skooma dealer dead, because other NPCs designed to be ‘addicted’ to Skooma would simply kill the dealer to get to the drug, breaking the quest. In another case, a gardener could not find a rake, and so murdered another NPC, took his tools and went about raking leaves. When a hungry town guard left his post to hunt for food, other guards went along, and the town’s malcontents started thieving indiscriminately as the law was nowhere in sight.

Many of these criminals had low ‘responsibility’, an in-game NPC attribute that determines how likely they are to behave in unlawful ways. An NPC with higher responsibility would buy food, but one with low responsibility might steal it – both are trying to fulfil the eating task, but are just going about it in radically different ways.  
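
The underlying logic resembles a needs-driven planner gated by a personality stat. The sketch below is an invented simplification – the threshold of 50 and the field names stand in for whatever checks Oblivion actually performs.

    def fulfil_hunger(npc, food_price=2):
        """Satisfy the 'eat' goal in whatever way the NPC's stats allow."""
        if npc["gold"] >= food_price and npc["responsibility"] >= 50:
            npc["gold"] -= food_price
            return "buys food"
        # Low responsibility (or an empty purse) falls back on theft, which
        # in the untuned Radiant AI could escalate into lethal confrontations.
        return "steals food"

    guard  = {"responsibility": 90, "gold": 10}
    addict = {"responsibility": 10, "gold": 10}
    print(fulfil_hunger(guard))    # buys food
    print(fulfil_hunger(addict))   # steals food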

Of course, high-responsibility NPCs like guards won’t let crimes go unpunished, and an NPC who has stolen a loaf of bread can get killed. In the game, the player incurs a bounty if they commit a crime, and can pay instead of getting into a lethal encounter. This alternative was not granted to NPCs, so minor theft escalated to murder.

Designer Emil Pagliarulo had to take steps to tone down Radiant AI so that NPCs wouldn’t slaughter each other to complete their daily tasks, and has described Oblivion’s original Radiant AI as a sentient version of the holodeck, the life simulator from the Star Trek franchise.

But even in the finished version of the game, one can exploit Radiant AI in interesting ways. Oblivion’s game world features poisoned apples. If the player places these apples in a public place, an NPC will likely eat them and die. This has no connection to any quest – it is just a simple player action with disastrous consequences for an NPC. 

Even in Skyrim (2011), Bethesda’s fifth instalment in the Elder Scrolls franchise, the fine-tuned version of Radiant AI makes for lethal stand-offs between NPCs. In this video, NPCs fight to the death to claim certain valuable items dropped by the player, and in fact, they might do the same even if the items aren’t particularly valuable. This behaviour is driven by Bethesda’s Radiant Story system, which creates random quests based on certain parameters (like the quest giver, the guild they belong to, and other contexts) and also makes NPCs react dynamically to player actions.

Skyrim’s Radiant Story System Creates Dynamic Events Based on the Player’s Actions (Courtesy Bethesda)

NPCs will ask the player character if they can keep any item the player has dropped (or fight other NPCs to claim it). A guard may berate the player for dropping weapons, pointing out that someone could get hurt, and even fine them if they disregard the warning. Completing a quest for an NPC makes them friendly towards you, and you can take their items instead of robbing them. In fact, friendly NPCs will also attend your wedding if you get married.

Meeting and Making Your Nemesis in the Mordor Games

The Nemesis System used in Middle Earth: Shadow of Mordor (2014) and Shadow of War (2017) is perhaps the one AI system that practically ensures that every player will encounter different villains in every playthrough. 

Shadow of Mordor Creates Unique Enemies in Each Playthrough with its Nemesis System (Courtesy Warner Bros)

The developer Monolith Productions essentially created a dynamic ‘villain generator’ in which the player’s hostile encounters with an orc change the orc’s status in the enemy hierarchy, his attitude towards the player, his powers, and more. If you set an orc on fire, he will hate you forever and develop a phobia of fire; if you run away from him in a fight, he will taunt you the next time you confront him. In effect, the Nemesis System turns a generic enemy into a named villain with unique traits.
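
Conceptually, each orc carries a persistent record of his dealings with the player, which later systems read back to drive dialogue, fears and rank. The sketch below is a loose illustration of that idea; the field names, events and rank values are invented, not Monolith’s actual design.

    from dataclasses import dataclass, field

    @dataclass
    class Orc:
        name: str
        rank: int = 0
        fears: set = field(default_factory=set)
        memory: list = field(default_factory=list)

        def on_encounter(self, event):
            self.memory.append(event)          # remembered for future taunts
            if event == "set_on_fire":
                self.fears.add("fire")         # a lasting phobia
            elif event == "killed_player":
                self.rank += 1                 # rises through Sauron's ranks

    orc = Orc("Ratbag")
    orc.on_encounter("set_on_fire")
    orc.on_encounter("killed_player")
    print(orc.rank, orc.fears, orc.memory)
    # 1 {'fire'} ['set_on_fire', 'killed_player']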

The Nemesis System is largely made possible because Talion, the protagonist, cannot die, as he is in a state between life and death – lore is used to weave player death into the narrative and gameplay. This mechanic allows orcs and other enemies to remember Talion, hate him for what he has done to them, rise up the ranks by killing him and even gain a following because of their exploits – this can make them even harder to kill.

The Nemesis System is also built on the idea that the orcs in Sauron’s Army are a bunch of back-stabbing, infighting brutes, who rise to alpha dog status by challenging and killing orcs higher up the hierarchy – orcs can become named villains not only by facing off against you, but also by taking on their commanders.

One of the late-game objectives is to sow discord in Sauron’s Army and thereby dismantle it, and the best way to achieve this is by recruiting low-level orcs using a special power. Such allies will spy for you, betray and supplant enemy leaders, and even join you in fights against powerful named villains. This is part of Nemesis too – orcs can rise up, but can also lose status if you beat them or if their bid for more power backfires. ‘Turning’ such a weakened orc – or recruiting him – allows you to thin your enemy’s ranks: the game discourages indiscriminate killing.

The ability to recruit orcs, even high-level ones such as captains and warchiefs, was expanded in Shadow of War to let players build up a veritable army of their own. The orcs themselves have complex relationships and are less prone to butchering each other – an orc you kill may have a friend who will hunt you down to avenge his brother-in-arms.

Shadow of War Uses the Nemesis System to Help The Player Build an Army (Courtesy Warner Bros)

Warner Bros, the publisher of the Mordor games, chose to patent the Nemesis System, preventing other developers from building on Monolith’s achievements. If the patent had not been granted, developers could have used Nemesis, or developed a system based on it, to create true drama between the player and their enemy, whose personalities grow every time they face off against each other.

The Perfect Monster in Alien: Isolation

Alien: Isolation developer Creative Assembly faced an unprecedented challenge when designing the game – how could game AI be implemented to recreate the perfect killing machine, the xenomorph, from the Alien movies?

The Xenomorph AI in Alien: Isolation Strikes a Perfect Balance between Fear and Opportunity (Courtesy Sega)

As Ian Holm’s character says in the first film, Alien (1979), the xenomorph is the “perfect organism. Its structural perfection is matched only by its hostility… [It is] a survivor…unclouded by conscience, remorse, or delusions of morality.” 

The AI for such an entity has to be near-perfect as well – the horror game’s immersion would have been utterly broken if some bug made the xenomorph run around in circles, or behave like one of Oblivion’s Skooma-addicted NPCs. Every interaction between the player and the xenomorph had to be scary, believable and unpredictable. 

The developers adopted a design mantra called ‘psychopathic serendipity’, where the xenomorph somehow seems to be at the right place at the right time, and foils your plans even when you successfully hide from it. While you can’t kill it, it can kill you instantly.

Developers used a two-tiered AI system to foster these ‘serendipitous’ encounters: a director-AI always knows your location and your actions, and periodically drops the alien-AI hints about where to look for you. But the alien-AI is never allowed to cheat: it can only work with the clues it’s given. You can always evade, hide or take it by surprise. This makes the game unpredictable, both for the alien and the player.

The alien-AI has an extremely complex behaviour tree system that determines the actions it takes, and some of its nodes are unlocked only after certain conditions are met, making the xenomorph exhibit unnerving traits that suggest it is learning from your actions. Other behaviours are unlocked as you get better at the game, enabling the xenomorph to keep you on your toes.

A dynamic ‘menace gauge’, meanwhile, rises based on certain in-game contexts, estimating how tense the player is. When it reaches a certain threshold, the xenomorph will back off, giving the player some breathing space.
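
The interplay between the two tiers and the menace gauge can be sketched as a simple update loop. Everything below – the class names, the hint fuzzing and the numeric thresholds – is invented to illustrate the structure, not Creative Assembly’s implementation.

    import random

    class Alien:
        def __init__(self):
            self.target = None
        def investigate(self, hint):
            self.target = hint     # the alien only ever knows the fuzzy hint
        def retreat(self):
            self.target = None

    class Director:
        """Omniscient tier: sees the player exactly, leaks only vague clues."""
        def __init__(self):
            self.menace = 0.0

        def update(self, player_pos, player_hidden, alien):
            # Tension builds while the alien is actively hunting, ebbs otherwise.
            self.menace += 0.1 if alien.target else -0.05
            self.menace = max(self.menace, 0.0)
            if self.menace > 1.0:
                alien.retreat()    # back off and give the player breathing space
                self.menace = 0.0
            elif not player_hidden:
                # Leak a deliberately imprecise hint, never exact coordinates.
                hint = (player_pos[0] + random.uniform(-10, 10),
                        player_pos[1] + random.uniform(-10, 10))
                alien.investigate(hint)

    director, alien = Director(), Alien()
    for tick in range(40):
        director.update(player_pos=(12.0, 7.0), player_hidden=False, alien=alien)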

The alien’s pathfinding algorithm is also tweaked to make it look like it’s hunting, searching or even backtracking, suggesting that it’s revising strategies on the fly. Such behaviour is activated either by giving the xenomorph areas of interest to explore, or making it respond to loud noises made by the player. The intentionally sub-optimal pathfinder makes the xenomorph stop at all points of interest, ramping up the tension, making the player wonder what it is up to. However, the alien will never look in certain areas of the game, as doing so would shift the game balance unfairly in its favour. Throughout the game, the alien never spawns or teleports anywhere (except for two cutscenes), but can sneak around so well that players think it’s teleporting. 

The AI in Alien: Isolation creates a macabre game of hide-and-seek with one of cinema’s most fearsome creatures, whose animal cunning keeps you guessing throughout the game. 

Peak Game AI in Red Dead Redemption 2

It is difficult to capture the full complexity of the in-game AI in Rockstar’s Red Dead Redemption 2 (RDR 2), but like Alien: Isolation, the game represents a novel layering of multiple AI systems.

Red Dead Redemption 2 is so Complex that its AI will Surprise Players for Years to Come (Courtesy Rockstar)

In this game, you can interact with every NPC in a variety of ways, and they will react and comment on what they notice about you, such as the blood stains on your shirt, how drunk you are, or your ‘Honor’ level, which gauges the good you have done, and also affects how the player character Arthur Morgan behaves. 

NPCs may ridicule your choice of clothing, and keep their distance if you are dirty. They also have their own complex schedules, not just restricted to doing their jobs. They will start looking over their shoulder if you follow them along their routine and may flee if you persist. In Grand Theft Auto V (GTA V), attacking an NPC might trigger various reactions like fleeing or even a counter-attack. An NPC in RDR 2, however, may not immediately draw their gun, but try to address the situation with dialogue, allowing for more believable interplay between the player and the NPC. 

During actual combat, NPCs will act intelligently – they will dive for cover, grip wounded areas and even try to take down Morgan with a melee weapon when possible. Enemies under fire will behave differently from calm ones who are not in the thick of combat. If you take refuge in a building, NPCs will cover all exits before a coordinated attack.

The sparse wild-west world of RDR 2 meant that each NPC had to be given a unique personality and mood states. Rockstar engaged 1,200 actors from the American Screen Actors Guild to flesh out the NPCs, each of whom had an 80-page script, and captured the actors’ demeanour and mannerisms over 2,200 days of mo-cap sessions. 

Even the wilderness teems with 200 animal species that interact with each other and the player, and are found only in their natural habitats – the vast open world features multiple ecosystems and animals react realistically to creatures higher up the food chain. Herbivores will flee at the sight of wolves, wolves themselves will flee from a grizzly bear and a vulture will swoop down on an abandoned carcass.

Rockstar also overhauled the animation system to create more accurate human and animal mannerisms in the game world, generating fluid movements without stiff transitions. The animation overhaul allows NPCs to react to the nuances of your facial expressions, your posture and mannerisms, especially as they all change due to the dynamic nature of the game world. A well-rested, well-fed Arthur Morgan looks different from one who is half-starved and muddy after an all-night trek, and NPCs will note this. 

Rockstar also completely recreated horse animations from scratch and even allowed the horse AI to decide how to move based on the player’s input. As a result, your horse is practically a supporting character in the game, and there’s a YouTube video devoted just to how Rockstar made the ‘ultimate video game horse’.

One day, a fan exploring the game found a mounted NPC wearing the exact same clothes as the player character. In games with less variety in outfits this would happen often, but RDR 2’s numerous outfits make such a match unlikely – until you consider how many complex NPCs share the world with the player. The NPC had, by sheer coincidence, chosen the same clothes as the player; it is unlikely that the choice was scripted to surprise anyone.

Simply put, RDR 2’s AI is massively complex and will surprise players for years to come with its emergent gameplay.

Conclusion

We have seen how game AI can create complex interactions with NPCs who can act quite intelligently in context. 

In RDR 2’s case, NPCs are so complex that one can sense they have their own lives, which are not just centred around the player. We may also assume that Rockstar did not use neural networks to power their game AI – the implementation of state-of-the-art AI would surely have made it to promotional materials. Every game discussed above uses traditional techniques at greater and greater scales, and game AI of this kind may eventually reach a plateau. What, then, is its future?

AI powered by machine learning and neural networks may soon become a viable means for playtesting. What if an AI such as this were allowed to play a million games of RDR 2 or Skyrim to fine-tune NPC behaviours, while still maintaining the balance of predictability and randomness game AI requires? 

Most machine learning systems and neural networks work on vast datasets. Developers could perhaps take game AI to the next level by training a deep learning AI to amass and parse game AI behaviour, and improve it further still, and then create game AI – and new game AI techniques – for a subsequent game. Elder Scrolls VI, and Rockstar’s next open-world game, could perhaps benefit greatly from an AI created by AI.

Gameopedia offers extensive coverage of the AI used in all types of games. Contact us to gain actionable insights about how industry game AI techniques make games more believable and immersive. 

Case Study: How Metadata and Understanding Gamers Can Drive Conversions

In this study, we discuss how a user survey of an online gaming store yielded actionable insights about improving game discovery for gamers across various demographics. 

Why do customers play games? What makes them hate or avoid a game? How do they select a new game to play? The response to such questions revealed that coupling game metadata with a nuanced understanding of user attitudes and preferences can foster game discovery and drive more conversions in a scalable, consistent and user-focused manner.

About the Survey: Key Findings

Our survey respondents were all customers of a budget-friendly online gaming store that makes use of our game metadata services. The survey population skews young – 70% were less than 35 years old, and 26% of the population – the largest single chunk – was between 20 and 24 years of age. Female participants were overrepresented in the youngest age group (under 25 years of age), but underrepresented in the survey population as a whole. Many of our more nuanced insights about driving conversions derive from how user preferences change across demographics. Our survey questions allowed, but did not require, multiple responses, helping us understand the many factors that drive user decisions.

A majority of respondents – about 60% – seek to take their mind off things by playing games (though 26% may also want to experience something beautiful, while taking their mind off things), and many customers avoid games with aggressive monetisation. Genre is the foremost decision driver in selecting a new game to play – 56% choose a game based on its genre – and nearly 75% of the site’s users are prompted to start a new game based on favourable critics’ reviews, user ratings, and friends’ recommendations. 

In the following sections, we add nuance to this basic gamer journey by delving deeper into the survey responses. We detail how game metadata can be harnessed to refine game discovery based on definable concepts such as game genre, setting, theme, and gameplay elements. We also discuss how stores can leverage even subjective attitudes about game aesthetics and monetization when directing key demographics towards games they would enjoy. Where possible, we link to relevant content such as blogs and other pages on our website, so that our insights into user preferences can be understood in a wider context. 

Each section below is entitled with the questions we asked in our survey, and contains insights we gleaned about user preferences from our survey respondents. 

Why Do You Play Games?

While a majority of the respondents play to take their mind off things, a quarter said they want to experience a beautiful game, and 14% said they use games to unleash their creativity.

In fact, female respondents are 18% more likely to prefer games based on their visual appeal, and are around 60% more inclined to use games as an outlet for their creativity: both aesthetics and creative expression matter to them.

Given that female respondents are even more likely to play games with appealing visuals, retailers can drive conversions among their female customers (and even attract more women gamers) by curating games that are universally praised for their beautiful visuals, and adorning such games’ store pages with attractive screenshots. Stores can also feature games with a strong creative element – such as Minecraft (2011) and other similar sandbox games – to achieve the same effect. 

Only 14% of all respondents cited graphics quality as a decision driver for buying a game, and only 13% said they seek out games with a specific art style. Our store’s customers are not necessarily looking for state-of-the-art graphics, or for a specific ‘look’, but for good aesthetics. To learn more about how video games can be beautiful, read our blog on the hunt for photorealism.

What Makes You Hate or Avoid a Game?

Respondents cited aggressive monetization, an unfriendly player base and poor performance (bugs and technical issues) as the three main factors that make them hate or avoid a game. For this question, any one of the responses can serve as a deal breaker – a customer who cites multiple factors will not put up with some, but not all, of a game’s problematic aspects; they will abandon a game if it has even one of the features that displease them. Our sentiment analysis can help gauge user attitudes about a game’s monetization strategy, its performance, user experiences with a game’s multiplayer base, and even perceptions about whether the game delivers value for money.

Players younger than 25 – who form a significant chunk of our survey population – are 15% more likely to tolerate aggressive monetization, though cost plays a greater role in their buying decisions (around 35% vs the average of 30%). Younger players want to be convinced of the value a game offers before parting with their money, and the freemium/free-to-play, or live-service game model is ideally suited to their preferences – they can assess the free base game and decide whether premium content will be worth the price, and can also satisfy their need for social engagement through such games. 

Stores can drive conversions among younger customers by selling premium content for prominent live-service games and other games that adopt the free-to-play or freemium business model. Such a strategy can be highly lucrative, as such titles can keep gamers engaged for years.

Older respondents are 20% less likely to play with other gamers. In fact, for older store customers, forced interaction with other players is a deal-breaker. When compared with younger players, customers aged 25-40 are 25% more likely to avoid games with forced interactions, and respondents over 40 are 50% more inclined to abandon such games.

Customers over 40 are 60% more likely to play games to solve problems with careful thinking and planning. For such users, gaming is a solitary pursuit and an opportunity to flex their brains. Featuring single-player games that emphasise puzzle-solving can attract more older players, and drive more conversions among them too, and our game metadata framework can help identify games of this sort, which serve niche interests.

What Makes You Start a New Game?

More than 70% of our respondents start playing a new game based on critic reviews, user ratings and friends’ endorsements. Our metadata framework provides details about critic reviews and user ratings, and player sentiment can be gauged to see if the game is likely to be recommended to others. Stores can feature favourable reviews and user ratings and use sentiment analysis to establish a game’s bona fides.

About 30% also start a new game if it resembles what they have played before. But what exactly does the customer mean by resemblance? Is the similarity in gameplay, visuals or something else? 

The store could make educated guesses about resemblance through the customer’s purchase history, but guesswork is not scalable. The best insights about user preferences and game discovery will emerge from a rigorous metadata framework which categorises store titles by gameplay, visuals, or any relevant video game feature.

Suppose a customer has bought several Assassin’s Creed games, each at least a year after its launch. The user may not buy the latest release because it’s not yet on sale and they are at the store for a bargain.

What if the store uses metadata to suggest a parkour-style exploration game, or an open-world game with stealth mechanics? Ghost of Tsushima (2020) – like the Assassin’s Creed games – is a gorgeous open-world game with stealth mechanics, and could resonate with an Assassin’s Creed fan.

But if the customer wants games that resemble Assassin’s Creed in terms of parkour traversal, then Sunset Overdrive (2014) would be a good match, and Mirror’s Edge Catalyst (2016) and Dying Light (2015) would be a novel experience because of their thrilling first-person parkour mechanics.

Only by using a metadata framework for defining features like visuals, gameplay and traversal can the store identify multiple titles ‘like’ Assassin’s Creed. Such a framework is also scalable as it covers the store’s entire catalogue rather than a single franchise. 
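
One simple way to make ‘like’ computable is to store each title’s descriptors as a set and rank candidates by tag overlap. The catalogue and tags below are invented illustrations of the kind of metadata such a framework might assign – Jaccard similarity stands in for whatever measure a production system would actually use.

    def similar_to(title, catalogue):
        """Rank other titles by Jaccard overlap of their descriptor sets."""
        base = catalogue[title]
        scores = {
            other: len(base & tags) / len(base | tags)
            for other, tags in catalogue.items() if other != title
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    CATALOGUE = {
        "Assassin's Creed Odyssey": {"open_world", "stealth", "parkour", "third_person"},
        "Ghost of Tsushima":        {"open_world", "stealth", "third_person", "melee"},
        "Mirror's Edge Catalyst":   {"parkour", "first_person", "linear"},
        "Stardew Valley":           {"farming", "pixel_art", "relaxing"},
    }

    for game, score in similar_to("Assassin's Creed Odyssey", CATALOGUE):
        print(f"{game}: {score:.2f}")
    # Ghost of Tsushima ranks first; Stardew Valley, with no shared tags, ranks last.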

What Drives You to Select a New Game?

This question is vital to retailers because of its direct insight into what drives conversions. 

Cost is the third-most cited factor and customers may well be conservative in their choice of games, especially considering that they tend to stick to a certain genre.

The second-most important factor – a strong story – is cited by 30% of the respondents, but only 17% abandon games with bad or weak stories. Players will likely put up with this flaw if the game is otherwise appealing. Games whose stories have resonated with gamers can be identified using sentiment analysis and given more prominence.

Only 17% cite good performance as a factor behind buying games, but 27% of the respondents will avoid games with poor performance. Good performance is expected – publishers won’t get brownie points just for delivering a functional game – and a bug-ridden release will attract few users. Consequently, stores can feature titles that are making a ‘come-back’ from bad and buggy launches. 

Genre is the foremost decision driver, with 56% citing it as a factor in buying games. Stores could identify the most popular genres among their users and give those genres prominence, but how would they define ‘game genres’?

Sites like Metacritic do provide genres, but many games fall into multiple genres: Skyrim’s genres are ‘Role Playing’ and ‘Western Style’, to distinguish it from Japanese Role-Playing Games (RPG). A store using borrowed genre classifications might present open-world RPGs to customers who enjoy linear RPGs (if its metadata does not distinguish between the two), resulting in few conversions. 

A comprehensive metadata framework would define genres precisely and help identify the most popular genres amongst users, after which the store could play up the most prominent games in popular genres. A sufficiently granular metadata framework can give gamers the exact type of game they want within the foremost genres as well. 

Suppose the shooter genre is one of the most popular amongst store users. One customer has purchased several less-known shooters, and is unlikely to buy the more popular titles. 

Just as the metadata framework can be used to categorise a franchise like Assassin’s Creed by game features, a descriptor like genre itself can be sliced and diced into subgenres based on combat, visuals, camera perspective and more, to find a match for the user who buys niche shooters. 

Like a detailed map, a store whose pages contain granular information about genre, setting and other descriptors can speed up the user journey, steering gamers toward the title they want, and propel conversions. Such pages can also lead more people to stay on the site, instead of abandoning it. 

Our metadata framework is wide, covering a multitude of games, and deep, covering each game with detailed descriptors and dividing games into precise sub-groups. It can hence consistently drive conversions in a scalable manner, and this is why we do what we do. To learn more about how we do it, visit our pages on video game metadata and game taxonomy.

Conclusion

Our survey respondents depend on trusted sources to start playing a new game and prefer specific genres or settings over others. Some of the reasons they play, or avoid, a game can be considered subjective – a game cannot be defined as beautiful with rigour, nor can its monetization be objectively characterised as ‘aggressive’. Game retailers can use sentiment analysis or trawl news outlets to gauge the prevailing opinion about aesthetics and monetization, and give prominence to certain games accordingly.

But our survey also indicates that many gamer preferences require a robust metadata framework – genres need to be well-defined if a store sets out to play up popular genres to all players, or specific genres to some of its clientele. Gamers who like to play something resembling a previous game need to be given suggestions using descriptors that can identify similarity precisely. A nuanced metadata framework can also identify the subtle but significant differences between two largely similar games, giving the user a better idea of what to expect from their new purchase.

Ultimately, the most robust game discovery or personalisation system will emerge when we understand why customers have the preferences they do, and we at Gameopedia are working to codify what drives user preferences. It is difficult to imagine that more than half of our client store’s customers prioritise genre for the same reasons. Some may like shooters for the adrenaline rush of their fast-paced action, and others may like interactive adventure games because of their strong narratives. If gamers prefer certain genres over others for specific reasons, game retailers can suggest other games that boast similar qualities to coax users into trying new genres.

Our survey results thus indicate the need for a game metadata framework, and a deeper understanding of a user’s preferences, not only to improve conversions, but to truly understand, and satisfy, customer needs. 

The Retro Game: Nostalgia and Reinvention

When Nintendo released its Classic Mini NES in 2016, the gaming community went berserk. The Mini Nintendo Entertainment System (NES) sold out instantly due to ‘feverish demand’ and within days, scalpers were selling the console for up to nearly four times its retail price on eBay, at an average price of $230. In comparison, the Switch’s launch price was $299 in 2017. 

The NES Classic Mini (Courtesy Nintendo)

Nintendo’s retro console is a small-size replica of the NES and there isn’t any place to insert cartridges – it contains 30 games made for the original NES, most of which are at least 30 years old. Yet the console was wildly popular, and Nintendo simply could not match the demand for it. The company ceased manufacturing the NES Classic Mini by 2018, but its foray into retro consoles had shown just how popular retro gaming had become. 

The NES Mini’s unprecedented success suggests that even retro-inspired games may well find an audience among gamers, and this is indeed the case. In fact, modern retro-inspired games are popular both among older gamers looking to relive their childhood gaming experiences, and younger players eager for a taste of the classics. Such games succeed not only because of the pull of nostalgia, but also because they recreate the look and feel of older games while introducing innovative gameplay mechanics. 

In this blog, we will discuss what a retro game is and how such games have inspired a slew of modern titles. We will also discuss the history of how modern, retro-style games attained a degree of mainstream popularity and recognition, and delve into some of the most well-regarded retro-inspired games of today.

What is a Retro Game?

There isn’t a single widely-accepted definition of a retro game – what is considered retro, and what is considered a retro classic, is largely determined by what will evoke nostalgia among older gamers. 

Today, titles released during the 8-bit to 16-bit period (or the third and fourth generation of consoles) are fondly remembered as classics by older gamers, who played these games as children and are more likely to gravitate towards titles that bring back memories of playing such games. The average gamer is around 35-37 years old, and a significant chunk of gamers today are in their late thirties or early forties. They have more disposable income to spend on games, and are more likely to spend frequently on gaming. Such players almost certainly got their first taste of gaming from the third and fourth generation of consoles and their nostalgia for this time period impels them to seek out the games of the ’80s and ’90s. 

The games from this era are true classics, likely to remain relevant even when nostalgia ceases to be a factor. The Nintendo Entertainment System (NES) saved the industry after the video game crash of 1983 and introduced instant classics such as Super Mario Bros (1985) and The Legend of Zelda (1986), both of which would spawn long-running game franchises. The quality of these games has made retro gaming a highly enjoyable pastime – and the NES and SNES are especially popular among retro gamers. The shift from 2D to 3D, during the fifth generation, marked the end of an era that had brought gaming back to the mainstream. This may be why many indie titles, including the ones we discuss in this blog, pay homage to this time period in gaming history. 

Super Mario Bros and other Ground-Breaking Games Revived the Industry (Courtesy Nintendo)

What is a Modern Retro Game?

A modern, retro-style game faithfully recreates the 2D aesthetic of the 8-bit and 16-bit era and adopts the gameplay mechanics of the ‘classic’ generation while introducing innovations made possible by modern tools and design perspectives. Essentially, a modern retro game tries to recreate not only the appearance of a much older game, but also the experience of playing one, with innovations that can appeal even to younger gamers not necessarily looking to relive their childhoods.

There are some exceptions to the 2D aesthetic, however: both Project Warlock (2018) and Ion Fury (2019) are inspired by the appearance and gameplay of early FPS games like Doom (1993) and Duke Nukem 3D (1996). Both Project Warlock and Ion Fury are nevertheless inspired by the same time period, and the gamers who played games on the SNES (Super Nintendo Entertainment System, 1990) no doubt played Doom and other FPS titles on PC as well.

Project Warlock Pays Homage to the Shooters of the ’90s (Courtesy Buckshot Software)

History of Modern Retro Games

The rise to prominence of modern retro-style games can be linked to some extent with the history of indie game development – in the 2000s, indie developers carved a niche for themselves by delivering retro-style experiences, and by the 2010s, such games hewed closely to the design and aesthetic of older games, intentionally recreating the experience of playing a classic from the past. 

In the 2000s, major game studios were pushing the envelope on 3D gaming and the decade saw exponential growth in the quality of 3D graphics. Eventually, major studios transitioned to 3D game development and the 3D worlds pioneered by id and Epic Games became common. This created a market among players looking for nostalgic 2D experiences.

According to Sam Roberts, director of the annual indie game festival Indiecade, the retro aesthetic helped indie developers create a niche for themselves because of the big developers’ ‘single-minded’ pursuit of high-res, photo-realistic graphics, which led them to abandon game genres that had been popular in the ’80s and ’90s. AAA studios were not really inclined to deliver retro gaming experiences, even though a demand for them existed, as demonstrated by the success of Cave Story (2004).

The 2D platform adventure Cave Story was the product of a single game developer, Daisuke “Pixel” Amaya, who made the game over the course of five years, mainly during his free time. The game has received widespread critical acclaim for its polished look and gameplay design, and for the sincere tribute it paid to classic franchises like Metroid, Mega Man, The Legend of Zelda, and Castlevania. Its success demonstrated the demand for retro games, and its quality and sophistication showed how indie game development had matured.

Cave Story was one of Indie Gaming’s First Successful Retro Games (Courtesy Daisuke "Pixel" Amaya)

This was followed by other successful titles like Braid (2008), Super Meat Boy (2010), Terraria (2011) and Minecraft (2011). With the exception of Minecraft, these early indie successes were already harkening back to the 2D era, inspired in part by Cave Story. In 2008, Microsoft launched its Summer of Arcade event to promote indie games, prominently featuring Braid; Super Meat Boy followed on Xbox Live Arcade in 2010. Indie games had emerged from their niche and into the mainstream.

The 2D indie games of the 2000s had unique aesthetics and did not generally mimic the look of an 8-bit or 16-bit game. But from the 2010s onwards, new techniques allowed developers to create an authentic ‘retro’ look. Shovel Knight (2014), made with a custom engine, was so similar in appearance to the games of the ’80s and ’90s that some gamers believed it could be played on the NES console. 

Super Meat Boy (Courtesy Team Meat)

By the mid 2010s, there were a slew of indie games that took cues from Shovel Knight, and attempted to faithfully recreate the retro aesthetic of the ’80s and ’90s. Such games also retained older gameplay elements while introducing modern conveniences. Not all went as far as Shovel Knight in recreating the ‘classic’ look, but their visuals are clearly inspired by games for the NES and the SNES. 

Why are Modern Retro Games So Popular?

Modern retro-inspired games are popular because they are well-developed, highly replayable titles with deliberately old-school visuals and audio – the best retro games combine nostalgia and innovation to appeal to a wide variety of gamers.

In fact, a video game is far more capable of evoking nostalgic feelings than a film or a piece of music because it is highly immersive, allowing you to revisit a cherished virtual space from the past. Playing retro games (rather than watching a classic film) can be an intensely personal experience.

Nostalgia alone, however, cannot account for the popularity of retro games. Such games also bring back the elegant simplicity of older game design, and even though some of them are harder to play than the average game, their gameplay elements can be quickly understood, paving the way for an immersive experience quite unlike a modern AAA game, which can overwhelm with its cutscenes, visuals, branching storylines and sprawling worlds. Those looking for a simpler experience may naturally turn to retro games.

According to The Independent, 90% of gamers will not finish modern games, partly because games now feature longer campaigns – a modern game’s campaign can take 30 to 100 hours to complete. Given the complexity and length of modern video games, older gamers tend to prefer the simplicity and familiarity of a retro game that will not eat into their time. Even younger players can be attracted to such games because they are now trendy and their core gameplay loops are relatively easy to pick up.

Another compelling reason to play a retro game is that it provides an alternative to the toxic culture of competitive multiplayer gaming. As a critic observes, contemporary multiplayer focuses on ‘destroying’ opponents, but the couch co-op games of the ’80s and ’90s were about having fun together. Retro games that allow multiplayer gaming of the older kind let people relax instead of obsessing over being the best and racking up the most kills. 

At its simplest, nostalgia is a sentimental yearning for a happy past. It indubitably plays a role in the popularity of retro-inspired games, but so do many other factors. Gamers who are rediscovering old-school couch co-op are not just reliving their childhood, they are escaping the needless stress of competitive multiplayer. Gamers who are tired of sprawling open-world games with endless side quests can enjoy both the simplicity and the challenge of retro-inspired games. 

The Best Retro Games of Today

The best retro games released today blend nostalgia, innovative gameplay, simplicity and a very recognizable 8-bit or 16-bit aesthetic that goes right down to the use of ‘chiptune’ music and a rigorously limited colour palette derived from classic games. The games we discuss below are all very well-regarded for their adroit recreation of the past for gaming audiences of the present. 

Celeste

Celeste (2018) is a retro platformer with unusual mechanics – it lacks a skill progression system even as the levels get tougher. You will have to restart each level, or screen, afresh if you make a single mistake, and the lack of level progression essentially impels you, rather than your player character, to become better at the game. This might give the impression that Celeste is a ‘hard’ game, meant to be ‘beaten’ – but the game uses its difficulty to tell a compelling and emotional story about a young woman who must climb a mountain while coping with her depression and anxiety. 

Celeste is a Difficult Game that Tells a Moving Story (Courtesy Maddy Makes Games)

You die a lot in Celeste, but each death is a reminder that you are constantly learning how to overcome challenges. When you do complete each level, there is an exhilarating sense of accomplishment, especially as your player character does not level up – it’s you who have surpassed the challenge. Celeste’s restrained approach to mental health actually helped a player cope with suicidal thoughts – a remarkable achievement for any game. Celeste was by no means a ‘cult’ hit – by the end of 2019, it had sold over a million units. 

Sonic Mania

Unlike most retro-inspired games, which are usually made by indie studios, Sonic Mania (2017) was produced by Sega itself. Sonic Mania went back to the franchise’s roots – building and maintaining momentum were once again the focus of the game. The Sonic franchise had long been stagnant and Sonic Mania was a refreshing return to form. 

Sonic Mania Goes Back to the Franchise's Roots (Courtesy Sega)

The game allows you to control Sonic, Tails, Mighty, Ray, and Knuckles, each of whom has unique skills. Sonic’s new drop-dash move lets him roll into a dash the instant he lands from a jump, enabling new platforming strategies. The soundtrack, with its combination of remixed classics and modern tracks, suited the game’s own mix of old and new. The graphics were true to the aesthetics of the Sega Genesis, but still looked great on modern displays. Sonic was finally cool again, all thanks to a game that got back to basics – within a year of launch, it had sold a million copies.

The Messenger

Inspired by Ninja Gaiden (1988), The Messenger (2018) is an intense 2D side scroller that lets you play as a deadly ninja who initially goes through various linear levels to confront a boss. But that is when the game throws a twist at you: the ninja gains special powers that enable him to explore the past and present, presented in 8-bit and 16-bit styles respectively.

The Messenger's 8-bit and 16-bit Art Styles are Part of its Gameplay (Devolver Digital)

But this is just the beginning – the past and present levels branch out into even more areas, and by then it is clear that The Messenger is not a linear game at all, but one inspired by the Metroidvania genre, which uses guided non-linearity to encourage exploration. The player must traverse various levels, solve puzzles and defeat several more enemies before meeting the real final boss: the demon who destroyed the ninja’s village. 

Enter the Gungeon

Emulating the top-down shooters of the third and fourth console generations, Enter the Gungeon (2016) is a difficult rogue-like filled with creative gun designs. Its procedurally generated levels follow an internal logic that produces true novelty, rather than slight variations of the same thing, increasing replay value. The game was a critical and commercial success: it has sold three million units since launch.

Enter the Gungeon’s Bullet Hell Mechanic (Courtesy Devolver Digital)

The game is difficult enough that there are online guides for beginners who may be unfamiliar with the ‘bullet hell’ mechanic – a staple of many games from the NES and SNES era. In a bullet hell game, a large number of projectiles in intricate formations are hurled at the player, who must dodge them all while trying to destroy the gun firing them. Enter the Gungeon uses the bullet hell mechanic to maximal effect, with a great deal of variety both in the enemy projectiles and in the implements the player character can use to defeat them. 
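
For the technically curious, the core of a bullet hell pattern is surprisingly simple. Here is a minimal, hypothetical Python sketch – not code from any actual game – that spawns an evenly spaced ring of projectiles and advances them each frame; real games layer many such emitters with rotations and timing offsets:

```python
import math

def spawn_ring(origin, count, speed):
    """Spawn `count` bullets in an evenly spaced ring around `origin`.

    Each bullet is (x, y, vx, vy); a real engine would use object pools.
    """
    ox, oy = origin
    bullets = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        bullets.append((ox, oy, speed * math.cos(angle), speed * math.sin(angle)))
    return bullets

def step(bullets, dt):
    """Advance every bullet by its velocity over one frame of `dt` seconds."""
    return [(x + vx * dt, y + vy * dt, vx, vy) for x, y, vx, vy in bullets]

# A 24-bullet ring fired from an enemy at (100, 100), stepped one 60 FPS frame.
volley = step(spawn_ring((100, 100), count=24, speed=80.0), dt=1 / 60)
```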

Bloodstained: Curse of the Moon

Bloodstained: Curse of the Moon (2018) is perhaps a game that hews too closely to its inspirations. Heavily influenced by the Castlevania series, it painstakingly recreates the 8-bit aesthetic and slow-paced action of the NES classic Castlevania III: Dracula’s Curse (1989). In fact, the game’s combat system was so similar to its inspirations that one reviewer soon grew impatient with the characters’ ‘plodding movement and attack speed’, and IGN stated that the game walks a fine line between homage and theft.

Bloodstained: Curse of the Moon Owes Much to its Inspirations (Courtesy Inti Creates)

Other reviewers were more appreciative, praising the ease with which you can switch between multiple player characters, and the gothic visuals and music that set a brooding tone for the whole game. Even IGN praised its difficulty scaling, as the game introduces new gameplay elements rather than just giving bosses more health – some enemies can knock the player back, sending them plummeting down the sort of abysses so common in 2D games. The game offers multiple options for tackling its eight stages, making it highly replayable. Within two years of launch, it had sold over half a million copies.

Shovel Knight

Shovel Knight (2014) comes closest to perfectly recreating an 8-bit game. Its art style counts almost as a faithful forgery, and the widely praised chiptune soundtrack reinforces the feeling that one is playing a game made for the NES – the developers actually had to clarify that the title could only run on modern consoles. 

The 2D platformer pays homage to Zelda II: The Adventure of Link (1987), copying its downward thrust attack, and other inspirations include Castlevania, Super Mario Bros and the Mega Man series. 

The game’s developers – Yacht Club Games – recreated many elements of a classic 2D side-scroller, including ‘parallax’ scrolling, in which the background suggests a 3D space by shifting different layers at different speeds, mimicking how nearby scenery rushes past a train window while landmarks on the horizon seem immobile (the sketch below illustrates the idea). Even the colour palette of the game is restricted to what would have been available during the NES era. Yacht Club Games took a nuanced approach to difficulty as well, introducing penalties for dying rather than sending you back to the beginning of the level – in effect, the game’s difficulty is the only aspect not faithfully copied from its inspirations.
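
The logic is easy to sketch in code. Below is a minimal, hypothetical Python example – the layer factors are illustrative, not Yacht Club Games’ actual values – that scrolls each background layer at a fraction of the camera’s speed:

```python
def parallax_offsets(camera_x, layer_factors):
    """Scroll each background layer at a fraction of the camera's speed.

    A factor of 1.0 moves with the foreground; values near 0.0 stay
    pinned to the horizon, like distant landmarks seen from a train.
    """
    return [camera_x * factor for factor in layer_factors]

# Foreground scenery, mid-ground hills and a distant skyline.
print(parallax_offsets(camera_x=500, layer_factors=[1.0, 0.5, 0.1]))
# [500.0, 250.0, 50.0]
```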

Shovel Knight: A Game So Retro Users Thought it Required an NES (Courtesy Yacht Club Games)

The crowdfunded game proved so successful, both among audiences and critics, that it is now considered one of the greatest games ever made, and has sold 2.5 million copies since launch. 

Undertale

The 2D RPG Undertale (2015) was also lauded as game of the year by many publications and was nominated for, and won, many awards – in a year when The Witcher 3 hit the stands. This is all the more incredible because Undertale was mostly made by a single designer, Toby Fox, who also composed the game’s music. Undertale, like Shovel Knight, is a classic of the 2010s.

Undertale shares the 8-bit aesthetic of Shovel Knight, but its gameplay is entirely its own, quite unlike any game from the classic (or contemporary) era. Undertale leaves it up to the player to decide whether they want to kill or spare enemies, creating three distinct playstyles: pacifist (no kills), neutral (some kills) and genocide (all kills). Undertale, however, gently nudges you toward a neutral playthrough.

Transcending both the Retro and Modern Aesthetic, Undertale is an Indie Classic (Courtesy Toby Fox)

Many games discussed here add nuance to game difficulty – Undertale actually lets you talk to enemies and get past them without striking a single blow. Many also feature widely acclaimed music; Undertale’s is the most-streamed video game soundtrack on Spotify as of May 2022. Your play style even determines what content you will see – an iconic battle with one of the game’s toughest enemies (accompanied by one of gaming’s most popular tunes) is unlocked only if you choose the genocide playthrough. Simply put, Undertale is indie development at its innovative best, combining old and new, and transcending both. 

Released in September 2015, the game sold over half a million copies soon after launch, becoming one of the best-selling Steam titles of the year. It has since made $26.7 million from Steam sales alone and remains popular – it was later ported to the Switch, where it became one of the top-selling indie games of 2019. 

Conclusion

The greatest quality of the retro games we have discussed is their runaway imagination, even as they hew close to their 8-bit inspirations. Nostalgia can only go so far; in fact, it has been criticised for discouraging innovation in game design. The designers of retro-inspired games are aware of this, and succeed in striking a fine balance between nostalgia and reinvention. 

Many of the games featured here are far more innovative than some of the greatest AAA games released today, despite the millions of dollars spent by bigger studios – AAA titles invariably push the envelope in terms of graphics, but not always in terms of gameplay. Moreover, ‘risk aversion’ is the new norm for bigger industry players, and this allows smaller games, with smaller budgets, to truly spread their wings and soar to new heights. And the ones that reach undiscovered territory are those that go back to the roots of home console gaming. 

Gameopedia offers custom solutions depending on your specific data requirements. Reach out to us for actionable insights about retro games, from their unique aesthetics to their novel gameplay.

The Evolution of 3D Graphics in Video Games (The Hunt for Photorealism)

A colleague shared this story about his first time playing a game on the PS5 – he had downloaded God of War and had just started the opening sequence when his father, who happened to be watching, exclaimed, “Why do they make movies anymore when games look so realistic?”

God of War (2018) looks stunning, but it is not exactly photorealistic. However, the reaction of our colleague’s father suggests that today’s major games could, perhaps, pass for live action to an untrained eye – clearly, games have come a very long way from the first 3D titles of the late 1990s. 

God of War (Courtesy Sony)

In this blog we will delve into the major milestones in the development of 3D game graphics, which have evolved from very basic and primitive effects to the near-photorealistic visuals we enjoy today. We will discuss the advent of true 3D games and then delve into the evolution of how 3D games are rendered – or how developers kept pushing the envelope to make games look increasingly realistic, immersive and believable. 

Before the advent of true 3D, games such as Doom (1993) and its numerous clones had faked the illusion of 3D using 2D game objects. The enormous success and technical innovations of Doom soon led to full 3D games – developers were keenly aware that gamers wanted true 3D titles and worked ceaselessly to create such experiences by designing and using new game engines.

The Advent and Rise of 3D Games

3D gaming rose to prominence for several converging reasons: developers were striving to create 3D-capable engines, hardware manufacturers were releasing the first true consumer graphics cards, and studios making games for the Nintendo 64 (1996) and the Sony PlayStation (1994) were pushing true 3D titles toward mainstream audiences. All these factors conspired to make 3D gaming prominent by the late 1990s. 

Hardware Acceleration Reaches Consumers

Hardware acceleration is a process by which certain workloads are offloaded to specialised hardware capable of parallel processing, which can execute these demanding tasks more efficiently than a software application running on the CPU.

Early graphics cards were designed to support hardware acceleration for video game rendering, and one of the first successful cards of this type was the Voodoo 1, made by 3dfx Interactive and launched in late 1996. By the end of 1997 it was the most popular card among developers and consumers, though 3dfx soon declined with the ascent of Nvidia, which would buy the company in 2000.

Tech Pioneers Start Making True 3D Games

The first 3D games were the result of unceasing innovation by a handful of brilliant programmers at id Software and Epic Games. At id, John Carmack spearheaded the creation of the Quake engine in 1996, which featured real-time 3D rendering and support for 3D hardware acceleration. The engine used static light maps for stationary objects and environments, while moving bodies such as the player character and enemies were lit dynamically. Tim Sweeney of Epic Games introduced 3D graphics effects well ahead of their time with his Unreal Engine, which used clever tricks to simulate soft shadows, volumetric fog, dynamic lighting and more. 

Quake (Courtesy id)

5th-Gen Consoles and Mainstream 3D Gaming

Advancements were not restricted to PC hardware and games – consoles also gave a major push to the emergence of games with 3D graphics. The Nintendo 64’s hardware architecture powered true 3D games such as Super Mario 64 (1996) and The Legend of Zelda: Ocarina of Time (1998), and the PlayStation also had great-looking 3D games such as Gran Turismo (1997), a racing game with full 3D environments. Like the Nintendo 64, the PlayStation used custom hardware to make 3D graphics possible, and the enormous success of the PlayStation – the first console to sell more than 100 million units – propelled 3D games into the mainstream.

The Legend of Zelda: Ocarina of Time (Courtesy Nintendo)

By the late 1990s, both PC hardware and consoles were capable of supporting 3D games and there had been a decisive shift toward 3D gaming. The next challenge was to make such games look as realistic as possible. Both Carmack and Sweeney had experimented with various rendering techniques to make their blocky 3D games look more realistic, but these titles are a far cry from what we see today. Since the late 1990s, developers have continued to push 3D game rendering toward photorealism, and this endeavour continues to this day. This blog is hence a history of advancements in 3D rendering – it ends with real-time ray-tracing, but only time will tell what new avenues game developers will explore. 

How Game Graphics Evolved Towards Realism

When developers strive for photorealism, they use every tool at their disposal to achieve it. In the following sections, we discuss the key innovations in game graphics, and how each of them dramatically increased the realism of 3D game rendering.

Normal Mapping – Detailing Optimised Game Models

Every 3D model (or mesh) in a game is composed of triangles, and will generally have gone through several iterations of optimisation to reduce its ‘polycount’, i.e., the number of triangles (or polygons) it has. During the design stage, however, high-poly models containing lots of detail are created using 3D design tools such as 3ds Max, Maya and ZBrush. Such high-res models can contain more than a million polygons and simply cannot be deployed in-game – the renderer would choke – but their details are required to make a scene believable. This is where normal mapping comes in: through a process known as baking, the detail of a high-res model is transferred to a ‘map’, or texture, which the game engine can use to give an optimised model the illusion of detail. Such normal maps can also convincingly mimic how the high-polygon model would respond to lighting, furthering the illusion that you are seeing a detailed in-game object, and not an optimised mesh linked to a normal map. The key benefit of a normal map is not just that it creates the impression of detail, but that it does so with highly optimised, game-ready geometry. 
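
To see how a normal map ‘fakes’ detail, consider a minimal Python sketch of basic diffuse (Lambert) lighting – the vectors are illustrative, but the principle is real: the renderer lights each pixel using the normal sampled from the map rather than the flat normal of the low-poly surface:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Basic diffuse lighting: brightness is the cosine of the angle
    between the surface normal and the direction to the light."""
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# The flat, low-poly surface faces straight up...
flat_normal = (0.0, 0.0, 1.0)
# ...but the normal map stores a perturbed normal baked from the
# high-poly model, so this pixel shades as if the detail were there.
baked_normal = (0.3, -0.2, 0.93)

light = (0.5, 0.5, 1.0)
print(lambert(flat_normal, light), lambert(baked_normal, light))
```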

Normal Mapping Allows Detail to be Preserved in Highly Optimised Meshes (Courtesy Wikimedia Commons)

Nvidia’s GeForce 3 was among the first cards to support textures such as normal maps and specular maps – the former gave models a detailed appearance, and the latter controlled how shiny or glossy the model would look. A custom-made version of this card was used in Microsoft’s first Xbox, which used normal mapping extensively. The PS2 didn’t include support for such texture mapping, but by the seventh generation of consoles, normal mapping was the norm across PCs and consoles. 

Implementing normal mapping and other textures in games was a major breakthrough, and normal maps are used to this day in games and other graphics pipelines. Today’s graphics processing units (GPUs) are capable of rendering far more polygons, but this capability is used in conjunction with normal mapping to make ultra-realistic scene assets.

The Transition to HD Gaming

HD-TV is an advancement in digital display technology that became available to consumers in the early 2000s and became widespread within a few years. The resolution of digital TV sets and monitors is measured by the number of pixels on the display, and in the early years of HD-TV, display resolutions ranged from 720p (921,600 pixels) to 1080p (over 2 million pixels). 

Seventh-generation consoles such as Sony’s PS3 (2006) and Microsoft’s Xbox 360 (2005) supported both HD gaming and HD video playback. In 2005, Microsoft exec J Allard touted the Xbox 360 as the console that would usher in a new era of HD gaming, even though gaming in high resolutions had been possible on PCs for many years before these consoles, thanks to the power of dedicated PC graphics cards.

God of War Was One of the First Franchises to be Remastered in HD (Courtesy Sony)

However, it was the consoles – which could support gaming and double as home entertainment systems – that made HD gaming mainstream. With the PS3, Sony remastered many of its PS2 games to run on HD screens. Many of these remasters, now playing at higher resolutions, had much sharper image quality and better-looking character models. Higher resolutions also decrease aliasing – the jagged edges that appear on rendered game models. This type of visual artefact can be quite distracting, and an HD display can make for a more immersive experience by minimising aliasing artefacts. HD resolutions do not automatically imply photorealistic renders, but they can help bring out the detail in renders by achieving high image quality. 

Advancements in Graphics Shaders

A shader is essentially a piece of code that runs on the GPU and contains specific instructions on how to render a 3D scene in pixels, or how to manipulate a 2D image before it’s shown on-screen. Shaders can tell the renderer how a 3D object should be lit, how it should be coloured, what it reflects and much, much more. Early graphics cards had fixed-function rendering pipelines, limiting the sort of effects that could be applied while rendering a scene, but the advent of cards with programmable shaders – the first of which was the GeForce 3 – utterly transformed 3D rendering. In fact, the use of normal and specular maps to texture an object requires a programmable shader pipeline. 
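
Conceptually, a programmable pipeline means the developer supplies the per-pixel function. The toy Python sketch below is a loose analogy, not real GPU code – actual shaders are written in languages such as HLSL or GLSL and run massively in parallel – but it shows a ‘fragment shader’ being applied across a framebuffer:

```python
def tint_shader(pixel, uv):
    """A toy 'fragment shader': given a pixel colour and its normalised
    screen coordinates, return a new colour – here, a simple warm tint."""
    r, g, b = pixel
    return (min(1.0, r * 1.1), g, b * 0.9)

def render_pass(framebuffer, shader):
    """The 'programmable' part: the pipeline applies whatever per-pixel
    function the developer supplies, instead of a fixed set of effects."""
    h, w = len(framebuffer), len(framebuffer[0])
    return [[shader(framebuffer[y][x], (x / w, y / h)) for x in range(w)]
            for y in range(h)]

# A flat grey 4x4 'image' run through the tint shader.
gray = [[(0.5, 0.5, 0.5)] * 4 for _ in range(4)]
tinted = render_pass(gray, tint_shader)
```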

Within a few years, developers had written highly complex shaders. In 2007, a programmer at Crytek developed screen-space ambient occlusion, which darkens the creases, holes and dents of an object, and the areas where it is in contact with other objects, resulting in a more realistic scene that looks like it is responding to indirect or ‘ambient’ light. The shader was first used in Crysis (2007), a game now legendary for its demands on computer hardware. Crysis in fact contains more than 85,000 shaders, which all but melted the graphics hardware of the time – and contributed greatly to the realism of the game.

Crysis has Always Pushed the Envelope in Terms of Game Graphics (Courtesy Electronic Arts)

Crysis 2 (2011) used a screen-space reflection shader to render reflections on glossy or glass-like surfaces and objects. The SSR shader contributes a lot to a rendered scene, but has its limitations, and is used in conjunction with other techniques such as cube mapping (also implemented via a shader) to create realistic in-game reflections. 

Crysis 2’s Screen Space Reflection Shader (Courtesy Electronic Arts)

Deferred rendering was another major shading technique that became widespread in the late 2000s. In essence, it allows a scene to be rendered with many more lights, by rendering the game geometry and the scene lighting in separate passes. In traditional ‘forward’ rendering, increasing the number of lights can rapidly increase rendering times, but deferred rendering enables more lights and more realistic world lighting without significantly impacting render times. Games such as Dead Space (2008) and Killzone 2 (2009) were among the first to implement deferred rendering, and it has since become an industry standard.
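
A rough Python sketch can illustrate the two-pass idea, under heavy simplification – a real G-buffer is a set of full-screen textures, and the lighting here is a bare-bones diffuse term – but it shows why geometry is processed once while only the lighting loop grows as lights are added:

```python
def geometry_pass(scene_pixels):
    """Pass 1: rasterise the geometry once, storing each pixel's position,
    normal and base colour in a 'G-buffer' for the lighting pass."""
    return [{"pos": p["pos"], "normal": p["normal"], "albedo": p["albedo"]}
            for p in scene_pixels]

def diffuse(pos, normal, albedo, light):
    """Very simplified point-light diffuse term (no falloff or shadows)."""
    to_light = [light["pos"][i] - pos[i] for i in range(3)]
    dist = sum(c * c for c in to_light) ** 0.5 or 1.0
    ndotl = max(0.0, sum(n * c for n, c in zip(normal, to_light)) / dist)
    return [a * c * ndotl for a, c in zip(albedo, light["colour"])]

def lighting_pass(gbuffer, lights):
    """Pass 2: shade every stored pixel against every light. No geometry
    is re-rendered, so extra lights only add to this loop – which is why
    deferred rendering scales to far more lights than forward rendering."""
    frame = []
    for px in gbuffer:
        colour = [0.0, 0.0, 0.0]
        for light in lights:
            contrib = diffuse(px["pos"], px["normal"], px["albedo"], light)
            colour = [c + d for c, d in zip(colour, contrib)]
        frame.append(tuple(colour))
    return frame

# One pixel lit by a white light overhead and a faint blue fill light.
pixels = [{"pos": (0, 0, 0), "normal": (0, 0, 1), "albedo": (1.0, 0.8, 0.6)}]
lights = [{"pos": (0, 0, 5), "colour": (1.0, 1.0, 1.0)},
          {"pos": (3, 0, 2), "colour": (0.2, 0.2, 0.8)}]
print(lighting_pass(geometry_pass(pixels), lights))
```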

Killzone 2 Was One of the First Games to Use Deferred Rendering (Courtesy Sony)

The shaders described above improve lighting, shadows and reflections, and thereby help make a scene look more realistic and believable. However, shaders are capable of many more effects, such as anti-aliasing and contrast-adaptive sharpening, which allows games to be run at lower resolutions by intelligently sharpening the upscaled render. Such shaders can even be injected into games through custom programs like Reshade, which boasts an ever-growing library of shaders and supports a huge variety of games.

Physically-Based Rendering – a New Paradigm

The hunt for photorealism is punctuated by various paradigm shifts, and the advent of physically-based rendering (PBR) in the early 2010s is probably one of the most important. Graphics shaders had incrementally improved the look of games, striving for realism by improving how in-game scenes were rendered. Shaders could do everything from scene lighting to post-processing effects such as camera filters and depth of field. But these shaders were all working on models textured in what are now known as ‘traditional’ or ‘non-PBR’ workflows, where the diffuse and specular maps of game assets were painted by texture artists and did not generally reflect the real-world properties of such assets. 

Traditional vs PBR Shading – Notice the Accurate Reflection on the Rifle’s Scope on the Right (Courtesy Marmoset)

PBR is crucial to photorealistic renders, because the associated texture sets and shaders accurately model how light interacts with in-game objects. In real life, a shiny gold crown or copper bracelet will look gold or orange – not because these metals are in any way ‘pigmented’, but because they absorb certain wavelengths of light. PBR shaders model this accurately, giving a clear grey sheen to most metals, but giving coloured metals or alloys their characteristic tint based on what they absorb. 

Even non-metallic objects are textured so that they reflect a tiny bit of light, as they do in real life, and shiny non-metals have a sheen based on their real-life properties. Reflective surfaces will accurately reflect in-game environments – physically-based textures even capture how reflective a body is based on the angle at which light hits it. PBR can hence improve the results of a screen-space reflection shader – if a smooth marble floor has a PBR-based texture, it will appear essentially opaque when you look straight down at it, but will reflect scene objects when you view it at an angle.
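
This angle-dependent reflectivity is known as the Fresnel effect, and PBR shaders commonly approximate it with Schlick’s formula. A quick Python sketch – using an illustrative base reflectivity for marble – shows why the floor turns mirror-like at grazing angles:

```python
def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance. `f0` is the base
    reflectivity when viewing the surface head-on (roughly 0.02-0.05 for
    non-metals); reflectivity climbs towards 1.0 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# A polished floor (illustrative f0 = 0.04) viewed straight down,
# then at a grazing angle – the same surface becomes far more mirror-like.
print(fresnel_schlick(1.0, 0.04))  # ~0.04: barely reflective
print(fresnel_schlick(0.1, 0.04))  # ~0.61: strongly reflective
```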

In fact, texture artists can make use of detailed reference tables when they adopt the PBR workflow for the sake of physical accuracy – such charts provide base values for texturing various metals and non-metals. PBR texturing can make life easier for artists: earlier, they would put a good deal of effort into making a golden crown look appropriately golden, and then add surface details like dirt, dull cavities and scratches – now, they can focus on such realistic details while letting the renderer take care of making the object look ‘golden’. 

Remember Me, a 2013 title published by Capcom, is credited as the first game to use physically-based rendering. Many studios soon transitioned to PBR, and held in-depth workshops to help texture artists adopt the new texturing pipeline.

Remember Me Was One of the First Games to Use Physically-Based Rendering (Courtesy Capcom)

The Advent of High-Dynamic Range Game Content

Both graphics shaders and physically-based rendering can work in synergy to enhance a scene by improving its lighting, shadows and reflections. The advent of high-dynamic range (HDR) TV in the mid-2010s utterly transformed this process by allowing games to output scenes with a greater dynamic luminance range, a wider colour range (known as the gamut), and more colour tones within this gamut (using higher bit depth). 

Skies and the sun could look tremendously bright. Shadows in dungeons could look very dark, and in horror games, these areas were that much scarier, because you could just make out a lurking shape in the shadow. Roses in a bouquet could each have a subtly different shade of red. The gradient of colours in the horizon during sunsets could look much smoother. 

Standard dynamic range (SDR) displays have a maximum luminance of around 100 nits, while high-end HDR displays can get as bright as 4,000 nits. This means that both the brightest and darkest parts of a render are displayed without losing detail – on an SDR display, these parts would fall above or below the screen’s luminance range and look either bright white or pitch black. SDR displays can only show 8-bit colour information, or 256 levels of luminance per colour channel. HDR displays have a bit depth of 10 bits per channel, resulting in 1,024 shades between the brightest white and the darkest black, and can support over a billion colour tones.

This is why such displays (when showing HDR content) can make colours really pop, enhance the overall contrast of the scene, and smooth the gradient between light and dark colours during a sunset, eliminating banding artefacts with a wider range of colour tones. 
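
The arithmetic behind these figures is easy to verify – a few lines of Python show the jump from 8-bit to 10-bit colour:

```python
# Levels per colour channel at a given bit depth, and the total number
# of displayable tones across the three (R, G, B) channels.
for bits in (8, 10):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels per channel, {levels ** 3:,} tones")
# 8-bit: 256 levels per channel, 16,777,216 tones
# 10-bit: 1024 levels per channel, 1,073,741,824 tones
```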

Ironically, game engines had become capable of high-dynamic-range rendering (HDRR) by the early 2000s, but there were no displays capable of showing such renders. Half-Life 2: Lost Coast (2005) was one of the first games to use HDRR, and many other games performed render calculations in high dynamic range, but then squeezed the result into standard dynamic range using a process called tone-mapping. Just as a normal map is used to capture the geometric details of a high-polygon model, tone-mapping is used to map an HDR-rendered frame onto a lower dynamic range. The result is better than what would have been generated without HDRR, but it is not true HDR output.
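
The classic Reinhard operator is one of the simplest tone-mapping curves, and a short Python sketch shows the basic idea – unbounded HDR luminance is compressed into the 0–1 range that an SDR display can reproduce; production engines use far more elaborate curves, but the principle is the same:

```python
def reinhard(luminance):
    """Classic Reinhard tone-mapping curve: compresses unbounded HDR
    luminance into the 0-1 range an SDR display can actually show."""
    return luminance / (1.0 + luminance)

# HDR scene values: a deep shadow, a mid-tone and a sun-bright highlight.
for lum in (0.05, 1.0, 20.0):
    print(f"HDR {lum:>5} -> SDR {reinhard(lum):.3f}")
```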

High Dynamic Range Rendering in Half-Life 2: Lost Coast (Courtesy Valve)

Horizon Zero Dawn (2017), Shadow of the Tomb Raider (2018) and Middle Earth: Shadow of War (2017) are among several games released soon after HDR displays became widespread, and such games support true HDR output, drastically improving image quality. Tone-mapping remains part of the workflow when creating HDR content, but the far wider luminance and colour range of an HDR display results in content whose dynamic range is not clamped, making for highly realistic lighting and colour, and better overall image quality. 

Horizon Zero Dawn was One of the First Games to Output HDR Content for Supported Displays (Courtesy Sony)

Real-Time Ray Tracing

In May 2020, the BBC published an article on real-time ray-tracing titled ‘Get ready for the ‘holy grail’ of computer graphics’, and there is probably no better indication of the importance and primacy of this revolutionary technique in present-day game graphics.

Ray-tracing had long been a part of CGI (computer-generated imagery) pipelines in film and television, but it was implemented via offline rendering and was prohibitively expensive – Toy Story 3 (2010) took an average of seven hours per frame, and Monsters University (2013) is said to have taken 29 hours per frame.

Toy Story 3, and Many Other Films, Use Ray-Traced Rendering (Courtesy Pixar)

As ray-tracing actually models the interaction of light with in-game objects, it works best with physically-based textures, which provide accurate data to the ray-tracing algorithm. In fact, the first major book-length publication on PBR refers to ray-tracing, and contextualises PBR as a new method to improve such ray-traced scenes with physically-accurate materials – ray-tracing in films and TV predates PBR. 

The first challenge in implementing ray-tracing in games is that it has to run in real time, not offline – Nvidia’s first range of RTX cards, released in 2018, managed this feat. Real-time ray-tracing dynamically enhances shadows, lights and reflections, and it works thus: the GPU shoots rays from the camera and then calculates how these rays bounce off in-game objects, scene lights and other scene elements (like water bodies) to determine how the scene should look.

A ray that bounces off an object and hits a scene light determines how that object is lit and where its shadow falls – if the object is close to another, contact shadows are drawn on both. Objects that deflect rays onto glass-like surfaces will be reflected by those surfaces. Rays that travel from a light source to coloured objects take on the objects’ hue and bathe nearby geometry in coloured light. Since the camera and the player character move constantly in games, such calculations have to be performed countless times. Ray-tracing is even capable of recursive reflections, like infinite mirrors, though such reflections may not be feasible for complex scenes. The sketch below shows the core visibility test at the heart of this process.
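
Here is a minimal, hypothetical Python ray tracer – really just the ray–sphere intersection test fired once per pixel; production ray tracers add bounces, materials, denoising and dedicated hardware on top of this primitive:

```python
import math

def ray_sphere(origin, direction, centre, radius):
    """Distance to the nearest hit of a ray with a sphere, or None.
    This is the visibility test a ray tracer runs millions of times
    per frame. `direction` is assumed to be normalised (so a == 1)."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Shoot one primary ray per pixel from a camera at the origin; pixels
# whose rays hit the sphere would then spawn shadow and reflection rays.
width, height = 4, 4
for y in range(height):
    for x in range(width):
        # Map the pixel onto a simple image plane at z = 1 and normalise.
        dx, dy = (x + 0.5) / width - 0.5, (y + 0.5) / height - 0.5
        length = math.sqrt(dx * dx + dy * dy + 1.0)
        ray = (dx / length, dy / length, 1.0 / length)
        hit = ray_sphere((0, 0, 0), ray, centre=(0, 0, 5), radius=1.0)
        print("#" if hit else ".", end="")
    print()
```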

Ray-Tracing Renders Ultra Realistic Reflections and Lighting in Spider-Man: Remastered (Courtesy Sony)

Ray-tracing greatly improves upon previous lighting solutions, like screen-space reflections, and a simple example can illustrate why. Imagine a third-person perspective scene in which the player character is facing a reflective glass shop front, near which are two barrels. Since the SSR shader can see the barrels (in the 2D render), it will paint their reflection on the glass. But the shader cannot see the front side of the player character, and thus cannot draw the appropriate reflection. Ray-tracing creates an accurate reflection of the whole scene by accounting for rays that hit the glass and then hit the player character (and vice versa), and also adds other off-screen objects to the reflection based on their position in the game world.

Cyberpunk 2077’s Ray-Tracing Patch Transforms the Lighting of Many In-Game Scenes (Courtesy CD Projekt Red)

Nevertheless, real-time ray-tracing has high performance costs, and only one game – the indie title Stay in the Light (2020) – currently applies it across the board. Other games use it in specific contexts and combine existing methods with ray-tracing to enhance a scene’s graphical fidelity. Metro Exodus (2019) was hailed as ‘the first AAA ray-tracing game,’ and Control, released a few months later, achieved widespread acclaim for its implementation of ray-tracing.

Stay in the Light is Completely Based on Ray-Traced Rendering (Courtesy Sunside Games)

It is no coincidence that Nvidia, AMD and Intel all came out with upscaling algorithms soon after the advent of real-time ray-tracing. Even the beefiest graphics card will slow to a crawl if it tries to render a game in native 4K with high-quality ray-tracing settings, and that’s where upscaling comes in – the GPU renders the image at a significantly lower resolution, which is then scaled up to (almost lossless) 4K. While Nvidia’s Deep Learning Super Sampling (DLSS) and Intel’s Xe Super Sampling (XeSS) use machine learning, AMD’s FidelityFX Super Resolution (FSR) does not, though it provides comparable results. Support for DLSS is available for both Control and Metro Exodus: Enhanced Edition, and upscaling algorithms also benefit games that lack ray-tracing, giving them a significant performance boost – God of War on PC supports both DLSS and FSR.
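
At its crudest, upscaling can be illustrated with a naive nearest-neighbour filter in Python – DLSS, FSR and XeSS are vastly more sophisticated, using motion vectors, previous frames and (in the case of DLSS and XeSS) neural networks – but the goal is the same: render few pixels, display many:

```python
def upscale_nearest(image, factor):
    """Naive nearest-neighbour upscale of an image (a list of pixel rows):
    every rendered pixel is simply repeated `factor` times in both axes."""
    upscaled = []
    for row in image:
        wide_row = [pixel for pixel in row for _ in range(factor)]
        upscaled.extend([list(wide_row) for _ in range(factor)])
    return upscaled

# A 2x2 'render' becomes a 4x4 'display' image.
frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 0)]]
for row in upscale_nearest(frame, 2):
    print(row)
```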

Conclusion

3D game graphics have evolved over the course of two decades to create stunning visuals in present-day games. Developers have strived for realism and the best implementations of 3D techniques work synergistically to create near-photorealistic, or even hyper-realistic renders. HDR, which enhances colour, contrast and image quality, works best when PBR textures and ray-tracing accurately model how a scene interacts with light. Screen-space shaders are deployed alongside ray-tracing for performance gains, and normal mapping – by now a very old technique – is still critical in optimising scene geometry without losing detail. Some of these techniques represent paradigm shifts – PBR totally replaced traditional texturing workflows and real-time ray-tracing may well replace screen-space effects completely as graphics cards add muscle to their ray-tracing capabilities. 

So, have we reached photorealism in gaming yet? Not quite – even the latest games are near-photorealistic, but still not indistinguishable from live-action video. Real-time ray-tracing is itself a very clever approximation of real life, or ‘ground truth’, as technologists like to call scenes observable to the human eye. 

However, contemporary CGI, such as the Jibaro episode from Netflix’s Love, Death and Robots, shows that modern offline renders can pass for real life – in fact, the best CGI in live-action content is often in places where you don’t even expect it. This suggests that as computing capability increases and advanced ray-tracing methods such as path-tracing become feasible, we may edge closer to ground truth and true photorealism, even in gaming, where a life-like, interactive experience needs to be rendered at 60 frames per second. 

Gameopedia offers custom solutions depending on your specific data requirements. Reach out to us for actionable insights about video game graphics and technology.

Remakes, Remasters and Next-Gen Upgrades (How Revived Games Thrive)

When Bluepoint Games was remaking Shadow of the Colossus (2005) for the PS4, they noticed a pattern about the birds in the central temple complex of the game. In Shadow, the protagonist Wander sets out from this temple to kill various giants, and is returned to the temple once his mission is complete. Bluepoint developers noticed that if Wander was heading out for his fourth giant, or Colossus, then four birds would be perched in the temple; if he was heading out for his 10th, then ten birds would appear. 

This detail had escaped Bluepoint’s notice when they were remastering the game for the PS3, and in any case, since a remaster mostly uses the original code, the pattern would have been reproduced automatically. But in a remake, which requires developers to recreate a game, pretty much from scratch, all these little touches have to be remade as well, and Bluepoint took great pains to ensure that the bird-and-colossus pattern, and various other details, made it to the remake. 

The result was a ‘precedent-setting’ game that not only introduced Shadow to a whole new generation of gamers, but also preserved and recaptured the experience of playing the ground-breaking original. Bluepoint’s remake has received widespread critical acclaim, with some claiming that the new Shadow is one of the best remakes of all time. The original Shadow of the Colossus is often cited as an example of how video games can be art, and Bluepoint’s painstaking reconstruction does justice to the game, its fans and its legacy. 

In this blog, we will discuss remakes, remasters and next-gen upgrades, all of which give old games a new lease of life on upgraded consoles and modern PC hardware. We will see how prominent remakes faithfully retain the unique features of the originals, how remasters greatly enhance the graphics of an older game for new hardware, and how the next-gen upgrade endows a game with improved graphical fidelity and performance on a new generation of consoles. 

Remakes and remasters are compelling business propositions today, especially because the gamers who played the original versions of classic games are older now and have more disposable income. Remakes and remasters are big money makers – digital revenue for prominent remakes nearly doubled between 2018 and 2020, and remake earnings surged in 2020 amidst widespread pandemic lockdowns. Moreover, remakes and remasters allow younger gamers to experience ground-breaking classics with all the graphical fidelity and streamlined gameplay of modern hardware.

Older Gamers Who Have Played Classic Games Create a Market for Remasters and Remakes

In fact, a remaster – The Legend of Zelda: Skyward Sword HD (2021) – was more anticipated than many new titles, and it was the best-selling game in the US at launch in July 2021. Remakes of Resident Evil 2 (2019) and 3 (2020) had bigger launches than Resident Evil 7 (2017), a wholly new entry in the franchise. One observer calls the new trend of remasters and remakes a ‘nostalgia gold rush’, underscoring how a longing for the past plays a crucial role in driving the success of a remake or a remaster. 

In the following sections, we will discuss what remakes, remasters and next-gen upgrades are, and why they are made. 

How are Games Revived for New Generations?

Remakes, remasters and next-gen upgrades are all endeavours that revive an older game for new hardware and modern consoles. But what, exactly, do these terms mean, and why do studios and developers undertake such projects? We discuss both in the sections below.

What is a Video Game Remake?

A video game remake is a ground-up recreation of a classic game. It includes high-quality models, textures, animations and sounds, and is powered by a modern game engine that brings state-of-the-art lighting, reflections, shadows and other effects. 

Examples include Capcom’s Resident Evil 2 remake, released nearly 21 years after the original, and the Final Fantasy VII remake (2020). Both games were originally released on Sony’s first PlayStation console.

Final Fantasy VII Remake (Courtesy Square Enix)

What is a Video Game Remaster?

A video game remaster is essentially a much better-looking version of an older game. Taking advantage of modern hardware, a remaster adds a whole range of visual effects that were either unavailable to the original or hard to implement without performance costs, and also upgrades the game’s textures, models and animations. In general, remasters use much of the same code as the original, but can update it so that the game runs at higher resolutions and frame rates on new hardware. Many remasters are bundled into a single collection, and a remaster of a single game can include all of its DLC in one edition. 

There are numerous examples of remasters across video game generations, including the Shadow of the Colossus remaster (2011) for the PS3, The Last of Us Remastered (2014) for the PS4, and the Master Chief Collection (MCC) for PC and the Xbox One consoles. The MCC continues to receive updates long after its initial release in 2014.

Halo: The Master Chief Collection (Courtesy Microsoft)

What is a Next-Gen Upgrade?

A next-gen patch updates a game to match the quality of titles released for the latest hardware. Such patches are usually meant for recent games – older games would need a remaster. Even a bare-bones next-gen upgrade will usually boost frame rates and performance and enable higher resolutions. Some developers also provide high-res texture packs and greatly upgrade game graphics with features such as ray-tracing, support for upscaling algorithms and HDR rendering. Many games receive such upgrades for current-gen consoles. 

Developers may also update remasters or remakes with next-gen features: examples include Resident Evil 2, a remake which received a ray-tracing patch on console and PC, and Crysis Remastered, which was updated to support Nvidia’s Deep Learning Super Sampling (DLSS), an AI-based upscaling technique.

Why are Video Games Remastered and Remade?

Remasters and remakes are not made purely for financial reasons, though such considerations may play a significant role in determining what game is to be remastered or remade. Good remakes and remasters evoke nostalgia, build excitement for new releases in the franchise and help developers improve their skills and industry cred.

Furthering a Game’s Legacy and Evoking Nostalgia

One of the key reasons for remastering or remaking a game is to evoke nostalgia among the many fans it garnered when the original was released. Remasters and remakes allow fans to revisit cherished virtual spaces while enjoying all the convenience and graphical fidelity of modern hardware. Such updated games can also attract entirely new audiences looking to discover why these titles became classics. 

Offsetting the Risks of AAA Development

According to an NPD analyst, publishers can pursue the remastering trend to make money through lower-risk ventures. Remasters may sell less than a new game, but cost much less to make, and publishers can also decide which games to remaster, knowing where the demand exists. In fact, Nintendo’s Super Mario 3D All-Stars, a compilation of older Super Mario games, was sold with a sixty-dollar price tag and became the second best-selling Switch title of 2020, despite the fact that the collection was a time-limited release. An avid fan base looking to relive their beloved franchise had created a natural and profitable market for the Switch release, and saved Nintendo the millions of dollars involved in making a new game from scratch. 

Super Mario 3D All-Stars (Courtesy Nintendo)

Creating Excitement for New Releases

Microsoft successfully built hype around Halo 5: Guardians (2015) by releasing the Master Chief Collection for Xbox One right before it. The MCC allowed many new gamers to experience the franchise’s history before they dived into Halo 5. Despite a troubled launch, the Master Chief Collection is now a well-regarded remaster and arguably the best way to experience the early adventures of John 117.

The Crash Bandicoot N. Sane Trilogy (2017), a collection of remastered Crash Bandicoot games, was a resounding success, and Crash Team Racing Nitro-Fueled (2019), a remake of the PS1 kart racer, followed soon after. Crash Bandicoot had long been a mascot for Sony, but the franchise had stagnated until the remaster and remake revived it. In 2020, Crash Bandicoot 4: It’s About Time marked the first new release in the franchise in 12 years and proved a commercial and critical success.

Crash Bandicoot N. Sane Trilogy (Courtesy Activision)

Building a Developer’s Reputation and Capabilities

When Grim Fandango (1998) was remastered and released for multiple platforms in 2015 by Tim Schafer’s Double Fine Productions, it was praised by fans and critics and sold far more units than the original. Double Fine had already demonstrated the loyalty of its fan base when its Kickstarter project ‘Double Fine Adventure’ broke records in 2012, raising one million dollars within 24 hours, and the Grim Fandango remaster only burnished the studio’s reputation further. 

Grim Fandango Remastered (Courtesy Double Fine Productions)

Bluepoint Games has a splendid reputation thanks to its critically acclaimed remasters of games in the God of War and Uncharted franchises, and its marvellous remakes of Shadow of the Colossus and Demon’s Souls (2020). Its remake of FromSoftware’s first ‘Souls-like’ game was a launch title for the PS5, and Bluepoint has since been purchased by Sony. It now has the chance to make a first-party PlayStation game, and all its experience remaking and remastering Sony hits will no doubt help.

In the following sections, we will discuss just what it takes to remake, remaster or upgrade a game – each endeavour has its own challenges, and we delve into them below. 

The Video Game Remake – a Labour of Love

Remaking a video game from scratch is a major undertaking, given that the game being remade was released generations ago. The resulting remake must nevertheless capture the feel of the original faithfully, while updating the content to modern gameplay and graphics standards. As such, developers must keep one eye on the past and one on the future, striving to recreate every little detail of the original, and even keeping gameplay elements and mechanics intact while updating them to match modern controller setups. 

Prominent titles include Bluepoint’s Shadow of the Colossus (SOTC) remake for the PS4, released in 2018, Capcom’s remakes of Resident Evil 2 and 3, and Square Enix’s Final Fantasy VII Remake. SOTC’s remake arrived 13 years after the original, the RE 2 remake nearly two decades after its original, and the FF7 remake 23 years after its original – these games are so old that they necessitated full remakes. 

This is why remakes are recreated on modern game engines, and not much of the original code makes it to the remake. Many of the environments, models and textures have to be made from scratch. Combat elements may need to be overhauled, as is the case with the Final Fantasy VII remake – the original had turn-based combat while the remake features a revamped real-time combat system and allows you to switch between the main player character and his companions to execute special moves. The end result recreates the flow of battle in the original, while introducing innovative gameplay mechanics.

FF VII Remake’s Combat System is Used to Control Multiple Characters (Courtesy Square Enix)

Bluepoint Games has justly earned the moniker ‘masters of the remaster’ because of their work on various critically acclaimed remasters and remakes, and their Shadow of the Colossus has been hailed as one of the best remakes ever – a closer look can tell us just what it takes to create a great remake of a beloved game. 

Shadow of the Colossus is a minimalist classic made by game creator Fumito Ueda for the PS2. In the game, the hero Wander explores desolate landscapes in search of mighty ‘Colossi’, whom he must kill in order to bring his lover back from death. The game’s areas offer no treasures, there are no low-level enemies to combat, no NPCs and no cities, towns or villages. Wander has only a sword and a bow with arrows to kill each Colossus, and rides a horse that doesn’t always respect his commands. The game is essentially sixteen boss battles, each of which is unique and takes place in its own desolate world, made all the more memorable because Ueda shuns so many traditional gameplay tropes. SOTC was an instant classic and A New Yorker article discusses the game’s status as a work of art. Remaking a game with such a formidable reputation was, well, a colossal challenge. 

For its PS4 remake, Bluepoint started with the updated code base from their PS3 remaster of the game, and used Ueda’s later title, The Last Guardian (2016) as a guideline for how the visuals should be updated. To recreate the forested and grassy areas of the original, they devised a foliage system, in which grass and plants not only sway in the wind, but also bend and flatten as Wander runs through them. 

They painted details such as erosion, cracks and other damage onto the terrain and mountains to make them realistic, and retained the unique architecture and look of each structure – and of the Colossi themselves – while improving them with higher-quality textures and models. Animations were also revamped and look much more believable, especially in the battles with the Colossi. Even the fur on the Colossi sways and bends as Wander climbs up the giants to kill them. 

Crucially, Bluepoint used physically-based rendering (PBR), a texturing and rendering pipeline that accurately models the interaction of light with in-game objects and is especially effective in rendering reflective or glossy metallic surfaces realistically. In a game that uses PBR, a gold crown and an iron sword won’t shine the same way – each metal’s sheen is based on its real-life characteristics – and even glossy or dull leather will look different based on how it interacts with light in real life. Bluepoint used a blend of PBR and traditional techniques to maintain a balance between modern photorealism and the stylised look of the original. The developers also strove to stay true to the lighting setup of the original, even as they introduced high-dynamic range (HDR) rendering with their Bluepoint engine. With HDR, the game has much brighter highlights (leading to glorious skies), much deeper blacks and a far greater range of colours.

The Shadow of the Colossus Remake’s Visuals Utterly Transcend the Original (Courtesy Sony)

Bluepoint also brought performance improvements to the game. They broke the world up into manageable portions, rendering distant areas at lower levels of detail, and made countless optimisations to game assets so that they could offer a 60-frames-per-second (FPS) performance mode and a locked 30-FPS quality mode at 4K HDR on the PS4 Pro. They also fixed the awkward controls and camera movement of the original, so that the game plays well whichever mode you choose, while keeping the original control mapping as an option for older fans. With its devoted attention to detail and its commitment to reviving SOTC for a new generation, this remake does count as one of the best ever, and sets a standard for other developers to reach for. 

Capcom also did an excellent job with the Resident Evil 2 remake, though the title is significantly different from the original, which was not a true 3D game – it rendered 3D characters over pre-rendered backgrounds – and was released for Sony’s first PlayStation. Nevertheless, Capcom faithfully retained the original’s atmosphere – ‘jump-scare’ locations are recreated faithfully in the remake, and some of the in-game objects are placed exactly where they were in the original. However, some character roles are expanded and certain areas are reworked from scratch. 2019’s Resident Evil 2 is not a shot-for-shot recreation, but a reimagined version that remains true to the spirit of the original. 

The Resident Evil 2 Remake Captures the Spirit of the Original (Courtesy Capcom)

Capcom nevertheless fumbled the Resident Evil 3 remake, cutting out content and iconic locations, and scripting the behaviour of the enemy Nemesis, taking away the element of surprise that made him so terrifying in the original. 

A remake works if it remains faithful to the original while reimagining it at the same time. Not even the best graphical upgrades can assure success if the game veers too far away from the original. Introducing new elements while staying true to the source is a tough balancing act, and remakes of games such as Resident Evil 2, Shadow of the Colossus and Final Fantasy VII pull this off, while others fail to live up to expectations. Considering that the Final Fantasy VII remake cost $140 million – more than what it takes to make and market a major movie – developers must tread carefully when remaking old games for contemporary audiences. However, despite the manifest challenges involved, studios continue to pursue the remake trend – upcoming remakes include Resident Evil 4 (2005), Dead Space (2008), Silent Hill 2 (2001), Tom Clancy’s Splinter Cell (2002), and Prince of Persia: The Sands of Time (2003).

The Video Game Remaster – Old Games Get a New Look

A video game remaster is a lesser undertaking than a remake – it uses largely the same game code, but greatly enhances the visuals by adding various graphical effects, and increases resolution and performance by taking advantage of modern hardware. Many remasters are bundled into a single collection, giving users good value for their money. 

Remastering is an ideal choice for famous games released a generation or two ago. Generally, the time frame between a remaster and its original is shorter than that between a remake and its original. Bioshock: The Collection (2016), a compilation of remastered Bioshock games, was released about three years after the last Bioshock title, Bioshock Infinite, and Borderlands: The Handsome Collection (2015), which contains Borderlands 2 and Borderlands: The Pre-Sequel, was released a year after the latter title.

Sony has used remasters to fill out its catalogue since the PS3 generation. A Wikipedia page lists nearly 60 remasters for the PS3, many of which are bundles containing multiple remastered games. These games look much better than their PS2 or PS1 counterparts because the PS3 was one of the first HD consoles – games remastered from SD (standard definition) look much sharper, with higher-quality texture detail and better character models. There are nearly 60 remaster titles (some of which bundle multiple games) for the PS4 as well, but many of these remasters are available on other consoles and PC too. Sony’s remasters are either branded as ‘Classics HD’ or come with labels like ‘Remastered in High Definition’. 

Remastered games come with a slew of visual upgrades. Assassin’s Creed III Remastered (2019) includes volumetric lighting (or ‘god rays’), screen-space reflections (by which water bodies and other glass-like surfaces reflect nearby objects in the scene), improved shadow detail, realistic lighting from in-game light sources and increased view distance. The remaster also uses physically-based rendering, along with upgraded textures and remade character models, to make the game look more photorealistic, and supports 4K HDR rendering on PC, PS4 Pro, Xbox One X, and the current console generation.

Assassin’s Creed III’s Remaster Comes with a Host of Visual Upgrades (Courtesy Ubisoft)

One of the better remaster editions available today is Halo: The Master Chief Collection, both in terms of the visual upgrades it brings to its games and the sheer number of games included – six remastered games, with each title’s multiplayer receiving regular updates. When released in 2014 for the Xbox One, the MCC won IGN’s People’s Choice Award as the best remaster of the year, and the collection has only gotten better since. The MCC first came to PC in 2019 (initially with only Halo: Reach), but within a year, all the games in the collection had been ported. 

The visual upgrades are of such quality that one could consider some of these remasters to be remakes – Digital Foundry’s YouTube review of Halo 2: Anniversary, part of the MCC, straight up calls the game a remake. Halo 2: Anniversary has gorgeous pre-rendered cinematics that replace the original’s engine-based cutscenes, and uses real-time lighting and shadows, along with global illumination, to add realistic lights and shadows to both in-game objects and particle effects (such as explosions). Not all games in the collection were as comprehensively remastered as Halo 2, but every game comes with performance enhancements and supports increased resolutions. The collection is not without its flaws, however, and the multiplayer experience can be buggy, especially in the PC versions of Halo 2 and Halo 3.

Halo 2’s Remaster Overhauls Much of the Original’s Graphics (Courtesy Microsoft)

Remasters generally improve upon the original and become the ideal way to experience an older game, unless the game has been remade – Fumito Ueda had endorsed the PS3 remaster of Shadow of the Colossus as the definitive edition of the game before the remake was released. But even prominent companies like Blizzard and Rockstar Games can botch remasters so thoroughly that they become the target of relentless backlash from gamers. 

Grand Theft Auto: The Trilogy – Definitive Edition (2021), which bundles ground-breaking classics such as GTA III (2001), GTA: Vice City (2002) and GTA: San Andreas (2004), is an example of what happens when a publisher remasters games without taking care to respect player expectations. The release was buggy, the graphics were lacklustre, character models still looked flat and unrealistic, and the Guardian’s critic called the remaster an ‘infuriating disappointment’. Gamesindustry.biz published an opinion piece arguing that companies shouldn’t release remasters just to make a fast buck but must cherish the creative history of the games they upgrade, and excoriated Rockstar for removing the original versions of these games from digital download stores when it released the remastered collection. The debacle forced Rockstar to apologise, even though it had not developed the remaster in-house – Grove Street Games, the studio behind the remaster, is fixing its various issues.

Rockstar Studios Received Backlash for Botching the Remaster of Seminal GTA Games (Courtesy Rockstar)

Another infamous example is Warcraft III: Reforged (2020), a remaster of the original Warcraft III: Reign of Chaos (2002). It received an overwhelmingly negative reception and even led to the creation of a website that successfully petitioned for refunds for all those who bought the remaster, by listing the upgrades Blizzard had promised but not delivered.

Warcraft III: Reforged – a Remaster – Failed to Deliver on its Promises (Courtesy Blizzard)

Remasters will likely succeed if developers stick to what is now a well-worn path – offering good value for money by improving the visuals and performance of the game, and bundling either its DLC, or multiple games, into one easily accessible package. 

As discussed above, remasters are inherently less risky than new games or costly remakes, and their target audience can be clearly identified. This is perhaps the reason why a badly-made remaster draws such backlash – it fails to live up to quite modest expectations.

The Next-Gen Upgrade – A Boost for Recent Games

Both remasters and remakes involve a good deal of effort, time and money, and must satisfy gamers familiar with the original and others looking to discover a classic. 

A next-gen upgrade doesn’t have to completely overhaul a game; it typically lifts a game’s performance and visuals within well-defined limits – it can unlock a 60-FPS mode in a title that was locked at 30 FPS on an earlier console generation, and can use current-gen hardware to display a game at native or upscaled 4K resolutions.

Some developers go the extra mile and provide higher-res textures, and offer support for ray-tracing and upscaling algorithms, and utterly transform their game’s look as a result – ray-tracing is an advanced graphics technology that results in near-perfect lighting, shadows and reflections, but it can also severely affect performance. It thus goes hand-in-hand with upscaling algorithms, which allow the game engine to render ray-traced scenes at much lower resolutions before upscaling the result. 
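
To make the trade-off concrete, here is a minimal C sketch of the arithmetic behind upscaling – the 50% axis scale is an illustrative figure, not any particular vendor’s preset:

    #include <stdio.h>

    /* Illustrative upscaling arithmetic; the 50% axis scale is a
       hypothetical preset, not any vendor's published figure. */
    int main(void) {
        const int out_w = 3840, out_h = 2160;  /* 4K output */
        const double axis_scale = 0.5;         /* render at half resolution per axis */

        int in_w = (int)(out_w * axis_scale);
        int in_h = (int)(out_h * axis_scale);
        double pixel_ratio = (double)(in_w * in_h) / ((double)out_w * out_h);

        /* Ray-traced effects are computed on this fraction of the pixels */
        printf("internal %dx%d -> %.0f%% of output pixels\n",
               in_w, in_h, pixel_ratio * 100.0);
        return 0;
    }

Halving the resolution per axis means ray-traced effects are computed on only a quarter of the output pixels, which is roughly why the pairing of ray-tracing and upscaling is viable at all.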

Capcom’s Resident Evil 2, Resident Evil 3 and Resident Evil 7: Biohazard all have free next-gen upgrades that feature ray-tracing. id Software’s Doom Eternal (2020) got a ray-tracing patch for both PC and current-gen consoles, as did Hellblade: Senua’s Sacrifice (2017) – both games are also available on the Xbox and PC Game Pass.

Doom Eternal is Utterly Transformed by Ray-Traced Reflections (Courtesy Bethesda)

Next-gen upgrades make sense for recent games. By default, such games use modern rendering paradigms, such as physically-based rendering and HDR, and are built from scratch with the sort of graphics features that are introduced to an older game when it is remastered or remade. A cross-gen title, which is released late in the life-cycle of an older console generation and ‘straddles’ the boundary between older and current-gen consoles, is also an obvious candidate for a next-gen patch. 

Recent games and cross-gen titles may have run at lower resolutions and frame rates due to the limitations of console hardware when they were released, and the next-gen patch is meant to fix this and make the game run better on newer consoles. Adding support for ray-tracing and upscaling algorithms is a bonus – developers may be focussing on delivering such features for true next-gen games released exclusively for the PS5 or the XBox Series X|S, rather than upgrading older games with such tech.

Given that a next-gen upgrade can be quite trivial compared to a remaster or a remake, one would expect an industry-wide standard for delivering them. This is far from the case. If you want a remake or a remaster, all you need to do is go to a shop or a digital storefront and buy it. But the upgrade path to a next-gen version of your game is absurdly convoluted today. 

Some developers participate in Xbox’s Smart Delivery program, which automatically downloads the game version best suited to your console regardless of what disc or digital edition you buy. Microsoft has promised Smart Delivery support for all first-party games, but third-party publishers aren’t obliged to participate. Ubisoft supports Smart Delivery for Assassin’s Creed: Valhalla (2020) and CD Projekt Red does the same for Cyberpunk 2077 (2020). However, EA came up with a ‘dual entitlement’ scheme where owners of Madden NFL 21 (2020) could claim a free next-gen upgrade for the Xbox Series X|S or PS5 before Madden NFL 22 was released, which makes little sense – why should one game’s next-gen upgrade be blocked after the franchise gets a new release?

Sony, meanwhile, seems to lack a coherent strategy for delivering upgrades, and has no pro-consumer initiative to match Smart Delivery. Sony promised dual entitlement for a range of first-party cross-gen games for the PS5 – meaning that buying a cross-gen PS4 game would entitle you to a free PS5 upgrade – but did not include Horizon Forbidden West (2022) in this policy. After the ensuing backlash, the PS4 version now comes with a free PS5 upgrade, but a costlier PS5 version still exists. Consumers may well buy the PS5 version without realising that buying the PS4 version would result in an identical download.

For cross-gen titles with dual entitlement, Sony simply leaves it to the user to figure out the differences between the PS4 and PS5 versions of a game and choose accordingly. An InputMag columnist calls the PS5 the ‘most confusing console on the market’, especially because Sony does not list the next-gen features of even its launch titles in the product description – you have to download these games (or watch YouTube videos of those who did) to find out the features available in quality and performance modes.

As mentioned above, Capcom released its Resident Evil next-gen upgrades for free, and CD Projekt Red did the same for Cyberpunk 2077, offering ray-tracing support on consoles with its next-gen patch. CD Projekt Red has also promised a next-gen upgrade for its seven-year-old open-world classic, The Witcher 3: Wild Hunt (2015) – the patch is expected to arrive before the end of 2022.

In fact, Microsoft has told third-party developers that they should offer Xbox Series X|S upgrades for free, and should refrain from branding such upgrades as new DLC. If the developer still decides to create a paid upgrade path, Microsoft recommends that owners of a last-gen version be offered a discount when they pay for the Xbox Series X|S version. Of course, no third-party studio or publisher is obliged to heed Microsoft’s suggestions, resulting in controversial releases such as Control Ultimate Edition.

505 Games’ policy regarding Control’s next-gen upgrade has been particularly egregious. Released in 2019, Control was one of the first games to implement advanced ray-tracing effects on the PC, and is one of the games NVIDIA uses to showcase the capabilities of its RTX cards. Console players, however, could not enjoy these features because the Xbox Series X|S and the PS5 were yet to be released. 

When the publisher did offer a next-gen console upgrade, it locked the upgrade behind a $40 ‘Ultimate Edition’ that contained no new content compared to the Digital Deluxe Edition many users had bought earlier, expecting that the extra money they had paid would cover a free next-gen upgrade. In fact, the only upgrade path to the next-gen patch on console is to buy the Ultimate Edition, even for those who already own the base game and all its DLC, and fans of the game are justifiably angry.

Despite being simpler than a remaster or a remake, a next-gen patch is a far more convoluted upgrade path and can lead to considerable confusion and frustration. While developers strive to create beautiful and faithful remakes and remasters of older classics, there appears to be no industry-wide commitment to deliver the relatively simpler enhancements of a next-gen patch. Is this because gamers now expect these upgrades for free?

Conclusion

Remakes and remasters succeed or fail based on how well they uplift a game while remaining true to the source material, especially as nostalgia is a significant factor in determining such games’ sales. In the future, however, we may not see such remakes or remasters because of the prominence of live-service games, which aim to always keep pace with the latest graphics standards as part of their intent to keep gamers engaged for years.

Also, the primacy and profitability of mobile games may make remakes and remasters less important in the future because the factors driving their creation may no longer be relevant. Shadow of the Colossus has arguably awed multiple generations of gamers, but will a mobile game like Angry Birds ever hold the same place in gamers’ hearts, especially to justify a remaster? 

Ray-tracing transforms the look of present-day games that implement it, and we may soon reach a point where there isn’t much of a difference between console generations, especially as consoles these days feature much the same architecture as PCs, while being optimised for gaming. This would imply that the ‘next-gen upgrade’ will suffice to update a game to a new console generation or a new line of PC hardware. 

Given how utterly confusing upgrade paths are today, publishers, developers and console manufacturers – especially Sony and Microsoft – may soon have to collaborate on establishing a standard by which each console automatically provides access to the most suitable version of any title that a gamer buys. Microsoft has already laid the groundwork with Smart Delivery, but such an initiative can work only if everyone agrees to implement it. Gamers will continue to be short-changed, especially when it comes to next-gen patches, until an industry-wide policy is established for such upgrades. 

Gameopedia offers custom solutions depending on your specific data requirements. Reach out to us for actionable insights on the remake-and-remaster trend in the gaming industry.

Read More

The Decline of Physical Games and The Rise of Digital Distribution

When Valve made the much-anticipated Half-Life 2 available on Steam in 2004, a deluge of users rushed to download the game or authenticate their physical copy, and Steam simply keeled over and crashed.

Valve had made Steam authentication mandatory for even physical copies of the game, so everyone who had bought the title, either through Steam or at a retailer, had to go through Valve’s client, and the company’s servers simply could not handle the load. Gamers who had expected the further adventures of Gordon Freeman ended up with pop-ups from the Steam client apologising for the lengthy delays. 

Steam has come a long way since then and is a fixture of millions of PCs now, and many major publishers, including Sony, currently release games on the platform. Steam is also home to thousands of indie games and is arguably the most prominent digital delivery service for PCs. 

The digital distribution of games – whether on PC, console or mobile – has grown prevalent thanks to growing internet speeds, higher broadband penetration, and many other factors. Consoles have long featured digital storefronts from which gamers can buy and download games, smartphone games are available via Google Play or the iOS App Store, and for the PC, digital games have practically become the norm as modern computers tend to lack optical drives. In turn, sales of physical game copies, or ‘retail’ editions, are on the decline, though they are yet to be fully supplanted by digital delivery. 

In this blog, we will delve into the history of game storage media, from cartridges to Blu-ray discs and chart the rise to prominence of digital distribution services for games. Game storage formats have played a significant role in the history of gaming, and even today’s major consoles provide support for physical copies. However, the industry is transitioning towards digital delivery as the principal mode for selling games – we will look into why this is the case.

What is a Digital Game?

A digital game refers to any title downloaded from a digital delivery service such as Steam, a console manufacturer’s online store, or smartphone app stores. Digital game data can be installed on devices like internal hard disks, solid-state drives and even removable storage. Note that such games do not have to be paid for a la carte – subscription services such as the Xbox Game Pass or PlayStation Plus allow users to download many games and play them so long as they remain subscribed.  

What is a Physical Game?

A physical game copy refers to any game whose data is stored on physical media like cartridges, floppy disks, CDs, DVDs or Blu-ray discs. Today, such physical copies may not contain all the files required to play the game, and you may still have to download either a significant portion of the game data or the latest patch from a digital delivery service.

In the past (especially during the era of cartridges), buying a game was a one-and-done deal: you inserted the game cartridge into a slot to play and even to save progress – cartridges featured a chip specially designed for game saves. Effectively, all game data – whether the game itself or your progress – was stored on the cartridge, not on a storage device inside the console.

Physical game copies (especially of console games) can be shared among friends, sold second-hand to others or traded in at a retailer. They can even allow access to a full game without necessitating an internet connection, though most contemporary games need downloadable patches and updates to work properly. People still buy physical games because of discounts at local retailers or e-commerce sites, and the advantages that physical copies offer. 

A Retail Display of Video Games at a Store in Geneva (Courtesy Wikimedia Commons)

In the next section, we will outline the evolution of physical game storage formats, from the cartridge to the Blu-ray disc. Like any form of media, gaming depended on physical storage before high-quality internet speeds made digital downloads a viable alternative. The capacity and efficiency of physical formats steadily increased and the transition from one storage format to another often marked a major inflection point for the gaming industry, as we will see below. 

A History of Physical Game Media

Physical copies of games have played an integral role in making gaming a popular and affordable pastime, especially since they were geared from the first to make home gaming viable.

The following sections will discuss the role of cartridges and floppy disks in gaming from the ’70s to the ’90s, followed by CDs and DVDs (both of which Sony used to great effect in PlayStation consoles), and then Blu-ray, which was first used by Sony for the PlayStation 3 and later integrated into Xbox consoles from the eighth console generation onwards.

Storage Size Growth

Cartridges and Floppy Disks

The advent of cartridges marked a major shift for home gaming – no longer did consumers have to buy dedicated consoles for their favourite games, but could buy a single console and play multiple games on it simply by slotting a compatible game cartridge into the system. 

Fairchild Semiconductor pioneered the design of the first console with interchangeable game cartridges – the Fairchild Channel F (1976). Although overshadowed by the Atari 2600 (1977), Fairchild was the first to create a console with a microprocessor that loaded games from programmable cartridges.

The Fairchild Channel F (Courtesy Wikimedia Commons)

Designers at Fairchild knew that their sensitive cartridge circuit boards had to be capable of withstanding considerable abuse, like being left out in the sun, or being stepped upon. They encased their technology in hard, durable plastic, but also created an easy-to-use slotting mechanism, which would enable both the cartridges and the console to withstand multiple insertions and ejections. 

Cartridge design evolved considerably after Fairchild’s pioneering efforts. Successive cartridges loaded graphics and other game data ever faster, and new iterations were designed to use less system memory. By the time Nintendo released the N64 (1996), cartridges could store up to 64 MB of data – the first cartridges made by Fairchild and Atari held only a few kilobytes. From the late ’70s to the early ’90s, the cartridge was the default storage medium for console games, especially because cartridges were hard to reverse engineer.

Game Cartridges for the Atari 2600 (Courtesy Flickr)

During roughly the same period, floppy disks – especially the 3.5-inch version introduced by Sony in 1980 – were used to store and exchange games and other programs for the PC. The 3.5-inch disk was designed to be durable and resilient, with a hard plastic casing and a sliding metal shutter that protected the magnetic disk inside. Floppies were an ‘almost viral’ way of transmitting shareware – especially portions of games like Doom (1993) and the Commander Keen titles.

Doom was also one of the most prominent games to be released on floppy disks, as were many other games of that period, such as Prince of Persia (1989). Such games could be shared even with friends who didn’t have the same type of PC, since the floppy disk drive had long since become an industry standard across personal computers.

The Floppy Disk Set for the Full Release of Doom (Courtesy Internet Archive)

Both the floppy disk and the game cartridge would eventually be supplanted by the CD-ROM (Compact Disc Read Only Memory). We discuss the advent and rise of optical media in the following sections.

Optical Media

The use of optical media greatly expanded the possibilities of gaming by allowing games to be much larger. In a cartridge, game data is stored on a single chip on a circuit board, limiting storage capacity, and save games are stored on another chip. By contrast, almost all the surface area of an optical disc can be crammed with data in the form of microscopic indents – such discs are known as ‘optical’ because a laser is used to read them, and to write recordable versions. A Blu-ray disc’s capacity is more than 50 times that of a CD because of its much smaller, densely-packed indents.
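
As a quick back-of-the-envelope check of that claim, here is a short C snippet comparing the standard published capacities of the three disc formats:

    #include <stdio.h>

    /* Standard published capacities: CD 700 MB, single-layer DVD 4.7 GB,
       single- and dual-layer Blu-ray 25 GB and 50 GB. */
    int main(void) {
        const double cd = 700.0, dvd = 4700.0, bd1 = 25000.0, bd2 = 50000.0; /* MB */

        printf("DVD / CD:               %5.1fx\n", dvd / cd);  /*  ~6.7x */
        printf("Blu-ray (1 layer) / CD: %5.1fx\n", bd1 / cd);  /* ~35.7x */
        printf("Blu-ray (2 layer) / CD: %5.1fx\n", bd2 / cd);  /* ~71.4x */
        return 0;
    }

A dual-layer Blu-ray holds roughly 70 times as much as a CD; even a single-layer disc manages about 35 times.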

Comparison of Storage Density in Optical Media (Courtesy Wikimedia Commons)

Games such as Final Fantasy VII (1997), Halo 2 (2004), The Last of Us (2013) and many others made full use of the storage space available to them to deliver substantial content with the highest-fidelity graphics possible at the time of their release. Support for optical media also helped consoles double as home entertainment systems, and the success of many consoles, across generations, was partly due to their ability to play movies and music as well.

The CD-ROM Supersedes the Game Cartridge

The PlayStation (1994) was the first home gaming console to sell more than 100 million units, and this is at least partly due to its use of compact discs (CDs) as the storage format for its games. In 1985, Sony and Philips had together developed a technical standard by which CDs could hold any form of data, which led to the creation of CD-ROMs. With about 700 MB of storage space, they dwarfed the capacity of cartridges (which topped out at 64 MB) and allowed games to offer far more content than was possible earlier. CDs had other advantages as well – they were cheaper to manufacture and reduced production times for games, leading to lower retail prices.

Sony’s PlayStation Popularised the CD-ROM as a Game Storage Medium (Courtesy Wikimedia Commons)

According to a PC World article, Sony won the first console war it ever participated in largely because it used CDs. Due to the inherent advantages of this storage format, Sony was able to bring many third-party publishers into the fold, including Square, whose hugely successful Final Fantasy games had traditionally been Nintendo exclusives and had served as system sellers for Nintendo consoles.

Square decided that Nintendo’s cartridge-based N64 would not suffice for Final Fantasy VII. It turned to Sony, and the end result was a gorgeous game with high-quality cinematics and pre-rendered backgrounds, all made possible because of the CD’s higher storage capacity and the PlayStation’s support for 3D graphics. Final Fantasy VII was hailed as the ‘game that sold the PlayStation’ – FF VII did for Sony what previous instalments of the franchise had done for Nintendo.

Final Fantasy VII (Courtesy Sony)

The PlayStation could also play audio and video CDs, making it one of the most versatile home entertainment systems of its time. The resounding success of the PlayStation led to the marginalisation of the cartridge, and home gaming consoles would thereafter use optical storage as the primary medium for selling games.

Final Fantasy VII’s Cinematics Were a Key Selling Point for the Game (Courtesy Sony)

DVDs Turn Consoles into Home Entertainment Systems

Much like the CD-ROM, the DVD (Digital Versatile Disc) would allow games to be much larger in size and feature more content. Halo: Combat Evolved (2001) spawned a huge franchise that became a major system seller for Xbox consoles, and required the higher storage capacity of a DVD. Halo 2, considered one of the greatest games of all time, is nearly seven times as large as the first Halo game.

Halo 2 (Courtesy Microsoft)

The larger 3D games made for the PS 2 and the Xbox are recognisably modern – Gran Turismo 3 (2001) for the PS 2 features stunning graphics, as do the Halo games released for the Xbox, which used Microsoft’s DirectX technology and a graphics chip made in collaboration with Nvidia for hardware acceleration. Higher graphical fidelity depends on high-resolution textures and game assets, which in turn require greater storage space – the shift to DVD enabled more content and contributed to higher-quality graphics as well.

The PlayStation 2 (2000) and the first Xbox (2001) supported both game DVDs and CDs, and both could interface with home theatre systems, essentially serving as DVD players. The PS 2 could play movies and music out of the box, with the gamepad controlling playback, and you could also buy a remote for image adjustments and additional playback features. Unlike the PS 2, the Xbox did not come with built-in DVD playback support: it needed a DVD kit containing a remote control and an infrared sensor.

The Xbox Required a Special Kit for DVD Video Playback (Courtesy Wikimedia Commons)

Costing a mere $299 at launch, the PS 2 was actually cheaper than some of the standalone DVD players of that time. The console’s support for DVD video was welcomed at a time when the DVD was one of the best ways to experience movies at home, as it came with bonus features and extra materials like deleted scenes and interviews with cast and crew. Films like the lengthy Lord of the Rings trilogy made it to the DVD as extended editions. The PS 2 not only boasted a great library of game exclusives but also served as an affordable home entertainment system, allowing users to enjoy high-quality DVD editions of their favourite movies. 

The PS 2 is the best-selling home gaming console of all time, and one factor said to have contributed to this was its capacity to double as a DVD player – it even allowed you to play burned CDs and DVDs, i.e., pirated movies, music and games.

Blu-ray Powers Massive Games

The Blu-ray disc format allowed even larger game sizes thanks to its storage capacity of up to 50 GB. The PS 3 edition of The Last of Us (2013) – one of the finest games ever made – was around 26 GB in size, Uncharted 3: Drake’s Deception (2011) takes up 43.5 GB of storage space, and the PS 4 box-set for The Last of Us Part II contained two Blu-ray discs to accommodate more than 70 GB of game data.

The Last of Us Part II (Courtesy Sony)

Game sizes grew markedly from one console generation to the next – Call of Duty: Ghosts (2013) was 10.9 GB on the PS 3 but mushroomed to more than 30 GB on the PS 4, mainly because the next-gen version came with higher-resolution texture sets, higher-polygon game assets, and higher-definition cutscenes and cinematics.

In fact, the Blu-ray disc format’s prevalence is at least partly due to gaming and the PlayStation 3. Sony created the prototype of the Blu-ray disc in 2000, but when the format was officially released in 2006, a format war ensued between Blu-ray and HD-DVD. By 2008, however, both the games and home-entertainment industries had settled on Blu-ray – a BBC article published the same year argued that the format won because it was integrated with the PlayStation 3, which had sold more than 10 million units by early 2008 and could play Blu-ray video at a time when the first standalone Blu-ray players cost nearly double the PS3’s price. Microsoft had backed HD-DVD with an add-on drive for the Xbox 360 (2005), and though rumours occasionally surfaced about a Blu-ray add-on for the console, the Xbox 360 never supported Blu-ray playback.

Eventually, Microsoft also adopted Blu-ray, which is supported by the PS 4 (2013), the PS 5 disc edition (2020), the Xbox One (2013) and its variants, and the Xbox Series X (2020). These consoles support DVD and Blu-ray video playback as well.

It must be noted that a built-in Blu-ray player did not propel the PS 3 to the success that the PS 2 had enjoyed with its DVD playback capability. Nintendo’s Wii (2006), which used a proprietary DVD-based disc format for physical copies and offered no movie or music playback, won that generation’s console war, mainly due to its far wider appeal.

In fact, by the late 2000s, observers were predicting that physical formats would decline thanks to a new ‘download era’ ushered in by high-speed, high-bandwidth internet connections and increased broadband penetration – in effect, supporting a physical format would not confer a significant advantage. In the next sections, we will discuss the rise to prominence of digital distribution for games, and the factors that have driven this transition.

The Prominence of Digital Delivery in Gaming

Digital delivery for games has grown steadily more popular over the last decade, and the pandemic-period lockdowns accelerated the trend – in 2020, digital game sales surpassed physical sales for the first time, and 91% of the game industry’s revenue was digital (this includes full game downloads along with in-app purchases, downloadable content and mobile game sales). By 2021, the number of unique console titles sold on digital platforms in the US had far surpassed games sold as physical copies – according to an NPD report, nearly 2,200 unique console titles were released on digital storefronts in the US, compared to 226 titles sold as physical copies. A year later, Sony reported that digital purchases constituted 80% of game sales in the first quarter of 2022.

Unique Digital Console Titles Far Surpassed Physical Copies in 2021 (Courtesy Ars Technica)

All this data suggests that digital delivery has become the primary, if not the sole, method for distributing games, and analysts believe that digital games will totally dominate the industry in less than a decade. Digital delivery is already the default for smartphones, which have never offered support for physical games, and we have covered mobile gaming and its main platforms in detail elsewhere. However, digital games did not rise to primacy overnight for other platforms such as PC and consoles, which had to transition from selling large-sized titles on high-capacity optical media to digital delivery. We will discuss the emergence and growth of digital distribution in such platforms below.

The Rise of Steam

Steam is by far the most successful digital distribution platform for PC games, though it was not the first – that distinction belongs to the now-defunct Stardock Central (2001). Steam was released in 2003 as a client that could easily update certain Valve games, especially the popular multiplayer shooter Counter-Strike (2000).

A 2002 survey conducted by Valve revealed that 75% of its users had broadband connections, which convinced the company that digital delivery of games was a viable proposition. Valve’s first attempt to deliver a game digitally (Half-Life 2) was not a resounding success, as recounted above, but the company continued to improve the Steam client and its support infrastructure, and within a few years, major publishers such as id Software, Take-Two Interactive, Eidos, EA and many others decided to make their PC games available on Steam.

The Steam User Interface circa 2010 (Courtesy Valve and Reddit User 2muchrubik)

As the service grew in popularity, some developers baulked at its restrictive terms of service, with EA deciding to launch Mass Effect 3 on its own Origin platform in 2012. Nevertheless, Steam remains the pre-eminent digital distribution platform for PC games – by 2021, it had 132 million monthly active users who could browse a library of over 50,000 unique titles, and revenue from games sold on Steam reached $6.6 billion in the same year.

Other digital delivery platforms include GOG.com (2008) – which enforces a strict no-DRM (digital rights management) policy – and the Epic Games Store (2018), which is attempting to challenge Steam’s primacy. To incentivise publishers, Epic takes a 12% cut of game sales, whereas Steam takes 30% initially, with its share decreasing to 20% once a game’s revenue exceeds $50 million. Epic also does not mandate DRM, allowing each publisher to set its own policy.
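
To see what these revenue shares mean in practice, here is a simplified C sketch – it models only the two Steam tiers named above (the real schedule also has an intermediate 25% tier between $10 million and $50 million):

    #include <stdio.h>

    /* Developer take-home under the revenue shares cited above; the
       Steam model is simplified to the two tiers named in the text. */
    static double dev_take_epic(double gross) { return gross * 0.88; }

    static double dev_take_steam(double gross) {
        if (gross <= 50e6) return gross * 0.70;
        return 50e6 * 0.70 + (gross - 50e6) * 0.80;
    }

    int main(void) {
        double gross = 10e6;  /* $10 million in sales */
        printf("Epic:  $%.0f\n", dev_take_epic(gross));   /* $8800000 */
        printf("Steam: $%.0f\n", dev_take_steam(gross));  /* $7000000 */
        return 0;
    }

On $10 million in gross sales, the stated cuts leave a developer with $8.8 million on Epic versus $7 million on Steam.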

In 2020, Epic Games spent more than $400 million to secure games that would be exclusive to Epic, and unavailable on Steam, for at least a year. In December 2021, its monthly active users peaked at 62 million – less than half of Steam’s 132 million – and as of 2021, Epic offers a total of 917 games, a fraction of Steam’s catalogue. However, Steam took more than a decade to build its library and following, and the Epic Store has been around for less than five years. Epic is also known for giving away free games – worth an estimated $17.5 billion in total – and even though its store went down for eight hours when it made Grand Theft Auto V free for a week in May 2020, Epic gained seven million new users as a result of the giveaway. Steam might have a huge head-start, but Epic arguably has the momentum.

Epic Games Got 7 Million New Users After its GTA V Giveaway (Courtesy Rockstar Games)

The prominence of Steam, the rise of Epic and the death of the computer optical drive have made digital distribution the principal method of selling PC games.

Digital Stores for Consoles Evolve

Console makers, especially Microsoft, were quick to realise the potential of digital delivery – the original Xbox was soon served by Xbox Live (2002), whose principal functions were to enable multiplayer gaming and the digital delivery of game content. In its early years, the platform offered premium downloadable content and add-ons for games, but not full game downloads. In 2004, Microsoft created the Xbox Live Arcade platform, which offered small, quickly downloadable games from a range of developers – most of them console classics or arcade-style titles. The platform served as an easy entry point for independent developers, who could build games and quickly release them as download-only titles.

Paperboy, a 1985 Atari Game, on Xbox Live Arcade (Courtesy Flickr)

Sony launched the PlayStation Store in 2006, and as with the Xbox, it initially offered downloadable content rather than full game downloads. The store debuted with the PlayStation 3, and six years later it began to offer a variety of full game downloads.

The Wii Shop Channel (2006 – 2019), a digital distribution platform for the Nintendo Wii, supported the download of various applications, content, and even a web browser. WiiWare (2008), one of the services in the Wii Shop Channel, was Nintendo’s first foray into the digital delivery of full (small-sized) games, and as Microsoft did with Xbox Live Arcade, Nintendo promoted WiiWare as an avenue for small developers to publish innovative content digitally, avoiding the risks and commitments of retail distribution. The WiiWare service shut down when the Wii Shop Channel was discontinued – players can no longer purchase titles, though they can still re-download games they own to compatible devices.

By 2009, both Microsoft and Sony had started offering full game downloads – after its E3 press conference, Microsoft announced that games such as Bioshock (2007), Assassin’s Creed (2007) and The Elder Scrolls IV: Oblivion (2006) would be offered as full downloads on its store platform for the Xbox 360. Meanwhile, Sony announced that full games could be downloaded over WiFi to its handheld gaming device, the PSP Go (2009), which had 16 GB of storage and lacked a slot for physical games. By 2011, the PS 3 had also enabled full game downloads – at a time when game sizes were ballooning and broadband connections struggled with such large downloads. In 2013, Ars Technica reported that a major Steam game took about 4 hours to download, while a PS 3 game took more than 5 hours – and this was in the US, where broadband speeds have generally been high. Download times have tended to fall since, as internet speeds and broadband access have steadily improved.

Oblivion was One of the First AAA Games to be Delivered Digitally on the Xbox (Courtesy Bethesda Softworks)

The Nintendo eShop launched in 2011 for the 3DS and was made available on the Wii U at the console’s launch (2012), and keeping up with Sony and Microsoft, Nintendo’s eShop featured full game downloads, along with DLC, video content, updates and more. The first game to be released both on the eShop and as a retail copy was New Super Mario Bros. 2 (2012). 

New Super Mario Bros 2 (Courtesy Nintendo)

The majority of Nintendo’s retail releases are now available as digital downloads, and developers can also choose to publish digital-only titles on the eShop. This platform is the primary digital storefront for the Nintendo Switch, the hybrid handheld and home-gaming console that has sold more than 100 million units.

By 2013, digital game sales had started to edge past physical sales in the US, and by 2018, physical sales accounted for only 17% of all game sales in the country. In the next section, we will discuss the factors that have contributed to the rise of digital distribution. 

Physical Game Sales Have Steadily Declined Since 2013

Why Have Physical Game Sales Declined?

Many interconnected factors have led to the decline of physical game sales. Higher internet speeds and broadband penetration made video streaming services possible – they have also made large game downloads viable. The COVID pandemic made digital downloads more popular during the shutdowns, and revenue from digital downloads has surpassed physical sales partly because mobile games, which are delivered digitally, have become the biggest segment of the gaming market. We discuss these and other factors below.

  • Increasing penetration of high-bandwidth, high-speed, low-cost internet: According to Ookla reports, nearly 50 countries have median download speeds greater than 100 megabits per second (Mbps). As of September 2021, the global average broadband download speed was 113.2 Mbps, up nearly 30 megabits from 2020. 83% of US households have access to broadband connections, and a majority of those homes have connection speeds of 15 Mbps or higher. With the advent of 4K texture packs, 4K video, high-quality audio and more, some contemporary games exceed 100 GB in size, with Call of Duty: Modern Warfare (2019) at a staggering 183 GB. That game would need at least four Blu-ray discs for a retail edition, so a high-speed internet connection is a far more viable way of delivering such titles than cumbersome disc editions – a back-of-the-envelope download-time calculation follows this list.
  • Digital storefronts control game prices and cut out the resale market: Game sales are significant sources of revenue for console makers and publishers. Cutting out the middleman – retailers who stock physical copies and handle trade-ins – and drawing people to digital storefronts results in higher overall profits for console manufacturers and publishers. This has led console makers to offer more incentives for digital purchases – the Xbox Play Anywhere program, launched in 2016, lets users buy a digital Xbox game and play it on both console and PC. Both Sony and Microsoft now offer cheaper all-digital editions of their consoles: the Xbox Series S and the PlayStation 5 Digital Edition. Microsoft has even filed a patent that would allow Series S owners to authenticate a physical game copy using an external disc drive and download it via the digital storefront. The PS 5 uses an advanced compression technique to shrink game data and reduce the wait to access a digitally downloaded game.
  • The pandemic made digital downloads more popular: Many observers have pointed out that lockdown restrictions spurred the purchase of digital copies, especially as gaming became a vital means of connecting with people while physical stores were shut. In fact, the pandemic is considered a tipping point for digital game sales – by 2020, when the current generation of consoles was released, digital purchases accounted for nearly half of console game sales.
  • Cloud gaming and gaming subscriptions: Both Microsoft and Sony now run robust digital delivery platforms, and both offer game subscription services whose libraries boast multiple exclusives, which can be downloaded and played at no extra cost for as long as the monthly subscription lasts. Such services also offer discounted prices at the digital storefront for many games, including those not part of the subscription, and their highest tiers feature cloud gaming. With 25 million users, the Xbox Game Pass accounts for a significant portion of Microsoft’s gaming revenue. In fact, the rise of subscription services, made possible by higher internet speeds, could even lead to a new console war fought over ‘subscription exclusives’.
  • Live-service games depend on downloads: The industry has seen a major shift towards live-service games, which seek to keep their audiences engaged for years by regularly adding new game content, updates, game-balancing patches and in-app purchases – and all such content is delivered digitally. In a world where many of the industry’s most prominent titles follow the games-as-a-service model, physical copies may simply not be a viable mode of distribution. Live-service games are also quite large in size – the free-to-play game Destiny 2, including all expansions and updates, currently requires 97 GB of storage space on a PS 5, and more than 100 GB of space on an Xbox Series X|S, and though a physical copy exists, a significant portion of the game content needs to be downloaded. Some of the most prominent games today depend on digital delivery.
  • The biggest sector in gaming uses digital delivery: Mobile games are delivered as apps through stores such as Google Play and the iOS App Store, and they are some of the most lucrative titles on the market. As of 2022, more than 60 mobile games have made more than $1 billion in lifetime sales, and mobile titles constitute the largest share (45%) of global games revenue. The mobile gaming market is expected to be worth nearly $140 billion by 2026, and as mobile games grow ever more popular and lucrative, revenue from digital sales (including full-game downloads, DLC, in-app purchases and more) will continue to surpass physical sales revenue. 
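
As promised above, here is the back-of-the-envelope download-time calculation, as a small C program – it ignores protocol overhead, throttling and server-side limits:

    #include <stdio.h>

    /* Download time for a 183 GB game at the speeds cited above. */
    int main(void) {
        const double size_gb = 183.0;               /* CoD: Modern Warfare (2019) */
        const double speeds_mbps[] = { 113.2, 15.0 };

        for (int i = 0; i < 2; i++) {
            double megabits = size_gb * 8.0 * 1000.0;       /* GB -> megabits */
            double hours = megabits / speeds_mbps[i] / 3600.0;
            printf("%.1f Mbps: about %.1f hours\n", speeds_mbps[i], hours);
        }
        return 0;
    }

At the 2021 global average of 113.2 Mbps, the 183 GB download takes about three and a half hours; on a 15 Mbps line it would take more than a day.
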
Considering how many factors are contributing to the transition to digital delivery, one might believe that physical games are headed for extinction. That is, however, not quite the case. In 2018, gamesindustry.biz argued that sales of physical games might be declining, but they still constituted an industry worth billions of dollars worldwide, with 75% of AAA games sold as physical copies. A year later, it reported that physical console games accounted for 60% of sales value during the last quarter of 2019. These reports may reflect a pre-COVID reality, but even during the pandemic, in April 2020, more than a million physical games were sold in the UK – the highest figure since 2015. Physical games, while on the decline, are not yet finished.

Conclusion

The inherent advantages of physical games – shareability, resale value and access to game data without the need for a connection – still hold value for gamers. The outrage over Microsoft’s attempt to implement an always-online DRM policy for even physical Xbox One games shows that full ownership – another perk of physical games – is valued by users.

However, the Nintendo Switch’s cartridges underscore the problems with physical copies. While Sony and Microsoft support Blu-ray disc editions for their consoles, the Switch uses a proprietary cartridge based on the SD card format for retail games – such cards can currently hold up to 32 GB of data, though 64-GB cartridges have been in the works for at least three years. The small capacity of these cartridges forces users to download a significant part of many games, and they are also costlier to manufacture than Blu-ray discs, which is why some games cost more on the Switch than on PC or other consoles. These disadvantages may lead the portable Switch’s successors to eschew support for physical copies.

In fact, both retail distribution and digital delivery may be superseded by cloud gaming, which is expected to become a $20.9-billion industry by 2030. Amazon’s Luna and the cloud gaming solutions of Microsoft and Sony require only a fast connection to play a game, and if game streaming becomes prevalent, it can make retail copies, digital downloads and even game installation a thing of the past. Cloud gaming is considered the ‘killer use-case’ for burgeoning 5G networks, and the growth of 5G infrastructure can make game streaming a highly popular and easy way to play games. 

Some sources on the web continue to argue that physical games are still relevant, and others that they should remain relevant, given that access to a digital copy of a game can potentially end when the provider goes bust – this is one of many arguments used to justify breaking DRM on media. Some have even argued that the death of the Blu-ray format (as a medium for movies and games) does not bode well for gamers, especially those on a budget, who need to resell their games to afford new ones.

Opinions aside, the industry itself is moving away from physical copies quite decisively, propelled by steadily increasing internet speeds and a changing landscape dominated by highly lucrative live-service titles and mobile games. Physical copies may not die out completely, but they will likely be relegated to a (very) niche market in the years to come.

Gameopedia offers custom solutions depending on your specific data requirements. Reach out to us for actionable insights on both digital delivery and the retail distribution of games.

Read More

Game Engines: All You Need to Know

On January 1, 1993, id Software issued a press release about their upcoming game, Doom, in which they made a few immodest claims – the game would herald a technical revolution in PC programming, it would push back the boundaries of what was thought possible on contemporary PCs, and would offer a host of technical ‘tour de forces to surprise the eyes’ while delivering smooth gameplay across a range of PC hardware. The press release also contained a quote from John Carmack, id’s technical director, who claimed that the Doom ‘engine’ was optimised to run at excellent frame rates. 

When it came out, Doom (1993) was all it had promised to be. It was played by millions of people and continues to be celebrated as a milestone in gaming and culture more than 25 years later. Many factors contributed to Doom’s massive success, but perhaps the most important was the ‘engine’ used to power the game, which not only allowed Carmack and his associates to push the limits of what was possible in a computer game, but also repeat such a feat with their subsequent games, deploying upgraded versions of their original engine. In effect, id Software created a new paradigm for game development – the use and reuse of a unified code base known as an engine to create new games that shared a common core. 

Engines such as the one mentioned in id’s press release now power the creation of a bewildering array of games for multiple platforms – in this blog, we will discuss what engines are, their early history, and the two most popular game engines in the world – the Unreal Engine and Unity. We will also delve into various first-party game engines used by major studios to create games for classic franchises such as Assassin’s Creed, Battlefield, Call of Duty and The Elder Scrolls. 

What is a Game Engine and Why it’s Vital

A game engine is a software framework primarily designed for the development of video games. Developers can use engines to construct games for consoles, PCs, mobile devices and even VR platforms.

Game engines often include core functionality such as a 2D or 3D renderer, a physics or collision engine, and support for sound, cinematics, scripting, animation, AI, and networked play. They also allow games to be deployed on multiple platforms – game engines are capable of platform abstraction. Both Unity and Unreal Engine support easy deployment of game-ready assets from their respective marketplaces, the Unity Asset Store and the Unreal Marketplace.

The great advantage of a game engine is that it allows game developers to reuse or adapt the engine to make new games, instead of starting from scratch – they are spared the chore of reinventing the wheel. By providing support out of the box for developing many aspects of a game, engines reduce cost, time and manpower investment, allowing game studios to remain competitive in a constantly-evolving industry in which production costs have steadily risen. They also foster innovation by making game development easier. 
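
The division of labour an engine provides can be sketched in a few lines of C. Everything below is illustrative – the names are hypothetical, not any real engine’s API – but it shows the core idea: the loop and subsystems are reusable, and only the game-specific update changes between titles:

    #include <stdbool.h>
    #include <stdio.h>

    /* All names here are illustrative, not a real engine's API. */
    typedef struct { bool quit; int frame; } Engine;

    /* Engine-provided subsystems, reused unchanged across games (stubbed here) */
    static void poll_input(Engine *e)   { if (e->frame >= 3) e->quit = true; }
    static void step_physics(Engine *e) { (void)e; /* collision, rigid bodies */ }
    static void render_frame(Engine *e) { printf("rendered frame %d\n", e->frame); }
    static void mix_audio(Engine *e)    { (void)e; /* sound playback */ }

    /* The only game-specific part: rules, AI, scripted events */
    static void update_game(Engine *e)  { e->frame++; }

    int main(void) {
        Engine e = {0};
        while (!e.quit) {          /* the classic engine loop */
            poll_input(&e);        /* engine: platform abstraction over OS input */
            step_physics(&e);      /* engine: physics/collision */
            update_game(&e);       /* game: everything unique to this title */
            render_frame(&e);      /* engine: 2D/3D renderer */
            mix_audio(&e);         /* engine: sound */
        }
        return 0;
    }

Swap in a different update_game() and you have, in caricature, a different game running on the same engine – which is precisely the reuse the paragraph above describes.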

In the following section, we will discuss how game engines transformed game development, by allowing developers to use a single software to create multiple games for multiple platforms. 

History of Game Engines

Strictly speaking, game engines did not exist prior to the ’90s. Before the advent of such software applications, games had to be designed from the ground up, and very little of the code base could be reused when deploying games on multiple platforms. Games for the Atari 2600 – a major game console released in the ’70s – were coded using assembly language specific to the console. Developers did reuse code, but there was no unified code base that could be extended to create multiple Atari 2600 games. 

Super Mario Bros (1985) is arguably the first console game to be made by reusing a critical piece of code – Shigeru Miyamoto’s team had written the code to enable smooth side scrolling for their 2D racing game Excitebike (1984), and their code was reused in the making of the first Super Mario Bros game, allowing Mario to smoothly move across the screen and even accelerate from a walk to a run. This was an important inflection point in game development, considering that much of the first Mario game had to be designed by hand, on graph paper, and the platform for which it was built, the Nintendo Entertainment System, used assembly language, limiting code reuse. Despite this, the developers managed to re-implement the Excitebike code for the Mario game.

Super Mario Bros (Courtesy Nintendo)

The ’80s also saw the release of several 2D game development kits – known as game creation systems – that enabled users to create specific games using pre-built assets: the Pinball Construction Set (1983), the Adventure Construction Set (1984) and the Shoot-‘Em-Up Construction Kit (1987) are examples of kits published during this time. Each kit was specialised for a specific type of game – users made pinball games with the Pinball Construction set, while the Shoot-Em-Up kit enabled the creation of 2D shooters.

Pinball Construction Set (Courtesy Electronic Arts)

These kits allowed everyday users to create games using existing components, and often featured a Graphical User Interface (GUI) that further eased the process of creating a game. The Shoot-Em-Up kit even allowed users to share their creations as full game files, which could then be played on other systems that did not have the construction kit itself. No coding knowledge was required to make these games – this helped popularise such kits, but also limited the scope of the games that could be made with them.

Garry Kitchen’s GameMaker (1985), released by Activision for various home computer systems, was the first integrated development environment for making games, and can be considered a proto-game engine. GameMaker allowed users to create background graphics, movable objects known as sprites, sound effects and in-game music, and also included a game programming language that allowed the developer to code more features. GameMaker’s versatile programming language set it far apart from other game creation systems of the time.

The first game engine as we understand the term today was not the Doom engine, but an earlier code base created by id for the production of a new game trilogy in their Commander Keen franchise. From the outset, id recognised that creating a single piece of software providing common functionality for multiple games was a more significant accomplishment than making any one game, and even tried to license the so-called ‘Keen engine’ (1991), but met with little success.

This software came to be called a game engine largely because John Carmack and John Romero of id are car enthusiasts. According to Romero, an engine is the heart of a car, and the game engine is the heart of the game – it powers the game, while the art and other game assets are comparable to the body of the car. After coining the term, id introduced it to the world with the Doom press release in 1993.

The developer found a much more receptive audience for the Doom engine – after all, it had powered a massive hit, and was hence licensed by several companies, leading to the release of games such as Heretic (1994), Hexen: Beyond Heretic (1995) and Strife: Quest for the Sigil (1996). Developers using the Doom engine added their own graphics, characters, weapons and levels—the “game content” or “game assets” were unique to the developer, but the technology that powered the game was id’s engine.

Heretic (Courtesy GT Interactive)

The engine itself was revolutionary in many ways – it faked a 3D experience from an essentially 2D map by adding height differences to the environment, and represented enemies and objects with 2D sprites. Even now, Doom is remembered for how fast-paced it was, but the original renderer Carmack wrote slowed to a crawl on most systems when rendering complex scenes. Carmack researched academic papers and implemented a technique known as binary space partitioning – never before used in a game – to dramatically speed up the Doom engine’s renderer. In drastically simplified terms, binary space partitioning enabled the engine to order the rendering of areas by how close they were to the player, and to ‘cull’ – not render – areas that were hidden or too far away, thereby shortening render times. This let Doom stay responsive and run at high frame rates on the home computers of the time without sacrificing graphical fidelity.
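
A heavily simplified C sketch of the idea follows (real BSP nodes store splitting lines and sector geometry; here a single coordinate stands in for the splitting plane):

    #include <stdio.h>

    /* A minimal sketch of front-to-back BSP traversal. */
    typedef struct Node {
        double split_x;              /* 1D stand-in for a splitting plane */
        struct Node *front, *back;   /* subspaces on either side */
        const char *area;            /* leaf payload when front/back are NULL */
    } Node;

    /* Visit areas nearest the player first; a renderer can stop once the
       screen is full, skipping (culling) everything that remains. */
    void draw_front_to_back(const Node *n, double player_x) {
        if (!n) return;
        if (!n->front && !n->back) { printf("draw %s\n", n->area); return; }
        if (player_x < n->split_x) {      /* player is on the 'front' side */
            draw_front_to_back(n->front, player_x);
            draw_front_to_back(n->back, player_x);
        } else {                          /* player is on the 'back' side */
            draw_front_to_back(n->back, player_x);
            draw_front_to_back(n->front, player_x);
        }
    }

    int main(void) {
        Node hall  = { 0.0, NULL, NULL, "hallway" };
        Node vault = { 0.0, NULL, NULL, "vault" };
        Node root  = { 10.0, &hall, &vault, NULL };  /* split the map at x = 10 */
        draw_front_to_back(&root, 4.0);              /* player stands at x = 4 */
        return 0;
    }

Because nearer subspaces are always visited first, the renderer can stop as soon as the screen is filled – which is exactly the culling described above.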

A few years after Doom, Carmack observed that the engine technology was id’s key value proposition and that game designing (on top of such engines) could well be managed by ‘a few reasonably clued-in players’. 

Ken Silverman took cues from Doom to create the Build (1995) engine at the age of 19. The Build engine refined the illusion of 3D with the ability to look up and down in games such as Duke Nukem 3D (1996), even though these titles were rendered, like Doom, on a 2D plane. Silverman also added tags to parts of the game world – walking over or into these areas would teleport the player, creating the impression of falling down a pit or passing through a tunnel.

Three years after Doom, id released the Quake engine (1996), which featured true real-time 3D rendering and support for 3D hardware acceleration. The engine used dynamic lighting for moving objects and static lightmaps for stationary geometry. Quake (1996), which was made with the engine, is yet another milestone in gaming.

Quake (Courtesy GT Interactive)

Bethesda had earlier attempted a true 3D engine of its own, called XnGine (1995), which suffered from bugs and stability issues. Bethesda eventually found success with the engine by creating the huge, procedurally-generated world of The Elder Scrolls II: Daggerfall (1996), but would later abandon XnGine for NetImmerse (1997), the predecessor to Gamebryo.

The developers at id then outdid themselves with id Tech 3 (1998), an upgraded version of the Quake engine that would later power games such as Medal of Honor: Allied Assault (2002) and Call of Duty (2003), and would be licensed to several developers. At the time, the engine was simply called the Quake III Arena engine, after the game for which id had upgraded its original code base. It featured next-gen graphics driven by shaders – scripts that determined, and enhanced, the appearance of in-game objects, surfaces, areas and even character models, adding accurate shadows, light emission and reflections. Shader code also specified the sound effect associated with a surface and identified map sections such as water bodies. The engine was also one of the first to support curved surface geometry. All these innovations – especially the shader system – were computationally expensive and would have crippled the engine’s renderer, which is why the developers implemented the now-legendary fast inverse square root algorithm.

The fast inverse square root algorithm quadrupled the speed of calculations used in pathfinding, lighting, reflections and many other game-critical operations. In effect, Quake III Arena (1999) boasted far better performance – with high-quality, hardware-accelerated graphics – because a piece of code less than 20 lines long sped up a calculation performed millions of times per second. When id released the source code for the engine in 2005, the fragment attracted the attention of programmers and mathematicians because of the accuracy and speed with which it approximated inverse square roots (with an error margin of about 1%). It is four times as fast as a regular inverse square root calculation, and almost as fast, though not as accurate, as the dedicated hardware instructions modern processors now use for the same job.
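
The fragment in question is short enough to quote. This is essentially the function as it appears in the released Quake III Arena source (the original’s famously profane comments are paraphrased here):

    float Q_rsqrt(float number)
    {
        long i;
        float x2, y;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        y  = number;
        i  = *(long *) &y;                     // reinterpret the float's bits as an integer
        i  = 0x5f3759df - (i >> 1);            // the 'magic' constant: a cheap first approximation
        y  = *(float *) &i;
        y  = y * (threehalfs - (x2 * y * y));  // one Newton-Raphson iteration refines the guess
        // y = y * (threehalfs - (x2 * y * y)); // a second iteration, left commented out in the original
        return y;
    }
    // Note: the code assumes 32-bit longs and type-puns through pointers, which was
    // fine on the compilers of its era; modern C would use int32_t and memcpy.

The bit-level trick treats the float’s raw bits as an integer to get a rough first guess at 1/√x, and the single Newton-Raphson step then refines that guess to within roughly 1% – good enough for the lighting and normalisation work the engine needed.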

Quake III Arena (Courtesy Activision)

id created a thriving licensing business on the back of its pioneering game engine, which also powered the developer’s highly successful multiplayer title, Quake III Arena. It may hence seem that id lacked any serious competition at the time, but that was not the case: it was challenged by a relative newcomer, Epic Games, whose multiplayer title Unreal Tournament (1999) was made with the Unreal Engine. The engine had first been used to create Unreal (1998), a first-person shooter that proved successful, and it surpassed id’s technology in key areas, especially graphics. In the next section, we will discuss the rise of the Unreal Engine, along with Unity, another industry-standard engine.

Unreal Tournament (Courtesy GT Interactive)

Unreal and Unity: The Industry Standard

The Unreal Engine and Unity are currently the industry standard – they provide the ‘digital infrastructure for many of the world’s most popular games’. We will discuss the history of both engines – the first version of the Unreal Engine is a milestone in the development of game engines, and drew praise even from the wizards of id, while Unity rose to prominence by making high-quality tools accessible to indie studios. We will also detail the features that both these engines now offer to developers. 

Unreal Engine

Tim Sweeney, the founder of Epic Games, was no laggard at programming. He coded 90% of the first Unreal Engine for Epic Games’ FPS title, Unreal, which would debut in 1998, although the engine itself was licensed to other developers by 1996. 

Initially, UE was designed for software (CPU-based) rendering, but would later be able to make use of dedicated graphics hardware. From the outset, it used 16-bit colour and supported visual effects such as soft shadows, dynamic lights, volumetric fog and texture filtering – many of these features would not only be praised by id’s Carmack, but also acknowledged as milestones that Tim Sweeney reached before id could get there. 

Unity (Courtesy Unity Technologies)

Carmack would remark that Unreal had broken the mould with the use of 16-bit real colour – developers would inevitably choose to work with a 16-bit colour palette going forward – and that the engine had raised the bar on what gamers would expect from future games. Carmack had always been one of Sweeney’s heroes – in fact, Sweeney equated Carmack’s innovations in the gaming field with Newton’s contributions to the study of physics. One can only imagine Sweeney’s reaction to Carmack’s praise. 

By late 1999, about sixteen projects were being developed using Epic’s engine, including Deus Ex (2000) and The Wheel of Time (1999). Unlike id Software, whose engine business only offered the source code, Epic provided support for licensees and met with them to discuss improvements to its game development engine.

Unreal also provided tools that were more user-friendly for non-engineers, a crucial factor for the Deus Ex developers, whose team included many designers and whose goal for the game was to go beyond the standard FPS. 

Fast forward to today and Unreal still enjoys a reputation for user-friendly tools that greatly extend developers’ ability to create cutting-edge games. The current version of the Unreal Engine has been hailed as a game changer because of its feature set, is free to download and use (premium licensing options exist as well), and is known for its forgiving learning curve. The engine is free for internal projects, though users must pay a 5% royalty if and when their product earns over $1 million.

A huge number of games are made with Unreal – Wikipedia maintains an exhaustive list of them. Both Unreal Engine 4 and 5, the latest iterations of the engine, came with major improvements – Unreal Engine 4 introduced the Blueprints system, a versatile visual scripting language that allows developers to prototype game elements and gameplay mechanics quickly by connecting nodes and other basic building blocks.

Released in April 2022, UE 5 features a host of innovations and is already being used by companies to make games – we already have access to some stunning gameplay footage from several upcoming UE 5 titles, an interactive tech demo based on the Matrix universe, and footage of a fan-made Superman game, all of which demonstrate the engine’s myriad capabilities.

UE 5’s tools have brought about some of the most photo-realistic gameplay footage we have yet seen, and we discuss some of the engine’s most important innovations below.

Nanite: This is a ‘virtualised geometry system’ that speeds up the creation of LODs (levels of detail). In any game, objects are rendered at decreasing levels of detail based on how far they are from the player (or how important they are in a game environment). Before Nanite, developers had to author LODs by hand, and the engine would then use the appropriate LOD for an object based on the player’s position; Nanite instead lets developers import high-quality assets that are automatically rendered at the correct level of complexity with respect to the player’s point of view.
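For contrast, the hand-authored approach Nanite replaces boils down to checking the camera distance against an artist-made table. A minimal sketch (all names here are hypothetical, not Unreal’s API):

```c
/* Hypothetical per-object LOD table: index 0 holds the most detailed mesh,
   and each entry lists the distance out to which it should be used. */
typedef struct {
    int   mesh_id;       /* which mesh to draw                  */
    float max_distance;  /* use this LOD while nearer than this */
} Lod;

/* Return the first LOD whose distance band contains the object,
   falling back to the coarsest mesh beyond the last band. */
int select_lod(const Lod *lods, int count, float distance_to_camera)
{
    for (int i = 0; i < count; i++) {
        if (distance_to_camera < lods[i].max_distance)
            return lods[i].mesh_id;
    }
    return lods[count - 1].mesh_id;  /* farthest band: coarsest mesh */
}
```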

Lumen: This utility is used to manage and rapidly update scene lighting – scenes change to reflect time of day accurately, new light sources are immediately integrated into the overall lighting profile, and even sudden flashes of light entering the shot affect the scene realistically. 

World Partition System: This utility greatly enhances the Unreal Engine’s ability to build open worlds. The system uses a grid to divide an entire game world into manageable sub-level chunks, which can be loaded and unloaded as the player traverses the landscape. Such sub-levels can be developed by independent teams as well.
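The core idea of such streaming grids is easy to sketch. The following is purely illustrative – invented names and constants, not Epic’s API:

```c
#include <math.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical streaming grid: the world is carved into square cells of
   CELL_SIZE units, and only cells near the player stay resident. */
#define CELL_SIZE   512.0f
#define LOAD_RADIUS 2        /* in cells, around the player's cell */

typedef struct { int x, y; } Cell;

/* Map a world position to the grid cell that contains it. */
Cell cell_of(float world_x, float world_y)
{
    Cell c = { (int)floorf(world_x / CELL_SIZE),
               (int)floorf(world_y / CELL_SIZE) };
    return c;
}

/* A cell stays loaded while it lies within the square of cells around
   the player; as the player moves, cells crossing this boundary are
   queued for loading or unloading. */
bool should_be_loaded(Cell cell, Cell player)
{
    return abs(cell.x - player.x) <= LOAD_RADIUS &&
           abs(cell.y - player.y) <= LOAD_RADIUS;
}
```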

UE 5 also has new tools for animation and sound, and can even be used for film and TV production (Unreal Engine 4 is already used to create the environments for shows such as The Mandalorian). It also supports much larger files, including 12K textures – an important feature considering that games are now played at increasingly high resolutions. On the whole, UE 5 continues to power the creation of games and media with cutting-edge graphics, sound, animation and more.

Unity

When he created the first version of the Unreal Engine, Tim Sweeney was going up against his greatest hero, and the Unreal Engine soon became a de facto choice for AAA titles; once Epic made it free, a lot of indie and AA studios could use Unreal’s rich toolset as well. Unity has quite a different story – from the outset, it courted indie developers.

Unreal Engine (Courtesy Epic Games)

Unity is the brainchild of three developers – Nicholas Francis, Joachim Ante and David Helgason. The company was founded in Copenhagen, and the engine began life as a graphics tool for Mac OS X. The team then recruited a diverse group of developers, all of whom brought their own ideas to the table, and the result was an engine applicable to a variety of use cases. Commercialising the product quickly became a priority, especially for use by indie developers, whose main pain point was having to reconstruct an engine for every new game concept.

The creators of Unity felt that their engine would foster creators in the indie game space and democratise game development. They released the first version in 2005 and won an award from Apple for ‘Best Use of Mac OS X Graphics’ the following year. The developers soon added support for Windows and browsers, and the engine grew more sophisticated, eventually allowing the founders to devote themselves full-time to what had essentially begun as a passion project.

It was in 2008 that Unity skyrocketed in popularity, when it became the first engine to provide support for Apple’s new App Store – suddenly, a great many developers wanted to use Unity to make game apps for the App Store and the engine rapidly rose to prominence as iPhones became ever more popular. 

Unity 3.0 (2010) was an important inflection point as well – it featured Android support, advanced graphics features for desktop and console platforms, and a host of technical upgrades – essentially bringing high-end development tools to indie game makers at very affordable rates. Within two years, VentureBeat observed that ‘few companies have done as much for the flowering of independently produced games as Unity’. Unity has continued to deliver high-end tools to developers who would otherwise have no access to them, and by 2018, CEO John Riccitiello claimed that half of all games on the market were made using Unity.

Unity now supports over 25 platforms and has long been the darling of indie developers. It has an advanced physics engine, integrates with Autodesk asset-creation tools like Maya and 3ds Max, and matches most of what UE offers – recently, it even introduced the Bolt system to compete with Unreal’s Blueprints. However, while even novice programmers can use Blueprints to develop the entire logic of a game, Bolt’s functionality does not extend that far.

Unity has a massive community of users, and learning it is a breeze because of the wealth of tutorials and guides available online – it is known for being beginner-friendly. Like UE, Unity is the engine behind a huge number of games and enables the creation of 2D, 3D, VR and AR content – it now powers lucrative mobile games such as the AR title Pokemon Go, and Call of Duty Mobile. 

According to a TechCrunch article, Unity didn’t seek to topple AAA game engines but succeeded in making – and selling – a product ideally suited to the budget and needs of independent studios. Unity’s market grew rapidly when it supported the App Store, and continues to expand thanks to the steady addition of new features. 

Though it first rose to prominence as an engine for iOS games, Unity has long been capable of producing AAA-quality titles across platforms. Both Unity 5 (2015) and Unity 2021 have introduced major updates to graphics, sound, lighting, animations and multiplayer, added visual effects such as volumetric fog and global illumination (which make environments more realistic), and integrated techniques such as deferred rendering (an optimised form of rendering that greatly improves framerates), all of which have contributed to the polish of new Unity games. 

The engine is still considered best for designing mobile games – the fastest-growing market in gaming – and was ranked as the most popular for mobile game development in a survey by Game Developer. By 2020, Unity had a 50% market share in mobile game development, and a 60% share in VR and AR game development. 

Given the overwhelming popularity of both Unreal Engine and Unity, they are the driving force behind a huge number of games – in fact, Unity has edged past Unreal as the most popular game engine because of its use by a growing number of small and medium-sized studios. Both Unreal and Unity come with very flexible licensing options, which have contributed to their widespread use in the gaming industry. As both Unity and Unreal are highly accessible, they are even taught in game design courses.

However, quite a few prominent games are made without either of these two engines – major game studios have built their own engines for such titles. We will provide an overview of these engines in the following section.

First-Party Game Engines

Like Unreal and Unity, first-party engines have powered very well-known games. However, these engines are inaccessible to the average user – developers can use them only if they work for the studio that built them, or partner with the studio or its parent company and develop games for it. Such first-party engines may even lack a name or a versioning system, and little is publicly known about the specific upgrades each version has received. For instance, Bethesda’s Creation Engine is known to be a fork of the Gamebryo engine, but no public changelogs track Gamebryo’s evolution into the Creation Engine.

Be that as it may, such first-party engines have powered some of the greatest games and game franchises ever, from Half-Life 2 (2004) and Skyrim (2011) to Battlefield, Call of Duty and Assassin’s Creed. In this section we will explore the first-party engines that have been critical in creating such titles and franchises.

id Tech

id Tech debuted as the Doom engine in 1993 and grew into a whole family of game engines from the era when id still licensed its technology to other developers. Prior to id Tech 5, the engine had no official designation and was referred to as either the Doom or the Quake engine. id Tech 5 was an attempt to revive id’s engine licensing business and compete with the likes of Unreal, but ZeniMax bought id Software in 2009 and decided to restrict the engine to id’s projects and those of sister studios owned by ZeniMax. Since id Tech 5, the engine has been proprietary, while previous versions have been released under the GPL licence. The latest version, id Tech 7, was used to make Doom Eternal (2020), and it is evident that the engine has come a long way from its predecessors.

Doom Eternal (Courtesy Bethesda Softworks)

Source

The engine behind some of the greatest games ever made, including Half-Life 2 and the Portal series, Source (2004) began life as an updated version of Valve’s GoldSrc engine, which was in turn a highly modified version of the Quake engine. The Source engine is famous for its accurate simulation of physics and collisions.

Half-Life 2 (Courtesy Valve)

Source 2, which featured improved graphics, debuted with the release of Dota 2 Reborn (a remaster of Dota 2) in 2015, and then made a major mark with the release of the VR phenomenon Half-Life: Alyx in 2020. Valve’s engine is now showing its age and its graphics look somewhat dated. Like its predecessor, Source lacks a version numbering scheme and is improved through incremental updates.

Half-Life: Alyx (Courtesy Valve)

IW Engine

Created by developer Infinity Ward, this proprietary engine has powered the Call of Duty franchise. Originally based on id Tech 3, the IW engine (2005) has received major updates at a steady rate, allowing CoD titles to ship with high-quality graphics and state-of-the-art features with each major release. Some have argued that the latest CoD titles lack the quality of their predecessors, but even such naysayers were amazed by the next-gen graphics of Call of Duty: Modern Warfare (2019), which used a rebuilt version of the IW engine.

Call of Duty: Modern Warfare (Courtesy Activision)

Anvil

Anvil (2007) is Ubisoft’s proprietary engine, used primarily for the Assassin’s Creed games, a few Prince of Persia titles, and Tom Clancy’s Ghost Recon Wildlands (2017). A powerful engine, Anvil made possible the open-world setting, traversal mechanics and high-fidelity graphics of the original Assassin’s Creed in 2007, and consistent updates have allowed it to power the franchise’s transition to fully open-world games such as Origins (2017), Odyssey (2018) and Valhalla (2020).

Assassin’s Creed (Courtesy Ubisoft)

While Anvil was always the engine for the AC games, many of Ubisoft’s other titles were powered by the Unreal Engine, until relations soured when Epic doubled the royalty fee Ubisoft had to pay. Ubisoft, in turn, decided to adapt Anvil for projects beyond the Assassin’s Creed franchise, and Ghost Recon Wildlands became the first title in the Ghost Recon series to be made with the in-house engine. Anvil was also used to make the sports games Steep (2016) and Riders Republic (2021), and will power the upcoming Prince of Persia: The Sands of Time remake as well.

Frostbite

The Frostbite engine was first used in 2008 to create Battlefield: Bad Company – earlier Battlefield games were made using other engines. The engine has powered all Battlefield games since the release of Battlefield 3 in 2011. 

Created by EA developer DICE, Frostbite is one of the few first-party game engines to receive public criticism from a developer that used it. EA subsidiary BioWare has stated that the development of Dragon Age: Inquisition (2014), Mass Effect: Andromeda (2017) and Anthem (2019) was hamstrung by EA’s directive to use Frostbite – an FPS-centric engine – for BioWare’s markedly different games. While Dragon Age: Inquisition did prove successful, Andromeda and Anthem tanked.

Criticisms aside, Frostbite is still known for powering the games of major franchises such as FIFA, Madden and Battlefield, many of which have tight annual release schedules.

Madden 23 (Courtesy Electronic Arts)

Creation Engine

The Creation Engine powered the development of Skyrim (2011), Fallout 4 (2015) and Fallout 76 (2018). A modified version of the Gamebryo engine, which was used to make Oblivion (2006), the Creation Engine enabled Bethesda to greatly extend the scope and breadth of Skyrim as compared to Oblivion, resulting in one of the greatest open-world games ever made. Bethesda also updated the engine to bring more realistic, higher-quality graphics to Fallout 4.

While the engine was able to meet the higher graphics standards for Fallout 4, Bethesda struggled to update the engine to support the multiplayer Fallout 76. During the making of Fallout 76, developers and QA testers were forced into crunch mode, attempting to create a multiplayer, live-service experience from a tool that had hitherto only powered single-player games. Fallout 76 had a disastrous start, in part because there were no NPCs at launch due to the Creation Engine’s limitations. The game has considerably improved since, but not without taking its toll on the staff involved.

Fallout 76 (Courtesy Bethesda Softworks)

Decima

Decima is a proprietary engine created by the Dutch developer Guerrilla Games and was first used for Killzone Shadow Fall (2013), a PS4 launch title. The engine stood out because of the high-quality graphics featured in Killzone, and was then used to make the PlayStation VR game Until Dawn: Rush of Blood (2016) – the engine is primarily associated with the development of PlayStation games.

The engine was heavily modified to power Guerrilla Games’ ambitious Horizon Zero Dawn (2017), which featured a fully open world teeming with animal automatons – a far cry from a ‘corridor shooter’ like Killzone, in which players are funnelled along a narrow path from one game location to another.

Horizon Zero Dawn (Courtesy Sony Interactive Entertainment)

For years, Guerrilla had no official name for the engine, until the studio shared its tech with Hideo Kojima when he visited – Kojima was then coming to terms with his split from Konami and was profoundly grateful to Guerrilla for sharing the engine with him. Guerrilla, in turn, had great respect for Kojima and his team, and decided to name the engine ‘Decima’ after Dejima, a Japanese island where the Dutch and the Japanese traded during the 17th century. Kojima’s first game after parting ways with Konami – Death Stranding (2019) – was the result of a collaboration between Guerrilla and Kojima Productions, and Horizon Zero Dawn and Death Stranding contain references to each other.

Conclusion

Since the ’90s, game engines have been the driving force behind just about every game released on the market. In our early history of game engines we see an unbroken line of innovation from John Carmack and John Romero to Tim Sweeney and the makers of Unity. We have also seen the steady rise to prominence of Unreal Engine and Unity as the two most popular game engines on the market, both having low barriers to entry. The number of games made using these engines is mind-boggling, and they are hence behind some of the best and worst titles ever made. 

Games made with an engine may vary in quality, but the game engine itself is a powerhouse that arguably allows developers to meet the cripplingly harsh annual release schedule for many major games, and also develop amazing titles when working under longer development cycles. By sparing developers the chore of reinventing the wheel, game engines not only make their lives easier, but also enrich the lives of gamers, who demand – and enjoy – games that constantly push the envelope in terms of graphics and gameplay. 

Gameopedia works with clients across the industry on custom requests and can provide in-depth data about game engines. Reach out to us for data that can yield novel insights about the software frameworks that power the creation of games.

Read More

Contemporary Trends in Online Multiplayer

In early 2012, a Kiwi soldier named Dean Hall released a mod for Bohemia Interactive’s tactical military sim Arma 2, creating an online multiplayer open world where players had to survive a zombie apocalypse. Named DayZ, the mod featured an unprecedented degree of realism – players had to eat, sleep and maintain a steady temperature, and the basic need to survive both the zombies, and the humans in the game world, became the sole focus of players. In DayZ you could either team up with others to stand a better chance of survival, or shoot and loot them for their gear, rations and medical supplies – if the game had any goal to speak of, it was to not die. 

The early 2010s continued the tradition of innovation in multiplayer – a trend we discussed in depth in our previous blog. In the 2010s, games such as Minecraft, GTA Online and Final Fantasy XIV would each offer their own spin on the multiplayer experience. Another trend to emerge in the first half of the decade was the shift to mobile multiplayer, where studios would release innovative games that made meaningful use of mobile technology, such as Pokemon Go, with its augmented reality-based gameplay. The dominant trend in the latter half of the 2010s was the rise of the hero shooter and the battle royale, two genres that became wildly popular across platforms from PC and console to mobile. Perhaps the most significant recent trend in multiplayer, however, has little to do with game development and everything to do with the state of our society – online multiplayer experienced tremendous growth during the lockdowns of the pandemic, and we will discuss this as well in this blog. 

 

2010-2016: Innovation and Mobile Multiplayer

In the first half of the 2010s, developers created innovative multiplayer games for consoles and PC, and also shifted toward multiplayer on mobile. These two trends – continued innovation in multiplayer and the shift toward mobile multiplayer – will be covered in this section.

Unique Multiplayer Titles Refresh the Genre

The decades from 1990 to 2010 had seen unique genres native to multiplayer, but the first half of the 2010s would see developers taking multiplayer in directions that defied traditional expectations.

Minecraft (2011)

Minecraft (Courtesy Mojang)

Soon after the release of version 1.0 in 2011, Minecraft became a highly popular multiplayer game. Its multiplayer is distinctive in that it allows players to collaborate – mining for resources and working together to build increasingly complex and elaborate structures. While it has other multiplayer modes, Minecraft’s collaborative multiplayer was unlike anything seen before – no game had offered a mode where players simply worked together to build incredible structures, with no adversarial element or combat involved. Minecraft does offer more traditional gameplay in Minecraft Realms, where you can team up with others and go on adventures, and a PvP mode called BedWars, where you defeat other players by destroying their respawn point – a bed.

 

DayZ (2012)

DayZ (Courtesy Bohemia Interactive)

DayZ’s realism was unprecedented in gaming, let alone online multiplayer. Apart from having to eat and sleep, players were vulnerable to fractures, drinking poisoned water could result in cholera, and a zombie bite or bullet wound could send players into shock. The hostile conditions made for highly tense encounters with other players, who might simply choose to kill you, or decide on the spur of the moment to cooperate and team up with you. DayZ also features permadeath: no matter how much loot and experience you have, you restart from scratch when you die. This adds yet another layer of immersion to what is already a hyper-realistic survival sim. Games such as DayZ are referred to as PvPvE (player vs player vs environment) because they combine cooperation and competition set against challenges posed by the game world.

Within the first three months of launch, DayZ had a million unique users and boosted sales of Arma 2 – the base game required to play the mod – by 500 percent, leading the CEO of Bohemia Interactive to admit that DayZ was the primary driver of Arma 2 sales.

Dean Hall was soon hired by Bohemia Interactive to create a standalone version of the game, which was released on Steam Early Access in 2013 – the alpha version sold a million copies by 2014. DayZ also influenced games like Rust and ARK: Survival Evolved, and cast a long shadow over the development of survival games.

 

Dota 2, GTA Online and Final Fantasy XIV: A Realm Reborn (2013)

In the span of a single year, gamers got three of the most enduringly popular online multiplayer games: the MOBA Dota 2, the MMORPG Final Fantasy XIV, and GTA Online, which can be considered an evolution of what MMO experiences could offer, with its plethora of activities and challenges.

Valve released Dota 2 in 2013 and went on to host the most lucrative esports event in the world – The International, which now boasts a staggering prize pool of over $40 million. Valve took an innovative approach to the free-to-play game’s premier tournament, crowdfunding it through the sale of skins, cosmetic upgrades and battle passes. Dota 2 is hailed as one of the most complex, balanced and challenging MOBAs, and IGN ranked it the best PC multiplayer game of 2013.

Dota 2 (Valve Corporation)

Set in the vast open world of Los Santos, GTA Online allows players to do just about anything they want, and Rockstar keeps releasing updates that expand the activities players can engage in, introduce quality-of-life upgrades, and even add single-player missions. In GTA Online, you can take part in vehicle races, heists and casino trips, run a criminal enterprise, buy homes, go to flight school, steal exotic cars, run around the city… and more – it’s a game containing countless games.

GTA Online (Courtesy Rockstar Games)

Final Fantasy XIV’s first iteration (2010) was a disaster, but Square Enix decided to resurrect it rather than abandon the project. The end result, released in 2013, is now a major MMORPG. Naoki Yoshida, tasked with reinventing FFXIV, streamlined many of its MMORPG elements: the developers set a low level cap, inviting players to keep playing in search of loot and resources; the armoury system let players change classes on the fly by equipping certain items; and Yoshida pushed hard to bring high-quality graphics to the MMO. The game also receives regular updates every three and a half months, a cadence that keeps it fresh, and Eurogamer termed it one of the biggest games in the world as of 2022.

Destiny (2014)

Destiny (Activision)

Destiny received quite a bit of criticism when it launched, and it took a novel approach to addressing such critiques – listening to players. Unlike other games, Destiny’s patches, updates and expansions were direct responses to player feedback, with Bungie acting on what the community wanted rather than setting an update schedule based purely on its own agenda for the game. On its release, Destiny called itself a ‘shared-world shooter’ – the PvE element constitutes the majority of the game, while the PvP zones are equally appealing. Destiny also features a unique networked mission architecture, something Bungie has elaborated on in detail.

Destiny achieved a seamless blend of single-player, co-op and multiplayer, using a persistent world made up of public and private spaces. Multiplayer events might be triggered in a public space, or a friend might join in on co-op, while private spaces lock players into campaign goals. The two spaces flow together without interrupting the story. 

Destiny made nearly $500 million in pre-orders and day-one sales, amassed over 20 million players within a year of release, and was the best-selling new IP of 2014.

 

Rainbow Six Siege and Rocket League (2015)

On paper, Rocket League’s premise seems absurd – soccer (football) matches between player-controlled vehicles. After a low-key initial launch, the game gained a massive following thanks to its fast-paced, intensely competitive gameplay and sustained developer support. It was also offered for free on the PlayStation Plus service for about a month, increasing its visibility and making it the most downloaded PS4 game of 2016. It received the Best Sports/Racing Game award at The Game Awards 2015, and when it went free-to-play in 2020, it crossed one million concurrent players.

Rocket League (Courtesy Psyonix)

Rainbow Six Siege had a rough start but turned things around to become an important multiplayer FPS emphasising strategy, taking a cue from Arma 3’s tactical elements. Ubisoft achieved one of the industry’s most impressive turnarounds by adopting a games-as-a-service model for Siege, releasing a slew of content updates and patches that brought the game up to scratch and eventually garnered 25 million users. The game also maximised its appeal by morphing into a hero-based shooter, giving playable characters unique abilities. Rainbow Six Siege is now a major esport.

Rainbow Six Siege (Courtesy Ubisoft)

The Shift toward Mobile Multiplayer

As early as 2011, Mojang realised that Minecraft would work very well as a mobile game and released the Pocket Edition that same year. The mobile title had mostly the same feature set as the PC game and went on to become one of the top-grossing mobile game apps. The shift to mobile multiplayer had begun.

Clash of Clans (2012)

Clash of Clans (Courtesy Supercell)

One of the earliest successful mobile multiplayer games was Clash of Clans, which offered complex team-based gameplay on mobile devices. Set in a persistent world, the player is a village chief: raiding other villages for important resources, unlocking new troops and bolstering your own village’s defences against attacks form the core gameplay of the title. Players can also team up to form clans (of up to 50 players) and battle other clans, chat with friends and more. In 2021, Clash of Clans generated nearly $490 million in in-app purchase revenue, and it remains the second most popular game by daily user count in the US.

Hearthstone (2014)

Hearthstone (Courtesy Blizzard)

The free-to-play PC and mobile game Hearthstone: Heroes of Warcraft achieved an unexpected degree of success, proving that a digital collectible card game could be just as successful as similar games played with actual cards. Blizzard used the lore, characters and other elements of the Warcraft franchise to full effect, creating a fun, fast-paced card game with eye-popping graphics. The developers worked hard to recreate the experience of a real card game in the user interface, digitising assets from the earlier physical World of Warcraft Trading Card Game. The game’s success is attributable to faithfully adapting a traditional deck-building experience to a digital environment, keeping matches short, offering a variety of match types, releasing regular expansions with new cards, and even letting you admire your card collection in special views. Hearthstone reached 100 million players by 2018, had a user base of 23.5 million by 2020, and nearly 4 million people still play it across platforms as of 2022. It has made more than $700 million since launch and has its own esports scene as well.

Honor of Kings (Arena of Valor, 2015)

Honor of Kings (Tencent Games)

The mobile MOBA (multiplayer online battle arena) Honor of Kings is one of the most successful mobile games of all time, registering 100 million active users per day, becoming the first mobile game to make $10 billion in revenue, and becoming the leading mobile game app in China. Ironically, it might never have come about if Riot Games had agreed to parent company Tencent’s request for a mobile version of League of Legends. The LoL creators did not want to dilute the game’s brand with a mobile knock-off, so Tencent turned to another subsidiary, TiMi Studio Group, for a new mobile MOBA IP. The result was Honor of Kings, a MOBA featuring multiple competitive modes, a PvE mode and even a standalone mode for when the player is offline. An international version, known as Arena of Valor, was released for Western regions in 2016, with heroes greatly altered to fit the target market. The MOBA has many key virtues – you can easily set up battles with friends, and the fast-paced stand-offs last just 15-20 minutes. The game had all the key MOBA elements but was still easier to master than Dota 2 or League of Legends, creating a low barrier to entry, and its seamlessly integrated social elements kept players engaged with each other. The massive success of the MOBA led Riot Games to reassess its stance on a mobile version of LoL – it released League of Legends: Wild Rift, a modified version of LoL, for mobile in 2020. Since late 2021, Wild Rift has been drawing in about 15-20 million players each month.

 

Clash Royale and Pokemon Go (2016)

Clash Royale (Courtesy Supercell)

Mobile multiplayer fans got their hands on not one but two innovative mobile titles in 2016 – Supercell’s follow-up to Clash of Clans, Clash Royale, and Niantic’s revolutionary Pokemon Go, which used augmented reality as the basis of its gameplay.

A gamesindustry.biz article asserts that Clash Royale’s innovative gameplay powered it to displace lacklustre titles from the top of the mobile grossing charts. Clash Royale cleverly combined aspects of tower defence, MOBAs and card-based battles to become one of the top-grossing games in the world within a month of its release. It offers accessible but deep gameplay, fast synchronous multiplayer that lasts only minutes and ends in nail-biting stand-offs, and well-integrated social elements. It is hailed as a smart game that rewards strategy and delivers a complex, tactical experience on a small screen. The game crossed $3 billion in lifetime player spending by 2020 and, as of 2021, has been downloaded more than 500 million times.

While many of the games listed here brought interesting genres to mobile, Pokemon Go is a different beast altogether. Niantic used augmented reality to overlay Pokemon on real-life locations, visible to players through their phone cameras; a swipe of a ‘Poke Ball’ would capture the Pokemon. One of Pokemon Go’s biggest selling points was that players had to step outside to capture Pokemon – it was probably the first game to actually take place in real-life settings. The game garnered more than 100 million players on mobile phones within a month of its release.

After reaching a certain level, players can experience the game’s multiplayer aspects – they can battle at a Pokemon Gym and join one of three colour-coded teams: red (Valor), blue (Mystic) and yellow (Instinct). The three teams vie for control of the strategic Pokemon Gyms around the world – the Gyms host raids, and the Pokemon you station there earn coins, which can be spent on upgrades and items at the in-game store. Updates to the Gym mechanic brought cooperative raiding and the chance to take down large Pokemon together, and the game continues to receive updates.

Pokemon Go (Courtesy Niantic)

The game peaked at 232 million active players in 2016 and is still going strong – 71 million people played it in 2021, and it has been downloaded over 500 million times. By 2021, it had amassed $5 billion in lifetime revenue, much of it from the United States.

2016-2020: The Rise of Hero Shooters and Battle Royales

The first half of the decade saw the release of online multiplayer games so unique that it is difficult to imagine them sharing the same demographic. Of course, dedicated multiplayer fans would have played all of them to fully enjoy the variety on offer. In the latter half of the 2010s, online multiplayer games would be characterised mostly by the hero shooter and battle royale genres, though some unusual games, such as Sea of Thieves and Among Us, would also make their mark on the online multiplayer genre.

 

The Rise of the Hero Shooter

Overwatch (2016)

Overwatch (Courtesy Blizzard)

Overwatch has been imitated by multiple developers, and for good reason. The game’s distinctive roster of hero characters was not new – Team Fortress (1996) also featured unique characters who performed different functions during competitive missions – but Overwatch took that concept and added MOBA elements to fashion the hero shooter as we know it today. The heroes of Overwatch have outrageous powers, and going one-on-one against any of them would be difficult, but the 6v6 structure keeps the game accessible – each team brings its own ridiculously overpowered band of heroes into play, each with special attacks that are easily learned by watching guides. Team play revolves around characters combining their powers in coordinated moves. Winning in Overwatch hence entails an intimate knowledge of the strengths and weaknesses of the 32-character roster, knowing which hero can totally neutralise an opponent’s pick, and managing your own weaknesses – your opponent knows as well as you do that every hero has a counter. Within a year, Overwatch had made more than $1 billion and had 30 million registered players, becoming Blizzard’s fastest-growing franchise. As of 2022, Overwatch draws in about 8-9 million players each month.

 

Multiple Games Adopt the Hero Shooter Formula

We discussed above that Rainbow Six Siege grew in popularity once it became a hero shooter like Overwatch. But Siege was hardly Overwatch’s only imitator. According to a PC Gamer article, hero shooters have ‘become the de facto mould of what multiplayer shooters should look like in 2022.’

Rainbow Six Siege was one of the first games to adopt the hero roster formula, adding mercenaries and criminals to its array of ‘Operators’, and it now features far more female and trans characters. Each Operator, like a hero from Overwatch, has special abilities and even plays a critical role in PvE experiences. The game’s use of Operators ‘with more flair’ has been credited as one of the reasons for its turnaround after a bad launch.

Even games that have never been hero shooters have taken cues from Overwatch. The latest iterations of Call of Duty have featured distinct playable characters, while Battlefield 2042 changed the series’ anonymous classes into unique personalities. Even CS:GO, one of the most traditional first-person shooters, now has unlockable skins that let you enter battle by picking a favourite agent.

Apex Legends would combine the hero shooter with the other prominent genre of the late 2010s – the battle royale. We will deal with Apex Legends in our discussion of the battle royale genre below.

Valorant (2020)

Valorant (Courtesy Riot Games)

The hero shooter Valorant was Riot Games’ answer to titles like CS: GO, Rainbow Six Siege and Overwatch – a highly-accessible tactical shooter involving 5v5 matches where one team tries to plant a bomb known as the ‘Spike’ and the other works to stop them. The first team to win 13 rounds is the victor. In unrated games, if both teams have 12 wins each after 24 rounds, the 25th round serves as a ‘sudden death’ tie-breaker. In competitive games, if scores are tied after 24 rounds, a team has to win two consecutive rounds to secure victory.
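Purely as an illustration, the win conditions described above condense into a few lines (a sketch of the rules as stated, not Riot’s code):

```c
#include <stdbool.h>

typedef enum { ONGOING, TEAM_A_WINS, TEAM_B_WINS, SUDDEN_DEATH } MatchState;

/* a and b are the rounds won so far by each team. */
MatchState match_state(int a, int b, bool competitive)
{
    if (!competitive) {
        /* Unrated: first to 13; at 12-12 the 25th round decides it. */
        if (a == 13) return TEAM_A_WINS;
        if (b == 13) return TEAM_B_WINS;
        if (a == 12 && b == 12) return SUDDEN_DEATH;
        return ONGOING;
    }
    /* Competitive: past 12-12 a team must pull two rounds clear,
       which is what "win two consecutive rounds" amounts to. */
    if (a >= 13 && a - b >= 2) return TEAM_A_WINS;
    if (b >= 13 && b - a >= 2) return TEAM_B_WINS;
    return ONGOING;
}
```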

Valorant differs significantly from the games that inspired it. Unlike the heroes in Overwatch, none of Valorant’s agents can survive a critical shot. Each player is an agent with distinct abilities – one in-built, and two others that can be bought – plus an ‘ultimate’ ability that charges as you survive multiple rounds. Valorant is a traditional shooter in that kills are based on your aim and skill with a weapon, but the special abilities impart intel, create killing zones and can even blind opponents, giving players a better chance at scoring kills. As such, mastering the game and winning the best of 25 rounds depends on both your shooting abilities and the skilful deployment of special powers. This combination makes for gameplay that blends control and chaos, and reviewers praised the game for breaking new ground. By January 2021, the game had overtaken CS:GO in earnings; it is now a major esport and has drawn in nearly 20 million players per month over the last year.

Battle Royale takes Centre-Stage

Hero shooters such as Overwatch enjoyed massive popularity – until a new genre, the battle royale, began its own meteoric rise. One of the biggest phenomena in gaming – Fortnite – is a battle royale that has since morphed into a metaverse-like experience, and we discuss the important battle royale games below.

PUBG: Battlegrounds and Fortnite (2017)

PUBG Mobile (Courtesy Tencent Games)

Like so many genres before it, the battle royale got its start as a mod – in fact, as a mod for a mod. DayZ fan Brendan Greene (known as PlayerUnknown) initially released a mod for DayZ in which players were thrown onto a shrinking map and had to kill each other until one remained. He then used Arma 3’s resources to create a total conversion built around battle royale, with an aircraft that dropped players onto a large map to fight it out amongst each other. In 2016, Krafton (then known as Bluehole) invited Greene to create a standalone version, which became PUBG: Battlegrounds, earlier known as PlayerUnknown’s Battlegrounds.

The simple premise of PUBG (and other battle royales) may well be the reason for its enduring popularity. A number of players (up to 100 in PUBG) are dropped into an area with no weapons and must hunt for arms in order to go up against other players. Weapons and other items can be looted from killed opponents as well, and the playable area shrinks every few minutes, forcing players closer together and increasing the frequency of PvP encounters. Players can enter the dropzone individually or as teams, and must also choose the right moment to parachute from the aircraft. As players pick each other off, the last player or team standing wins.
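The shrinking play area at the heart of the genre is simple to sketch. The following is illustrative C with invented names and constants, not any game’s actual code:

```c
#include <math.h>

/* The safe zone is a circle; players outside it take damage. */
typedef struct { float x, y, radius; } Zone;

/* Each phase, the next zone is a smaller circle placed at a random
   point chosen so that it fits entirely inside the current one,
   forcing the surviving players steadily closer together. */
Zone next_zone(Zone current, float shrink_factor,
               float rand_u, float rand_v)   /* both in [0, 1) */
{
    const float TAU = 6.28318531f;   /* one full turn in radians */
    Zone next;

    next.radius = current.radius * shrink_factor;

    /* the new centre may drift at most this far from the old one */
    float max_offset = current.radius - next.radius;
    float angle = rand_u * TAU;
    float dist  = rand_v * max_offset;

    next.x = current.x + cosf(angle) * dist;
    next.y = current.y + sinf(angle) * dist;
    return next;
}
```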

At over 75 million copies sold on PC and consoles, PUBG is the best-selling game on PC and Xbox One, and the fifth-best-selling game of all time. Since 2020, the game has been drawing in a staggering 300-400 million players each month.

Fortnite Battle Royale – one of the most popular games in the world, and a cultural phenomenon in its own right – took direct inspiration from the success of PUBG. Epic Games realised it could create a battle royale version of Fortnite with ease and released one after two months of in-house development. It had much the same gameplay as PUBG, though it also featured a building mechanic that let players construct structures to fend off attacks and traverse the game map. Within a year of its release, the free-to-play game had 125 million users and was earning huge sums from microtransactions – more than $9 billion by 2019, and $5.8 billion for Epic in 2021 alone. It has been drawing in more than 250 million players a month since 2020.

Fortnite is more than just a successful game – it transcends gaming with its live concerts, crossover events, skins from other important media franchises and more. Its huge registered player base of 400 million allows it to experiment with metaverse-like experiences, as we have discussed elsewhere.

Fortnite (Courtesy Epic Games)

Battle Royale Becomes a Craze

The massive success of the battle royale mode spawned a number of imitators in both the console/PC and mobile spaces. The first IP to jump on the bandwagon was Call of Duty, which introduced a battle royale mode called ‘Blackout’ in Call of Duty: Black Ops 4 (2018). Supporting up to 100 players, Blackout took place in the largest map yet seen in a CoD title and made full use of the franchise’s fluid movement and controls to create a fast-paced battle royale that drew in even players already tired of the genre.

Not to be outdone, Electronic Arts introduced a battle royale mode called ‘Firestorm’ in Battlefield V (2018) and, like Activision, set it in the biggest map to date in a Battlefield game. Firestorm supports 64 players, who can compete in squads of up to four.

EA then followed this up with a new IP (based partly on Titanfall) that combined elements of both the battle royale and the hero shooter in one addictive package: Apex Legends (2019). Apex is a gorgeous game with an incredibly detailed map – knowing its ins and outs confers a significant tactical advantage. It introduced many other innovations, such as dropships, care packages (loot drops) and highly efficient non-verbal ping-based communication between teammates. Each hero brings a distinctive playstyle to a squad, and a steadily growing roster of heroes keeps the game fresh. In a little over two years, it had 100 million players, and it has made more than $2 billion as of mid-2022.

Activision responded with Call of Duty: Warzone (2020), in which the battle royale mode predominated. While it featured much the same elements as Blackout in Black Ops 4, it also encouraged players to amass Cash – an in-game currency – to buy killstreaks and special items, and allowed up to 150 players in the free-for-all, in teams of up to four. Within a year, the game had 100 million users, and Warzone made nearly $4 billion in the first two years following its release.

Mobile gaming, too, is characterised by highly successful battle royale IPs: Fortnite, which began life as a cross-platform game released for PC, consoles and mobile; PUBG Mobile (2018), which crossed $7 billion in lifetime revenue by 2021; and Garena Free Fire (2017), which became the most downloaded mobile game in 2019 and made more than $4 billion in the two years since its release.

Even a Tetris game available on the Nintendo Switch – Tetris 99 (2019) – has battle royale elements, as does the open-world racing sim Forza Horizon 5 (2021), which offers the ‘Eliminator’ battle royale mode.

 

New Styles Emerge, Old Ones Return

The second half of the decade was not just an unbroken series of hero shooters and battle royales – they were merely the most popular genres around. 

2018 saw the release of three innovative multiplayer titles: A Way Out, Among Us and Sea of Thieves. The first featured only a two-player cooperative campaign, with no single-player mode, and could be played in couch co-op or over the internet with a friend. The developer of A Way Out, Hazelight Studios, released another co-op game, It Takes Two, in 2021, building on its formula of telling a compelling story purely through multiplayer.

Sea of Thieves is a rollicking pirate adventure with sea battles between player crews, and it went on to become highly popular, holding its own against battle royales and hero shooters at the height of their popularity. It sold five million copies on Steam by 2021 and has drawn in 15-17 million users per month since 2020.

The multi-platform game Among Us (2018) was yet another innovative title, featuring asymmetric multiplayer – a team of crewmates and a smaller team of impostors, who look alike and work in the same area. The crewmates must complete all tasks in the allotted time or vote out all the impostors, while the impostors must sabotage crewmate activities, kill crewmates without being detected, or unleash a disaster the crewmates cannot resolve in time. It was largely ignored upon release until pandemic shutdowns resulted in a massive spike in user counts – the game amassed nearly half a billion players in 2020 and has drawn in nearly 400 million players per month since late 2020.

The free multiplayer mode of Halo Infinite (2021), replete with classic Halo multiplayer elements and the new grapple-shot mechanic, became the most popular Xbox title on Steam within less than a day of its launch. In about a month, nearly 20 million players had joined the fray. 

Games like Escape from Tarkov (beta release 2017) and The Cycle: Frontier (2022) also indicate the emergence of a new multiplayer playstyle, where players are dropped into common zones but can choose freely between co-op and PvP – there is no need to be the last one standing, as each mission has its own objective quite apart from killing other players. Like DayZ, Tarkov and The Cycle are PvPvE games – both the environment and other players pose a challenge, but you can cooperate with the latter.

The battle royale and hero shooter might be the biggest players in town, but game developers appear to have outgrown the need to copy the two genres.

 

The Pandemic and Multiplayer

Online multiplayer games – for PCs, consoles or mobile – draw in millions of players every month. It might be tempting to criticise them as addictive time-sinks, but the popularity of multiplayer during the pandemic tells a wholly different story. 

According to the BBC’s Life Project, online multiplayer became a social lifeline during the pandemic lockdowns and gamers successfully built supportive communities around the games they loved, forging strong friendships. Playing with friends online has also been studied as a healthy replacement for in-person contact when lockdowns prevent such interaction. 

Existing friendships have thrived, and people have actually grown their networks of friends during the pandemic via multiplayer, social gaming and connecting over Discord, a gamer-focussed VoIP and instant-messaging platform – gaming gives people a way to share fun, light-hearted experiences during dark times. As a result, people have reported overwhelmingly positive experiences from gaming, especially thanks to its potential for social interaction via multiplayer. In fact, dedicated MMO players reported a strong sense of social identity, higher self-esteem and decreased feelings of loneliness even before the pandemic.

VentureBeat attributes the rapid evolution of the social aspects of mobile gaming to the pandemic, because such features allowed people to stay connected while socially distanced, as they played inexpensive but interesting mobile games together.

It is no surprise, then, that the gaming industry registered record gains during the pandemic, growing 12% to $139 billion in 2020 amidst widespread lockdowns. Despite a contracting PC and console market in the post-pandemic period, the overall industry is poised to grow at a compound annual rate of 11% through 2024 to hit a record $200 billion in worth.

 

Conclusion

Online multiplayer has grown massively this decade, with games supporting millions of players across platforms and mobile multiplayer coming into its own. While the pioneers of previous decades had to innovate just to make multiplayer a viable option (remember the code optimisations for QuakeWorld and Unreal Tournament), the developers of this decade have worked hard to make the most of a matured internet infrastructure.

After several innovative entries in the multiplayer genre, and the prominence of hero shooters and battle royales such as Overwatch, PUBG and Fortnite, we see an increase in variety, with titles such as The Cycle, Among Us and the instant success MultiVersus, which crossed 20 million players within a month of launching its open beta.

The success of such games suggests that new online multiplayer games can offer a wide range of experiences and maintain huge fan bases – not participating in a zero-sum game for the players’ attention, but adding to an overall online multiplayer experience far greater than the sum of its parts. 

Gameopedia works with clients across the industry on custom requests and can provide in-depth data about online multiplayer games. Reach out to us for data that can yield novel insights about the billion-dollar online multiplayer gaming market.

Read More

The Rise of Online Multiplayer

Multiplayer games have always been a major aspect of gaming – whether through the internet, local area networks, or co-op modes for playing together in person. Such games are among the most popular titles in gaming, many of them have become massive esports, and multiplayer gaming itself has evolved steadily over the years to become increasingly complex and nuanced. In this blog we will discuss the history of multiplayer gaming from the 1980s to 2010, with a special focus on the two decades from 1990 to 2010, when multiplayer evolved rapidly and matured into a staple of our gaming experience.

 

What is Online Multiplayer and Why it’s Important

A multiplayer video game can be played by more than one person in the same game environment simultaneously – either locally, on the same computing system, or through networks shared by multiple systems. Online multiplayer refers to games played over the internet and networked multiplayer refers to games played on different systems through a local area network. In a multiplayer game, players or teams of players can compete with each other or cooperate toward a common goal. Multiplayer games involve a social element not found in single-player titles and can also offer a higher level of challenge as compared to playing against AI.

Modern multiplayer games often share certain common characteristics – various ‘modes’, which may involve competition or cooperation, a progression system with ‘unlockables’, a steady stream of new content (though this is more applicable to live-service games in general), a system by which players can communicate using voice and/or text, a dedicated server or a single terminal hosting the game, and more. 

A look at Steam stats reveals that multiplayer games are among the most played, with thousands of daily users. There were 932 million online gamers by 2020, and as of 2022, 54% of the most active gamers worldwide play multiplayer games at least once a week, for seven hours on average. As of 2022, 83% of US gamers play with each other, either in person or online, compared with 65% in 2020. This spike is attributed to Covid-period lockdowns – in fact, the multiplayer game Among Us (2018) surged in popularity during the pandemic, amassing a user base of nearly half a billion players. The global online gaming market was valued at $56 billion in 2021 and is expected to grow to $132 billion by 2030, at a compound annual growth rate of 10.2%. In the following sections, we will delve into the history and evolution of multiplayer and its rise to prominence among gamers.

 

The Origins and Early History of Online Multiplayer

The early years of online multiplayer saw the advent of multi-user dungeons, or MUDs, where multiple players engaged in text-based games by typing commands. This was followed by the arrival of the multiplayer FPS in the ’90s – legendary games such as Doom and Quake not only pioneered the FPS genre as we know it today, but also created multiplayer modes that allowed gamers to team up with or fight against each other. Over the course of the decade, as the internet became commonplace, many MMORPGs came to the fore, whose graphics brought to life the text-based experience of the early MUDs.

 

Early Years: The Multi-User Dungeon and the Internet

In 1978, two University of Essex students, Roy Trubshaw and Richard Bartle, created a multiplayer adventure that they would call ‘MUD’, or multi-user dungeon. The text-based game was a revelation that allowed you to live in a persistent fantasy world through the university’s networked computers. A persistent world is a virtual environment that changes dynamically even when the player is logged off. The world continues to exist on the network, enabling other players to continue playing, thereby presenting new activities to any player who logs back in after a certain interval. Bartle and Trubshaw’s text-based world may not have had any graphics to speak of, but its dynamic persistent world gave the fantasy environment a life of its own, independent of player actions.

The Multi-User Dungeon Interface

A MUD can be purely text-based or may use storyboards to flesh out its world. MUDs combine elements of role-playing games, interactive fiction and online chat to create a real-time virtual world where players interact with the game world and each other using text-based commands. Bartle and Trubshaw’s game is considered one of the first of its kind, and it became the first MUD playable on the internet when the university connected its internal network to the ARPANET. Multi-user dungeons with persistent worlds would influence the MMORPGs to come.

In 1985, University of Virginia classmates John Taylor and Kelton Flinn created a MUD-like game called Island of Kesmai, a multiplayer adventure that used ASCII-based graphics. Considered a direct forerunner of subsequent MMORPGs, the game was available on the early CompuServe online service and allowed up to 100 players to play simultaneously.

 

The Emergence of Multiplayer FPS

Multiplayer gaming over networks came into its own in the 1990s, with the release of major first-person shooter titles such as Pathways into Darkness and the legendary Doom in 1993. The games’ multiplayer modes led to the birth of the LAN party – people coming together and creating a local area network to play multiplayer games together.

Doom was not just a revolutionary game, it also pushed multiplayer in new directions. The game offered networked multiplayer, and a special dial-up matchmaking service known as DWANGO supported online multiplayer, allowing up to four players to either cooperate in Doom’s main campaign or fight each other in deathmatch mode. Doom was one of the first games in the world to offer online multiplayer via a matchmaking service. It was also a highly popular LAN party game, along with Pathways into Darkness and Marathon (1994), another Bungie first-person shooter. Multiplayer over LAN would remain prominent until the internet became more widespread and ushered in online multiplayer on a large scale.

Doom (Courtesy id)

The same year Doom was released, CERN made the software for the World Wide Web freely available to all – a move that would eventually result in our world of browsers, email, streaming services and internet-based multiplayer games played by millions of people simultaneously. Just three years later, id would release Quake, a major milestone in online multiplayer gaming.

Quake was not only among the first games to feature full real-time rendering of 3D environments with hardware 3D acceleration, it was also one of the first to enable online multiplayer over TCP/IP on the internet. Multiplayer was easier than ever before: all one had to do was enter an IP address and connect with a friend or a server over the internet to play cooperative or competitive multiplayer. The multiplayer mode ran on dedicated servers, but Quake also allowed players to turn their own machines into custom servers.
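To give a flavour of how direct that connection model was, here is a minimal sketch in Python of a client reaching a server by IP address. This is an illustration only – the address and handshake payload are placeholders, not Quake’s real wire protocol, though the game did exchange UDP datagrams (port 26000 by default) rather than using a TCP stream:

```python
import socket

# Placeholder address - in 1996 you would have typed in a friend's or server's IP.
SERVER = ("203.0.113.7", 26000)   # 26000 was Quake's default port

# Quake-style games exchanged UDP datagrams rather than using a TCP stream.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)

sock.sendto(b"connect", SERVER)   # illustrative handshake, not the real packet format
try:
    reply, addr = sock.recvfrom(1024)
    print(f"server {addr} replied: {reply!r}")
except socket.timeout:
    print("no reply - wrong IP, or the server is down")
```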

In December 1996, id released QuakeWorld, an update to the Quake engine that introduced a network optimisation technique called client-side prediction, enabling an online experience comparable to single-player even for players on high-latency connections. An IGN article describes the QuakeWorld update as the first successful large-scale implementation of online multiplayer. In 1997, id hosted a nationwide esports tournament in the US called Red Annihilation, featuring Quake; the winner, Dennis ‘Thresh’ Fong, took home the 1987 Ferrari 328 GTS cabriolet that belonged to John Carmack, the wizard behind Quake, Doom and many of id’s hit IPs.

QuakeWorld (Courtesy id)
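Client-side prediction is simple to sketch: the client applies its own inputs immediately instead of waiting a full network round trip, then, whenever an authoritative server update arrives, snaps to the server’s state and replays any inputs the server has not yet acknowledged. The sketch below (in Python, with invented names – it illustrates the general technique, not QuakeWorld’s actual code) shows the idea for one-dimensional movement:

```python
from dataclasses import dataclass

@dataclass
class InputCmd:
    seq: int      # client-assigned sequence number
    move: float   # movement requested this tick (1D for simplicity)

class PredictingClient:
    def __init__(self) -> None:
        self.position = 0.0                 # locally predicted position
        self.pending: list[InputCmd] = []   # inputs the server hasn't acknowledged

    def local_tick(self, seq: int, move: float) -> None:
        """Apply input immediately instead of waiting a round trip."""
        cmd = InputCmd(seq, move)
        self.pending.append(cmd)
        self.position += cmd.move           # instant, predicted response

    def on_server_update(self, acked_seq: int, server_pos: float) -> None:
        """Snap to the authoritative state, then replay unacknowledged inputs."""
        self.pending = [c for c in self.pending if c.seq > acked_seq]
        self.position = server_pos
        for cmd in self.pending:            # re-simulate what the server hasn't seen
            self.position += cmd.move
```

The net effect is that movement feels instant on the client while the server remains authoritative – any misprediction is corrected the moment the next server update arrives.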

Multiplayer Grows in Variety

The late ’90s saw the arrival of yet another hugely popular multiplayer genre – the massively multiplayer online role-playing game. In an MMORPG, players adopt the role of a character with distinct abilities, traits and weaknesses and take part in a huge persistent world filled with thousands – even millions – of concurrent players. Progression is a key aspect of the MMORPG: player actions earn points that can then be used to level up skills. Like the classic MUDs, the world of an MMORPG continues to change even when the player is offline.

MMORPGs such as Meridian 59 (1996), Ultima Online (1997) and EverQuest (1999) emerged as internet technology matured in the ’90s. Still called ‘graphical MUDs’ at the time, evoking their origins in the MUDs of the ’80s, they featured persistent worlds with real interactions between online players, and made their mark as a new genre for online multiplayer.

While Meridian 59 and Ultima Online helped establish the MMORPG, EverQuest built on the genre’s potential. The game offered a great degree of player choice, a huge world ready to explore and (for its time) high-quality graphics. The title boasted 10,000 active subscribers within 24 hours of its launch, and within the year it had 150,000 active subscribers. EverQuest continues to be played today, with a user base of around 66,000 subscribers.

EverQuest (Courtesy Sony Online Entertainment)

Around the same time, the real-time strategy (RTS) title StarCraft (1998) emerged as a major online multiplayer game. Blizzard’s StarCraft built on the popularity of the RTS genre, which already had successful franchises such as Age of Empires, Command & Conquer and WarCraft – StarCraft would introduce sophisticated multiplayer gameplay to this genre.

StarCraft’s multiplayer mode was facilitated by Blizzard’s Battle.net, a free game hosting and matchmaking service that helped StarCraft – and other Blizzard games – reach huge audiences. By the time StarCraft’s expansion, Brood War, was released, the title had become a phenomenon in South Korea, which accounted for a third of StarCraft’s global sales and spawned a professional esports scene broadcast over South Korean media. StarCraft’s masterful game balancing and potential for complex strategies greatly enhanced its multiplayer, allowing for immense variation in gameplay. The game’s success led to increased usage of Blizzard’s Battle.net service, which today hosts tens of millions of active players across Blizzard’s library of games.

The FPS genre then made a resounding comeback in 1999 with the release of Epic’s Unreal Tournament and id’s Quake III Arena, both of which made multiplayer the main gameplay mode. The first Unreal game, released in 1998, did not deliver good multiplayer gameplay, and improving the multiplayer code became a top priority, with Epic CEO Tim Sweeney even apologising for Unreal’s poor multiplayer. Epic initially intended to deliver the updated multiplayer gameplay as an expansion for Unreal, but then decided to make a standalone, multiplayer-focussed instalment called Unreal Tournament, which was hailed as one of the best multiplayer games of the year, along with Quake III Arena. Both games dispensed with plot-based single-player campaigns, featuring single-player modes that merely pitted players against bots, and even now critics cannot decide which offers the better experience – both are incredible in their own ways. The Unreal Engine, which powered Unreal Tournament, would go on to become an industry-standard game engine and spawn a host of award-winning titles.

Unreal Tournament (Courtesy Epic Games)

Another important FPS multiplayer game released in 1999 was Counter-Strike, a Half-Life mod that Valve would later purchase after the title became a staple of LAN events and a hugely successful multiplayer experience. Like StarCraft, Counter-Strike would spawn its own esports scene.

 

The Development of Multiplayer in the 2000s

By the second half of the ’90s, the internet and the World Wide Web had become commonplace. The web rose in prominence until the dotcom crash of 2000; within a year, scores of dotcom companies had folded, wiping out trillions of dollars of investment.

RuneScape (2001) emerged from the ruins of the dotcom bubble to become one of the most enduring MMORPGs of all time. Playable in a browser, RuneScape was initially supported purely by ads, but after the bubble burst it pivoted to a freemium model in which premium users got access to more content. The browser-based MMORPG drew in droves of players and continues to attract gamers today – nearly 17 million players are estimated to have played RuneScape, and in 2020 it reached its highest-ever concurrent user count, at more than 170,000. RuneScape’s success indicated that online multiplayer games could weather market crises, and the 2000s were marked by constant innovation in the field.

RuneScape (Courtesy Jagex)

Console Makers Enter the Fray

Microsoft launched Xbox Live, a dedicated service for online multiplayer, in 2002; it would become hugely popular with the release of Halo 2. While the first game in the Halo franchise (2001) shipped before Xbox Live, its sequel, Halo 2 (2004), offered multiplayer modes through the Xbox’s unified online service. While many aspects of Halo 2 were lauded by gamers and critics alike, it is now known for ‘changing online multiplayer gaming forever’ and is considered the ‘game which showed the world how console multiplayer should be done’.

Until Halo 2 launched, few gamers were using Xbox Live, although the Xbox itself offered built-in broadband support at a time when the technology was still uncommon. By the time Halo 2 was released, broadband infrastructure had grown widespread, and Halo 2 could exploit the possibilities of Xbox Live to the fullest, creating an unprecedented online multiplayer experience on console. While Halo: Combat Evolved had been the killer app for the Xbox, Halo 2 became the killer app for Xbox Live, making console-based online gaming straightforward and intuitive.

Halo 2 (Courtesy Microsoft)

Sony’s PlayStation 2 also offered online multiplayer via a separate network adapter, which was later integrated into the PS2 Slimline model. The console offered both dial-up and broadband connectivity, as well as networked multiplayer over Ethernet or a router-based network. Unlike Xbox Live, which functioned as a unified service for all Xbox games, online multiplayer for PS2 games was the responsibility of each publisher, who had to use third-party servers.

Sony would catch up with Xbox Live in 2006 with the PlayStation Network – a free, unified service for online multiplayer on the PlayStation 3 that also featured an online store from which to buy games digitally. Online gaming was free on the PS3, but requires a subscription to the PlayStation Plus service on the PS4 and PS5. The introduction of online multiplayer for consoles would lead to many games offering the feature out of the box, making online multiplayer a staple for gamers, despite the fact that both Microsoft and Sony now charge for the service.

 

Modders Create a New Multiplayer Genre: MOBA

In 2003, a WarCraft III fan released a mod called Dota – Defense of the Ancients. The mod would spawn an entirely new genre in gaming: the multiplayer online battle arena (MOBA). Dota pitted two teams of five player-controlled heroes against each other, each team fighting across the three lanes connecting the two bases, with the mission of destroying the opposing team’s base. Soon, other modders were creating their own versions of the map, adding new heroes and items.

Eventually, modder Steve Feak would develop Dota Allstars, a version that incorporated the best elements from multiple Dota iterations and became the most popular version of the mod. As the game was a modification of WarCraft III, modders could not add any original content (such as models, textures or characters) not provided in the modding resources released by Blizzard – nor could Dota’s popularity result in any monetary gains for the modders.

Dota AllStars (Courtesy Blizzard)

Steve Feak would hand over the reins of managing Dota to IceFrog, who would go on to collaborate with developers at Valve to release Dota 2 in 2013, one of the biggest esports in the world in terms of prize pools. Feak himself would be hired by Riot Games to develop the free-to-play MOBA League of Legends, one of the most popular esports in the world. A fan-made mod has thus spawned not one but two major esports, changing gaming – especially online multiplayer gaming – forever.

 

Online Multiplayer Goes To War

Around the time Dota was becoming a phenomenon, two studios released FPS titles that changed both online multiplayer and the FPS genre beyond recognition. In 2002, Electronic Arts released Battlefield 1942, and in 2003, Activision released Call of Duty, marking the start of a rivalry that has lasted nearly two decades. Both games had a World War II setting, and both fleshed out the conflict in masterful detail.

Both games excelled at multiplayer – the Encyclopaedia Britannica credits Call of Duty with breathing new life into the multiplayer FPS genre spawned by Quake and Unreal Tournament. The first Call of Duty title was a visceral experience, set in World War II, featuring immersive audio-visual effects – when the player is close to an explosion, sounds are muffled, a ringing noise simulates tinnitus and vision blurs. The game also featured excellent NPC AI – enemies are programmed to flank the player and move from cover to cover – and its multiplayer features could easily be modded by gamers themselves. Call of Duty 4: Modern Warfare (2007) would take the IP to modern settings and advance the multiplayer experience even further with the introduction of killstreaks, whereby the player gains special abilities by killing opponents without dying. Staying alive while racking up kills allowed you to call in UAV reconnaissance scans, airstrikes and even attack helicopters.

Call of Duty (Courtesy Activision)

Battlefield 1942’s contributions are just as significant – its online multiplayer allowed for epic, chaotic battles fought by dozens of players across large, detailed maps. The game established the series’ signature 64-player online gameplay, set in environments full of vehicles that players could commandeer in battle. Its 22 maps recreated real-world World War II settings such as El Alamein, Iwo Jima and Stalingrad. The game’s numerous vehicles, including tanks, planes, carriers and even submarines, added to the chaos of multiplayer and gave rise to innovative tactics. Its active modding community introduced various weapons, settings and themes to the title – the well-known Desert Combat mod added modern assault rifles, rocket launchers, helicopters and planes, while total conversions such as Galactic Conquest attempted to turn the game into a Star Wars title.

Battlefield 1942 (Courtesy Electronic Arts)

The two franchises dominated online multiplayer throughout the 2000s and have remained popular since, with new instalments arriving almost every year. The Call of Duty franchise’s popularity has grown dramatically in recent years, following the release of its first mobile title in 2019 and its free-to-play title Call of Duty: Warzone in early 2020 – the series’ user base grew from 70 million in 2018 to more than 250 million in 2020.

 

Massive Communities and the Proto-Metaverse

The success of multiplayer games from RuneScape to Call of Duty would set the stage for the behemoth that was World of Warcraft (2004). The MMORPG still boasts a huge player count and is known for its large expansion packs, complex lore and deep gameplay. It has been praised for its fluid combat, and the classic version of the game, as opposed to the retail version, is also known for being more challenging. Despite being around for nearly two decades, WoW remains very accessible, allowing new players to experience it on their own terms – the latest expansion, Shadowlands, even includes a tutorial phase, much as the first release of WoW did. The game boasts a total of over 120 million registered players.

World of Warcraft (Courtesy Blizzard)

But World of Warcraft is not just about gaming – the game implemented many features that would later be associated with today’s nascent metaverse. WoW was not the first game to come up with player-driven economies, social gathering points or the sale of virtual real estate, but it was the first massively popular game to make these features part of the gaming landscape. With its enormous community and metaverse-like features, WoW can be considered a proto-metaverse, and we have argued elsewhere that Microsoft’s purchase of Activision Blizzard qualifies as a metaverse play precisely because Activision Blizzard is used to handling an enormous global community and can help Microsoft get a head start on its metaverse initiatives.

Second Life (2003) is another title that counts as a proto-metaverse – it is a vast 3D virtual world and platform where people interact with each other and with user-generated content in real time. Players, known as ‘residents’, create a digital avatar, freely explore the world, create their own content and even trade goods and services using the in-world currency, the Linden dollar – Second Life thus boasts a thriving in-world economy. The platform sees a daily average of 200,000 users from 200 countries, and over 70 million users spread across its 27,334 regions.

 

Second Life (Courtesy Linden Lab)

Unlike most games, Second Life has no goals or objectives, and social interaction is the core of the experience. Residents have married, raised children and created communities with unique customs. The platform actively fosters such interaction by ensuring that everyone, in any part of the platform, experiences the same thing – Second Life consists of a single integrated space rather than disparate instances.

The platform even created an early version of the non-fungible token – digital assets in the world carry tags that record who made them, who owns them, what they cost and what a buyer can do with them.
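Conceptually, such a tag is just a small ownership-and-permissions record attached to each asset. The sketch below is purely illustrative – the field names are assumptions for the sake of the example, not Linden Lab’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssetTag:
    """An NFT-like provenance record attached to a virtual asset."""
    creator: str        # who made the asset
    owner: str          # who currently owns it
    price_linden: int   # what it costs, in Linden dollars
    can_copy: bool      # what a buyer can do with it...
    can_modify: bool
    can_transfer: bool

# e.g. a no-copy, modifiable, transferable chair listed at L$250
chair_tag = AssetTag("resident_a", "resident_b", 250, False, True, True)
```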

While the platform has been hailed as one of the longest-running experiments in a metaverse-like experience, creator Philip Rosedale is sceptical about present-day metaverse initiatives. Rosedale believes that a true metaverse would have to be built by its users rather than by software companies, just as Second Life residents create the digital assets that enrich their virtual world. He is also wary of the blockchain, and believes the metaverse needs a centralised economy to prevent wealth disparity. Second Life is thus not just a proto-metaverse – it has yielded insight into the possible problems with current conceptions of the metaverse.

 

Multiplayer Matures, Becomes the Norm

By the late 2000s, online multiplayer was ubiquitous – and some of the best games of the decade were focussed on delivering memorable (and addictive) multiplayer experiences, supporting millions of connected players. 

Console games such as Halo 2 and its sequels thrived on the back of their multiplayer modes; modders created an entire multiplayer genre, the MOBA, on their own; games like Battlefield and Call of Duty raised the bar for what could be achieved in a multiplayer FPS and became staples of the multiplayer scene throughout the 2000s and beyond; and MMORPGs such as World of Warcraft, along with the social platform Second Life, led to the formation of massive online communities that persist to this day.

Over the course of the 2000s, long-running sports franchises such as FIFA and Madden, along with NBA titles, also began to offer robust multiplayer on multiple platforms, spawning their own esports communities and events.

League of Legends, released toward the end of the decade, was one of the first MOBA games to launch as free-to-play and to employ a live-service model with continual updates, new heroes and game-balancing patches. The League of Legends franchise, comprising multiple games, registered a staggering 180 million active players in October 2021. The desktop version is one of the highest-grossing free-to-play games as well, and its mobile version is one of the most popular mobile MOBAs as of 2022. Many multiplayer-focussed titles of the 2010s would take a cue from LoL and go free-to-play, deriving revenue from cosmetic upgrades and other microtransactions.

 

Conclusion

Online multiplayer began as text-based adventures and matured into massively multiplayer games with high-fidelity graphics that support millions of active players. This evolution was spurred in part by developments in internet technology, but was also the result of game developers pushing the limits of what could be achieved with the network infrastructures they had access to. The early MUDs depended on university networks and then the ARPANET; Doom relied on a dial-up matchmaking service before the internet became widespread; and QuakeWorld, Unreal Tournament and Quake III Arena used the internet to maximal effect, with code bases optimised for online multiplayer.

The efforts of these pioneers led to widespread online multiplayer in the 2000s, when millions of gamers could participate in MMORPGs, MOBAs, multiplayer FPS games and more. The development of online multiplayer, especially from the ’90s through the 2000s, was characterised by ceaseless innovation and a constant pushing of the limits of what a multiplayer experience over the internet could achieve.

In a subsequent blog, we will discuss current trends in online multiplayer – the shift toward mobile multiplayer, the rise to prominence of the hero shooter and battle royale genres, and how multiplayer rose in prominence during the pandemic period.

Gameopedia works with clients across the industry on custom requests and can provide in-depth data about online multiplayer games. Reach out to us for data that can yield novel insights about the billion-dollar online multiplayer gaming market.
