In this third part of our deeper look at 3D game rendering, we’ll be focusing on what can happen to the 3D world after the vertex processing is done and the scene has been rasterized. Texturing is one of the most important stages in rendering, even though all that’s happening is that the colors of a two-dimensional grid of colored blocks get calculated and changed.

The majority of the visual effects seen in games today come down to the clever use of textures — without them, games would be dull and lifeless. So let’s dive in and see how this all works!

As always, if you’re not quite ready for a deep dive into texturing, don’t panic — you can get started with our 3D Game Rendering 101. But once you’re past the basics, do read on for our next look at the world of 3D graphics.

Part 0: 3D Game Rendering 101
The Making of Graphics Explained
Part 1: How 3D Game Rendering Works: Vertex Processing
A Deeper Dive Into the World of 3D Graphics
Part 2: How 3D Game Rendering Works: Rasterization and Ray Tracing
From 3D to Flat 2D, POV and Lighting
Part 3: How 3D Game Rendering Works: Texturing
Bilinear, Trilinear, Anisotropic Filtering, Bump Mapping & More

Let’s start simple

Pick any top-selling 3D game from the past 12 months and they will all share one thing in common: the use of texture maps (or just textures). This is such a common term that most people will conjure up the same image when thinking about textures: a simple, flat square or rectangle that contains a picture of a surface (grass, stone, metal, clothing, a face, etc.).

But when used in multiple layers and woven together using complex arithmetic, these basic pictures can produce stunningly realistic images in a 3D scene. To see how this is possible, let’s start by skipping them altogether and seeing what objects in a 3D world can look like without them.

As we have seen in earlier articles, the 3D world is made up of vertices — simple shapes that get moved and then colored in. These are then used to make primitives, which in turn are squashed into a 2D grid of pixels. Since we’re not going to use textures, we need to color in those pixels some other way.

One method that can be used, called flat shading, involves taking the color of the first vertex of the primitive, and then using that color for all of the pixels covered by the shape in the raster. It looks something like this:


This is obviously not a realistic teapot, not least because the surface color is all wrong. The colors jump from one level to another, with no smooth transitions. One solution to this problem could be to use something called Gouraud shading.

This is a process which takes the colors of the vertices and then calculates how the color changes across the surface of the triangle. The math used is known as linear interpolation, which sounds fancy but in reality means that if one side of the primitive has the color 0.2 red, for example, and the other side is 0.8 red, then the middle of the shape has a color midway between 0.2 and 0.8 (i.e. 0.5).
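In code, the idea really is that small. Here’s a minimal sketch of the math (just the interpolation itself, not how any GPU actually wires it up):

```python
def lerp(a, b, t):
    """Linearly interpolate between a and b; t runs from 0.0 to 1.0."""
    return a + (b - a) * t

# Red channel is 0.2 on one side of the primitive and 0.8 on the other:
print(lerp(0.2, 0.8, 0.5))   # midway across the shape -> 0.5
print(lerp(0.2, 0.8, 0.25))  # a quarter of the way across -> 0.35
```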

It’s relatively simple to do, and that simplicity is its main benefit, as simple means speed. Many early 3D games used this technique, because the hardware performing the calculations was limited in what it could do.

Barret and Cloud in their full Gouraud-shaded glory (Final Fantasy VII – 1997)

But even this has problems, because if a light is pointing right at the middle of a triangle, its corners (the vertices) might not capture this properly. This means that highlights caused by the light could be missed entirely.

While flat and Gouraud shading have their place in the rendering armory, the above examples are clear candidates for the use of textures to improve them. And to get a good understanding of what happens when a texture is applied to a surface, we’ll pop back in time... all the way back to 1996.

A quick bit of gaming and GPU history

Quake was released some 23 years ago, a landmark game by id Software. While it wasn’t the first game to use 3D polygons and textures to render the environment, it was definitely one of the first to use them all so effectively.

Something else it did was to showcase what could be done with OpenGL (the graphics API was still in its first revision at the time), and it also went a very long way to helping the sales of the first crop of graphics cards like the Rendition Verite and the 3Dfx Voodoo.

Vertex lighting and basic textures. Pure 1996, pure Quake.

Compared to today’s standards, the Voodoo was exceptionally basic: no 2D graphics support, no vertex processing, and just the very basics of pixel processing. It was a beauty nonetheless:

Image: VGA Museum

It had a whole chip (the TMU) for getting a pixel from a texture and another chip (the FBI) to then blend it with a pixel from the raster. It could do a couple of additional processes, such as applying fog or transparency effects, but that was pretty much it.

If we take a look at an overview of the architecture behind the design and operation of the graphics card, we can see how these processes work.

3Dfx Technical Reference doc. Source: Falconfly Central

The FBI chip takes two color values and blends them together; one of them can be a value from a texture. The blending process is mathematically quite simple but varies a little depending on what exactly is being blended, and what API is being used to carry out the instructions.

If we look at what Direct3D provides in terms of blending functions and blending operations, we can see that each pixel is first multiplied by a number between 0.0 and 1.0. This determines how much of the pixel’s color will influence the final appearance. Then the two adjusted pixel colors are either added, subtracted, or multiplied; in some functions, the operation is a logic statement where something like the brightest pixel is always selected.

Image: Taking Initiative tech blog

The above image is an example of how this works in practice; note that for the left-hand pixel, the factor used is the pixel’s alpha value. This number indicates how transparent the pixel is.
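As a rough sketch of this kind of blend — classic alpha blending, with made-up color values, not the Voodoo’s actual circuitry — the math looks like this:

```python
def blend(src, dst, src_alpha):
    """Blend a source color over a destination color.

    src, dst: (r, g, b) tuples with channels in the 0.0-1.0 range.
    src_alpha: how opaque the source pixel is (0.0 = invisible, 1.0 = solid).
    """
    return tuple(s * src_alpha + d * (1.0 - src_alpha) for s, d in zip(src, dst))

# A half-transparent red texel blended over a blue raster pixel:
print(blend((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5))  # -> (0.5, 0.0, 0.5)
```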

The rest of the stages involve applying a fog value (taken from a table of numbers created by the programmer, then doing the same blending math), and carrying out some visibility and transparency checks and adjustments, before finally writing the color of the pixel to the memory on the graphics card.

Why the history lesson? Well, despite the relative simplicity of the design (especially compared to modern behemoths), the process describes the fundamental basics of texturing: get some color values and blend them, so that models and environments look how they’re supposed to in a given situation.

Today’s games still do all of this; the only difference is the number of textures used and the complexity of the blending calculations. Together, they simulate the visual effects seen in movies, or how light interacts with different materials and surfaces.

The basics of texturing

To us, a texture is a flat, 2D picture that gets applied to the polygons that make up the 3D structures in the visible frame. To a computer, though, it’s nothing more than a small block of memory, in the form of a 2D array. Each entry in the array represents a color value for one of the pixels in the texture image (better known as texels – texture pixels).

Every vertex in a polygon has a set of two coordinates (usually labelled as u,v) associated with it that tells the computer which pixel in the texture belongs to it. The vertices themselves have a set of three coordinates (x,y,z), and the process of linking the texels to the vertices is called texture mapping.
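A minimal sketch of how that data might be laid out — the Vertex structure and its field names here are ours for illustration, not from any particular engine or API:

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    x: float; y: float; z: float   # position in 3D space
    u: float; v: float             # which point of the texture maps here

# One triangle with the texture's corners pinned to its vertices:
triangle = [
    Vertex(0.0, 0.0, 0.0, u=0.0, v=0.0),
    Vertex(1.0, 0.0, 0.0, u=1.0, v=0.0),
    Vertex(0.0, 1.0, 0.0, u=0.0, v=1.0),
]
```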

To see this in action, let’s turn to a tool we’ve used a few times in this series of articles: the Real Time Rendering WebGL tool. For now, we’ll also drop the z coordinate from the vertices and keep everything on a flat plane.


From left to right, we have the texture’s u,v coordinates mapped directly onto the corner vertices’ x,y coordinates. Then the top vertices have had their y coordinates increased, but as the texture is still mapped directly to them, the texture gets stretched upwards. In the far right image, it’s the texture that’s altered this time: the u values have been raised, but this results in the texture becoming squashed and then repeated.

This is because although the texture is now effectively taller, thanks to the higher u value, it still has to fit into the primitive — essentially the texture has been partially repeated. This is one way of doing something that’s seen in lots of 3D games: texture repeating. Common examples of this can be found in scenes with rocky or grassy landscapes, or brick walls.
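Repeating comes down to what the sampler does with coordinates that fall outside the 0.0 to 1.0 range. A sketch of the two most common "wrap modes" (the exact names vary between APIs):

```python
def wrap_repeat(t):
    """u,v outside 0-1 wrap around, tiling the texture endlessly."""
    return t % 1.0

def wrap_clamp(t):
    """u,v outside 0-1 stick to the texture's edge instead."""
    return min(max(t, 0.0), 1.0)

print(wrap_repeat(2.5))  # -> 0.5 (the texture repeats)
print(wrap_clamp(2.5))   # -> 1.0 (the edge texel gets smeared)
```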

Now let’s adjust the scene so that there are more primitives, and we’ll also bring depth back into play. What we have below is a classic landscape view, but with the crate texture copied, as well as repeated, across the primitives.


Now that crate texture, in its original gif format, is 66 kiB in size and has a resolution of 256 x 256 pixels. The original resolution of the portion of the frame that the crate textures cover is 1900 x 680, so in terms of just pixel ‘area’ that region should only be able to display 20 crate textures.

We’re clearly seeing far more than 20, so it must mean that a lot of the crate textures in the background are much smaller than 256 x 256 pixels. Indeed they are, and they’ve undergone a process called texture minification (yes, that is a word!). Now let’s try it again, but this time zoomed right into one of the crates.


Don’t forget that the texture is only 256 x 256 pixels in size, but here we can see one texture being more than half the width of the 1900 pixel wide image. This texture has gone through something called texture magnification.

These two texture processes happen in 3D games all the time, because as the camera moves about the scene or models move closer and further away, all of the textures applied to the primitives need to be scaled along with the polygons. Mathematically, this isn’t a big deal; in fact, it’s so simple that even the most basic of integrated graphics chips blitz through such work. However, texture minification and magnification present fresh problems that have to be resolved somehow.

Enter the mini-me of textures

The first issue to be fixed is for textures in the distance. If we look back at that first crate landscape image, the ones right at the horizon are effectively only a few pixels in size. So trying to squash a 256 x 256 pixel image into such a small space is pointless, for two reasons.

One, a smaller texture will take up less memory space in a graphics card, which is handy for trying to fit into a small amount of cache. That means it is less likely to be removed from the cache, and so repeated use of that texture will gain the full performance benefit of the data being in nearby memory. The second reason we’ll come to in a moment, as it’s tied to the same problem for textures zoomed in.

A common solution to the problem of big textures being squashed into tiny primitives involves the use of mipmaps. These are scaled-down versions of the original texture; they can be generated by the game engine itself (by using the relevant API command to make them) or pre-made by the game designers. Each level of mipmap texture has half the linear dimensions of the previous one.

So for the crate texture, it goes something like this: 256 x 256 → 128 x 128 → 64 x 64 → 32 x 32 → 16 x 16 → 8 x 8 → 4 x 4 → 2 x 2 → 1 x 1.
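Here’s a sketch of how an engine might build that chain itself, averaging 2 x 2 blocks of texels at each step. In practice you’d just call the API’s generate-mipmaps command; this grayscale version is purely illustrative:

```python
def next_mip_level(texels):
    """Halve a square grayscale texture by averaging each 2x2 texel block."""
    n = len(texels) // 2
    return [[(texels[2*y][2*x] + texels[2*y][2*x+1] +
              texels[2*y+1][2*x] + texels[2*y+1][2*x+1]) / 4
             for x in range(n)] for y in range(n)]

# Build the whole chain, from 256x256 down to 1x1:
mip = [[128.0] * 256 for _ in range(256)]  # stand-in for the crate texture
chain = [mip]
while len(chain[-1]) > 1:
    chain.append(next_mip_level(chain[-1]))
print([len(level) for level in chain])  # [256, 128, 64, 32, 16, 8, 4, 2, 1]
```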


The mipmaps are all packed together, so that the texture is still the same filename but is now larger. The texture is packed in such a way that the u,v coordinates not only determine which texel gets applied to a pixel in the frame, but also from which mipmap. The programmers then code the renderer to determine which mipmap to use based on the depth value of the frame pixel, i.e. if it is very high, then the pixel is in the far distance, so a tiny mipmap can be used.

Sharp-eyed readers might have spotted a downside to mipmaps, though, and it comes at the cost of the textures being larger. The original crate texture is 256 x 256 pixels in size, but as you can see in the above image, the texture with mipmaps is now 384 x 256. Yes, there’s lots of empty space, but no matter how you pack in the smaller textures, the overall increase to at least one of the texture’s dimensions is 50%.

But that’s only true for pre-made mipmaps; if the game engine is programmed to generate them as required, then the increase is never more than 33% over the original texture size. So for a relatively small increase in memory for the texture mipmaps, you’re gaining performance benefits and visual improvements.
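That 33% figure comes from each mipmap level holding a quarter as many texels as the one above it, so the extra levels form a geometric series that sums to a third of the original. A quick check:

```python
# Texel counts for the 256x256 crate texture and all of its mip levels:
sizes = [(2**i) ** 2 for i in range(8, -1, -1)]   # 65536, 16384, ..., 1
extra = sum(sizes[1:]) / sizes[0]
print(f"{extra:.4f}")   # ~0.3333 -> the mip chain adds about a third more texels
```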

Below is an off/on comparison of texture mipmaps:


On the left-hand side of the image, the crate textures are being used ‘as is’, resulting in a grainy appearance and so-called moiré patterns in the distance. On the right-hand side, the use of mipmaps results in a much smoother transition across the landscape, where the crate texture blurs into a consistent color at the horizon.

The thing is, though, who wants blurry textures spoiling the background of their favorite game?

Bilinear, trilinear, anisotropic – it’s all Greek to me

The process of selecting a pixel from a texture, to be applied to a pixel in a frame, is called texture sampling, and in a perfect world, there would be a texture that exactly fits the primitive it’s for — regardless of its size, position, direction, and so on. In other words, texture sampling would be nothing more than a straight 1-to-1 texel-to-pixel mapping process.

Since that’s not the case, texture sampling has to account for a number of factors:

  • Has the texture been magnified or minified?
  • Is the texture an original or a mipmap?
  • What angle is the texture being displayed at?

Let’s analyze these one at a time. The first one is obvious enough: if the texture has been magnified, then there will be more texels covering the pixel in the primitive than required; with minification it will be the other way around: each texel now has to cover more than one pixel. That’s a bit of a problem.

The second one isn’t, though, as mipmaps are used to get around the texture sampling issue for primitives in the distance, so that just leaves textures at an angle. And yes, that’s a problem too. Why? Because all textures are images generated for a view ‘face on’, or to be all math-like: the normal of a texture’s surface is the same as the normal of the surface that the texture is currently displayed on.

So having too few or too many texels, and having texels at an angle, requires an additional process called texture filtering. If you don’t use this process, then this is what you get:


Here we’ve replaced the crate texture with a letter R texture, to show more clearly how much of a mess it can become without texture filtering!

Graphics APIs such as Direct3D, OpenGL, and Vulkan all offer the same range of filtering types but use different names for them. Essentially, though, they all go like this:

  • Nearest point sampling
  • Linear texture filtering
  • Anisotropic texture filtering

To all intents and purposes, nearest point sampling isn’t filtering – all that happens is that the nearest texel to the pixel requiring the texture is sampled (i.e. copied from memory) and then blended with the pixel’s original color.

Here comes linear filtering to the rescue. The required u,v coordinates for the texel are sent off to the hardware for sampling, but instead of taking the very nearest texel to those coordinates, the sampler takes four texels. These are directly above, below, left, and right of the one selected by nearest point sampling.

These 4 texels are then blended together using a weighted formula. In Vulkan, for example, the formula is:

Tf = (1 − α)(1 − β)·T1 + α(1 − β)·T2 + (1 − α)β·T3 + αβ·T4

The T refers to texel color, where f is the filtered one and 1 through 4 are the four sampled texels. The values for alpha and beta come from how far away the point defined by the u,v coordinates is from the centers of the sampled texels.
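Expressed as code — a single color channel only, purely to illustrate the weights; the hardware does this in fixed-function silicon:

```python
def bilinear_filter(t1, t2, t3, t4, alpha, beta):
    """Blend 4 neighboring texel colors using the weighted formula above.

    t1..t4: texel colors, with t1-t2 on the top row and t3-t4 below.
    alpha, beta: the sample point's fractional position (0.0-1.0)
    between the texel centers, horizontally and vertically.
    """
    return ((1 - alpha) * (1 - beta) * t1 + alpha * (1 - beta) * t2 +
            (1 - alpha) * beta * t3 + alpha * beta * t4)

# Sampling exactly halfway between four texels averages their columns:
print(bilinear_filter(0.0, 1.0, 0.0, 1.0, alpha=0.5, beta=0.5))  # -> 0.5
```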

Fortunately for everyone involved in 3D games, whether playing them or making them, this happens automatically in the graphics processing chip. In fact, this is what the TMU chip in the 3Dfx Voodoo did: it sampled 4 texels and then blended them together. Direct3D oddly calls this bilinear filtering, but ever since the days of Quake and the Voodoo’s TMU chip, graphics cards have been able to do bilinear filtering in just one clock cycle (provided the texture is sitting handily in nearby memory, of course).

Linear filtering can be used alongside mipmaps, and if you want to get really fancy with your filtering, you can take 4 texels from one texture, then another 4 from the next mipmap level, and then blend the whole lot together. And Direct3D’s name for this? Trilinear filtering. What’s tri about this process? Your guess is as good as ours...
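A sketch of the idea: two bilinear results, one per mipmap level, blended by how far the pixel sits between those levels (the variable names here are ours):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def trilinear_filter(bilinear_a, bilinear_b, mip_frac):
    """Blend the bilinear results from two adjacent mip levels.

    bilinear_a, bilinear_b: colors already bilinearly filtered from the
    nearer and further mip levels; mip_frac: how far the pixel's ideal
    texel density sits between the two levels (0.0-1.0).
    """
    return lerp(bilinear_a, bilinear_b, mip_frac)

# Halfway between mip levels whose filtered colors are 0.4 and 0.6:
print(trilinear_filter(0.4, 0.6, 0.5))  # -> 0.5
```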

The last filtering method to mention is called anisotropic. This is actually an adjustment to the process done in bilinear or trilinear filtering. It initially involves a calculation of the degree of anisotropy of the primitive’s surface (and it’s surprisingly complex, too) — this value increases as the primitive’s aspect ratio alters due to its orientation:


The above image shows the same square primitive, with equal length sides; but as it rotates away from our perspective, the square appears to become a rectangle, and its width increases over its height. So the primitive on the right has a larger degree of anisotropy than those to its left (and in the case of the square, the degree is exactly zero).

Many of today’s 3D games allow you to enable anisotropic filtering and then adjust the level of it (1x through to 16x), but what does that actually change? The setting controls the maximum number of additional texel samples that are taken per original linear sample. For example, let’s say the game is set to use 8x anisotropic bilinear filtering. This means that instead of fetching just 4 texel values, it will fetch 32 values.
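In other words, the texel budget scales directly with the anisotropy setting — a trivial sketch:

```python
def texels_fetched(base_taps, anisotropy):
    """base_taps: 4 for bilinear, 8 for trilinear; anisotropy: 1 to 16."""
    return base_taps * anisotropy

print(texels_fetched(4, 8))    # 8x anisotropic bilinear   -> 32 texels
print(texels_fetched(8, 16))   # 16x anisotropic trilinear -> 128 texels
```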

The difference the use of anisotropic filtering makes is clear to see:


Just scroll back up a little and compare nearest point sampling to maxed-out 16x anisotropic trilinear filtering. So smooth, it’s almost delicious!

But there must be a price to pay for all this lovely buttery texture deliciousness, and it’s surely performance: all maxed out, anisotropic trilinear filtering will be fetching 128 samples from a texture, for each pixel being rendered. For even the very best of the latest GPUs, that just can’t be done in a single clock cycle.

If we take something like AMD’s Radeon RX 5700 XT, each of the texturing units inside the processor can fire off 32 texel addresses in one clock cycle, then load 32 texel values from memory (each 32 bits in size) in another clock cycle, and then blend 4 of them together in one more tick. So, for 128 texel samples blended into one, that requires at least 16 clock cycles.

AMD’s 7nm RDNA Radeon RX 5700 GPU

Now the base clock rate of a 5700 XT is 1605 MHz, so sixteen cycles takes a mere 10 nanoseconds. Doing this for every pixel in a 4K frame, using just one texture unit, would still only take a little over 80 milliseconds. Okay, so maybe performance isn’t that much of an issue!
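Here’s that back-of-the-envelope math, for one texture unit working alone and ignoring caches and memory stalls entirely:

```python
CLOCK_HZ = 1.605e9        # RX 5700 XT base clock
CYCLES_PER_PIXEL = 16     # 128 texel samples, blended 4 at a time
PIXELS_4K = 3840 * 2160

seconds = PIXELS_4K * CYCLES_PER_PIXEL / CLOCK_HZ
print(f"{seconds * 1000:.0f} ms")   # -> roughly 83 ms for one texture unit
```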

Even back in 1996, the likes of the 3Dfx Voodoo were pretty nifty when it came to handling textures. It could max out at 1 bilinear filtered texel per clock cycle, and with the TMU chip rocking along at 50 MHz, that meant 50 million texels could be churned out every second. A game running at 800 x 600 and 30 fps would only need 14 million bilinear filtered texels per second.

However, this all assumes that the textures are in nearby memory and that only one texel is mapped to each pixel. Twenty years ago, the idea of needing to apply multiple textures to a primitive was almost completely alien, but it’s commonplace now. Let’s have a look at why this change came about.

Lighting the way to impressive images

To help understand how texturing became so important, take a look at this scene from Quake:


It’s a dark image, as that was the nature of the game, but you can see that the darkness isn’t the same everywhere – patches of the walls and floor are brighter than others, to give a sense of the overall lighting in that area.

The primitives making up the sides and floor all have the same texture applied to them, but there is a second one, called a light map, that is blended with the texel values before they’re mapped to the frame pixels. In the days of Quake, light maps were pre-calculated and made by the game engine, and used to generate static and dynamic light levels.

The advantage of using them was that complex lighting calculations were done to the textures, rather than the vertices, notably improving the appearance of a scene at very little performance cost. It’s obviously not perfect: as you can see on the floor, the boundary between the lit areas and those in shadow is very stark.

In many ways, a light map is just another texture (remember that they’re all nothing more than 2D data arrays), so what we’re looking at here is an early use of what became known as multitexturing. As the name clearly suggests, it’s a process where two or more textures are applied to a primitive. The use of light maps in Quake was a solution to overcome the limitations of Gouraud shading, but as the capabilities of graphics cards grew, so did the applications of multitexturing.
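The blend itself is usually nothing fancier than a per-channel multiply, so lit areas keep their texture color and shadowed ones darken. A sketch of the idea (not Quake’s actual code):

```python
def apply_light_map(texel, light):
    """Darken or tint a texel by its light map entry (both 0.0-1.0 per channel)."""
    return tuple(t * l for t, l in zip(texel, light))

# A mid-gray brick texel sitting in a dimly lit spot:
print(apply_light_map((0.6, 0.5, 0.4), (0.3, 0.3, 0.35)))
```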

The 3Dfx Voodoo, like other cards of its era, was limited by how much it could do in one rendering pass. This is essentially a complete rendering sequence: from processing the vertices, to rasterizing the frame, and then modifying the pixels into a final frame buffer. Twenty years ago, games performed single pass rendering pretty much all of the time.

Nvidia’s GeForce 2 Ultra, circa late 2000. Image: Wikimedia

This is because processing the vertices twice, just because you wanted to apply some more textures, was too costly in terms of performance. We had to wait a few years after the Voodoo, until the ATI Radeon and Nvidia GeForce 2 graphics cards were available, before we could do multitexturing in a single rendering pass.

These GPUs had more than one texture unit per pixel processing section (aka, a pipeline), so fetching a bilinear filtered texel from two separate textures was a cinch. That made light mapping even more popular, allowing games to make the maps fully dynamic, altering the light values based on changes in the game’s environment.

But there is so much more that can be done with multiple textures, so let’s have a look.

It’s normal to bump up the height

In this series of articles on 3D rendering, we haven’t addressed how the role of the GPU really fits into the whole shebang (we will, just not yet!). But if you go back to Part 1, and look at all of the complex work involved in vertex processing, you may think that this is the hardest part of the whole sequence for the graphics processor to handle.

For a long time it was, and game programmers did everything they could to reduce this workload. That meant reaching into the bag of visual tricks and pulling off as many shortcuts and cheats as possible, to give the same visual appearance of using lots of vertices everywhere, without actually using that many to begin with.

And most of these tricks involved using textures called height maps and normal maps. The two are related in that the latter can be created from the former, but for now, let’s just take a look at a technique called bump mapping.

Images created using a rendering demo by Emil Persson. Left / right: bump mapping off / on

Bump mapping involves using a 2D array called a height map, which looks like an odd version of the original texture. For example, in the above image, there is a realistic brick texture applied to two flat surfaces. The texture and its height map look like this:


The values of the height map represent the heights of the brick’s surface, from which its normals can be calculated (we covered what a normal is in Part 1 of this series of articles). When the rendering sequence reaches the point of applying the brick texture to the surface, a series of calculations take place to adjust the color of the brick texture based on the normal.

The result is that the bricks themselves look more 3D, even though they are still perfectly flat. If you look carefully, particularly at the edges of the bricks, you can see the limitations of the technique: the texture looks slightly warped. But for a quick trick of adding more detail to a surface, bump mapping is very popular.

A normal map is like a height map, except the colors of that texture are the normals themselves. In other words, a calculation to convert the height map into normals isn’t required. You might wonder just how colors can be used to represent an arrow pointing in space? The answer is simple: each texel has a given set of r,g,b values (red, green, blue), and those numbers directly represent the x,y,z values for the normal vector.


In the above example, the left diagram shows how the direction of the normals changes across a bumpy surface. To represent these same normals in a flat texture (middle diagram), we assign a color to them. In our case, we’ve used r,g,b values of (0,255,0) for straight up, and then increasing amounts of red for left, and blue for right.

Note that this color isn’t blended with the original pixel – it simply tells the processor which direction the normal is facing, so it can properly calculate the angles between the camera, the lights, and the surface to be textured.
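A sketch of the usual decode, stretching each 0 to 255 channel onto a −1.0 to +1.0 vector component. Note that the channel-to-axis assignment here follows the common ‘blue is up’ convention seen in most games, rather than the green-is-up scheme in our diagram above:

```python
def decode_normal(r, g, b):
    """Turn a normal map texel (0-255 per channel) into a direction vector."""
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    z = b / 255.0 * 2.0 - 1.0
    return (x, y, z)

# The familiar (128, 128, 255) 'flat blue' texel decodes to roughly straight up:
print(decode_normal(128, 128, 255))  # -> (~0.0, ~0.0, 1.0)
```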

The benefits of bump and normal mapping really shine when dynamic lighting is used in the scene, and the rendering process calculates the effects of the light changes per pixel, rather than for each vertex. Modern games now use a stack of textures to improve the quality of the magic trick being performed.

Image: Ryan Benno via Twitter

This realistic looking wall is amazingly still just a flat surface — the details on the bricks and mortar aren’t done using millions of polygons. Instead, just 5 textures and a lot of clever math get the job done.

The height map was used to generate the way that the bricks cast shadows on themselves, and the normal map to simulate all of the small changes in the surface. The roughness texture was used to change how the light reflects off the different elements of the wall (e.g. a smooth brick reflects more consistently than rough mortar does).

The final map, labelled AO in the above image, forms part of a process called ambient occlusion: it’s a technique that we’ll look at in more depth in a later article, but for now, it just helps to improve the realism of the shadows.

Texture mapping is crucial

Texturing is absolutely crucial to game design. Take Warhorse Studios’ 2018 release Kingdom Come: Deliverance — a first-person RPG set in 15th century Bohemia, an old country in Central Europe. The designers were keen on creating as realistic a world as possible for the given period. And the best way to draw the player into a life hundreds of years ago was to have the right look for every landscape view, building, set of clothes, hair, everyday item, and so on.

Each unique texture in this single image from the game has been handcrafted by artists, and their use by the rendering engine is managed by the programmers. Some are small, with basic details, and receive little in the way of filtering or processing with other textures (e.g. the chicken wings).


Others are high resolution, showing lots of fine detail; they’ve been anisotropically filtered and blended with normal maps and other textures — just look at the face of the man in the foreground. The different texturing requirements of each item in the scene have all been accounted for by the programmers.

All of this happens in so many games now, because players expect greater levels of detail and realism. Textures will become larger, and more of them will be used on a surface, but the process of sampling texels and applying them to pixels will still fundamentally be the same as it was in the days of Quake. The best technology never dies, no matter how old it is!