
Digital Foundry claims PS5 doesn't exhibit any evidence of VRS (Variable Rate Shading) in the PS5 showcase.

thelastword

Banned
So I should wait for the multiplatform games to release before I comment, but you won't wait for anything before claiming that the divide is negligible. Got you.
The truth is, Sony could have easily mentioned the performance of the games they demoed yesterday and didn't, not for a single one. For all the flak that the MS third-party showing got, everyone knew at the end what performance features to expect of each of the games demoed (a table was released with all the info so we don't have to speculate). Digital Foundry won't be as secretive when the games release, and this thread is a good preview of what will happen once the secrecy is broken.
No, because there is precedence, at least for Dirt. It's running pretty high frames on all GPUs; it's a game made for high framerates. Remember when Vega GPUs used to dominate even the 1080 Ti in prior Dirt games? It's also cross-gen; Dirt will run at high framerates on pretty much all mid-gen GPUs and at a guaranteed 60fps on lower-end cards.

Also, you have The Ascent, which is a top-down shooter; these games will run at very high framerates on anything. According to Steam, the PC requirements are an i3 with a GTX 660. As for Scorn, that game has been in development for a while; it's not so much a ground-up next-gen game, and there are gameplay videos if you care to watch. I can easily see a 120fps mode, even at lowered resolutions. However, it's not really a quick-paced game like Quake, where 120fps would be more noticeable. Yet Scorn's official requirements (not even the minimum) are an i5 with a GTX 970; that's really not that high. The next-gen consoles run circles around such a setup. A 120fps mode is not out of the ordinary for any of these titles. The only one we have not seen gameplay of is The Medium, so that is a wait-and-see.
 

martino

Member
The willing confusion between mesh shaders (or their omission in this case), VRS, and the geometry engine from some people is funny.
Also, taking the Gears example as the most that's achievable (it's not even using Tier 2) is disingenuous.

Edit: it shows fan behavior, though... games not showing it at this point doesn't mean it's not there. The feature is downplayed out of fear.
 

rnlval

Member
VRS will be used in unoptimized engines for PS5 that still use ancient LOD systems. Plus, it's important for PSVR2, as it'll most likely be 4K@120Hz for each eye (240Hz total).
This is BS. VRS applies to pixel shading workloads; mesh shaders and primitive shaders deal with geometry workloads.
 

JLB

Banned
Right, right, we want full shading in shadows with MAX shading precision to get the color... hmmm, black. Let's be less efficient, which means we are more efficient.


VRS sits closer to the end of the rendering pipeline, at the screen-space level (plus some per-primitive control in Tier 2).
For example, regions of the screen behind HUD elements, textures in shadow, or regions behind motion blur can get reduced/faster versions of shaders executed against them.

In other words, the developer has greater control over where they want the GPU to spend its pixel shading power.
It's a win.

Please stop making too much sense!!
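
For anyone curious what that control actually looks like in practice, here's a minimal sketch against D3D12's VRS API at Tier 1 (per-draw granularity). The RSSetShadingRate calls are the real API; the draw helpers and setup are hypothetical placeholders:

```cpp
#include <d3d12.h>

// Minimal sketch: steering pixel-shading cost per draw with D3D12 VRS
// (Tier 1). Device/PSO/draw setup omitted; cmdList5 is assumed to be an
// ID3D12GraphicsCommandList5 obtained via QueryInterface.
void DrawSceneWithVrs(ID3D12GraphicsCommandList5* cmdList5)
{
    // Full rate: one pixel-shader invocation per pixel for geometry the
    // player is actually focused on.
    cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    // DrawHeroCharacters(cmdList5);   // hypothetical draw call

    // Coarse rate: one invocation per 2x2 block (roughly 1/4 the pixel
    // shading work) for regions that tolerate it: sky, surfaces under
    // motion blur, areas sitting behind HUD elements, and so on.
    cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // DrawSkyAndBlurredBackground(cmdList5);   // hypothetical draw call
}
```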
 

tryDEATH

Member
At 15% it would be great, yeah. But the reality is, if you don't want your game to be visibly blurry, you're getting around 5% more performance. Not terrible by any means, but also not groundbreaking.

Gears Tactics had an old version of VRS that was able to get around 20% performance gains, with some image quality issues, but the newer version of VRS and its full slew of features fixed that issue and upped the performance gains even more. VRS is going to offer, on average, about 15% (give or take) on the majority of games that use it.
 

Dontero

Banned
They both use a custom RDNA2-based GPU that is only similar, not equal.

Bullcrap. They are as "custom" as the PS4 GPU was "custom". They are basically off-the-shelf parts with minimal changes, mostly to I/O. Stuff like VRS is at the core of the design, not something you can bolt on.

Moreover, AMD wouldn't develop two architectures for two consoles. They use the same arch, and the arch is what decides feature capability.

That's not a correct assumption. MS has already talked about features they have that others don't.

MS never said anything like that.
 

Shin

Banned
That's not a correct assumption. MS has already talked about features they have that others don't.
That's the takeaway here: instead of taking it to the platform holder and holding them responsible, people would rather go at each other.
The only direction fingers should be pointed is at the one(s) that didn't send a clear message or muddied theirs by communicating poorly.
 

SleepDoctor

Banned
That's the takeaway here: instead of taking it to the platform holder and holding them responsible, people would rather go at each other.
The only direction fingers should be pointed is at the one(s) that didn't send a clear message or muddied theirs by communicating poorly.


So far reading this thread:
-df are ms shills again
-vrs is trash
-bubbbububuuuttt matt says vrs is in

You guys need to go huddle back up in the spec thread. Everyone came out of that thread a console engineer/architect.

Questions should be lobbed at Sony, nobody else.
 

01011001

Banned
But AC Valhalla runs at 4k 30 fps on the Series X

Well, they said they are trying to get it to 60fps... and I hope they will, because there's honestly no excuse for a cross-gen game to be 30fps this time around.

It made sense on PS4 and Xbox One due to their god-awful CPUs, but now? Nah.
 

pyrocro

Member

The tweet says the opposite of what you are trying to say.
VRS = GOOD THING
Whether the PS5 has it or not is a different argument; I actually think it does have it.

Nope. VRS blurs the image. Unreal Engine 5 has resolution scaling and temporal upsampling so good that the DF guys cannot tell how many pixels there are. Nvidia's AI scaling also doesn't blur the image. Checkerboard rendering and VRS have compromises. There are better solutions out there to extract more performance.
Due to performance constraints, graphics renderers cannot always afford to deliver the same quality level on every part of their output image. Variable rate shading, or coarse pixel shading, is a mechanism to enable allocation of rendering performance/power at varying rates across the rendered image.

Visually, there are cases where shading rate can be reduced with little or no reduction in perceptible output quality, leading to “free” performance.
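
To put rough numbers on "allocation of rendering performance at varying rates": a coarse rate covers a whole block of pixels with a single pixel-shader invocation, so the savings fall straight out of the rate name. A tiny illustrative helper (the one-million-pixel region is just an assumed example figure):

```cpp
// Illustrative only: pixel-shader invocations for a region shaded at a
// coarse VRS rate. A WxH rate runs one invocation per WxH pixel block
// (ignoring partial blocks at the edges).
constexpr unsigned Invocations(unsigned pixels, unsigned rateW, unsigned rateH)
{
    return pixels / (rateW * rateH);
}

// For an assumed 1,000,000-pixel region:
//   1x1 -> 1,000,000 invocations (full rate)
//   2x1 ->   500,000 (half the pixel-shading work)
//   2x2 ->   250,000 (a quarter)
//   4x4 ->    62,500 (a sixteenth)
static_assert(Invocations(1'000'000, 2, 2) == 250'000, "2x2 quarters the work");
```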
 

pyrocro

Member
What were the benefits of VRS then? Wasn't it running at 24fps/3840x1608? You're saying it would be sub-20fps without it?
If you're arguing whether VRS is good or bad, your statement above would still make it good, wouldn't it? 🙀

Sony has that already via the Horizon Zero Dawn technique. They've been rendering only what the player sees for quite some time.
No, you are confusing things that the engine is doing with hardware features.
It's not a bad thing, but it's a bloody compromise, like checkerboarding. Would we be asking why it doesn't have checkerboarding in our games? VRS is like checkerboard rendering: it reduces image quality/resolution, but only on parts of the screen. It's a good technique, but why are we trying to push the narrative that because the games didn't use it, it must not have hardware support?
Because it's potential free performance.
VRS will be used in unoptimized engines for PS5 that still use ancient LOD systems. Plus, it's important for PSVR2, as it'll most likely be 4K@120Hz for each eye (240Hz total).
You're either clueless or trolling; which is it?
 

Eliciel

Member
VRS is still within the render camera's viewport. Variable Rate Shading changes shading resolution, which conserves GPU resources.
At what cost, then?
If you're arguing whether VRS is good or bad, your statement above would still make it good, wouldn't it? 🙀


No, you are confusing things that the engine is doing with hardware features.

Because it's potential free performance.

You're either clueless or trolling; which is it?

Seems to be very important, then. Let's judge the outcomes based on the next five years of games and their individual graphics. What do you say?

Because anything else is smoke and mirrors. Lots of talking; let the games do the talking, right?
 

pyrocro

Member
At what cost, then?

Seems to be very important, then. Let's judge the outcomes based on the next five years of games and their individual graphics. What do you say?

Because anything else is smoke and mirrors. Lots of talking; let the games do the talking, right?
Conceptually it's a win.
The games that currently have it implemented are proof of concept, with real-world performance gains and real money spent paying developers to implement it.
We are long past wait-and-see; it's a win to have it, and the PS5 may very well have it.
I don't see why you or anyone would need to wait and see more.
The only thing left to wait and see is confirmation that the PS5 supports it (which we all know is the real issue anyone has with it here).
 

jimbojim

Banned
Actually, it sharpens detail where it matters most, while blurring things that are less noticed, if noticed at all, in the background.

VRS is also "lossy", in that it results in lower image quality for certain portions of the screen. It may or may not be visible to players.

Anyway, some people should reconsider whether PS5 has VRS (which is a STANDARD feature of RDNA 2) when a former PS5 software engineer said this:

 
Audio and SSD are part of the hardware, so saying that Xbox has better hardware is simply wrong. Xbox has some advantages while PS5 has others, simple as that.

The scenario is somewhat similar to N64 vs PS1 (not to that extreme, though): N64 had the better GPU but lost (FF7, Metal Gear, RE, etc.) due to its limitations in cartridge space, 32MB vs 700MB.

Do you have the specs on the XSX audio HW?
 
At what cost, then?

Seems to be very important, then. Let's judge the outcomes based on the next five years of games and their individual graphics. What do you say?

Because anything else is smoke and mirrors. Lots of talking; let the games do the talking, right?

Interesting that both AMD and Nvidia adopted a technique into their HW stack that is useless. Don't you find that odd?
 
I don't know if I want to talk publicly about some of this stuff yet, but I've had some stuff mentioned my way similar to this. The reason I don't want to go too much into it is that it'll open up a giant can of worms and probably get people questioning things like "is it RDNA1 or RDNA2?", even though they tend to forget that both systems are custom RDNA2.

With that said, we never got a full listing of what the standardized RDNA2 features are across all platforms, either. The one thing that sticks to Sony like a sore spot here, opening up the questioning, is the fact that their GPU is a 40 CU unit (with four units disabled); we know, for various reasons, that 40 CUs was the maximum AMD could do for RDNA1. At the same time, we know they have RDNA2 cards and mobile APUs with far fewer than 40 CUs, so is that really a sore sticking point for PS5, in all honesty?

Die Namek Ability Current-gen systems could do a lot of the things next-gen consoles can, albeit only if the devs rolled out their own software implementations/solutions, which cost general resources to do.

It also meant there was no standardization, as each dev would have their own take, and trying to do some of the things that are basically "free" (or very low cost) on the next-gen systems would've eaten up TONS of resources on the current-gen platforms due to the lack of dedicated hardware.

The Geometry Engine is more like Mesh Shading.
I would have thought Sony would have mentioned VRS by now if they had it, but until Sony gives out some more info, hopefully in the teardown reveal, we can't really judge much.
VRS would have been available to them via RDNA 2 if they wanted to opt for it, so maybe they have their own solution, or they didn't value it.

IIRC isn't the GE based mainly around Primitive Shading? Here's a quote from the Eurogamer article:

On a features level, Cerny reveals features that suggest parity with other upcoming AMD and AMD-derived products based on the RDNA 2 technology. A new block known as the Geometry Engine offers developers unparalleled control over triangles and other primitives, and easy optimisation for geometry culling. Functionality extends to the creation of 'primitive shaders' which sounds very similar to the mesh shaders found in Nvidia Turing and upcoming RDNA 2 GPUs.

So you might be right in that it's an equivalent to mesh shading, but the question is how much of it is actually based on mesh shading versus primitive shading, which was introduced with the first RDNA iteration. The specific mention of primitive shaders in the article itself would suggest the GE is more so built around AMD's Primitive Shaders, which are compiler-controlled.

It might be possible Sony has taken a few features from mesh shaders into the GE, however.

VRS is also "lossy", in that it results in lower image quality for certain portions of the screen. It may or may not be visible to players.

Anyway, some people should reconsider whether PS5 has VRS (which is a STANDARD feature of RDNA 2) when a former PS5 software engineer said this:



I get that, but at the same time... there's a reason AMD invested in VRS in the first place. If it was less efficient than another method, they would not have wasted the time. And the Sony software engineer... well, to some degree they are going to try putting a good spin on the features of the system they worked on, even if by chance it doesn't support a feature here and there. The same way we'd probably see an MS engineer downplay cache scrubbers if they didn't have those in their hardware.

Also, a feature being standard doesn't mean it's inferior to a non-standardized solution. The fact that there isn't any outright confirmation of VRS support could possibly indicate something else, but again, I'd rather not go down that road because it opens up a giant can of worms, and I feel we've mostly moved past that in these next-gen tech discussions. But maybe it's there and they haven't confirmed it yet. In a way, highlighting the GE in that post feels like a way to preemptively explain why the feature may not be present (if it in fact isn't present).

We'll know sooner or later, in any case.
 

geordiemp

Member
The tweet says the opposite of what you are trying to say.
VRS = GOOD THING
Whether the PS5 has it or not is a different argument; I actually think it does have it.


I think VRS is good for racing games, where there's lots of sky and fast-moving screen edges that devs can render at a lower quality and get away with it.

Which bits of the screen below would you render at lower quality? And why bother? It's native 4K anyway.

[image: game screenshot]
 
IIRC isn't the GE based mainly around Primitive Shading? Here's a quote from the Eurogamer article:

So you might be right in that it's an equivalent to mesh shading, but the question is how much of it is actually based on mesh shading versus primitive shading, which was introduced with the first RDNA iteration. The specific mention of primitive shaders in the article itself would suggest the GE is more so built around AMD's Primitive Shaders, which are compiler-controlled.

It might be possible Sony has taken a few features from mesh shaders into the GE, however.

The GE isn't anything new; it has existed since Vega. The difference in RDNA 1 is that it was improved, but it's still fundamentally doing the same thing: improving the current pipeline.

So primitive shaders just cater to the existing pipeline and accelerate it for culling and such, but mesh shaders reinvent the entire pipeline and give you low-level access and control over threading, model selection, topology customization, etc.

So primitive shaders and the geometry engine are NOT new. What's actually new in RDNA 2 is support for mesh shaders.

Here's RDNA 1:
[image: RDNA 1 diagram]
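
To make the pipeline difference concrete, here's a minimal sketch of the D3D12 side of mesh shading: query for support, then launch threadgroups that emit their own meshlets instead of feeding the fixed input assembler. PSO creation and the HLSL mesh shader itself are omitted, and meshletCount is assumed to come from however the engine splits its models:

```cpp
#include <d3d12.h>

// Sketch: checking for mesh shader support and dispatching meshlet
// threadgroups (mesh shaders are exposed via D3D12 feature options).
bool SupportsMeshShaders(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &opts7, sizeof(opts7))))
        return false;
    return opts7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
}

void DrawMeshlets(ID3D12GraphicsCommandList6* cmdList6, unsigned meshletCount)
{
    // One threadgroup per meshlet; each group culls and emits its own
    // vertices/primitives in the mesh shader, bypassing the classic
    // input-assembler/vertex-shader flow entirely.
    cmdList6->DispatchMesh(meshletCount, 1, 1);
}
```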
 

Snake29

Banned
As if people play a game and then "OMG....THIS PART IS VRS...OMG".

Lol, people will notice SSD speeds, fast loading of assets, etc., and not something that is almost invisible to the player. It's not like this is some "endgame" tech without any alternatives or modifications.
 
The GE isn't anything new; it has existed since Vega. The difference in RDNA 1 is that it was improved, but it's still fundamentally doing the same thing: improving the current pipeline.

So primitive shaders just cater to the existing pipeline and accelerate it for culling and such, but mesh shaders reinvent the entire pipeline and give you low-level access and control over threading, model selection, topology customization, etc.

So primitive shaders and the geometry engine are NOT new. What's actually new in RDNA 2 is support for mesh shaders.

Here's RDNA 1:
[image: RDNA 1 diagram]

Thanks for some of the clarifications; so if Sony is essentially using primitive shaders (albeit improved over the Vega versions, and maybe bringing in a few bits of mesh shaders where possible) and MS is using mesh shaders, I would assume the latter is going to offer a good deal more efficiency and an overall performance boost.

This is a bit of why I didn't want to tread on the topic, because it kind of supports the idea that while PS5 is certainly custom RDNA2, the range of RDNA2 features it (or any custom GPU) can have can vary wildly. Even among things we might assume are standard RDNA2!

Also, in that graphic I notice they literally call it the Geometry Engine themselves, and this is on AMD's end for RDNA1. Road to PS5 kind of made it sound like it was Sony's own nomenclature. I still think Sony has done some customizations here, FWIW.
 
Thanks for some of the clarifications; so if Sony is essentially using primitive shaders (albeit improved over the Vega versions, and maybe bringing in a few bits of mesh shaders where possible) and MS is using mesh shaders, I would assume the latter is going to offer a good deal more efficiency and an overall performance boost.

This is a bit of why I didn't want to tread on the topic, because it kind of supports the idea that while PS5 is certainly custom RDNA2, the range of RDNA2 features it (or any custom GPU) can have can vary wildly. Even among things we might assume are standard RDNA2!

Also, in that graphic I notice they literally call it the Geometry Engine themselves, and this is on AMD's end for RDNA1. Road to PS5 kind of made it sound like it was Sony's own nomenclature. I still think Sony has done some customizations here, FWIW.

I frankly don't see a need for that, or what a possible customization would entail, based on Cerny's description in Road to PS5. Its Geometry Engine is doing exactly what AMD already described. I firmly believe the whole "CUSTOM X" thing is overhyped marketing. It's more like picking that you want a tomato, how big a slice you want, and how many tomato slices you want on your Subway sandwich, rather than working at a gene-editing facility to change the functions and properties of the tomatoes.
 

Bo_Hazem

Banned
*looks at avatar and post history*

The desperation is seeping through your post. Yikes. Buckle up, because we're gonna get hundreds of DF comparison videos this gen. Might wanna make sure you're stocked up on blood pressure medication.

Yup, you guys have nothing to add to the discussion, so you proceed to personal attacks :messenger_clapping::lollipop_tears_of_joy: How smart and intelligent.
 

JimboJones

Member
Very interesting; I guess only time will tell. I'm guessing they will have it, or some form of it, eventually. Consoles will need to squeeze out as much performance as possible, as they will be fixed at these specs for a long time.
 

sendit

Member
Well, they said they are trying to get it to 60fps... and I hope they will, because there's honestly no excuse for a cross-gen game to be 30fps this time around.

It made sense on PS4 and Xbox One due to their god-awful CPUs, but now? Nah.

AC:V has a big open world with next-gen enhancements. Why do you think it should easily hit 60FPS running at 4K on next-gen consoles without compromise?

12 Teraflops != unlimited power.
 

Elog

Member
Thanks for some of the clarifications; so if Sony is essentially using primitive shaders (albeit improved over the Vega versions, and maybe bringing in a few bits of mesh shaders where possible) and MS is using mesh shaders, I would assume the latter is going to offer a good deal more efficiency and an overall performance boost.

This is a bit of why I didn't want to tread on the topic, because it kind of supports the idea that while PS5 is certainly custom RDNA2, the range of RDNA2 features it (or any custom GPU) can have can vary wildly. Even among things we might assume are standard RDNA2!

Also, in that graphic I notice they literally call it the Geometry Engine themselves, and this is on AMD's end for RDNA1. Road to PS5 kind of made it sound like it was Sony's own nomenclature. I still think Sony has done some customizations here, FWIW.

I agree with this. My other take, and hence why I am curious about a proper look at that PS5 GPU, is that it seems to me PS5 set out from the beginning to optimise the architecture differently.

In a PC framework the optimisation parameters are set: you want to optimise FPS in a 4K setting with a set (and limited) VRAM budget (with RT and other features, of course). This results in a given hardware path for optimisation.

Assume you are Sony and say: We aim for 4K/stable 30FPS but with a much larger amount of textures at a higher resolution on screen (with RT and other features of course).

If that assumption is correct, what changes, if any, to the weight of the different components would you need at the GPU level (not new features as such, but rather where the bottlenecks would come into play given the much larger amount of textures and their size)? I might be wrong, but I have a suspicion that Cerny had something like the above in mind when designing the console.
 

Bo_Hazem

Banned
No matter how many times you post it, Bo, it will be ignored, because many here know more than Matt. RDNA 2 VRS is going to change the game; not the VRS with advanced customizations on PS5, though, just the normal one you get from AMD's GPU hardware and software stack.

What's crazier is that so many are now latching onto VRS, but pretty much all modern GPUs landing in 2020 will have VRS support; it's like your 2020 AF. Everything from Intel Xe, to the Ampere GPUs, to the RDNA 2 GPUs... Current NV GPUs even support VRS already; it's not proprietary at all. People should be clued in already, based on all the custom work on the SSD, that Sony never goes for the standard; they put their own spin and improvements on the standard VRS that comes with RDNA 2.

They like spreading FUD, just like DF and other suspect websites. As if we care what's happening inside the PS5 to produce this insane CGI-movie level of gameplay!

[image: Ratchet & Clank: Rift Apart PS5 screenshot]
 
I have a problem with DF saying “little to no evidence” when there’s no reason to believe some form of VRS isn’t being used.

The point of VRS (and especially Tier 2) is to reduce rendering load in parts of the frame that don't need to be rendered at full res, like low-contrast, blurred, or distant objects. You're not supposed to be able to tell when it's being used.

Check out these shots comparing Tier 2 on vs. off:

[image: 3DMark VRS feature test, Tier 2 on]


[image: 3DMark VRS feature test, Tier 2 off]


You might be able to find some differences in the stills, but in motion? On compressed streams? Would you believe this implementation gives a 60%+ performance improvement on a 2080?

VRS Tier 1 runs on integrated Intel GPUs, and Microsoft's VRS patent even references Mark Cerny's patent from six years ago for the same thing: https://patents.google.com/patent/US20150287167

This is 2013 and Tiled Resources all over again: https://www.neogaf.com/threads/auto...iling-announced-for-w8-1-and-xbox-one.604641/
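
That's also where Tier 2 gets interesting mechanically: rather than a per-draw rate, you hand the GPU a small screen-space texture in which each texel picks the rate for one tile, so exactly those low-contrast/blurred/distant regions get coarsened. A rough CPU-side sketch of filling one; the tile size would really come from D3D12_FEATURE_DATA_D3D12_OPTIONS6::ShadingRateImageTileSize, and IsLowDetailTile is a hypothetical stand-in for a real contrast/motion heuristic:

```cpp
#include <d3d12.h>
#include <cstdint>
#include <vector>

// Sketch: build the byte buffer for a Tier 2 shading-rate image. Each
// byte selects the shading rate for one screen tile.
std::vector<uint8_t> BuildRateImage(unsigned width, unsigned height,
                                    unsigned tileSize,
                                    bool (*IsLowDetailTile)(unsigned, unsigned))
{
    const unsigned tilesX = (width + tileSize - 1) / tileSize;
    const unsigned tilesY = (height + tileSize - 1) / tileSize;
    std::vector<uint8_t> rates(tilesX * tilesY, D3D12_SHADING_RATE_1X1);

    for (unsigned ty = 0; ty < tilesY; ++ty)
        for (unsigned tx = 0; tx < tilesX; ++tx)
            if (IsLowDetailTile(tx, ty)) // blurred, low-contrast, distant...
                rates[ty * tilesX + tx] = D3D12_SHADING_RATE_2X2;

    // The buffer would then be uploaded into a DXGI_FORMAT_R8_UINT texture
    // and bound via ID3D12GraphicsCommandList5::RSSetShadingRateImage().
    return rates;
}
```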
 

JimboJones

Member
They like spreading FUD, just like DF and other suspect websites. As if we do care what's happening inside PS5 to produce this insane CGI movie level of gameplay!
You need to chill out, man. They aren't spreading FUD; they simply didn't see any evidence of it. If you can, you're more than welcome to point it out and correct them.
 

Bo_Hazem

Banned
No, because there is precedence, at least for Dirt. It's running pretty high frames on all GPUs; it's a game made for high framerates. Remember when Vega GPUs used to dominate even the 1080 Ti in prior Dirt games? It's also cross-gen; Dirt will run at high framerates on pretty much all mid-gen GPUs and at a guaranteed 60fps on lower-end cards.

Also, you have The Ascent, which is a top-down shooter; these games will run at very high framerates on anything. According to Steam, the PC requirements are an i3 with a GTX 660. As for Scorn, that game has been in development for a while; it's not so much a ground-up next-gen game, and there are gameplay videos if you care to watch. I can easily see a 120fps mode, even at lowered resolutions. However, it's not really a quick-paced game like Quake, where 120fps would be more noticeable. Yet Scorn's official requirements (not even the minimum) are an i5 with a GTX 970; that's really not that high. The next-gen consoles run circles around such a setup. A 120fps mode is not out of the ordinary for any of these titles. The only one we have not seen gameplay of is The Medium, so that is a wait-and-see.

About that Scorn trailer...


Talk about honesty :messenger_tears_of_joy:

[image: screenshot]
 
I frankly don't see a need for that, or what a possible customization would entail, based on Cerny's description in Road to PS5. Its Geometry Engine is doing exactly what AMD already described. I firmly believe the whole "CUSTOM X" thing is overhyped marketing. It's more like picking that you want a tomato, how big a slice you want, and how many tomato slices you want on your Subway sandwich, rather than working at a gene-editing facility to change the functions and properties of the tomatoes.

I'm just saying that, going by the GE and PS stuff on RDNA1, both of those were already present in the first generation. So while Primitive Shaders work a lot like Mesh Shaders, they aren't quite the same thing and have some very key differences. If you're insisting Sony's Primitive Shaders aren't customized with at least some of the improvements from Mesh Shaders, and their Geometry Engine isn't too different from what AMD already referred to as Geometry Engines in RDNA1, then...

...well, I dunno. It would mean they chose to optimize/fine-tune aspects of older technology, kind of like Elog is suggesting, maybe with inspiration from newer roadmap features, rather than taking those newer features wholesale. The question is, though: why would they do that? It would kind of lend credence to the idea that they did have an earlier launch planned, and an earlier launch would have meant going with an RDNA2-based GPU was not possible. So why not use RDNA1 as the base and bring some RDNA2 features from the roadmap forward, while customizing parts of the RDNA1 design to reflect the potential performance metrics of RDNA2 GPUs?

Sony using AMD's nomenclature for the GE and PS seems to basically stealth-confirm that for me, and again, if you go back to the Ariel and Oberon testing data, you do see a progression from RDNA1 (Ariel) to RDNA2 (Oberon), but how and what type of transition that actually was is extremely curious. I think Sony deciding on a 36 CU GPU from the outset also betrays the intent of their custom RDNA2 GPU: in their case, possibly RDNA1 as the base blueprint with RDNA2 features brought forward to fit it. And then things on Sony's end, like the cache scrubbers, may have influenced AMD's future RDNA3 features. That seems different from MS, who have seemingly gone with RDNA2 as their base blueprint, maybe doing their own customizations that could've also influenced some RDNA3 features in the near future.

Also, this is just speculation on my end, but since Sony's BC is hardware-based, that might've also influenced the decision to go with a custom RDNA2 GPU that has an RDNA1 base and other RDNA2 features integrated in (even if how the hardware for those features works is a bit different from standard RDNA2). RDNA1 still has a good deal of GCN hardware support, while RDNA2 got rid of any base GCN hardware architecture in its design (or the vast majority of it; IIRC it can still support GCN microcode, but basically via emulating it?).

I agree with this. My other take, and hence why I am curious about a proper look at that PS5 GPU, is that it seems to me PS5 set out from the beginning to optimise the architecture differently.

In a PC framework the optimisation parameters are set: you want to optimise FPS in a 4K setting with a set (and limited) VRAM budget (with RT and other features, of course). This results in a given hardware path for optimisation.

Assume you are Sony and say: We aim for 4K/stable 30FPS but with a much larger amount of textures at a higher resolution on screen (with RT and other features of course).

If that assumption is correct, what changes, if any, to the weight of the different components would you need at the GPU level (not new features as such, but rather where the bottlenecks would come into play given the much larger amount of textures and their size)? I might be wrong, but I have a suspicion that Cerny had something like the above in mind when designing the console.

You might be on to something. I am starting to think they've gone with a custom RDNA2 GPU with RDNA1 architecture at the base, bringing forward some RDNA2 features (even if they are implemented differently on Sony's end) to customize aspects of the RDNA1 feature set and hardware architecture, basically using those RDNA2 features as inspiration to address things in their chosen route that would've caused performance issues.

The question is what exactly those things are, and how far those changes went. I think if you look at the timeline of PS5 development and the information we have been getting for the past two years or so, this idea of their custom RDNA2 GPU being an RDNA1 base with alterations bringing RDNA2 roadmap features onboard, implemented as needed, starts to stick. It'd explain the GPU leaks and testing data, the rumors of an earlier launch, etc. It'd explain the decision to go with a 36 CU GPU at the outset and focus on clocks, and some GPU customizations we know about, such as the cache scrubbers. It'd explain why the BC is hardware-dependent, etc.


I have a problem with DF saying “little to no evidence” when there’s no reason to believe some form of VRS isn’t being used.

The point of VRS (and especially Tier 2) is to reduce rendering load in parts of the frame that don't need to be rendered at full res, like low-contrast, blurred, or distant objects. You're not supposed to be able to tell when it's being used.

Check out these shots comparing Tier 2 on vs. off:

[image: 3DMark VRS feature test, Tier 2 on]


[image: 3DMark VRS feature test, Tier 2 off]


You might be able to find some differences in the stills, but in motion? On compressed streams? Would you believe this implementation gives a 60%+ performance improvement on a 2080?

VRS Tier 1 runs on integrated Intel GPUs, and Microsoft's VRS patent even references Mark Cerny's patent from six years ago for the same thing: https://patents.google.com/patent/US20150287167

This is 2013 and Tiled Resources all over again: https://www.neogaf.com/threads/auto...iling-announced-for-w8-1-and-xbox-one.604641/

I think the benefit of Tier 2 VRS isn't so much in visual differences you can see in stills or in motion, but in the fact that the hardware has a good chunk of performance resources freed up, which can be applied to other areas, graphics or not.

PS4/Pro IIRC use foveated rendering, which I think is what the Cerny patent referenced by MS's and Intel's stuff covers. However, foveated rendering was primarily designed with PSVR in mind, which is somewhat different from a similar technique on a non-VR display output. In principle there are some similarities, but in actual practice and implementation there are key differences.
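
For what it's worth, the foveated variant is the same basic mechanism with a different heuristic: full rate near where the eye (or the lens center, for fixed foveation) is pointed, coarser with distance. A hypothetical sketch; the radii are invented tuning values and the rate names merely mirror VRS-style 1x1/2x2/4x4 rates:

```cpp
#include <cmath>

// Sketch: pick a coarseness level for a screen tile based on its
// distance from the gaze point. Thresholds are made-up tuning values.
enum class Rate { Full1x1, Half2x2, Coarse4x4 };

Rate FoveatedRate(float tileX, float tileY, float gazeX, float gazeY)
{
    const float dx = tileX - gazeX;
    const float dy = tileY - gazeY;
    const float dist = std::sqrt(dx * dx + dy * dy); // pixels from fovea

    if (dist < 300.0f) return Rate::Full1x1;   // foveal region: full detail
    if (dist < 700.0f) return Rate::Half2x2;   // near periphery
    return Rate::Coarse4x4;                    // far periphery
}
```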
 