
Digital Foundry claims the PS5 doesn't exhibit any evidence of VRS (Variable Rate Shading) based on the PS5 showcase.

Xplainin

Banned
Thanks for clearing that up.
He's arguing that since GE prunes geometry beforehand, it has just as much of an impact on performance as VRS.
Ideally, you should do both, of course.
Absolutely. Mesh shading and VRS together are better than the sum of their parts.
I have said before that Sony knew what VRS was/is, and it's an option for RDNA 2, so they could have had it (so long as they can develop an API for it) if they wanted it.
Maybe Sony didn't value it.
Quite possibly Sony don't have an API for it yet, so don't have the ability to perform VRS yet, but they will/can develop their API to allow it, and so will have it in the future.
There is something funny about Sony's silence, so maybe it's an API thing, and Sony don't want to lie about having it right now, but know that they will have it later and so are choosing to avoid the question until it's all set.
Who knows.
 

M1chl

Currently Gif and Meme Champion
Digital Foundry predicted that the new generation would be a 5700XT. It made sense to say that it would not be 36CUs, because that would be "8TF", and nobody imagined that with RDNA2 the clock could be increased to 2.23GHz for 10.28TF.




Why should Sony answer anything to DF?


After Cerny made a presentation saying:

- SSD is not just for loading

- any downclock will be minimal.

- increasing the clock improves everything on the GPU.

DF then came out with:

- SSD is only for loading.

- PS5 never runs with CPU and GPU simultaneously at full clock.

- clock increase does not improve anything, because the 5700XT above 2ghz does not improve anything.

would you talk to any journalist who calls you a liar?

And guess what, DF isn't interested in listening to Sony, because it didn't hear anything from Cerny's lecture as I just demonstrated.

If they were interested in listening, we would not have the FUD festival encouraged by DF, and by now DF would already have made a video about the Geometry Engine.

GE >> VRS.



===========

Linus apologized after talking nonsense about the PS5's SSD.

where is the "dictator" apologizing after saying that the SSD is only for loading? That the PS5 is never at full clock?

Will Leadbetter apologize, after implying that Cerny lied and that raising the clock above 2GHz doesn't improve the GPU, when AMD releases a "6900XT" at 2.2~2.3GHz?
To be honest, one thing is clear: GPU clocks generally (so far, probably always) scale pretty badly on PC, at the price of a terrible rise in power consumption. And that goes for the memory and the GPU chip itself, so the point definitely has some merit. If RDNA 2 shows that it can do better, they should acknowledge it, but even the 5700XT gives you very little for a big rise in power consumption and it's generally not worth it.

Not sure why tho, thinking about it.
 

ethomaz

Banned
I guess people don't quite get that. VRS is a trademark name for a DirectX 12 feature, right?
It is more like a software implementation of a DX12 Ultimate feature.

Let's break it down; you need two things for VRS to work (rough sketch of the PC side below):

- Hardware: all Turing and RDNA 2 based GPUs have VRS support.
- Software: DX12, Vulkan, OpenGL and even DirectX 11 (via an NVIDIA extension) have VRS support.
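For anyone curious, this is roughly what those two halves look like on PC with the public DX12 Ultimate API. Just an illustrative sketch; the helper function names are made up for the example, and console APIs are a different, non-public thing.

#include <d3d12.h>

// Hardware side: ask the driver which VRS tier the GPU actually exposes.
bool SupportsVrs(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))))
        return false;
    return options6.VariableShadingRateTier != D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED;
}

// Software side: the API call that coarsens shading for the draws that follow.
void DrawWithCoarseShading(ID3D12GraphicsCommandList5* cmdList)
{
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH
    };
    // Shade one result per 2x2 pixel block for subsequent draws.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);
    // ... record the draws you want at reduced rate here ...
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners); // back to full rate
}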


 

Deto

Banned
To be honest, one thing is clear: GPU clocks generally (so far, probably always) scale pretty badly on PC, at the price of a terrible rise in power consumption


Are you comparing overclocking RDNA1 with clock increase achieved with a new architecture (RDNA2)?

Should I overclock a 2080 Ti and then say that increasing the clock on Ampere doesn't improve anything?


I only have two options about DF in this case:

a) intellectual dishonesty.

b) stupidity
 

M1chl

Currently Gif and Meme Champion
Are you comparing overclocking with clock increase achieved with a new architecture?

Should I overclock a 2080 Ti and then say that increasing the clock on Ampere doesn't improve anything?


I only have two options about DF in this case:

a) intellectual dishonesty.

b) stupidity
No, but with GPUs there is a damn good precedent, because RDNA 2 comes from RDNA 1 and GPUs have never scaled well with clocks. Hell, I could get 100% higher clocks on a 5700LE (that's an nVidia) and I got like 5% more performance. Again, if RDNA 2 somehow tackles this, it's going to be an amazing achievement, but you also can't just believe what the PR says. For something that has never worked before (in games; in benchmarks you can get gains, for sure) to suddenly start working would be interesting... We'll see, but you have to work with what you have.

So yeah, it would be intellectually dishonest to pretend that things could change, but also to pretend that, based on PR, things will change. And with AMD, I am old enough to remember their "hairdryers" (especially something like the 2900XT)... well, nVidia too, the 5700 Ultra was the same deal.
 
This is just another example of the bias being in your interpretation, even your description of what the article is about is, in my opinion, wrong. There is nothing in the articles, or the quotes that you pulled from them, that offers proof of a bias by the author.
Saying someone is biased does not mean they never say anything true; it means they often spin things in a specific direction. DF has a history of being very optimistic about Xbox and very 'concerned' about PlayStation, going back a long time.

I mean, look at the two best looking launch PS4 and Xbox games (Killzone and Ryse): they praised the 900p, sub-30fps performance in Ryse while complaining about the unlocked fps in Killzone: SF (26fps vs 45ish).

XSX was designed for the highest render resolution target with the highest performance for $$
And they achieved none!
 

Deto

Banned
No, but with GPUs there is a damn good precedent, because RDNA 2 comes from RDNA 1 and GPUs have never scaled well with clocks. Hell, I could get 100% higher clocks on a 5700LE (that's an nVidia) and I got like 5% more performance. Again, if RDNA 2 somehow tackles this, it's going to be an amazing achievement, but you also can't just believe what the PR says. For something that has never worked before (in games; in benchmarks you can get gains, for sure) to suddenly start working would be interesting... We'll see, but you have to work with what you have.

So yeah, it would be intellectually dishonest to pretend that things could change, but also to pretend that, based on PR, things will change. And with AMD, I am old enough to remember their "hairdryers" (especially something like the 2900XT)... well, nVidia too, the 5700 Ultra was the same deal.

and there we go.

you keep comparing overclocking on a GPU with a clock increase after an architecture change.
 

M1chl

Currently Gif and Meme Champion
and there we go.

you keep comparing overclocking on a GPU with a clock increase after an architecture change.
But did they state it's going to be the same, or was it some experiment to show an approximation on current-gen HW?
 

Xplainin

Banned
Are you comparing overclocking RDNA1 with clock increase achieved with a new architecture (RDNA2)?

Should I overclock a 2080 Ti and then say that increasing the clock on Ampere doesn't improve anything?


I only have two options about DF in this case:

a) intellectual dishonesty.

b) stupidity
There have been experiments done where someone like DF or Red Tech got two cards with the same TFLOPs, but one with a higher clock and one with more CUs. If memory serves, the performance was higher on the one with more CUs.
 

ethomaz

Banned
There have been experiments done where someone like DF or Red Tech got two cards with the same TFLOPs, but one with a higher clock and one with more CUs. If memory serves, the performance was higher on the one with more CUs.
Because they ran the test near the limit of RDNA clocks where it didn't scale performance anymore.
Ask DF to make the test with lower clocks and you will see different results.

They used RDNA at 2100MHz lol
Above 2000MHz, RDNA performance barely increases with clock.
 

Bogroll

Likes moldy games
Saying someone is biased does not mean they never say anything true; it means they often spin things in a specific direction. DF has a history of being very optimistic about Xbox and very 'concerned' about PlayStation, going back a long time.

I mean, look at the two best looking launch PS4 and Xbox games (Killzone and Ryse): they praised the 900p, sub-30fps performance in Ryse while complaining about the unlocked fps in Killzone: SF (26fps vs 45ish).


And they achieved none!
Very true, like if someone said Killzone SF is 45 ish fps.
 

Xplainin

Banned
Because they ran the test near the limit of RDNA clocks where it didn't scale performance anymore.
Ask DF to make the test with lower clocks and you will see different results.
2.23 is near the top limits of RDNA 2 as well. So compared to XSX, I would expect the results to be the same.
 
My honest take is that the mods ban Xbox fanboys more than Sony fanboys, even though there are more Sony fanboys and plenty of egregious comments from them. Evidenced by many threads, but especially the speculation thread. They ban way more Xbox fanboys (who deserved it) than Sony fanboys (who also deserved it but weren't banned), although both sides are equally in error on some level... per capita it's slanted towards Sony more because there are more Sony fanboys, though the "percentage" is probably about the same.

I apologise to the mods if that's not true. It's just my honest opinion and perception on this site based on the bans I've seen. I've seen many outrageous and even vulgar statements go unpunished, and I perceive the mods as favoring Sony fanboys; more of them "get away with it". I can recall 2 total Sony fanboy bans in the past 4 months, but about 12+ Xbox fanboy bans, even though there are way more Sony fanboys and, by extension, more ban-worthy comments and dialogue from that side just based on sheer numbers. I'd love the actual stats just for dialogue's sake, maybe a conversation starter. My honest perception here anyway. If there's an ounce of truth to it (or not, I guess), I'm sure others might have this perception too by extension.

I know this opinion is off topic except as a reply to your statement; I just wouldn't start a new thread to discuss it 😉

Having Meirl creating so many alts can help skew that perception.

Did you double-check whether the ones you saw banned were not Meirl's alts?
 

FranXico

Member
Is it? Care to share these RDNA 2 clock tests please.

You can infer that from what Cerny claimed in his talk. They capped the frequency at 2.23 GHz because they couldn't sustain higher clocks without breaking the GPU logic.
They probably reduced the max frequency a bit lower than the actual limit, of course (engineering "play it safe" mentality).
 

ethomaz

Banned
How do you work out what the upper limits of RDNA 1 was?
Watch when the RDNA 2 cards are released by AMD and let's see what the upper clocks are.
Yes, so you made an unsupportable claim.

We have no idea at which clock RDNA 2 stops scaling in performance.
We only have data for RDNA.
 

ethomaz

Banned
You can infer that from what Cerny claimed in his talk. They capped the frequency at 2.23 GHz because they couldn't sustain higher clocks without breaking the GPU logic.
They probably reduced the max frequency a bit lower than the actual limit, of course (engineering "play it safe" mentality).
That can mean a lot of things.
But what you say could very well be true.
 

ethomaz

Banned
I will bet you that when AMD release their new GPUs, they won't be more than 5% higher than PS5.
Wanna take it up?
Bet for what? lol
If AMD releases a GPU with clocks 5% above the PS5's, then the clock limit for RDNA 2 is way above that.
Let's say Big Navi with 80CUs runs at 2100MHz... that means a smaller chip will keep scaling in performance with clock up to 2.3 or even 2.4GHz.

But that is speculation about RDNA 2 clocks... there is no evidence.

DF using RDNA at 2100MHz to estimate performance is what we call bias on their side.
They create a narrative with whatever they have to support their sense of superiority, or their bias.
 

Deto

Banned
Bet for what? lol
If AMD releases a GPU with clocks 5% above the PS5's, then the clock limit for RDNA 2 is way above that.

Let's say Big Navi with 80CUs runs at 2100MHz... that means a smaller chip will keep scaling in performance with clock up to 2.3 or even 2.4GHz.

But that is speculation about RDNA 2 clocks... there is no evidence.


We will definitely have an AMD graphics card very similar to the PS5 GPU, 36CUs 2.23Ghz in the RDNA2 line.

This was said almost as a fact by Cerny at the PS5 presentation.

I wonder if when AMD announces this, DF will start burping "RDNA2 is fake, this clock increase is useless"
 

ethomaz

Banned
Mind you, what he says is for the PS5 GPU. Limits might be different on other chips (you'd expect that, right?)
Yeah, the limit up to which a chip scales depends on other factors... for example, a 40CU chip scales clock/performance better than an 80CU one.
If we get into detailed data, you will see that even among the same chip there are samples that scale better and others that don't... AMD/Intel sell those chips at a premium, binned for high overclocks.

One thing is certain: if Cerny chose that clock for the PS5, it means the chip is comfortable running at that speed in both performance and lifespan... it is a consumer product, after all, and they don't want units failing under warranty.
 

Xplainin

Banned
You can choose the FUD to be spread:

- increasing the clock does not increase performance.
- only reaches 2.23Ghz in rare situations.

both boil down to saying that the clock is fake because it doesn't work or doesn't improve anything.
That's nothing to do with what I was saying.
I said that it has been shown that, for two cards from the same company with the same TFLOPs, but one with a higher frequency and the other with a higher CU count, e.g.
Card 1 - 40CUs @ 2.0GHz = 10.24TFLOPs
Card 2 - 50CUs @ 1.6GHz = 10.24TFLOPs

When running the same game on the same rig, Card 2, with the lower clock but higher CU count, had the better performance.
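For reference, the paper math behind those two hypothetical cards is just CUs x 64 ALUs x 2 ops x clock: 40 x 64 x 2 x 2.0GHz = 10.24TF and 50 x 64 x 2 x 1.6GHz = 10.24TF. The headline number is identical by construction, so any gap in benchmarks comes from how the rest of the chip (fillrate, caches, front end) scales with clock versus width.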
 

ethomaz

Banned
That's nothing to do with what I was saying.
I said that it has been shown that, for two cards from the same company with the same TFLOPs, but one with a higher frequency and the other with a higher CU count, e.g.
Card 1 - 40CUs @ 2.0GHz = 10.24TFLOPs
Card 2 - 50CUs @ 1.6GHz = 10.24TFLOPs

When running the same game on the same rig, Card 2, with the lower clock but higher CU count, had the better performance.
Any test/evidence with lower clocks?
I'm interested in reading them... DF used 2.1GHz, which just makes their test useless.
 

Xplainin

Banned
Bet for what? lol
If AMD releases a GPU with clocks 5% above the PS5's, then the clock limit for RDNA 2 is way above that.
Let's say Big Navi with 80CUs runs at 2100MHz... that means a smaller chip will keep scaling in performance with clock up to 2.3 or even 2.4GHz.

But that is speculation about RDNA 2 clocks... there is no evidence.

DF using RDNA at 2100MHz to estimate performance is what we call bias on their side.
They create a narrative with whatever they have to support their sense of superiority, or their bias.
What do you want to bet?
AMD is not going to put out a GPU against Nvidia and not run the card at the best possible frequency it can to get the best performance.
So if they are releasing the RDNA 2 cards at around the same speed as the PS5, then that's your ceiling.
If you deny that, then we also have to deny the upper limit you put on RDNA 1.
And for the record, the increase in clock comes more from the reduction in fabrication size than from the change in architecture.
 

ethomaz

Banned
No it doesn't. The power will increase linearly with the clock; it's just that the performance per watt is uneconomical.
What do you mean, power? Performance doesn't increase linearly with clock, and at some point there is little to no increase.
Power consumption increases nearly linearly with clock.
That makes DF's tests with RDNA at 2.1GHz useless.

[Chart: GTX 1080 Ti clock-scaling benchmark]
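For what it's worth, the usual rule of thumb here (standard CMOS behaviour, nothing AMD-specific): dynamic power is roughly P ≈ C x V² x f, so at a fixed voltage power does track clock close to linearly, but the last few hundred MHz usually need a voltage bump as well, which is why perf-per-watt falls off a cliff near the top of the curve even while raw performance barely moves.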


And for the record, the increase in clock comes more from the reduction in fabrication size than from the change in architecture.
It is both.
 

Redlight

Member
Saying someone is biased does not mean they never say anything true; it means they often spin things in a specific direction. DF has a history of being very optimistic about Xbox and very 'concerned' about PlayStation, going back a long time.

I mean, look at the two best looking launch PS4 and Xbox games (Killzone and Ryse): they praised the 900p, sub-30fps performance in Ryse while complaining about the unlocked fps in Killzone: SF (26fps vs 45ish).
The onus of proof is on the one making the accusation. You are giving your interpretation and claiming it as fact. You need to provide evidence of the claims you make if you want reasonable people to take them seriously.
 
The onus of proof is on the one making the accusation. You are giving your interpretation and claiming it as fact. You need to provide evidence of the claims you make if you want reasonable people to take them seriously.
I agree; however, others provided plenty of examples on the same page (I will not re-post the whole thread every time I want to make a point that adds to it). DF's appreciation of the respective technologies does not pass the smell test for an appearance of neutrality... them giving a correct opinion here and there does not negate this.

I expect those partaking in the conversation to be able to understand the context of what is being said within the conversation itself.
 
He also said it would be 8TF, without SSD, without RT.

As far as I read, Geometry Engine > VRS.

How is GE better than VRS when RDNA1 has a Geometry Engine (meaning, by extension, unless AMD completely removed it for RDNA2, the XSX also has a Geometry Engine)? Also, keep in mind that GE and VRS are completely different things being done at different points of the graphics pipeline, so they aren't even directly comparable.

People are having this take from what Matt said on Twitter but he was merely saying that VRS alone isn't efficient if other parts of your pipeline aren't optimized, which is 100% true. He didn't say that to make a comparison directly, like some of you are taking it. His comment was moreso alluding to Primitive Shaders, since in the Eurogamer article on PS5 (as one example), they specifically do mention Primitive Shaders. So Matt is basically saying you need something to cull unneeded graphics data before trying to optimize the output with something like VRS....

...which the XSX has in the form of Mesh Shaders, introduced in RDNA2 but previously present on Nvidia GPUs. So if Sony and other locations are specifying GE and PS, those were already things in RDNA1 and it's logical to assume the GE is in RDNA2 while PS has generally been replaced with Mesh Shaders. You can read into that however you want, but I take it to mean Sony have stuck (for whatever reason) with Primitive Shaders instead of Mesh Shaders, but might've customized parts of the GE and PS off RDNA1 to have equivalent feature sets of RDNA2 such as RT (adding RT customized support in the CUs), VRS (maybe in equivalent part of the RDNA2 pipeline, or implemented differently in Sony's version), etc.

There's literally no grounds in comparing Geometry Engine to VRS because they aren't even doing the same thing...
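On the PC side you can even see that they are two unrelated capabilities: D3D12 reports the mesh shader tier and the VRS tier through separate feature structs, because one lives in the geometry front end and the other in the pixel-shading back end. A small sketch using the public DX12 Ultimate API (the function name is made up for the example, nothing console-specific):

#include <d3d12.h>
#include <cstdio>

void ReportGeometryVsShadingRateCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opt6 = {}; // VariableShadingRateTier lives here
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opt7 = {}; // MeshShaderTier lives here
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opt6, sizeof(opt6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opt7, sizeof(opt7));

    // A GPU can expose either, both, or neither; one does not imply or replace the other.
    std::printf("Mesh shader tier: %d (front-end geometry work)\n", (int)opt7.MeshShaderTier);
    std::printf("VRS tier: %d (back-end shading-rate work)\n", (int)opt6.VariableShadingRateTier);
}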

------

EDIT: Also, JFC some of you guys really need to get a life. Insane persecution complex after one source and one guy in particular, simply because they have critical thought on your brand of plastic and a preference themselves for another brand of plastic that nonetheless doesn't interfere with their ability to ascertain neutrally the vast majority of the time.

You're literally cherry-picking examples to put a spotlight on and shine almost out-of-context at times, to try discrediting them. It's beyond pathetic. Does every person need to lick the same boots you do in order to be deemed valid in your eyes? Is ANY form of dissent grounds for justifying sullying of their character and integrity to you?

You know who you guys REALLY look like right now? The same twits on Twitter who dig up years-old "problematic" posts to try getting people cancelled in Current Year™ . The same people who whine over every single insignificant opinion that's different from their own, and launches some crusade of public shaming. That. Shit. Is. P a t h e t i c. It's the toxicity of online discourse, this extremist mentality people in their echo chambers and bubbles have created for themselves painting an "us vs. them" mentality.

That shit is pathetic enough from political talking heads and SJWs/anti-SJWs bickering back and forth, yet now it's even infected gaming among platform brands. The same toxic energy, the same smug moral superiority of flawed righteousness. Pathetic.

So it is 7nm+ then. I saw some speculation that it would be higher than that.

The consoles are 7nm DUV enhanced. Which is moreso 7nm. Some RDNA2 GPUs will be 7nm DUV enhanced (the consoles, some upcoming GPUs), others will be 7nm EUV (7nm+, mainly GPUs).

AMD seems intent on moving quickly to 5nm, however, for RDNA3.

Are you comparing overclocking RDNA1 with clock increase achieved with a new architecture (RDNA2)?

Should I overclock a 2080 Ti and then say that increasing the clock on Ampere doesn't improve anything?


I only have two options about DF in this case:

a) intellectual dishonesty.

b) stupidity

Maybe you should look into GPUs of same architecture and see what higher clocks actually improve? The two things that are most frequency-bound on GPUs in terms of impacting performance are the cache speed (continuous/sequential bandwidth, not parallel bandwidth which depends on amount of physical cache on the GPU) and pixel fillrate. We at least know this looking at many AMD GPU benchmarks.

Those are the two main areas where the smaller yet faster-clocked GPU sees improvements in, and doesn't take into account the amount of power you need to drive in order to get those frequency gains (going by Cerny's own figures at Road to PS5, their system has a 5:1 ratio between power-to-frequency for the gains they have in the clock, which should indicate they are well outside of the sweetspot range for RDNA2 on 7nm DUV enhanced, otherwise the ratio would be much lower).
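As a rough illustration of the fillrate side of that (assuming the commonly reported 64 ROPs on both consoles, which isn't something either vendor has spelled out officially): peak pixel fillrate is ROPs x clock, so 64 x 2.23GHz ≈ 142.7 Gpixels/s for PS5 versus 64 x 1.825GHz ≈ 116.8 Gpixels/s for XSX. Fillrate is one of the few headline figures that follows clock rather than CU count.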

Almost everything else in terms of GPU performance benefits from a physically larger GPU, assuming the GPUs are of the same architecture and generation. Vast majority of benchmarks verify this. So no one is really saying the faster clocks don't bring any benefits. The thing is they would bring more benefits if the GPU itself were of similar size to the competitor offering that happens to be physically larger and therefore benefits from more L1 and L2 cache, potentially more L3 cache, potentially more ROPs, TMUs etc (talking AMD here).

That's just the bare truth.
 

semicool

Banned
No it's not.

What you're saying is:

I can't see an atom with my naked eye, therefore it doesn't exist.
No, I'm saying DF analyzed the PS5 showcase and, after a technical analysis of the demonstration, saw no evidence (results) of a VRS-type optimization implementation from the PS5.
 

DeepEnigma

Gold Member
2.23 is near the top limits of RDNA 2 as well. So compared to XSX, I would expect the results to be the same.

No it's not. Cerny even said they could have gone much higher without issue, but they had to cap it due to the logic not keeping up. There are going to be many paradigm shifts in how the logic is written on all platforms over the next several years.
 

Xplainin

Banned
What do you mean, power? Performance doesn't increase linearly with clock, and at some point there is little to no increase.
Power consumption increases nearly linearly with clock.
That makes DF's tests with RDNA at 2.1GHz useless.

[Chart: GTX 1080 Ti clock-scaling benchmark]



It is both.
Why have you put up a graph of an Nvidia card on a 16nm fab? That has nothing to do with AMD's latest GPU on a 7nm fab.
If you think increasing the GPU speed from, say, 1.9GHz to 2.1GHz isn't going to increase the TFLOPs in a linear fashion, then you really need to get onto Cerny and tell him his whole idea for the PS5 design is bust.
 

ethomaz

Banned
Why have you put up a graph of an Nvidia card on a 16nm fab? That has nothing to do with AMD's latest GPU on a 7nm fab.
If you think increasing the GPU speed from, say, 1.9GHz to 2.1GHz isn't going to increase the TFLOPs in a linear fashion, then you really need to get onto Cerny and tell him his whole idea for the PS5 design is bust.
Because it shows that performance doesn't scale linearly with clock, like you claimed it does.
The arch or fab process doesn't matter.
 

Xplainin

Banned
No it’s not. Cerny even said they could have went much higher without issue, but they had to cap it due to the logic not keeping up. There’s gonna be many paradigm shifts to how the logic is written in all platforms over the next several years.
You can increase the speed until the chip melts if you want.
I said AMD's RDNA 2 cards will be within 5% of PS5 speeds. That's because that's the upper limit.
If 2.23GHz is not that big a deal, why did MS stop at 1.825GHz? They could have just put the speed up to 2.2GHz and had a ~14.6TFLOP XSX. I mean, RDNA 2 is RDNA 2.
 

ethomaz

Banned
So you are saying that the PS5 is less than 10.24TFLOPs and is putting out less power than the XSX pound for pound.
That's literally what you are saying.
Nope... you really need some reading comprehension classes, I guess.

Each chip has its own limit up to which clock and performance increase in similar proportion... after that the clock increases but performance doesn't keep pace, until it stops increasing at all.

The RX 5700 test at 2.1GHz is already over that limit, making DF's test useless for any conclusion about RDNA 2, which may go well over 2.2GHz just fine.

You can increase the speed until the chip melts if you want.
I said AMD's RDNA 2 cards will be within 5% of PS5 speeds. That's because that's the upper limit.
If 2.23GHz is not that big a deal, why did MS stop at 1.825GHz? They could have just put the speed up to 2.2GHz and had a ~14.6TFLOP XSX. I mean, RDNA 2 is RDNA 2.
Wrong again.

RDNA 2 40CU chips have their own clock limit beyond which performance doesn't scale.
RDNA 2 56CU chips have their own clock limit beyond which performance doesn't scale.
RDNA 2 80CU chips have their own clock limit beyond which performance doesn't scale.

They are, after all, different chips with different sizes.
MS is probably very comfortable with 1.825GHz.
 

Deto

Banned
Nope... you really need some reading comprehension classes, I guess.

Each chip has its own limit up to which clock and performance increase in similar proportion... after that the clock increases but performance doesn't keep pace, until it stops increasing at all.

The RX 5700 test at 2.1GHz is already over that limit, making DF's test useless for any conclusion about RDNA 2, which may go well over 2.2GHz just fine.


Wrong again.

RDNA 2 40CU chips have their own clock limit beyond which performance doesn't scale.
RDNA 2 56CU chips have their own clock limit beyond which performance doesn't scale.
RDNA 2 80CU chips have their own clock limit beyond which performance doesn't scale.

They are, after all, different chips with different sizes.
MS is probably very comfortable with 1.825GHz.


The fact is:

it is stupid or dishonest to say the clock increase on a console is of no use based on the Overclock of a PC video card of ANOTHER architecture.


It's quite simple.

Anyone who doesn't understand is either too dumb or too dishonest.
 

TLZ

Banned
You can increase the speed until the chip melts if you want.
I said AMD's RDNA 2 cards will be within 5% of PS5 speeds. That's because that's the upper limit.
If 2.23GHz is not that big a deal, why did MS stop at 1.825GHz? They could have just put the speed up to 2.2GHz and had a ~14.6TFLOP XSX. I mean, RDNA 2 is RDNA 2.
XSX is using 52 CUs compared to PS5's 36. Either higher CU count, or higher speed. Pick one. Each console took a different route.
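And the paper math shows the trade-off plainly: TFLOPs = CUs x 64 ALUs x 2 ops x clock, so 36 x 64 x 2 x 2.23GHz ≈ 10.28TF for PS5 and 52 x 64 x 2 x 1.825GHz ≈ 12.15TF for XSX. Same formula, different lever.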
 

rnlval

Member
Saying someone is biased does not mean they never say anything true; it means they often spin things in a specific direction. DF has a history of being very optimistic about Xbox and very 'concerned' about PlayStation, going back a long time.

I mean, look at the two best looking launch PS4 and Xbox games (Killzone and Ryse): they praised the 900p, sub-30fps performance in Ryse while complaining about the unlocked fps in Killzone: SF (26fps vs 45ish).


And they achieved none!
XSX GPU has up to 560 GB/s and 10GB storage for framebuffer render targets.

Framebuffer render targets consume the highest memory bandwidth with relatively small memory storage requirements.
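For a sense of scale (back-of-the-envelope, not from any spec sheet): a single 4K FP16 render target is 3840 x 2160 x 8 bytes ≈ 66MB, which is tiny as storage goes, but read and written a handful of times per frame at 60fps that one target alone can chew through tens of GB/s. That's the "high bandwidth, small footprint" pattern being described.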

Try again.
 