alabtrosMyster
Member
> But you went on a tangent about the first level meaning of the title, as if that was all you were arguing with.

Ok. I never said the opposite.
The title is a lie... that is what I said.
> Tangent? Please read my posts.

But you went on a tangent about the first level meaning of the title, as if that was all you were arguing with.
> Well, the very basic argument that tflops are tflops... Obviously.

Tangent? Please read my posts.
> The title is a lie.
> There are flops... it is a measurable metric.

Well, the very basic argument that tflops are tflops... Obviously.
The title is a lie only if everyone understands flops in their full context (they don't). So in a way it's a "lie", but I would not have picked that title; it doesn't convey what's in the video very well.
You and a bunch of others commenting without even watching the video are fucking assholes.
You guys are so hung up on the phrase "it's a lie". It's not meant to be taken literally, and it's supposed to fit within certain character constraints.
"Tflops is a lie" is probably a better title than "tflops alone do not determine the actual performance of a video game engine on a particular piece of graphics hardware, and the architecture of that hardware can make a huge difference in how much of that theoretical throughput can actually be tapped into".
Couldn't have said any better.

If you want to compare "leaked" (alleged) specs, the difference in bus width on the Series X is potentially a huge difference-maker in its favour. Honestly, based purely on that information it should be way faster than the PS5, but then again I've never argued that wasn't the case. My main thought was that a SKU with those specs is a far more costly device to manufacture, which, when passed on to the consumer in terms of RRP, could make it restrictively expensive and as a result unpopular at retail.

As NX points out in his video, this is not about console-war bollocks. It's about misinformed people taking a single metric and conferring significance upon it, in terms of overall system performance (or lack thereof), that it doesn't warrant in actuality.
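To put the bus-width point in numbers: peak GDDR6 bandwidth is roughly (bus width in bits / 8) times the per-pin data rate. A minimal sketch with purely illustrative figures (14 Gbps GDDR6 on hypothetical 320-bit and 256-bit buses, not confirmed console specs):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Peak bandwidth in GB/s: bus width / 8 gives bytes moved per transfer,
    # multiplied by the per-pin data rate in Gb/s.
    return bus_width_bits / 8 * data_rate_gbps

# Hypothetical GDDR6 configurations, for illustration only:
print(bandwidth_gbs(320, 14.0))  # 560.0 GB/s on a 320-bit bus
print(bandwidth_gbs(256, 14.0))  # 448.0 GB/s on a 256-bit bus
```

A wider bus at the same memory speed buys bandwidth directly, which is why the rumoured bus widths get as much attention in this thread as the TF figures.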
> You guys are so hung up on the phrase "it's a lie". It's not meant to be taken literally, and it's supposed to fit within certain character constraints.
> "Tflops is a lie" is probably a better title than "tflops alone do not determine the actual performance of a video game engine on a particular piece of graphics hardware, and the architecture of that hardware can make a huge difference in how much of that theoretical throughput can actually be tapped into".
Except MS focused the XDK in conjunction with the XB1X to try to eliminate all bottlenecks from the engines.
Do you really think they are going to sacrifice all the research now in order to brute force them?
I can’t imagine how high some of you have to be to imagine this shit.
> BUT to get back to the topic, I think it is safe to assume that the TFLOP difference will be the most telling this generation as both companies will try to have no bottlenecks.

I am pretty sure that all console vendors have always tried to have no bottlenecks. It is not trivial to ensure, because a console is a system to be used for many years, with techniques that are often not yet popularised at launch, so a system that appears well balanced pre-launch may exhibit unexpected bottlenecks in real usage.
> Except MS focused the XDK in conjunction with the XB1X to try to eliminate all bottlenecks from the engines.
> Do you really think they are going to sacrifice all the research now in order to brute force them?
> I can’t imagine how high some of you have to be to imagine this shit.

From the creators of DirectX, who make only balanced systems, lol.
> Anyone? If Series X has 56 CUs and PS5 has 40 CUs with 64 ROPs per the GitHub leak, is it possible that Series X has more than 64 ROPs, seeing there is a jump in ROPs at certain CU counts? Let's say 96 ROPs, for example.

If you have 32 ROPs per Shader Engine in RDNA, then 56 CUs doesn't match with 96 ROPs... it is either 54 CUs or 60 CUs for 96 ROPs.
> If you have 32 ROPs per Shader Engine in RDNA, then 56 CUs doesn't match with 96 ROPs... it is either 54 CUs or 60 CUs for 96 ROPs.

Thank you.

56 CUs can match with 128 ROPs and 4 Shader Engines.
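A quick sanity check of that arithmetic, assuming, as the posts above do, 32 ROPs per shader engine and CUs split evenly across engines (these RDNA details come from the thread's speculation, not confirmed specs):

```python
ROPS_PER_SE = 32  # the thread's assumption for RDNA

# For each shader-engine count, show total ROPs and which CU totals
# near the rumoured 56 divide evenly across the engines.
for engines in (2, 3, 4):
    rops = engines * ROPS_PER_SE
    even_cus = [cus for cus in range(40, 65) if cus % engines == 0]
    print(f"{engines} SEs: {rops} ROPs, even CU splits: {even_cus}")

# 3 SEs give 96 ROPs, but 56 CUs don't divide by 3 (54 or 60 would);
# 4 SEs give 128 ROPs, and 56 CUs split evenly as 14 per engine.
```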
> Thank you. Could this mean MS also has pixel fill rate covered, even with a GPU clock lower than 2 GHz, if the PS5 64 ROPs leak is true? Don't you think?

Pixel rate will be fine in any case.
> Pixel rate will be fine in any case.

Thanks for your technical insights, learning something new every day!
AMD increased from 16 ROPs per Shader Engine to 32 ROPs per Shader Engine... if PS5 has 2 Shader Engines then it really doesn't need more than 64 ROPs.
I say that because I did not see any test of the RX 5700 showing that pixel fill rate with 64 ROPs is bad for 2 Shader Engines.
Texture fill rate is based on TMUs × clock.

Both Arden and Oberon from the leaked GitHub have 64 ROPs.
Pixel fillrate is clock × ROPs.
Texture fillrate is 4 × CUs × clock.
TFs don't tell the whole story if we are comparing different architectures. For example, the 7.9 TF 5700 easily outperforms the 12.7 TF Vega 64, but the 7.9 TF 5700 won't outperform the 9.5 TF 5700 XT, as they are the same architecture.
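Plugging the RX 5700's public specs into those two formulas (the ~1.725 GHz boost clock is approximate, so treat the outputs as ballpark figures):

```python
def pixel_fillrate_gpix(rops: int, clock_ghz: float) -> float:
    # GPixels/s = ROPs * clock in GHz
    return rops * clock_ghz

def texture_fillrate_gtex(cus: int, clock_ghz: float) -> float:
    # GTexels/s = 4 TMUs per CU * number of CUs * clock in GHz
    return 4 * cus * clock_ghz

# RX 5700: 36 CUs, 64 ROPs, ~1.725 GHz boost clock
print(pixel_fillrate_gpix(64, 1.725))    # ~110 GPixels/s
print(texture_fillrate_gtex(36, 1.725))  # ~248 GTexels/s
```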
> Both Arden and Oberon from the leaked GitHub have 64 ROPs.
> Pixel fillrate is clock × ROPs.
> Texture fillrate is 4 × CUs × clock.
> TFs don't tell the whole story if we are comparing different architectures. For example, the 7.9 TF 5700 easily outperforms the 12.7 TF Vega 64, but the 7.9 TF 5700 won't outperform the 9.5 TF 5700 XT, as they are the same architecture.

Ah thanks, I didn't spot in the leak that Series X also had 64 ROPs.
> NXGamer man, you dropped a video with a title that's a bit clickbaity during what is already a heated console war, with fanboys going crazy on both sides, so you should have expected that some would get a bit heated over it.
> Basically, those who believe the GitHub leaks will accuse you of doing damage control on behalf of Sony, and those who believe the insiders who say the PS5 is a bit stronger will accuse you of doing damage control for Microsoft.
> Both of these camps either won't watch your video because they think they know what you are trying to say, or they will watch it and not get what is being said.
> Good luck with not getting drowned by fanboys lol.
> BUT to get back to the topic, I think it is safe to assume that the TFLOP difference will be the most telling this generation, as both companies will try to have no bottlenecks. So I think we can expect that, if one of these consoles has a 2 or 3 TF advantage, it will show pretty much 1:1 in multiplatform games, even more so this gen, as it seems they use the same type of RAM, most likely a very similar amount, and they both, again, use the same vendor and architecture.

It is all good, this is not heat at all, it is just conversation, which is great. We need that, we always need that, and I applaud it.
Next NXGamer video title:
E3 was a Lie.
> What happened with PS4 vs Xbox One? Was the 1.84 vs 1.31 teraflop advantage a lie?

The Xbox One GPU relies on a really fast 32 MB of ESRAM: having fast memory helps speed up rendering tasks. It already happened with the 360, which used a GPU around the same level as the PS3's, but the faster eDRAM made full-resolution alpha effects possible at a steadier framerate, while the PS3 usually had lower-resolution effects and worse dips during those moments. (I hope to god no one ever creates a system like the PS3, even if, given the right attention, it was an extreeeemely powerful system.)
> Thank you. Could this mean MS also has pixel fill rate covered, even with a GPU clock lower than 2 GHz, if the PS5 64 ROPs leak is true? Don't you think?

I mean, is that really even that important? ROPs are merely the theoretical ceiling for the output; it doesn't mean they're being used fully or in an actually beneficial capacity.
Sony made sure that their GPU wasn't bound by anything else, so they could take full advantage of the flops they had available to them - the PS4 is a very well balanced system.
> From what data we see (rumors) I don't believe any next-gen console will be RDNA 2.0.

I don't disagree, which brings me to my point: a lesser-flop console doesn't necessarily mean it will have the better design, or vice versa, so at this point it's pretty silly to ignore the fact that we may have consoles with 12 vs 9 TF. That's the discussion.

Taking the Xbox One and PS4 as examples, one could have looked at 1.31 TF and 1.84 TF and shortly after created a whole video just like NXGamer just did, claiming that "the teraflops are a lie". Sure, it might be nice and educational, but the timing would be odd: right when Durango and Orbis leaked, coming out to say "don't be excited that the PS4 might be 1.84 TF compared to the Xbox One's 1.31 TF, because who knows what wizardry MS has in store to make up for it, or maybe Sony screwed up and the PS4 is chock-full of bottlenecks".
The bottom line is: all else equal, the higher teraflops WIN.
What we know about the architecture:
Xbox Series X = RDNA 2.0
PS5 = at least RDNA 1.0, maybe 2.0 (no one is sure)
Xbox Series X = GDDR6
PS5 = GDDR6
What we don't know is how many ROPs either will have, so I'll give you that little uncertainty, but everything else looks equal. Both are shooting for GDDR6, so there won't be a bandwidth issue like there was with the Xbox One; that was the other side of my point. If we had some kind of information pointing to the PS5 having more ROPs, or to it using GDDR6 while the XSX still used GDDR5 or something like that, then it would've made sense to publish this video; otherwise it's defensive nonsense.
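For reference, those headline TF numbers fall out of a simple formula: FP32 TFLOPs = CUs × 64 shaders per CU × 2 ops per clock (FMA) × clock. A minimal sketch with the shipped PS4 and Xbox One figures:

```python
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    # GCN: 64 shaders per CU, 2 FP32 ops (fused multiply-add) per clock
    return cus * 64 * 2 * clock_ghz / 1000

print(round(fp32_tflops(18, 0.800), 2))  # PS4: 18 CUs @ 800 MHz -> 1.84
print(round(fp32_tflops(12, 0.853), 2))  # Xbox One: 12 CUs @ 853 MHz -> 1.31
```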
> Too late, I called this last year in my E3 2019 video and I got hate for that also. Guess I am an insider now!

lol, love your videos my man.
> TFs don't tell the whole story if we are comparing different architectures. For example, the 7.9 TF 5700 easily outperforms the 12.7 TF Vega 64, but the 7.9 TF 5700 won't outperform the 9.5 TF 5700 XT, as they are the same architecture.

While your argument is correct, I am fairly certain that both consoles will have a similar GPU architecture.
> Microsoft did all that clever game analysis to understand how the games and their engines operate, and used it to their advantage in developing the Xbox One X.
> Except MS focused the XDK in conjunction with the XB1X to try to eliminate all bottlenecks from the engines.

Well, this is how Microsoft designed the Xbox One... I have no problem with what others call "brute force", so long as the results are there.
> The bottom line is: all else equal, the higher teraflops WIN.

Yup, and so far it looks like the Series X will win, but I'd say it's not so clear.
> ...don't be excited that the PS4 might be 1.84 TF compared to the Xbox One's 1.31 TF, because who knows what wizardry MS has in store to make up for it, or maybe Sony screwed up and the PS4 is chock-full of bottlenecks.

Actually, that conversation happened shortly after launch (I believe the arguments came partly from the way DF treated the interview with MS's engineer back in the day), and MS insisted on their secret sauce.
From what data we see (rumors) I don't believe any next-gen console will be RDNA 2.0.
They could of course borrow some features from RDNA 2.0, but they will be RDNA 1.0 at the core.
Just like neither mid-gen upgrade was based on Vega... it was all Polaris, with some Vega features (in the Pro's case).
> I still feel Xbox will have more horsepower overall, but Sony has a faster SSD and a better RT solution, whereas Xbox will try to brute-force everything.

This is what I laugh at the most sometimes. Microsoft did all that clever game analysis to understand how the games and their engines operate, and used it to their advantage in developing the Xbox One X, and we are now supposed to believe that Microsoft has decided to just go all sloppy on us and brute-force everything with nothing but power. No clever optimizations, features or techniques, just Hulk-smashing their way through everything. Nope. Some would like to believe that, with the amazing power in the console, Microsoft won't also be smart enough to push it as far as humanly possible, but no such luck. Microsoft will ensure this thing sings, and I doubt they are relying on raw power alone.
> I mean, is that really even that important? ROPs are merely the theoretical ceiling for the output; it doesn't mean they're being used fully or in an actually beneficial capacity.
> The Pro has 64 ROPs and, as a result, a theoretical 21 GPixel/s advantage over the 32-ROP Xbox One X GPU in pixel fillrate. Is the X in any way struggling against the Pro? Not by a country mile, and its practical throughput is far beyond it.

Was asking because I was curious how it all worked, thanks.
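That 21 GPixel/s figure roughly checks out against the two consoles' public GPU clocks (911 MHz on the Pro, 1172 MHz on the One X):

```python
# Pixel fillrate in GPixels/s = ROPs * clock in GHz
pro_gpix = 64 * 0.911   # PS4 Pro: ~58.3 GPixels/s
onex_gpix = 32 * 1.172  # Xbox One X: ~37.5 GPixels/s
print(round(pro_gpix - onex_gpix, 1))  # ~20.8, i.e. the "21 GPixel/s" gap
```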
> Dude should work for The Verge, he has potential.

My mum always said that.
> ...not using an embedded RAM custom saving solution to cut corners, so there's little room for there to be some hidden bottleneck similar to what we saw with the Xbox One.
Actually, having a faster embedded RAM pool doesn't mean your system is bottlenecked, and not having one doesn't mean your system is balanced. Going by the PS2, GameCube and Xbox 360, taking a strong console and adding embedded RAM just makes it fly. On the other side, the Xbox One really needed that RAM pool just to breathe.
On the performance side, having a faster RAM pool for rendering is always a good idea; the problem is that developers would have to spend additional time to use it properly. So I think they might be excluding it on Series X just for ease of development, not for performance reasons.
.......... Deep, deep inside, I was wishing for the new consoles to have something like 1 or 2 additional GB of some alien ultra-fast RAM, just for rendering tasks. But I keep forgetting that consoles have a price xD