Insane how much time and effort people invest in DualSense mockups lol
Yes, your TV has a designated HDMI port labeled ARC. Then, depending on the maker, the setup is automatic or you need to manually change the audio output on your TV.
Btw, are there any audiophile-approved HDMI/USB to aux boxes for headphones?
So the guy who made that tweet about BCPack is a former MS employee. Good to know
But not only that, there is more.
Moore's Law Is Dead: "MS does not have better compression than Sony, that is BS"
"MS compression is worse according to developers"
Timestamped:
So, what is the consensus on clock rate increasing performance beyond a lower clocked GPU which has the same TFLOP number?
People like to use this as an explanation for why the PS5's GPU will punch above its weight.
Cerny's example quoted a significantly lower clocked GPU; I wonder what the sweet spot is for CUs and clock speed.
I really don't believe that a 36 CU GPU @ 2.23GHz will have more performance than a GPU of 44 CUs @ 1826MHz.
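For what it's worth, the "same TFLOP number" bit is just arithmetic. A quick sketch (the 44 CU / 1826MHz part is the hypothetical from the post above, not a real product):

```python
# TFLOPs for an RDNA-style GPU: CUs * 64 shaders/CU * 2 ops per clock (FMA) * clock.
def tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1e6

print(tflops(36, 2230))  # ~10.3 TF: fewer CUs, higher clock (PS5-style)
print(tflops(44, 1826))  # ~10.3 TF: the wider, lower clocked hypothetical from the post
```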
or as Sony calls it, 825 decimal GB
Doesn't the PlayStation OS use decimal GB?
The hardware and software involved, according to Microsoft, are:
- NVMe SSD,
- a dedicated hardware decompression block,
- the all-new DirectStorage API,
- and Sampler Feedback Streaming (SFS).
No, I won't ignore you, but you are really committed to calling me a fanboy. Lol
I think they knew what they were doing when they oversimplified the explanation like that. Just like adding a 13TF estimate on top of 12 when talking about ray tracing, and that wasn't for clarity.
In certain tasks it will, but in other tasks it won't. Basically, the smaller chip clocked higher can do more batches of smaller operations per second sequentially, while the larger chip clocked lower will do fewer batches per second, but each batch will be bigger (so it can do more per second in parallel).
So you can think of the smaller chip as a highway with fewer lanes but a higher speed limit, and the larger chip as a different highway with more lanes but a lower speed limit. In the case of PS5 and XSX, both highways are trying to reach a destination, so the main question is which approach works out better for the given destination. Some destinations will favor PS5's approach and others will favor XSX's approach.
That's one way to look at it!
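One way to put rough numbers on the highway analogy. This is a toy model only; the lane counts and speeds below are made up for illustration and are not console specs:

```python
# Toy model of the highway analogy: delivered work = usable lanes * speed limit.
# All numbers are invented for illustration, not console specs.
def delivered(lanes: int, speed: float, workload_width: int) -> float:
    return min(lanes, workload_width) * speed

narrow_fast = {"lanes": 36, "speed": 2.23}  # fewer lanes, higher speed limit
wide_slow   = {"lanes": 44, "speed": 1.83}  # more lanes, lower speed limit

for width in (16, 36, 44):  # how much parallelism the workload actually exposes
    a = delivered(narrow_fast["lanes"], narrow_fast["speed"], width)
    b = delivered(wide_slow["lanes"], wide_slow["speed"], width)
    print(f"workload width {width}: narrow+fast {a:.1f} vs wide+slow {b:.1f}")
```

When the workload can't fill all the lanes, the faster narrow road wins; when it can, the wide road catches up. That's all the analogy is saying.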
I get what you are saying, but that doesn’t really make sense based on where Xbox is at – going into next-gen against a backdrop of PS4 and Switch sales success.
If they’ve got a numbers advantage on anything, then as the challenger for market dominance they would have been singing about it already (IMHO) – especially considering the discourse revelations that would have dovetailed with such Xbox positivity, and would double down on the message that their console will be running a 12 TF workload all the time while PS5 will occasionally be at 10.3 TF.
(AFAIK) They aren’t so interested in convincing the 30+ gamer that is into GAF, Era, DF, etc., doing extensive scrutiny of the hardware/software strategy. They seem more interested in setting the minds of game shop employees (early doors, before things are cleared up), as their personal opinions land the whales (school kids) on a platform IMO and control the future landscape of console brands.
Then why in PC GPUs do we not see higher clocked GPUs perform better than lower clocked ones?
.... but they do?
So, what is the consensus on clock rate increasing performance beyond a lower clocked GPU which has the same TFLOP number?
If it's the exact same GPU architecture, there are two advantages.
Cerny's example quoted a significantly lower clocked GPU; I wonder what the sweet spot is for CUs and clock speed.
Consoles aim for the sweet spot, no point in increasing costs for negligible gains.
I really don't believe that a 36 CU GPU @ 2.23GHz will have more performance than a GPU of 44 CUs @ 1826MHz.
XSX will remain on top; PS5 can't go beyond its established computational limits, it will just get closer to peak (higher VALU utilization) compared to XSX.
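On the "two advantages" point: the parts of the GPU that don't scale with CU count (command processor, rasterizer, ROPs, caches) scale with clock alone, so a narrower chip at a higher clock wins those even when the TFLOP number is close. A rough sketch; the 64 ROPs figure is my assumption for both chips, not a confirmed spec:

```python
# Compute scales with CUs * clock, but clock-bound metrics scale with clock alone.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

def pixel_fill_gpixels(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

# Assumed: 64 ROPs on both chips (illustration only).
print(tflops(36, 2.23),  pixel_fill_gpixels(64, 2.23))   # ~10.3 TF, ~142.7 Gpix/s
print(tflops(52, 1.825), pixel_fill_gpixels(64, 1.825))  # ~12.1 TF, ~116.8 Gpix/s
```

The same kind of gap shows up for anything else tied to the front end rather than to the CU array.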
The gains from BCPACK could be and are expected to be higher than 4.8...
Where did you get this info from?
There are lots of points made like this. For example, the XSX audio block takes the equivalent of 5 Zen cores of load off the CPU according to MS. Sony says theirs replaces the equivalent of 10...
So if GPU clock increase does not yield (varied) performance increase... why do manufacturers release "overclocked" versions?
The 5700 @ 2150MHz and a stock 5700 XT perform about the same. According to Cerny, the higher clocked 5700 should perform better in this situation.
The 5700 is hitting diminishing returns on clocks way beyond its sweet spot for power delivery, logic timing, bandwidth etc.
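For reference, the paper numbers behind that comparison. A sketch only; the 5700 XT clocks below are AMD's rated game/boost clocks, and real sustained clocks vary from card to card:

```python
# Paper TFLOPs: CUs * 64 shaders * 2 ops per clock * clock.
def tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1e6

print(tflops(36, 2150))  # RX 5700 (36 CUs) pushed to 2150MHz -> ~9.9 TF
print(tflops(40, 1905))  # RX 5700 XT (40 CUs) at rated boost  -> ~9.8 TF
print(tflops(40, 1755))  # RX 5700 XT at rated game clock      -> ~9.0 TF
```

So depending on where the XT actually sits, "about the same" is roughly what the paper math predicts anyway.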
I think so. These use "Linux" BSD, which afaik uses decimal GB numbers generally for file listings etc. Windows is different and has always used the bigger binary GB.
So PS5 consumers will see the full 825GB sans OS allocation?
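The decimal vs binary thing is just a units difference; the same 825 GB drive reads differently depending on who's counting:

```python
# Decimal (SI) gigabytes vs binary gibibytes for the advertised 825 GB.
capacity_bytes = 825 * 10**9
print(capacity_bytes / 10**9)  # 825.0  -> "GB" the way Sony advertises it
print(capacity_bytes / 2**30)  # ~768.3 -> what a Windows-style binary "GB" readout shows
```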
That's my understanding as well, but in their rush to give catchy names to things, and by alluding to 100GB of assets rather than saying that the entire SSD can be accessed very fast, they just caused confusion, hence this conversation I'm having here.
Then why in PC GPUs do we not see higher clocked GPUs perform better than lower clocked ones?
If the RDNA 2 chip does allow for greater performance at higher frequency, as Cerny stated it would be "greater than linear performance", and the rumours state that it could be clocked above the PS5's clocks, then why would MS not have gone for the higher frequency on the XSX?
Seeing as it is also made on the same RDNA 2 architecture as the PS5, they would be leaving quite a bit of performance on the table, unless their design for some reason cannot support it?
Well, compared to PCs and current-gen systems, yes, the entire drive can be accessed very fast; it only seems slow in relation to PS5's SSD is all. It's just that the 100 GB partition will likely be specifically set up for game-related texture and data streaming.
So PS5 consumers will see the full 825GB sans OS allocation?
100% no. That's the top end figure before the wear-levelling structures have been factored in (~10%), then the OS on top of the remainder. I'd guess just under 700GB for games (about 640GB binary).
Actually, TOSLINK has a data rate of 125Mbit per second, so really low; that's the reason it doesn't support modern audio codecs and got replaced by HDMI long ago. TOSLINK is a dead technology we didn't want to abandon, and the fault lies with console makers for never offering standard solutions for headsets.
I'm hoping there is no 100GB partition for streaming, that sounds like a drive killer (similar to an SSD cache on a SQL server). Creating a lot of unnecessary writes, copying data from one part of the drive to another. I'm hoping the games are packaged so that the install itself is accessible by the CPU/GPU directly, this would eliminate wasted writes.
Because I actually have to take Cerny's words with a pinch of salt there. You can't actually get greater-than-linear performance because that would mean you're using some other silicon altogether
afaik Cerny never claimed greater than linear
Speaking of MS screwing me over - does anyone know if Series X will support USB Audio ?
100% no. That's the top end figure before the wear-levelling structures have been factored in (~10%), then the OS on top of the remainder. I'd guess just under 700GB for games (about 640GB binary)
40GB OS?
But it does raise the question: if the whole SSD can (potentially) facilitate what we've been attributing to the 100 GB partition the whole time, then what would the 100 GB partition really be used for? Couldn't just be more of the same, otherwise there'd be no reason to highlight it. Guess we'll find out some time xD.
Interesting, so we could expect PS5 to ditch the optical audio as well?
I think so, they already removed it from the slim. Cutting everything they could save money on is probably the road they took.
40GB OS?
I estimated 10. Something wrong with my maths. Add 30GB to that.
Off topic question, how do you timestamp a youtube video then post it here?
Thanks in advance.
The 5700 @ 2150MHz and a stock 5700 XT perform about the same. According to Cerny, the higher clocked 5700 should perform better in this situation.
afaik Cerny never claimed greater than linear
1. This doesn't do what you think it does; it doesn't affect or change how interleaved memory works, it's mainly CPU & GPGPU oriented, it also under-utilizes bandwidth, and again it has no impact on interleaved memory. Nothing changes.
2. It has already been pointed out to you this is not the case; there is only one decompression unit, which handles both compression algorithms. Having two decompressors is not even good design.
1. You haven't countered my arguments.
Absolutely. And that's what everyone seems to forget: there's a vast difference between one 10GB file and thousands of tiny 1MB files. MS and Sony, just like any HDD/SSD producer, provided the maximum (sequential) possible value, which rarely represents actual real-life performance. I assume the custom chips built into the consoles will help with that, but still only to a certain degree.
I think they can achieve such speed by splitting that big file among each lane, so it can be read in parallel and saturate the PCIe bus.
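A crude way to see why one big sequential file beats thousands of small ones. This is a toy model; the per-request overhead below is an invented number, not a measured figure for either console:

```python
# Toy model: every read request pays a fixed overhead before data flows at full speed.
SEQ_SPEED_GBPS = 2.4           # quoted raw sequential figure (GB/s)
PER_REQUEST_OVERHEAD_S = 1e-4  # invented per-request cost (IO stack / controller latency)

def effective_gbps(total_gb: float, file_size_gb: float) -> float:
    requests = total_gb / file_size_gb
    transfer_time = total_gb / SEQ_SPEED_GBPS
    overhead_time = requests * PER_REQUEST_OVERHEAD_S
    return total_gb / (transfer_time + overhead_time)

print(effective_gbps(10, 10))     # one 10 GB file       -> ~2.4 GB/s
print(effective_gbps(10, 0.001))  # 10,000 files of 1 MB -> noticeably less
```

The custom IO hardware on both consoles is there to shrink that per-request cost, which is why the headline sequential numbers only tell part of the story.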
afaik Cerny never claimed greater than linear
I just checked, you are right, he was speaking about non-linear frequency and power consumption. Edit: I think it came from the DF interview with Cerny; not sure if that is what Cerny said or if it's what Richard thinks, that the PS5 should be more capable than its TFLOPs number.
“Sony’s pitch is essentially this: a smaller GPU can be a more nimble, more agile GPU, the inference being that PS5’s graphics core should be able to deliver performance higher than you may expect from a TFLOPs number that doesn’t accurately encompass the capabilities of all parts of the GPU,” Digital Foundry editor Richard Leadbetter explains.
Your argument is bullshit. Who are you? Do you have the authority to override MS's Andrew Goossen?
Nope
It only has one decompression hardware block that handles both Zlib and BCPack; there are no alternative IO paths.
By "second component" they mean in addition to the SSD (2.4GB/s of guaranteed throughput).
6GB/s is just a peak figure the decompression block can handle, just like the 22GB/s on the PS5's.
That 4.8GB/s figure already accounts for BCPack's higher compression (100%) for textures.
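Those figures are all the same arithmetic: raw drive speed times an average compression ratio, capped by whatever the decompression block can sustain. A sketch with the publicly quoted numbers; the ratios are claimed averages, not guarantees:

```python
# Effective streaming rate = raw SSD rate * average compression ratio, capped by the decompressor.
def effective_rate(raw_gbps: float, avg_ratio: float, decomp_cap_gbps: float) -> float:
    return min(raw_gbps * avg_ratio, decomp_cap_gbps)

print(effective_rate(2.4, 2.0, 6.0))   # XSX: 2.4 GB/s raw, ~2:1 claimed average -> 4.8 GB/s
print(effective_rate(5.5, 1.6, 22.0))  # PS5: 5.5 GB/s raw, mid-range Kraken ratio -> ~8.8 GB/s
```

Whether BCPack beats those averages on real texture data is exactly the part nobody outside the two companies can verify yet.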
Xbox Series X price spotted in a Polish supermarket, approx £482 = $600
Xbox Series X – we have confirmation of the price. A lot or a little?
The coronavirus is paralysing the economy and robbing gamers of their most important event (E3), but it probably won't take away their bigg…translate.google.com
Ah okay, that clears things up. It's not just the fact the SSDs are too slow for data to be worked on in the same way as RAM; NAND just has its own quirks that prevent that being the case (granularity of read/write operations being too large, cell integrity degradation with repeated program/erase cycles, etc.).
However, the thing with both systems is that the SSDs are intended more for direct read access by the GPU, CPU, and other chips. So the speeds are fast enough for things such as certain types of texture streaming, streaming of audio assets, etc. Other neat little things as well.
I guess MS just chose the term "virtual memory" because they thought it would be the easy way for most gamers to comprehend the concept? It's not a particularly accurate description though.
I have been on this thread for a while, and I know who questions it & who downplays it. I say what I like based on my observations; whether you like it or not is up to you. I'm allowed to say anything as long as it doesn't go against the thread policy. And I know very well what I'm talking about.
Who is downplaying? The truth is we don't have all the critical info on the SSDs for either system so it's hard to discern what the delta on that front will actually be until we get that information. This is a very sane and rationalized way to look at the situation for the time being.
PS5 SSD will still have the raw advantage, but customizations and optimizations on both ends could either keep the delta the same or have it shrink. It has a probability of happening so it's okay to keep that possibility open. Again, we don't know what specific type of NAND the companies are using (not just in terms of QLC, TLC, MLC etc. but even just the manufacturer part numbers because that could help with finding documentation), we don't know the random access times on first page or block, we don't know the random access figures in general, the latency of the chips, page sizes, block sizes etc.
We don't even know everything about the compression and decompression hardware/software for them yet, or full inner-workings of the flash memory controllers. I don't think questioning these things automatically translates to trying to downplay one system or another. People are allowed to question things like GPU CU cache amounts for the systems (usually in question if XSX has made increases to the cache size to scale with the GPU size and offset compromises with the memory setup and slower GPU clockspeed for example), so questioning the SSD setup in both systems should also be allowed on the table.
Meanwhile you are speculating with some of your own numbers (priority levels for XSX SSD have not been mentioned IIRC), which you're fair to do, but don't feel as if you can throw that type of speculation out there and then get away with insisting people merely speculating on aspects of the SSDs that haven't been divulged yet is them trying to downplay the system with an SSD advantage.
It's not that serious!
I estimated 10. Something wrong with my maths. Add 30GB to that.
90% of 825GB is 742.5GB
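Putting the whole estimate in one place. The ~10% wear-levelling reserve and the 40GB OS figure are this thread's guesses, not official numbers:

```python
# Rough usable-space estimate for the advertised 825 GB (decimal) drive.
raw_gb = 825                     # advertised capacity, decimal GB
after_reserve = raw_gb * 0.90    # assumed ~10% wear-levelling / over-provisioning
after_os = after_reserve - 40    # assumed ~40 GB OS reservation (thread guess)

print(after_reserve)             # 742.5 GB
print(after_os)                  # ~702.5 decimal GB left for games
print(after_os * 10**9 / 2**30)  # ~654 in binary "GB" (GiB), Windows-style reporting
```

So "just under 700GB decimal, somewhere in the mid-600s binary" is where those assumptions land.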
1. Your response wasn't relevant to what I said; you name-dropped a random technique that has nothing to do with interleaved memory. For simultaneous GPU/CPU access using a 16-bit address you are effectively halving their respective bandwidth: 280GB/s (GPU) & 168GB/s (CPU). Physical limitation of each chip.
1. You haven't countered my arguments.
2. How are you? Disprove this.
"The decompression hardware supports Zlib for general data and a new compression [system] called BCPack that is tailored to the GPU textures that typically comprise the vast majority of a game's package size."
One decompression unit handles both compression algorithms
Well no... If you can have a full Atmos setup at your home, full speakers I mean, it is something beautiful. But I know people that can't tell a 128kbps MP3 from a FLAC, so different solutions for different audiences.
Yeah, 128kbps sounds like shit compared to 320kbps, but higher bitrates like on CD with 1411kbps feel like they're fucking singing next to you! You can hear the slightest soft touches on a guitar string.
But it's like trying to convince someone that 8K is much cleaner and sharper than 4K, when he's still arguing that 1080p is enough.
Yep, I had the chance to test some high grade headphones... And they were something sublime (you need a proper amp and DAC too). But considering that people like how Beats sound... I like how everyone is praising (rightfully? We'll see) Sony's audio solution and then the same person will use TV speakers or Beats cans...
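The 1411kbps figure is just CD PCM arithmetic, for anyone wondering where it comes from:

```python
# CD audio bitrate: 44.1 kHz sample rate * 16 bits per sample * 2 channels.
cd_bps = 44_100 * 16 * 2
print(cd_bps / 1000)        # 1411.2 kbps
print(cd_bps / 1000 / 128)  # ~11x the bits of a 128 kbps MP3 (before codec efficiency)
print(cd_bps / 1000 / 320)  # ~4.4x a 320 kbps MP3
```

Lossy codecs obviously throw away the bits you're least likely to hear, which is why the gap narrows in practice.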
The decompression hardware has two supported compression formats:
1. ZLib for general datatypes.
2. BCPack for texture datatypes.
Correct.
That's two different ASIC IP blocks, one for each compression format
Wrong. The same hardware block handles both formats
It is not rocket science to detect a compression format's headers
You are the only person I came across with the (incorrect) interpretation that there's two decompression blocks. DF is clear: only one
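On the header-detection point, here's roughly what single-block dispatch looks like in software terms. Purely illustrative: the zlib 0x78 leading byte is real, but `bcpack_decompress` is a made-up placeholder since BCPack's format isn't public, and the real console hardware routes this in silicon, not Python:

```python
import zlib

def bcpack_decompress(data: bytes) -> bytes:
    # Placeholder only: BCPack is not publicly documented.
    raise NotImplementedError("no public BCPack decoder exists")

def decompress(data: bytes) -> bytes:
    # A single decompression path can branch on the stream's header bytes.
    if data[:1] == b"\x78":           # zlib streams start with a 0x78 CMF byte (deflate)
        return zlib.decompress(data)
    return bcpack_decompress(data)    # anything else goes to the texture codec

# Works on a real zlib stream:
print(decompress(zlib.compress(b"hello textures")))
```

Whether it's one block or two changes die area and scheduling, not whether format detection is feasible.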