
Next-Gen PS5 & XSX |OT| Console tEch threaD


draliko

Member
meh, it can do 2-channel 24-bit 96kHz and I can't hear more.. I only have 2 ears.

7.1 etc is a buzz word on a spec sheet to sell more sound systems.
Well no... Having a full Atmos setup at home is something beautiful, full speakers I mean. But I know people who can't tell a 128kbps MP3 from a FLAC, so different solutions for different audiences.
 

Tripolygon

Banned
MS has not revealed the technical elements of exactly how this partition works. You seem pretty angry so go back and read through the 1692 pages for your answer.

No one said 100GB/s.

Again, you sound so wrapped up in whether your favorite console has the technology that you think allows it to "win", that you are making assumptions about what's possible elsewhere.

There are more than enough posts on this technology in this thread that you can educate yourself.

When we get more insight about this tech from MS, I'm sure everyone will dissect it at that point.

TL;DR

MS has a technology to allow direct GPU access to anything within a specified 100GB partition with no VRAM swap or CPU fetch necessary.

It's a public XSX feature.
All your posts have been Microsoft this and Microsoft that. You asked for information on what Sony is doing that is similar to what Microsoft was doing and I provided it to you.

Let me give you a 101 on what virtual memory is.

You have a main storage that is the SSD and the RAM.

RAM is usually very small in size (16GB) and usually very fast.

Applications (games) are very large in size.

The operating system transfers any data that is needed by the CPU and GPU into the RAM and when it needs more data, it overwrites old data with new data.

This is where virtual RAM comes into play. Instead of overwriting the old data, it moves the old data to the virtual RAM on the SSD and changes the address space to point to the SSD, then new data takes its place in RAM.

If the CPU or GPU needs the data that was moved to the SSD, it first checks the RAM, and if it is not there it checks the SSD; the data is then transferred back.

All you have done is just sequester a partition of your main storage to be used as virtual RAM. The OS knows that there is 100GB of SSD storage that it can use to store inactive data. The speed at which the data is moved is bound by the bandwidth of the storage. It is not a new thing. PS4 does this as well; it has an extra 1GB of slower memory that it moves background applications to when they are not active.
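
A minimal sketch of that swap idea in Python (purely illustrative, tiny page counts, not how either console's OS actually implements it): when RAM is full, the oldest resident page is written out to a reserved area of the SSD, and a later access to it pulls it back into RAM.

from collections import OrderedDict

RAM_CAPACITY = 4          # pages that fit in RAM (tiny number for illustration)

ram = OrderedDict()       # page_id -> data, insertion order ~ age
ssd_swap = {}             # reserved SSD partition standing in for "virtual RAM"

def access(page_id, load_from_disk):
    """Return a page, swapping between RAM and the SSD partition as needed."""
    if page_id in ram:                       # fast path: already resident in RAM
        return ram[page_id]
    if len(ram) >= RAM_CAPACITY:             # RAM full: move the oldest page out
        old_id, old_data = ram.popitem(last=False)
        ssd_swap[old_id] = old_data          # old data goes to the SSD swap area
    data = ssd_swap.pop(page_id, None)       # was it swapped out earlier?
    if data is None:
        data = load_from_disk(page_id)       # cold read from the game install
    ram[page_id] = data                      # new data takes the space in RAM
    return data

# Touch more pages than fit in RAM; older ones spill to the swap area.
for i in range(8):
    access(i, load_from_disk=lambda p: f"asset-{p}")
print(sorted(ram), sorted(ssd_swap))         # pages 4-7 in RAM, 0-3 in swap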
 

Chun Swae

Banned
Exclusive Kraken vs BCPack test footage:

[GIF]

PS5 SSD vs XSX:

[GIF]


Also XSX vs PS5 GPU:

[GIF: angry Hulk smashes Loki]


[GIF: Hulk vs Loki]


:messenger_tears_of_joy:
Now do one about first party games! Haha!
 

CrysisFreak

Banned
Damn, the marketing guys at Xbox have lots of work to do. MS basically showed it all already and we even know the games that will be released in November.

We only have a logo and a gamepad from Sony.
Well now let's just hope that the PS5 launch lineup is better than the not particularly impressive PS4 launch lineup. Yeah, later it became legendary, but the beginnings were humble.
 
Well, what about the 12 channel controller on the custom flash interface? Does Series X have that?

This isn't just about channels. Channels alone don't tell you everything if you don't know the chip density per channel and other parts of the chips such as their latency figures, NAND type, etc.

I don't see why you would choose to just focus on one part of a much bigger picture. Also, you're framing this as a versus-system comparison, which betrays your entire intent behind asking the question, i.e. I'm not sure a genuine discussion on the topic can be had here between you and me.

All your posts have been Microsoft this and Microsoft that. You asked for information on what Sony is doing that is similar to what Microsoft was doing and I provided it to you.

Let me give you a 101 on what virtual memory is.

You have a main storage that is the SSD and the RAM.

RAM is usually very small in size (16GB) and usually very fast.

Applications (games) are very large in size.

The operating system transfers any data that is needed by the CPU and GPU into the RAM and when it needs more data, it overwrites old data with new data.

This is where virtual RAM comes into play. Instead of overwriting the old data, it moves the old data to the virtual RAM on the SSD and changes the address space to point to the SSD, then new data takes its place in RAM.

If the CPU or GPU needs the data that was moved to the SSD, it first checks the RAM, and if it is not there it checks the SSD; the data is then transferred back.

All you have done is just sequester a partition of your main storage to be used as virtual RAM. The OS knows that there is 100GB of SSD storage that it can use to store inactive data. The speed at which the data is moved is bound by the bandwidth of the storage. It is not a new thing. PS4 does this as well; it has an extra 1GB of slower memory that it moves background applications to when they are not active.

FYI, it's a bit mistaken to call what PS4 does the same thing, because it's using actual RAM (DDR3) to offload those processes to. The whole idea behind virtual memory (RAM) is that the technology being used isn't actually RAM, but is being made to serve a purpose somewhat analogous to it.

It's been a concept since the days of platter-based HDDs so it isn't inherently new.
 
All your posts have been Microsoft this and Microsoft that. You asked for information on what Sony is doing that is similar to what Microsoft was doing and I provided it to you.

Let me give you a 101 on what virtual memory is.

You have a main storage that is the SSD and the RAM.

RAM is usually very small in size (16GB) and usually very fast.

Applications (games) are very large in size.

The operating system transfers any data that is needed by the CPU and GPU into the RAM and when it needs more data, it overwrites old data with new data.

This is where virtual RAM comes into play. Instead of overwriting the old data, it moves the old data to the virtual RAM on the SSD and changes the address space to point to the SSD, then new data takes its place in RAM.

If the CPU or GPU needs the data that was moved to the SSD, it first checks the RAM, and if it is not there it checks the SSD; the data is then transferred back.

All you have done is just sequester a partition of your main storage to be used as virtual RAM. The OS knows that there is 100GB of SSD storage that it can use to store inactive data. The speed at which the data is moved is bound by the bandwidth of the storage. It is not a new thing. PS4 does this as well; it has an extra 1GB of slower memory that it moves background applications to when they are not active.

There is a heck of a lot more custom silicon, low-level controller access, and API extension involved in the XSX architecture that you are ignoring and that we don't have a deep dive into yet.

But thank you for taking me back to my college CS courses and explaining vmem to me.

Feel free to ignore or block me if you feel like I talk too much about the console you either aren't interested in or don't prefer.
 

Tripolygon

Banned
FYI, it's a bit mistaken to call what PS4 does the same thing, because it's using actual RAM (DDR3) to offload those processes to. The whole idea behind virtual memory (RAM) is that the technology being used isn't actually RAM, but is being made to serve a purpose somewhat analogous to it.

It's been a concept since the days of platter-based HDDs so it isn't inherently new.
I know, I used it as an example of inactive data being moved to a slower memory to free up space in the main RAM.
 
Nope.
It only has one decompression hardware block that handles both zlib and BCPack; there are no alternative IO paths.


By "second component" they mean in addition to the SSD (2.4GB/s of guaranteed throughput).
6GB/s is just a peak figure the decompression block can handle, just like 22GB/s on the PS5's.

That 4.8GB/s figure already accounts for BCPack's higher compression (100%) on textures.

The gains from BCPack could be, and are expected to be, higher than 4.8...
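
For what it's worth, the compressed figures are just the raw link speed multiplied by whatever average compression ratio the decompression block sees. A quick back-of-the-envelope in Python (the ratios here are assumptions purely to show how the quoted numbers relate):

def effective_throughput(raw_gb_s, compression_ratio):
    """Delivered bandwidth after the decompression block expands the data."""
    return raw_gb_s * compression_ratio

print(effective_throughput(2.4, 2.0))   # 4.8  -> the usual XSX compressed figure
print(effective_throughput(2.4, 2.5))   # 6.0  -> peak the XSX decompressor can handle
print(effective_throughput(5.5, 1.6))   # 8.8  -> within Sony's quoted 8-9 GB/s range
print(effective_throughput(5.5, 4.0))   # 22.0 -> Kraken's quoted peak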
 
It is not actually channels but PCI Express lanes.
It is between the SSD and the IO controller.

Xbox probably uses the standard 4 lanes.

Both will only be using 4 PCIe lanes from the controller to the APU; this is almost assured by both offering an expansion port. Also, using more would be wasteful since the theoretical peaks of both drives are well within the limits of Gen4 x4. The number of channels is between the controller and the NAND, and is not directly related to the PCIe bus.
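
A tiny sanity check in Python on the lane math (the per-lane figure is the standard PCIe 4.0 rate of 16 GT/s with 128b/130b encoding, roughly 1.97GB/s): both drives' raw peaks sit comfortably inside a Gen4 x4 link, which is why extra host lanes would buy nothing.

PCIE4_GB_S_PER_LANE = 16 * 128 / 130 / 8   # 16 GT/s, 128b/130b encoding -> ~1.97 GB/s

def link_budget(lanes):
    """Theoretical usable bandwidth of a PCIe 4.0 link with the given lane count."""
    return lanes * PCIE4_GB_S_PER_LANE

gen4_x4 = link_budget(4)                    # ~7.88 GB/s
for name, raw_gb_s in [("XSX SSD", 2.4), ("PS5 SSD", 5.5)]:
    headroom = gen4_x4 - raw_gb_s
    print(f"{name}: {raw_gb_s} GB/s raw, ~{headroom:.2f} GB/s of x4 headroom left")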
 

ethomaz

Banned
Both will only be using 4 PCIe lanes from the controller to the APU; this is almost assured by both offering an expansion port. Also, using more would be wasteful since the theoretical peaks of both drives are well within the limits of Gen4 x4. The number of channels is between the controller and the NAND, and is not directly related to the PCIe bus.
Right, I misunderstood what you guys were saying.
 

husomc

Member
All your posts have been Microsoft this and Microsoft that. You asked for information on what Sony is doing that is similar to what Microsoft was doing and I provided it to you.

Let me give you a 101 on what virtual memory is.

You have a main storage that is the SSD and the RAM.

RAM is usually very small in size (16GB) and usually very fast.

Applications (games) are very large in size.

The operating system transfers any data that is needed by the CPU and GPU into the RAM and when it needs more data, it overwrites old data with new data.

This is where virtual RAM comes into play. Instead of overwriting the old data, it moves the old data to the virtual RAM on the SSD and changes the address space to point to the SSD, then new data takes its place in RAM.

If the CPU or GPU needs the data that was moved to the SSD, it first checks the RAM, and if it is not there it checks the SSD; the data is then transferred back.

All you have done is just sequester a partition of your main storage to be used as virtual RAM. The OS knows that there is 100GB of SSD storage that it can use to store inactive data. The speed at which the data is moved is bound by the bandwidth of the storage. It is not a new thing. PS4 does this as well; it has an extra 1GB of slower memory that it moves background applications to when they are not active.
Thanks for clarifying. After reading all his posts I was under the impression that the SEX for sure had 100GB of system RAM hidden in the CPU or something :messenger_tears_of_joy::messenger_tears_of_joy:
 

Tripolygon

Banned
There is a heck of a lot more custom silicon, low-level controller access, and API extension involved in the XSX architecture that you are ignoring and that we don't have a deep dive into yet.

But thank you for taking me back to my college CS courses and explaining vmem to me.

Feel free to ignore or block me if you feel like I talk too much about the console you either aren't interested in or don't prefer.
The hardware and software involved according to Microsoft are
  1. NVMe SSD,
  2. a dedicated hardware decompression block,
  3. the all new DirectStorage API,
  4. and Sampler Feedback Streaming (SFS).
No, I won't ignore you, but you are really committed to calling me a fanboy. Lol
 
I know, I used it as an example of inactive data being moved to a slower memory to free up space in the main RAM.

Okay. Still though, it's not exactly comparable to what MS seem to be doing with the 100 GB pool on their SSD. This is a direct quote from their page:

Enter Xbox Velocity Architecture, which features tight integration between hardware and software and is a revolutionary new architecture optimized for streaming of in game assets. This will unlock new capabilities that have never been seen before in console development, allowing 100 GB of game assets to be instantly accessible by the developer. The components of the Xbox Velocity Architecture all combine to create an effective multiplier on physical memory that is, quite literally, a game changer.

Bolded the important parts. The 100 GB pool is accessible by the GPU directly for streaming of texture and other data assets, effectively the same as PS5 albeit slower. It's not the idea of simply dumping non-essential assets from the RAM to a specifically partitioned portion of the NAND memory on the SSD drive to swap back into RAM and THEN be operated on by the CPU and GPU, as one of your other responses seemed to indicate.
 

M-V2

Member
Many people here are UNFORTUNATELY trying so hard to downplay the PS5, and they try to spin it so the SSD in the SX is on par with or closer to the PS5's, when Mark Cerny spent half the presentation talking about the SSD and how he designed the console around that SSD, yet some people are trying to be tech experts when they don't understand a thing.


Just accept the fact that the SX is better in teraflops; you should play around that area. Don't play on the SSD part when it comes to the SX, because the SSD on that thing is slow compared to what the PS5 has. End of story.

12 channels vs 3 or 4
6 priorities vs 2
5.5GB/s vs 2.4GB/s (raw)
8-9GB/s vs 4.8GB/s (compressed)

These numbers are not out of my pocket, so please stop spinning it, because it sounds pitiful.
 

Tripolygon

Banned
Bolded the important parts. The 100 GB pool is accessible by the GPU directly for streaming of texture and other data assets, effectively the same as PS5 albeit slower. It's not the idea of simply dumping non-essential assets from the RAM to a specifically partitioned portion of the NAND memory on the SSD drive to swap back into RAM and THEN be operated on by the CPU and GPU, as one of your other responses seemed to indicate.
Correct, that's what I said (not referring to the bolded but to your general sentiment). But you would have to take that up with Microsoft because they've been calling it virtual memory. I never said the data will be worked on from the SSD, I said transferred. SSDs are too slow to be used as such; Microsoft is the one alluding to that.
 

SonGoku

Member
No. Those are theoretical compressed max numbers without BCpack, to be directly compared with 22GB/s. And I think BCPack is not a lossless compression
I agree 4.8GB/s is quite an optimal/ideal figure (100% compression on textures only) that won't be consistently reached, but I still think it will be more commonly reached than 22GB/s.
 
Many people here are UNFORTUNATELY trying so hard to downplay the PS5, and they try to spin it so the SSD in the SX is on par with or closer to the PS5's, when Mark Cerny spent half the presentation talking about the SSD and how he designed the console around that SSD, yet some people are trying to be tech experts when they don't understand a thing.


Just accept the fact that the SX is better in teraflops; you should play around that area. Don't play on the SSD part when it comes to the SX, because the SSD on that thing is slow compared to what the PS5 has. End of story.

12 channels vs 3 or 4
6 priorities vs 2
5.5GB/s vs 2.4GB/s (raw)
8-9GB/s vs 4.8GB/s (compressed)

These numbers are not out of my pocket, so please stop spinning it, because it sounds pitiful.

Who is questioning the stated specs? Also, it's not possible to reach 1TB with 3 NAND chips (at least not in a symmetrical fashion), so I think it's safe to assume that 4 NAND chips are in use.
 
Correct, that's what I said. But you would have to take that up with Microsoft because they've been calling it virtual memory. I never said the data will be worked on from the SSD, I said transferred. SSDs are too slow to be used as such.

Ah okay, that clears things up. It's not just the fact that the SSDs are too slow for data to be worked on in the same way as RAM, but NAND just has its own quirks that prevent that being the case (granularity of read/write operations being too large, cell integrity degradation with prolonged program/erase cycles, etc.).

However, the thing with both systems is that the SSDs are intended more for direct read access by the GPU, CPU, and other chips. So the speeds are fast enough for things such as certain types of texture streaming, streaming of audio assets, etc. Other neat little things as well.

I guess MS just chose the term "virtual memory" because they thought it would be the easy way for most gamers to comprehend the concept? It's not a particularly accurate description though.

Many people here are UNFORTUNATELY trying so hard to downplay the PS5, and they try to spin it so the SSD in the SX is on par with or closer to the PS5's, when Mark Cerny spent half the presentation talking about the SSD and how he designed the console around that SSD, yet some people are trying to be tech experts when they don't understand a thing.


Just accept the fact that the SX is better in teraflops; you should play around that area. Don't play on the SSD part when it comes to the SX, because the SSD on that thing is slow compared to what the PS5 has. End of story.

12 channels vs 3 or 4
6 priorities vs 2
5.5GB/s vs 2.4GB/s (raw)
8-9GB/s vs 4.8GB/s (compressed)

These numbers are not out of my pocket, so please stop spinning it, because it sounds pitiful.

Who is downplaying? The truth is we don't have all the critical info on the SSDs for either system, so it's hard to discern what the delta on that front will actually be until we get that information. This is a very sane and rational way to look at the situation for the time being.

PS5 SSD will still have the raw advantage, but customizations and optimizations on both ends could either keep the delta the same or have it shrink. It has a probability of happening so it's okay to keep that possibility open. Again, we don't know what specific type of NAND the companies are using (not just in terms of QLC, TLC, MLC etc. but even just the manufacturer part numbers because that could help with finding documentation), we don't know the random access times on first page or block, we don't know the random access figures in general, the latency of the chips, page sizes, block sizes etc.

We don't even know everything about the compression and decompression hardware/software for them yet, or the full inner workings of the flash memory controllers. I don't think questioning these things automatically translates to trying to downplay one system or another. People are allowed to question things like GPU CU cache amounts for the systems (usually questioning whether XSX has increased the cache size to scale with the GPU size and offset compromises with the memory setup and the slower GPU clockspeed, for example), so questioning the SSD setup in both systems should also be allowed on the table.

Meanwhile you are speculating with some of your own numbers (priority levels for XSX SSD have not been mentioned IIRC), which you're fair to do, but don't feel as if you can throw that type of speculation out there and then get away with insisting people merely speculating on aspects of the SSDs that haven't been divulged yet is them trying to downplay the system with an SSD advantage.

It's not that serious :LOL:
 

FranXico

Member
I guess MS just chose the term "virtual memory" because they thought it would be the easy way for most gamers to comprehend the concept?
I think they knew what they were doing when they oversimplified the explanation like that. Just like adding a 13TF estimate on top of 12 when talking about ray tracing, and that wasn't for clarity.
 

Tripolygon

Banned
Ah okay, that clears things up. It's not just the fact that the SSDs are too slow for data to be worked on in the same way as RAM, but NAND just has its own quirks that prevent that being the case (granularity of read/write operations being too large, cell integrity degradation with prolonged program/erase cycles, etc.).

However, the thing with both systems is that the SSDs are intended more for direct read access by the GPU, CPU, and other chips. So the speeds are fast enough for things such as certain types of texture streaming, streaming of audio assets, etc. Other neat little things as well.

I guess MS just chose the term "virtual memory" because they thought it would be the easy way for most gamers to comprehend the concept? It's not a particularly accurate description though.
That's my understanding as well, but in their rush to give catchy names to things they alluded to 100GB of assets rather than saying that the entire SSD can be accessed very fast, and that just caused confusion, hence this conversation I'm having here.
 

PaintTinJr

Member
MS has not revealed the technical elements of exactly how this partition works. You seem pretty angry so go back and read through the 1692 pages for your answer.

No one said 100GB/s.

Again, you sound so wrapped up in whether your favorite console has the technology that you think allows it to "win", that you are making assumptions about what's possible elsewhere.

There are more than enough posts on this technology in this thread that you can educate yourself.

When we get more insight about this tech from MS, I'm sure everyone will dissect it at that point.

TL;DR

MS has a technology to allow direct GPU access to anything within a specified 100GB partition with no VRAM swap or CPU fetch necessary.

It's a public XSX feature.
I get what you are saying, but that doesn’t really make sense based on where Xbox is at – going into next-gen against a backdrop of PS4 and Switch sales success.
If they've got a numbers advantage on anything, then as the challenger for market dominance they would have been singing about it already (IMHO), especially when considering the discourse revelations that would have dovetailed with such Xbox positivity, and would double down on the message that their console will be running a 12 TF workload all the time while the PS5 will occasionally be at 10.3 TF.

(AFAIK) They aren't so interested in convincing the 30+ gamer that is into GAF, Era, DF, etc. and doing extensive scrutiny of the hardware/software strategy. They seem more interested in shaping the minds of game shop employees (early doors, before things are cleared up), as their personal opinions land the whales (school kids) on a platform IMO and control the future landscape of console brands.
 

joe_zazen

Member
Again, if a game uses SOFTWARE Kraken support, the game developer is paying RAD Game Tools. That's how 3rd party tools work. It's simple, I don't get why you don't like this. And btw they totally are like Dolby, you have to pay for Atmos encoding and decoding; same goes for DTS or to use BT or pretty much everything, even WiFi.

RAD wants to get paid at both ends for sure. The only way Sony isn't paying RAD for hardware inclusion is if RAD wants to encourage more use by devs, and PS5 inclusion will help with this.
 

joe_zazen

Member
I get what you are saying, but that doesn’t really make sense based on where Xbox is at – going into next-gen against a backdrop of PS4 and Switch sales success.
If they've got a numbers advantage on anything, then as the challenger for market dominance they would have been singing about it already (IMHO), especially when considering the discourse revelations that would have dovetailed with such Xbox positivity, and would double down on the message that their console will be running a 12 TF workload all the time while the PS5 will occasionally be at 10.3 TF.

(AFAIK) They aren't so interested in convincing the 30+ gamer that is into GAF, Era, DF, etc. and doing extensive scrutiny of the hardware/software strategy. They seem more interested in shaping the minds of game shop employees (early doors, before things are cleared up), as their personal opinions land the whales (school kids) on a platform IMO and control the future landscape of console brands.

12>10 is pretty easy to communicate, but do people in general care? How many know how many TFs their phone, console, PC, or tablet have? And of those that do, how many based their purchasing decision on it? I honestly don't know.
 
I think they knew what they were doing when they oversimplified the explanation like that. Just like adding a 13TF estimate on top of 12 when talking about ray tracing, and that wasn't for clarity.

Well they also oversimplified the GPU performance when mentioning '2x power of X' going by Moore's Law, because actual performance is more than 12 TF GCN. Also IIRC, they just mentioned that the customization for the RT was the equivalent of 13 TF if done with general hardware itself.

It was more so some YT channels (mainly some of the Xbox ones) that took that and ran with the 25 TF narrative, which wasn't a smart thing to do and was obviously the wrong way to take it. But a lot of YT gaming channels do that kind of stuff all the time, over-embellishing certain specifications and exaggerating them to no end. Annoying, but enough people seem to know not to buy into it hook, line and sinker.

I get what you are saying, but that doesn’t really make sense based on where Xbox is at – going into next-gen against a backdrop of PS4 and Switch sales success.
If they've got a numbers advantage on anything, then as the challenger for market dominance they would have been singing about it already (IMHO), especially when considering the discourse revelations that would have dovetailed with such Xbox positivity, and would double down on the message that their console will be running a 12 TF workload all the time while the PS5 will occasionally be at 10.3 TF.

(AFAIK) They aren't so interested in convincing the 30+ gamer that is into GAF, Era, DF, etc. and doing extensive scrutiny of the hardware/software strategy. They seem more interested in shaping the minds of game shop employees (early doors, before things are cleared up), as their personal opinions land the whales (school kids) on a platform IMO and control the future landscape of console brands.

It's a little more complicated than that. I think, after the Road to PS5 presentation, focus regarding tech for the systems has kind of shifted towards optimizations rather than just pure raw numbers. The impression being this (somewhat false) narrative that customization/optimization beats brute force, and that MS are just "brute forcing" their way to a next-gen system solution. The "fast and narrow" versus "wide and slow" stuff also kind of plays off of that.

The truth is, both systems have areas where they are just pure, raw stronger in than the other (brute force), and many areas where they are elegantly optimized in as well. Both systems have aspects of their architecture that are a mix of being wide & slow plus narrow & fast. It's not the simple case that only one system is ONLY this thing and the other system is ONLY that other thing, though a lot of people tend to distill it to that simplicity for narrative purposes (I don't mean narrative in the context of "agenda" here, just narrative in a storytelling POV, i.e it's something established in storytelling techniques to paint stronger contrasts because those can have more impact on the audience).
 

B_Boss

Member
The speaker on the DS4 isn't widely used either, but in GoW (2018) the speaker plays a little sound along with vibration every time we recall the Leviathan Axe. It is so satisfying, it cannot be overstated.

Not all games will use it but if we don't take risks and innovate, we never truly progress and learn.

Oh my favorite way that it is used is in The Division/2. You can set it so that the AI system that assists your agent in-game (ISAC) speaks through the DS4’s speaker lol. It’s awesome personally.
 

PaintTinJr

Member
12>10 is pretty easy to communicate, but do people in general care? How many know how many TFs their phone, console, PC, or tablet have? And of those that do, how many based their purchasing decision on it? I honestly don't know.

I agree, but when a parent asks a game shop employee which is best (brand? console SKU? value for money?), that's the difference for a successful Lockhart selling the games that Xbox makes, because very few kids will get bought both consoles even if they really want to play games that are on the other, and pocket money seems to fit with lootbox spending, helping make great profit without needing to spend tens of millions on risky AAA games IMHO.
 

Sosokrates

Report me if I continue to console war
So, what is the consensus on clockrate increasing performance beyond a lower-clocked GPU with the same teraflop number?

People like to use this as an explanation for why the PS5's GPU will punch above its weight.

Cerny's example quoted a significantly lower-clocked GPU; I wonder what the sweet spot is for CUs and clockspeed.

I really don't believe that 36 CUs @ 2.23GHz will have more performance than a GPU with 44 CUs @ 1826MHz.
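
For what it's worth, running both of those configurations through the usual paper-TFLOPS formula (CUs x 64 shaders per CU x 2 ops per clock x clock) lands them at essentially the same headline number, so any real difference would have to come from how clocks versus width affect the rest of the GPU (caches, front end, etc.), not from the TF figure itself. A quick Python sketch:

def paper_tflops(cus, clock_ghz):
    """Headline TFLOPS: CUs x 64 shaders per CU x 2 ops per clock x clock (GHz)."""
    return cus * 64 * 2 * clock_ghz / 1000.0

print(paper_tflops(36, 2.23))    # ~10.28 TF (narrow and fast, PS5-like)
print(paper_tflops(44, 1.826))   # ~10.28 TF (wider and slower, same paper number)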
 