
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

Kusarigama

Member
There is a statement about Sony passing licenses along. I assume Sony needed to pay RAD Game Tools, since Kraken is proprietary.
So are Sony for the PS4 and MS for the Xbox One also paying RAD for Kraken, since the current-gen machines are also able to use Kraken?
It's the game dev's choice: if they use Kraken compression, then they must make sure that decompression is also taken care of. Game development middleware is not like other technologies such as Dolby Atmos and so forth.
 

FranXico

Member
So are Sony for the PS4 and MS for the Xbox One also paying RAD for Kraken, since the current-gen machines are also able to use Kraken?
Oh, do the PS4 and Xbox One also have a dedicated hardware Kraken decompression chip?

I am talking about the PS5 implementing the decompression algorithm in hardware, as should have been obvious.

Edit: I skipped the word "no" in a sentence above. That's probably how you got confused.
 
Last edited:

draliko

Member
So are Sony for the PS4 and MS for the Xbox One also paying RAD for Kraken, since the current-gen machines are also able to use Kraken?
It's the game dev's choice: if they use Kraken compression, then they must make sure that decompression is also taken care of. Game development middleware is not like other technologies such as Dolby Atmos and so forth.
Again, if a game uses SOFTWARE Kraken support, the game developer is paying RAD Game Tools. That's how third-party tools work. It's simple; I don't get why you don't like this. And by the way, they totally are like Dolby: you have to pay for Atmos encoding and decoding, and the same goes for DTS, for Bluetooth, and for pretty much everything, even Wi-Fi.
 
Last edited:

geordiemp

Member
Let me ask you this: do you think Sony is paying RAD for the PS4, as the PS4 also seems to be decompressing Kraken data without a dedicated unit?

Do you think MS is paying for their proprietary compression?

Both consoles are using other companies' technology... FFS...

Move along, nothing to see here.

Some of the fanboy arguments in here take some beating...

Ah, but "console A maybe buys it a dollar cheaper than console B"? YAWN!
 
Last edited:

Kusarigama

Member
Oh, do the PS4 and Xbox One also have a dedicated hardware Kraken decompression chip?

I am talking about the PS5 implementing the decompression algorithm in hardware, as should have been obvious.
There is no decompression algorithm implemented in hardware; it is hardware which can decompress such data faster. Like dedicated ray-tracing hardware: the ray-tracing algorithm is not in the hardware, the game engine provides it, and the hardware is made to handle that algorithm faster.
 

Kusarigama

Member
Do you think MS is paying for their proprietary compression?

Both consoles are using other companies' technology... FFS...

Move along, nothing to see here.

Some of the fanboy arguments in here take some beating...

Ah, but "console A maybe buys it a dollar cheaper than console B"? YAWN!
Don't bring fanboyism into this. It's a question I wasn't even asking you in the first place.

Btw, which side did you think I am on, Xbox fanboy or PlayStation fanboy?
 
Last edited:

draliko

Member
There is no decompression algorithm implemented in hardware. It is hardware which can decompress such data faster. Like dedicated ray-tracing hardware. The ray-tracing algorithm is not in the hardware but the game engine provides the ray-tracing algorithm and the hardware is made to handle that algorithm faster.
that's not how hardware works. To create a piece of hardware that can decompress in hardware you need to know how the algorithm works, so you pretty much need access to the source code. You can't create something that accelerates decompression based on nothing... really, this is not how these things work.
 

Kusarigama

Member
that's not how hardware works. To create a piece of hardware that can decompress in hardware you need to know how the algorithm works, so you pretty much need access to the source code. You can't create something that accelerates decompression based on nothing... really, this is not how these things work.
Dedicated to it doesn't mean you literally have to build that algorithm into it. There are specific types of logic it needs to handle in order to accomplish the task.
 
Last edited:

draliko

Member
Dedicated to it doesn't mean you literally have to build that algorithm into it. There are specific types of logic it needs to handle in order to accomplish the task.
It totally means that. Dedicated hardware support means you have to know the ins and outs to make it work; otherwise it's general software support, pretty much like now: you can decompress using the APU, an x86 general-purpose chip that can adapt to everything you throw at it.
 

Imtjnotu

Member
This is XSX's CG render die shot and PCB. This is not the real world PCB and die shot.

digitalfoundry-playstation-5-vs-xbox-series-x-specs-comparison-cpu-gpu-storage--1584554631072.jpg


My guess is eight normal 32-bit physical PHY GDDR6 controllers (like NAVI 10), with two extra 32-bit PHY GDDR6 controllers above the two CCX CPU modules.

I placed the 2GB chips closer to the CCX CPU modules, i.e. locality design rules?

XSX's CG render die shot reminds me of X1X's die shot layout.

Microsoft-Xbox-One-X-Scorpio-Engine-Hot-Chips-29-02.png



NAVI 10 PCB example

4-1080.254b13a3.jpg
If you watch the DF video you see the final motherboard multiple times, and the RAM setup.
 

Tripolygon

Banned
dedicated to it doesn't mean you have to literally make that algorithm in it. There's specific type of logics that it needs to handle in order to accomplish the task.
Sony created a custom decompressor for a proprietary format. They have to pay the owners of the format or come to some kind of deal, but this means it becomes free for developers to use, since the PS5 SDK will have to ship with the libraries required to use it, as it is part of the hardware. This is a cost Sony has deemed important to take on as developers are starting to use it.

PS5 custom decompressor also supports zlib.
 
Last edited:

B_Boss

Member
That's really nothing more than wishful thinking. The two devices are designed around the same basic technologies so advantages in power will be there for all to see. This is the equivalent of saying "The PS5's SSD is technically superior in every way but many factors are involved in making a specific device "better".

You can trust the specs far more than any spin or commentary that contradicts them.

Well, if a detailed explanation of how the PS5's SSD is superior is given, then ideally, I'd imagine, it would clarify the very factors that result in its superiority (or inferiority, depending, lol).
 
Last edited:

ZywyPL

Banned
Are those still a thing on SSDs? There's no more seek time like on HDDs.

Absolutely. And that's what everyone seems to forget: there's a vast difference between one 10GB file and thousands of tiny 1MB files. MS and Sony, just like any HDD/SSD manufacturer, obviously provided the maximum possible (sequential) value, which rarely represents actual real-life performance. I assume the custom chips built into the consoles will help with that, but still only to a certain degree.
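To make the sequential-vs-small-reads point concrete, here's a toy Python model. The 100 µs per-request latency and the drive's rated speed are illustrative assumptions, not measured console figures; the shape of the result (small reads pay the fixed cost over and over) is the point.

```python
# Toy model: effective throughput collapses as reads get smaller, because
# every request pays a fixed latency cost on top of the byte-transfer time.
# All numbers here are hypothetical, for illustration only.

def effective_throughput_gbs(file_size_mb, rated_gbs, per_read_latency_us):
    """Effective GB/s when reading many independent files of a given size."""
    size_gb = file_size_mb / 1024
    transfer_s = size_gb / rated_gbs          # time spent moving bytes
    overhead_s = per_read_latency_us / 1e6    # fixed cost per request
    return size_gb / (transfer_s + overhead_s)

# One big 10 GB file: the fixed cost is paid once, so it's near rated speed.
# Small files: the fixed cost is paid per file and starts to dominate.
for size_mb in (10 * 1024, 1, 64 / 1024):
    t = effective_throughput_gbs(size_mb, rated_gbs=2.4, per_read_latency_us=100)
    print(f"{size_mb:>8.3f} MB reads -> {t:.2f} GB/s effective")
```

With these made-up numbers, the 10 GB read stays near 2.4 GB/s while 64 KB reads land around a fifth of that, which is the gap the post above is describing.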
 
I'm still mightily confused as to how GCN @ 13.8TF can be worse than RDNA1 @ 9.75TF.

To me, it's like someone telling me that a ton of bricks is heavier than a ton of feathers.

Is a teraflop really such a bad measurement of things? After all, FLOPS is a measurement of floating-point operations per second, and a teraflop is a trillion floating-point operations per second.

Therefore, if one piece of hardware is outputting 13.8TF while the other is outputting 9.75TF, it should follow that the one with the higher number is handling more of these calculations per second and is therefore the better device.

What else is going on to affect the performance?

Another quote, because I just stumbled upon a YouTube video explaining this:

Also, to those here discussing the decompression engines/chips in the next gen: would it be possible for PC mainboard manufacturers to create their own decompression engines for the PC segment?

I mean, this could actually be a selling point if you target gamers as an audience: "This mainboard has a decompression engine for xyz formats, boosting your gaming performance!"
 

rnlval

Member
Another quote, because I just stumbled upon a YouTube video explaining this:

Also, to those here discussing the decompression engines/chips in the next gen: would it be possible for PC mainboard manufacturers to create their own decompression engines for the PC segment?

I mean, this could actually be a selling point if you target gamers as an audience: "This mainboard has a decompression engine for xyz formats, boosting your gaming performance!"

WinZip has GPGPU unzip acceleration, or buy a 12- to 16-core Zen 2.
 
This is XSX's CG render die shot and PCB. This is not the real world PCB and die shot.

digitalfoundry-playstation-5-vs-xbox-series-x-specs-comparison-cpu-gpu-storage--1584554631072.jpg


My guess is eight normal 32-bit physical PHY GDDR6 controllers (like NAVI 10), with two extra 32-bit PHY GDDR6 controllers above the two CCX CPU modules.

I placed the 2GB chips closer to the CCX CPU modules, i.e. locality design rules?

XSX's CG render die shot reminds me of X1X's die shot layout.

Microsoft-Xbox-One-X-Scorpio-Engine-Hot-Chips-29-02.png



NAVI 10 PCB example

4-1080.254b13a3.jpg

uipOmum.jpg


Here's why I brought up the contention earlier; this is a shot of the XSX PCB from DF's teardown video on March 16th. Now, you could be right and this is just a prototype board, but I think most assume this to be the final PCB.

Also when you go to the XSX website page and scroll down a bit, you see this:

5vjWlnD.jpg


A much prettier version of the other shot but essentially the same PCB, and the chip layout for the GDDR6 is exactly the same.

Now, I'd like for the mix of chips to be accessible simultaneously, that'd be great! Whether or not that's something MS's done with the XSX is up for debate. However, it most likely wouldn't be with the chip setup you had in your graphs since we now have two official PCB shots with pretty much final configs that have the GDDR6 arranged differently from the graphs.

not official



How tf are you gonna do a concept vid and not even show off your design?

I'm still mightily confused as to how GCN @ 13.8TF can be worse than RDNA1 @ 9.75TF.

To me, it's like someone telling me that a ton of bricks is heavier than a ton of feathers.

Is a teraflop really such a bad measurement of things? After all, FLOPS is a measurement of floating-point operations per second, and a teraflop is a trillion floating-point operations per second.

Therefore, if one piece of hardware is outputting 13.8TF while the other is outputting 9.75TF, it should follow that the one with the higher number is handling more of these calculations per second and is therefore the better device.

What else is going on to affect the performance?

Well, heat for starters. The 13.8 TF GCN card requires a hell of a lot more power (and thus generates much more heat) than the 9.75 TF RDNA1 card. You can think of the RDNA1 card as basically a 13.2 TF GCN card but with much better thermals, additional architectural features baked into the silicon, and much lower power draw, and thus much less heat.

That's helped by the fact that the RDNA1 card is on a smaller process node, too. To the other question: NO, a teraflop isn't inherently a bad measurement, especially when comparing systems or cards based on the same architecture. However, it only specifies GPU compute throughput, and the GPU still needs data to operate on to achieve that performance. So things like physical memory amount, memory bandwidth, storage I/O (for example, PS5 and XSX are basically taking an approach similar to AMD's SSG card line), and the balance of CPU/GPU contention all matter too.

And when it's between two wildly different architectures? Direct TF comparisons in that case say very little, because those architectures have very different means of producing their claimed amounts of performance.
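For reference, the headline TF number is just arithmetic over the GPU config: FP32 TFLOPS = CUs × 64 lanes × 2 ops per clock (FMA) × clock. The CU counts and clocks below are the publicly announced XSX and PS5 figures; `fp32_tflops` is a hypothetical helper name for the sketch.

```python
# FP32 TFLOPS from a GPU config: CUs x 64 lanes x 2 ops/clock (FMA) x clock.
# This only counts peak ALU throughput; it says nothing about memory,
# I/O, or how efficiently an architecture keeps those ALUs fed.

def fp32_tflops(compute_units, clock_ghz, lanes_per_cu=64, ops_per_clock=2):
    return compute_units * lanes_per_cu * ops_per_clock * clock_ghz / 1000

xsx = fp32_tflops(52, 1.825)  # publicly stated XSX config -> ~12.15 TF
ps5 = fp32_tflops(36, 2.23)   # publicly stated PS5 config  -> ~10.28 TF
print(f"XSX: {xsx:.2f} TF, PS5: {ps5:.2f} TF")
```

Two cards can share a TF figure from this formula and still perform very differently, which is exactly the GCN-vs-RDNA point above.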
 
Last edited:

Kusarigama

Member
Such a waste of time.
I agree.

So many posts, but I still have no idea where you stand or what your perspective or point of view is.
I believe in the vision Sony has shown with the PS5: in particular the ultra-fast boot time, no loading screens (not merely reduced loading times), richer, more immersive audio, and the DualSense controller, while also providing a generational leap in graphical fidelity.
 
Last edited:

rnlval

Member
uipOmum.jpg


Here's why I brought up the contention earlier; this is a shot of the XSX PCB from DF's teardown video on March 16th. Now, you could be right and this is just a prototype board, but I think most assume this to be the final PCB.

Also when you go to the XSX website page and scroll down a bit, you see this:

5vjWlnD.jpg


A much prettier version of the other shot but essentially the same PCB, and the chip layout for the GDDR6 is exactly the same.

Now, I'd like for the mix of chips to be accessible simultaneously, that'd be great! Whether or not that's something MS's done with the XSX is up for debate. However, it most likely wouldn't be with the chip setup you had in your graphs since we now have two official PCB shots with pretty much final configs that have the GDDR6 arranged differently from the graphs.



How tf are you gonna do a concept vid and not even show off your design?
Not enough to identify the GDDR6 model number for each chip.
 

CrysisFreak

Banned
I agree.


I believe in the vision Sony has shown with the PS5: in particular the ultra-fast boot time, no loading screens (not merely reduced loading times), richer, more immersive audio, and the DualSense controller, while also providing a generational leap in graphical fidelity.
I believe in it, too. I just wish we got more glimpses into this future.
It's funny: since 2013 I've been complaining about current-generation load times every week, and now they give me exactly what I wanted. They even go above and beyond.
Cannot wait to play whenever I want to play; no waiting, no loading, no boredom.
Hopefully we can get TLOU2 and GoT out of the way as soon as possible (by enjoying the hell out of them). I believe once that is done, Sony will start talking, as this generation will finally be over (with a bang).
Right now I feel like a starved fucking dog.
It may sound cynical, but I love corona: just rotting at home, coping hard on my Pro, playing Nioh 2 and FF7. But man, if I had a PS5 right now it would be so much better.

There is one worry I do have, the internal restructuring at SIE does not sound good. Layden gone, Hirai gone, Yoshida downgraded to indie stuff.
No need to worry about PS5 tech though, it sounds insane.
 

xool

Member
Meh, 18 days and 120 pages later... did I miss much?

(Apart from the DS5 or whatever it's called.)

Hot take nobody asked for: it looks like ass. Like a cheap body kit. The Select and Share buttons got even smaller; do you need a pen to push them now, like a reset button? How do people with big hands cope with this?

They kept the light, though... why? The VR cam can't even see it now, right?

Transparent face buttons could be cool, if they light up...

bw3u281.jpg

Cool concept.

Though I don't like the DualSense, I think the console could look amazing if it follows the same design cues. Like this fan render: I'd bet some money on the "touchpad gap" being mirrored design-wise in the console, AND on it forming part of the main inlet (or outlet) for airflow (round the sides)...
 
Last edited:

Kusarigama

Member
I believe in it, too. I just wish we got more glimpses into this future.
It's funny: since 2013 I've been complaining about current-generation load times every week, and now they give me exactly what I wanted. They even go above and beyond.
Cannot wait to play whenever I want to play; no waiting, no loading, no boredom.
Hopefully we can get TLOU2 and GoT out of the way as soon as possible (by enjoying the hell out of them). I believe once that is done, Sony will start talking, as this generation will finally be over (with a bang).
Right now I feel like a starved fucking dog.
It may sound cynical, but I love corona: just rotting at home, coping hard on my Pro, playing Nioh 2 and FF7. But man, if I had a PS5 right now it would be so much better.

There is one worry I do have, the internal restructuring at SIE does not sound good. Layden gone, Hirai gone, Yoshida downgraded to indie stuff.
No need to worry about PS5 tech though, it sounds insane.
My biggest complaint was also the loading times! And I am so happy that it is one of the key points of focus in the design of the PS5. I can't wait to just see the games on PS5.
 
Last edited:

Tamy

Banned
more rich immersive audio and the DualSense controller

Do you think that developers of multiplatform games will build their games around this? That they will take the time to build their games around 3D audio and use all those special features of the DualSense controller?
I doubt it: the Xbox supports Dolby Atmos, but there are only a few Dolby Atmos games, and on PS4 not many games supported the touchpad, just as on Xbox not many games supported the impulse triggers.

The thing is, game development is as expensive as ever; I'm not sure whether taking the time to develop features that only a single console can take advantage of is worth it to developers, but we will see.
Hopefully more devs use them, but I really doubt it, since in the past devs did not use those features.
 
Last edited:
I think it's clear the I/O and SSD setup in the PS5 is manifestly superior.

What is that based on? We don't actually have all information on the SSD and I/O for either system, so it seems a bit premature to claim.

If you're just going by paper specs, keep in mind a lot of people have also said to rule out claiming one system as superior to the other just because its on-paper GPU specs (particularly TF) are better.

Just a bit on what we don't know regarding the SSDs for each system:

-Random access latency on first block

-Random access latency (general)

-NAND type (we are assuming QLC but they could be using TLC, or some mix of SLC NAND as a very small cluster)

-Bandwidth (only sequential read speeds have been given, and speed is not the same as bandwidth)

-Page size

-Block size

-Full performance of compression/decompression tools (as a general rule: both systems support zlib, BCPack is superior for texture compression/decompression, and Kraken is better suited for general data. BCPack's top-end compression figure is lower than Kraken's, but Kraken's top-end figure mainly applies to data that can actually compress that well without data integrity loss)

And for I/O:

-Bus contention (Tempest Engine already specified it can use up to 20 GB/s bandwidth; no evidence they are using an Onion bus for it)

-USB hub speeds and latency figures

-Ethernet and Wifi/Bluetooth figures (Bluetooth less so because it's not very power-hungry thankfully).

-Data caching for SSD (to speed up read and write operations)

-SSD bandwidth contention (more directed at the PS5, keeping Cerny's power-limit talk in mind. I'm curious whether the maximum data rate on the SSD will factor into the PS5's variable frequency, since the SSD itself uses a good amount of power and could contribute to the potential 2% frequency drop Cerny mentioned in the presentation).

Basically, like with quite a few other things with these systems, it's better to wait and see, or at least get a lot more specific data before drawing absolute conclusions.
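On the compression point in the list above, a quick zlib demo shows why top-end ratios only apply to data that actually compresses well. The payloads here are made-up stand-ins: a repetitive byte string for redundant game data, and random bytes standing in for already-compressed media.

```python
import os
import zlib

# Compression ratio depends entirely on the data: redundant data compresses
# enormously, while random (or already-compressed) data barely shrinks at all.

repetitive = b"grass_tile_01 " * 10_000  # highly redundant, 140,000 bytes
random_ish = os.urandom(140_000)         # stand-in for compressed textures/audio

for label, payload in [("repetitive", repetitive), ("random", random_ish)]:
    packed = zlib.compress(payload, level=9)
    print(f"{label}: {len(payload)} -> {len(packed)} bytes "
          f"(ratio {len(payload) / len(packed):.2f}x)")
```

The repetitive payload collapses to a tiny fraction of its size while the random one stays essentially as large as it started, which is why a codec's headline ratio says little about average game data.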
 
Last edited:
When you have near-just-in-time access to any data on the 825GB SSD, you don't need a 100GB partition of virtual memory.


This combination of hardware and software to enable near-instant access to data is what Microsoft termed the "Velocity Architecture"; Sony just doesn't give it a fancy name, and the PS5 is faster at it based on the storage and I/O specs.

Even with 100GB sequestered, which would be maybe a whole game in vmem, or maybe two... why would you need to stream at all? The vmem solution is still faster than streaming...

With 100GB sequestered for vmem, the SSD still has 900GB of room (not really, but play along) available for all other purposes...

So you get 560GB/s of RAM, 100GB of vmem, and still 4.8-6GB/s of streaming. What's not to like here? The logo on the outside of the box?
 

Kusarigama

Member
Do you think that developers of multiplatform games will build their games around this? That they will take the time to build their games around 3D audio and use all those special features of the DualSense controller?
I doubt it: the Xbox supports Dolby Atmos, but there are only a few Dolby Atmos games, and on PS4 not many games supported the touchpad, just as on Xbox not many games supported the impulse triggers.

The thing is, game development is as expensive as ever; I'm not sure whether taking the time to develop features that only a single console can take advantage of is worth it to developers, but we will see.
Hopefully more devs use them, but I really doubt it, since in the past devs did not use those features.
The speaker on the DS4 isn't widely used either, but in GoW (2018) the speaker plays a little sound along with a vibration every time you recall the Leviathan Axe. It is so satisfying, it cannot be overstated.

Not all games will use it, but if we don't take risks and innovate, we never truly progress and learn.
 
Even with 100GB sequestered, which would be maybe a whole game in vmem, or maybe two... why would you need to stream at all? The vmem solution is still faster than streaming...

With 100GB sequestered for vmem, the SSD still has 900GB of room (not really, but play along) available for all other purposes...

So you get 560GB/s of RAM, 100GB of vmem, and still 4.8-6GB/s of streaming. What's not to like here? The logo on the outside of the box?

I'm actually kinda curious to learn more about their virtual memory setup since you brought it up. It's pretty interesting all around to see how MS and Sony have taken divergent paths with even the storage systems xD
 

geordiemp

Member
uipOmum.jpg


Here's why I brought up the contention earlier; this is a shot of the XSX PCB from DF's teardown video on March 16th. Now, you could be right and this is just a prototype board, but I think most assume this to be the final PCB.

Also when you go to the XSX website page and scroll down a bit, you see this:

5vjWlnD.jpg


A much prettier version of the other shot but essentially the same PCB, and the chip layout for the GDDR6 is exactly the same.

Now, I'd like for the mix of chips to be accessible simultaneously, that'd be great! Whether or not that's something MS's done with the XSX is up for debate.

On the accessibility: Lady Gaia said that it's unlikely, as it would need many more traces to the RAM chips and memory controller. See below.


3zlOwit.png



In later posts she gave her credentials. She knows this stuff.
 

Gudji

Member
On the accessibility: Lady Gaia said that it's unlikely, as it would need many more traces to the RAM chips and memory controller. See below.


3zlOwit.png



In later posts she gave her credentials. She knows this stuff.

She knows a ton; I speak with her from time to time.

XSX price spotted in a Polish supermarket: approx £482 = $600


You know what time it is...

1049542-kazrollin.jpg
 
Last edited:
What is that based on? We don't actually have all information on the SSD and I/O for either system, so it seems a bit premature to claim.

If you're just going by paper specs, keep in mind a lot of people have also said to rule out claiming one system as superior to the other just because its on-paper GPU specs (particularly TF) are better.

Just a bit on what we don't know regarding the SSDs for each system:

-Random access latency on first block

-Random access latency (general)

-NAND type (we are assuming QLC but they could be using TLC, or some mix of SLC NAND as a very small cluster)

-Bandwidth (only sequential read speeds have been given, and speed is not the same as bandwidth)

-Page size

-Block size

-Full performance of compression/decompression tools (as a general rule: both systems support zlib, BCPack is superior for texture compression/decompression, and Kraken is better suited for general data. BCPack's top-end compression figure is lower than Kraken's, but Kraken's top-end figure mainly applies to data that can actually compress that well without data integrity loss)

And for I/O:

-Bus contention (Tempest Engine already specified it can use up to 20 GB/s bandwidth; no evidence they are using an Onion bus for it)

-USB hub speeds and latency figures

-Ethernet and Wifi/Bluetooth figures (Bluetooth less so because it's not very power-hungry thankfully).

-Data caching for SSD (to speed up read and write operations)

-SSD bandwidth contention (more directed at the PS5, keeping Cerny's power-limit talk in mind. I'm curious whether the maximum data rate on the SSD will factor into the PS5's variable frequency, since the SSD itself uses a good amount of power and could contribute to the potential 2% frequency drop Cerny mentioned in the presentation).

Basically, like with quite a few other things with these systems, it's better to wait and see, or at least get a lot more specific data before drawing absolute conclusions.

Well, what about the 12-channel controller on the custom flash interface? Does the Series X have that?
 
Last edited:

Tripolygon

Banned
Even with 100GB sequestered which would be maybe a whole game in vmem or maybe two.. why would you need to stream at all. The vmem solution is still faster than streaming...

With 100 gb sequestered for vmem the SSD still has 900 gb of ssd room (not really by play along) available to it for all other purposes...

So you get 560gb/s of ram, 100 gb of vmem and still 4.8-6gb/s of streaming. What's there not to like here? The logo on the outside of the box?
You haven't a clue what you're talking about. That 100GB of virtual RAM still has a bandwidth of (drum roll) 2.4GB/s. You've only partitioned some of your main storage so the operating system can move data that is not actively being used to the SSD, while data actively being used remains in RAM. When the GPU needs the data, it has to be moved out of the "virtual RAM" over a 2.4GB/s link.

You're under the impression that the virtual RAM has a bandwidth of 100GB/s?

Logo outside the box? I'm just telling you how this stuff works; you are easily swayed by catchy nomenclature. The Xbox Series X SSD is plenty fast, and it met my expectations for a next-gen SSD. The PS5 just went over my expectations. Both companies had their priorities and design goals.
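Rough arithmetic behind the gap being described here, using the publicly stated figures (2.4 GB/s raw SSD, ~4.8 GB/s typical compressed, 560 GB/s GDDR6); the 1 GB chunk is an arbitrary example size.

```python
# Time to move a chunk of data at each tier's stated bandwidth.
# SSD-backed "virtual RAM" serves data at the drive's rate, not RAM's rate.

def transfer_time_ms(gigabytes, bandwidth_gbs):
    return gigabytes / bandwidth_gbs * 1000

chunk_gb = 1.0  # hypothetical 1 GB streaming request
for label, bw in [("SSD raw (2.4 GB/s)", 2.4),
                  ("SSD compressed (4.8 GB/s)", 4.8),
                  ("GDDR6 (560 GB/s)", 560)]:
    print(f"{label:26s}: {transfer_time_ms(chunk_gb, bw):7.1f} ms")
```

At these figures, 1 GB takes roughly 417 ms from the SSD partition at raw rate versus under 2 ms out of GDDR6, which is why partitioned SSD space is not a substitute for RAM bandwidth.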
 
Last edited:
You haven't a clue what you're talking about. That 100GB of virtual RAM still has a bandwidth of (drum roll) 2.4GB/s. You've only partitioned some of your main storage so the operating system can move data that is not actively being used to the SSD, while data actively being used remains in RAM. When the GPU needs the data, it has to be moved out of the "virtual RAM" over a 2.4GB/s link.

You're under the impression that the virtual RAM has a bandwidth of 100GB/s?

MS has not revealed the technical details of exactly how this partition works. You seem pretty angry, so go back and read through the 1692 pages for your answer.

No one said 100GB/s.

Again, you sound so wrapped up in whether your favorite console has the technology that you think lets it "win" that you are making assumptions about what's possible elsewhere.

There are more than enough posts on this technology in this thread that you can educate yourself.

When we get more insight into this tech from MS, I'm sure everyone will dissect it at that point.

TL;DR

MS has technology to allow direct GPU access to anything within a specified 100GB partition, with no VRAM swap or CPU fetch necessary.

It's a public XSX feature.
 
Last edited:

Bo_Hazem

Banned
Why would you want an optical audio port when higher-quality audio can be sent via HDMI? Source

Because headsets?

417xINupWPL.jpg


553676-AstroHead.jpg


You can also use a very long 3.5mm AUX wire to connect to your TV to share what HDMI is offering, and even that's not guaranteed. New USB 3.1 Gen 2 can provide 10Gbps, though; not sure if that's enough to match/surpass optical audio output. USB 2.0 was 0.48Gbps, USB 3.0 was 5Gbps. The upcoming USB4 is 40Gbps, which matches Apple's Thunderbolt.

Not sure what the speed of optical audio is, but it's probably 10Gbps, just like USB 3.1 Gen 2.
 

SonGoku

Member
AMD GPUs use the "combined scatter" and "combined gather" methods.
This doesn't do what you think it does; it doesn't affect or change how interleaved memory works. It's mainly CPU- and GPGPU-oriented, and it also under-utilizes bandwidth. Again, it has no impact on interleaved memory; nothing changes.
XSX's SSD area has two hardware decompressors which are
It has already been pointed out to you that this is not the case: there is only one decompression unit, and it handles both compression algorithms.
Having two decompressors is not even good design.
Sequential or random?
Good question that neither Sony nor MS touched. I'm guessing streaming data can be prearranged sequentially, and the customizations minimize the impact of non-sequential reads.
My simplified XSX work-in-progress interleaved memory model for the logical single 320-bit channel model.
For 2G GDDR6 chips, I factored the dual 16bit channels and AMD GPU's "combined scatter" memory access patterns.
I'm following this example
There are other details to work out. Without documentation, I don't know XSX's customizations done on the "logical view" to "physical view" resolve map.
You can split each chip using its 16-bit channel addresses for simultaneous access, but you are still working with immutable physical limitations: you are effectively halving memory bandwidth for each respective pool.
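The pool-bandwidth point can be checked against the public XSX figures, since GDDR6 bandwidth is just bus width times per-pin data rate; `bandwidth_gbs` is a sketch helper, and the 14 Gbps per-pin rate and bus widths are the publicly stated XSX numbers.

```python
# Peak bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 (bits->bytes).
# Each memory pool only sees the chips (or chip halves) that back it.

def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits * data_rate_gbps / 8

full_bus  = bandwidth_gbs(320, 14)  # all ten 32-bit chips  -> 560 GB/s (10 GB pool)
slow_pool = bandwidth_gbs(192, 14)  # six chips' upper half -> 336 GB/s (6 GB pool)
print(f"10 GB pool: {full_bus:.0f} GB/s, 6 GB pool: {slow_pool:.0f} GB/s")
```

The two pools can't both run at peak at once: they share the same physical chips, which is the "immutable physical limitations" point above.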
 
Last edited:

draliko

Member
Because headsets?

417xINupWPL.jpg


553676-AstroHead.jpg


You can also use a very long 3.5mm AUX wire to connect to your TV to share what HDMI is offering, and even that's not guaranteed. New USB 3.1 Gen 2 can provide 10Gbps, though; not sure if that's enough to match/surpass optical audio output. USB 2.0 was 0.48Gbps, USB 3.0 was 5Gbps. The upcoming USB4 is 40Gbps, which matches Apple's Thunderbolt.

Not sure what the speed of optical audio is, but it's probably 10Gbps, just like USB 3.1 Gen 2.
Actually, TOSLINK has a data rate of 125Mbit per second, so really low; that's the reason it doesn't support modern audio codecs and got replaced by HDMI long ago. TOSLINK is a dead technology we just didn't want to abandon, and the fault lies with console makers for never offering standard solutions for headsets.
 
Last edited:

xool

Member
Actually, TOSLINK has a data rate of 125Mbit per second, so really low; that's the reason it doesn't support modern audio codecs and got replaced by HDMI long ago. TOSLINK is a dead technology we just didn't want to abandon, and the fault lies with console makers for never offering standard solutions for headsets.
Meh, it can do 2-channel 24-bit 96kHz; I can't hear more, I only have 2 ears.

7.1 etc. is a buzzword on a spec sheet to sell more sound systems.
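A quick sanity check on the numbers in this exchange: uncompressed stereo PCM is tiny next to the ~125 Mbit/s TOSLINK figure quoted above. The real multichannel limit is the S/PDIF frame format, which carries only two PCM channels, so 5.1/7.1 over optical has to be bitstreamed as lossy Dolby Digital/DTS rather than sent as raw PCM.

```python
# Uncompressed PCM bitrate: channels x bits per sample x sample rate.
# Stereo 24-bit/96 kHz fits easily under TOSLINK's ~125 Mbit/s; the
# multichannel limitation is the two-channel S/PDIF frame format itself.

def pcm_mbps(channels, bits_per_sample, sample_rate_hz):
    return channels * bits_per_sample * sample_rate_hz / 1e6

stereo = pcm_mbps(2, 24, 96_000)   # ~4.6 Mbit/s
seven1 = pcm_mbps(8, 24, 96_000)   # ~18.4 Mbit/s uncompressed 7.1
print(f"Stereo 24/96: {stereo:.2f} Mbit/s, 7.1 24/96: {seven1:.2f} Mbit/s")
```

Note that even 7.1 PCM is under 125 Mbit/s on paper; it's the frame format, not raw capacity, that keeps multichannel PCM off optical.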
 
Last edited: