
[Digital Foundry] Xbox Series X Complete Specs + Ray Tracing/Gears 5/Back-Compat/Quick Resume Demo Showcase!


Deleted member 775630

Unconfirmed Member
LOD and pop-in haven't been an issue on PC since the OG XB1 and PS4. SSDs have been around for a while, you know. I would ask for examples, but I doubt you saw anything next-gen shown the other day. I've only seen last-gen games and nothing impressive or new. It all sounds good on paper, but until you or I have seen proof, it would make more sense to wait and see than to hype all of this up.
But you are not waiting... You are already claiming that it's not enough
 

pawel86ck

Banned
Good luck with bottlenecks if you think it'll work like that. SSDs will never be anywhere near the speed of RAM. Especially when you have to cram all of the graphical data and everything else in, while trying to move slower data through that same pipeline. Frame pacing galore. The best engineering can't beat physics.

Also, what about the read/write cycles on the SSD? Wouldn't that increasingly degrade the drive? And if they are going to stream directly from the drive like that, there would need to be a partition specifically for that, and I doubt it would magically grow in size.


LOD and pop-in haven't been an issue on PC since the OG XB1 and PS4. SSDs have been around for a while, you know. I would ask for examples, but I doubt you saw anything next-gen shown the other day. I've only seen last-gen games and nothing impressive or new. It all sounds good on paper, but until you or I have seen proof, it would make more sense to wait and see than to hype all of this up.
SSD durability isn't affected by reading. Also, memory speed won't be a problem, because the same 16GB of memory will be used, just 2-3x more efficiently than before. According to MS engineers, without the Velocity Architecture only about 1/3 of allocated memory is used to render what's on screen, and now all available RAM will finally be used fully. So technically speaking there won't be 32-48GB of RAM in the XSX, but on PC developers would need 2-3x more physical memory than the XSX just to accomplish the same thing.
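The arithmetic behind that claim can be sketched in a few lines. The 1/3 utilization figure and the resulting multiplier are the poster's numbers, not confirmed specs:

```python
# Back-of-envelope version of the claim above. All inputs are the
# poster's figures, not official Microsoft numbers.
console_ram_gb = 16          # XSX total GDDR6
legacy_utilization = 1 / 3   # claimed fraction of allocated memory actually used
velocity_utilization = 1.0   # claimed utilization with the Velocity Architecture

# RAM a PC would need at legacy utilization to match the XSX working set:
pc_equivalent_gb = console_ram_gb * velocity_utilization / legacy_utilization
print(pc_equivalent_gb)  # 48.0
```

That's where the "32-48GB equivalent" range comes from: 2x efficiency gives 32GB, 3x gives 48GB.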
 

VFXVeteran

Banned
You seem confused, and intent on making this about yourself. Let's revisit what was said:
-I stated the following facts:
1. DF said benchmark results for Gears 5 were like 2080.
2. Consoles aren't like PC with memory requirement. Try using RX 480/580 8GB with 4GB system RAM and find out.
3. Path tracing is computationally expensive.

You respond:
1. MS memory saving techniques are "Nothing"
2. "16G of memory isn't enough for the visuals that everyone wants."
3. "1080p will surely be the standard for the consoles" with RT enabled

You're just making guesses, and poor ones at that. Now you're on about, "High end PCs will have access to more power to run ALL the effects available at 4k@60FPS or very near it with Ampere's release."

How did we end up there from my original statements? Why are you intent on twisting Xbox Series X discussion into talking about Nvidia GPUs?

1. DF stated, "we were shown benchmark results that, on this two-week-old, unoptimised port(Gears 5), already deliver very, very similar performance to an RTX 2080."

That statement is vague and doesn't reveal any metrics. You are coming to conclusions without knowing ALL the facts. You are implying that the XSX is somehow going to be "faster" than the 2080 just by mentioning this.

2. 16GB of RAM with all the new memory-saving techniques and SSD tech will be fine.

Again, no, that is not fine. See my previous comments as to why. Yes, I gave an example of a scenario that I tested. Yet you ignore my tests. It seems that you want to discredit my own experience just so your argument is made to look true, yet you have no experience yourself with how memory models work in a realtime scenario.

Even the 2080 Super has "only" 8GB VRAM and it should be more performant than XSX.

And this is the crux of your argument, which is why the comparisons of PC GPUs come up. You are mentioning Nvidia in your own argument, but I can't mention "High end PCs will have access to more power to run ALL the effects available at 4k@60FPS or very near it with Ampere's release" so that you can come down off your pedestal concerning the next-gen consoles? You take a jab at my experience without understanding the scope of what a graphics programmer means. That's why I made it personal.

PC to console for memory requirements is like comparing apples-to-oranges(i.e. - X1X has 12GB RAM total, 9GB for games. Try playing games on a PC with RX 580 8GB and 4GB system memory and see what happens).

Here you go again, spouting off YOUR supposed facts as if you've programmed a graphics pipeline before. It's your tone that is the offense, not what you are saying.

3. Path tracing is extremely hard on performance. Quake II RTX, Path tracing demo on 2060 Super:

This is what brings up the topic of ray tracing. Do you know how a path tracer works? Have you ever cast rays into a scene to test for polygon intersection? Can you profile an application to see what bottlenecks Nvidia's Nsight debugger reveals under certain memory-transfer scenarios?
 
Last edited:

VFXVeteran

Banned
No modern game is fully ray traced. Just thought I'd mention that. Most likely you already know that, but I want to make sure everyone else does. And honestly, I see no reason why the Xbox Series X would not be able to do this. It seems perfectly capable of doing what RTX is doing at this point, which is rasterization with a handful of ray traced effects.

It's not that the next-gen consoles can't implement the specific ray-tracing features, it's that the resolution has to be lower than usual to run at interactive framerates.
 

pawel86ck

Banned
VFXVeteran, any last predictions before the PS5 reveal? I hope you were right about the 2 SKUs, because otherwise it will not be a good day for you 😃.
 

VFXVeteran

Banned
VFXVeteran, any last predictions before the PS5 reveal? I hope you were right about the 2 SKUs, because otherwise it will not be a good day for you 😃.

I don't care. And no, I never had predictions. I only produced numbers based on my sources. If it comes out to be wrong, so be it.

If there are 2 SKUs, it might not be shown today. MS still hasn't mentioned their 2nd SKU and we all know that's going to happen.
 
Last edited:

Shin

Banned
MS still hasn't mentioned their 2nd SKU and we all know that's going to happen.
Given the Anaconda print on the PCB of Series X, I expect Lockhart to become a reality also.
If it's a casual vs. hardcore situation, they could have added Wi-Fi 6 (AX) to the Series X and more memory (20GB, which would also widen the bus to 384-bit).
But I guess they really don't want to burn the house down with a $599 price tag, as that didn't end well for Sony back in 2006.
 

Gavin Stevens

Formerly 'o'dium'
Erm... the Xbox One X (Scorpio) has... a scorpion. It doesn't mean anything; it's just what they put on their boards as an Easter egg.

 

xool

Member
I was looking at the photos at xbox wire, the console is amazing and all but I just realized that it doesn't have an optical out port on the back... How am I going to connect my sound system to this now?
Damn, there was one on the prototype... Bluetooth not activated either, and not even a 3.5mm audio jack. That's all a lot of soundbars have.

What happened MS ?


 

Mendou

Banned
Then there's always this;


Definitely not RTX 2080 level based on what? Let's do some simple math, shall we?

RX 5700XT = 40 CU
Xbox series X = 52 CU
That is 30% more CUs.

Then we have;
RX 5700XT game clock = 1755 MHz
Xbox Series X clock = 1825 MHz
That's a 4% clock boost, although, some boost clocks of AIB cards are in the same range as the Xbox Series X, so for ease, let's assume the same clock speed for both.

Then we still have;
5700XT = RDNA1
Xbox Series X = RDNA2
We don't know anything about RDNA2 at this point. So once again, let's assume the worst case scenario, which is that RDNA2 performs exactly the same as RDNA1.

So at worst, the Xbox Series X GPU is 30% faster than the 5700XT. The RTX 2080 is 15% faster than a 5700XT. Do you know what is closest to 30% faster? A 2080 Ti, which is 34% faster. What happens if you take the clock speed and architectural improvements into account?

In conclusion, assuming that the Xbox Series X GPU is the equivalent of an RTX 2080 is actually conservative. It can actually be faster than a 2080Ti, if RDNA2 has significant improvements over RDNA1. The only way this would not be true is if they significantly cut down on ROPs and TMUs, which we currently don't have info about. But it would be really weird to increase CUs and decrease those, so...
That Gears 5 demo running at 100fps at native 4K with ultra settings seals the deal that the console might be reaching 2080 Ti levels of rasterisation performance. As for ray tracing, we should probably expect something between 2060-2070 levels of performance.
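The CU and clock figures quoted above can also be folded into one raw-throughput number. This is only a peak-FLOPS sketch under standard RDNA assumptions (64 shaders per CU, 2 FLOPs per clock); raw TFLOPS never map linearly onto game performance, and RDNA2's per-clock improvements are unknown here, as the post says:

```python
def tflops(cus, clock_mhz, shaders_per_cu=64, flops_per_clock=2):
    """Peak FP32 throughput in TFLOPS for an RDNA-style GPU."""
    return cus * shaders_per_cu * flops_per_clock * clock_mhz * 1e6 / 1e12

rx_5700xt = tflops(40, 1755)  # ~9.0 TFLOPS
xsx       = tflops(52, 1825)  # ~12.1 TFLOPS, matching the quoted 12 TF spec
print(round(xsx / rx_5700xt, 2))  # 1.35
```

Folding the 4% clock bump into the 30% CU advantage gives roughly 35% more raw throughput than the 5700 XT, which is why the comparison lands between a 2080 and a 2080 Ti in the post's framing.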
 

Mista

Banned
That Gears 5 demo running at 100fps at native 4K with ultra settings seals the deal that the console might be reaching 2080 Ti levels of rasterisation performance. As for ray tracing, we should probably expect something between 2060-2070 levels of performance.
I think 2080 Ti might be a bit hard to achieve, but between 2060-2070 still isn't bad at all.
 

darkinstinct

...lacks reading comprehension.
The raytracing performance is incredible. I was scared after I watched the video and it stuttered; glad to find out that was down to video encoding, and it actually runs at 30 to 60 fps after four weeks of work by one programmer. That's crazy. And that's full path tracing, which means raytracing with less demanding effects at 60 fps should absolutely be possible. At 1080p of course, and then DirectML does its thing.
 
Last edited:

Ascend

Member
The recommendation for GPU VRAM for 4K Ultra was 8+GB in 2019. So games this year will only consume more.
Hardly any modern game uses more than 6GB of VRAM at 4K. This has been tested multiple times by multiple websites. The amount of VRAM a graphics card allocates is generally not the amount it actually requires.

"A recent example of this was seen on our Resident Evil 2 benchmark. We often saw VRAM allocation go as high as 8.5 GB when testing with the RTX 2080 Ti at 4K, but there was no performance penalty when using a graphics card with only 6GB of VRAM. There was however a big performance penalty for cards with less than 6 GB.
That is to say, while the game will allocate 8GB of VRAM at 4K when available, it appears to be using somewhere between 4 and 6 GB of memory, probably closer to the upper end of that range."

Bottom Line
It's clear that right now, even for 4K gaming, 6GB of VRAM really is enough.



Article is a bit old, but not much has changed since then.
 

Ascend

Member
That Gears 5 demo running at 100fps at native 4K with ultra settings seals the deal that the console might be reaching 2080 Ti levels of rasterisation performance. As for ray tracing, we should probably expect something between 2060-2070 levels of performance.
I don't know. According to a user on resetera, it will be faster than a 2080Ti;

"We know the RDNA2 design has one RT core in each of its TMUs, and we know that AMD designs have 4 TMUs per compute unit. So for an Xbox Series X with 52 CUs at 1825MHz you've got 52 * 4 * 1825MHz = 379.6 billion intersections/second; with a 10-deep BVH you divide that 379.6 billion figure by 10 to get 37.96 gigarays/second.

Do the same math for a 2060, a 2080 Ti, or any other Turing GPU. Turing has one RT core per SM, so for a 2060 you have 30 SMs, and its official boost clock is 1680MHz, for 50.4 billion intersections, or 5.04 gigarays/sec; Nvidia quotes 5 gigarays.
For a 2080 Ti you have 68 RT units at 1545MHz, for 10.5 gigarays, which Nvidia also quotes."
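The quoted back-of-envelope math reduces to one formula: RT units × clock ÷ BVH depth. Note that "one intersection per RT unit per clock" and the 10-deep BVH are that user's assumptions, and the 4-RT-cores-per-CU RDNA2 layout was a rumor at the time:

```python
def gigarays(rt_units, clock_mhz, bvh_depth=10):
    """Assumes 1 box/triangle intersection test per RT unit per clock."""
    intersections_per_sec = rt_units * clock_mhz * 1e6
    return intersections_per_sec / bvh_depth / 1e9

xsx        = gigarays(52 * 4, 1825)  # rumored: 1 RT core per TMU, 4 TMUs/CU
rtx_2060   = gigarays(30, 1680)      # Turing: 1 RT core per SM
rtx_2080ti = gigarays(68, 1545)
print(xsx, rtx_2060, rtx_2080ti)  # ~37.96, ~5.04, ~10.5
```

Treat the output as a like-for-like sanity check of the quote's arithmetic, not as a real cross-architecture benchmark; Nvidia's "gigarays" figure is itself a marketing abstraction.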


 
Last edited:

ZywyPL

Banned
Yeah this needs to he known too

Nah. Haters gonna hate, bitches gonna bitch, crybabies gonna cry. Kids who aren't interested in the XSX at all will always find something to complain about to justify their bias, and seek attention by doing so. It's been said clearly enough that all existing XB1 external drives will work on the XSX, leaving the internal 1TB SSD solely for next-gen titles. But some still prefer to force the narrative that the XSX doesn't have any built-in drive at all and that they're forced to buy the external one on day one.
 
It's time to upgrade your receiver. Many will need to, as HDMI 2.1 will force it if we want to take full advantage of the system's capabilities.

It's also possible we could get an optical-to-HDMI adapter, a lot like they did with the Kinect adapter for the XOX, since the XOX has no plug-in for the Kinect like the One S and OG Xbox One had. I received mine free; I just had to call MS and it was at my door within the week. People just need to be loud enough and MS will listen.
 
Last edited:

VFXVeteran

Banned
That Gears 5 demo running at 100fps at native 4K with ultra settings seals the deal that the console might be reaching 2080 Ti levels of rasterisation performance. As for ray tracing, we should probably expect something between 2060-2070 levels of performance.

It wasn't rendered at 4k.
 

pawel86ck

Banned
I don't care. And no, I never had predictions. I only produced numbers based on my sources. If it comes out to be wrong, so be it.

If there are 2 Skus, it might not be shown today. MS still hasn't mentioned their 2nd SKU and we all know that's going to happen.
You made a bold claim, so you will either win big or lose big.
 
Last edited:

psorcerer

Banned
Rarely any modern game uses more than 6GB of VRAM at 4k. This has been tested multiple times by multiple websites. Allocated amount of VRAM on a graphics card is generally not the required amount of VRAM.

RE2 is a remake of a PS1 game.
Remakes, by the nature of their game architecture, have an easy target.
Next-gen games will use far more assets.
But.
We have another extreme: when using virtual texture atlases, the memory requirement for one 4K frame is 256MB (megabytes) (that's for a PC with average uncached RAM reads of ~16GB/s).
From there we can calculate actual VRAM requirements based on the target frame time/latency.
For example, say we have an SSD with actual uncached reads of 2GB/s (removing the compression figures and using a lower target than the max).
That's 1/8 of typical PC RAM bandwidth, which means we need 2GB of VRAM for the same buffer on a next-gen console in order to read assets from the SSD directly(!)
But that's optimistic, because SSD latency is higher (although these are resource reads, which are not too latency-dependent).
Anyway, add the high latency and you get to 8-16GB in a console.
Pretty tight, but doable.
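The bandwidth-ratio argument above can be written out explicitly. All three inputs (256MB per 4K frame, 16GB/s RAM reads, 2GB/s SSD reads) are that poster's assumptions, not measured figures:

```python
frame_working_set_mb = 256  # assumed virtual-texture data touched per 4K frame
ram_bw_gbs = 16             # assumed uncached PC RAM read bandwidth
ssd_bw_gbs = 2              # assumed uncached SSD read bandwidth

# The SSD is N times slower than RAM, so roughly N frames' worth of data
# must sit buffered in VRAM to hide the difference when streaming directly:
slowdown = ram_bw_gbs / ssd_bw_gbs            # 8.0
buffer_mb = frame_working_set_mb * slowdown   # 2048 MB
print(buffer_mb / 1024)  # 2.0 GB of VRAM for the streaming buffer
```

The jump from this 2GB floor to the post's 8-16GB estimate is the latency padding: the slower and more variable the reads, the further ahead you must buffer.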
 

MCplayer

Member
I don't know. According to a user on resetera, it will be faster than a 2080Ti;

"We know the RDNA2 design has one RT core in each of its TMUs, and we know that AMD designs have 4 TMUs per compute unit. So for an Xbox Series X with 52 CUs at 1825MHz you've got 52 * 4 * 1825MHz = 379.6 billion intersections/second; with a 10-deep BVH you divide that 379.6 billion figure by 10 to get 37.96 gigarays/second.

Do the same math for a 2060, a 2080 Ti, or any other Turing GPU. Turing has one RT core per SM, so for a 2060 you have 30 SMs, and its official boost clock is 1680MHz, for 50.4 billion intersections, or 5.04 gigarays/sec; Nvidia quotes 5 gigarays.
For a 2080 Ti you have 68 RT units at 1545MHz, for 10.5 gigarays, which Nvidia also quotes."


Is there any info on the RTX 3000 series? They might be at about 30 gigarays.
 

Ascend

Member
RE2 is a remake of a PS1 game.
Remakes, by the nature of their game architecture, have an easy target.
Next-gen games will use far more assets.
But.
We have another extreme: when using virtual texture atlases, the memory requirement for one 4K frame is 256MB (megabytes) (that's for a PC with average uncached RAM reads of ~16GB/s).
From there we can calculate actual VRAM requirements based on the target frame time/latency.
For example, say we have an SSD with actual uncached reads of 2GB/s (removing the compression figures and using a lower target than the max).
That's 1/8 of typical PC RAM bandwidth, which means we need 2GB of VRAM for the same buffer on a next-gen console in order to read assets from the SSD directly(!)
But that's optimistic, because SSD latency is higher (although these are resource reads, which are not too latency-dependent).
Anyway, add the high latency and you get to 8-16GB in a console.
Pretty tight, but doable.
They tested more games at 4K. Maybe now we've reached 8GB-ish for VRAM. Next gen games will indeed use more, but the console really is up to standards. It is in fact more powerful than the majority of gaming PCs out there. How many people have a Zen 2 class CPU with at least 8C/16T, combined with at least an RTX 2080? Most people are rocking RTX 2060 - RTX 2070S performance cards with an R5 3600 CPU.
 

Dunnas

Member
Then it's an amazing result.
DLSS looks much much worse on 1080p->4k upscale.
There is a misunderstanding here. The demo shown was 4K, but there was no mention of frame rate. They did state it was around 2080 performance in the internal benchmark, though. The 100fps comment didn't specify the resolution or settings, just that they already have it running at around 100fps and will be considering a 120fps mode.
 

psorcerer

Banned
They tested more games at 4K. Maybe now we've reached 8GB-ish for VRAM. Next gen games will indeed use more, but the console really is up to standards. It is in fact more powerful than the majority of gaming PCs out there. How many people have a Zen 2 class CPU with at least 8C/16T, combined with at least an RTX 2080? Most people are rocking RTX 2060 - RTX 2070S performance cards with an R5 3600 CPU.

If it uses the PC approach of load+run, it's not up to standards (for the next 5-7 years?).
But if it relies on data streaming everywhere, it is.
I just hope typical multiplat developers won't cry and whine again... they will, who am I kidding.
 

Mattyp

Not the YouTuber
'System', you mean like slapping regular HDDs in a proprietary plastic case and charging customers twice as much, like the good old days?

Let's see if they allow those who know how to insert a bare board to use whatever cheap housing they want, instead of actively banning and blocking anyone using standard parts.

You still haven't named me a system or tech that exists today capable of plugging NVMe-speed drives into an external port on any kind of hardware. I'll continue to wait, don't worry.

Once they stop allowing other drive manufacturers to do this, then bitch. But until then, name me the other solution.
 

Nero_PR

Banned
Actually, I do. I want all my shit installed at once, because it's my shit. Even buying physical games has required this storage since last generation, and a lot of gamers own a lot of games. Maybe I'm not playing them all at once, but you know, when the mood strikes me and I want to play one of my random games, I don't want to wait 2-3 hours for a fucking install and update, because these machines always take that long now, even when playing off a disc.
I stopped doing that. I had 20+ games (obviously not all of them triple-A games with 60+ gigs) and I wasn't playing all of them, or even half of them. Thinking back, it was more a hoarding issue than anything else, but that's because I can download them quickly with my internet. It would all be different if I had slow-ass internet.
These days I keep just 4 to 5 games installed at a time. With this approach, I've started playing a lot of my backlog.
 
Last edited:
lol at people getting worked up about expandable storage.
We've had replaceable HDDs in PlayStation systems for as long as they've had them... and it's not like a drive the size the Series X uses, transferring 2,500MB/s, is unheard of, so they should have put a small "door" on the side where you could fit a standard mid-range NVMe drive... if the drive had much faster speeds, they could argue it's something "special".

Also, the USB drive is now limited in use.
 

Kenpachii

Member
They tested more games at 4K. Maybe now we've reached 8GB-ish for VRAM. Next gen games will indeed use more, but the console really is up to standards. It is in fact more powerful than the majority of gaming PCs out there. How many people have a Zen 2 class CPU with at least 8C/16T, combined with at least an RTX 2080? Most people are rocking RTX 2060 - RTX 2070S performance cards with an R5 3600 CPU.

I have a different look on it.

The idea behind the Xbox 360 was to bring PC gaming into the console space with a platform that was easy for developers to work with. The PS3 was a clusterfuck that Sony kept alive, but honestly nobody wanted to bother with it; by the end it was very much a meme among developers. That resulted in the PS4, which was designed more like the Xbox 360 before it and was massively successful with developers because it's easy to develop for.

The big selling point of that console was that it used PC architecture and had lots of RAM to work with, so the constraints that held their last box back were gone. With the Xbox One actually falling behind on this front, the PS4 shined for everybody, for all kinds of reasons.

Sadly, we're now seeing a repeat of the PS3 generation, where a lot of GPU work will likely get pushed onto the CPU because the GPU simply can't render it all. The 12 TFLOPS of RDNA2 will, I honestly think, perform like a 2080 Ti by the end of the generation or pretty soon, for other reasons; however, the jump to 4K 60fps plus ray tracing, SSD traffic, and CPU-to-GPU data transfers is going to slam the memory and GPU core hard. Memory is indeed the weak spot of this box, and it could make the box overly complex to develop for in the end.

Now, obviously, devs could stay at current-gen RAM usage, but honestly, is it a next-generation console at that point? And this makes Microsoft's reasoning about their line-up of consoles supporting all their games even more valid. They probably don't even see it as a next-generation console, but as a 4K 60fps ray-tracing, faster-loading current-gen box. Which explains the naming; after all, it's a Series X.

This also means, in my view, that the CPU comparison isn't really fair: the 8-core chip will have a core or cores locked for security and probably additional tasks that a PC doesn't have to handle, and the GPU is pushed to a resolution that, again, PC gamers don't have to worry about. Not really comparable.

While the stats do look nice and high-end, and they certainly aren't bad even remotely, a Ryzen 3600 owner with an 8GB VRAM card at about GTX 1080 performance will probably stay relevant for a very, very long time as a result.

Also

I do think Microsoft didn't have much choice on this front, though; spending just a bit more and taking the hit for 24GB of RAM would probably have been preferable, and I honestly expect Sony to do just that and be praised by everybody for it once again.
 
Last edited:
Is it just me or does the XSX Sampler Feedback Streaming sound just like AMD's HBCC?

I don't think so. Sampler Feedback essentially records the usage of texture data, to better understand which mip levels of which textures are actually needed. Use cases are 1) streaming and 2) texture-space shading.


With this system in place, memory and memory bandwidth savings can be substantial.
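A toy example of why feedback-driven mip residency saves memory. The 4096px texture, 1 byte per texel, and "only mips of 1024px and smaller were sampled this frame" scenario are all made up for illustration, not taken from any Microsoft material:

```python
def mip_chain_bytes(top_size, bytes_per_texel=1):
    """Total bytes for a square texture's mip chain from top_size down to 1x1."""
    total, size = 0, top_size
    while size >= 1:
        total += size * size * bytes_per_texel
        size //= 2
    return total

full = mip_chain_bytes(4096)      # every mip level resident in memory
# Sampler feedback reports only the 1024px mip and smaller were sampled,
# so only that tail of the chain needs to stay resident:
resident = mip_chain_bytes(1024)
print(f"{1 - resident / full:.0%} saved")  # 94% saved for this texture
```

Scaled across thousands of textures, where distant or off-screen objects never touch their top mips, this is where the "substantial" savings claim comes from.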
 