
Xbox One GPU Specs reveal date?

The eSRAM is not an advantage, it's merely a measure to have a less crippled system due to the DDR3. The PS4's RAM solution is superior, both in bandwidth and ease of use.
 
They will not officially announce the specs.

Sony could do it with the PS4, because the PS4 is simply a PC in a console box, there is no additional "magic" that could raise the power.

On the other side there are the Xbox One and Wii U, which have lower raw GPU specs but some additional hardware to raise the power (especially the Wii U). The Xbox One has eSRAM; the Wii U has eDRAM, a DSP, etc. But it is not easy for marketing to explain those advantages to a normal consumer who only looks at big numbers.
 

Sid

Member
They probably won't reveal the specs. If they are as rumored, they would look inferior. But if they say 8GB RAM and an 8-core CPU, they don't look inferior.

Anyways the games looked awesome at E3, so I wouldn't worry.

I've heard rumblings that these architectures don't get much gain from using more than 14 CUs for graphics anyway (this is where the 14+4 split in the vgleaks rumor started: Sony developer suggestions/slides to use a 14/4 graphics/compute split, as more than 14 CUs on graphics granted little return). So maybe 12-14 CUs is the sweet spot anyway.

It wasn't said what's limiting them; it has to be the CPU or bandwidth, and my guess is the CPU. 1.6 GHz is kinda slow, even with six cores.

Also, the esram can be an advantage.

If MS successfully convinces people "this black box's shadowy innards are ~equivalent to that other black box's shadowy innards, pay no attention to the man behind the curtain" they've done their job. So far it's working imo. Joe Blow isn't going to see a difference based on E3. But we'll see in the future.
That point is moot anyways since he wouldn't see a difference between a PS3 and a PS4 game.
 

StevieP

Banned
Exactly. A.I., physics that not only look nice but also affect gameplay, driving physics for Gran Turismo; there are almost unlimited uses for this stuff.

A CPU is extremely smart but very weak. A GPU is extremely strong but very dumb. The PS4 processor is (when fully utilized) extremely smart and extremely strong at the same time.



I don't know about XBox One since Microsoft avoids talking about the technical stuff (which speaks volumes) but Wii U has nothing to do with this. Wii U is dated tech.



Hard to say.

There is one indicator which tells us that it's not a big deal: The ROPs. PS4 has 32 ROPs and XBox One only has 16 ROPs. Microsoft wouldn't use 16 ROPs if the eSRAM was able to close the gap to the PS4.
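For a rough sense of what those ROP counts mean, here's a back-of-envelope pixel fillrate check in C. Treat the 800 MHz core clock (used for both consoles here) as an assumption based on the commonly rumoured figures, not an official spec.

```c
#include <stdio.h>

/* Rough peak pixel fillrate: ROPs * core clock.
 * The 800 MHz clock for both GPUs is an assumed, rumoured figure. */
int main(void) {
    const double clock_ghz = 0.8;           /* assumed core clock in GHz */
    const int rops_ps4 = 32, rops_xb1 = 16; /* ROP counts from the post above */

    printf("PS4:      %.1f Gpixels/s\n", rops_ps4 * clock_ghz);
    printf("Xbox One: %.1f Gpixels/s\n", rops_xb1 * clock_ghz);
    return 0;
}
```

Roughly 25.6 vs 12.8 Gpixels/s peak, which is why the ROP count is a telling choice regardless of what the eSRAM does for bandwidth.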

PS4 looks like a perfect gaming machine to me. XBox One is one big compromise.

Your oversimplification ("CPU weak, GPU strong!") is a bit too... simplistic. The Wii U also has GPU compute abilities, both in terms of API support and on the GPU itself. It's just an older generation of chip than GCN, which makes it less potent in practical terms. That doesn't make it unusable, because developers are using it.

To the other poster above: the Xbox One also has a sound processor, one which is quite a bit more potent than the Wii U's, basically because of Kinect. All the systems have various things to assist the relatively weak CPUs.
 

Sid

Member
They will not officially announce the specs.

Sony could do it with the PS4, because the PS4 is simply a PC in a console box, there is no additional "magic" that could raise the power.

On the other side there are the Xbox One and Wii U, which have lower raw GPU specs but some additional hardware to raise the power (especially the Wii U). The Xbox One has eSRAM; the Wii U has eDRAM, a DSP, etc. But it is not easy for marketing to explain those advantages to a normal consumer who only looks at big numbers.
Wii U vs 1.28 vs 1.84 TF, what secret sauce will 'magically' raise that number?

Your oversimplification ("CPU weak, GPU strong!") is a bit too... simplistic. The Wii U also has GPU compute abilities, both in terms of API support and on the GPU itself. It's just an older generation of chip than GCN, which makes it less potent in practical terms. That doesn't make it unusable, because developers are using it.

To the other poster above: the Xbox One also has a sound processor, one which is quite a bit more potent than the Wii U's, basically because of Kinect. All the systems have various things to assist the relatively weak CPUs.
PS4 has a sound processor as well.
 

Espada

Member
Exactly. A.I., physics that not only look nice but also affect gameplay, driving physics for Gran Turismo; there are almost unlimited uses for this stuff.

A CPU is extremely smart but very weak. A GPU is extremely strong but very dumb. The PS4 processor is (when fully utilized) extremely smart and extremely strong at the same time.

That analogy is actually a perfect description of why GPGPU setups are so exciting. Good choice with Gran Turismo, as that's the kind of game and developer I expect to see utilize this to a great extent. Of course Yamauchi is going to take 15,000 years to release the game. Cerny's decision to go for 64 compute queues is a move that's sure to pay off in the long run.
 

TheD

The Detective
Data Moving Engines. Microsoft integrated them to relieve the shader cores which would otherwise have to copy all the data into the eSRAM.

Uhh, I am not sure you know how a processor's memory system works.

Writes to RAM are done by a memory controller.
 

Ushae

Banned
I wouldn't worry too much about these things; it's for the developers to work with and depends entirely on them. What we will need to worry about is whether there will be tangible differences between multiplats. So far devs are saying they struggle to use 4-5GB of RAM, which is a good thing, meaning plenty of room for growth on the systems.

My guess is the differences will translate to extra particle effects, AA, etc. What I would like to know is: by what margin? Would the margin be noticeable to most people? How would devs work with the extra power in the PS4?
 

StevieP

Banned
Obviously it wasn't simplistic enough... ^_^

The Wii U can't use heterogeneous algorithms that use CPU cores and shader cores in concert, since it is not a heterogeneous processor. The Wii U can use standard GPGPU algorithms for some fancy graphics effects (particles, for example), but it can't use GPGPU for AI or driving physics.

Well, colour me silly! Here I thought you could do stuff like physics via OpenCL on GPUs that came way, way before HSA (i.e. including all of them out there now) - that includes the 4000 series. I must've been wrong on that!
 
It's pretty clear they are hiding them; they've done that with most of this system to date. They have not been clear on anything, and you're never vague when you have good news to share.

Yup.
I was told before E3 that they would be hiding the specs, and that will remain the case. There will be no official reveal of the GPU specifications.
 
They will not officially announce the specs.

Sony could do it with the PS4, because the PS4 is simply a PC in a console box, there is no additional "magic" that could raise the power.

On the other side there are the Xbox One and Wii U, which have lower raw GPU specs but some additional hardware to raise the power (especially the Wii U). The Xbox One has eSRAM; the Wii U has eDRAM, a DSP, etc. But it is not easy for marketing to explain those advantages to a normal consumer who only looks at big numbers.

The Xbone is also a PC in a box, just one with a weaker GPU and an inferior memory setup. Secret sauce pipe dreams are dead, son.
 

StevieP

Banned
This has nothing to do with OpenCL at all. We're talking about an extremely large latency bottleneck in the communication between the CPU and GPU. As long as this bottleneck exists, there is no sense in using GPGPU for a lot of tasks, driving physics or AI for example. You can't solve this problem in software.

Perhaps you should use the word "impractical" or "slower" or something along those lines. And not the word "can't" as you did.

Because I assure you, OpenCL works and is very real and practical for non-"visual" calculation, even on older GPUs (though less so the older you go). People have been coding OpenCL (and PhysX) software that has been doing AI, physics, and other "non-visual" things with GPU compute for a very long time now, without the requirement of HSA, which is only a very recent development.
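To make that concrete, here's a minimal sketch in plain C of the kind of non-visual work being described: a simple Euler integration step over a set of bodies, written as an independent per-element loop. The struct layout, function names, and numbers are made up for illustration; the point is only that each iteration is independent, so the loop body could be lifted almost verbatim into an OpenCL kernel (with the loop index replaced by get_global_id(0)), HSA or no HSA.

```c
#include <stdio.h>
#include <stddef.h>

/* Toy physics state: one body per array slot. Layout and step function
 * are made up for illustration, not taken from any real engine. */
typedef struct { float x, y, z; } vec3;

/* One Euler integration step over all bodies. Every iteration is
 * independent of the others, which is what makes this the kind of
 * work you can hand to an OpenCL kernel, with i = get_global_id(0). */
static void integrate(vec3 *pos, vec3 *vel, const vec3 *force,
                      const float *inv_mass, size_t n, float dt) {
    for (size_t i = 0; i < n; ++i) {
        vel[i].x += force[i].x * inv_mass[i] * dt;
        vel[i].y += force[i].y * inv_mass[i] * dt;
        vel[i].z += force[i].z * inv_mass[i] * dt;
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}

int main(void) {
    vec3 pos[2]   = { {0, 0, 0}, {1, 0, 0} };
    vec3 vel[2]   = { {0, 0, 0}, {0, 1, 0} };
    vec3 force[2] = { {0, -9.8f, 0}, {0, -9.8f, 0} };  /* gravity-ish */
    float inv_mass[2] = { 1.0f, 0.5f };

    integrate(pos, vel, force, inv_mass, 2, 1.0f / 60.0f);
    printf("body 0 now at (%.4f, %.4f, %.4f)\n", pos[0].x, pos[0].y, pos[0].z);
    return 0;
}
```

Whether that runs on a 4000-series card, a GCN card, or the CPU is a deployment question; what HSA changes is how cheap it is to move the data back and forth, not whether the computation is possible.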
 

velociraptor

Junior Member
You can use something like Crysis 3 as a very rough basis for comparison between the two console GPUs. For multiplats, at least.



7770
1.28 TFLOPS (XB1 - 1.24 TF)
16 ROPS (XB1 - 16 ROPS)
1GB @ 72 GB/s (XB1 - 5GB @ 67 GB/s)

7850
1.76 TFLOPS (PS4 - 1.84 TF)
32 ROPS (PS4 - 32 ROPS)
2 GB @ 154 GB/s (PS4 - 7GB [?] @ 176 GB/s)
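As a sanity check on the bandwidth figures above: peak theoretical bandwidth is just bus width times per-pin data rate. The bus widths and data rates below are the commonly reported ones (256-bit DDR3-2133 for Xbox One, 256-bit 5.5 Gb/s GDDR5 for PS4, 128-bit/4.5 Gb/s for the 7770, 256-bit/4.8 Gb/s for the 7850), so treat this as a rough cross-check rather than official specs.

```c
#include <stdio.h>

/* Peak theoretical bandwidth = bus width (bits) / 8 * per-pin rate (Gb/s).
 * Bus widths and data rates are the commonly reported figures. */
static double bw_gbs(int bus_bits, double gbps_per_pin) {
    return bus_bits / 8.0 * gbps_per_pin;
}

int main(void) {
    printf("Xbox One, 256-bit DDR3-2133:   %.1f GB/s\n", bw_gbs(256, 2.133));
    printf("PS4, 256-bit GDDR5 @ 5.5 Gb/s: %.1f GB/s\n", bw_gbs(256, 5.5));
    printf("HD 7770, 128-bit @ 4.5 Gb/s:   %.1f GB/s\n", bw_gbs(128, 4.5));
    printf("HD 7850, 256-bit @ 4.8 Gb/s:   %.1f GB/s\n", bw_gbs(256, 4.8));
    return 0;
}
```

The results (about 68, 176, 72, and 154 GB/s) line up with the numbers in the comparison above.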

Crysis 2, max settings DX11, 1080p, high-resolution textures, 4xMSAA http://www.guru3d.com/miraserver/images/2012/r7800/Untitled-19.png

7850: 42 fps
7770: 25 fps

Battlefield 3 DX11, Ultra mode, 4xMSAA, 16x AF enabled, HBAO enabled http://www.guru3d.com/miraserver/images/2012/r7800/Untitled-28.png

7850: 32 fps
7770: 21 fps

Crysis 3 1080p, DX11, High settings, FXAA: http://www.guru3d.com/articles_pages/crysis_3_graphics_performance_review_benchmark,6.html

7850: 35 fps
7770: 23 fps

Crysis Warhead 1920x1080 - Enthusiast Quality + 4xAA: http://gpuboss.com/graphics-card/Radeon-HD-7770

7850: 38.9fps
7770: 23fps

Tomb Raider, 1080p, Ultra, DX11: http://kotaku.com/5990848/tomb-raider-performance-test-graphics-and-cpus

7850: 42 fps
7770: 29 fps

Metro 2033, 1080p, Max, AAA, DX11: http://www.guru3d.com/miraserver/images/2012/r7800/Untitled-17.png

7850: 27 fps
7770: 15 fps
 

strata8

Member
Crysis 2, max settings DX11, 1080p, high-resolution textures, 4xMSAA http://www.guru3d.com/miraserver/images/2012/r7800/Untitled-19.png

7850: 42 fps
7770: 25 fps

Battlefield 3 DX11, Ultra mode, 4xMSAA, 16x AF enabled, HBAO enabled http://www.guru3d.com/miraserver/images/2012/r7800/Untitled-28.png

7850: 32 fps
7770: 21 fps


Crysis Warhead 1920x1080 - Enthusiast Quality + 4xAA: http://gpuboss.com/graphics-card/Radeon-HD-7770
7850: 38.9fps
7770: 23fps

Tomb Raider, Ultra, DX11: http://kotaku.com/5990848/tomb-raider-performance-test-graphics-and-cpus
7850: 42 fps
7770: 29 fps

Right - 40-50% advantage for the 7850 except in cases where the 1GB of VRAM chokes the 7770 (Crysis 2, Crysis Warhead).
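The arithmetic behind that conclusion, using only the framerates quoted above:

```c
#include <stdio.h>

/* Percentage advantage of the 7850 over the 7770, from the benchmark
 * numbers quoted above. */
int main(void) {
    const char *test[]     = { "Crysis 2", "Battlefield 3",
                               "Crysis Warhead", "Tomb Raider" };
    const double fps7850[] = { 42.0, 32.0, 38.9, 42.0 };
    const double fps7770[] = { 25.0, 21.0, 23.0, 29.0 };

    for (int i = 0; i < 4; ++i)
        printf("%-15s +%.0f%%\n", test[i],
               (fps7850[i] / fps7770[i] - 1.0) * 100.0);
    return 0;
}
```

The two VRAM-limited cases come out near +70%, while Battlefield 3 and Tomb Raider land at roughly +50% and +45%.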
 

StevieP

Banned
Real time in interactive video games?


GPU compute is limited by performance and compliance, not by whether you're using it to make a car crash or cloth movement more realistic, or to assist in the search for extraterrestrial life and curing cancer.
 

StevieP

Banned
This discussion isn't getting us anywhere, StevieP. Let's just accept that we disagree. ^_^

Disagree all you'd like. I was using a GPU to do PhysX calculations in more than a few games I played over the last few years, without HSA. Just a CPU and a GPU or two. These are things you can look up and read about if you wish to do so.
 

Chumpion

Member
That analogy is actually a perfect description of why GPGPU setups are so exciting. Good choice with Gran Turismo, as that's the kind of game and developer I expect to see utilize this to a great extent. Of course Yamauchi is going to take 15,000 years to release the game. Cerny's decision to go for 64 compute queues is a move that's sure to pay off in the long run.

Yes! Now we just have to wait for clever developers to leverage this. I.e., create algorithms where the CPU and GPU alternate making passes over the same data. The CPU does "logic" and the GPU crunches shit, in tight synchrony.
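A structural sketch of that ping-pong pattern, in plain C. Both passes run on the CPU here purely for illustration; in the setup being described, the crunch pass would be a compute kernel working on the same shared buffers, with the two sides synchronizing between passes. All names and the toy workload are made up.

```c
#include <math.h>
#include <stddef.h>
#include <stdio.h>

#define N 1024

/* "Logic" pass (CPU-style, branchy): decide which elements need work. */
static void logic_pass(const float *state, unsigned char *needs_work, size_t n) {
    for (size_t i = 0; i < n; ++i)
        needs_work[i] = (state[i] < 0.5f);
}

/* "Crunch" pass (GPU-style, data-parallel): heavy math on the flagged
 * elements. In the setup described above, this would be a compute kernel
 * reading and writing the same memory rather than a CPU loop. */
static void crunch_pass(float *state, const unsigned char *needs_work, size_t n) {
    for (size_t i = 0; i < n; ++i)
        if (needs_work[i])
            state[i] = sqrtf(state[i] * state[i] + 0.25f);
}

int main(void) {
    static float state[N];
    static unsigned char needs_work[N];
    for (size_t i = 0; i < N; ++i)
        state[i] = (float)i / N;

    /* CPU and "GPU" alternate passes over the same data each frame. */
    for (int frame = 0; frame < 3; ++frame) {
        logic_pass(state, needs_work, N);
        crunch_pass(state, needs_work, N);
    }
    printf("state[0] = %f\n", state[0]);
    return 0;
}
```

The tighter the synchrony you want between those two passes, the more a shared-memory, low-latency setup matters.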
 

vazel

Banned
Disagree all you'd like. I was using a GPU to do PhysX calculations in more than a few games I played over the last few years, without HSA. Just a CPU and a GPU or two. These are things you can look up and read about if you wish to do so.
PhysX is all eye candy. The only game I know of that had interactive PhysX was GRAW2, but that was only for a small tech demo island called Ageia Island.
 

StevieP

Banned
PhysX is all eye candy. The only game I know of that had interactive PhysX was GRAW2, but that was only for a small tech demo island called Ageia Island.

And that island showed that it's up to the developer what you do with GPU compute. It is not up to HSA. HSA makes it better, sure, but the other poster spoke in absolutes.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
GPU compute is limited by performance and compliance, not by whether you're using it to make a car crash or cloth movement more realistic, or to assist in the search for extraterrestrial life and curing cancer.

Agree. The reason why GPGPU has not been employed in more use cases has not much to do with hardware limitations, but with the fact that you have to rethink and rewrite algorithms substantially (in practice: from the ground up) to fit the SIMD computation model that underlies GPGPU. And that is (a) not trivial and (b) not possible/feasible for many computations that have substantial amounts of non-parallelizable computation or don't work on large, homogeneous data sets.
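A tiny illustration of what "rethinking the algorithm" means here. The first function has a serial dependency (each step needs the previous result) and gains nothing from wide SIMD hardware as written; the second does independent per-element work, which is the shape GPGPU wants. Both functions are made up for illustration.

```c
#include <stdio.h>
#include <stddef.h>

/* Serial formulation: each element depends on the previous one, so this
 * loop cannot be spread across SIMD lanes as written. */
static float running_sum(const float *in, float *out, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        acc += in[i];
        out[i] = acc;
    }
    return acc;
}

/* Data-parallel formulation: every element is independent, so each
 * iteration maps cleanly onto one GPU work-item / SIMD lane. Getting real
 * algorithms into this shape (or into multi-pass variants such as a
 * parallel prefix sum) is the rewrite being described above. */
static void scale_bias(const float *in, float *out, size_t n, float a, float b) {
    for (size_t i = 0; i < n; ++i)
        out[i] = a * in[i] + b;
}

int main(void) {
    float in[4] = { 1, 2, 3, 4 }, out[4];
    printf("total = %.1f\n", running_sum(in, out, 4));
    scale_bias(in, out, 4, 2.0f, 0.5f);
    printf("out[3] = %.1f\n", out[3]);
    return 0;
}
```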
 

thelastword

Banned
Nice comparison so far. It appears, though, that the PS4 has an advantage with higher bandwidth due to its 7GB of available RAM. But what about the compute units I've heard so much about? How do they factor in?
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Nice comparison so far. It appears, though, that the PS4 has an advantage with higher bandwidth due to its 7GB of available RAM. But what about the compute units I've heard so much about? How do they factor in?

Compute Units are the modules in the GPU's pipeline that do the (programmable) computation on the GPU. There are other parts of the pipeline that are not really programmable, but configurable or fixed, like the ROPs, but the CUs do the general computation. They consist of 4 "SIMD" modules, each of which has 16 ALUs (Arithmetic Logic Units) which perform a calculation step. The important thing is that every ALU in a compute unit executes the same program, each on a different set of data.
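Putting rough numbers on that: 4 SIMDs x 16 ALUs is 64 lanes per compute unit, and each lane can do a fused multiply-add (counted as 2 floating-point ops) per cycle. The CU counts (18 for PS4, 12 for Xbox One) and the 800 MHz clock below are the commonly rumoured figures, not official specs, so this is only a cross-check against the TFLOPS numbers being thrown around in this thread.

```c
#include <stdio.h>

/* Peak single-precision throughput of a GCN-style GPU:
 * CUs * 4 SIMDs * 16 ALUs * 2 ops (FMA) * clock (GHz) = GFLOPS.
 * CU counts and the 800 MHz clock are rumoured figures, not official. */
static double tflops(int cus, double clock_ghz) {
    return cus * 4 * 16 * 2 * clock_ghz / 1000.0;
}

int main(void) {
    printf("PS4      (18 CUs @ 0.8 GHz): %.2f TFLOPS\n", tflops(18, 0.8));
    printf("Xbox One (12 CUs @ 0.8 GHz): %.2f TFLOPS\n", tflops(12, 0.8));
    return 0;
}
```

That lands at about 1.84 and 1.23 TFLOPS, in the same ballpark as the figures quoted earlier in the thread.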
 
Yup.
I was told before E3 that they would be hiding the specs, and that will remain the case. There will be no official reveal of the GPU specifications.


which is stupid anyway, because the specs WILL leak, and Microsoft will look all the stupider for trying to hide them.
 

Pug

Member
which is stupid anyway, because the specs WILL leak, and Microsoft will look all the stupider for trying to hide them.

The specs were leaked long ago. They haven't changed.

MS will discuss the "whole" system but they certainly won't get into a flops war.
 

GameSeeker

Member
Microsoft knows that they have significantly under-powered graphics hardware vs. the PS4. They even publicly stated that they were not going for highest performance with Xbone, but rather "All-in-one".

So Microsoft won't reveal any more graphics specs other than 5 billion transistors and 768 ops/cycle. Fortunately for us, almost the entire graphics spec was leaked by various sources prior to the Xbone reveal, and those leaks have proved to be very accurate. Between tear-downs and future developer leaks, we will quickly confirm what we know and may learn anything currently missing.
 

TheD

The Detective
Well, what triggers a read or write command? Is it a ghost? Magic? Or is it a shader program? Explain to me, please. ;)


The processor reads the instruction stream and has logic that handles everything the instructions tell it to do, including loading and storing data.

The so called "move engines" are mainly just DMA engines that move data between the SRAM and main RAM.
 

Durante

Member
This discussion isn't getting us anywhere, StevieP. Let's just accept that we disagree. ^_^
"Agree to disagree" is nice when discussing opinions, but in this case your initial statement was just wrong.

Agree. The reason why GPGPU has not been employed in more use cases has not much to do with hardware limitiations, but with the fact you have to rethink and rewrite algorithms substantially (in practice: from the ground up) to fit the SIMD computation model that underlies GPGPU. And that is (a) not trivial and (b) not possible/feasable for many computations that have substantial amounts of non-parallelizable computation or don't work on large, homogeneous data sets.
Exactly. HSA may slightly increase the set of problems GPU compute is applicable to, but it does not solve the main challenge of rethinking algorithms (and obviously it does not invalidate Amdahl's law).
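For reference, Amdahl's law in one line: if only a fraction p of the work can be moved to the GPU, the overall speedup is bounded no matter how much faster that fraction gets. The example fractions below are purely illustrative.

```c
#include <stdio.h>

/* Amdahl's law: overall speedup when a fraction p of the work is
 * accelerated by a factor s and the remaining (1 - p) is not. */
static double amdahl(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void) {
    /* Illustrative only: even with the parallel part 100x faster,
     * a 30% serial portion caps the total gain at roughly 3.3x. */
    printf("p=0.70, s=100: %.2fx\n", amdahl(0.70, 100.0));
    printf("p=0.95, s=100: %.2fx\n", amdahl(0.95, 100.0));
    return 0;
}
```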
 

Liamario

Banned
You'll probably find out, but not from MS. They've got nothing to boast about, no doubt, and if they had, you'd have heard about it by now.
 

Durante

Member
What statement was wrong?
This one:
That's not the same. GPGPU on a system like Wii U is only for some visual effects. GPGPU on a system like PS4 is not only visual but also interactive (like Driving Physics).
There's nothing stopping you from using GPU compute for non-visual tasks on a Wii U (or a PC).

Back in 2011, when I last investigated the topic in detail, OpenCL kernel invocation overheads on PCIe GPUs were around 20 microseconds. While that is more than it should be, it's not so much that it would prohibit their use for tons of viable algorithms.
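To put that 20-microsecond figure into a frame-time budget, here's the simple math; the per-frame launch counts are just illustrative.

```c
#include <stdio.h>

/* Share of a 60 fps frame spent on kernel launch overhead, assuming
 * ~20 us per launch (the figure quoted above). Launch counts per
 * frame are illustrative. */
int main(void) {
    const double frame_ms  = 1000.0 / 60.0;  /* ~16.7 ms per frame */
    const double launch_ms = 0.020;          /* 20 microseconds */
    const int launches[]   = { 5, 20, 100 };

    for (int i = 0; i < 3; ++i) {
        double cost = launches[i] * launch_ms;
        printf("%3d launches/frame: %.2f ms (%.1f%% of the frame)\n",
               launches[i], cost, 100.0 * cost / frame_ms);
    }
    return 0;
}
```

A handful of dispatches per frame costs well under a millisecond even over PCIe; it only starts to hurt with very fine-grained back-and-forth between the CPU and GPU.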
 
Well that wasn't my initial statement. My initial statement was that the CPU and the iGP of the PS4 can work on tasks together, but I can see your point.



I believe you. But why does nobody use it then?

Bitcoin mining is primarily done by GPUs. Not gaming related but still.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
I believe you. But why does nobody use it then?

It's a new paradigm that requires new algorithms as well as software-architectural adaptations of migrated game engine subsystems. Such new approaches take some time before they are widely adopted. Efficient developers reuse as much as possible, e.g. developer knowledge/experience (a factor that should not be underestimated) and existing libraries and frameworks. Migrating to GPGPU implies the need to invest time and money in development, because there is simply nothing (or little) to be reused. In the end, a developer might conclude that the reuse of existing technology is more efficient and "enough for the time being". In addition, nobody will migrate everything on day one, but will gradually migrate part after part.
 

TheD

The Detective
I know and you had good reason to do so. My formulation wasn't appropriate. :)



What statement was wrong?

I read what you wrote as sending the output of the GPU processing to the RAM, with the "move engines" somehow making that better; on rereading, it looks like you could also be saying that the "move engines" can move data between the SRAM and the main RAM without it having to be handled by the GPU.
If that is the case, that is something they can do, but it is not a proper advantage, since it is just making up for something the PS4 does not even need to do in the first place.
 

strata8

Member
Why not use high-bandwidth GDDR5 from the start?

Microsoft needed 8GB RAM for their OS-heavy system design, and they couldn't guarantee that amount would be available in GDDR5 form by the time the system launched. Hence they went with 8GB DDR3 + eSRAM to mitigate some of the potential bandwidth issues.
 
What other consumer products are companies allowed to sell without the buyer being allowed to know what is in the product they are buying? People know what's in their phone or car when they buy it. I can't think of any product where the contents and capabilities of the product are kept secret from the customer. When I buy a video card for my PC, I know what's in it. Nvidia doesn't say to me, "Hey, it's a GTX 670. We're not going to tell you what's in it, though. Just give us your money and trust us, it's really powerful."

The whole thing comes across as kind of shady, and it should probably be illegal.
 

McHuj

Member
Microsoft needed 8GB RAM for their OS-heavy system design, and they couldn't guarantee that amount would be available in GDDR5 form by the time the system launched. Hence they went with 8GB DDR3 + eSRAM to mitigate some of the potential bandwidth issues.

It's not just that, but long term cost as well. I think MS chose the speed of DDR3 specifically for cost reductions with DDR4 at some point in the life of the console.

Right now, you have an SoC that's around 400mm² with a 256-bit bus for 2.133 Gb/s DDR3 (16 chips). After a couple of shrinks, down in the 16/14nm range, they'll have an SoC that's under 200mm² (maybe even 150mm²) and 4.266 Gb/s DDR4 on a 128-bit bus (8 or maybe even 4 chips).

Unless some new tech comes out (it could be some interposer-based solution), Sony will be stuck with GDDR5 on a 256-bit bus. Around 2018/2019, the X1 should be a lot cheaper to manufacture than the PS4. The question is: will the X1 matter at that point?
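The bandwidth arithmetic behind that shrink scenario: halving the bus width while doubling the per-pin rate keeps peak bandwidth the same. The data rates are the ones quoted above; everything else about the cost argument (die size, chip count) is separate.

```c
#include <stdio.h>

/* Peak bandwidth = bus width (bits) / 8 * per-pin rate (Gb/s).
 * Data rates are the ones quoted in the post above. */
static double bw_gbs(int bus_bits, double gbps_per_pin) {
    return bus_bits / 8.0 * gbps_per_pin;
}

int main(void) {
    printf("256-bit DDR3 @ 2.133 Gb/s: %.1f GB/s\n", bw_gbs(256, 2.133));
    printf("128-bit DDR4 @ 4.266 Gb/s: %.1f GB/s\n", bw_gbs(128, 4.266));
    return 0;
}
```

Same peak bandwidth on half the pins, which is where the long-term cost argument comes from; the open question, as noted, is whether Sony gets an equivalent option for GDDR5 without something like an interposer.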
 

Takuya

Banned
It's not just that, but long term cost as well. I think MS chose the speed of DDR3 specifically for cost reductions with DDR4 at some point in the life of the console.

Xbone's specs won't change throughout its lifecycle. They're not going to change the type of memory inside the unit, especially if you're talking cost reductions, since the price of the units should go down and not stay steady at $500.
 
The X One might be weaker, but the games looked great at E3 IMO. Why does Forza 5 look so much better than DriveClub? There were so many pop-ups and slowdowns in the PS4 games....
 
For having much weaker specs than the PS4, I will say that the Xbone's games don't look much different, if at all, from something we'd see on the PS4. Ryse and Forza 5 looked absolutely spectacular from a visual standpoint.
 