The eSRAM is not an advantage; it's merely a measure to make the system less crippled by the DDR3. The PS4's RAM solution is superior, both in bandwidth and ease of use.
This. Probably not until the release date, unless one of those sites gets hold of one early and disassembles it.
That point is moot anyway, since he wouldn't see a difference between a PS3 and a PS4 game.
They probably won't reveal the specs. If they are as rumored, they would look inferior. But if they just say "8GB RAM, 8-core CPU", they don't look inferior.
Anyways the games looked awesome at E3, so I wouldn't worry.
I've heard rumblings that these architectures don't get much gain from using more than 14 CUs on graphics anyway (this is where the 14+4 split vgleaks rumor started: from Sony developer suggestions/slides to use a 14/4 graphics/compute split, as more than 14 CUs on graphics granted little return). So maybe 12-14 CUs is the sweet spot anyway.
It wasn't said what's limiting them; it has to be the CPU or bandwidth, and my guess is the CPU. 1.6 GHz is kinda slow, even with 6 cores.
Also, the esram can be an advantage.
If MS successfully convinces people "this black box's shadowy innards are ~equivalent to that other black box's shadowy innards, pay no attention to the man behind the curtain" they've done their job. So far it's working imo. Joe Blow isn't going to see a difference based on E3. But we'll see in the future.
Exactly. A.I., physics that not only look nice but also affect gameplay, driving physics for Gran Turismo: there are almost unlimited uses for this stuff.
A CPU is extremely smart but very weak. A GPU is extremely strong but very dumb. The PS4 processor is (when fully utilized) extremely smart and extremely strong at the same time.
I don't know about XBox One since Microsoft avoids talking about the technical stuff (which speaks volumes) but Wii U has nothing to do with this. Wii U is dated tech.
Hard to say.
There is one indicator which tells us that it's not a big deal: The ROPs. PS4 has 32 ROPs and XBox One only has 16 ROPs. Microsoft wouldn't use 16 ROPs if the eSRAM was able to close the gap to the PS4.
PS4 looks like a perfect gaming machine to me. XBox One is one big compromise.
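To put a rough number on why the ROP count matters: theoretical pixel fill rate is roughly ROPs x GPU clock. The ~800 MHz clocks below are the rumored figures, not official, and the helper function is just for illustration.

```python
# Rough peak pixel fill rate ~= ROP count * GPU clock (rumored ~800 MHz clocks assumed).

def fill_rate_gpixels_per_s(rops, clock_ghz):
    return rops * clock_ghz

xb1 = fill_rate_gpixels_per_s(16, 0.8)   # ~12.8 Gpixels/s
ps4 = fill_rate_gpixels_per_s(32, 0.8)   # ~25.6 Gpixels/s
print(f"XB1: {xb1:.1f} Gpixels/s, PS4: {ps4:.1f} Gpixels/s")
# With heavy alpha blending and overdraw at 1080p, that 2x gap in raw
# pixel throughput is hard to close with eSRAM bandwidth alone.
```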
Wii U vs 1.28 vs 1.84 TF: what secret sauce will 'magically' raise that number?
They will not officially announce the specs.
Sony could do it with the PS4, because the PS4 is simply a PC in a console box; there is no additional "magic" that could raise the power.
On the other side there are the Xbox One and Wii U, which have lower raw GPU specs but some additional hardware to raise the power (especially the Wii U). The Xbox One has eSRAM. The Wii U has eDRAM, a DSP, etc. But it is not easy for marketing to explain those advantages to a normal consumer who only looks at the big numbers.
PS4 has a sound processor as well.
Your oversimplification ("CPU weak, GPU strong!") is a bit too... simplistic. The Wii U also has GPU compute abilities, both in terms of API support and on the GPU itself. It's just an older generation of chip than GCN, which makes it less potent in practical terms. That doesn't make it unusable, because developers are using it.
To the other poster above: the Xbox One also has a sound processor, one which is quite a bit more potent than the Wii U's, basically because of Kinect. All the systems have various things to assist the relatively weak CPUs.
Data Move Engines. Microsoft integrated them to relieve the shader cores, which would otherwise have to copy all the data into the eSRAM.
Obviously it wasn't simplistic enough... ^_^
Wii U can't use heterogeneous algorithms that use CPU cores and shader cores in concert, since it is not a heterogeneous processor. Wii U can use standard GPGPU algorithms for some fancy graphics effects (particles, for example), but it can't use GPGPU for AI or driving physics.
It's pretty clear they are hiding them; they've done that with most of this system to date. They have not been clear on anything, and you're never vague when you have good news to share.
That's not the same. GPGPU on a system like Wii U is only for some visual effects. GPGPU on a system like PS4 is not only visual but also interactive (like Driving Physics).
This has nothing to do with OpenCL at all. We're talking about an extremely large latency bottleneck in the communication between the CPU and GPU. As long as this bottleneck exists, there is no sense in using GPGPU for a lot of tasks, driving physics or AI for example. You can't solve this problem in software.
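To put that argument in concrete terms, here's a toy break-even model (the numbers and the function name are mine, purely for illustration): offloading a task to the GPU only wins if the work saved outweighs the CPU-to-GPU round trip.

```python
# Toy break-even model for GPU offload; all timings are made-up examples.

def offload_pays_off(cpu_time_ms, gpu_time_ms, round_trip_ms):
    """True if GPU time plus the CPU<->GPU round trip beats just using the CPU."""
    return gpu_time_ms + round_trip_ms < cpu_time_ms

# A big batch of physics math, little data to move: offload wins.
print(offload_pays_off(cpu_time_ms=8.0, gpu_time_ms=1.0, round_trip_ms=2.0))   # True

# A small, latency-sensitive AI query inside a 16 ms frame: offload loses.
print(offload_pays_off(cpu_time_ms=0.5, gpu_time_ms=0.1, round_trip_ms=2.0))   # False
```

The claim being made in this thread is that a shared-memory design shrinks that round-trip term, which is what would move small interactive tasks like AI into "offload wins" territory.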
You can use something like Crysis 3 as a very rough basis for comparison between the two console GPUs. For multiplats, at least.
7770
1.28 TFLOPS (XB1 - 1.24 TF)
16 ROPS (XB1 - 16 ROPS)
1GB @ 72 GB/s (XB1 - 5GB @ 67 GB/s)
7850
1.76 TFLOPS (PS4 - 1.84 TF)
32 ROPS (PS4 - 32 ROPS)
2 GB @ 154 GB/s (PS4 - 7GB [?] @ 176 GB/s)
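As a sanity check on those TFLOPS figures: peak single-precision throughput is just shader ALU count x clock x 2 (one multiply-add per ALU per clock). Quick sketch below; the console ALU counts and 800 MHz clocks are the leaked/rumored figures, not confirmed specs, and the helper name is mine.

```python
# Peak FP32 throughput in TFLOPS = shader ALUs * clock (GHz) * 2 ops (multiply-add).

def peak_tflops(shader_alus, clock_ghz):
    return shader_alus * clock_ghz * 2 / 1000.0

print(peak_tflops(640, 1.00))    # HD 7770         -> 1.28 TF
print(peak_tflops(1024, 0.86))   # HD 7850         -> ~1.76 TF
print(peak_tflops(768, 0.80))    # XB1 (rumored)   -> ~1.23 TF
print(peak_tflops(1152, 0.80))   # PS4 (rumored)   -> ~1.84 TF
```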
Crysis 2, max settings DX11, 1080p, high-resolution textures, 4xMSAA http://www.guru3d.com/miraserver/images/2012/r7800/Untitled-19.png
7850: 42 fps
7770: 25 fps
Battlefield 3 DX11, Ultra mode, 4xMSAA, 16x AF enabled, HBAO enabled http://www.guru3d.com/miraserver/images/2012/r7800/Untitled-28.png
7850: 32 fps
7770: 21 fps
Crysis Warhead 1920x1080 - Enthusiast Quality + 4xAA: http://gpuboss.com/graphics-card/Radeon-HD-7770
7850: 38.9fps
7770: 23fps
Tomb Raider, Ultra, DX11: http://kotaku.com/5990848/tomb-raider-performance-test-graphics-and-cpus
7850: 42 fps
7770: 29 fps
Real time in interactive video games?
This discussion isn't getting us anywhere, StevieP. Let's just accept that we disagree. ^_^
That analogy is actually a perfect description of why GPGPU setups are so exciting. Good choice with Gran Turismo, as that's the kind of game and developer I expect to see utilize this to a great extent. Of course Yamauchi is going to take 15,000 years to release the game. Cerny's decision to go for 64 compute queues is a move that's sure to pay off in the long run.
Disagree all you'd like. I was using a GPU to do PhysX calculations in more than a few games I have played over the last few years, without HSA. Just a CPU and a GPU or two. These are things you can look up and read about if you wish to do so.
PhysX is all eye candy. The only game I know of that had interactive PhysX was GRAW2, but that was only for a small tech demo island called Ageia Island.
GPU compute is limited by performance and compliance, not by whether you're using it to make a car crash or cloth movement more realistic or to assist in the search for extraterrestrial life and curing cancer.
Nice comparison so far. It appears, though, that the PS4 has an advantage with higher bandwidth due to its 7 GB of available RAM. But what about the compute units I've heard so much about? How do they factor in?
Yup.
It was told to me before E3 that they would be hiding the specs, and that will remain the case. There will be no official reveal of the GPU specifications.
Which is stupid anyway, because the specs WILL leak, and Microsoft will look all the stupider for trying to hide them.
Well, what triggers a read or write command? Is it a ghost? Magic? Or is it a shader program? Explain to me, please.
"Agree to disagree" is nice when discussing opinions, but in this case your initial statement was just wrong.This discussion isn't getting us anywhere, StevieP. Let's just accept that we disagree. ^_^
Agree. The reason why GPGPU has not been employed in more use cases has not much to do with hardware limitations, but with the fact that you have to rethink and rewrite algorithms substantially (in practice: from the ground up) to fit the SIMD computation model that underlies GPGPU. And that is (a) not trivial and (b) not possible/feasible for many computations that have substantial amounts of non-parallelizable computation or don't work on large, homogeneous data sets.
Exactly. HSA may slightly increase the set of problems GPU compute is applicable to, but it does not solve the main challenge of rethinking algorithms (and obviously it does not invalidate Amdahl's law).
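For anyone who hasn't run into Amdahl's law: the overall speedup is capped by whatever fraction of the work stays serial, no matter how fast the parallel part gets. Rough sketch with made-up fractions:

```python
# Amdahl's law: overall speedup = 1 / ((1 - p) + p / s)
# p = parallelizable fraction of the work, s = speedup of that parallel part.

def amdahl_speedup(parallel_fraction, parallel_speedup):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / parallel_speedup)

# Even with an effectively "infinite" GPU speedup (s = 1000), a task that is
# only 70% parallelizable tops out around 3.3x overall.
print(amdahl_speedup(0.70, 1000))   # ~3.3
print(amdahl_speedup(0.95, 1000))   # ~19.6
```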
What do you mean?
This one:
That's not the same. GPGPU on a system like Wii U is only for some visual effects. GPGPU on a system like PS4 is not only visual but also interactive (like Driving Physics).
There's nothing stopping you from using GPU compute for non-visual tasks on a Wii U (or a PC).
Well that wasn't my initial statement. My initial statement was that the CPU and the iGP of the PS4 can work on tasks together, but I can see your point.
I believe you. But why does nobody use it then?
I know and you had good reason to do so. My formulation wasn't appropriate.
What statement was wrong?
I love the first post:
A while back we had a thread about a Microsoft rep who said the Xbox One was not targeting high specs.
Cool. I hope the price reflects that.
So, presumably the Xbone has worse GPU and worse RAM than the PS4? Is there anything it has an advantage in?
The specs have been leaked for months. The Xbone is weaker, that's the bottom line.
Why not use high-bandwidth GDDR5 from the start?
Microsoft needed 8GB RAM for their OS-heavy system design, and they couldn't guarantee that amount would be available in GDDR5 form by the time the system launched. Hence they went with 8GB DDR3 + eSRAM to mitigate some of the potential bandwidth issues.
It's not just that, but long term cost as well. I think MS chose the speed of DDR3 specifically for cost reductions with DDR4 at some point in the life of the console.
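For reference, the bandwidth side of that trade-off is simple arithmetic: peak bandwidth = bus width x effective transfer rate. The sketch below assumes the commonly reported 256-bit buses, DDR3-2133 and 5.5 GT/s GDDR5, so treat the exact figures as approximate; the helper function is just for illustration.

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * effective transfer rate in GT/s.

def peak_bandwidth_gbs(bus_width_bits, transfer_rate_gts):
    return bus_width_bits / 8 * transfer_rate_gts

print(peak_bandwidth_gbs(256, 2.133))   # XB1 DDR3-2133, 256-bit  -> ~68 GB/s
print(peak_bandwidth_gbs(256, 5.5))     # PS4 GDDR5 @ 5.5 GT/s    -> 176 GB/s
# The eSRAM is there to give the small, bandwidth-hungry working set
# (render targets and the like) something faster than the ~68 GB/s pool.
```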