
Shuhei: PS3 was losing a billion dollars and was only saved by the Sony TV sales covering the losses; the PSN outage was unbelievably hard internally

Panajev2001a

GAF's Pleasant Genius
It's not that great compared to today's standards. The way modern multi-core, multi-threaded processors work is way more efficient than what Cell tried to do. It had a raw floating point calculation advantage over PC processors at the time, but it was way less versatile and not well suited for the direction CPUs were going overall. That's why it never made it out of specialized applications.
DevEx maybe, but then again modern software is not exactly screaming “efficiency” either, not even the ML/AI training stacks these days.

SPUs did integer and FP math, and could have been extended to process sparse matrices more efficiently and to support smaller INT and FP formats. The weaker points were the PPE and devs having to manage the LS instead of being coddled by large caches.
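
To make the LS point concrete, here is a rough sketch of the double-buffered DMA pattern SPU code typically revolved around (based on the Cell SDK's spu_mfcio.h intrinsics; the chunk sizes, tags and the process() helper are illustrative, not lifted from any real codebase):

```c
/* Illustrative SPU-side loop: the programmer streams data through the
 * 256KB local store with explicit DMA instead of relying on a cache. */
#include <stdint.h>
#include <spu_mfcio.h>

#define CHUNK 16384              /* 16KB per buffer, well inside the LS */
static volatile float buf[2][CHUNK / sizeof(float)] __attribute__((aligned(128)));

void process(float *data);       /* whatever work the SPU actually does (hypothetical) */

void stream(uint64_t ea_in, unsigned int chunks)
{
    unsigned int cur = 0;
    /* kick off the first transfer (tag 0 for buffer 0, tag 1 for buffer 1) */
    mfc_get(buf[cur], ea_in, CHUNK, cur, 0, 0);

    for (unsigned int i = 0; i < chunks; i++) {
        unsigned int next = cur ^ 1;
        if (i + 1 < chunks)      /* prefetch the next chunk while we compute */
            mfc_get(buf[next], ea_in + (uint64_t)(i + 1) * CHUNK, CHUNK, next, 0, 0);

        /* wait only for the buffer we are about to touch */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();

        process((float *)buf[cur]);
        cur = next;
    }
}
```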

Then again, GPUs are taking a LOT of the same workload: they evolved to do much more general computation and grew more HW acceleration for other parts, like Tensor cores and RT cores. Still, the demand for AI-capable chips (training and inferencing) and the properties these chips have to have (see Cerny’s presentation) may, I think, push people into re-discovering the same concept.

As we are dealing with transistor budgets that are incredibly higher than back then, traditional high-performance CPU cores like Zen 5c or Zen 5 itself are very small and you can pack 8+ of them into a small portion of the die, and raster workload improvements are ceding space to RT and ML workloads (which SPU-like units are suited for). I would not be surprised to see someone pack 6-8 Zen 5+ class CPU cores and multiple groups of 8-16 SPE-like units each (maybe in a multi-chip design on an interposer), with the latter sporting some custom HW (like rasteriser and texture sampling units).

This is where the design space is kinda converging anyway, and with Moore's Law in its current state, software complexity (to extract performance) will have to rise; there is no other way really.
 

PaintTinJr

Member
he did assist Insomniac tho didn't he? I guess that's why I remember it being Insomniac and other studios showing that they could barely get past PS2 graphics on a Cell
But the shader fragment work is ROPs, which were the responsibility of the Toshiba RSX 2D GPU, so that level of performance was completely expected when the job of the CELL SPUs would have been to simplify the operations to a 2D brute force acceleration, rather than do ROPs on SPUs.

Had the extra CELL's 6 SPUs been available, they would have massively accelerated vertex shaders or on-the-fly tessellation in geometry shaders - which Xenos couldn't do anyway - and left so much more headroom for physics simulations for cloth or even 3D audio. Vertex/geometry performance would have been closer to a GTX 280, with similar fillrate from the Toshiba RSX's embedded memory, rather than the roughly GTX 240-level rendering (slightly better or worse) the actual PS3's RSX+CELL produced.

6 more SPUs would have also meant an effective doubling of Cell embedded memory, because each SPU would have provided another 256KB of local store, meaning tasks maybe unsuited to a tiled 6x192KB layout would have worked in a single pass on 12x192KB, with the task effectively doubling its tile count (memory and processing).
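
Just to spell out the arithmetic in the paragraph above (a trivial sketch using the 256KB-per-SPU local store and the assumed 192KB-per-tile working set; these are the post's own figures, not measured numbers):

```c
/* The quoted numbers made explicit: each SPU adds 256KB of local store,
 * of which ~192KB is assumed usable as a data tile. */
#include <stdio.h>

int main(void)
{
    const int ls_per_spu_kb   = 256;
    const int tile_per_spu_kb = 192;   /* remainder assumed reserved for code/stack */

    printf("1 Cell,  6 usable SPUs: %d KB LS, %d KB of tiles\n",
           6 * ls_per_spu_kb, 6 * tile_per_spu_kb);    /* 1536 KB / 1152 KB */
    printf("2 Cells, 12 usable SPUs: %d KB LS, %d KB of tiles\n",
           12 * ls_per_spu_kb, 12 * tile_per_spu_kb);  /* 3072 KB / 2304 KB */
    return 0;
}
```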

The other major gain would have been that the Cell EIB would have had double the effective bandwidth, because with each CELL splitting the same workloads they could work simultaneously and wouldn't need to keep going in and out of XDR and GDDR3; both CELLs would have controlled ring-bus access and would just efficiently use the XDR to prepare work for the Toshiba RSX. Complexity would have been crazy, so it wouldn't have been a total free lunch, but it would have outlasted the PS3 comfortably for competing with PCs IMO
 

Three

Member
Yeah. Clearly the numbers Sony have shared, convincing their fans that they are highly successful, are bullshit.

Doesn't surprise me that they say they need to increase margins. They are probably still fudging the numbers to hide the real facts.
Dude, he's taking the piss out of Xbox taking losses for 20 years, hiding it, and using Windows/Office money to fund it. PS posts their profits and losses and it was well known - Aaron Greenberg was even celebrating them "hemorrhaging at retail" during the PS3 era. You suggesting Sony is not highly successful is crazy talk.
 

Auto_aim1

MeisaMcCaffrey
It was more traumatic for PS3 fanboys. L upon L every day. From the Bayonetta graphics comparisons to GT5 cockpits, and the highs of Uncharted 2 to the lows of the Geohot hack. It was a wild ride.
 

Spiral1407

Member
The original design was supposed to be 64MB eDRAM on the GPU and 128 on the Cell (no physical memory chips were planned at first). That's back when 360 was a 256MB machine also though - but yea, they'd basically designed a deferred shading accelerator with that thing. It would have been - interesting - to see the results had those early machines come to pass...
Really? If so, then that was destined for failure imo. I was thinking more like 16MB of restricted GPU eDRAM and 512MB of unified XDR.

Do you have a source for that info? I'd love to read it since PS3 hardware stuff is so interesting.
That's overstating things - if a game ran at 720p and upscaled to 1080p - you lost about 4MB of memory. In the end, every bit of memory matters in a console, yes - but it wasn't some large premium, we're talking 0.8% of the RAM available. And indeed that's why some games upscaled to 1080 vertical and let the upscaler do the rest (so getting a further 2MB back).
I think the fact that lots of games actually opted for native resolutions (giving perf/quality modes) was more interesting - and later in the gen that only became more common on the 360 as well. Obviously that had far larger memory ramifications - and plenty of software juggled it just fine on both consoles.
Again, it might seem small on its own, but combined with the inflated OS RAM usage, it does add up. I don't have exact figures for PS3, but when looking at the range of PS3 RAM usage figures reported around launch (84MB-120MB), 360 consistently had a 50MB advantage. The lack of hardware upscaling made that even worse.

With later games, I'd say that's more down to developers becoming accustomed to the hardware + Sony's OS optimisations. It's kinda like how PS2 games initially looked underwhelming in comparison to Dreamcast, but eventually far exceeded that console as the gen continued.
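
For reference, the framebuffer arithmetic behind the quoted ~4MB / ~0.8% figures works out roughly like this (a back-of-the-envelope sketch assuming a 32-bit colour target and ignoring depth/AA buffers and alignment, not exact console numbers):

```c
/* Back-of-the-envelope framebuffer cost, assuming 4 bytes per pixel
 * (32-bit colour) and ignoring depth, AA and alignment overheads. */
#include <stdio.h>

int main(void)
{
    const double MB = 1024.0 * 1024.0;
    double fb720       = 1280.0 * 720.0  * 4.0 / MB;  /* ~3.5 MB */
    double fb1080      = 1920.0 * 1080.0 * 4.0 / MB;  /* ~7.9 MB */
    double fb1280x1080 = 1280.0 * 1080.0 * 4.0 / MB;  /* ~5.3 MB, the "1080 vertical" trick */
    double extra       = fb1080 - fb720;              /* ~4.4 MB */

    printf("720p colour target:   %.1f MB\n", fb720);
    printf("1080p colour target:  %.1f MB\n", fb1080);
    printf("1280x1080 target:     %.1f MB\n", fb1280x1080);
    printf("extra for full 1080p: %.1f MB (about %.1f%% of 512 MB)\n",
           extra, extra / 512.0 * 100.0);
    return 0;
}
```
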
That also wasn't PS3 exclusive. The 360 upscaler had broken gamma, and it was also programmable, so results varied from game to game. And games running differently at different resolutions was the result of not simply upscaling, as I mention above - lots of software did this by late in the gen, including titles that specifically did 30fps 1080p and 60fps 720p targets.
But indeed - on PS3 there were some games doing this from day 1 - and sometimes HD modes were rather broken - there was that Marvel game I remember in particular...
The example I mentioned was likely a result of the upscaling though. Sonic Unleashed ran at a constant 880x720 internally on both platforms and the only difference I could notice between both modes was the output resolution.
 

Celine

Member
PS3 is, alongside XB OG, the worst financial failure in console history.

 

Clear

CliffyB's Cock Holster
The key issue for PS3 was manufacturing cost. That's where the losses were being accrued, and Sony had issues on that outside of the CPU/GPU. There was a severe shortage of blue-violet laser diodes for the BD pickups that wasn't resolved until 2007.

The unfortunate thing for Sony was the intrinsic need for these components; they literally had to have them, even at the inflated prices they were at.
 

Fafalada

Fafracer forever
Really? If so, then that was destined for failure imo. I was thinking more like 16MB of restricted GPU eDRAM and 512MB of unified XDR.
You have to remember these are timelines well ahead of both console launches (with launch planned much earlier than 2006), and things did change with time. But when these designs were worked on, we'd be looking at a 192MB vs a 256MB console, with the former having 20-30x the bandwidth, which makes up for the memory deficit pretty well.
But it's entirely possible the change to XDR would have happened later anyway - and you'd end up with something closer to your example - although I don't see any scenario where the GPU would have had less than 32MB of eDRAM. Sony had done a fair bit of work with HD resolutions by then, and 64MB wasn't chosen by accident. The FB-only approach to eDRAM that 360 used had a ton of other limitations outside of requiring tiling - i.e. the Sony/Toshiba approach to rasterization was to actively leverage high bandwidth in the rendering pipeline (also the design mindset behind the PS2, but that got limited a bit by cutting the console to 4MB from the original 8MB plan) - MS just put it there to mitigate the perceived weakness of the original XB UMA.

Anyway - it wasn't the only 'all eDram' design Sony made that changed before launch - PSP was originally something like 14MB of eDram - and I'd argue that the switch to DDR actually crippled the device somewhat, despite having 3x the total ram in the end.

Do you have a source for that info? I'd love to read it since PS3 hardware stuff is so interesting.
Nothing that is publicly available sadly. A lot of that early GPU work was pretty hidden stuff.

Again, it might seem small on its own, but combined with the inflated OS RAM usage, it does add up. I don't have exact figures for PS3, but when looking at the range of PS3 RAM usage figures reported around launch (84MB-120MB), 360 consistently had a 50MB advantage. The lack of hardware upscaling made that even worse.
Yea I know, that definitely didn't help matters, just saying scaler was less of an issue than some other elements.

With later games, I'd say that's more down to developers becoming accustomed to the hardware + Sony's OS optimisations. It's kinda like how PS2 games initially looked underwhelming in comparison to Dreamcast, but eventually far exceeded that console as the gen continued.
There was that - but like mentioned - multiple resolution targets were a thing from early days for PS3, even some of the launch window titles had two (or three) modes.
On 360 it mostly came down to 1-2 modes for the first few years (it's less known, but the majority of 360 titles DID have a native 480p target). But the HW didn't even support 1080p until the revision, and there was less incentive to support that with the upscaler obviously. Eventually, a native 720p/1080p switch did start showing up in later years though.
 

pulicat

Member
That entire generation was a financial loss for both companies tbh. I recall the RROD incident also costing M$ billions to address. Only Nintendo came out unscathed and even then, their choices that gen ultimately led to the Wii U disaster.
Not just unscathed for Nintendo but a glorious triumph over their competitors: they made $22 billion with the Wii and DS during the PS3, PSP, and 360 era. That was a golden age for Nintendo financially, before the Switch era.

Profits ranking
1. Switch era - $25 billion
2. Wii/DS era - $22 billion
3. PS5 era - $12 billion
4. PS4/Vita era - $9 billion
 

PaintTinJr

Member
You have to remember these are timelines well ahead of both console launches (with launch planned much earlier than 2006), and things did change with time. But when these designs were worked on, we'd be looking at a 192MB vs a 256MB console, with the former having 20-30x the bandwidth, which makes up for the memory deficit pretty well.
But it's entirely possible the change to XDR would have happened later anyway - and you'd end up with something closer to your example - although I don't see any scenario where the GPU would have had less than 32MB of eDRAM. Sony had done a fair bit of work with HD resolutions by then, and 64MB wasn't chosen by accident. The FB-only approach to eDRAM that 360 used had a ton of other limitations outside of requiring tiling - i.e. the Sony/Toshiba approach to rasterization was to actively leverage high bandwidth in the rendering pipeline (also the design mindset behind the PS2, but that got limited a bit by cutting the console to 4MB from the original 8MB plan) - MS just put it there to mitigate the perceived weakness of the original XB UMA.
...


....
Surely that was a pre-STI Group design, before they'd settled on the POWER architecture/instruction set for the SPUs, yes? Because I can't see any scenario where the IBM-hosted CELL BE documentation about its design could ever have worked for IBM and their needs from the STI group project.

That design sounds like it was still in the PS2 MIPS camp, especially as the Sony Zego graphics workstations that Fixstars sold with Yellow Dog Linux did have dual Cell BEs, a bigger unified XDR pool for both of them, and an Nvidia Quadro card with GDDR3 - as though it was an evolution from a final prototype dual Cell BE with unified XDR + Toshiba RSX 2D accelerator that would still have been 99% coherent with the IBM documentation that was published for PS3 Other OS and IBM Cell-powered systems.
 

Spiral1407

Member
Not just unscathed for Nintendo but a glorious triumph over their competitors: they made $22 billion with the Wii and DS during the PS3, PSP, and 360 era. That was a golden age for Nintendo financially, before the Switch era.

Profits ranking
1. Switch era - $25 billion
2. Wii/DS era - $22 billion
3. PS5 era - $12 billion
4. PS4/Vita era - $9 billion
Yeah, Wii was on top that gen by far. The only problem I can think of is its comparatively low software sales, but selling over 100m hardware units without taking a loss clearly offsets that.
 

tanners

Neo Member
The Cell is a great piece of technology even by today's standards. Even the medical field used the Cell processor, and it was banned from export to China due to military implications that the tech might be stolen. And many developers, especially from third parties, complained so much about it, even though 1st and 2nd parties were amazing with the Cell processor. I hope for its comeback on PS6, maybe a hybrid with AMD Zen. This would really help, especially with the PSSR ML.
Nah, IBM discontinued the Cell in 2009... They probably are not going to dig up that grave to bring it back :-/
The Cell's technology wasn't made only by Sony, but also by IBM and Toshiba. And no, the PS3 being banned in China was not for military reasons, but rather down to China's own console ban during the 2002-2014 period.
 

Fafalada

Fafracer forever
Surely that was a pre-STI Group design, before they'd settled on the POWER architecture/instruction set for the SPUs, yes?
As far as I'm aware, no - this was built in parallel with Cell; it was 'the' GPU for PS3 (until it wasn't).
Anyway, there were other missed targets - like Cell was supposed to be more than just 8-way SPU; Ken apparently really wanted that 1TFlop machine originally.
 

PaintTinJr

Member
As far as I'm aware, no - this was built in parallel with Cell; it was 'the' GPU for PS3 (until it wasn't).
Anyway, there were other missed targets - like Cell was supposed to be more than just 8-way SPU; Ken apparently really wanted that 1TFlop machine originally.
I thought the 'more than 8 SPUs' I read about in 2011-ish was a twin 16-core Cell BE2 for either a PS3 Pro prototype or a PS4 prototype long after Ken; I hadn't realised he was pushing that design from the beginning. I'm guessing they must have predicted a smaller node for the Cell launch chips than what they got, if they thought they could fit twice the SPUs in a Cell BE at the design stage.
 
The thermal design on early PS3s was great, but it was ultimately the fan curve that saved the day. Afaik, the 360 relied on a static temperature target rather than a fan curve like the PS3, which meant that 360s were essentially locked at GPU-killing temps from day one. I think the 360's cooling system was generally good enough to support the console if the GPU hadn't had the underfill defect. And even with the defect included, a significantly lower temperature target would have gone a long way. The original Xenons had the GPU running at 97C!
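
As a purely illustrative sketch of the difference being described (not actual console firmware logic - the curve points and thresholds below are made up), a fan curve maps temperature to fan duty continuously, while a static target lets the chip sit at one hot setpoint and only reacts around it:

```c
/* Illustrative only - not real console firmware.
 * A fan curve scales cooling with temperature; a static target
 * lets the chip sit at one hot setpoint and only reacts around it. */

/* Fan-curve style: interpolate fan duty (%) between defined points. */
static int fan_curve_duty(int temp_c)
{
    static const int pts[][2] = { {40, 25}, {60, 40}, {75, 65}, {85, 100} };
    const int n = sizeof(pts) / sizeof(pts[0]);

    if (temp_c <= pts[0][0]) return pts[0][1];
    for (int i = 1; i < n; i++) {
        if (temp_c <= pts[i][0]) {
            /* linear interpolation between the two surrounding points */
            int t0 = pts[i - 1][0], d0 = pts[i - 1][1];
            int t1 = pts[i][0],     d1 = pts[i][1];
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0);
        }
    }
    return 100;
}

/* Static-target style: hold duty steady until a single setpoint is crossed. */
static int static_target_duty(int temp_c, int prev_duty)
{
    const int target_c = 97;                           /* the quoted Xenon GPU temperature */
    if (temp_c >= target_c)     return prev_duty + 5;  /* ramp up only once it's hot */
    if (temp_c <  target_c - 5) return prev_duty - 5;  /* ease off just below the setpoint */
    return prev_duty;                                  /* otherwise sit right at the setpoint */
}
```
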

Yep, that's exactly what made the difference; I learned about this bump-gate stuff from a quite long but in-depth documentary (it's on YouTube, should be easily findable for people interested). I'm not 100% knowledgeable about how it worked (it's been a while since I've seen the vid or done any reading on the topic), but they got into the temperature target for the 360 and did tests, showing the GPU never dropped below a certain temperature (it might've been around 70-something degrees Fahrenheit IIRC), no matter its activity level.

So I'm guessing that would've contributed to more stress on the underfill/ball grid array, between the GPU being locked at a high temperature, then cooling down (i.e. system shutdown), then jumping right back up to that temperature once the system was on again and deemed it required. It's kinda like the lead effect in Alien 3, when they poured the lead on the alien and then hit it with the extremely cold water.

We should probably consider ourselves fortunate that no 360s exploded.
 