kensama
Member
I'm still mightily confused as to how a GCN @ 13.8TF can be worse than an RDNA1 @ 9.75TF.
To me, it's like someone telling me that a ton of bricks is heavier than a ton of feathers.
Is a teraflop really such a bad measurement of things? After all, FLOPS is a measurement of floating point operations per second, and a teraflop is a trillion floating point operations per second.
Therefore, if one piece of hardware is outputting 13.8TF while the other is outputting 9.75TF, it should follow that the one with the higher number is handling more of these calculations per second and is therefore the better device.
What else is going on to affect the performance?
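For illustration, here's a rough sketch of where those headline numbers come from, assuming the cards in question are the Radeon VII (GCN, 13.8TF) and the RX 5700 XT (RDNA1, 9.75TF). Theoretical FP32 TFLOPS is just shader count × 2 ops per clock (fused multiply-add) × clock speed, so it says nothing about how efficiently each architecture turns those operations into frames:

```python
# Sketch only: theoretical FP32 TFLOPS = shaders * 2 FLOPs per clock (FMA) * boost clock.
# The specific cards and clocks are my assumption of which GPUs the 13.8TF / 9.75TF
# figures refer to (Radeon VII: 60 CUs @ ~1.80 GHz; RX 5700 XT: 40 CUs @ ~1.905 GHz).

def peak_tflops(compute_units: int, boost_ghz: float, shaders_per_cu: int = 64) -> float:
    """Peak FP32 throughput: shaders * 2 FLOPs per cycle * clock, returned in TFLOPS."""
    return compute_units * shaders_per_cu * 2 * boost_ghz / 1000

print(f"GCN   (Radeon VII) : {peak_tflops(60, 1.80):.2f} TF")   # ~13.82 TF
print(f"RDNA1 (RX 5700 XT) : {peak_tflops(40, 1.905):.2f} TF")  # ~9.75 TF
```

Both figures are peak arithmetic throughput; neither says how much of that throughput the architecture can actually keep busy in a real game.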
Because GCN (and RDNA1) has no RT cores built into the hardware. RDNA2 is like Nvidia RTX, with tensor-style cores built into the hardware.
So out of those 13.8 TF on GCN, a large part has to be spent doing in software what the dedicated cores inside an RDNA2 GPU handle in hardware, which the GCN architecture can't do and which eats a lot of GPU resources.