My guess would be that the GPU in the PS6 is going to be three device layers, each one a newer revision of the PS5 Pro GPU, clocked at whatever lets them stay at or under a 250 W limit.
Then paired with a modern mobile Zen CPU inside the APU, with decent power efficiency but equal or higher clocks than the PS5/Pro for B/C, plus better IPC and throughput thanks to the 3D cache.
Assuming they went with a 3x Crossfire-style GPU setup, I'd expect 48 GB of whatever GDDR memory won't bottleneck performance. So possibly sticking with GDDR6 and relying on the crossfire setup, with the memory controllers operating in parallel on three 16 GB regions to get a big bandwidth multiplier from controller complexity rather than chasing expensive newer GDDR, combined with an updated I/O complex with three times the bandwidth (ESRAM) to scale appropriately.
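To make the "parallel controllers on three 16 GB regions" idea concrete, here's a toy sketch of classic address interleaving. Everything here (stripe size, region count, the `route` function) is my own illustrative assumption, not anything Sony has announced; the point is just that striping a flat address space across three controllers lets a linear burst keep all three busy at once.

```python
# Hypothetical sketch: stripe a flat address space across three 16 GB
# GDDR regions so sequential accesses hit all three controllers in turn.
# All names and sizes are assumptions for illustration only.

STRIPE = 256               # bytes per stripe (assumed granularity)
REGIONS = 3                # one region per GPU layer
REGION_SIZE = 16 * 2**30   # 16 GB per region

def route(addr: int) -> tuple[int, int]:
    """Map a flat physical address to (controller index, offset in region)."""
    stripe_index = addr // STRIPE
    controller = stripe_index % REGIONS        # round-robin across controllers
    local_stripe = stripe_index // REGIONS     # which stripe within that region
    offset = local_stripe * STRIPE + addr % STRIPE
    return controller, offset

# Consecutive stripes land on different controllers, which is where the
# ~3x aggregate-bandwidth multiplier for streaming reads would come from:
print([route(i * STRIPE)[0] for i in range(6)])  # → [0, 1, 2, 0, 1, 2]
```

The trade-off the post hints at is real: the multiplier comes from controller and arbitration complexity, not faster DRAM, which is why cheaper GDDR6 could still work in this scheme.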
If they did it this way, they'd be completely covered for PS5 B/C, and mostly covered for PS5 Pro B/C with patches to handle clocks and to redirect raster, RT and ML work to the different GPUs. Cross-gen would take the Pro solution and just ramp up the ML/AI and RT on those otherwise barely used parallel GPUs, while using the newer Zen CPU and the extra GDDR.
Early native PS6 games would then probably utilise the Zen CPU, new I/O complex and RAM fully, with raster on GPU1, RT on GPU2 and ML/AI on GPU3.
Fully developed PS6 games would instead split raster, ML/AI and RT across GPUs 1-3 as jobs, scaling by need rather than dedicating a whole GPU per feature, IMO.