I can't speak for others who use Github and the testing data as a reference point, but I'll answer what I can from your post.
-I always thought the initial testing data was BC testing for PS4 and PS4 Pro. Even before I learned about the Ariel iGPU testing profile, and well before it came up in discussion, I felt that testing data reflected only BC settings and not the full actual chip.
-The 36/40 is either the full Oberon CU count, or an incomplete count. It's not the data miners' fault that this particular question can't be clearly answered; you'd have to ask the Sony and AMD staff running these tests and storing the data.
-You're misinterpreting what RDNA2 actually is. RDNA2 isn't exclusively 7nm EUV. AMD's wording clouded conclusions on that, but they clarified shortly after the conference that it was a nomenclature thing, so 7nm"+" can be thought of as further improvement on the 7nm node process.
-Another important thing to keep in mind is that RDNA2 does not necessarily mean a "large chip". AMD have never stated this, not even once, and they never will. Why would they keep RDNA2 features like RT, VRS and VRR off their mobile APUs, which have very low CU counts compared to these consoles? Some people have automatically made that correlation, but that's jumping through an unnecessary hoop. It just so happens there will be RDNA2 chips that are "big", but RDNA2 features can be implemented on smaller chips as well; otherwise AMD would have completely failed at designing a smart, scalable GPU architecture.
-There's nothing inherently stopping Sony from going with a bigger chip, but that doesn't mean a bigger chip was their goal, either. I've actually speculated on why they might have gone with a relatively smaller chip. There are also cost-related reasons: Sony is very likely going to manufacture a very big chunk of units for the first fiscal quarter (I would guess at least 10 million), and you have to weigh that against the size of Sony as a corporation, what money they have on hand, and what can be borrowed through big loans from banks or investors to cover those costs. That alone could have led them toward a relatively "smaller" chip.
-You're right, pretty much anyone speculating that Oberon is an RDNA2 chip is doing so with information outside of Github. But personally, I've been using a wealth of other sources in my own speculation for a long time now, and more to the point, it's not like most of the insiders really know what the PS5 is, either. If they did, we could actually get some more nuanced specifications beyond simply a TF range. And when you have guys like Tommy Fischer committing reputation suicide by almost duping a mod with doctored controller photos, that puts an extra bit of doubt on other rumors as a whole: even when pressed for proof, some of these guys may try bullshitting their way to the top because the balls are just that brass.
-I never questioned PS5 having hardware ray-tracing, not once. But it wasn't just Githubbers speculating that PS5 was RDNA1; many people in general were. Many assumed Sony had their own RT ASIC solution, either integrated into the APU as an IP block (some even brought up a team member having worked on PowerVR, suggesting Sony might use PowerVR RT instead of AMD's) or connected to the APU, likely over some PCIe interconnect or something to that effect. And it was mainly Sony people speculating this. Even I did for a while, but once it became apparent how inefficient a separate RT ASIC would be for the design (potentially a bandwidth issue), I accepted that PS5 would have AMD's RT in one form or another, while still relating Oberon to PS5. In other words, I dropped the PS5-as-RDNA1 idea a long time ago, pretty much when Cerny clarified things and the RT ASIC idea fell through.
-Gotta keep in mind that a lot of us were also uncertain about RDNA2's node process. I assumed both PS5 and XSX were RDNA1 because I thought they would be 7nm. In fact, given AMD recently said 7nm+ isn't necessarily EUV but rather improvements on the 7nm process, it's almost certain PS5 and XSX are still 7nm but with some efficiency gains (just not EUV levels). I even doubted XSX was fully RDNA2 when Microsoft themselves mentioned RDNA2 in that Twitter post, because I assumed RDNA2 was exclusively 7nm EUV and knew that would be even costlier than regular 7nm.
So basically, I think PS5 and XSX will both still be 7nm, but will be RDNA2 chips. That should be pretty clear-cut.
-I think people are overestimating RDNA2's efficiency. The 50% perf-per-watt figure doesn't actually mean 50% more IPC over RDNA1. And we now know RDNA2 is not exclusive to EUV; the "7nm+" actually indicates modest but notable improvements over the 7nm process, not to the level of what actual 7nm EUV brings. So if anything, it suggests both PS5 and XSX are 7nm and benefit from slight efficiency improvements without being on the EUV process.
-Wasn't the interpretation of the testing data always that it indicated BC for PS4 and PS4 Pro? Even so, that raises the question of why 2GHz needed to be tested, except perhaps to ensure PS4 Pro titles could hit native 4K60 on the chip. Probably a question worth asking, and one we'll hopefully get an answer to soon (from Sony themselves).
-So I guess that brings us back to the high clock. You ask why they would do it; well, I'm still asking that too. I have never claimed that Github and the testing data represent everything about PS5's performance. However, with clarification on what RDNA2 actually is and the fact it can be on both 7nm and 7nm EUV (the "7nm+" apparently being more about minor improvements to the 7nm process, probably through things AMD have done themselves on the architecture pipeline side), I think there might be some justification for the 2GHz clock testing on Oberon beyond acting as a free FPS stability boost for Pro-enhanced games.
What would justify that? Well, the easiest answer is that Oberon might be a 36/40 CU chip after all. Or it could be a slightly larger chip (the December revision adjusted the memory bus controller, IIRC, widening it) but still running a relatively high clock. That's why I think the best answer for now to what PS5 actually is lies somewhere in the middle: a 44-48 active CU chip on a more recent revision, pushing relatively high clocks (anywhere between 1800MHz and 1900MHz if 2000MHz can't be sustained), might fit the bill.
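For what it's worth, the TF figures being thrown around follow from straightforward arithmetic: theoretical FP32 TFLOPS = CUs × 64 shaders per CU × 2 ops per cycle (FMA) × clock. A quick sketch below; the 36 CU @ 2.0 GHz row is the Github/Oberon figure, while the other rows are the speculative middle-ground configurations I mentioned above, not confirmed specs:

```python
def gpu_tflops(cus: int, clock_ghz: float) -> float:
    """Theoretical FP32 throughput for a GCN/RDNA-style GPU:
    CUs * 64 shaders per CU * 2 ops/cycle (fused multiply-add)."""
    return cus * 64 * 2 * clock_ghz / 1000.0

# Speculative configurations discussed above -- not confirmed hardware.
configs = [
    ("36 CU @ 2.0 GHz (Github/Oberon)", 36, 2.0),
    ("44 CU @ 1.8 GHz (middle guess)", 44, 1.8),
    ("48 CU @ 1.9 GHz (middle guess)", 48, 1.9),
]
for label, cus, ghz in configs:
    print(f"{label}: {gpu_tflops(cus, ghz):.2f} TF")
# 36 CUs at 2.0 GHz works out to 9.22 TF, i.e. the "9.2TF" figure.
```

This is also why the 9.2TF number is inseparable from the 36 CU / 2GHz reading of the testing data: change either assumption and the TF figure moves with it.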
But I can't build grounded speculation like that by treating one source type (insiders) as gospel and the other (pertinent & persistent testing data) as throwaway junk. Unfortunately, a lot of the people who say things like "Team Github" are doing it unironically, to discredit anyone even partially referencing the Github stuff or any of the datamined testing data, for any purpose at all. That type of reasoning is nothing short of insanity.
I also think it's because we really do have a group of people who are desperate for a power-crown battle at any cost, unwilling to even consider the worth or validity of any source that isn't for "their team" coming out on top in a proverbial TF pissing contest, and who assume anyone referring to anything besides insiders exclusively is against their preferred brand or "team". The truth is you may get that with a very small minority of people, but the vast majority of that type aren't here on these forums.
Yes, it's unfortunate that people with actual troll-like behavior in the past, such as TimDog (and even some people here like Longdi, who I hope has calmed that type of posting down), are clinging to the Github 9.2TF stuff to antagonize PS fans. But I'm not TimDog, and TimDog is not representative of the majority of us who simply see the data as a valuable asset in discussing what next-gen PlayStation might be. Hell, most of us are doing so while still listening to and considering the rumors as well; it doesn't have to be an either/or situation.
In the end, I agree with your closing point: Github doesn't provide the full picture on PS5. It didn't provide the full picture on XSX either, and it barely provides anything on Lockhart. (Speaking of which, I'll take a minute here to say how funny it was to see some Xbox people argue that the 22/24 CU APU leak, which very clearly listed Microsoft and a 950MHz TDP-down state, was not related to Lockhart. It took DF making a video simulating a 4TF Lockhart running games at 1080p and mostly 60FPS for them to accept that particular leak. I took that leak to be Lockhart upon first seeing it, and it also has me thinking Lockhart is not a "traditional" game console, but that's something for another time.)
However, there are people who will take anyone merely stating that and interpret it to mean "Github is useless. The testing data is useless. None of these chips are PS5-related at all", and some of them know exactly what they're doing. So IMHO, they're just as guilty of having an agenda in dismissing Github as some people are in using Github and the testing data to antagonize Sony and PS fans (pushing 9.2TF like it's a derogatory insult). It goes both ways. As much as I don't want to give the fanatical, trolling Xbox fanboys ammunition by having to clarify my own POV on the Github info and testing data, I also don't want to give the fanatical, trolling PlayStation fanboys ammunition to attack all aspects of that info as FUD that only folks with an agenda against Sony and PS would bother bringing up, because for the vast majority of us that is NOT the case whatsoever.
I simply talk about what I observe, and I try to observe from multiple sources and be fair in doing so, to see where they meet in the middle. However, I can't help it if one of those source types has data on its side that I can look at, research, and verify on my own (by following the conventions the producers of these chips would use), and that has itself been very consistent. And I can't help it if that comes off to some people as me "choosing" one side over another, when I'm not doing so out of some emotional need or negative bias toward a preference. I honestly think most of the other people who give the Github info and testing data at least some weight would feel similarly.