Team Andromeda
Member
Yes, because you can bet China isn't using AI.

Capitalism at its finest. The Butlerian Jihad is looking more and more plausible.
Japan is an entirely different kettle of fish, with the women no longer wanting the men and the men indifferent to chasing at the same time.

Japan's problem is a declining birth rate. What Japan needs to do is what Hungary does, which is to incentivize families to form and stay together.
Even then, we have to get away from this growth, growth, growth mindset. GDP doesn’t make a nation.
What slowing? 2024 was the year video gen exploded, with Sora showing it was possible early in 2024 and now Google's Veo 2 in December showing amazing progress in physics understanding and quality.

the slowing of A.I. advancements is not only inevitable but already happening
Declining Quality of OpenAI Models Over Time: A Concerning Trend
I’ve noticed a troubling pattern with OpenAI’s release strategy for their language models, particularly GPT-4 and now GPT-4o. It seems they initially release a high-quality product to attract subscribers, but over time, the model’s performance degrades noticeably. Key observations: Initial... (community.openai.com)
Has AI scaling hit a limit? - Foundation Capital
On the eve of ChatGPT’s second birthday, I examine the fault lines in one of the field's most powerful assumptions. (foundationcapital.com)
... in recent months, this faith in scaling has begun to collide with an inconvenient reality: signs are emerging that brute-force scaling alone may not be enough to drive continued improvements in AI.
OpenAI’s experience with its next-generation Orion model provides one data point. At 20% of its training process, Orion was matching GPT-4’s performance—what scaling laws would predict. But as training continued, the model’s gains proved far smaller than the dramatic leap seen between GPT-3 and GPT-4. In some areas, particularly coding, Orion showed no consistent improvement, despite consuming significantly more resources than its predecessors.
This pattern of diminishing returns isn’t isolated. The latest version of Google’s Gemini is reportedly falling short of internal expectations...
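For context, the scaling laws the article refers to are empirical power laws: loss falls roughly as a power of compute, so each equal fractional improvement costs a multiplicatively larger amount of resources. A minimal sketch of that shape, with the constants `a` and `b` invented purely for illustration rather than fitted to any real model:

```python
# Toy power-law scaling curve: loss(C) = a * C**(-b).
# The constants a and b are invented for illustration; real fitted values
# differ by model family, dataset, and setup.
a, b = 10.0, 0.05

def loss(compute):
    # Loss falls as a power of compute spent on training.
    return a * compute ** (-b)

# Each 10x increase in compute removes a roughly constant *fraction* of
# the loss (a factor of 10**-b), so absolute gains shrink as compute grows.
for c in (1e3, 1e4, 1e5, 1e6):
    print(f"compute={c:9.0e}  loss={loss(c):.3f}")
```

Under a curve like this, "diminishing returns" is built in: going from the third to the fourth decade of compute buys a smaller absolute improvement than the decade before it.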
Show’s over folks! Nothing to see here.
In 2005 we had flip phones. In 2025 we have well-functioning AI. Give it time, you’re not going to have breakthroughs every month.
duh?
Slowing ≠ the show being over.
Then why post such an alarmist, ridiculous article?
On a scale of what though? 1 year on a 10-year scale is nothing. Even less on 100.
Companies are investing left, right, and center in such technologies and we’re talking about slowing.
It’s not slowing, it’s only getting started.
Imagine talking about A.I. and having such bad human intelligence that you can't understand the point being made (especially for the one hyping up A.I.).
Please explain that statement?

A.I. is a fad.
I had to explain to a recent Computer Science master's student what the economic reality of the tech sector currently is.

But in the meantime we need to hire millions of Indians on H1B visas!
Why spend hundreds of thousands of dollars training doctors and engineers when you can just hire them already trained and experienced from lower-income countries like India and Brazil?

This country has over 330 million people. If we can't develop enough such individuals ourselves, then we've failed as a nation.
This nation produced Ben Carson, one of the best brain surgeons in history.
Make the climate such that more like him can emerge. Instead we push a culture of single mothers, ghetto culture, and victimhood.
Indeed, this has been a huge year, with even more breakthroughs every week now.

What slowing? 2024 was the year video gen exploded, with Sora showing it was possible early in 2024 and now Google's Veo 2 in December showing amazing progress in physics understanding and quality.
It was the first year we got an AI able to produce natural, human-sounding voice, capable of reproducing emotion (sadness, anger, whispering, laughter, joy, etc.) with GPT-4o.
We got Anthropic's Claude 3 Opus, which wowed us at the beginning of the year, and now 3.6 Sonnet is a smaller, faster, cheaper, and smarter model, showing the cost of intelligence trending towards zero.
OpenAI introduced their new scaling paradigm with the o-series of models, the first models able to "reason" (think for a long time before answering). These models perform better the more compute you spend generating the answer.
Did you see the announced o3 benchmark results? It showed a much bigger jump in scores on hard benchmarks than expected. All in three months of progress since o1-preview around September. And pretty much every research scientist seems to think this trend of scaling test-time compute will continue.
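A crude way to see why spending more compute at answer time can help is best-of-N sampling with a verifier. The sketch below is a toy model of that general idea only: the doubling task, the 30% per-sample accuracy, and the verifier that knows the ground truth are all invented for illustration, and it is not OpenAI's actual (unpublished) o-series method.

```python
import random

# Toy "test-time compute" sketch: a weak model that answers a question
# correctly only ~30% of the time per sample, plus a verifier that can
# check candidates. Drawing more samples per question (more compute spent
# at answer time) raises accuracy, with no change to the model itself.

def noisy_model(question, rng):
    # Toy task: the correct answer is question * 2. The model gets it
    # right with probability 0.3, else returns a wrong offset answer.
    if rng.random() < 0.3:
        return question * 2
    return question * 2 + rng.randrange(1, 1000)

def solve(question, n_samples, rng):
    # Best-of-N: sample candidates and return one the verifier accepts,
    # falling back to an arbitrary candidate if none pass.
    candidates = [noisy_model(question, rng) for _ in range(n_samples)]
    correct = question * 2  # the verifier knows the ground truth here
    return correct if correct in candidates else candidates[0]

def accuracy(n_samples, trials=2000):
    rng = random.Random(0)
    hits = sum(solve(q, n_samples, rng) == q * 2 for q in range(trials))
    return hits / trials

# Accuracy is roughly 1 - 0.7**n: more samples per question, better answers.
for n in (1, 4, 16):
    print(f"n_samples={n:2d}  accuracy={accuracy(n):.2f}")
```

The design point is that accuracy scales with per-question sampling budget rather than with model size, which is the shape of curve the test-time-compute argument relies on; real systems replace the omniscient verifier with a learned reward model or checker.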