
Microsoft Says AI Will Lead To Job Losses, Invests $80 Billion In The Tech

Mattyp

Not the YouTuber
Japan’s problem is a declining birth rate. What Japan needs to do is what Hungary does, which is to incentivize families to form and stay together.

Even then, we have to get away from this growth, growth, growth mindset. GDP doesn’t make a nation.
Japan is an entirely different kettle of fish, with women not wanting the men anymore and men being indifferent to chasing them at the same time.

They should accept the decline in population and get through it without importing 10 million immigrants; they will be better off riding it out in the long term.




AI will shutter a lot of jobs. Retraining is the future: do something that a machine can’t do within seconds to prove your worth.

My partner and I made an entire punk album together with the power of this tech, over a few drinks, and it sounds pretty damn decent to us, and we’re only in the first iteration of it. Anyone burying their head in the sand in a job that could be replaced should be getting out of their industry now. We’re going to see some crazy shit over the next decade; it’s only just getting started.

Yes, businesses will replace someone costing $100k a year when a text prompt can produce the same results for pennies.
 

sachos

Member
the slowing of A.I. advancements is not only inevitable but already happening 🤷🏼‍♂️
What slowing? 2024 was the year video generation exploded, with Sora showing it was possible early in the year and Google's VEO 2 in December showing amazing progress in physics understanding and quality.
It was the first year we got an AI actually able to produce a natural, human-sounding voice, capable of reproducing emotion (sadness, anger, whispering, laughter, joy, etc.) with GPT-4o.
We got Anthropic's Claude 3 Opus, which wowed us at the beginning of the year, and now 3.6 Sonnet is a smaller, faster, cheaper, and smarter model, showing the cost of intelligence trending towards zero.
OpenAI introduced their new scaling paradigm with the o-series of models, the first models able to "reason" (think for a long time before answering); these models perform better the more compute you spend to generate the answer.
Did you see the announced o3 benchmark results? It had a much bigger jump in scores on hard benchmarks than expected, all in three months of progress since o1-preview around September. And pretty much every research scientist seems to think this trend of scaling test-time compute will continue.
 
What slowing?
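For anyone wondering what "spend more compute to generate the answer" means in practice, here's a minimal best-of-N sampling sketch. It's a toy stand-in (random candidates, a made-up score) to show that more samples at inference time buy a better expected answer; it is not OpenAI's actual, unpublished o-series method:

```python
# Toy illustration of test-time compute scaling via best-of-N sampling.
# The candidate generator and its score are random stand-ins, not a real model.
import random

random.seed(0)

def sample_candidate() -> float:
    # Stand-in for sampling one reasoning chain and scoring its answer (0..1).
    return random.random()

def best_of_n(n: int) -> float:
    # Spending N times the inference compute: keep the best-scoring candidate.
    return max(sample_candidate() for _ in range(n))

for n in (1, 4, 16, 64):
    avg = sum(best_of_n(n) for _ in range(2000)) / 2000
    print(f"N={n:>2} samples -> average best score {avg:.3f}")
```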


DoubleClutch

Gold Member

[attached images: 9FVzvyI.jpeg, lSUikN6.jpeg]




.. in recent months, this faith in scaling has begun to collide with an inconvenient reality: signs are emerging that brute-force scaling alone may not be enough to drive continued improvements in AI.

OpenAI’s experience with its next-generation Orion model provides one data point. At 20% of its training process, Orion was matching GPT-4’s performance—what scaling laws would predict. But as training continued, the model’s gains proved far smaller than the dramatic leap seen between GPT-3 and GPT-4. In some areas, particularly coding, Orion showed no consistent improvement, despite consuming significantly more resources than its predecessors.

This pattern of diminishing returns isn’t isolated. The latest version of Google’s Gemini is reportedly falling short of internal expectations...
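To put "what scaling laws would predict" and the diminishing returns in concrete terms, here is a toy power-law illustration; the functional form mirrors published loss-versus-compute scaling curves, but every constant below is invented for the example:

```python
# Assumed power law: loss(C) ~= L_INF + A / C**ALPHA, where C is training compute.
# The constants are made up for illustration; only the shape of the curve matters.
L_INF, A, ALPHA = 1.7, 2.0, 0.05

def loss(compute: float) -> float:
    return L_INF + A / compute ** ALPHA

prev = loss(1e21)
for c in (1e22, 1e23, 1e24, 1e25):
    cur = loss(c)
    # Each additional 10x of compute buys a smaller absolute improvement.
    print(f"compute {c:.0e}: loss {cur:.3f} (gain over previous 10x: {prev - cur:.3f})")
    prev = cur
```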

Show’s over folks! Nothing to see here.

In 2005 we had flip phones. In 2025 we have well-functioning AI. Give it time; you’re not going to have breakthroughs every month.
 
On a scale of what though? 1 year on a 10-year scale is nothing. Even less on 100.

Companies are investing left, right, and center in such technologies and we’re talking about slowing.

It’s not slowing, it’s only getting started.
🤦🏻‍♂️ Imagine talking about A.I. and having such bad human intelligence that you can't understand the point being made (especially for the one hyping up A.I.).
 

KXVXII9X

Member
They will do this and not follow through, like their many other failed projects, while letting go of a lot of potentially good workers.

I feel like Microsoft thinks only short term. This chase for AI is like rich people trying to play God. Reminds me of Deus Ex: instead of augments, it is AI. These companies will fall like Icarus once they've squeezed every last drop out of everyone.
 

Shifty1897

Member
but in the meantime we need to hire millions of Indians on H1B visas!
I had to explain to a recent Computer Science master's student what the economic reality of the tech sector currently is.

I told him he doesn't just have to be better than his Indian colleagues. He has to be six times better than his Indian colleagues, because that's how many an employer can hire for his US salary.

My team hasn't hired a non-Indian employee in nearly three years.
 

Sony

Nintendo
Nothing about this is new. We used to have a lot of manual labor, which has since been automated. It will be no different with AI: more jobs will become redundant. The question is not only how we as a society support the people who will lose their jobs, but more importantly, how we allow automation to create value for society.

I wouldn't have voted for him as president, but Andrew Yang had a very good vision for why Universal Basic Income was needed.
 

mhirano

Member
This country has over 330 Million people. If we can’t develop enough such individuals ourselves then we’ve failed as a nation.

This nation produced Ben Carson, who is one of the best brain surgeons in history.

Make the climate such that more people like him can emerge. Instead we push a culture of single mothers, ghetto culture, and victimhood.
Why spend hundreds of thousands of dollars training doctors and engineers when you can just hire them, already trained and experienced, from lower-income countries like India and Brazil?
 

ResurrectedContrarian

Suffers with mild autism
What slowing? 2024 was the year video generation exploded, with Sora showing it was possible early in the year and Google's VEO 2 in December showing amazing progress in physics understanding and quality.
It was the first year we got an AI actually able to produce a natural, human-sounding voice, capable of reproducing emotion (sadness, anger, whispering, laughter, joy, etc.) with GPT-4o.
We got Anthropic's Claude 3 Opus, which wowed us at the beginning of the year, and now 3.6 Sonnet is a smaller, faster, cheaper, and smarter model, showing the cost of intelligence trending towards zero.
OpenAI introduced their new scaling paradigm with the o-series of models, the first models able to "reason" (think for a long time before answering); these models perform better the more compute you spend to generate the answer.
Did you see the announced o3 benchmark results? It had a much bigger jump in scores on hard benchmarks than expected, all in three months of progress since o1-preview around September. And pretty much every research scientist seems to think this trend of scaling test-time compute will continue.
Indeed, this has been a huge year with even more breakthroughs every week now.

AI isn't slowing in the sense that it's some transient change in technology; it is completely reshaping the world of technology at every level. Traditional development will be rapidly displaced by AI-augmented development and systems, and it's happening faster and faster.

This isn't just about massive LLMs alone; it's the convergence of all the different modalities. Vision models are being used to synthetically train video generation models, which are being used to synthetically train SoTA world simulation models with an advanced grasp of physics, like Cosmos, which was just released: https://github.com/NVIDIA/Cosmos

Many current areas of coding and dev expertise will be eliminated within a decade. For instance, the old stack of building 3D models in Maya or whatever software, authoring custom shaders and textures by hand, and putting it all together to make a special effects scene in a movie: that simply won't exist. That entire stack can be replaced by sampling from controlled simulation models in a way that abandons almost all the former areas of expertise.
 