It’s like with any new technology. Remember the blockchain hype a few years back? Give it a few years and we will have a handful of areas where it makes sense, and the rest of the hype will die off.
Everyone sane probably realizes this. No one knows for sure exactly where it will succeed, so a lot of money and time is being spent on a 10% chance of a huge payout in case they guessed right.
It has some applications in technical writing, data transformation, and querying/summarization, but it is definitely being oversold.
There’s an area where blockchain makes sense!?!
Cryptocurrencies can be useful as currencies. Not very useful as investments, though.
Git is a sort of proto-blockchain – well, it’s a ledger anyway. It is fairly useful. (Fucking opaque compared to subversion or other centralized systems that didn’t have the ledger, but I digress…)
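The ledger part is just hash chaining: a commit’s id is a hash over its content plus its parent’s id, so rewriting any old commit changes every id after it. A toy sketch of the concept, not Git’s actual object format:

```python
# Toy sketch of the ledger idea: each commit id hashes the message
# plus the parent's id, so tampering with history changes every
# later id. Concept only -- not how Git really serializes objects.
import hashlib

def commit_id(message: str, parent: str | None) -> str:
    payload = f"parent:{parent}\nmessage:{message}".encode()
    return hashlib.sha1(payload).hexdigest()

c1 = commit_id("initial commit", None)
c2 = commit_id("add feature", c1)
c3 = commit_id("fix bug", c2)

# Changing the first message would change c1, and therefore c2 and c3.
print(c1, c2, c3, sep="\n")
```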
Yep, I know. AI should die someday.
Mr. Torvalds is truly a generous man; crediting the current AI market with 10% usefulness is probably a decimal place or two more than will end up panning out once the hype bubble pops.
I’m waiting for the part where it gets used for things that are not lazy, manipulative, and dishonest. Until then, I’m sitting it out like Linus.
This is where I’m at. The push right now has NFT pump-and-dump energy.
The moment someone says AI to me right now, I auto-disengage. When the dust settles, I’ll look at it seriously.
AI has been used for these things for decades; it’s just in the background and not noticed by laypeople.
Though the biggest issue is that when people say “AI” today, they specifically mean LLMs, but the world of AI is so much larger than that.
I’m waiting for the part where it gets used for things that are not lazy
Replacing menial or boring tasks is like 90% of what I’m hoping for from it.
He is correct. It is mostly people cashing out on stuff that isn’t there.
Decided to say something popular after his snafu, I see.
“AI bad” gets them every time.
In a way he’s right, but it depends! If you take even a common example like ChatGPT or the native object detection used in iPhone cameras, you’d see that there’s a lot of cool stuff already enabled by our current way of building these tools. The limitation right now, I think, is reacting to new information or scenarios a model isn’t trained on, which is where all the current systems break. Humans do well in new scenarios thanks to their cognitive flexibility, and I, at least, am unaware of a good framework for instilling cognitive flexibility in machines.
100% hyped by people who’ve watched a few YouTube videos and now claim they’re experts.
I had a professor in college that said when an AI problem is solved, it is no longer AI.
Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they’re just tools we use without thinking about them.
I’m sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on a statistical likelihood of what comes next from the group of possible words that my gesture could be. This would have been the realm of AI once, but now it’s just the keyboard app on my phone.
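Under the hood it’s roughly this: take the words the swipe could plausibly match, then pick the one most likely to follow the previous word. A toy sketch with made-up candidates and bigram counts, not any real keyboard’s model:

```python
# Toy sketch of gesture typing: among the words a swipe could match,
# pick the one statistically most likely to follow the previous word.
# The candidate set and bigram counts are invented for illustration.
BIGRAM_COUNTS = {
    ("to", "type"): 150,
    ("to", "tape"): 12,
    ("to", "tyre"): 1,
}

def predict(prev_word: str, gesture_candidates: list[str]) -> str:
    return max(gesture_candidates,
               key=lambda w: BIGRAM_COUNTS.get((prev_word, w), 0))

# A swipe over t-y-p-e could also plausibly be "tape" or "tyre":
print(predict("to", ["type", "tape", "tyre"]))  # -> "type"
```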
There’s a name for the phenomenon: the AI effect.
LLMs without some sort of symbolic reasoning layer aren’t actually able to hold a model of their context and the relationships within it. They predict the next token, but fall apart when you change the numbers in a problem or add some negation to the prompt.
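To make “predict the next token” concrete, here’s a toy greedy decoder over a made-up probability table; a real LLM gets these numbers from a neural net, but the output is still pattern recall, not a model of the problem:

```python
# Toy sketch of "predict the next token": greedy decoding over an
# invented table of conditional probabilities keyed on the last two
# tokens. There is no arithmetic here, just "what tends to come next".
TOY_MODEL = {
    ("two", "plus"): {"two": 0.8, "three": 0.2},
    ("plus", "two"): {"is": 0.9, "equals": 0.1},
    ("two", "is"): {"four": 0.95, "five": 0.05},
}

def next_token(tokens: list[str]) -> str | None:
    dist = TOY_MODEL.get(tuple(tokens[-2:]))
    return max(dist, key=dist.get) if dist else None

tokens = ["two", "plus"]
while (tok := next_token(tokens)) is not None:
    tokens.append(tok)

print(" ".join(tokens))  # "two plus two is four" -- recall, not arithmetic
```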
Awesome for protein research, summarization, speech recognition, speech generation, deepfakes, spam creation, RAG document summaries, brainstorming, content classification, etc. I don’t even think we’ve found all the patterns they’d be great at predicting.
There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That’s not from me, that’s Noam Chomsky: https://youtu.be/axuGfh4UR9Q?t=9271
I’ve often thought LLMs could replace the whole C-suite and upper and middle management.
Funny how no companies push that as a possibility.
I almost expect that we’ll see some company reveal it has been letting an AI control the top level decision making for the business itself, including if and when to reveal the AI.
But the funny thing will be that all the executives and board members still have jobs and huge stock awards. They will all pat each other on the back for getting paid more money to do less work, by being bold and taking a risk to let the computer do half their job for them.
Game devs are gonna have to use different language to describe what used to simply be called “enemy AI”, where exactly zero machine learning is involved.
CPU
Logic and Path-finding?
“duh.”
No, AI is a very real thing… just not LLMs; those are pure marketing.
The latest LLMs get a perfect score on the South Korean SAT and can pass the bar. More than pure marketing, if you ask me. That’s not to deny that 90% of businesses claiming AI are nothing more than marketing, or are pretty much just a front end for the GPT APIs. LLMs like Claude even check their work for hallucinations. Even if we limited all AI to LLMs, they would still be groundbreaking.
The Korean SAT is highly standardized in multiple-choice form, and there is an immense library of past exams that both test takers and examiners use. I would be more impressed if the LLMs could also show their step-by-step working…
Claude 3.5 and o1 might be able to do that; if not, they are close to being able to. Still better than 99.99% of earthly humans.
You seem to be in the camp of believing the hype. See this write-up of an Apple paper detailing how adding simple statements that should not impact the answer to a question severely disrupts many of the top models’ abilities.
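For anyone curious, the test is roughly this shape (a hypothetical sketch; ask_model is a placeholder I made up, not a real API):

```python
# Hypothetical sketch of the paper's test: append an irrelevant
# clause to a word problem and check whether the answer changes.
# ask_model() is a stand-in stub so this runs; swap in a real LLM call.
def ask_model(prompt: str) -> str:
    return "7"  # placeholder answer, NOT a real API

base = "Liam has 12 apples and gives away 5. How many are left?"
distractor = " Note: three of the apples are slightly smaller than average."

same = ask_model(base) == ask_model(base + distractor)
# A system that actually models the problem says 7 both times;
# the paper found many top models flip on the perturbed version.
print("robust to distractor:", same)
```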
In Bloom’s taxonomy of the six stages of higher-level thinking, I would say they reach the second stage, ‘understanding’, only in a small number of contexts, but we give them so much credit because, as a society, our supposed intelligence tests for people have always been more like memory tests.
Exactly… People are conflating the ability to parrot an answer based on machine levels of recall (which is frankly impressive) with the machine actually understanding something and being able to articulate how it arrived at a conclusion (which, in programming circles, would be similar to a form of “introspection”). LLMs are not there yet.
Copilot by Microsoft is completely and utterly shit, but they’re already putting it into new PCs. Why?
Investors are saying they’ll back out if there’s no AI in the products. So tech leaders will all talk the talk and cram AI into everything.
Copilot+ PCs, tho…
Linus is known for his generosity.
Linus is a generous man.
Dude…
What?
True. 10% is very generous.
So basically just like Linux. Except Linux has no marketing… So 10% reality, and 90% uhhhhhhhhhh…
90% angry nerds fighting each other over what answer is “right”
Never heard of Android, I guess?
That says more about your ignorance than anything about AI or Linux.
What
Some “Linux bad, Windows good” troll.
Did I fall into a 1999 Slashdot comment section somehow?
You’re aware Linux basically runs the Internet, right?
You’re aware Linux basically runs the World, right? Billions of devices run Linux. It is an amazing feat!
So basically just like Linux. Except Linux has no marketing
Except for the most popular OS on the Internet, of course.
I play around with the paid version of ChatGPT and I still don’t have any practical use for it. It’s just a toy at this point.
It’s useful for my firmware development, but it’s a tool like any other. Pros and cons.
I used ChatGPT over the weekend to help look up some syntax for a niche scripting language, to cut down the time I spent working so I could get back to my weekend.
Then, yesterday, I spent time talking to a colleague who was familiar with the language to find the real syntax, because ChatGPT had just made shit up and doesn’t seem to have been accurate about any of the details I asked about.
Though it did help me realize that this whole time when I thought I was frying things, I was often actually steaming them, so I guess it balances out a bit?
I use shell_gpt with an OpenAI API key so that I don’t have to pay a monthly fee for their web interface, which is way too expensive. I topped up my account with $5 back in March and I still haven’t used it up (rough sketch of the equivalent setup below). It is OK for getting info about very well-established things, where doing a web search would be more exhausting than asking ChatGPT. But every time I try something more esoteric, it will make up shit, like nonexistent options for CLI tools.
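For reference, the same pay-per-use idea with the openai Python package (which is what shell_gpt wraps) looks roughly like this; it reads OPENAI_API_KEY from the environment, and the model name is just an example:

```python
# Rough sketch of pay-per-use access: call the API directly with an
# API key instead of paying the monthly web-interface subscription.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; pick whatever fits your budget
    messages=[{"role": "user", "content": "What tar flags extract a .tar.gz?"}],
)
print(resp.choices[0].message.content)
```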
ugh hallucinating commands is such a pain