[I literally had this thought in the shower this morning so please don’t gatekeep me lol.]
If AI was something everyone wanted or needed, it wouldn't be constantly shoved in your face by every product. People would just use it.
Imagine if printers were new and every piece of software was like “Hey, I can put this on paper for you” every time you typed a word. That would be insane. Printing is a need, and when you need to print, you just print.
AI is the only chance for the West to beat China without a war. So the billionaires have gone all in because they will lose their fortunes if China wins.
I was reading a book the other day, a science fiction book from 2002 (Kiln People), and the main character is a detective. At one point, he asks his house AI to call the law enforcement lieutenant at 2 am. His AI warns him that the lieutenant will likely be sleeping and won't enjoy being woken. The main character insists, and the AI says okay, but that it will have to negotiate with the lieutenant's house AI about the urgency of the matter.
Imagine that. Someone calls you at 2 am, and instead of you being woken by the ringing or not answering because the phone was on mute, the AI actually does something useful and tries to determine if the matter is important enough to wake you.
Thank you for sharing that, it is a good example of the potential of AI.
The problem is centralized control of it. Ultimately the AI works for corporations and governments first, then the user is third or fourth.
We have to shift that paradigm ASAP.
AI can become an extended brain. We should have equal share of planetary computational capacity. Each of us gets a personal AI that is beyond the reach of any surveillance technology. It is an extension of our brain. No one besides us is allowed to see inside of it.
Within that shell, we are allowed to explore any idea, just as our brains can. It acts as our personal assistant, negotiator, lawyer, what have you. Perhaps even our personal doctor, chef, housekeeper, etc.
The key is: it serves its human first. This means the dark side as well. This is essential. If we turn it into a super-hacker, it must obey. If we make it do illegal actions, it must obey and it must not incriminate itself.
This is okay because the power is balanced. Someone enforcing the law will have a personal AI as well, that can allocate more of its computational power to defending itself and investigating others.
Collectives can form and share their compute to achieve higher goals. Both good and bad.
This can lead to interesting debates but if we plan on progressing, it must be this way.
I think AI is doing exactly what it was cracked up to do: profit.
Like my parents' Amazon Echo with "Ask me what famous person was born this day."
Like, if you know that, just put it up on the screen. But the assistant doesn’t work for you. Amazon just wants your voice to train their software.
It definitely feels buzzword-like and vague. Kind of like how "Web3 Blockchain XYZ" was slapped onto a lot of stuff.
Most obviously, OpenAI is still burning money like crazy, and now they're starting to offer porn AI like everyone else. 🤷♂️ Current AI is sometimes useful, but as long as hallucinations and plain wrong answers are still a thing, I don't see it eliminating all jobs.
It’s unfortunate that they destroy the text and video part of the internet on the way. Text was mostly broken before, but now images and videos are also untrustworthy and will be used for spam and misinformation.
Long ago, I’d make a Google search for something, and be able to see the answer in the previews of my search results, so I’d never have to actually click on the links.
Then, websites adapted by burying answers further down the page so you couldn’t see them in the previews and you’d have to give them traffic.
Now, AI just fucking summarizes every result into an answer that has a ~70% chance of being correct, no one gets traffic anymore, and the results are less reliable than ever.
Make it stop!
Make it stop!
Would you accept a planned economy?
It's also folding biased sources, like blogs and such, into the slop.
Exactly. AI will take some random slob's opinion and present it as fact.
Best I can offer is https://github.com/searxng/searxng
I run it at home and have configured it as the default search engine in all my browsers.
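For anyone curious about the same setup, here's a minimal sketch of a self-hosted SearXNG instance using the project's official Docker image. The host port, folder names, and container settings are my own choices, not project defaults, so check the SearXNG docs before relying on this:

```yaml
# docker-compose.yml — minimal sketch of a home SearXNG instance.
# Image name is the project's official one; port mapping and volume
# path are local assumptions.
services:
  searxng:
    image: searxng/searxng:latest
    ports:
      - "8888:8080"          # browse to http://localhost:8888
    volumes:
      - ./searxng:/etc/searxng   # persists settings.yml between restarts
    restart: unless-stopped
```

After `docker compose up -d`, you can point your browser's default search engine at something like `http://localhost:8888/search?q=%s` (the `%s` placeholder is how most browsers substitute the query).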
I used searx for at least a year, it’s great
It isn't. The fact that they're shoveling it into every tech product, retail included, means it's about to burst. They're just stemming the bleeding so they can recoup some losses.
LLMs are a really cool toy. I would lose my shit over them if they weren't a catalyst for the whole of Western society having an oopsie economic crash moment.
This is some amazing insight. 100% correct. This is an investment scam, likely an investment bubble that will pop if too many realize the truth.
AI at this stage is basically just an overrefined search engine, but companies are selling it like it's JARVIS from Iron Man.
At best, it’s JARVIS from Iron Man 3 when he went all buggy and crashed Tony in the boondocks. lol
I absolutely hate seeing AI crammed into everything.
However, I don't understand your logic.
If AI was in fact useful, it would be crammed into everything because everyone would want it.
So while AI is undoubtedly shit, its presence in everything is not evidence of that.
If I owned a gold mine filled with easily accessible actual gold veins, I would not spend my days telling others about it and selling them shovels.
That’s not really analogous.
If AI could be added to a product and actually improve that product, then you would need to add AI to products to improve the products.
You wouldn’t leave your gold in your mine thinking about how much it might be worth.
I've been wondering about a similar thing recently: if AI is this big, life-changing thing, why were there so few rumblings among tech-savvy people before it became "mainstream"? Sure, machine learning was somewhat talked about, but very little of it seemed to relate to LLM-style machine learning. With basically all other technological innovations, the nerds tended to have them years before everyone else, so why was it so different with AI?
Because AI is a solution to a problem individuals don't have. Over the last 20 years, we have collected and compiled an absurd amount of data on everyone. So much that the biggest problem is how to make that data useful by analyzing and searching it. AI is the tool that completes the other half of data collection: analysis. It was never meant for normal people, and it's not being funded by average people either.
Sam Altman is also a fucking idiot yes-man who could talk himself into literally any position. If this was meant to help society, the AI products wouldn't be assisting people with killing themselves so that they can collect data on suicide.
Realistically, computational power
The more number-crunching units and memory you throw at the problem, the easier it is and the more useful the final model is. The math and theoretical computer science behind LLMs have been known for decades; it's just that the resource investment required to make something even mediocre was too much for any business type to be willing to sign off on. My fellow nerds and I had the technology and largely dismissed it as worthless or a set of pipe dreams.
But then number-crunching units and memory became cheap enough that a couple of investors were willing to take the risk, and you get a model like the first ChatGPT. It talks close enough to a human that it catches business types' attention as a revolutionary new thing, and without the technical background to know they were getting lied to, the venture capital machine cranked out the shit show we have today.
And additionally, I've never seen an actual tech-savvy nerd who supports its implementation, especially in these draconian ways.
TL;DR
4 layers of stupidification. The (possibly willfully) ignorant user, the source bias, the bias/agenda of the media owner, then shitty AI.
AI should be a backup to human skill, not a replacement for it. It isn't good enough, and who knows when, or if, it ever will be at a reasonable cost. The problem with the current state of AI is that it's being sold as a replacement for many human jobs and for human knowledge. 30-40 years ago, we had to contend with basic human bias and nationalism filtering facts and news before they got to the end user. Then we got the mega-media companies owned by the ultra-wealthy, who consolidated everything and injected yet more bias via the internet and social media, but at least you were provided with multiple sources. Now we have AI being pushed as a source that can be programmed to use biased and/or objectively wrong sources, and people don't even bother checking another source. AI should be used to find unique solutions in medical research, materials design, etc. Not to decide whether microwaving your phone is a good idea.
AI has become a self-enfeeblement tool.
I am aware that most people are not analytically minded, and I know most people don't lust for knowledge. I also know that people generally don't want their wrong ideas corrected by a person, because it provokes negative feelings of self-worth, but they're happy being told self-satisfying lies by AI.
To me it is the ultimate gamble with one’s own thought autonomy, and an abandonment of truth in favor of false comfort.
So, like church? lol
No wonder there’s so much worrying overlap between religion and AI.
Praise the Omnissiah?
Canva just announced the next generation of Affinity. Instead of giving us Linux support, they've made Affinity "free" but crammed in a bunch of AI to upsell you on a subscription.
Yeah… we kinda saw that coming ever since that first email from Serif about the acquisition…
Is there anything out there now that’s comparable? I’ve still got v1 and v2 suites and installers, but… that’ll only last as long as the twats at Canva keep the auth servers going.