When I was young and starting out with computers, programming, BBSes, and later the early internet, technology was something that expanded my mind, helped me research, learn new skills, and meet people and have interesting conversations. It was something decentralized that put power into the hands of the little guy, who could start his own business venture with his PC or expand his skillset.
Where we are now with AI, the opposite seems to be happening. We are asking AI to do things for us rather than learning how to do things ourselves. We are losing our research skills. Many people are talking to AIs about their problems instead of to other people. And AI will take away our jobs and centralize all power into a handful of billionaire sociopaths with robot armies to carry out whatever nefarious deeds they want to do.
I hope we somehow make it through this part of history with some semblance of freedom and autonomy intact, but I’m having a hard time seeing how.
AI pisses me off so much every day. It has made people so dumb and reliant on it. When I was a kid I didn’t have school or opportunities, just a computer. That computer expanded my universe and gave me the education I craved. I learned to fix computers both physically and virtually to eventually claw my way to my current software engineering career. All completely self-taught with ZERO education. When I say zero, I mean the last grade of school I graduated was 6th grade, in 2001. I am still learning every day despite the AI bullshit. Especially these past 6 months: I decided to upgrade my personal computer that was running Windows 7 (lmao 🔥) to a newly built machine now running Linux.
I refuse to use AI. I have proven over and over to my coworkers that AI makes our job worse. My coworkers create so many mistakes with AI, and then they ask me to help because they don’t quite understand what the AI has written for them. AI is a plague on our intelligence. I hope it completely fails (it probably won’t 😖)
As far as I’m concerned, the generative AI that we see in chatbots has no goal associated with it: it just exists for no purpose at all. In contrast, Google Translate and other translation apps (which BTW still use machine learning algorithms) have a far more practical use: translating other languages in real time. I don’t care what companies call it (a tool or not); at the moment it’s a big fucking turd that AI companies are trying to force-feed down our fucking throats.
You also see this tech slop happening historically in the evolution of search engines. Way back, before we had recommendation algorithms in most modern search engines, a search engine was basically a database where the user had to thoughtfully word their queries to get good results. Then came the recommendation algorithm, and I can only imagine that no one, literally no one, cared about it, since we could already do the things the algorithm offered to solve. Still, it was pushed, and sooner rather than later it was integrated into most popular search engines. Now you see the same thing happening with generative AI…
The purpose of generative AI, much like the recommendation algorithm, is to solve nothing; hence the analogy “it’s just a big fucking turd” I’m trying to get across here: we could already do the things it offered to solve. If you can see the pattern, it’s just this downward-spiraling effect. It appeals to anti-intellectuals (which is most of the US at this point), and Google and other major companies are making record profit by selling user data to brokers: it’s a win for both parties.
This is how I felt about it a year ago. But it has gotten so much better since then. It automates a lot of time-consuming tasks for me now. I mean, I’ve probably only saved 100 hours using it this year, but the number is going up rapidly. It’s 100 more than it saved me last year.
it creates perceived shareholder value of an emerging market. that is its purpose.
its utility is not for the end-user. it’s something for shareholders to invest in, and for companies to push in an attempt to generate shareholder interest. It’s to raise the stock price.
And like all speculative assets… nobody will care about the returns on it, until they do. And once those returns don’t materialize… poof goes the market.
Just like they did with all the speculative investment bubbles based on insane theories.
Computers can and still do all that; you just need some mental discipline to avoid the cognitive equivalent of fast food being forced into your attention via AI slop and social media demagogues over corporate-owned messaging systems.
But how many people are actually doing that? I reckon most people (myself included) don’t realise the extent of the influence social media and other media outlets have on them, let alone act on that knowledge.
The LLM is absolutely not doing any thinking for you. It can, at best, surface someone else’s thinking based on a prompt.
Anyone who confuses what these things do with thinking is on a path towards psychosis.
Every 4 hours spent talking to one of these things is indistinguishable from talking to oneself for 40 hours. It amplifies one’s inner thoughts in ways that previously only a schizophrenic was able to enjoy.
It absolutely can replace hours of research or programming or drawing with a quick prompt. It does this for me often, and as of the latest Gemini it’s pretty much always right, too.
Switching to Linux a few years ago gave me (at least part of) that feeling back
I’ve been using Linux steadily for the last 30 years, and yes it’s still great. But doesn’t really fill the niche that AI does.
Same!! Except it’s been about 6 months for me :)
Librarian here, can confirm.
I started my Master’s in Library and Information Science in 2010. We were told not to worry about the internet making us obsolete because we would be needed to teach information literacy.
Information literacy turned out to be something people didn’t want. They wanted to be told what to think, not taught skills to think for themselves.
It’s been the single greatest and most expensive disappointment of my life.
They wanted to be told what to think, not taught skills to think for themselves.
This must be one of the wisest statements I ever read on the internet.
How does one go about learning information literacy?
classes in philosophy, literature, politics, and digital media. typically.
you know, those evil humanities that are destroying society… because they don’t produce ‘value’.
Rhetoric is a big one too, not just to use but to be able to identify when it’s being used to manipulate you
Needing a master’s for $18/hr sucks too.
if people don’t want to use computers to expand their minds and empower themselves and others, then obviously they won’t get those benefits
you can still use computers to do those things
AI isn’t the only thing you can use a computer for now. If you ignore AI and corporate software, there’s loads of mind expanding activities in computing.
Take a look at what you can self host with commodity hardware (barring the insane RAM prices right now).
I do lots of self hosting. But the issue is not what I will do, but what the world will do, and what we will be forced to do by our employers and by the pressure to work at an efficiency only possible with AI doing a lot of the work.
Any good thing will inevitably be corrupted by capitalism, because that is what capitalism does. It is a cancer, and it will consume everything and us all in the process.
I don’t know if it was in Stross’s “Accelerando” that humanity told an AI to solve some complex problem at any cost, and the AI promptly turned all the matter in the solar system into a supercomputer capable of solving it.
That’s capitalism in a nutshell: “do profit” is the only imperative, and it will destroy everything, just like a cancer is predicated upon “do growth”, forever, at any cost, regardless of whether the host organism dies.
I haven’t read Stross in so long. I need to go back and reread them. His books were always a lot of fun.
With AI, now it does the thinking for you […]
No, it doesn’t. It’s just mimicry. Autocomplete on steroids.
My father is convinced that humans and dinosaurs coexisted and told me that AI proved it to him. So… people do let it think for them.
So he lets the “AI” do the hallucinating for him.
Yep lol.
Have you met many people?
Most people’s entire lives are a form of autocomplete.
Obvious non-argument is obvious.
This was true last year. But they are cranking through the ARC-AGI benchmarks, which were designed specifically to test the kinds of things that cannot be done by just regurgitating training data.
On GPT-3 I was getting a lot of hallucinations and wrong answers. On the current version of Gemini, I really haven’t been able to detect any errors in the things I’ve asked it. They are doing math correctly now, researching things well, and putting together thoughts correctly. Even photos that I couldn’t get old models to generate now come back pretty much exactly as I ask.
I was sort of holding out hope that LLMs would peak somewhere just below being really useful. But with RAG and agentic approaches, it seems that they will sidestep the vast majority of the problems that LLMs have on their own and be able to put together something that is better than even very good humans at most tasks.
I hope I’m wrong, but it’s getting pretty hard to bank on the old narrative that they’re just fancy autocomplete that can’t think.
I’m pleased to inform you that you are wrong.
A large language model works by predicting the statistically likely next token in a string of tokens, and repeating until it’s statistically likely that the response has finished.
You can think of a token as a word, but in reality tokens can be individual characters, parts of words, whole words, or multiple words in sequence.
The only addition these “agentic” models have is special-purpose tokens. One that means “launch program”, for example.
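To make that concrete, here’s a minimal sketch of that loop in Python. The `model` here is a hypothetical stand-in that just returns one probability per vocabulary entry; `eos_id` is an assumed “response finished” token, and a tool-launch token would simply be another reserved id checked the same way.

```python
import random

def generate(model, prompt_tokens, eos_id, max_new=256):
    """Greedy decoding: repeatedly append the most likely next token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        probs = model(tokens)  # one probability per vocabulary entry
        next_id = max(range(len(probs)), key=probs.__getitem__)  # argmax
        tokens.append(next_id)
        if next_id == eos_id:  # model predicts "response finished"
            break
    return tokens

# Toy demo with a fake 4-token vocabulary, just to show the loop runs:
fake_model = lambda toks: [random.random() for _ in range(4)]
print(generate(fake_model, [0, 1], eos_id=3))
```

Real models sample from the distribution instead of always taking the argmax, but the outer loop is exactly this.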
That’s literally how it works.
AI. Cannot. Think.
…And what about non-LLM models like diffusion models, VL-JEPA, SSMs, VLAs, SNNs? Just because you are ignorant of what’s happening in the industry and are repeating a narrative that worked 2 years ago doesn’t make it true.
And even with LLMs: even if they aren’t “thinking” but produce results as good as or better than real human “thinking” in major domains, does it even matter? The fact is that there will be many types of models working in very different ways, and together they will beat humans at tasks that are uniquely human.
Go learn about ARC-AGI and see the progress being made there. Yes, it will take a few more iterations of the benchmark to really challenge humans at the most human tasks, but at the rate they are going that’s only a few years.
Or just stay ignorant and keep repeating your little mantra so that you feel okay. It won’t change what actually happens.
Yeah, those also can’t think, and that will not change soon
The real problem though is not whether an LLM can think or not; it’s that people will interact with it as if it can, and will let it do the decision-making even if it’s not far from throwing dice
deleted by creator
We don’t even know what “thinking” really is, so that is just semantics. If it performs as well as or better than humans at certain tasks, it really doesn’t matter if it’s “thinking” or not.
I don’t think people primarily want to use it for decision-making anyway. For me it just turbocharges research: it compiles stuff quickly from many sources, writes code for small modules quite well, generates images for presentations, does more complex data munging from spreadsheets, and even saved me a bunch of time taking a 50-page handwritten ledger and near-perfectly converting it to Excel…
None of that requires decision-making, but it saves a bunch of time. Honestly, I’ve never asked it to make a decision, so I have no idea how it would perform… I suspect it would more describe the pros and cons than actually try to decide something.
That’s a lot of bullshit.
this bubble can’t pop soon enough
was dotcom this annoying too?
was dotcom this annoying too?
Surprisingly, it was not this annoying.
It was very annoying, but at least there was an end in sight, and some of it was useful.
We all knew that http://www.only-socks-and-only-for-cats.com/ was going away, but eBay was still pretty great.
In contrast, we’re all standing around today looking at many times the world’s GDP being bet on a pretty good autocomplete algorithm waking up and becoming fully sentient.
It feels like a different level of irrational.
The dot-com bubble was optimistic; the AI bubble is pessimistic. People thought their lives would improve due to improved communication and efficiency. The internet was seen as a positive thing. The dot-com bubble was more about monetizing it, but that wasn’t the zeitgeist. With AI, people don’t see much benefit and are aware its purpose is to take their jobs.
With the dot-com bubble, it was mainly mom-and-pop investors who were worst off, though many companies died. With the AI bubble, it seems like it’s the companies that will fare worst when it crashes. Obviously it affects everyone, but this skews more to the 1%. So hopefully it’s a lesson on greed. Unlikely though.
To me, this is more annoying. But I might have been too young and naïve back then.
If you can’t see it you’re not paying attention.
If you’re seeing it, you’re delusional.
Thank you…
Apparently, I’ve never considered any LLM serious, convenient, or appropriate (similar to a Markov chain, though those differ), and since 2021 (and some in 2023) I disable and uninstall absolutely everything that is LLM-related or that alters human effortful work, including programming suggestions, search, and any kind of adequate research, in my personal life and everywhere possible. I had a few months of experimenting with those, enough to decide that the time I still have should go to actually learning, discovering, and staying social as much as I can…
Please… Please, in the context of such education… instead of investing your priceless, precious, finite life time… into such an empty void as the unknown output of an unknown LLM from an unknown dataset of unknown artists… developers… people… Please, instead, please consider taking your time… and try to see the love in someone else’s works, courses, videos, books, articles, schemes, tables, drawings… someone who would be genuinely delighted to know… to know that someone else, like you, like themselves… was reaching out for the experience they had been gaining for decades and worked hard to prepare for someone out there… in search… for someone who wishes and tries to create something, to improve the world… to reach for an achievement… to treasure a goal… to invent a miracle…
For isn’t the following the miraculous purpose of living and contributing to the infinite world? To gain experience by confident, adequate effort, to work towards achievements, to stay responsible as a human, to stay alive… Which is at least: personal contributions published, social interactions, actually felt and considered facts organized by accountable people, self-confidence, and the miraculous time you invest into learning from the human experience published in marvelous works of books, articles, videos, forums, chats - the ineffable magnificence…
There are uses for LLMs, including pentesting, medicine, and analysis of the unknown and random for the sake of random in “black-box” scopes, for example, sure, but only very rarely, and the fear of malformed facts, unknown sources, disturbed art… will always shadow any presence of such generative technologies, I believe… Yet shouldn’t technology support you, your mind (i.e. not atrophy it but train and discipline it), your creativity, your ideas, your… existence?
For isn’t learning from someone else’s experience actually important… Isn’t it ineffably magnificent to discover someone’s hard work… Isn’t the process of learning and discovering actually fun!
Isn’t the knowledge that you uniquely carry valuable… What is the fun, the purpose, otherwise…? Please consider your confidence, skills, mind, and… your precious time…
“If you’re not paying for the product, then you are the product.” ~ Tristan Harris
“Machines should work; people should think.” ~ IBM Pollyanna Principle
I don’t think AI is taking jobs, I think dumbass execs use it as an excuse to fire people though.
No, it’s part of these companies’ business strategy. These tech companies fire an unprecedented number of employees (primarily from the mass hiring during 2020), make a post that they fired those employees because of AI improvements, see their stock price rise, ultimately inflating it and creating an economic bubble, and rinse and repeat with the next wave of potential hires who are sucking their employers’ dicks a little too hard.
It’s unethical, it violates any and all job security, and I don’t want to be a part of that toxic workplace. It’s ironic I’m saying this, because a few years ago if I had gotten a job at Google I would have said “fuck yea mother fucker count me in”, and now I just don’t want to work for them. There are far better companies doing interesting and valuable work to benefit society than these hipster douchebags.
It’s definitely taking some jobs. Not a huge amount yet, but it’s unfortunately still getting better at a pretty good clip.
Maybe, which jobs though?
Graphic artists, translators, and copywriters are losing jobs in droves. It’s expanding. I sell contact center software and it’s just kicking off in my industry, but it’s picking up.
Yeah, I can see it happening there, especially for graphic artists (though actual graphic design is still much better than anything a model can currently spit out). Translation is surprising to me, because in my experience LLMs are actually kind of bad at real translation, especially at sounding natural in the local dialect. So I might consider that one a case of dumb bosses who don’t know any better.
I’m a DevOps engineer, and dumb bosses are absolutely firing people in my industry. However, our products have suffered the consequences, and they continue to get worse and less maintainable.
As someone who uses machine translation on a daily basis, I’ve watched it go from barely usable to as good as human translation for most tasks. It’s really uncommon that I find issues with it any more. And even if there is one issue in 1,000 words or whatever, you can just have a human proofread it instead of translating the whole thing; it will reduce your headcount by 90%. But I think for most things, no one calls translators any more, they just go to Google Translate. Translators now only do realtime voice translation, not documents, which used to be most of their work.
These things creep up on you. They aren’t good and you get comfortable that they don’t work that well, and then over time they start working as well or better than humans and suddenly there’s really no reason to employ someone for it.
Yep, it’s definitely gotten worse. That’s why I just live as if it’s 2005 in my house xD
It’s a real dilemma unfortunately. On one hand if you don’t get used to using it you will be at a massive disadvantage in whatever’s left of a job market in the future. On the other hand if you do get used to using it you will likely be atrophying parts of your brain and giving money to exactly the machine that will destroy us.
Bud… they said the same thing about computers when I was a kid in the 70s.
I certainly don’t remember that. And I was there.
I was certainly there, and I do… this is from a Google search:
Key Themes and Examples from the Era
Concerns about automation and job displacement by computers were widely documented, particularly as computer technology became smaller, cheaper, and more integrated into various industries, from manufacturing floors to office settings.
- Manufacturing and “Blue-Collar” Jobs: The introduction of computer numerical control (CNC) machinery led to a 24% drop in employment for high school dropouts in the metal manufacturing industry, fueling concerns about job security for skilled factory workers in the “Rust Belt”.
- Office and “White-Collar” Jobs: White-collar workers also felt unease. Innovations like the automated teller machine (ATM) threatened bank tellers, while photocopiers were viewed with suspicion by some in publishing. The transition to computers on every desk in the late 70s and early 80s initially led to the firing of secretarial pools, forcing others (often men) to learn typing and computer skills.
- Media Coverage and Public Discourse: The topic was covered by major publications.
  - In 1965, Time Magazine ran a cover story on “the computer in society,” which included a prediction of shorter workweeks due to automation.
  - In the UK, Prime Minister James Callaghan requested a think tank to investigate the potential impact of new technologies on employment.
  - The term “job killer computer” was a popular slogan expressing the fear of technological unemployment.
I’ll tell ChatGPT to analyze your prompt. Can you give me a summary in the meantime?
Is this an AI summary
…yeah…
I overall like AI, but it’s not great for making this type of argument, because it doesn’t offer anyone anything they can really use to update their beliefs about what’s true. Any of the factual claims there could be hallucinated, and most are only tangentially relevant to the question of how strong the parallels are between attitudes toward computers 50 years ago and attitudes toward AI now. If someone wants to seriously consider the question, it isn’t useful.
A better way to do it is to use it like a search engine to find relevant citeable information and then make your own case for its relevance. Or maybe in this case just some personal anecdotes would work pretty well; you’re claiming personal experience as your main source here, and I kind of wanted to hear more about it, having not been there.
lol way to prove a point
fucking fuck
Are you under the impression that before AI, there were people preparing search responses?
An automated thing got replaced by an automated thing.
Whatever.
Well sure, every new technology replaces jobs to some extent, but that wasn’t my primary thesis.
My primary thesis is that it is disempowering us, and centralizing power in a handful of billionaires. Personal computers in those days were empowering to the individual, whereas AI is empowering only for a handful of billionaires and disempowering for most other people.
I don’t remember anyone complaining back then that personal computers were taking their power and autonomy away and giving it to billionaires.
This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. - Plato on the invention of writing in The Phaedrus
Every notable invention associated with language (and communication in general) has elicited similar reactions. And I don’t think Plato is wholly wrong, here. With each level of abstraction from the oral tradition, the social landscape of meaning is further externalized. That doesn’t mean the personal landscape of meaning must be. AI only does the thinking for you if that’s what you use it for. But I do fear that that’s exactly what it will largely be used for. These technologies have been coming fast since radio, and it doesn’t seem like society has the time to adapt to one before the next.
There’s a relevant Nature article that touches on some/most of this.
I see these thought-terminating cliches everywhere, and nowhere do their posters pause a moment to consider the specifics of the actual technology involved. The people forewarning about this stuff were correct about, for instance, social media, but who cares, because Plato wasn’t a fan of writing, we rode horses before cars, or the term Luddite exists… etc., etc.
I talked about the way in which Plato’s concerns were valid and expressed similar fears about misuse. The linked article is about how to approach the specific technology.
You didn’t say his concerns were valid. You said you thought he was not “wholly wrong”. Regardless, Plato being a crank about writing proves only that cranks existed before writing. It does nothing to interrogate, nor set you down the path to interrogating, the problems mentioned (which is why I categorized it as a thought-terminating cliche).
Your referenced article is basically a long-form version of your post, which has a perceivable bias toward the viewpoint that every newly-introduced technology can or will inevitably result in “progress” for humanity as a whole regardless of the methods of implementation or the incentives in the technology itself.
Far from being an instance of skub (https://pbfcomics.com/comics/skub/) as trumpeting this perspective – perhaps unknowingly – implies that it is (i.e. an agnostic technology / inanimate object that “two sides” are getting emotionally charged about), LLMs (and their “agentic” offspring) are both deliberately and unwittingly programmed to be biased. There are real concerns about this particular set of technologies that posting a quote from an ancient tome does not dismiss.
LLMs are both deliberately and unwittingly programmed to be biased.
I mean, it sounds like you’re mirroring the paper’s sentiments too. A big part of Clark’s point is that interactions between humans and generative AI need to take into account the biases of the human and the AI.
The lesson is that it is the detailed shape of each specific human-AI coalition or interaction that matters. The social and technological factors that determine better or worse outcomes in this regard are not yet fully understood, and should be a major focus of new work in the field of human-AI interaction. […] We now need to become experts at estimating the likely reliability of a response given both the subject matter and our level of skill at orchestrating a series of prompts. We must also learn to adjust our levels of trust
And as I am not, Clark is not really calling Plato a crank. That’s not the point of using the quote.
And yet, perhaps there was an element of truth even in the worries raised in the Phaedrus. […] Empirical studies have shown that the use of online search can lead people to judge that they know more ‘in the biological brain’ than they actually do, and can make people over-estimate how well they would perform under technologically unaided quiz conditions.
I don’t think anyone is claiming that new technology necessarily leads to progress that is good for humanity. It requires a great deal of honest effort for society to learn how to use a new technology wisely, every time.
Unless you were a hard GNU fan when you were a kid, it was the same process of giving power to billionaires. Just that now it sits on 50 years of wins for the billionaires’ side. So it’s closer to the endgame.
I’ve been a GNU fan since 1995. And yes, while buying software did make some billionaires, I never felt like it was taking away my abilities or autonomy or freedom until now. Back then I felt like it was giving me more of those things.
Is it possible you were just more naive back then?
I don’t know. Looking back, I don’t think I gave up my abilities or allowed billionaires to replace me by using tech until LLMs came along.
If AI can even half-ass your job, you barely had one to begin with. All us healthcare workers and the tradies are still making a half-decent wage for real work, just like we always have. And the food service and sanitation workers still aren’t doing the absolute best, but they’re not hurting for work either. I’m not going to tell you I like the way my work is valued under capitalism, but at least I’m tangibly benefitting other humans.
The future of healthcare workers? This footage is real-time autonomous movement. Not sped up, not teleoperated.
The fact that you think I clean houses only serves to prove how little you understand what I do.
Maybe you don’t, but I have a father in assisted living and know for a fact that there are an awful lot of nursing jobs that don’t look particularly different from this. AI will start with the hardest diagnostic tasks first, and at some point start doing the easiest physical ones. Then it will gradually eat away at the stuff in the middle. This is one of the areas where non-human labor is most needed, so it will be one of the most heavily focused on.
I don’t think it’s fair to say that just because you were a commercial graphic designer or translator or copywriter, you were doing bullshit work that was barely worth being called work.
Yes, healthcare is a very commendable line of work, no doubt, but we will see radiologists out of work fairly soon IMO, as well as anyone who interprets lab results, and very likely those who make diagnoses of all types. These are all things that AI will likely be doing better, if it isn’t already.
Physical care will take longer and won’t be replaced until we have AI robots, but the gains there are happening fast too. We may only have another decade or so until we see a lot of that stuff being automated. It’s really hard to tell how fast this will all happen. Things do tend to happen slower than the hype around them, but the progress happening every year is pretty staggering if you are really tracking it. I’d love to think that my job, which mostly requires creative ways of dealing with people and negotiation, is safe for some time, but I’m really doubting I can make it the next 12 years I need until retirement without some disruption.