

Yet nothing in the news, because Americans are all talk.
Chances are you wouldn’t do anything, just like all the other people who see masked kidnappers and do nothing. Looks like the rest of the world was right when they told Americans that their argument for keeping guns to prevent the government from going rogue was all talk.
Why are the first two ifs in infix notation while the third isn’t?
Interesting that Americans always argue that they need their guns in case the government goes rogue. So many comments say that they will get shot because they absolutely look like kidnappers, yet nothing happens. Looks like you keep those guns around just for a higher suicide rate and school shootings.
I didn’t say they have no knowledge - quite the opposite. Here’s a quote from the comment you answered:
LLMs are extremely knowledgeable (as in they “know” a lot) but are completely dumb.
There is a subtle difference between being intelligent and being knowledgeable. LLMs know a lot in the sense that they can remember a lot of things, but they are dumb in the sense that they are completely unable to draw conclusions and put that knowledge into action by any means other than spitting out what they once learned.
That’s why LLMs can tell you a lot about all the game theory around tic tac toe but can’t draw/win that game consistently.
So knowing a lot and still being dumb is not a contradiction.
But wouldn’t your point still be true today - that the best AI video models would be the ones that are not available to consumers?
Coding isn’t special, you are right, but it’s a thinking task and LLMs (including reasoning models) don’t know how to think. LLMs are knowledgeable because they memorized a lot of the data and patterns in their training data, but they didn’t learn to think from that. That’s why LLMs can’t replace humans.
That certainly doesn’t mean that software can’t be smarter than humans. It will be, it’s just a matter of time, but to get there we’ll likely need AGI first.
To see that LLMs can’t think, try playing ASCII tic tac toe (XXO) against all those models. They are completely dumb, even though they “saw” the entire Wikipedia article on how xxo works during training: that it’s a solved game, the different strategies, and how to consistently draw. Still they can’t do it. They lose most games against my four-year-old niece, and she doesn’t even play perfect xxo.
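As an aside, tic tac toe really is solved in the strongest sense: the whole game tree is tiny, and a plain minimax/negamax search plays it perfectly and never loses. A minimal sketch in Python (my own illustration, with the board as a 9-character string and “.” for empty):

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board (indices into a 9-char string).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best achievable outcome for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:               # the previous move ended the game
        return 1 if w == player else -1
    if "." not in board:
        return 0                    # board full: draw
    other = "O" if player == "X" else "X"
    # Negamax: my score is the negation of the opponent's best reply.
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, c in enumerate(board) if c == ".")

def best_move(board, player):
    other = "O" if player == "X" else "X"
    empty = [i for i, c in enumerate(board) if c == "."]
    return max(empty, key=lambda i: -value(board[:i] + player + board[i + 1:], other))

print(value("." * 9, "X"))          # 0 -> perfect play from both sides is always a draw
print(best_move("X........", "O"))  # 4 -> after a corner opening, only the centre reply avoids losing
```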
I wouldn’t trust anything that is claimed to do thinking tasks, yet can’t even beat my niece at xxo, to write firmware for cars or airplanes.
LLMs are great if used like search engines or interactive versions of Wikipedia/Stack Overflow. But they certainly can’t think - for now at least; we’ll likely need different architectures than LLMs for real thinking models.
I don’t see how that follows, because I did point out in another comment that they are very useful if used like search engines or an interactive version of Stack Overflow or Wikipedia.
LLMs are extremely knowledgeable (as in they “know” a lot) but are completely dumb.
If you want to anthropomorphise it, current LLMs are like a person who read the entire internet, remembered a lot of it, but is still too stupid to win/draw tic tac toe.
So there is value in LLMs, if you use them for their knowledge.
Totally agree with that, and I don’t think anybody would see it as controversial. LLMs are actually good at a lot of things, but not at thinking, and typically not if you are an expert. That’s why LLMs know more about human anatomy than I do, but probably not more than most people with a medical degree.
I can’t speak for Lemmy, but I’m personally not against LLMs and I also use them on a regular basis. As Pennomi said (and I totally agree), LLMs are a tool and we should use that tool for the things it’s good at. But “thinking” is not one of those things, and software engineering requires a ton of thinking. Of course there are things (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/IntelliSense, macros, and code snippets/templates can help with that, and I was never bottlenecked by my typing speed when writing software.
It was always the time I needed to plan the structure of the software, design good and correct abstractions and the overall architecture. Exactly the things LLMs can’t do.
Copilot even fails to stick to the coding style within the same file, just because it saw a different style more often during training.
There really isn’t any doubt that AI (especially AGI) will surpass humans on all thinking tasks, unless we have a mass extinction event first. But current LLMs are nowhere close to actual human intelligence.
Text that’s not code might also work.
A drill press (or its inventors) doesn’t claim it can do that, but with LLMs they claim to replace humans on a lot of thinking tasks. They even brag about test benchmarks, claim Bachelor’s, Master’s and PhD level intelligence, and call them “reasoning” models - yet they still fail to beat my niece at tic tac toe, and she, by the way, doesn’t have a PhD in anything 🤣
LLMs are typically good at things that appeared a lot during training. If you are writing software, there certainly are things the LLM saw a lot of during training. But this is actually the biggest problem: it will happily generate code that might look ok, even during PR review, but might blow up in your face a few weeks later.
If they can’t handle things they did see during training (but only sparsely, like tic tac toe), they won’t produce code you should use in production. I wouldn’t trust any junior dev who doesn’t put their O right next to the two Xs.
I don’t think it’s cherry picking. Why would I trust a tool with way more complex logic when it can’t even prevent three crosses in a row? Writing pretty much any software that does more than render a few buttons requires a lot of planning and thinking, and those models clearly don’t have the capability to plan and think when they lose tic tac toe games.
Play ASCII tic tac toe against 4o a few times. A model that can’t even draw a tic tac toe game consistently shouldn’t write production code.
Practically all LLMs are bad at any kind of logic. Try playing ASCII tic tac toe against one. All GPT models lost against my four-year-old niece, and I wouldn’t trust her to write production code 🤣
Once a single model (it doesn’t have to be an LLM) can beat Stockfish at chess, AlphaGo at Go, and my niece at tic tac toe, and can one-shot (on the surface, scratch-pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.
Just take a look at the dotnet runtime source code, where Microsoft employees currently try to work with Copilot, which opens PRs with errors like forgetting to add files to projects, writing code that doesn’t compile, fixing symptoms instead of the underlying problems, etc. (just take a look yourself).
I’m not saying that AI (especially AGI) can’t replace humans. It definitely can and will, it’s just a matter of time. But state-of-the-art LLMs are basically just extremely good “search engines” or interactive versions of “Stack Overflow” - not good enough to do real “thinking tasks”.
Stop using timezones? Then every day would actually span two weekdays, because at some point the date would switch in the middle of the day. “Let’s meet next Monday” wouldn’t even specify a single day anymore in most countries. And there is no real benefit to dropping timezones, just downsides. Yes, you’d know what time it is anywhere, but you still wouldn’t know whether people there are awake, and you’d have to either look it up or remember it - the same thing you have to do now.
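To make the weekday problem concrete: one and the same instant already falls on two different weekdays depending on where you are - abolishing timezones just moves that date flip into the middle of everyone’s waking day. A quick sketch with Python’s standard zoneinfo module:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# One single instant, written in universal time (2024-06-10 was a Monday):
instant = datetime(2024, 6, 10, 23, 30, tzinfo=ZoneInfo("UTC"))

# With timezones abolished, everyone would read the first clock, but the
# calendar date would still flip in the middle of local waking hours:
print(instant.strftime("%A %H:%M"))                                      # Monday 23:30
print(instant.astimezone(ZoneInfo("Asia/Tokyo")).strftime("%A %H:%M"))   # Tuesday 08:30
```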
Shut that section down and ground the wires. Not really that dangerous. It’s only dangerous if you don’t follow protocol.
That’s not why JS is a big pile of crap. It’s because the language was not thought through at the beginning (I don’t blame the inventors for that), and because of the web it spread like wildfire and only backwards-compatible changes could be made. Even with all your points in mind, the language could be way nicer. My guess is that once wasm/wasi is integrated enough to run websites without JS (DOM access, etc.), JS will be like Fortran, Cobol and telefax: not going away any time soon, but practically obsolete.
That’s what I meant when I said you wouldn’t do anything. First, you’d still have other options, and if ICE is at your door and you start shooting at them, it will probably just get you killed faster.
That was not my intention. I totally get why nobody is doing anything, especially not with guns, because of the consequences. I just pointed out that so many people in favor of armed civilians bring completely flawed arguments, and now we actually see in action how useless a “regulated militia” is.
Why would I? I don’t have a gun and never argued in favor of armed civilians, because it’s completely useless (and research shows that it only increases the chances of domestic violence and suicide). Maybe it wasn’t useless back when the constitution was written, but the current situation shows that it is now. Besides, I live in Vienna.