while(true){💩};

  • 0 Posts
  • 245 Comments
Joined 2 years ago
Cake day: June 11th, 2023


  • You missed 3 times in a row.

    1. The 30% cut thing has been industry standard since the dawn of time. Valve goes out of its way to make exceptions to this rule, down to 20% in cases of very high volume, but everyone only talks about the 30 since that’s all they hear about. Only an Epic Games apologist would parrot this as a talking point. Plus, developers are not getting nothing for that 30%, especially games that use Valve’s Steam networking services. Unlike Microsoft and Sony, who also take 30% cuts, Valve doesn’t charge $10,000 per game patch to have someone review and approve it to be published.

    2. The regional pricing goes both ways. There was literally a game recently that users were complaining about NOT getting because the publisher opted out of regional pricing or something; regional pricing would have made the game affordable, but in USD (Valve’s country of origin and therefore the default) it was exorbitantly priced. And this one wasn’t even Valve’s fault.

    3. Valve did not censor games directly at the behest of the Australian nutjobs; they fought back against them pretty hard. But Valve is ultimately beholden to the payment processors (who they also pushed back on). Once Visa and MasterCard started threatening to pull services, Valve was put in a “comply or die” situation. If they didn’t do as they were told, they wouldn’t be able to accept money with anything but Stripe or Bitcoin. They literally lost PayPal as a payment option over this fight.

    I think it’s very dishonest of you to frame these points as enshittification. That term means the intentional degradation of a product or service for the sole motive of increasing profits. For point 1, the whole industry literally started off like that. For point 2, it was literally an attempt at equity (Valve may not get the deltas correct, but in some countries they’re losing money on games). And for point 3, you might be able to argue it, but ultimately it wasn’t for profits so much as it was survival.

    If you wanted to shitsling at Valve, you should have mentioned how Valve invented lootboxes in TF2 and then exacerbated the issue in CS:GO/CS2, releasing that awful plague onto the industry.

  • My argument is incredibly simple:

    YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated and complex. Magic and mystic Unknowables do not exist. Therefore, at some point in time, it is a physical possibility for a person (or team of people) to replicate these exact mechanisms.

    We do not yet understand enough about them to do this. YOU are so laser-focused on how a Large Language Model behaves that you cannot take a step back and look at the bigger picture. Stop thinking about LLMs specifically. Neural-network artificial intelligence comes in many forms. Many are domain-specific, such as molecular analysis for scientific research. The AI of tomorrow will likely behave very differently from those of today, and may require hardware breakthroughs to accomplish (I don’t know that x86_64 or ARM instruction sets are sufficient or efficient enough for this process). But regardless of how it happens, you need to understand that because YOU exist, you are the prime reason it is not impossible or even unfeasible to accomplish.


  • This argument feels extremely hand-wavey and falls prey to the classic problem of “we only know about X and Y that exist today, therefore nothing on this topic will ever change!”

    You also limit yourself when sticking strictly to narrow thought experiments like the Chinese room.

    If you consider that the human brain, which is made up of nigh-innumerable smaller domain-specific neural nets combined with the frontal lobe, has consciousness, that absolutely means it is physically possible to replicate this process by other means.

    We noticed how birds fly and made airplanes. It took many, MANY iterations that seem excessively flawed by today’s standards, but they were stepping stones to achieving a world-changing new technology.

    LLMs today are like da Vinci’s corkscrew flying machine. They’re clunky; they technically perform something resembling the end goal, but ultimately they fail, in part or in whole, at the task they were built for.

    But then the Wright brothers happened.

    Whether sentient AI will be a good thing or not is something we will have to wait and see. I strongly suspect it won’t be.


    EDIT: A few other points I wanted to dive into (will add more as they come to mind):

    AI derangement or psychosis is a term meant to refer to people forming incredibly unhealthy relationships with AI, to the point where they stop seeing its shortcomings. But I am noticing more and more that people are starting to throw it around the way the “Trump Derangement Syndrome” term gets thrown around, and that’s not okay.