• 0 Posts
  • 38 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Changes the torque and the application of said torque for each bolt. As in "tool head has 5° of give until in place, then it ramps torque to 5 N·m over half a second, holds for 1 second, and then ramps to zero over 0.1 seconds", and then something different for the next bolt. Then it logs that it did this for each bolt.
    The tool can also be used to measure and correct the bolts as part of an inspection phase, and log the results of that inspection.
    Finally, it tracks usage of the tool and can log that it needs maintenance or isn’t working correctly even if it’s just a subtle failure.
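That ramp-hold-ramp behavior can be sketched as a simple piecewise function. This is purely an illustration: the 5 N·m target and the timings come from the comment above, while the function itself and its structure are assumptions, not the tool vendor's actual control code.

```python
def torque_at(t: float) -> float:
    """Hypothetical torque profile for one bolt: ramp to 5 N·m over
    0.5 s, hold for 1 s, then ramp back to zero over 0.1 s."""
    peak = 5.0                            # target torque in N·m
    ramp_up, hold, ramp_down = 0.5, 1.0, 0.1

    if t < 0:
        return 0.0
    if t < ramp_up:                       # linear ramp up to peak
        return peak * t / ramp_up
    if t < ramp_up + hold:                # hold at peak
        return peak
    if t < ramp_up + hold + ramp_down:    # linear ramp down to zero
        return peak * (1 - (t - ramp_up - hold) / ramp_down)
    return 0.0
```

A real tool would run a closed-loop controller against measured torque rather than an open-loop time schedule, but the logged profile would look much like this.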



  • Look at their actions, not their words specifically.

    It’s a culture where being unkind is particularly unacceptable, not specifically where you’re not allowed to be honest or forthright.

    You’re allowed to not like someone, but telling someone you dislike them is needlessly unkind, so you just politely decline to interact with them.
    You’d “hate to intrude”, or “be a bother”. If it’s pushed, you’ll “consider it and let them know”.

    Negative things just have to be conveyed in the kindest way possible, not that they can’t be conveyed.


  • Brian Acton is the only billionaire I can think of that hasn’t been a net negative.

    Co-founded WhatsApp, which became popular with few employees. Sold the service at a reasonable rate.
    Sold the business for a stupid large sum of money, and generously compensated employees as part of the buyout.
    Left the buying company, Facebook, rather than do actions he considered unethical, at great personal expense ($800M).

    Proceeded to co-found Signal, an open, privacy-focused messaging system that he has basically bankrolled while it finds financial stability.

    He also has been steadily giving away most of his money to charitable causes.

    Billionaires are bad because they get that way by exploiting some combination of workers, customers or society.
    In the extremely unlikely circumstance where a handful of people make something fairly priced that nearly everybody wants, and then use the wealth for good, there's nothing intrinsically wrong with being one of those people.
    Selling messaging to a few billion people for $1 a lifetime is a way to do that.





  • I believe their point was that even encrypted messages convey data. So if you have a record of all the encrypted messages, you can still tell who was talking, when they were talking, and approximately how much they said, even if you can’t read the messages.

    If you wait until someone is gone and then loudly raid their house, you don’t need to read their messages to guess the content of what they send to people as soon as they find out. Now you know who else you want to target, despite not being able to read a single message.

    This type of metadata analysis is able to reveal a lot about what’s being communicated. It’s why private communication should be ephemeral, so that only what’s directly intercepted can be scrutinized.
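As a toy illustration of that metadata analysis: even when every payload is opaque ciphertext, the envelope records alone reconstruct the social graph. The intercept data below is entirely made up, and the record format is an assumption for the sketch, not any real interception system's schema.

```python
from collections import Counter

# Hypothetical intercepted records: (sender, recipient, timestamp, bytes).
# The message bodies are encrypted and never read.
intercepts = [
    ("alice", "bob",   "2023-06-01T09:00", 1200),
    ("alice", "bob",   "2023-06-01T09:05",  800),
    ("bob",   "carol", "2023-06-01T09:06", 2400),
    ("alice", "carol", "2023-06-02T14:00",  300),
]

# Who talks to whom, how often, and how much -- no decryption required.
edges = Counter((src, dst) for src, dst, _, _ in intercepts)
volume = Counter()
for src, dst, _, size in intercepts:
    volume[(src, dst)] += size

for (src, dst), n in sorted(edges.items()):
    print(f"{src} -> {dst}: {n} messages, {volume[(src, dst)]} bytes")
```

Notice that bob messages carol one minute after hearing from alice: timing correlations like that are exactly how a "loud raid" lets an observer map out who warns whom.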



  • If you have an unutilized asset, there’s pressure to get rid of it for the cost savings.
    If you sell your asset at a loss, it looks bad for you and the company. Same for paying cancellation fees.

    If you legitimately think that you're going to need that space in the future, you might see selling now as the first step to having to buy again later, likely for more than you sold for. For example, you might expect that we'll find an equilibrium between "everyone works from the office" and where we are now, trending toward an organic level of office need/desire higher than today's. So you try to "mandate" the equilibrium you expect, so that you're never in the position of explaining why you're holding onto a dead property that's losing value.

    Executives spend a lot of time talking to people and having meetings. The job selects for people who thrive on and value face-to-face communication. Naturally, they overestimate how much that social aspect of the job applies to everyone else, so they estimate that the equilibrium will involve a lot more office time than other people would.
    To make it worse, the more power you have to influence that decision, the more likely you are to have a similar bias.

    This isn’t an excuse of course, since you can overcome that bias simply by telling teams to discuss what their ideal working arrangement would be, and then running a survey. Now you have data, and you can use it to try to scale offices to what you actually want.




  • To me it’s important to ask “what problem is it solving”, and “how did we solve that problem in the past”, and “what does it cost”.
    Cryptocurrency solves the problem of spending being tracked by a third party. We used to handle this by giving each other paper. The new way involves more time, and a stupendous amount of wasted electricity.
    NFTs solve the problem of owning a digital asset. We used to solve this by writing down who owned it. The cost is a longer time investment, and a stupendous amount of wasted electricity.
    Generative AI is solving the problem of creative content being hard to produce, and expensive. We used to solve this problem by paying people to make things for us, or not making things if we didn't have money. The cost is pissing off creatives.

    The first two feel like cases where the previous solution wasn’t really bad, and so the cost isn’t worth it.

    The generative AI case feels mixed, because pissing off creatives to make more profit feels shitty, but lowering barriers to entry to creativity doesn’t.


  • We should also ban long hair.

    I’m sure plenty of women only prefer to have long hair because they think they would be shunned or stand out if they cut it short.

    I’m all for people getting to wear their hair like they want, but I’m confident that many women would actually prefer to wear their hair short, and so can’t be trusted to make that choice for themselves or express an honest opinion about it.

    The first step in women’s liberation is making it clear that they lack agency and that other people know what’s best for them.



  • That’s an interesting perspective, regarding the question of not just “where you are”, but also “how you got there” being something that can factor into what you see as part of your identity.

    Closest thing I have is with weight gain and loss. When I get back down to my target weight, I’ll still have stretch marks that’ll show that I at one point was much larger. If I could just “be” my target weight without the physical evidence of the past, I’d opt for that path, so it’s interesting to me to consider that someone might take a different view. :)





  • I don’t think they work the same way, but I think they work in ways that are close enough in function that they can be treated the same for the purposes of this conversation.

    Pen and pencil are “the same”, and either of those and printed paper are “basically the same”.
    The relationship between a typical modern AI system and the human mind is like that between a pencil written document and a word document: entirely dissimilar in essentially every way, except for the central issue of the discussion, namely as a means to convey the written word.

    Both the human mind and a modern AI take in input data, and extract relationships and correlations from that data and store those patterns in a batched fashion with other data.
    Some data is stored with a lot of weight, which is why I can quote a movie at you, and the AI can produce a watermark: they’ve been used as inputs a lot. Likewise, the AI can’t perfectly recreate those watermarks and I can’t tell you every detail from the scene: only the important bits are extracted, and the less important details are too intermingled with data from other sources to be recovered with high fidelity.