I don’t think that casting a range of bits as some other arbitrary type “is a bug nobody sees coming”.

C++ compilers also warn you that this is likely an issue and will fail to compile if configured to do so. But it will let you do it if you really want to.
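For instance, the kind of bit reinterpretation in question, sketched both ways (a minimal example; the commented-out cast compiles with at most a warning, while memcpy, or std::bit_cast in C++20, expresses the same intent without the undefined behaviour):

#include <cstdint>
#include <cstring>

// Reinterpret the bit pattern of a float as an unsigned integer.
std::uint32_t bits_of(float f) {
    // The classic cast compiles, but reading a float through a uint32_t*
    // violates strict aliasing and is formally undefined behaviour:
    //   return *reinterpret_cast<const std::uint32_t*>(&f);

    // The sanctioned equivalent:
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}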

That’s why I love C++

  • LillyPip@lemmy.ca · ↑6 · 16 hours ago

    Why use a strongly typed language at all, then?

    Sounds unnecessarily restrictive, right? Just cast whatever as whatever and let future devs sort it out.

    $myConstant = ‘15’;
    $myOtherConstant = getDateTime();
    $buggyShit = $myConstant + $myOtherConstant;

    Fuck everyone who comes after me for the next 20 years.

  • Opisek@lemmy.world · ↑2 · 16 hours ago

    The problem is that it’s undefined behavior. Quake’s fast inverse square root only works because the types happen to look that way, because the floats happen to have that bit arrangement. It could look very different on other machines! Never mind that in practice it’s essentially always exactly the same on modern architectures. So yeah. Undefined behavior is there to keep your code usable even if our assumptions about types and memory change completely one day.
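
    For reference, the trick in question looks roughly like this (quoted from memory; it only behaves as intended where long is 32 bits and float is IEEE-754, which is exactly the kind of assumption being talked about):

    float Q_rsqrt(float number) {
        float x2 = number * 0.5F;
        float y  = number;
        long  i  = *(long *)&y;           // read the float's bits as an integer (formally UB)
        i = 0x5f3759df - (i >> 1);        // magic-constant first guess
        y = *(float *)&i;                 // reinterpret the adjusted bits as a float again
        y = y * (1.5F - (x2 * y * y));    // one Newton-Raphson step
        return y;
    }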

  • merc@sh.itjust.works · ↑43 ↓1 · 2 days ago

    “C++ compilers also warn you…”

    Ok, quick question here for people who work in C++ with other people (not personal projects). How many warnings does the code produce when it’s compiled?

    I’ve written a little bit of C++ decades ago, and since then I’ve worked alongside devs who worked on C++ projects. I’ve never seen a codebase that didn’t produce hundreds if not thousands of lines of warnings when compiling.

    • Ajen@sh.itjust.works · ↑6 · 19 hours ago

      My team uses the -Werror flag, so our code won’t compile if there are any warnings at all.

      • Phoenixz@lemmy.ca · ↑4 · 21 hours ago

        Neither should your development code, except for the part where you’re working on.

    • Zacryon@feddit.org · ↑25 · 2 days ago

      I mostly see warnings when compiling source code of other projects. If you get a warning as a dev, it’s your responsibility to deal with it. But also your risk, if you don’t. I made it a habit to fix every warning in my own projects. For prototyping I might ignore them temporarily. Some types of warnings are unavoidable sometimes.

      If you want to make yourself not ignore warnings, you can compile with -Werror if using GCC/G++ to make the compiler a pedantic asshole that doesn’t compile until you fix every fucking warning. Not advisable for drafting code, but definitely if you want to ship it.

      • Valmond@lemmy.world · ↑2 · 2 days ago

        Except when you have to cast size_t to int and vice versa (for “small” numbers). I hate that warning.
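
        The usual shape of it, as a sketch (assuming -Wconversion / -Wsign-conversion style warnings are enabled):

        #include <cstddef>
        #include <vector>

        int count_small(const std::vector<int>& v) {
            int n = static_cast<int>(v.size());            // explicit narrowing: fine for "small" sizes
            int hits = 0;
            for (int i = 0; i < n; ++i) {
                if (v[static_cast<std::size_t>(i)] < 10) { // ...and the cast back the other way
                    ++hits;
                }
            }
            return hits;
        }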

    • nroth@lemmy.world · ↑12 · 2 days ago

      0 in our case, but we are pretty strict. Same at the first place I worked too. Big tech companies.

    • jkercher@programming.dev · ↑18 · 2 days ago

      You shouldn’t have any warnings. They can be totally benign, but when you get used to seeing warnings, you will not see the one that does matter.

    • dejected_warp_core@lemmy.world · ↑9 · 2 days ago

      Ideally? Zero. I’m sure some teams require “warnings as errors” as a compiler setting for all work to pass muster.

      In reality, there’s going to be odd corner-cases where some non-type-safe stuff is needed, which will make your compiler unhappy. I’ve seen this a bunch in 3rd party library headers, sadly. So it ultimately doesn’t matter how good my code is.

      There’s also a shedload of legacy stuff going on a lot of the time, like having to let all warnings through because of the handful of places that will never be warning-free. IMO it’s a much better practice to turn a warning off for the specific line that needs it. Sad thing is, that mechanism is newer than C++ itself and implementation-dependent, so it probably doesn’t get used as much.
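
      The implementation-dependent mechanism in question looks something like this on GCC/Clang (MSVC spells it #pragma warning(push/disable/pop); the header name here is made up):

      #pragma GCC diagnostic push
      #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
      #include "noisy_third_party.h"   // hypothetical header that would otherwise spam warnings
      #pragma GCC diagnostic pop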

      • merc@sh.itjust.works · ↑4 · 2 days ago

        I’ve seen this a bunch in 3rd party library headers, sadly. So it ultimately doesn’t matter how good my code is.

        Yeah, I’ve seen that too. The problem is that once the library starts spitting out warnings it’s hard to spot your own warnings.

    • sunbeam60@lemmy.one · ↑4 · 2 days ago

      Depends on the age of the codebase, the age of the compiler and the culture of the team.

      I’ve arrived into a team with 1000+ warnings, no const correctness (code had been ported from a C codebase) and nothing but C style casts. Within 6 months, we had it all cleaned up but my least favourite memory from that time was “I’ll just make this const correct; ah, right, and then this; and now I have to do this” etc etc. A right pain.
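
      The cascade looks roughly like this (a made-up snippet, not the actual codebase): once one method is marked const, everything it calls has to follow.

      struct Widget {
          int value() const { return compute(); }   // make this const...
          int compute() const { return helper(); }  // ...and now this must be const too...
          int helper() const { return 42; }         // ...and so on, all the way down
      };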

      • merc@sh.itjust.works · ↑3 · 2 days ago

        So, did you get it down to 0 warnings and manage to keep it there? Or did it eventually start creeping up again?

        • shane@feddit.nl · ↑2 · 2 days ago

          I’m not the person you’re asking but surely they just told the compiler to treat warnings as errors after that. No warnings can creep in then!

    • vivendi@programming.dev · ↑4 · 2 days ago

      Ignoring warnings is really not a good way to deal with them. If a compiler is bitching about something, there’s a reason for it.

      A lot of times the devs are too overworked or a little underloaded in the supply of fucks to give, so they ignore them.

      In some really high quality codebases, they turn on “treat warnings as errors” to ensure better code.

      • merc@sh.itjust.works · ↑2 · 2 days ago

        I know that should be the philosophy, but is it? In my experience it seems to be normal to ignore warnings.

    • jmicz3d@lemmy.sdf.org · ↑3 · 2 days ago

      I work on one of the larger C++ projects out there (in the 20-to-50-million-line range), and though I don’t see the full build logs, I’ve yet to see a component that has a warning.

  • Gobbel2000@programming.dev · ↑23 ↓1 · 2 days ago

    I’m all for having the ability to do these shenanigans in principle, but I’d prefer it if they were guarded behind an unsafe block.

    • mindbleach@sh.itjust.works · ↑37 · 2 days ago

      C is dangerous like your uncle who drinks and smokes. Y’wanna make a weedwhacker-powered skateboard? Bitchin’! Nail that fucker on there good, she’ll be right. Get a bunch of C folks together and they’ll avoid all the stupid easy ways to kill somebody, in service to building something properly dangerous. They’ll raise the stakes from “accident” to “disaster.” Whether or not it works, it’s gonna blow people away.

      C++ is dangerous like a quiet librarian who knows exactly which forbidden tomes you’re looking for. He and his… associates… will gladly share all the dark magic you know how to ask about. They’ll assure you that the power cosmic would never, without sufficient warning, pull someone inside-out. They don’t question why a loving god would allow the powers you crave. They will show you which runes to carve, and then, they will hand you the knife.

  • magic_lobster_party@fedia.io · ↑55 ↓1 · 2 days ago

    There are no medals waiting for you by writing overly clever code. Trust me, I’ve tried. There’s no pride. Only pain.

    • Ajen@sh.itjust.works · ↑4 · 19 hours ago

      Debugging code is always harder than writing it in the first place. If you make it as clever as you can, you won’t be clever enough to debug it.

    • Chrobin@discuss.tchncs.de · ↑26 ↓1 · 2 days ago

      It really depends on your field. I’m doing my master’s thesis in HPC, and there, clever programming is really worth it.

      • magic_lobster_party@fedia.io · ↑14 · 2 days ago

        Well, as long as you know what you’re doing and weigh the risks against the benefits, you’re probably OK.

        In my experience in the industry, there’s little benefit in pretending you’re John Carmack writing fast inverse square root. Understanding what you wrote 6 months ago outweighs almost everything else.

      • MonkderVierte@lemmy.zip · ↑7 ↓1 · 2 days ago

        Clever as in elegant and readable, or clever as in a hack that abuses a bug/feature, where you need to understand the intricacies to understand half of it?

        • Chrobin@discuss.tchncs.de · ↑12 · 2 days ago

          Honestly, also the latter. If you are using hundreds of thousands of cores for over 100h, every single second counts.

    • merc@sh.itjust.works · ↑8 · 2 days ago

      Not only that, but everyone who sees that code later is going to waste so much time trying to understand it. That includes future you.

        • merc@sh.itjust.works · ↑8 ↓1 · 2 days ago

          Ah yes, comments.

          int flubTheWozat(void *) {
            for (int i=0; i<4; i++) {
              lfens += thzn[i] % ugy;  // take mod of thnz[i] with ugy and add to lefens.
            }
            return (lfens % thzn[0]) == 4; // return if it's 4ish
          }
          
          • Zacryon@feddit.org · ↑5 · 2 days ago

            Haha, meaningful, informative comments that make it easier to understand the code of course. ;)

    • Zacryon@feddit.org · ↑4 ↓1 · 2 days ago

      But I must o p t i m i z e! ó_ò

      Yes, let’s spend two hours figuring out the optimal preallocation size for a vector in your specific use case. It’s worth the couple of microseconds saved! Every little bit adds up, after all.
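
      The micro-optimization being mocked, for the record (a sketch; whether reserve() buys you anything depends entirely on the sizes involved):

      #include <cstddef>
      #include <vector>

      std::vector<int> squares(int n) {
          std::vector<int> out;
          out.reserve(static_cast<std::size_t>(n));  // the hard-won preallocation: no reallocations
          for (int i = 0; i < n; ++i) {
              out.push_back(i * i);
          }
          return out;
      }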

  • BigDanishGuy@sh.itjust.works · ↑33 ↓8 · 2 days ago

    But it will let you do it if you really want to.

    Now, I’ve seen this a couple of times in this post. The idea that the compiler will let you do anything is so bizarre to me. It’s not a matter of being allowed by the software to do anything. The software will do what you goddamn tell it to do, or it gets replaced.

    WE’RE the humans, we’re not asking some silicon diodes for permission. What the actual fuck?!? We created the fucking thing to do our bidding, and now we’re all oh pwueez mr computer sir, may I have another ADC EAX, R13? FUCK THAT! Either the computer performs like the tool it is, or it goes the way of broken hammers and lawnmowers!

    • CanadaPlus@lemmy.sdf.org · ↑3 · 20 hours ago

      Yeah, but there’s some things computers are genuinely better at than humans, which is why we code in the first place. I totally agree that you shouldn’t be completely controlled by your machine, but strong nudging saves a lot of trouble.

    • mormegil@programming.dev · ↑7 · 1 day ago

      I understand the idea. But many people have hugely mistaken beliefs about what the C[++] languages are and how they work. When you write ADC EAX, R13 in assembly, that’s it. But C is not a “portable assembler”! It has its own complicated logic. You might think that by writing ++i you are writing just some INC [i] or whatnot. You are not. To take a silly example: by writing int i=INT_MAX; ++i; you are not telling the compiler to produce INT_MIN. You are just telling it complete nonsense. And it would be better if the compiler “prevented” you from doing it, forcing you to explain yourself better.
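
      A minimal illustration of the gap between “telling the machine what to do” and what the language actually promises (signed overflow is undefined, so the optimizer may assume it never happens; unsigned wraparound is defined):

      bool wraps(int i) {
          // Looks like an overflow check, but because signed overflow is UB,
          // the compiler may legally assume it cannot happen and fold this to 'false'.
          return i + 1 < i;
      }

      bool wraps_unsigned(unsigned u) {
          // Unsigned arithmetic wraps modulo 2^N by definition, so this check is real.
          return u + 1u < u;
      }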

      • BigDanishGuy@sh.itjust.works · ↑3 · 1 day ago

        I get what you’re saying. I guess what I’m yelling at the clouds about is the common discourse more than anything else.

        If a screw has a slotted head, and your screwdriver is a torx, few people would say that the screwdriver won’t allow them to do something.

        Computers are just tools, and we’re the ones who created them. We shouldn’t be submissive, we should acknowledge that we have taken the wrong approach at solving something and do it a different way. Just like I would bitch about never having the correct screwdriver handy, and then go look for the right one.

    • AnyOldName3@lemmy.world · ↑11 · 2 days ago

      Soldiers are supposed to question potentially-illegal orders and refuse to execute them if their commanding officer can’t give a good reason why they’re justified. Being in charge doesn’t mean you’re infallible, and there are plenty of mistakes programmers make that the compiler can detect.

      • BigDanishGuy@sh.itjust.works · ↑4 ↓1 · 2 days ago

        I get the analogy, but I don’t think that it’s valid. Soldiers are, much to the chagrin of their commanders, sentient beings, and should question potentially illegal orders.

        Where the analogy doesn’t hold is that, besides my computer not being sentient, what I’m being prevented from doing isn’t against the laws of man.

        I’m not claiming to be infallible. After all to err is human, and I’m indeed very human. But throw me a warning when I do something that goes against best practices, that’s fine. Whether I deal with it is something for me to decide. But stopping me from doing what I’m trying to do, because it’s potentially problematic? GTFO with that kinda BS.

    • Owl@mander.xyz · ↑23 ↓4 · 2 days ago

      Ok gramps now take your meds and off you go to the retirement home

    • WhyJiffie@sh.itjust.works · ↑9 · 2 days ago

      when life gives you restrictive compilers, don’t request permission from them! make life take the compilers back! Get mad! I don’t want your damn restrictive compilers, what the hell am I supposed to do with these? Demand to see life’s manager! Make life rue the day it thought it could give BigDanishGuy restrictive compilers! Do you know who I am? I’m the man who’s gonna burn your house down! With the compilers! I’m gonna get my engineers to invent a combustible compiler that burns your house down!

    • Throskie@midwest.social · ↑2 · 1 day ago

      This comment makes me want to reformat every fucking thing I use and bend it to -my- will like some sort of technomancer.

  • panda_abyss@lemmy.ca · ↑91 ↓2 · 3 days ago

    I actually do like that C/C++ let you do this stuff.

    Sometimes it’s nice to acknowledge that I’m writing software for a computer and it’s all just bytes. Sometimes I don’t really want to wrestle with the ivory tower of abstract type theory mixed with vague compiler errors; I just want to allocate a block of memory and apply a minimal set of rules on top.
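
    In that spirit, a minimal sketch of the “block of memory plus a couple of rules” idea (a hypothetical bump/arena allocator, not anything from the thread):

    #include <cstddef>
    #include <cstdlib>

    // One raw malloc'd block plus one rule: a cursor that only moves forward.
    struct Arena {
        std::byte*  base;
        std::size_t size;
        std::size_t used = 0;

        explicit Arena(std::size_t n)
            : base(static_cast<std::byte*>(std::malloc(n))), size(n) {}
        ~Arena() { std::free(base); }

        // align must be a power of two; returns nullptr when the block is exhausted
        void* alloc(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
            std::size_t p = (used + align - 1) & ~(align - 1);
            if (p + n > size) return nullptr;
            used = p + n;
            return base + p;
        }
    };

    Objects get placement-new’d into whatever alloc() hands back, and the whole block goes away at once with the arena.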

    • jkercher@programming.dev · ↑4 · 2 days ago

      100%. In my opinion, the whole “build your program around your model of the world” mantra has caused more harm than good. Lots of “best practices” seem to be accepted without any qualitative measurement to prove it’s actually better. I want to think it’s just the growing pains of a young field.

      • SpaceCowboy@lemmy.ca · ↑4 ↓1 · 2 days ago

        Even with qualitative measurements they can do stupid things.

        For work I have to write code in C#, and Microsoft found that null reference exceptions were a common issue. They actually calculated how much these issues cost the industry (some big number) and put a lot of effort into changing the language so there are a lot of warnings when something might be null.

        But the end result is people just set things to an empty value instead of leaving it as null to avoid the warnings. And sure great, you don’t have null reference exceptions because a value that defaulted to null didn’t get set. But now you have issues where a value is an empty string when it should have been set.

        The exception message would tell you exactly where in the code the mistake is, you’d know immediately that there’s a problem, and it’s more likely to be discovered by unit tests or QA. A value that’s supposed to be set but silently defaults to empty may not be noticed for a while and is difficult to track down.

        So their research indicated a costly issue (which is ultimately a dev making a mistake) and they fixed it by creating an even more costly issue.

        There’s always going to be things where it’s the responsibility of the developer to deal with, and there’s no fix for it at the language level. Trying to fix it with language changes can just make things worse.

        • HER0@beehaw.org · ↑5 · 2 days ago

          For this example, I feel that it is actually fairly ergonomic in languages that have an Option type (like Rust), which can either be Some value or no value (None), and don’t normally have null as a concept. It normalizes explicitly dealing with the None instead of having null or hidden empty strings and such.

          • SpaceCowboy@lemmy.ca · ↑2 ↓1 · 2 days ago

            I just prefer an exception be thrown if I forget to set something so it’s likely to happen as soon as I test it and will be easy to find where I missed something.

            I don’t think a language is going to prevent someone from making a human error when writing code, but it should make it easy to diagnose and fix it when it happens. If you call it null, “”, empty, None, undefined or anything else, it doesn’t change the fact that sometimes the person writing the code just forgot something.

            Abstracting away from the problem just makes it more fuzzy on where I just forgot a line of code somewhere. Throwing an exception means I know immediately that I missed something, and also the part of the code where I made the mistake. Trying to eliminate the exception doesn’t actually solve the problem, it just hides the problem and makes it more difficult to track down when someone eventually notices something wasn’t populated.

            Sometimes you want the program to fail, and fail fast (while testing) and in a very obvious way. Trying to make the language more “reliable” instead of having the reliability of the software be the responsibility of the developer can mean the software always “works”, but it doesn’t actually do what it’s supposed to do.

            Is the software really working if it never throws an exception but doesn’t actually do what it’s supposed to do?

            • HER0@beehaw.org · ↑1 · 1 day ago

              It is fair to have a preference for exceptions. It sounds like there may be a misunderstanding on how Option works.

              Have you used languages that didn’t have null and had Option instead? If we look at Rust, you can’t forget to check it: it is impossible to get the Some out of an Option without dealing with the None. You can’t forget this. You can mess up in a lot of other ways, but you explicitly have to decide how to handle that potential None case.

              If you want it to fail fast and obviously, there are ways to do this. For example, you can use the unwrap() method to get the contained Some value or panic if it is None, expect() to do the same but with a custom panic message, the ? operator to get the contained Some value or return from the function with None, etc. Tangentially, these also work for Result, which can be Ok or Err.

              It is pretty common to use these methods in places where you want to fail fast, where you don’t expect a None to be possible, or where you don’t want your code to deal with the consequences of something unexpected. You have decided this and live with the consequences, instead of it happening implicitly or you forgetting to deal with it.

    • Kairos@lemmy.today · ↑8 ↓34 · 3 days ago

      People just think that applying arbitrary rules somehow makes software magically more secure, like with rust, as if the compiler won’t just “let you” do the exact same fucking thing if you type the unsafe keyword

        • Kairos@lemmy.today · ↑7 ↓1 · 2 days ago

          That’s not what I meant. I understand that rust forces things to be more secure. It’s not like there’s some guarantee that rust is automatically safe, and C++ is automatically unsafe.

            • vivendi@programming.dev · ↑1 ↓2 · 2 days ago

              No there is not. Borrow checking and RAII existed in C++ too and there is no formal axiomatic proof of their safety in a general sense. Only to a very clearly defined degree.

              In fact, someone found memory bugs in Rust, again, because it is NOT soundly memory safe.

              Dart is soundly null-safe, meaning it mathematically can never compile null-unsafe code unless you explicitly say you’re OK with it. Kotlin is merely null-safe, meaning it can still run into bullshit null conditions.

              The same thing with Rust: don’t let it lull you into a sense of security that doesn’t exist.

              • BatmanAoD@programming.dev · ↑4 ↓1 · 2 days ago

                Borrow checking…existed in C++ too

                Wat? That’s absolutely not true; even today lifetime-tracking in C++ tools is still basically a research topic.

                …someone found memory bugs in Rust, again, because it is NOT soundly memory safe.

                It’s not clear what you’re talking about here. In general, there are two ways that a language promising soundness can be unsound: a bug in the compiler, or a problem in the language definition itself permitting unsound code. (unsafe changes the prerequisites for unsoundness, placing more burden on the user to ensure that certain invariants are upheld; if the code upholds these invariants, but there’s still unsoundness, then that falls into the “bug in Rust” category, but unsoundness of incorrect unsafe code is not a bug in Rust.)

                Rust has had both types of bugs. Compiler bugs can be (and are) fixed without breaking (correct) user code. Bugs in the language definition are, fortunately, fixable at edition boundaries (or in rare cases by making a small breaking change, as when the behavior of extern "C" changed).

                • vivendi@programming.dev · ↑1 ↓1 · 2 days ago

                  Have you heard about cve-rs?

                  https://github.com/Speykious/cve-rs

                  Blazingly fast memory failures with no unsafe blocks in pure Rust.

                  Edit: also I wish whoever designed the syntax for rust to burn in hell for eternity

                  Edit 2: Before the Cult of Rust™ sends their assassins to take out my family, I am not hating on Rust (except the syntax) and I’m not a C absolutist, I am just telling you to be aware of the limitations of your tools

          • drosophila@lemmy.blahaj.zone · ↑4 ↓1 · 2 days ago

            I want you to imagine that your comments in this thread were written by an engineer or a surgeon instead of a programmer.

            Imagine an engineer saying “Sure, you can calculate the strength of a bridge design based on known material properties and prove that it can hold the design weight, but that doesn’t automatically mean that the design will be safer than one where you don’t do that”. Or “why should I have to prove that my design is safe when the materials could be defective and cause a collapse anyway?”

            Or a surgeon saying “just because you can use a checklist to prove that all your tools are accounted for and you didn’t leave anything inside the patient’s body doesn’t mean that you’re going to automatically leave something in there if you don’t have a checklist”. Or “washing your hands isn’t a guarantee that the patient isn’t going to get an infection, they could get infected some other way too”.

            A doctor or engineer acting like this would get them fired, sued, and maybe even criminally prosecuted, in that order. This is not the mentality of a professional, and it is something that programming as a profession needs to grow out of.

            • Kairos@lemmy.today · ↑2 · 2 days ago

              “washing your hands isn’t a guarantee that the patient isn’t going to get an infection, they could get infected some other way too”.

              Every single doctor should know this yes.

              It seems people are adding a sentence I didn’t say “rust can be unsafe and thus we shouldn’t try” on top of the one I did say “programmers should be aware that rust doesn’t automatically mean safe”.

              • BatmanAoD@programming.dev · ↑2 · 21 hours ago

                You didn’t say “programmers should be aware that rust doesn’t automatically mean safe”. You said:

                People just think that applying arbitrary rules somehow makes software magically more secure…

                You then went on to mention unsafe, conflating “security” and “safety”; Rust’s guarantees are around safety, not security, so it sounds like you really mean “more safe” here. But Rust does make software more safe than C++: it prohibits memory safety issues that are permitted by C++.

                You then acknowledged:

                I understand that rust forces things to be more secure

                …which seems to be the opposite of your original statement that Rust doesn’t make software “more secure”. But in the same comment:

                It’s not like there’s some guarantee that rust is automatically safe…

                …well, no, there IS a guarantee that Rust is “automatically” (memory) safe, and to violate that safety, your program must either explicitly opt out of that “automatic” guarantee (using unsafe) or exploit (intentionally or not) a compiler bug.

                …and C++ is automatically unsafe.

                This is also true! “Safety” is a property of proofs: it means that a specific undesirable thing cannot happen. The C++ compiler doesn’t provide safety properties[1]. The opposite of “safety” is “liveness”, meaning that some desirable thing does happen, and C++ does arguably provide certain liveness properties, in particular RAII, which guarantees that destructors will be called when leaving a call-stack frame.

                [1] This is probably over-broad, but I can’t think of any safety properties C++ the language does provide. You can enforce your own safety properties in library code, and the standard library provides some; for instance, mutexes have safety guarantees.

              • drosophila@lemmy.blahaj.zone · ↑3 ↓1 · 2 days ago

                Then you should probably be a little more explicit about that, because I have never, not once in my life, heard someone say “well you know wearing a seatbelt doesn’t guarantee you’ll survive a car crash” and not follow it up with “that’s why seatbelts are stupid and I’m not going to wear one”.

                • Kairos@lemmy.today · ↑3 · 2 days ago

                  We need to stop attaching shit someone didn’t say to something they did. It makes communicating hostile and makes you an asshole.

                  Edit: okay that was a bit rude. But it’s so frustrating to say something and then have other people go “that means <this other thing you didn’t say>!!!11!”

      • BatmanAoD@programming.dev · ↑25 · 2 days ago

        It’s neither arbitrary nor magic; it’s math. And unsafe doesn’t disable the type system, it just lets you dereference raw pointers.

      • Speiser0@feddit.org · ↑12 · 2 days ago

        You don’t even need unsafe, you can just take user input and execute it in a shell and rust will let you do it. Totally insecure!

        • Ignotum@lemmy.world · ↑14 ↓1 · 2 days ago

          Rust isn’t memory safe because you can invoke another program that isn’t memory safe?

          • Speiser0@feddit.org · ↑8 · 2 days ago

            My comment is sarcastic, obviously. The argument Kairos gave is similar to this. You can still introduce vulnerabilities. The issue is normally that you introduce them accidentally. Rust gives you safety, but does not put your code into a sandbox. It looked to me like they weren’t aware of this difference.

      • panda_abyss@lemmy.ca · ↑2 ↓1 · 2 days ago

        I don’t know rust, but for example in Swift the type system can make things way more difficult.

        Before they added macros, if you wanted to write ORM code on a SQL database it was brutal, and if you need to go into raw buffers it’s generally easier to just write C/Obj-C code and a bridging header. The type system can also make it harder to reason about performance, because you lose some visibility into what actually gets compiled.

        The Swift type system has improved, but I’ve spent a lot of time fighting with it. I just try to avoid generics and type erasure now.

        I’ve had similar experiences with Java and Scala.

        That’s what I mean about it being nice to drop out of setting up some type hierarchy and interfaces and just work with raw buffers or function pointers.

  • UnfortunateShort@lemmy.world · ↑29 ↓3 · 2 days ago

    I used to love C++ until I learned Rust. Now I think it is obnoxious, because even if you write modern C++, without raw pointers, casting and the like, you will constantly be questioning whether you’re doing stuff right. The spec is just way too complicated at this point, and it can only get worse unless they choose to break backwards compatibility and throw out the pre-C++11 bullshit.

    • mobotsar@sh.itjust.works · ↑15 · 2 days ago

      Depending on what I’m doing, sometimes rust will annoy me just as much. Often I’m doing something I know is definitely right, but I have to go through so much ceremony to get it to work in rust. The most commonly annoying example I can think of is trying to mutably borrow two distinct fields of a struct at the same time. You can’t do it. It’s the worst.

    • Zacryon@feddit.org · ↑2 · 2 days ago

      I suppose it’s a matter of experience and practice. The more you work with it, the better you get, as with all things one can learn.

      • sexual_tomato@lemmy.dbzer0.com · ↑3 · 2 days ago

        The question becomes, then, if I spend 5 years learning and mastering C++ versus rust, which one is going to help me produce a better product in the end?

  • Treczoks@lemmy.world · ↑19 · 2 days ago

    Structs with union members that let the same place in memory be accessed word-wise, byte-wise, or even bit-wise are a godsend for everyone who needs to access IO spaces, and I’m happy my C compiler lets me do it.
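
    For example, a register described that way (layout and address are invented for illustration; bit-field order is implementation-defined, which is exactly why it matters that your particular compiler blesses it):

    #include <stdint.h>

    // A 32-bit control register, viewable word-wise, byte-wise, or bit-wise.
    typedef union {
        uint32_t word;
        uint8_t  byte[4];
        struct {
            uint32_t enable   : 1;
            uint32_t mode     : 3;
            uint32_t divider  : 12;
            uint32_t reserved : 16;
        } bit;
    } ctrl_reg_t;

    #define CTRL (*(volatile ctrl_reg_t *)0x40001000u)  // made-up peripheral address

    void ctrl_init(void)
    {
        CTRL.word        = 0;   // clear the whole register in one store
        CTRL.bit.divider = 26;  // then poke individual fields
        CTRL.bit.enable  = 1;
    }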