That’s very interesting. Robots are less resistant to radiation than humans? So when robots take the jobs of people, production is made more vulnerable to nuclear weapons?
Humans are, in general, absurdly robust. You can absolutely mess them up, and they will keep chugging along for a while before breaking down. Not to mention their almost frightening ability to make a full recovery from horrendous injuries.
Most robots/machines will be more or less completely disabled by a faulty connection, clogged valve, or torn hydraulic line. Sure, you can shield them more, but for stuff like radiation, dust, and harsh environments that cause gradual degradation, you’re going to have a very hard time beating the resilience of humans.
Bleep Bloop… it is clearly advantageous that we use humans to operate in harsh environments rather than robots… Bleep Ding.
Robots were sent into the Chernobyl reactor and they stopped working immediately. Gamma radiation fries circuits.

In the end, they sacrificed soldiers above, dumping sacks of cement, and miners below, laying a foundation to stop the core from melting into the earth.

And don’t forget cheaper!

Which is why it’s imperative that little Timmy is sent to the mines despite all the risks and occupational health hazards that will eventually kill him.
In a way. Cell damage can be repaired when it occurs in low amounts, and even broken DNA strands can be fixed by the machinery in our cells. Most importantly, our systems are very redundant on a cellular level; losing a few cells is not much of an issue, since we lose cells every day anyway.

Computers have nearly no redundancy; in some cases, a single bit flipped by a gamma ray can cause a system crash in any computer. There is stuff like ECC for memory, which helps, but even that isn’t foolproof. Computers for space missions outside of Earth’s magnetosphere are designed to keep component density low, with lots of error-correcting codes, backups, and heavy lead shielding, all of which means lower performance.
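To make the ECC idea concrete, here’s a minimal sketch of SEC-DED (single-error-correct, double-error-detect), the scheme behind ECC memory, using a textbook Hamming(7,4) code plus an overall parity bit. Real controllers protect whole 64-bit words with wider codes, so treat this purely as an illustration:

```python
# Minimal sketch of SEC-DED ECC: a textbook Hamming(7,4) code plus an
# overall parity bit. Purely illustrative, not how a real DIMM works.

def encode(d1, d2, d3, d4):
    """4 data bits -> 8-bit codeword [p0, p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4                  # covers Hamming positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                  # covers positions 4, 5, 6, 7
    word = [p1, p2, d1, p3, d2, d3, d4]
    p0 = 0
    for bit in word:                   # overall parity upgrades SEC to SEC-DED
        p0 ^= bit
    return [p0] + word

def decode(w):
    """8-bit codeword -> (status, [d1, d2, d3, d4] or None)."""
    rest = list(w[1:])                 # Hamming(7,4) part, positions 1..7
    syndrome = 0
    for pos in range(1, 8):
        if rest[pos - 1]:
            syndrome ^= pos            # XOR of positions of all set bits
    overall = w[0]
    for bit in rest:
        overall ^= bit                 # 0 iff total parity is consistent
    if syndrome and overall:
        rest[syndrome - 1] ^= 1        # one flip: the syndrome locates it
        status = "corrected"
    elif syndrome and not overall:
        return "double error detected", None   # uncorrectable: e.g. halt
    else:
        status = "ok"                  # no error, or only p0 itself flipped
    return status, [rest[2], rest[4], rest[5], rest[6]]

word = encode(1, 0, 1, 1)
word[5] ^= 1                           # simulate one radiation-induced flip
print(decode(word))                    # ('corrected', [1, 0, 1, 1])
word[2] ^= 1                           # a second flip in the same word
print(decode(word))                    # ('double error detected', None)
```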
I think you are both overestimating the ability of biological systems and underestimating the ability of mechanical systems to be repaired.
Biological systems have incredible self-repair capabilities, but are otherwise largely unrepairable. To fix issues with biological systems you mostly have to work within the bounds of those self-repair mechanisms, which are slow, poorly understood, and rather limited.

Losing a few skin cells is perfectly normal. Corrupting a few skin cells can cause cancers or autoimmune disorders. Losing a few Purkinje cells can lead to significant motor impairment and death.
Computers, and mechanical systems in general, can have a shit ton of redundancy. You mention ECC, but neglected to mention the layers of error correction, BIST (built-in self-test), and redundancy that even the cheap, broken, cost-optimized, planned-obsolescence consumer crap most people are familiar with makes heavy use of.
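As one concrete example of those layers, here’s a minimal sketch of an end-to-end checksum of the kind storage and network stacks use to catch corruption (a hypothetical use of Python’s zlib.crc32; a real device stacks several such checks):

```python
# Sketch of one everyday redundancy layer: an end-to-end CRC check.
# Illustrative only; real stacks layer several such integrity checks.
import zlib

payload = b"sensor reading: 42"
stored = (payload, zlib.crc32(payload))     # keep a checksum with the data

data, checksum = stored
corrupted = bytearray(data)
corrupted[0] ^= 0x01                        # simulate a single bit flip
assert zlib.crc32(data) == checksum         # intact copy passes
assert zlib.crc32(bytes(corrupted)) != checksum   # the flipped bit is caught
```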
A single bit flipped by a gamma ray will not cause any sort of issue in any modern computer. I cannot overstate how often this and other memory errors happen. A double bit flip can cause issues in a poorly designed system, and, again, such flips are not just caused by cosmic rays. However, it’s not usually that hard to add multiple redundancies if that is a concern, such as in high-altitude, extreme-environment, or highly miniaturized systems. It does increase cost and complexity though, so…
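For a sketch of what that redundancy can look like, here’s the classic triple modular redundancy (TMR) trick used in radiation-tolerant designs: keep three copies of a value and take a bitwise majority vote, so a flip in any one copy is out-voted. Illustrative only, not real flight software:

```python
# Sketch of triple modular redundancy (TMR): store three copies of a
# value and take a bitwise majority vote. A bit flip in any one copy
# is masked by the other two.

def vote(a: int, b: int, c: int) -> int:
    """Bitwise majority: a result bit is 1 iff at least two inputs have a 1."""
    return (a & b) | (a & c) | (b & c)

value = 0b1011_0010
copy_a, copy_b, copy_c = value, value, value
copy_b ^= 0b0000_1000            # simulate a radiation-induced flip in one copy
assert vote(copy_a, copy_b, copy_c) == value   # the faulty copy is out-voted
```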
The huge benefit of mechanical systems is that they are fully explainable and replaceable. CPU got a bunch of radiation and seems to be acting a bit weird? Replace it! Motor burnt out? Replace it! The new system will be good as new or better.
You can’t do that in a biological system. Even with autografts (using the person’s own tissues for “replacements”), the risk of scarring, rejection and malignancy remains fairly high, and the outcome isn’t “good as new” but somewhere between ‘minor permanent injury’ and ‘death’. Allografts (donor tissues) often need lifelong medication and maintenance to keep from failing, and even “minor” transplants carry the risk of infection, necrosis and death.
That study doesn’t seem to support the point you’re trying to use it to support. First, it’s talking about machines with error correcting RAM, which most consumer devices don’t have. The whole point of error correcting RAM is that it tolerates a single bit flip in a memory cell and can detect a second one and, e.g., trigger a shutdown rather than the computer just doing what the now-incorrect value tells it to (which might be crashing, might be emitting an incorrect result, or might be something benign). Consumer devices don’t have this protection (until DDR5, which can fix a single bit flip but won’t detect a second, so it can still trigger misbehaviour). Also, the data in the tables gives figures around 10% for the chance of an individual device experiencing an unrecoverable error per year, which isn’t really that often, especially given that most software is buggy enough that you’d be lucky to use it for a year with only a 10% chance of it doing something wrong.
> it’s talking about machines with error correcting RAM, which most consumer devices don’t have.
It’s a paper from 2009 talking about “commodity servers” with ECC protection. Even back then it was fairly common and relatively cheap to implement, though it was more often integrated into the CPU and/or memory controller. Since 2020, DDR5 makes on-die ECC mandatory in the memory itself as well.
> gives figures around 10% for the chance of an individual device experiencing an unrecoverable error per year, which isn’t really that often
Yes, that’s my point. Your claim of “computers have nearly no redundancy” is complete bullshit.
It wasn’t originally my claim. I replied to your comment as I was scrolling past because it had a pair of sentences that seemed dodgy, so I clicked the link it cited as a source, and responded when the link didn’t support the claim.
Specifically, I’m referring to:

> A single bit flipped by a gamma ray will not cause any sort of issue in any modern computer. I cannot overstate how often this and other memory errors happen.
This just isn’t correct:
- loads of modern computers don’t use DDR5 or ECC variants of older generations at all, so don’t have any error-correcting memory. If the wrong bit flips, they just crash.
- loads of modern computers don’t exclusively use DDR5, e.g. graphics memory (which didn’t have error correction until GDDR7 but can still cause serious problems, e.g. if a bit flips in a command buffer and makes the GPU write back to the wrong address in main memory, overwriting something important), and various caches (SRAM is vulnerable to bit flips from various kinds of radiation, too). If the wrong bit flips, they just crash.
Compared to other computer problems that can put the wrong data into memory, like experiencing a bug because a programmer made a mistake, or even just a part wearing out from age, memory errors are really rare, so anything implying normal people need to care is thoroughly overstating their prevalence.
Sorry, I wasn’t paying attention and missed that. I apologize.
> loads of modern computers don’t use DDR5 or ECC variants of older generations at all, so don’t have any error-correcting memory. If the wrong bit flips, they just crash.
Integrated memory ECC isn’t the only check; it’s an extra redundancy. The point of that paper was to show how often single-bit errors occur within one part of a computer system.
> memory errors are really rare
Right, because of redundancies. It takes two simultaneous bit flips in different regions of memory to cause a memory error, and even then it’s still a ~10% chance annually according to the paper I cited.
It also means humans will be progressively pushed into the most dangerous jobs because the robot circuitry can’t cope with harsh environments. The easy cushy jobs will go to the robots.
Could be some exceptions.

Off the top of my head: Anything with poisonous gases. Anything where there’s a RISK of an explosion or something (so the robots would work before the explosion; this is kinda already a thing with bomb disposal robots, isn’t it?). Etc.
So for sure anything nuclear will have to be human, but there could be other environments where robots survive and humans won’t.