Does it? OCR is still pretty bad; it’s definitely going to be more annoying than plaintext. It might be worth it, but that doesn’t make it much less of a pain in the ass to deal with. You might need symbols that aren’t alphanumeric (along the lines of QR codes) to make the conversion back to plaintext more reliable, and I don’t think we have something like that right now.
You’re probably right, but steganography with FEC should be enough to do the job; any predictive text errors would be caught with the checksumming.
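The checksumming half of that is easy to sketch. Here’s a minimal, stdlib-only illustration of the idea (not any particular stego or FEC scheme): split the payload into chunks, print each as base32 plus a CRC32, and any OCR or predictive-text "correction" in a line shows up as a checksum mismatch on re-scan. The function names and chunk size are just made up for the example.

```python
import base64
import binascii

def encode_lines(data: bytes, chunk: int = 10) -> list[str]:
    """Split data into chunks; each printed line carries its own CRC32
    so single-character OCR errors in that line are detectable."""
    lines = []
    for i in range(0, len(data), chunk):
        part = data[i:i + chunk]
        body = base64.b32encode(part).decode("ascii")
        crc = binascii.crc32(part) & 0xFFFFFFFF
        lines.append(f"{body} {crc:08X}")
    return lines

def verify_line(line: str):
    """Return (payload, ok). ok is False if the line was mangled,
    whether the damage broke the base32 or just flipped a character."""
    body, crc_hex = line.rsplit(" ", 1)
    try:
        part = base64.b32decode(body)
    except binascii.Error:
        return None, False  # OCR produced a non-base32 character
    ok = (binascii.crc32(part) & 0xFFFFFFFF) == int(crc_hex, 16)
    return part, ok
```

A single altered base32 character changes at most 5 contiguous bits of the payload, well inside CRC32’s guaranteed burst-detection range, so a per-line swap is always caught; real FEC (e.g. Reed–Solomon) would go further and repair it rather than just flag it.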
After all, Phil Zimmermann got the entirety of the PGP source code from the US to Germany as a printed book. OCR combined with predictive text reconstruction has come a LONG way since then. The big problem with OCR today is that it often silently “corrects” errors that were present in the original document, so the scan stops being a faithful copy.