Used to write software for reading QR Codes, and it was a fascinating process, dealing with increasingly bad customer images. They’re pretty resilient though!
I’m confused as to your meaning here. Current codecs are miles ahead of what we had in the past. Unless you mean typical resolution (e.g. 4K, 8K, etc.).
For the purposes of OP’s problem (P vs. NP), we consider not particular solutions, but general algorithmic approaches. Thus, problems are classed as either Hard (best known algorithms take exponential time in the size of the input) or Easy (solvable in polynomial time in the size of the input).
A number of important problems fall into this general class of Hard problems: Sudoku, Traveling Salesman, Bin Packing, etc. For all of these, the best known algorithms take exponential time on worst-case inputs.
On the other hand, as an example of an easy problem, consider sorting a list of numbers. It’s really easy to determine if a list is sorted, and it’s always relatively fast/easy to sort the list, no matter what order it started in.
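To make the contrast above concrete, here’s a minimal sketch in Python. It uses subset sum as a stand-in for the Hard problems (checking a proposed answer is fast, but brute-force solving tries every subset, which is 2^n in the input size) and sortedness-checking for the Easy side. The function names are illustrative, not from any library.

```python
from itertools import combinations

def is_sorted(xs):
    # Easy side: checking takes one linear pass, and sorting
    # itself is only O(n log n).
    return all(a <= b for a, b in zip(xs, xs[1:]))

def verify_subset_sum(nums, target, subset):
    # Verifying a proposed answer is fast: just add it up and
    # confirm the elements came from the input.
    return sum(subset) == target and all(x in nums for x in subset)

def solve_subset_sum(nums, target):
    # Brute-force solving tries every subset: exponential (2^n)
    # in the length of the input list.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

print(is_sorted([1, 2, 3]))                   # True
print(solve_subset_sum([3, 9, 8, 4, 5], 13))  # [9, 4]
```

The asymmetry is the whole point: `verify_subset_sum` runs in linear time, while `solve_subset_sum` blows up exponentially as the list grows, and whether that gap can always be closed is exactly the P vs. NP question.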
I don’t have the source with me, but I recall a paper about listening to various languages under different signal/noise thresholds. If I recall correctly, languages like German that have multiple declensions were easier to parse in noisy samples because of the redundant information. Sorry for not having the source on hand though.
Exactly the same. Gf & I got into it a few weeks ago and just caught up to current. We’re champing at the bit to see what happens next.
Jade #1!