2739: Data Quality

==Explanation==
{{incomplete|Created by a SUPERIOR FELINE. Do NOT delete this tag too soon.}}
 
  
 
{| class="wikitable"
|-
! Item
! Title Text
! Explanation
|-
 
| Someone who once saw the data describing it at a party
 
| exclamation about how cute your cat is
 
| This refers to how unreliable and inaccurate it is to receive information verbally at second hand: humans are naturally poor at maintaining accuracy when passing on what they have been told. This is the basic premise behind {{w|Chinese whispers|the Telephone Game}}. People instinctively summarize information in the way they understand it, often in their own words rather than what they literally heard or read.
 
 
|-
 
| {{w|Bloom filter}}
| last 4 digits of your cat's chip ID
| A Bloom filter is a probabilistic data structure that can efficiently say whether an element is ''probably'' in a dataset, while it can say "element is not in set" with 100% accuracy. If a Bloom filter were used to represent the contents of a book, one could perhaps reconstruct everything by querying it, just by guessing, but in a highly inefficient and potentially inaccurate way. A Bloom filter is like the last four digits of the cat's chip ID in that, while you can know for sure a cat isn't yours if its last four digits don't match, you can't know for sure that it is yours if they do.
 
 
|-
 
| {{w|Hash table}}
| your cat's full chip ID
| A hash table allows you to find data very quickly. Randall probably means hashing the contents of entire books. Calculating a hash value for an entire book means that there is (almost certainly) a unique relationship between the book and its hash value - e.g. "58b8893b172d00e9". This exact version of the book will always yield this exact hash value, though it is practically impossible to reconstruct the book's content from the hash alone. Hashing is a method of checking that a copy is the same as the original, but the value is meaningless on its own and can in principle be wrong: an average book contains several million bits, yet a SHA-2 hash has only 256 bits, so there are theoretically many (mostly nonsensical, but not necessarily) 'wrong' versions of the book that would look correct.
 
 
|-
 
| {{w|JPEG|JPG}}, {{w|GIF}}, {{w|MPEG-1|MPEG}}
| a drawing of your cat
 
 
| Image and video formats that are considered 'lossy'. The JPG (or "JPEG") format and the MPEG {{w|MPEG-2|group}} {{w|Advanced Video Coding|of}} formats typically use a range of data-compression methods that save space by selectively fudging (thus losing) whatever details of the image (and audio, where appropriate) they can, to make disproportionate gains in compression. These are best used for real-world images (and films), where real-world 'noise' can afford to be replaced by a more compressible version without too much obvious change.
GIF compression is not 'lossy' in the same way, i.e. whatever it is asked to encode can be faithfully decoded, but Randall may consider its limitations (it can only write images of 256 unique hues, albeit that these can come from anywhere across the whole 16,777,216-color "True color" range, plus transparency) to be a form of loss, as conversion from a more sophisticated format (e.g. PNG, below) could lose many of the subtle shades of the original and produce an inferior image. For this reason, GIF became a format best left to render diagrams and other computer-generated imagery with swathes of identical pixels and mostly sharp edges (and to utilize the optional transparent mask). Alternatively, he may just have included it as a joke/nerd-snipe.
 
|-
| {{w|PNG}}, {{w|ZIP (file format)|ZIP}}, {{w|TIFF}}, {{w|WAV}}
| photo of your cat
| A series of formats using lossless compression. PNG and TIFF are image formats that are suitable for photos, but without (necessarily) resorting to reduced accuracy in order to assist compression. WAV is an audio format that also does not arbitrarily sacrifice 'unnecessary' details, unlike the more recently developed {{w|MP3|MPEG Audio Layer III}}, which has become the de facto consumer audio format for many.
ZIP is a generic compression algorithm (and the name of the format it creates) that can be used to store any other digital files. Anything put within a ZIP file can be exactly decompressed into its original state later on, although any file already compressed in some way (such as any of the image formats mentioned in this comic, or other ZIPs) is unlikely to compress significantly further.
 
|-
 
| Raw data + parity bits for error detection
 
| clone of your cat
 
| In the number 135, the sum of its digits is 9, so the number could be written as "1359", for example, slightly increasing the amount of data that needs to be sent. The slight advantage is that, if the number were tampered with, the parity digit may be able to tell you that an error has occurred (or possibly that the parity digit itself was the one miswritten). However, a change from "1359" to "1539" could not be detected by this method, since extracting the parity digit would wrongly confirm the first three digits as 'correct'.
 
There are more reliable means to detect errors, such as CRC-32 (now considered obsolete), MD5 and the much more modern {{w|Secure Hash Algorithm|SHA}}. Such values were alluded to in the Hash Table section, but here they are sent ''alongside'' the data, slightly increasing the amount of data transmitted/stored (in order to establish its accuracy), rather than replacing the data entirely, which would vastly decrease the amount of data but leave the virtually impossible task of performing a correct reconstruction.
 
 
 
However it is done, if the check indicates a problem then you can only seek a new copy (of the data, and/or the parity or hash), hoping that the problems encountered can be resolved.
 
 
|-
| Raw data + parity bits for error ''correction''
| your actual cat
| With extra error-correction data, there are ways to immediately restore the original data. One method is to 'overlap' multiple error-detection parities such that any small enough corruption of the data (including of the parity bits themselves) can be reconstructed to the correct original value by cross-comparison between all parity bits and the supposed data. One of the first modern methods developed was {{w|Hamming(7,4)}}, invented around 1950, a balanced approach designed to handle the error conditions typically encountered at the time, which has inspired even contemporary electronic methods of maintaining data integrity. Another practical application of error-correction bits is in {{w|QR_code#Error_correction|QR Codes}}, which use {{w|Reed–Solomon error correction|Reed–Solomon error correction}}.
 
 
|-
| Better data
| my better cat
| This gives up on the data in question and suggests swapping it for different data entirely. It is no longer about the quality of the transfer of the data, but a judgment of the data itself. Philosophically, it could be saying that the data (or cat) is better in some nebulous way, or simply more faithful to whatever the data is trying to record and represent; in the title text's case, that Randall's cat more closely embodies the essence of "catness."
 
 
|}
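The "certain no, probabilistic yes" behaviour of a Bloom filter described above can be sketched in a few lines of Python. The bit-array size, the number of hash functions, and the salted-SHA-256 hashing below are illustrative assumptions, not anything specified in the comic:

```python
import hashlib

class BloomFilter:
    """Tiny illustrative Bloom filter: k hash positions over m bits."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k bit positions from a salted SHA-256 of the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # False means "definitely not in set"; True means "probably in set".
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("my cat")
print(bf.might_contain("my cat"))       # True: no false negatives
print(bf.might_contain("another cat"))  # almost certainly False, but not guaranteed
```

False positives occur when another item's positions happen to all be set already, which is exactly why a matching "last 4 digits of the chip ID" cannot prove the cat is yours.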
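The hash-table row's point, that a fixed-size hash pins down one exact version of a book but cannot reconstruct it, can be demonstrated with Python's standard `hashlib`; the repeated sentence below is just a stand-in for a book's text:

```python
import hashlib

book_v1 = "It was a dark and stormy night. " * 1000  # stand-in for a book
book_v2 = book_v1.replace("stormy", "calm")          # a slightly different 'book'

h1 = hashlib.sha256(book_v1.encode()).hexdigest()
h2 = hashlib.sha256(book_v2.encode()).hexdigest()

print(len(h1))   # 64 hex characters = 256 bits, regardless of input size
print(h1 == h2)  # False: any change yields a completely different hash
```

The hash is deterministic (the same text always gives the same 256 bits), yet millions of bits of content have been collapsed into 256, so the mapping cannot be run backwards.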
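The "135" check-digit example from the error-detection row can be made concrete. This digit-sum scheme is purely illustrative (real systems use CRCs or cryptographic hashes), but it shows both the detection and the blind spot:

```python
def add_check_digit(n: int) -> str:
    """Append the digit sum mod 10 as a simple check digit (illustrative)."""
    digits = str(n)
    check = sum(int(d) for d in digits) % 10
    return digits + str(check)

def looks_ok(s: str) -> bool:
    """Recompute the digit sum and compare it with the trailing check digit."""
    *data, check = s
    return sum(int(d) for d in data) % 10 == int(check)

print(add_check_digit(135))  # "1359": 1 + 3 + 5 = 9
print(looks_ok("1359"))      # True
print(looks_ok("1259"))      # False: a digit was corrupted
print(looks_ok("1539"))      # True! transpositions slip through this scheme
```

The last line is the "1359" vs "1539" failure described above: the digit sum is unchanged, so the error goes undetected.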
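The error-''correction'' row's idea of overlapping parities can be sketched with the {{w|Hamming(7,4)}} code mentioned above: three parity bits each cover a different overlapping subset of the seven positions, and the pattern of failed checks (the syndrome) points at the single flipped bit. This is a minimal textbook layout, not any particular real implementation:

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits with 3 overlapping parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit error, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                              # flip one bit "in transit"
print(hamming74_decode(code) == data)     # True: the error was corrected
```

Because each bit position is covered by a unique combination of parity checks, a single corrupted bit, even a parity bit, can be located and repaired without requesting a fresh copy of the data.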
  
 
==Transcript==
:[A line chart is shown with eight unevenly-spaced ticks each one with a label beneath the line. Above the middle of the line there is a dotted vertical line with a word on either side of this divider. Above the chart there is a big caption with an arrow pointing right beneath it.]
 
:<big>Data Quality</big>
 
:Lossy ┊ Lossless
