Talk:3062: Off By One


But what about floats? GreyFox (talk) 20:01, 12 March 2025 (UTC)

Is this dithering? Hcs (talk) 21:19, 12 March 2025 (UTC)

Could be. --PRR (talk) 22:19, 12 March 2025 (UTC)

This language has a huge off-by-one error: the docs don't explicitly say if the random range is inclusive. --Snaxmcgee (talk) 22:22, 12 March 2025 (UTC)
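
For what it's worth, that inclusive-or-exclusive ambiguity is a real off-by-one trap outside the comic too. As an illustration (using Python's standard library rather than the comic's fictional language), the two stock range functions disagree on exactly this point:

 import random
 
 # Python's own API shows the ambiguity being joked about above:
 # randint(40, 50) can return 50, while randrange(40, 50) never can.
 print(max(random.randint(40, 50) for _ in range(10_000)))    # almost certainly 50 (both ends inclusive)
 print(max(random.randrange(40, 50) for _ in range(10_000)))  # almost certainly 49 (stop value is exclusive)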

But if it's adjusted both on store and on read, then there is a chance (of about 1 in 22) that the value after read will be exactly the same as the value before store. This does not eliminate pre-existing off-by-one errors, and in fact introduces new ones if the adjustment on read is off by one from the adjustment on store when there was no off-by-one error in the original code. And what's worse - with a single store-read cycle, the value can never be off by 40 to 50. It can be off by up to 10, or by between 80 and 100, in either direction. --NeatNit (talk) 22:42, 12 March 2025 (UTC)
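
A quick brute-force check of the numbers above, assuming each of store and read adds an offset with a uniformly random sign and a magnitude of 40 to 50 inclusive (the same model the reply below assumes):

 from itertools import product
 from collections import Counter
 
 # Assumed model: each store/read adds +/-(40..50), i.e. 22 equally likely adjustments.
 adjustments = [sign * mag for sign in (+1, -1) for mag in range(40, 51)]
 
 # Enumerate every store+read combination and tally the net offset.
 net = Counter(a + b for a, b in product(adjustments, repeat=2))
 
 print(net[0], "/", len(adjustments) ** 2)   # 22 / 484, i.e. about 1 in 22 cancel exactly
 print(sorted(set(abs(v) for v in net)))     # only 0..10 and 80..100 are possible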

:I was just adjusting the explanation to imply this sort of thing (without having read your comment, just yet). Given the assumption that n=n±(40+rand(11)) at every stage (I'm assuming 'inclusive', Snaxmcgee!), two steps of 'intentional adjustment' might result in: -100 (x1), -99 (x2), -98 (x3), -97 (x4), -96 (x5), -95 (x6), -94 (x7), -93 (x8), -92 (x9), -91 (x10), -90 (x11), -89..-80 (x10..x1), -10 (x2), -9 (x4), -8 (x6), -7 (x8), -6 (x10), -5 (x12), -4 (x14), -3 (x16), -2 (x18), -1 (x20), ±0 (x22), +1..+10 (x20..x2), +80..+90..+100 (x1..x11..x1).
:This gives a chance of being entirely correct as 22/484 (4.5454...%) and each off-by-one as very slightly less (though ±1, in total, is almost twice as likely!).
:Adding further steps (skipping odd step-accumulations, at least at first, until you get to nine of them and everything entirely stops being discontinuous) just spreads out an increased number of highs right next to zero deflection... 172.70.86.129 23:38, 12 March 2025 (UTC)
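
For anyone who wants to verify those tallies and the nine-step claim, here is a small sketch under the same ±(40..50)-inclusive assumption, built by repeatedly convolving the per-step distribution:

 from collections import Counter
 
 # Assumed model, as above: each step adds +/-(40..50), all 22 options equally likely.
 step = [sign * mag for sign in (+1, -1) for mag in range(40, 51)]
 
 # Repeatedly convolve the per-step distribution to count every k-step outcome.
 dist = Counter({0: 1})
 for k in range(1, 10):
     nxt = Counter()
     for value, count in dist.items():
         for s in step:
             nxt[value + s] += count
     dist = nxt
     if k == 2:
         # Spot-check the two-step tallies listed above (out of 484 combinations).
         print(dist[0], dist[1], dist[-100], dist[-90], dist[-80])  # 22, 20, 1, 11, 1
     if k % 2 == 1:
         # Closest an odd number of steps can get to zero: the gap closes at nine steps.
         print(k, min(abs(v) for v in dist))                        # 1:40, 3:30, 5:20, 7:10, 9:0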