Difference between revisions of "Talk:3026: Linear Sort"

Revision as of 17:14, 18 December 2024

First in linear time! Mr. I (talk) 13:28, 18 December 2024 (UTC)

Because O(n log n) outgrows O(n), the Linear Sort is not actually linear. 162.158.174.227 14:21, 18 December 2024 (UTC)

If your sleep() function can handle negative arguments "correctly", then I guess it could work. 162.158.91.91 16:27, 18 December 2024 (UTC)
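
For reference, here is a minimal sketch of the padding trick being discussed (illustrative only, not Randall's actual code; the names linear_sort, merge_sort and the constant PAD_SECONDS_PER_ELEMENT are made up). The sleep argument is the linear time budget minus the time the real sort took, so once the n log n sorting time exceeds that budget the argument goes negative, which is the case the comment above is joking about.

    # Minimal sketch of the "linear sort" padding idea (illustrative only;
    # PAD_SECONDS_PER_ELEMENT is a made-up constant, not from the comic).
    import time

    PAD_SECONDS_PER_ELEMENT = 1e-3  # hypothetical per-element time budget

    def merge_sort(items):
        """Ordinary O(n log n) merge sort, returning a new sorted list."""
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    def linear_sort(items):
        start = time.monotonic()
        result = merge_sort(items)
        # Pad the runtime up to a linear budget. For large enough n the
        # n log n sorting time exceeds the budget, the argument below goes
        # negative, and time.sleep() raises ValueError -- the "negative
        # arguments" problem the comment above refers to.
        time.sleep(PAD_SECONDS_PER_ELEMENT * len(items) - (time.monotonic() - start))
        return result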

That was fast... Caliban (talk) 15:35, 18 December 2024 (UTC)

Do I even want to know what Randall's thinking nowadays? ⯅A dream demon⯅ (talk) 16:02, 18 December 2024 (UTC)

The title text would be more correct if Randall used e.g. Timsort instead of Mergesort. Both have the same worst-case complexity O(n log n), but Timsort is linear if the list is already in order, so its best-case complexity is O(n). Mergesort COULD also be implemented this way, but its standard version is never linear. Bebidek (talk) 16:35, 18 December 2024 (UTC)
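
As a quick editorial check of that best-case claim (not part of the original thread): Python's built-in sorted() is a Timsort and only uses the < operator, so comparisons can be counted with a small wrapper class. On an already-ordered list it performs exactly n − 1 comparisons, versus roughly n·log2(n) on a shuffled one.

    # Counting comparisons to illustrate Timsort's O(n) best case.
    # Python's sorted()/list.sort() is Timsort and only uses "<" (__lt__).
    import math
    import random

    class Counted:
        comparisons = 0
        def __init__(self, value):
            self.value = value
        def __lt__(self, other):
            Counted.comparisons += 1
            return self.value < other.value

    def comparisons_for(values):
        Counted.comparisons = 0
        sorted(Counted(v) for v in values)
        return Counted.comparisons

    n = 100_000
    print(comparisons_for(range(n)))        # already in order: exactly n - 1
    shuffled = list(range(n)); random.shuffle(shuffled)
    print(comparisons_for(shuffled))        # random order: about n * log2(n)
    print(round(n * math.log2(n)))          # reference value for comparison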

According to my estimates extrapolated from timing the sorting of 10 million random numbers on my computer, the break-even point where the algorithm becomes worse than linear is beyond the expected heat death of the universe. I did neglect the question of where to store the input array. --162.158.154.35 16:37, 18 December 2024 (UTC)
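
Rough arithmetic behind that kind of extrapolation (the constants below are illustrative placeholders, not the poster's measurements): if mergesort costs about c·n·log2(n) seconds and the padding budget is K seconds per element, the pad runs out once log2(n) exceeds K/c, i.e. at n ≈ 2^(K/c), which is astronomically large for any generous budget.

    # Illustrative break-even estimate (c and K are made-up placeholders,
    # not the measurements mentioned above).
    c = 1e-8    # hypothetical: seconds of sorting work per element per log2(n) level
    K = 1e-5    # hypothetical: padded budget in seconds per element
    # Sorting time ~ c * n * log2(n); padded budget ~ K * n.
    # They cross when log2(n) = K / c, i.e. n = 2 ** (K / c).
    levels = K / c
    print(f"break-even once log2(n) > {levels:.0f}, i.e. n > 2**{levels:.0f} ≈ {2**levels:.2e}")
    # With K/c = 1000 this is ~1e301 elements -- far more items than atoms in
    # the observable universe, so the break-even is effectively never reached.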

If the numbers being sorted are unique, each would need a fair number of bits to store. (Fair meaning that the time to do the comparison would be non-negligible.) If they aren't, you can just bucket-sort them in linear time, since we're assuming absurdly large memory capacity anyway. 162.158.186.253 17:14, 18 December 2024 (UTC)
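
For completeness, a sketch of the linear-time approach mentioned here (an editorial aside; it assumes non-negative integer keys with a known, bounded maximum, which is exactly the "absurdly large memory" trade-off): a counting/bucket sort runs in O(n + range) time, linear in n whenever the key range is fixed.

    # Counting-sort sketch for the "just bucket-sort them" point above.
    # Assumes non-negative integer keys with a known maximum; runs in
    # O(n + max_value) time and uses O(max_value) extra memory.
    def counting_sort(values, max_value):
        counts = [0] * (max_value + 1)          # one bucket per possible key
        for v in values:
            counts[v] += 1
        result = []
        for key, count in enumerate(counts):    # emit each key as often as it appeared
            result.extend([key] * count)
        return result

    print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_value=9))
    # [1, 1, 2, 3, 4, 5, 6, 9]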