Talk:Heapsort
This level-5 vital article is rated B-class on Wikipedia's content assessment scale. It is of interest to WikiProject Computing (mid-importance) and WikiProject Computer science (high-importance).
Worst case performance
The worst case performance is currently listed as O((n log n)^n). I believe this may be an error and I am going to change it to O(n log n). If anyone wants to explain why I am wrong, please do so here. In fact, looking at the rest of the article, which lists the worst case as O(n log n) several times, I think that the previous worst case was either a typo or vandalism. -Uselesswarrior (talk) 17:28, 2 March 2009 (UTC)
C-code
The example C-code was tested and found to be flawed (Obviously, since in the heapify() routine, it says start=count%2 instead of count/2). The following test vector {2,5,3,4,1} returned {1,2,4,3,5}. Many other failing examples have been found. The problem seems to be in the build heap portion of the code, which sometimes fails to create a valid max-heap. When the sift_in function is replaced with a simpler sift_down function for building the heap, the code works.
Also, the comments of the sift_in function seem incorrect. The function first performs a sift_down and then a sift_up, unless you are viewing the heap upside down.
—The preceding unsigned comment was added by Rekinser (talk • contribs) 21:41, 8 December 2006 (UTC).
I concur. The initial heap build does not work. The build phase examines the first half of the data, in reverse order. With other algorithms, this only affects children that are to the right of the entry being processed. Hence the max heap property is progressively made true with right to left processing of all non-leaf entries.
Where the C code fails is that its "sift_in" will affect parent nodes of the initial "free" node being processed.
A bit of horrible patching on "sift_in" does make it work:
a) Change its parameter from "free" to be "free_in"
b) Add "unsigned free = free_in;" at the start
c) Modify the "while"'s "(i=free/2)" to say "((i=free/2)>=free_in)"
MESSY! Probably not optimal. Can anyone suggest a more sensible change? Lauwr 00:06, 23 January 2007 (UTC)
The C code does not include stdlib and I realize that "free" is a descriptive variable name, but wouldn't it still be good practice not to have a variable name that could so easily be confused with the stdlib function? 85.11.196.122 07:32, 23 January 2007 (UTC)
Some time has passed and the code on the page has not been corrected. I've also run into the C heapsort implementation's flaw. As I don't want another person to waste time (as I did), I will correct the main page with the change proposed by Lauwr. Sergio Demian Lerner. --24.232.216.146 20:39, 31 May 2007 (UTC)
This C code seems to work, it's an almost direct translation of the pseudocode on the page:
    void heapsort (int *, int);
    void heapify (int *, int);
    void siftdown (int *, int, int);

    void heapsort (int *arr, int len)
    {
        int end;
        int tmp;

        heapify (arr, len);
        end = len - 1;
        while (end > 0) {
            tmp = arr[0];
            arr[0] = arr[end];
            arr[end] = tmp;
            siftdown (arr, 0, --end);
        }
    }

    void heapify (int *arr, int len)
    {
        int start;

        for (start = len / 2 - 1; start >= 0; --start)
            siftdown (arr, start, len - 1);
    }

    void siftdown (int *arr, int start, int end)
    {
        int root, child, tmp;

        root = start;
        while (root * 2 + 1 <= end) {
            child = root * 2 + 1;
            if (child < end && arr[child] < arr[child + 1])
                ++child;
            if (arr[root] < arr[child]) {
                tmp = arr[root];
                arr[root] = arr[child];
                arr[child] = tmp;
                root = child;
            } else {
                return;
            }
        }
    }
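A quick way to check a translation like this is the test vector reported to fail earlier in this section; a minimal harness (a sketch, assuming it is compiled together with the code above):

    #include <stdio.h>

    void heapsort (int *, int);   /* from the translation above */

    int main (void)
    {
        int v[] = {2, 5, 3, 4, 1};    /* the vector that exposed the old bug */
        int i, n = sizeof v / sizeof v[0];

        heapsort (v, n);
        for (i = 0; i < n; i++)
            printf ("%d ", v[i]);     /* expected output: 1 2 3 4 5 */
        printf ("\n");
        return 0;
    }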
Numerical Recipes copyright
Is it acceptable to use code adapted from Numerical Recipes in C? I don't own a copy, so I can't quote it, but I am led to understand that it is very restrictive in its usage licence. — PhilHibbs | talk
- That's correct. The code can only be used on one screen if you purchase a single-screen license; distribution is prohibited. Deco 00:27, 27 Apr 2005 (UTC)
Pseudocode question
I'm fairly certain that the pseudocode in this article does not work on some arrays of odd length. That is, if length() is odd, then occasionally heapSort() produces an array that is one swap away from sorted. For example,
heapSort([1,2,3]) -> [1,3,2]
and
heapSort([2,1,3,5,4]) -> [1,2,4,3,5].
If, in heapSort, I replace the line
var int start := count / 2 - 1
with
var int start := count / 2
it seems to correct the bug. Have I missed something? --Quaternion 22:28, 18 Feb 2005 (UTC)
Some algorithms employ data sentinels to avoid checking for array bounds. They can simplify the algorithm and also speed up execution. For heapsort, a data sentinel at the end of the array would prevent the need to check for the special case of a node having just one child, which occurs when N is odd. Gj7 13:48, 23 June 2007 (UTC)
Oops, that should have been "which occurs when N is even." Gj7 13:55, 23 June 2007 (UTC)
- The pseudocode works for me. Make sure you are really using integer-based variables for the value of start, or take the floor value of count/2. For example, in Perl I said
      my $start = int($count / 2) - 1;
  --Dustball 02:36, 4 July 2006 (UTC)
- If you're using Java, try (count >>> 1) (right-shift floor). 76.64.22.234 02:45, 24 May 2007 (UTC)
Is it for some reason impractical to implement a quaternary heapsort?
- It's not impractical to implement a quaternary heapsort. In fact, I managed to implement an N-ary Heapsort algorithm in Java. — pguedes
I think there's a bug in the final line of the Python program. Should it not be called with the first element of the array, 1, instead of with 0?
To my mind O(N log N) is not approximately linear for large N, but I suppose it depends on your viewpoint. There are sort methods which are linear - e.g. pigeonhole sort for a densely populated set of integers - and these could be expected to run significantly faster.
However, both heapsort and quicksort are, for most practical purposes, amongst the fastest sorts. Note that scrambling the elements before doing a quicksort is often more effective - as often the elements are almost in order, in which case quicksort is known to be slow. This does not apply to heapsort.
- I agree with the O(N log N) statement, and eliminated the claim that it's about linear. Scrambling the elements isn't a really effective strategy for improving quicksort, since scrambling involves a lot of cache misses, making it a very slow operation. Deco 19:36, 4 Nov 2004 (UTC)
The article states that an in-place (O(1) auxiliary space) quicksort is possible. Is that true? Don't you need a stack to manage what parts of the array you still need to sort? --Ryan Stone 15:52, 4 Nov 2004 (UTC)
- Oh, I see it now. The data stored on the stack in quicksort is the pivot positions. It is quite possible to compute, rather than store, the pivot positions, in restricted implementations — for example, in a quicksort implementation which always uses the median. I'm not aware of any fast in-place quicksort variant though. Deco 19:31, 4 Nov 2004 (UTC)
There is something I've always wondered about heapsort. When you've created the heap and start to extract the largest element of it, you swap it with the element with the largest index in the heap to get the value into the right position (i.e. when you have just created the heap, the first thing you do is swap the 1st element with the Nth, then heapify, then you swap the 1st element with the (N-1)th and heapify, and so on). But if you do that you will most of the time get a very low value at the top of the heap (since the value was previously at the bottom of the heap), and since that value is pretty low the siftup will take longer. Why don't you instead implement the heap as a min-heap instead of a max-heap? Then the head would be in the right position directly (no need to swap) and you could make one of its children (the element just after it) the head and then heapify. The heapifying would take a shorter time on average, wouldn't it? I mean, it obviously wouldn't be asymptotically faster, but it would be faster, n'est-ce pas? Gkhan 01:21, Feb 14, 2005 (UTC)
- And oh yeah, I realize that the talk page isn't really the right forum for asking technical questions, but indulge me Gkhan 01:22, Feb 14, 2005 (UTC)
- Note: I removed my previous answer here -- I had the algorithm Gkhan was describing completely wrong
- Put briefly, the algorithm you describe isn't very effective at all -- asymptotically it's worse than Bubble sort. Actually, what you describe is an expensive version of Selection sort.
- Here's the reason why it's so poor: The heapifying operation takes O(n log n). What you're suggesting is that we heapify n elements, then n - 1 elements, ... until finally heapifying an array with 1 element (OK, we could skip that, but it doesn't make too much of a difference, and it does make the calculations below easier).
- The number of operations we do will thus be O(n log n) + O ( (n - 1) log (n - 1) ) + ... + O ( 1 log 1)
- So it's O ( sum(i = 1 to n) of i log i). I'm not sure if that can be simplified, but I do know that sum(i = 1 to n) of i is n(n + 1)/2. So your algorithm will be worse than O (n ^ 2).
- I don't think that on average, things will be much faster. Let me think about it and see if I can come up with an answer for that.--Ryan Stone 21:03, 18 Feb 2005 (UTC)
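For what it's worth, the sum above can be bounded below by keeping only its top half, which confirms the worse-than-O(n^2) claim (a quick sketch, not part of the original exchange):

    \sum_{i=1}^{n} i \log i \;\ge\; \sum_{i=n/2}^{n} i \log i
        \;\ge\; \frac{n}{2} \cdot \frac{n}{2} \log\frac{n}{2}
        \;=\; \Omega(n^2 \log n)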
- Well, I've thought about this a bit more, and what I realized was that even if Gkhan's algorithm were faster in the average case, it still probably couldn't beat quicksort. Heapsort's greatest advantages over quicksort are its guaranteed O (n log n) and that it's an in-place sort. If you're willing to accept poor performance on certain inputs anyway, you might as well go with quicksort.--Ryan Stone 02:08, 23 Feb 2005 (UTC)
pseudocode question #2
When getting the child, is multiplying root * 2 enough? I mean, root = 0, then child = 0? I'm thinking it should be ((root + 1) * 2) - 1
- For the sake of simplicity, the arrays are one-based. However, this should have been explicitly noted and now is. Thanks. Deco 02:01, 26 Jun 2005 (UTC)
- Oops, I was wrong here. It seems like other parts rely on it being zero-based. Hmm. Deco 28 June 2005 21:17 (UTC)
The comment that Heapsort is usually faster than Mergesort may be a bit dated.
On architectures with a (relatively) small CPU cache and large main memory (e.g. Pentiums), Heapsort is usually *much* slower than Mergesort, when the size of the array being sorted is significantly larger than that of the cache but less than half that of main memory, *because* there are so many cache misses (the penalty is appalling: on my home PC a Heapsort of, say, 500,000 32-bit integers takes about twice as long, and of 90,000,000, takes about 10 times as long, as Mergesort and Quicksort).
Heapsort should *not* be used for large N.
Re: the "sorting revisited" link: it just ain't so. By a stroke of luck, the v8 Intel C++ compiler optimizes binary Heapsort better than it optimizes Quicksort or Mergesort (if mergesort is coded in assembler to avoid branch mis-predictions it is slightly faster) (but I'll be damned if I can dream up an efficient way to do the same thing for Quicksort!). The analysis doesn't extend to *large* values of N, which is where Heapsort does very badly indeed. The Insertion Sort implementation used for the test cases is... non-standard and very inefficient... which knobbles both Quicksort and Mergesort. I recommend: *Pull* the link.
-James Barbetti
- This is interesting. I wasn't aware that heapsort had such terrible cache behaviour, but it makes sense considering its access pattern. I can add something regarding this. Deco 19:55, 11 July 2005 (UTC)
Pseudocode question #3
How does the following line of code translate to other languages?
var int root = start, child
How can two variables be assigned to a single variable? Can someone please explain how this works and perhaps split this line up into something that can carry over more easily into other languages?
Thank you -- Random Heapsort Investigator
- Presumably, it's like C, where int x, y; declares both x and y as integers. One can make an assignment as well, so int x = 2, y; declares both x and y as integers and sets x to be 2. Dysprosia 03:36, 15 August 2006 (UTC)
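A trivial, self-contained C illustration of that declaration style (the values here are hypothetical):

    #include <stdio.h>

    int main (void)
    {
        int start = 3;
        int root = start, child;   /* declares both; initializes only root */

        child = 2 * root + 1;      /* child is assigned later, as in the pseudocode */
        printf ("root=%d child=%d\n", root, child);
        return 0;
    }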
Pseudocode question #4
Is it true that both heapifys run in O(n)? The one that uses siftUp should perform
sum_{i=2}^{i=n} floor(log_2(i))
comparisons in the worst case (every new element is shifted all the way to the root). If I recall correctly, the previous summation is Θ(n log n). 193.120.148.177 20:50, 12 September 2007 (UTC)
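For the record, the sum also has a closed form (a standard identity, stated here without proof; starting the sum at i = 1 adds only a zero term), which makes the Θ(n log n) growth explicit:

    \sum_{i=1}^{n} \lfloor \log_2 i \rfloor
        \;=\; (n+1)\lfloor \log_2 n \rfloor \;-\; 2^{\lfloor \log_2 n \rfloor + 1} \;+\; 2
        \;=\; \Theta(n \log n)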
- Oops, I didn't notice that this issue had already been raised when I posted my comment under "Heapify approaches are equivalent?" below. Since we both questioned the statement independently, I'm going to take that as consensus and fix it. Peristarkawan 22:02, 28 September 2007 (UTC)
Quicksort not in-place
Isn't it inappropriate to state that an advantage of Heapsort over QuickSort is that Heapsort is in-place? After all, QuickSort seems to be easily done in-place as shown in the Quicksort article.
Quote:
Although somewhat slower in practice on most machines than a good implementation of quicksort, it has the advantages of worst-case O(n log n) runtime and being an in-place algorithm.
- Quicksort is not in place, as the article discusses at some length. Quicksort often uses an in-place partition, and has a space-saving tail-recursive variant, but even then it requires Ω(log n) space. The quote you gave is actually talking about heapsort, so it's an argument against your claim rather than for. Deco 00:30, 6 December 2005 (UTC)
is smoothsort the same as heapsort?
It seems smoothsort redirects to heapsort. Does this make sense? The table on sorting algorithm lists both heapsort and smoothsort as separate. --Apantomimehorse 21:21, 20 August 2006 (UTC)
I noticed the same thing. Have a look under "Variations" and they do talk about smoothsort though.
http://en.wikipedia.org/wiki/Smoothsort#Variations
Blakeops 06:40, 2 February 2007 (UTC)
- SelectionSort, HeapSort, and SmoothSort are all variations on the idea of "find the largest element in the unsorted part of the list and add it to the sorted part". Their main difference is in how they structure the unsorted part of the list: SelectionSort doesn't bother to structure it; HeapSort structures it as a heap; SmoothSort structures it in a way that doesn't have a name.
- As a result, HeapSort's performance is O(n log(n)) in all cases, whereas SelectionSort's performance is always O(n^2) and SmoothSort's performance is O(n) in the best case and O(n log(n)) in the worst case.
- While it's fair to say that SmoothSort was inspired by HeapSort, I think that it's as misleading to say that SmoothSort is a variation of HeapSort as it would be to say that SmoothSort is a variation of SelectionSort. There's no doubt that at a high enough level they have something in common but then so do all sorting algorithms. -- Derek Ross | Talk 18:14, 8 May 2008 (UTC)
correcting a heapsort reference
Chuck Bradley, bradley@tiac.net. The reference to Introduction to Algorithms is partly wrong. Chapter 6 is called Heapsort; it covers heaps, heapsort, and priority queues. Chapter 7 is Quicksort.
Two versions of the code?
The second version of the pseudocode gives a feeling of dissonance to this article. I think it's because of the way the intro to the second version complains slightly about the example before it. Can something be done to correct this? --Masamage 17:14, 7 December 2006 (UTC)
Pseudocode changes and clarifications
The strange thing about the above implementation is that it uses heapify-down operations to achieve what we really want to achieve using heapify-up operations. Imagine building the heap. As we add new elements, we want them to crawl up the heap. For the actual sorting, however, the standard implementation jibes with intuition. The following implementation jibes completely with intuition and is still O(n log n).
The two implementations are equivalent, except in the order in which they process data. The first one starts at the bottom and moves up while sifting down, while the second starts at the top and moves down while sifting up. Either way is acceptable and neither is strange to me. The only plus side to using the first piece of code is that you only need one sift function, other than that they are equivalent in execution time.
I am having trouble understanding some of the notation used in this function.
    function siftup(a, start) {
        var int child := start, root, remainder
        while child > 0 {
            remainder := (child - 1) % 2
            root := ((child - 1) - remainder) / 2
            if a[root] < a[child] {
                swap(a[root], a[child])
                child := root
            } else {
                return
            }
        }
    }
So I rewrote it. Most languages default to integer division operations when using integers, so unless start, child, and/or root are defined as floating point, they are unlikely to cause floating-point division in the Java or C languages. This algorithm is correct, but the remainder is (in this algorithm) equivalent to using the floor function.
I changed the original sift-down function to take the end as a parameter instead of count, because it may be difficult to understand that sift(a, 0, end) means to sift down all the way except for the last element in the heap as you are inputting end for count. I also changed the format to be in greater accordance with Wikipedia:Algorithms_on_Wikipedia. --ANONYMOUS COWARD0xC0DE 08:10, 18 December 2006 (UTC)
Heapsort Explanation
The article is pretty good at showing the logic behind heapsort. I'm going to attempt to explain this, then present an example. After the numbers or fields are loaded into an array, think of the heap as a corporation where each supervisor has one or two people reporting to them. The first number A(1) is the CEO. The next two numbers, A(2) and A(3) report to A(1). The next two numbers, A(4) and A(5) report to A(2), A(6) and A(7) report to A(3), A(8) and A(9) report to A(4). If there are an even number of entries, then the last supervisor only has one person reporting to them. Thus, for any supervisor J, there will be up to two employees, A(2J) and A(2J+1) reporting to them. Therefore, in a company of N employees, int(N/2) of those employees are supervisors.
We start off with the last supervisor, int(N/2)=J, and compare A(J) with his one or two subordinates, A(2J) and A(2J+1). If N is even, then the last supervisor only has one employee reporting to him, A(2J). Between the three people, the supervisor must have the highest numbers. Then we move backwards through the supervisors and compare each supervisor with his two employees. When we get to A(1), then A(1) will have the highest value. Then we switch the value of the CEO, A(1), with the last employee, A(N) so A(N) will have the highest number. Then A(N) will retire, so there are N - 1 employees left. We then recalculate the number of supervisors, which is int((N-1)/2), work backwards from the last supervisor in N-1 employees. Then when A(1) has the highest value, we switch A(1) with the last employee, which is now A(N-1) and A(N-1) retires. Now there are N-2 employees and int((N-2)/2) supervisors and continue these comparison until there is only one employee left. Here is the logic for that.
    LET N = NUMBER OF RECORDS
    LET J = N
    WHILE J > 1
        I = INT(J/2)               'Calculate number of supervisors
        While I > 0
            If A(I) < A(2I) then
                TEMP = A(I)        'Supervisor gets highest value.
                A(I) = A(2I)
                A(2I) = TEMP
            End If
            If A(I) < A(2I+1) and 2I+1 <= J then
                TEMP = A(I)        'Supervisor gets highest value.
                A(I) = A(2I+1)
                A(2I+1) = TEMP
            End If
            I = I - 1              'Go backward through the supervisors
        Loop
        TEMP = A(J)                'Switch CEO and last employee.
        A(J) = A(1)
        A(1) = TEMP
        J = J - 1                  'Retire last employee
    LOOP
    K = 1
    WHILE K <= N
        PRINT A(K)
        K = K + 1
    LOOP
--Trust101 07:22, 27 April 2007 (UTC)
Wikibook Quotation
Why not use the implementation suggested in the Wikibook Algorithm implementation? KorDen 16:27, 20 February 2007 (UTC)
heapsort variant and heapsort generalization
There is a relatively simple variation (exchanging two values instead of one during each Heapsort iteration) that saves N/2 comparisons and N/2 moves. The variation is described in the paper at http://www.eduneer.com/pub/dualheapsort.pdf, which also may be of interest to those reading the heapsort article.
Heapify approaches are equivalent?
[edit]"It can be shown that both variants of heapify run in O(n) time."
I'm fairly certain that this is incorrect. One half of the nodes in a heap are leaves. When using the siftUp variant, sifting up from each leaf node will take Θ(log n) time in the worst case, making the algorithm Ω(n log n). The asymptotic complexity of the siftDown variant is harder to demonstrate, but my recollection from school is that it is in fact O(n). – Peristarkawan 02:42, 28 September 2007 (UTC)
Code block templates
Hi all. A while ago I began an experiment with a new method for discouraging incorrect good-faith changes to code and pseudocode called "code block templates" on the articles quicksort and binary search algorithm. So far there haven't been any incorrect new changes to those code block templates, but very few changes have been made, so I'd like to expand the scope of this experiment to see how effective it is. This is one of the articles I'm targeting for this second phase. Please let me know on my talk page if you have any questions or concerns. Thanks! Dcoetzee 21:37, 7 December 2008 (UTC)
Bug in pseudocode?
If I read it right, then it doesn't sort the simple vector: {1,2}.
If I'm right, then it's because, after the swap (when popping elements), the sift operation still includes the popped element.
While it shouldn't.
79.178.112.120 (talk) 21:48, 10 July 2009 (UTC)
Sorting an almost sorted list
When you heap sort a list that is "almost sorted" or even completely sorted (like if you just added 1 element), heapsort does a complete full sort, recreating the entire heap and moving just about every element in the list.
Isn't that a big disadvantage to a sort method like even the basic bubble sort, which can do a 1 pass sort?
Shouldn't that be mentioned in the article? happypal (Talk | contribs) 08:39, 30 July 2009 (UTC)
Proof that heapify with siftDown is O(n) and not just O(n log n)
Each level k has 2^(k-1) elements (counting levels from 1 to log n).
In the worst-case scenario of a fully sorted array, each element has to be pushed all the way to the bottom.
So siftDown at level k will take (log n) - k operations, since the new parent is the smallest element, so it will sift down to the bottom.
So at each level k you have 2^(k-1) elements to be sifted down, each taking (log n) - k, or

    Sum_{k=1}^{log n} 2^(k-1) ((log n) - k)
      = Sum ( 2^(k-1) log n - 2^(k-1) k )
      = log n Sum 2^(k-1) - Sum k 2^(k-1)
      = log n (2^(log n) - 1) - ( 2 + 2^(log n) ((log n) - 1) )
      = log n (n - 1) - ( 2 + n ((log n) - 1) )      (replacing 2^(log n) with n)
      = n log n - log n - 2 - n log n + n
      = n - log n - 2 = O(n)

For comparison, heapify with siftUp takes O(n log n).
This time, each level k takes k - 1 operations to siftUp in the worst case.
Let's look at the last level, log n. It contains 2^((log n) - 1), or n/2, elements.
Pushing them up takes n/2 elements each pushing up (log n) - 1 levels, or, for the last level alone:

    (log n - 1) * n/2 = O(n log n)
Pla123 (talk) 00:49, 16 January 2010 (UTC)
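The linear bound above is easy to check empirically. A minimal C sketch (the names sift_down and steps are made up here) that builds a max-heap bottom-up over a sorted array (the worst case described above) and prints the steps-per-element ratio, which stays bounded as n grows:

    #include <stdio.h>
    #include <stdlib.h>

    static long steps;   /* counts levels descended across all siftDown calls */

    static void sift_down(int *a, int start, int end)
    {
        int root = start;
        while (2 * root + 1 <= end) {
            int child = 2 * root + 1;
            if (child < end && a[child] < a[child + 1])
                child++;                      /* pick the larger child */
            steps++;                          /* one comparison level */
            if (a[root] >= a[child])
                return;
            int t = a[root]; a[root] = a[child]; a[child] = t;
            root = child;                     /* keep sifting down */
        }
    }

    int main(void)
    {
        for (int n = 1 << 10; n <= 1 << 20; n <<= 2) {
            int *a = malloc(n * sizeof *a);
            for (int i = 0; i < n; i++)
                a[i] = i;                     /* sorted input: worst case for heapify */
            steps = 0;
            for (int s = n / 2 - 1; s >= 0; s--)
                sift_down(a, s, n - 1);       /* bottom-up heap construction */
            printf("n=%8d steps=%9ld steps/n=%.3f\n", n, steps, (double)steps / n);
            free(a);
        }
        return 0;
    }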
Pictures to Step through algorithm?
Hi all. I was wondering if we could get any pictures that step through the algorithm? The ones on Prim's algorithm and Kruskal's algorithm are particularly useful. I am more of a visual learner, and the pictures are 10x easier to understand than the pseudocode. Thoughts? Crabpot8 (talk) 17:45, 10 February 2009 (UTC)
- Maybe someone could use this: http://corte.si/posts/code/visualisingsorting/index.html 84.182.107.93 (talk) 16:49, 9 February 2010 (UTC)
Best Case Run Time
Can someone tell me an instance where the best case would be O(n)? I think this is an error. Epachamo (talk) 21:45, 15 October 2011 (UTC)
The best case is achieved when all elements are identical. — Preceding unsigned comment added by 89.0.35.62 (talk) 20:55, 25 January 2014 (UTC)
Is Heapsort a stable sort algorithm or not?
I think that heapsort - like every other sort algorithm - can be either stable or not. This depends in my opinion only on the concrete implementation of the sort algorithm.
Assume that the given array of elements is already sorted by the previous sort keys (SortKey_0, SortKey_1, ..., SortKey_I-1), so that (considering all the already processed sort keys) every element with a lower array index is lower than or equal to every element with a larger array index. This existing sort order (from all the already processed sort keys) should be disturbed (for the current SortKey_I) in both of the following two cases: when the value of an element with a lower array index (a parent) is greater than OR equal to the value of an element with a larger array index (one of its children). This really small change to the pseudocode (replacing operator ">" with operator ">=" in two places) should guarantee that, of all elements with the same value (considering the current SortKey_I), the element with the largest array index is the first to be swapped to the root and is therefore stored at the end of the array first. That means the already existing sort order (from all the already processed sort keys) remains unchanged whenever the values of two different elements are equal (considering the current SortKey_I). For this reason, the sort algorithm should now be stable. Is this correct?
Aragorn321 (talk) 11:12, 18 July 2012 (UTC)
- @Aragorn321: No, this doesn't work, although it's a common student question. The problem is that elements move, and after they move their order is no longer the same as their initial order. (You can easily do it by storing the initial position, and using that as part of the sort key, but a stable sort shouldn't need that.)
- Basically, two elements with the same value might have different parents. And then the element with the greater parent is promoted a level when some other element is sifted down. This can move it from after its twin to before in the array (higher in the heap).
- 71.41.210.146 (talk) 20:18, 26 December 2016 (UTC)
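To illustrate the parenthetical in the reply: decorating each element with its initial position and using it as a tie-breaker does force stability, at the cost of O(n) extra space. A C sketch (the type and helper names are invented for illustration):

    /* Sort (value, original position) pairs; ties on value are broken
       by position, so equal values keep their original relative order. */
    typedef struct { int value; int pos; } item;

    static int less(const item *x, const item *y)
    {
        if (x->value != y->value)
            return x->value < y->value;
        return x->pos < y->pos;               /* tie-break on original index */
    }

    static void sift_down(item *a, int start, int end)
    {
        int root = start;
        while (2 * root + 1 <= end) {
            int child = 2 * root + 1;
            if (child < end && less(&a[child], &a[child + 1]))
                child++;
            if (!less(&a[root], &a[child]))
                return;
            item t = a[root]; a[root] = a[child]; a[child] = t;
            root = child;
        }
    }

    static void stable_heapsort(item *a, int n)
    {
        for (int s = n / 2 - 1; s >= 0; s--)
            sift_down(a, s, n - 1);           /* build the max-heap */
        for (int end = n - 1; end > 0; end--) {
            item t = a[0]; a[0] = a[end]; a[end] = t;
            sift_down(a, 0, end - 1);         /* repair the shrunken heap */
        }
    }

Filling pos with each element's index before sorting makes the comparison a total order, so equal values come out in their original relative order; as the reply notes, a genuinely stable sort should not need the extra field.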
Merge Sort requires Ω(n) auxiliary space - really?
[edit]"Heapsort also competes with merge sort, which has the same time bounds, but requires Ω(n) auxiliary space, whereas heapsort requires only a constant amount."
This is NOT true, since merging can be done in-place in O(n) time. — Preceding unsigned comment added by Dhruvbird (talk • contribs) 02:19, 13 August 2012 (UTC)
- Do you have a reference that says a stable in-place merge takes O(n) time? References are really what WP requires. A stable out-of-place merge can be done in O(n) time. Stable merging in-place is a bit trickier -- what happens to displaced elements? Glrx (talk) 16:17, 14 August 2012 (UTC)
- @Glrx: The constant factor tends to be a problem in practice, but O(n) algorithms are well known. 71.41.210.146 (talk) 20:04, 26 December 2016 (UTC)
- Symvonis, Antonios (1995). "Optimal Stable Merging" (PDF). The Computer Journal. 38 (8): 681–690. CiteSeerX 10.1.1.40.862. doi:10.1093/comjnl/38.8.681.
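For contrast, the straightforward stable merge that costs the Ω(n) auxiliary space under discussion is only a few lines; a C sketch (taking from the left run on ties is what preserves stability):

    #include <string.h>

    /* Stably merge the sorted runs a[lo..mid) and a[mid..hi),
       using a caller-supplied scratch buffer of at least hi-lo ints. */
    static void stable_merge(int *a, int lo, int mid, int hi, int *buf)
    {
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi)                /* <= prefers the left run on ties */
            buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) buf[k++] = a[i++];
        while (j < hi)  buf[k++] = a[j++];
        memcpy(a + lo, buf, k * sizeof *a);      /* copy the merged run back */
    }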
"Now repeat after" me problem
I highly object to the code that is introduced in this wiki. Either write pseudo-code that illustrates the idea or write concise code where the idea is obvious. Writing code that is optimized in any way, which always means more difficult to read, is really not good. I have seen in many books, sorry to say, the same mess
    if a[swap] < a[child]
        swap ← child
    if child+1 ≤ end and a[swap] < a[child+1]
        swap ← child + 1
    if swap ≠ root
        swap(a[root], a[swap])
        root ← swap
        (: repeat to continue sifting down the child now :)
    else
        return
where the special case child+1 ≤ end is mixed with everything else. There is no reason for sift-up and sift-down to look this different in code, while they are the same basic idea. I know that one can object and say that in most literature this is how this algorithm is presented; I am just saying that it is better, then, to leave just the idea in pseudo-code with a reference to the literature than to repeat one option. The problem is that, if you present the subject this way, as it is here and in other places for other algorithms, you make an already subtle subject even more complicated for no reason. (I truly do not understand why everything I can read is cleared and brought to higher and higher levels of abstraction, precision and simplification everywhere, except in programming. The solution that was taught 30 years ago is the same as today down to the last line. Mind-boggling for me.) — Preceding unsigned comment added by Aperisic (talk • contribs) 16:35, 18 February 2014 (UTC)
Algorithm issues - pseudocode does not work
duplicate keys?
none in any code I find, but there certainly are in "the real world"
if both children are >= "swap"
[terrible name - same as procedure in pseudocode - could be "swapHere"]
    if a[swap] < a[child]
        swap ← child
    if child+1 ≤ end and a[swap] < a[child+1]
        swap ← child + 1
How is the fact that the first assignment may be overwritten acceptable?
problem confirmed - implemented first block of pseudocode - fails to sort array — Preceding unsigned comment added by 68.183.37.136 (talk • contribs) 03:26, 4 June 2014
- Look at the code again. The assignment is not "overwritten" / thrown away in the sense you imply. The heap must promote the largest element to the parental position. The index value swap starts out as the parent index. If the left child is greater, then swap is set to the child. If there is a right child and the right child is greater than what swap currently points to (the larger of parent and left child), then swap is set to the right child. The result is that swap is the index of the largest of the parent and its two children. Then that index and the index of the parent are swapped to promote the largest of the three values to the parental position. Glrx (talk) 01:06, 13 June 2014 (UTC)
Non-Optimal Algorithm Used
172.56.32.126 (talk) 04:53, 26 January 2015 (UTC) 2015-01-25: I think this page has an interesting issue. Instead of heapify putting the max element on the left of the array, I don't see why we cannot just put the max element on the right instead. Doing so will save us around N swaps right away.
- In an O(n log(n)) sorting algorithm, saving n swaps does not have an impact. Reliable sources put the root (the heap max) on the left, so we don't get to say different: rewriting the algorithm would be original research. Furthermore, putting the root on the right to avoid the copy means the entire heap would need to shift to the left (or a different heap structure would be needed). Glrx (talk) 16:50, 31 January 2015 (UTC)
Pseudocode question #5
In this snippet of pseudocode
    procedure heapify(a, count) is
        (start is assigned the index in 'a' of the last parent node)
        (the last element in a 0-based array is at index count-1; find the parent of that element)
        start ← floor((count - 2) / 2)
Is the (count-2) correct? The previous line says it should be (count-1), shouldn't it? — Preceding unsigned comment added by A.s.serov (talk • contribs) 11:44, 23 June 2015 (UTC)
- iParent(i-1). Glrx (talk) 01:26, 27 June 2015 (UTC)
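Spelling out why the two lines agree: the parent of the element at index count - 1 is floor(((count - 1) - 1) / 2), which is exactly floor((count - 2) / 2). In C (a sketch; iParent as defined in the article's pseudocode):

    /* 0-based heap index arithmetic, as in the article's iParent. */
    static int iParent(int i)
    {
        return (i - 1) / 2;      /* floor((i-1)/2) for i >= 1 */
    }

    /* The last element sits at index count-1, so the last parent is
       iParent(count - 1) == ((count - 1) - 1) / 2 == (count - 2) / 2.  */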
Request to change the algorithm
The following code of heapify() with siftDown() is very complex, and it isn't explained why it works or why it's correct. (For example, I even believe it's wrong.) A much simpler version of heapify() using siftUp() must be used as the basic code. Then the siftDown() version can be presented as a slightly better alternative.
    (Put elements of 'a' in heap order, in-place)
    procedure heapify(a, count) is
        (start is assigned the index in 'a' of the last parent node)
        (the last element in a 0-based array is at index count-1; find the parent of that element)
        start ← iParent(count-1)

        while start ≥ 0 do
            (sift down the node at index 'start' to the proper place such that all nodes below
             the start index are in heap order)
            siftDown(a, start, count - 1)
            (go to the next parent node)
            start ← start - 1
        (after sifting down the root all nodes/elements are in heap order)
Zyavrik (talk) 02:17, 12 May 2019 (UTC)
Contentious issue about siftDown function
I had made some changes in the siftDown function which seemed to correct a mistake in my view (this was my first ever edit). The change I made was reverted twice, so I think it should be discussed here.
My original change was made on November 17th at 3:51; I included the following message:
There is a mistake in the siftDown pseudo-code: the last assignment has to be "swap = root", not "root = swap" - this causes the method to halt. Check https://github.com/pat-jpnk/algorithms/blob/main/sorting/heapsort.py for a working example
Am I wrong?
Pat-jpnk (talk) 12:12, 11 December 2021 (UTC)
- @Pat-jpnk: I was notified of this by another editor. The pseudocode on this article is correct. The error is in your implementation, of which there are a few issues. Firstly, you're missing the heapify function. In its place you have a recursive perculate function that looks like it's doing a simple bubble sort as max_heap iterates over the array it is invoked with. Next you're feeding the result of the bubble sort into heap sort; however, you have two mistakes in your sift_down function. On line 55, where you're doing the calculation ((2 * start) + 1 <= end), you should instead be doing ((2 * root) + 1 <= end). The reason for this is that at each of those lines you want to get the left child of whatever the current value of root is, whereas your code gets the left child of whatever the current value of start is. This is more obvious when you put that calculation off into a function, as in the pseudocode example. Once you fix those lines, line 70 should read root = swap and your code should execute properly. Sideswipe9th (talk) 22:47, 13 December 2021 (UTC)
- @Sideswipe9th: Thanks for your input!
O(N^2) heapsort code using siftUp
I think the version of heapsort using siftUp needs to be fixed or removed. As currently written, heapsort using siftUp is an O(N^2) algorithm at best because of the repeated calls to heapify, which is itself O(N log(N)). Currently it's presented as an alternative O(N log(N)) heapsort implementation, and that appears to be a mistake. The writing also gives the impression that this is Williams' original algorithm, but I believe that although his original algorithm used siftUp in heapify, it still made use of siftDown in the heapsort procedure itself. So this version is neither historically relevant nor performant enough to warrant inclusion in the article. I've copied and pasted the current code below for reference.
    procedure heapify(a, count) is
        (end is assigned the index of the first (left) child of the root)
        end := 1

        while end < count
            (sift up the node at index end to the proper place such that all nodes above
             the end index are in heap order)
            siftUp(a, 0, end)
            end := end + 1
        (after sifting up the last node all nodes are in heap order)
    procedure heapsort(a, count) is
        input: an unordered array a of length count

        (Build the heap in array a so that largest value is at the root)
        heapify(a, count)

        (The following loop maintains the invariants that a[0:end] is a heap and every element
         beyond end is greater than everything before it (so a[end:count] is in sorted order))
        end ← count - 1
        while end > 0 do
            (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
            swap(a[end], a[0])
            (rebuild the heap using siftUp after the swap ruins the heap property)
            heapify(a, end)
            (reduce the heap size by one)
            end ← end - 1

— Preceding unsigned comment added by Mithrandir314 (talk • contribs) 01:50, 10 March 2022 (UTC)
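For reference, the usual O(N log(N)) structure repairs each swap with a single siftDown from the root rather than a full heapify; a C sketch, assuming heapify and sift_down behave as in the article's down-sifting pseudocode:

    void heapify(int *a, int count);               /* build once: O(N) */
    void sift_down(int *a, int start, int end);    /* O(log N) repair  */

    void heapsort(int *a, int count)
    {
        heapify(a, count);                         /* not inside the loop */
        for (int end = count - 1; end > 0; end--) {
            int t = a[0]; a[0] = a[end]; a[end] = t;
            sift_down(a, 0, end - 1);              /* one repair per extraction */
        }
    }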
siftDown in Pseudocode is incorrect?
swap ← root will overwrite swap ← child or swap ← child + 1, possibly looping infinitely. swap ← root should move out of the while loop?
Comments in pseudocode?
I find that using brackets for comments is quite unreadable, especially since brackets are already being used for function calls. Permission to change the format?
Probably something like this:
(this is a comment)
to this:
// this is a comment
COArSe D1RTxxx (talk) 22:30, 27 September 2023 (UTC)
- Adding whitespace and color would be more helpful, I think. Compare:
    while end > 0 do
        (a[0] is the root and largest value.)
        swap(a[end], a[0])
        (the heap size is reduced by one)
        end ← end - 1
        (the swap ruined the heap property, so restore it)
        siftDown(a, 0, end)
with
    while end > 0 do
        (a[0] is the root and largest value.)
        swap(a[end], a[0])
        (the heap size is reduced by one)
        end ← end - 1
        (the swap ruined the heap property, so restore it)
        siftDown(a, 0, end)

(in the second version the comments are rendered in a muted color via the {{color}} template, which plain text cannot show)