Talk:Heapsort: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 03:17, 4 June 2014

WikiProject Computing: rated B-class, Mid-importance on the project's assessment scales.
WikiProject Computer science: rated B-class, High-importance on the project's assessment scales.

Worst case performance

The worst case performance is currently listed as O((n log n)^n). I believe this may be an error, and I am going to change it to O(n log n). If anyone wants to explain why I am wrong, please do so here. In fact, looking at the rest of the article, which lists the worst case as O(n log n) several times, I think that the previous worst case was either a typo or vandalism. -Uselesswarrior (talk) 17:28, 2 March 2009 (UTC)[reply]

C-code

The example C-code was tested and found to be flawed (Obviously, since in the heapify() routine, it says start=count%2 instead of count/2). The following test vector {2,5,3,4,1} returned {1,2,4,3,5}. Many other failing examples have been found. The problem seems to be in the build heap portion of the code, which sometimes fails to create a valid max-heap. When the sift_in function is replaced with a simpler sift_down function for building the heap, the code works.

Also, the comments of the sift_in function seem incorrect. The function first performs a sift_down and then a sift_up, unless you are viewing the heap upside down.

—The preceding unsigned comment was added by Rekinser (talkcontribs) 21:41, 8 December 2006 (UTC).[reply]

I concur. The initial heap build does not work. The build phase examines the first half of the data, in reverse order. With other algorithms, this only affects children that are to the right of the entry being processed. Hence the max heap property is progressively made true with right to left processing of all non-leaf entries.

Where the C code fails is that its "sift_in" will affect parent nodes of the initial "free" node being processed.

A bit of horrible patching on "sift_in" does make it work:
a) Change its parameter from "free" to be "free_in"
b) Add "unsigned free = free_in;" at the start
c) Modify the "while"'s "(i=free/2)" to say "((i=free/2)>=free_in)"

MESSY! Probably not optimal. Can anyone suggest a more sensible change? Lauwr 00:06, 23 January 2007 (UTC)[reply]

The C code does not include stdlib and I realize that "free" is a descriptive variable name, but wouldn't it still be good practice not to have a variable name that could so easily be confused with the stdlib function? 85.11.196.122 07:32, 23 January 2007 (UTC)[reply]

Some time has passed and the code on the page has not been corrected. I've also run into the C heapsort implementation flaw. As I don't want another person to waste time (as I did), I will correct the main page with the change proposed by Lauwr. Sergio Demian Lerner. --24.232.216.146 20:39, 31 May 2007 (UTC)[reply]

This C code seems to work, it's an almost direct translation of the pseudocode on the page:

       void heapsort (int *, int);
       void heapify (int *, int);
       void siftdown (int *, int, int);
       void heapsort (int *arr, int len) {
               int end;
               int tmp;
               heapify (arr, len);
               end = len - 1;
               while (end > 0) {
                       tmp = arr[0];
                       arr[0] = arr[end];
                       arr[end] = tmp;
                       siftdown (arr, 0, --end);
               }
       }
       void heapify (int *arr, int len) {
               int start;
               for (start = len / 2 - 1; start >= 0; --start)
                       siftdown (arr, start, len - 1);
       }
       void siftdown (int *arr, int start, int end) {
               int root, child, tmp;
               root = start;
               while (root * 2 + 1 <= end) {
                       child = root * 2 + 1;
                       if (child < end && arr[child] < arr[child + 1])
                               ++child;
                       if (arr[root] < arr[child]) {
                               tmp = arr[root];
                               arr[root] = arr[child];
                               arr[child] = tmp;
                               root = child;
                       } else {
                               return;
                       }
               }
       }

Is it acceptable to use code adapted from Numerical Recipes in C? I don't own a copy, so I can't quote it, but I am led to understand that it is very restrictive in its usage licence. PhilHibbs | talk

That's correct. The code can only be used on one screen if you purchase a single-screen license; distribution is prohibited. Deco 00:27, 27 Apr 2005 (UTC)

Pseudocode question

I'm fairly certain that the pseudocode in this article does not work on some arrays of odd length. That is, if length() is odd, then occasionally heapSort() produces an array that is one swap away from sorted. For example,

 heapSort([1,2,3]) -> [1,3,2]

and

 heapSort([2,1,3,5,4]) -> [1,2,4,3,5].

If, in heapSort, I replace the line

   var int start := count / 2 - 1

with

   var int start := count / 2

it seems to correct the bug. Have I missed something? --Quaternion 22:28, 18 Feb 2005 (UTC)

Some algorithms employ data sentinels to avoid checking for array bounds. They can simplify the algorithm and also speed up execution. For heapsort, a data sentinel at the end of the array would prevent the need to check for the special case of a node having just one child, which occurs when N is odd.Gj7 13:48, 23 June 2007 (UTC)[reply]

Oops, that should have been "which occurs when N is even."Gj7 13:55, 23 June 2007 (UTC)[reply]



Is it for some reason impractical to implement a quaternary heapsort?

It's not impractical to implement a quaternary heapsort. In fact, I managed to implement an N-ary Heapsort algorithm in Java. — pguedes

I think there's a bug in the final line of the Python program. Should it not be called with the first element of the array, 1, instead of with 0?


To my mind O(N log N) is not approximately linear for large N, but I suppose it depends on your viewpoint. There are sort methods which are linear - e.g. pigeonhole sort for a densely populated set of integers - and these could be expected to run significantly faster.

However, both heapsort and quicksort are, for most practical purposes, amongst the fastest sorts. Note that scrambling the elements before doing a quicksort is often more effective - as often the elements are almost in order, in which case quicksort is known to be slow. This does not apply to heapsort.

I agree with the O(N log N) statement, and eliminated the claim that it's about linear. Scrambling the elements isn't a really effective strategy for improving quicksort, since scrambling involves a lot of cache misses, making it a very slow operation. Deco 19:36, 4 Nov 2004 (UTC)

The article states that an in-place (O(1) auxiliary space) quicksort is possible. Is that true? Don't you need a stack to manage what parts of the array you still need to sort? --Ryan Stone 15:52, 4 Nov 2004 (UTC)

Oh, I see it now. The data stored on the stack in quicksort is the pivot positions. It is quite possible to compute, rather than store, the pivot positions, in restricted implementations — for example, in a quicksort implementation which always uses the median. I'm not aware of any fast in-place quicksort variant though. Deco 19:31, 4 Nov 2004 (UTC)

There is something I've always wondered about heapsort. When you've created the heap and start to extract the largest element of it, you swap it with the element with the largest index in the heap to get the value in the right position (i.e. when you have just created the heap, the first thing you do is to swap the 1st element with the Nth, then heapify, then you swap the 1st element with the (N-1)th and heapify, and so on). But if you do that you will most of the time get a very low value at the top of the heap (since the value was previously at the bottom of the heap), and since that value is pretty low the siftup will take longer. Why don't you instead implement the heap as a min-heap instead of a max-heap? Then the head would be in the right position directly (no need to swap) and you could make one of its children (the element just after it) the head and then heapify. The heapifying would take a shorter time on average, wouldn't it? I mean, it obviously wouldn't be asymptotically faster, but it would be faster, n'est-ce pas? Gkhan 01:21, Feb 14, 2005 (UTC)

And oh yeah, I realize that the talk-page isn't really the right forum for asking technical questions, but indulge me Gkhan 01:22, Feb 14, 2005 (UTC)
Note: I removed my previous answer here -- I had the algorithm Gkhan was describing completely wrong
Put briefly, the algorithm you describe isn't very effective at all -- asymptotically it's worse than Bubble sort. Actually, what you describe is an expensive version of Selection sort.
Here's the reason why it's so poor: the heapifying operation takes O(n log n). What you're suggesting is that we heapify n elements, then n - 1 elements, ... until finally heapifying an array with 1 element (ok, we could skip that, but it doesn't make too much of a difference, and it does make the calculations below easier).
The number of operations we do will thus be O(n log n) + O((n - 1) log (n - 1)) + ... + O(1 log 1).
So it's O(sum(i = 1 to n) of i log i). I'm not sure if that can be simplified, but I do know that sum(i = 1 to n) of i is n(n + 1)/2. So your algorithm will be worse than O(n^2).
I don't think that on average, things will be much faster. Let me think about it and see if I can come up with an answer for that.--Ryan Stone 21:03, 18 Feb 2005 (UTC)
Well, I've thought about this a bit more, and what I realized was that even if Gkhan's algorithm was faster in the average case, it still probably couldn't beat quicksort. Heapsort's greatest advantages over quicksort are its guaranteed O(n log n) runtime and that it's an in-place sort. If you're willing to accept poor performance on certain inputs anyway, you might as well go with quicksort.--Ryan Stone 02:08, 23 Feb 2005 (UTC)

pseudocode question #2

When getting the child, is multiplying root * 2 enough? I mean, root = 0, then child = 0? I'm thinking it should be ((root + 1) * 2) - 1

For the sake of simplicity, the arrays are one-based. However, this should have been explicitly noted and now is. Thanks. Deco 02:01, 26 Jun 2005 (UTC)
Oops, I was wrong here. It seems like other parts rely on it being zero-based. Hmm. Deco 28 June 2005 21:17 (UTC)

The comment that Heapsort is usually faster than Mergesort may be a bit dated.

On architectures with a (relatively) small CPU cache and large main memory (e.g. Pentiums), Heapsort is usually *much* slower than Mergesort, when the size of the array being sorted is significantly larger than that of the cache but less than half that of main memory, *because* there are so many cache misses (the penalty is appalling: on my home PC a Heapsort of, say, 500,000 32-bit integers takes about twice as long, and of 90,000,000, takes about 10 times as long, as Mergesort and Quicksort).

Heapsort should *not* be used for large N.

Re: the "sorting revisited" link: it just ain't so. By a stroke of luck, the v8 Intel C++ compiler optimizes binary Heapsort better than it optimizes Quicksort or Mergesort (if mergesort is coded in assembler to avoid branch mis-predictions it is slightly faster) (but I'll be damned if I can dream up an efficient way to do the same thing for Quicksort!). The analysis doesn't extend to *large* values of N, which is where Heapsort does very badly indeed. The Insertion Sort implementation used for the test cases is... non-standard and very inefficient... which knobbles both Quicksort and Mergesort. I recommend: *Pull* the link.

-James Barbetti

This is interesting. I wasn't aware that heapsort had such terrible cache behaviour, but it makes sense considering its access pattern. I can add something regarding this. Deco 19:55, 11 July 2005 (UTC)[reply]

Pseudocode question #3

How does the following line of code translate to other languages?

var int root = start, child

How can two variables be assigned to a single variable? Can someone please explain how this works and perhaps split this line up into something that can carry over more easily into other languages?

Thank you -- Random Heapsort Investigator

Presumably, it's like C, where int x, y; declares both x and y as integers. One can make an assignment as well, so int x = 2, y; declares both x and y as integers and sets x to be 2. Dysprosia 03:36, 15 August 2006 (UTC)[reply]

Pseudocode question #4

Is it true that both heapifys run in O(n)? The one that uses siftUp should perform

sum_{i=2}^{i=n} floor(log_2(i))

comparisons in the worst case (every new element is sifted all the way to the root). If I recall correctly, the previous summation is Θ(n log n). 193.120.148.177 20:50, 12 September 2007 (UTC)[reply]

Oops, I didn't notice that this issue had already been raised when I posted my comment under "Heapify approaches are equivalent?" below. Since we both questioned the statement independently, I'm going to take that as consensus and fix it. Peristarkawan 22:02, 28 September 2007 (UTC)[reply]

Quicksort not in-place

Isn't it inappropriate to state that an advantage of Heapsort over QuickSort is that Heapsort is in-place? After all, QuickSort seems to be easily done in-place as shown in the Quicksort article.

Quote:

Although somewhat slower in practice on most machines than a good implementation of quicksort, it has the advantages of worst-case O(n log n) runtime and being an in-place algorithm.

Quicksort is not in place, as the article discusses at some length. Quicksort often uses an in-place partition, and has a space-saving tail-recursive variant, but even then it requires Ω(log n) space. The quote you gave is actually talking about heapsort, so it's an argument against your claim rather than for. Deco 00:30, 6 December 2005 (UTC)[reply]

is smoothsort the same as heapsort?

It seems smoothsort redirects to heapsort. Does this make sense? The table on sorting algorithm lists both heapsort and smoothsort as separate. --Apantomimehorse 21:21, 20 August 2006 (UTC)[reply]


I noticed the same thing. Have a look under "Variations" and they do talk about smoothsort though.

http://en.wikipedia.org/wiki/Smoothsort#Variations

Blakeops 06:40, 2 February 2007 (UTC)[reply]

SelectionSort, HeapSort, and SmoothSort are all variations on the idea of "find the largest element in the unsorted part of the list and add it to the sorted part". Their main difference is in how they structure the unsorted part of the list: SelectionSort doesn't bother to structure it; HeapSort structures it as a heap; SmoothSort structures it in a way that doesn't have a name.
As a result, HeapSort's performance is O(n log n) in all cases, whereas SelectionSort's performance is always O(n^2) and SmoothSort's performance is O(n) in the best case and O(n log n) in the worst case.
While it's fair to say that SmoothSort was inspired by HeapSort, I think that it's as misleading to say that SmoothSort is a variation of HeapSort as it would be to say that SmoothSort is a variation of SelectionSort. There's no doubt that at a high enough level they have something in common but then so do all sorting algorithms. -- Derek Ross | Talk 18:14, 8 May 2008 (UTC)[reply]

correcting a heapsort reference

Chuck Bradley, bradley@tiac.net. The reference to Introduction to Algorithms is partly wrong. Chapter 6 is called Heapsort; it covers heaps, heapsort, and priority queues. Chapter 7 is Quicksort.

Two versions of the code?

The second version of the pseudocode gives a feeling of dissonance to this article. I think it's because of the way the intro to the second version complains slightly about the example before it. Can something be done to correct this? --Masamage 17:14, 7 December 2006 (UTC)[reply]

Pseudocode changes and clarifications

 The strange thing about the above implementation is that it uses heapify-down operations
 to achieve what we really want to achieve using heapify-up operations.  Imagine building the
 heap.  As we add new elements, we want them to crawl up the heap.  For the actual sorting,
 however, the standard implementation jibes with intuition.  The following implementation
 jibes completely with intuition and is still O(n log n).

The two implementations are equivalent, except in the order in which they process data. The first one starts at the bottom and moves up while sifting down, while the second starts at the top and moves down while sifting up. Either way is acceptable and neither is strange to me. The only plus side to using the first piece of code is that you only need one sift function, other than that they are equivalent in execution time.

I am having trouble understanding some of the notation used in this function.

 function siftup(a, start) {
     var int child := start, root, remainder

     while child > 0 {
         remainder := (child - 1) % 2
         root := ((child - 1) - remainder) / 2 
         if a[root] < a[child]
             swap(a[root], a[child])
             child := root
         else
             return
     }
 }

So I rewrote it. Most languages default to integer division operations when using integers. So unless start, child, and/or root are defined as floating point, they are unlikely to cause floating-point division in Java or C. This algorithm is correct, but the remainder is (in this algorithm) equivalent to using the floor function.

I changed the original sift-down function to take the end as a parameter instead of count, because it may be difficult to understand that sift(a, 0, end) means to sift down all the way except for the last element in the heap when you are inputting end for count. I also changed the format to be in greater accordance with Wikipedia:Algorithms_on_Wikipedia. --ANONYMOUS COWARD0xC0DE 08:10, 18 December 2006 (UTC)[reply]

Heapsort Explanation

The article is pretty good at showing the logic behind heapsort. I'm going to attempt to explain this, then present an example. After the numbers or fields are loaded into an array, think of the heap as a corporation where each supervisor has one or two people reporting to them. The first number A(1) is the CEO. The next two numbers, A(2) and A(3) report to A(1). The next two numbers, A(4) and A(5) report to A(2), A(6) and A(7) report to A(3), A(8) and A(9) report to A(4). If there are an even number of entries, then the last supervisor only has one person reporting to them. Thus, for any supervisor J, there will be up to two employees, A(2J) and A(2J+1) reporting to them. Therefore, in a company of N employees, int(N/2) of those employees are supervisors.

We start off with the last supervisor, int(N/2)=J, and compare A(J) with his one or two subordinates, A(2J) and A(2J+1). If N is even, then the last supervisor only has one employee reporting to him, A(2J). Between the three people, the supervisor must have the highest numbers. Then we move backwards through the supervisors and compare each supervisor with his two employees. When we get to A(1), then A(1) will have the highest value. Then we switch the value of the CEO, A(1), with the last employee, A(N) so A(N) will have the highest number. Then A(N) will retire, so there are N - 1 employees left. We then recalculate the number of supervisors, which is int((N-1)/2), work backwards from the last supervisor in N-1 employees. Then when A(1) has the highest value, we switch A(1) with the last employee, which is now A(N-1) and A(N-1) retires. Now there are N-2 employees and int((N-2)/2) supervisors and continue these comparison until there is only one employee left. Here is the logic for that.

LET N = NUMBER OF RECORDS
LET J = N
WHILE J > 1
 I = INT(J/2)  'Calculate number of supervisors
 While I > 0
   If A(I) < A(2I) then
      TEMP=A(I)   'Supervisor gets highest value.
      A(I)=A(2I)
      A(2I)=TEMP
   End If
   If 2I+1 <= J and A(I) < A(2I+1) then   'Check bounds before reading A(2I+1).
      TEMP=A(I)   'Supervisor gets highest value.
      A(I)=A(2I+1)
      A(2I+1)=TEMP
   End If
   I = I - 1  'Go backward through the supervisors
 Loop
 TEMP=A(J)    'Switch CEO and last employee.
 A(J)=A(1)
 A(1)=TEMP
 J = J - 1    'Retire last employee 
LOOP
K=1
WHILE K <= N
 PRINT A(K)
 K = K + 1
LOOP
      

--Trust101 07:22, 27 April 2007 (UTC)[reply]

Wikibook Quotation

Why not use the implementation suggested in the Wikibook Algorithm implementation? KorDen 16:27, 20 February 2007 (UTC)[reply]

heapsort variant and heapsort generalization

There is a relatively simple variation (exchanging two values instead of one during each Heapsort iteration) that saves N/2 comparisons and N/2 moves. The variation is described in the paper at http://www.eduneer.com/pub/dualheapsort.pdf, which also may be of interest to those reading the heapsort article.

Gj7 16:21, 20 June 2007 (UTC)[reply]

Heapify approaches are equivalent?

"It can be shown that both variants of heapify run in O(n) time."

I'm fairly certain that this is incorrect. One half of the nodes in a heap are leaves. When using the siftUp variant, sifting up from each leaf node will take Θ(log n) time in the worst case, making the algorithm Ω(n log n). The asymptotic complexity of the siftDown variant is harder to demonstrate, but my recollection from school is that it is in fact O(n). – Peristarkawan 02:42, 28 September 2007 (UTC)[reply]

Code block templates

Hi all. A while ago I began an experiment with a new method for discouraging incorrect good-faith changes to code and pseudocode called "code block templates" on the articles quicksort and binary search algorithm. So far there haven't been any incorrect new changes to those code block templates, but very few changes have been made, so I'd like to expand the scope of this experiment to see how effective it is. This is one of the articles I'm targeting for this second phase. Please let me know on my talk page if you have any questions or concerns. Thanks! Dcoetzee 21:37, 7 December 2008 (UTC)[reply]

Bug in pseudocode?

If I read it right, then it doesn't sort the simple vector {1,2}.
If I'm right, it's because after the swap (when popping elements), the sift operation still includes the popped element,
while it shouldn't.

79.178.112.120 (talk) 21:48, 10 July 2009 (UTC)[reply]

Sorting an almost sorted list

When you heapsort a list that is "almost sorted" or even completely sorted (say you just added one element), heapsort does a complete sort, recreating the entire heap and moving just about every element in the list.

Isn't that a big disadvantage compared to a sort method like even the basic bubble sort, which can do a one-pass sort?

Shouldn't that be mentioned in the article? happypal (Talk | contribs) 08:39, 30 July 2009 (UTC)[reply]

Proof that heapify with siftDown is O(n) and not just O(n log n)

Each level k has 2^(k-1) elements (counting levels from 1 to log n).
In the worst case scenario of a fully sorted array, each element has to be pushed to the end.
So siftDown at level k will take (log n) - k operations, since the new parent is the smallest element, so it will sift down to the bottom.
So at each level k you have 2^(k-1) elements to be sifted down, each taking (log n) - k, or
Sum[k = 1 to log n] 2^(k-1) (log n - k) =
S(2^(k-1) log n - k 2^(k-1)) =
log n S(2^(k-1)) - S(k 2^(k-1)) =
log n (2^(log n) - 1) - ((log n - 1) 2^(log n) + 1) =     // using S(k 2^(k-1)) = (m - 1) 2^m + 1 with m = log n
log n (n - 1) - (n (log n - 1) + 1) =     // replaced 2^(log n) with n
n log n - log n - n log n + n - 1 =
n - log n - 1 = O(n)

For comparison, heapify with siftUp takes O(n log n).
This time, each level k takes k - 1 operations to siftUp in the worst case.
Let's look at the last level, log n. It contains 2^(log n - 1) or n/2 elements.
Pushing them up takes n/2 elements each moving up log n - 1 levels, or in total for the last level alone:
(log n - 1) * n/2 = O(n log n)

Pla123 (talk) 00:49, 16 January 2010 (UTC)[reply]

Pictures to Step through algorithm?

Hi all. I was wondering if we could get any pictures that step through the algorithm? The ones on Prim's algorithm and Kruskal's algorithm are particularly useful. I am more of a visual learner, and the pictures are 10x easier to understand than the pseudocode. Thoughts? Crabpot8 (talk) 17:45, 10 February 2009 (UTC)[reply]

Maybe someone could use this: http://corte.si/posts/code/visualisingsorting/index.html 84.182.107.93 (talk) 16:49, 9 February 2010 (UTC)[reply]

Best Case Run Time

Can someone tell me an instance where the best case would be O(n)? I think this is an error. Epachamo (talk) 21:45, 15 October 2011 (UTC)[reply]

The best case is achieved when all elements are identical. — Preceding unsigned comment added by 89.0.35.62 (talk) 20:55, 25 January 2014 (UTC)[reply]

Is Heapsort a stable sort algorithm or not ?

I think that heapsort - like every other sort algorithm - can be either stable or not. In my opinion this depends only on the concrete implementation of the sort algorithm.

Assume that the given array of elements is already sorted by previous sort keys (SortKey_0, SortKey_1, ..., SortKey_I-1), so that (considering all the already processed sort keys) all elements with a lower array index are lower than or equal to all elements with a larger array index. This already existing sort order should be changed (for the current SortKey_I) in both of the following cases: if the value of an element with a lower array index (parent) is greater than OR equal to the value of an element with a larger array index (one of its children). This really small change in the pseudocode (twice replacing operator ">" with operator ">=") should guarantee that, of all elements with the same value (considering the current SortKey_I), the element with the largest array index is the first to be swapped to the root element and finally stored at the end of the array. That means the already existing sort order (of all the already processed sort keys) will now be unchanged if the values of two different elements are equal (considering the current SortKey_I). For this reason the sort algorithm should now be stable. Is this correct?

Aragorn321 (talk) 11:12, 18 July 2012 (UTC)[reply]

Merge Sort requires Ω(n) auxiliary space - really?

"Heapsort also competes with merge sort, which has the same time bounds, but requires Ω(n) auxiliary space, whereas heapsort requires only a constant amount."

This is NOT true, since merging can be done in-place in O(n) time. — Preceding unsigned comment added by Dhruvbird (talkcontribs) 02:19, 13 August 2012 (UTC)[reply]

Do you have a reference that says a stable in-place merge takes O(n) time? References are really what WP requires. A stable out-of-place merge can be done in O(n) time. Stable merging in-place is a bit trickier -- what happens to displaced elements? Glrx (talk) 16:17, 14 August 2012 (UTC)[reply]

"Now repeat after" me problem

I highly object to the code that is introduced in this wiki. Either write pseudocode that illustrates the idea, or write concise code where the idea is obvious. Writing code that is optimized in any way, which always means more difficult to read, is really not good. I have seen the same, sorry to say, mess in many books:

    if a[swap] < a[child]
        swap ← child
    if child+1 ≤ end and a[swap] < a[child+1]
        swap ← child + 1
    if swap ≠ root
        swap(a[root], a[swap])
        root ← swap            (: repeat to continue sifting down the child now :)
    else
        return

where the special case child+1 ≤ end is mixed in with everything else. There is no reason for sift-up and sift-down to look this different in code when they are the same basic idea. I know one can object that this is how the algorithm is presented in most of the literature; I am just saying that it is then better to leave just the idea in pseudocode, with a reference to the literature, than to repeat one option. The problem is that by presenting the subject this way, as it is here and in other places for other algorithms, you make an already subtle subject even more complicated for no reason. (I truly do not understand why everything I read is cleaned up and brought to higher and higher levels of abstraction, precision and simplification everywhere except in programming. The solution that was taught 30 years ago is the same as today, down to the last line. Mind-boggling to me.) — Preceding unsigned comment added by Aperisic (talkcontribs) 16:35, 18 February 2014 (UTC)[reply]

Algorithm issues - pseudocode DOES NOT work

duplicate keys?

none in any code I find, but there certainly are in "the real world"

if both children are >= "swap"

[terrible name - same as procedure in pseudocode - could be "swapHere"]

if a[swap] < a[child]
swap ← child

if child+1 ≤ end and a[swap] < a[child+1]
swap ← child + 1

How is the fact that the first assignment may be overwritten acceptable?

problems confirmed - implemented first block of pseudocode - fails to sort array