Talk:Tree traversal


Iterative Inorder Traversal


The algorithm requires a check whether a node has already been printed; otherwise it will keep printing the leftmost node and its parent forever. —Preceding unsigned comment added by 66.235.10.126 (talkcontribs) 30 December 2005

Thanks! When doing the recursive to iterative translation I forgot to keep in mind that we somehow have to remember the code position at the last call. I'll ponder how to do this in a neat clean way. Deco 10:59, 30 December 2005 (UTC)[reply]

This is what I came up with in C/C++:

   #include <iostream>
   #include <stack>

   struct node {
       int value;
       node* left;
       node* right;
   };

   void inorder(node* root) {
       std::stack<node*> stack;
       node* current = root;
       bool done = false;
       while (!done) {
           if (current != nullptr) {
               // Descend left, remembering the path on the stack.
               stack.push(current);
               current = current->left;
           } else if (!stack.empty()) {
               // Visit the deepest unvisited node, then turn right.
               current = stack.top();
               stack.pop();
               std::cout << current->value << '\n';
               current = current->right;
           } else {
               done = true;  // stack empty and no current node: finished
           }
       }
   }

—Preceding unsigned comment added by 131.107.0.106 (talkcontribs) 31 December 2005

Iterative Traversal Redux


The currently presented iterative algorithm I find a little too obscure. For reference, here it is:

visit(root) {
    prev    := null
    current := root
    next    := null
   
    while current ≠ null {
        if prev == current.parent
            prev := current
            next := current.left
        if next == null or prev == current.left
            print current.value
            prev := current
            next := current.right
        if next == null or prev == current.right
            prev := current
            next := current.parent
        current := next
    }
}

For a few days, the following version was in the article:

visit(root) {
    prev    := null
    current := root
    next    := null
   
    while current ≠ null {
        if prev == current.parent
            next := current.left
        if next == null or prev == current.left
            print current.value
            next := current.right
        if next == null or prev == current.right
            next := current.parent
        prev := current
        current := next
    }
}

It's a little simpler because the "prev := current" has been factored out. But it doesn't work in some cases because of subtleties in the code. It's not a simple factoring, because in the original the "prev := current" affected the if statements that followed it, which it does not in the factored version. Only occasionally does that matter.

One case it fails for is where the root has only one child, a left-hand child. After taking the first then clause ("next := current.left") it mistakenly executes the third then clause ("next := current.parent") causing next to be set to null and the loop to be exited. This is because the test "prev == current.right" succeeds since both values are null. In the original/current code, this failure was prevented by the earlier execution of "prev := current".

I think the current algorithm is too subtle to be a good example. I'm looking at modifications to clarify it. -R. S. Shaw 03:27, 9 June 2006 (UTC)[reply]

Ah, I see. I didn't take into account the fact that the flow of execution fell through to the next if statement. Al 03:44, 9 June 2006 (UTC)[reply]
I agree. I wrote it. If you can come up with anything simpler that would be great. Deco 09:10, 9 June 2006 (UTC)[reply]
After looking at some alternatives, I've put in a new version of the algorithm. This one uses a nested subroutine (for something that could have just been duplicate code) because it seemed a little clearer despite the extra routine. -R. S. Shaw 00:17, 10 June 2006 (UTC)[reply]
Personally I find that more complicated, since "midvisit" doesn't seem to have a clear conceptual role. Maybe I'm missing something. Deco 01:20, 10 June 2006 (UTC)[reply]

This is the true iterative traversal: The one that doesn't use a stack at all. Those "iterative with stack" traversals should be called "pseudo-iterative" in my book. The disadvantage of the true iterative traversal is that it can only be done on trees where each node, in addition to pointers to its children, also holds a pointer to its parent. Medinoc (talk) 07:08, 22 October 2016 (UTC)[reply]

Flaws in All These Algorithms


I notice that all of the algorithms that I looked at on this page contain at least one major bug: they will fail if they are initially passed an empty tree. There is a better way to write them that avoids this bug (or obligation on the caller, if you prefer) and is just as efficient. For example, the preorder traversal could be written (using the same style):

 visit(node)
     if (node == null) return
     print node.value
     visit(node.left)
     visit(node.right)

While this makes twice as many calls to "visit", it also avoids half the dereferences to node's children, so that's arguably a performance wash (it depends on the circumstances as to which will truly run faster). And it eliminates the major bug. All of these code examples can and should be rewritten in the same way.

I think it would also be clearer to call the functions "preorder", "postorder", and "inorder", respectively, instead of calling them all "visit." Cliff Shaffer, 10:25, 21 September 2006.
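
For illustration, a minimal C sketch of the style proposed above: the null check sits at the top of each routine, so an empty tree is handled, and each traversal gets its own name. The node struct and field names are assumptions for the example, not taken from the article.

    #include <stdio.h>

    struct node {
        int value;
        struct node *left;
        struct node *right;
    };

    /* Pre-order: visit the node before its subtrees. */
    void preorder(struct node *n) {
        if (n == NULL) return;
        printf("%d ", n->value);
        preorder(n->left);
        preorder(n->right);
    }

    /* In-order: visit the node between its subtrees. */
    void inorder(struct node *n) {
        if (n == NULL) return;
        inorder(n->left);
        printf("%d ", n->value);
        inorder(n->right);
    }

    /* Post-order: visit the node after its subtrees. */
    void postorder(struct node *n) {
        if (n == NULL) return;
        postorder(n->left);
        postorder(n->right);
        printf("%d ", n->value);
    }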

Rewrite


I rewrote the article, removing the implementations, as they were probably incorrect according to the various comments on them, and as implementations don't replace a good explanation. I consider the article to be a stub now; valid and clean implementations could be added (I plan to). I also didn't keep the external links, which were not really related to tree traversal but merely to the use of graphs in a database context, and which probably came from a web-development background (PHP- and MySQL-related links).

There are surely far better and more classic references to the subject, but the O'Reilly book is the one where I learned tree traversal and where I verified my stub. In particular, I suspect there is a treatment of trees and corresponding algorithms in The Art of Computer Programming, but I don't have it at hand.

Nowhere man 22:15, 1 December 2006 (UTC)[reply]

Reverse Polish Notation


The reverse Polish notation used in HP calculators is postfix notation, not prefix notation. My English isn't good enough to rewrite the passage...

Changes on 14 Feb 07


To anyone watching this article, let me know what you think of the modifications. Restructured it quite a bit, no doubt slipped some typos/mistakes in, but I tried to make it look a bit more professional.

Also changed all the pseudocode to be consistent (some used curly braces to mark blocks, others just whitespace; I started at the top, where blocks were whitespace-delimited, and made the rest match), and completely changed the algorithm for the iterative/explicit-stack-based traversals. Let me know if I stuffed it up! The original code definitely works, but it's possible I mucked up the pseudocode translation. The benefit of the new pseudocode is that the direction can be changed at any point (just use rights instead of lefts and lefts instead of rights), even mid-traversal, and that the stack-based implementation is virtually identical to the parent-pointer implementation.

Also added an example for how to in-order traverse a threaded tree.

Phew. Over an hour down the drain, hope somebody finds it useful =P Themania 06:24, 28 February 2007 (UTC)[reply]

It looks great. I'll be reviewing it for stylistic concerns, but that's a lot of useful information you've added in. — Edward Z. Yang(Talk) 22:48, 28 February 2007 (UTC)[reply]
Thanks Edward =) apologies for reverting your revert - I'm pretty confident that the anonymous IP-logged user was correct, although reverting reverts should probably be mentioned on the talk page..? —The preceding unsigned comment was added by Themania (talkcontribs) 00:11, 1 March 2007 (UTC).[reply]
No, it's okay. It's tough to tell whether or not an edit is legit when there's no edit summary given. — Edward Z. Yang(Talk) 02:29, 1 March 2007 (UTC)[reply]
Would be, especially for non-user edits. Just reassuring to know that there are people watching =) Themania 15:15, 1 March 2007 (UTC)

Problem with Iterative Solution


inorder(node)

while hasleftchild(node) do
  node = node.left
do
  visit(node)
  if not (hasrightchild(node)) then
    node = node.right // <------------------ look here
  else
    node = node.right // <------------------ and here
  while hasleftchild(node) do
    node = node.left
while node ≠ null

This can't be right... Philfreo 18:59, 23 March 2007 (UTC)[reply]

You're right, it's wrong. The inner while should have been under the then. I've fixed the article. Note that this is not just some iterative solution; it is only for a threaded tree, which essentially always has pointers to skip directly to the next inorder node from a node without a right child, so it happens to have a very simple inorder walk because of the extra threading. -R. S. Shaw 03:31, 24 March 2007 (UTC)[reply]

Looking at the previous question, I'm still not sure how this can work. If node does not have a right child, how can you do "else node = node.right" ? According to the graph at the top of the article, once the inorder traversal got to 'A' and there are no child nodes, then it goes back to its parent node.

Maybe something is going on at visit(node) that I don't understand.

inorder(node)

 while hasleftchild(node) do
   node = node.left
 do
   visit(node)
   if (hasrightchild(node)) then
     node = node.right
     while hasleftchild(node) do
       node = node.left
   else
     node = node.right //? here
 while node ≠ null

Ddgun (talk) 16:48, 12 February 2008 (UTC)[reply]

Apparently you're overlooking the word threaded. It's important. See Threaded binary tree. -R. S. Shaw (talk) 22:34, 13 February 2008 (UTC)[reply]
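
For anyone still puzzled by the section above, here is a rough C sketch of what the threading buys you. It assumes a right-threaded tree in which a node without a right child stores a pointer (a "thread") to its in-order successor, marked by an rthread flag; the struct and field names are illustrative only, not the article's.

    #include <stdbool.h>
    #include <stdio.h>

    struct tnode {
        int value;
        struct tnode *left;
        struct tnode *right;   /* right child, or thread to the in-order successor */
        bool rthread;          /* true if 'right' is a thread rather than a child */
    };

    /* In-order walk of a right-threaded tree: no stack, no parent pointers. */
    void inorder_threaded(struct tnode *node) {
        if (node == NULL) return;
        while (node->left != NULL)          /* start at the leftmost node */
            node = node->left;
        while (node != NULL) {
            printf("%d ", node->value);
            if (node->rthread) {
                node = node->right;         /* thread jumps straight to the successor */
            } else {
                node = node->right;         /* real right child: its leftmost node is next */
                while (node != NULL && node->left != NULL)
                    node = node->left;
            }
        }
    }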

Preorder Iterative Solution in Java


I think the following code is a nice pre-order iterative traversal, using Java code.

I do think that the comment about iterative solutions being easily derived from the recursive ones is not helpful - at least I had a hard time getting solutions that I liked.

I did find nice solutions to all three traversals at the following page: http://discuss.techinterview.org/default.asp?interview.11.318495.11

     // requires: import java.util.Stack;
     void preOrder(TreeNode root) {
        Stack<TreeNode> s = new Stack<TreeNode>();

        s.push(root);
        while (!s.empty()) {
           TreeNode n = s.pop();
           if (n != null) {
              System.out.print(" " + n);
              // Push right before left so the left subtree is visited first.
              s.push(n.right);
              s.push(n.left);
           } // end if this node exists
        } // end while there are nodes to visit
     } // end method preOrder

nduchon@umuc.edu —Preceding unsigned comment added by 70.104.232.232 (talk) 16:00, August 30, 2007 (UTC)

Non-binary trees


This article seem to focus exclusively on the traversal of binary trees. My question is, if we want to include something about, for instance, traversal of directories on disk, should the focus of this article be changed, or should it be renamed to "binary tree traversal" and a more general article started? I am leaning towards the latter, but want more input than my own before I do it. W (talk) 13:55, 9 May 2008 (UTC)[reply]

I think we should just change everything to support all kinds of trees and mention somewhere how they can be done a little differently for binary trees. asmeurer (talk | contribs) 18:42, 2 July 2012 (UTC)[reply]
Agreed. The use of the terms "left" and "right" is very misleading for a "tree traversal" article and might lead people who don't know any better to think that all trees are binary. — Preceding unsigned comment added by 82.9.176.129 (talk) 02:22, 24 February 2014 (UTC)[reply]
I agree, I was shocked to see only binary trees were used as illustrations (images) and implementations. It is mentioned in the article, but it really should be either renamed or rewritten to be more generic. The current article is misleading. PhiLho — Preceding undated comment added 22:55, 7 August 2014 (UTC)[reply]
I agree this article should be renamed binary tree traversal. What is the process for doing this? I am a naive editor, so I would like someone more knowledgeable to do this.
I would hope that a new tree traversal article was created, even if only a stub.
Personally, I would like such an article to cover how users of applications traverse tree directory structures, in particular where the application has stored the last-used location for that action, to speed up (often) a related action. CuriousMarkE (talk) 06:04, 8 October 2024 (UTC)[reply]
I'd prefer a "Generalizations" section here over an own article. Afaik, pre- and post-order traversal generalizes to "visit the node before descending to its first child" and "... after ascending from its last child", respectively. I don't think much more can be said about the general case. - Jochen Burghardt (talk) 17:35, 9 October 2024 (UTC)[reply]

Traversal - Example


Hi, I'm sorry, but I don't really understand why the in-order walk in the example ends with "H I" and not with "I H". G is the first right child of the root. There's no left child of G, so we go to I, and I's only child is H, so my intuition says there's no way in general to "skip" a node. And is there any difference worth mentioning between "rightnode" and "rightchild"? (By the way, is there a reason to write "rightchild" as one word rather than "right child"?) Sorry for the faulty English. If someone's got the time, please post here that the question has been answered. Thanks a lot. --WissensDürster (talk) 19:29, 24 January 2009 (UTC)[reply]

I guess this answer won't help you right now, but maybe someone else will read it. I'd agree that the order is wrong for the "inorder" example. It doesn't make sense to skip this node. G I H should be the right order! 188.101.213.177 (talk) 10:52, 15 August 2012 (UTC)[reply]

Traversal - Terminology


All the kinds of traversal say, e.g., 'visit the left subtree; visit the root; visit the right subtree'. Surely it should be 'visit the node' rather than 'visit the root'. We do NOT keep going back to the root. JamesHDavenport (talk) 23:04, 17 March 2010 (UTC)[reply]

All of the algorithms for traversals detailed here (pre-order, in-order or symmetric-order, and post-order) are depth-first traversals. They traverse the nodes of the tree in the same order, but they "visit" them at different times. However, "visit the root" is correct terminology because it is a recursive algorithm. For example, when traversing the right sub-tree, "root" becomes the root of the right sub-tree. Xonev (talk) 19:57, 14 April 2010 (UTC)[reply]

Sure, but the article is titled "tree traversal", not "binary tree traversal", even though the term "left subtree" is completely meaningless for most other types of tree structure. The problem here is that the article uses very tree-type-specific and context-specific language and yet purports to be about trees in general. — Preceding unsigned comment added by 82.9.176.129 (talk) 02:27, 24 February 2014 (UTC)[reply]

iterativePostorder()??


I'm not convinced iterativePostorder() is correct. I've tried coding it up in Java and could not get it to work - although I accept it may be my own inability! A post-order tree traversal is the most difficult to implement of all the methods.

What I do know is that the pseudo code is poorly formatted with respect to the if-elseif-else block. The use of brackets may take up more lines but eliminates error and reader confusion.

I have a slight problem with the algorithms presented. They are primarily for binary trees, with the nodes having left and right child nodes. However, the article title is "Tree Traversal" and not "Binary Tree Traversal". Thus, any algorithms presented should strictly be for general trees, i.e. a variable list of child nodes. This is what I'm trying to implement, the reason why I looked at this article, and as a result I have found the article confusing and of little use for general tree traversal. — Preceding unsigned comment added by 81.187.174.42 (talk) 09:48, 11 August 2011 (UTC)[reply]
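
For what it's worth, one common way to write an iterative post-order for a binary tree uses a single explicit stack plus a pointer to the last node visited: a node is visited only once its right subtree has been finished. This is just a sketch under assumed field names, not the pseudocode that was in the article.

    #include <stdio.h>

    struct node {
        int value;
        struct node *left;
        struct node *right;
    };

    /* Iterative post-order with an explicit stack and a last-visited pointer.
       The stack is a fixed array here for brevity; a real implementation
       would grow it (or bound it by the tree height). */
    void postorder_iterative(struct node *root) {
        struct node *stack[64];
        int top = -1;
        struct node *node = root;
        struct node *last = NULL;               /* last node actually visited */

        while (node != NULL || top >= 0) {
            if (node != NULL) {
                stack[++top] = node;            /* descend left, remembering the path */
                node = node->left;
            } else {
                struct node *peek = stack[top];
                if (peek->right != NULL && last != peek->right) {
                    node = peek->right;         /* right subtree not finished yet */
                } else {
                    printf("%d ", peek->value); /* both subtrees done: visit */
                    last = peek;
                    --top;
                }
            }
        }
    }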

References and proper citing


I have given a reference for the Recursive Traversals in C section. I've also put a <references/> tag under the section References. Please cite the books and the pdf properly in their respective places. This will make the page look good and maintain uniformity. Thanks!
BPositive (talk) 08:00, 6 December 2011 (UTC)[reply]

Shouldn't this be graph traversals?


As trees are just connected acyclic graphs, there should be a general page Graph traversals, and as part of that page we should have tree traversals. Do you agree? --Vitalij zad (talk) 13:02, 4 March 2012 (UTC)[reply]

I agree. This article is very misleading, because it only deals with binary tree traversal and yet it's entitled just "Tree Traversal". It's also linked from the Graph Traversal article and referred to as a "special case of graph traversal", when it's really a "special case of a special case of graph traversal". As it stands, Wikipedia does not have an article that can be legitimately titled "Tree Traversal" -- so I think this should either be merged into Graph Traversal, renamed to "Binary Tree Traversal" or expanded so as to not be misleading.

Iterative traversal with parent pointers


I've replaced this code with a version that is more clear and practical for use. It only needs to keep the current node and not the previous node, so it can easily be expressed as a pair of first/next functions. The code is from my AVL tree at http://code.google.com/p/badvpn/source/browse/trunk/structure/BAVL.h#685 and I've been using it for a long time with no problems. 193.77.101.149 (talk) 00:23, 20 July 2012 (UTC)[reply]
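
Roughly, the first/next style mentioned above can be sketched in C as below; this is only an illustration with assumed field names, not the linked BAVL code. A full in-order walk is then for (n = first_inorder(root); n != NULL; n = next_inorder(n)) { ... }.

    #include <stddef.h>

    struct node {
        struct node *left;
        struct node *right;
        struct node *parent;
    };

    /* Leftmost node of the subtree rooted at n: the first node in in-order. */
    struct node *first_inorder(struct node *n) {
        if (n == NULL) return NULL;
        while (n->left != NULL)
            n = n->left;
        return n;
    }

    /* In-order successor of n, or NULL when n is the last node. */
    struct node *next_inorder(struct node *n) {
        if (n->right != NULL)
            return first_inorder(n->right);   /* go right, then all the way left */
        while (n->parent != NULL && n == n->parent->right)
            n = n->parent;                    /* climb while coming up from a right child */
        return n->parent;
    }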

Postorder conspicuously missing from 'Uses' section


Does anyone know a use for it? (If not, can I just add that it's useless and make someone prove otherwise? Just kidding.) Will Faught (talk) 22:20, 26 August 2012 (UTC)[reply]

Traversals on linearly mapped binary trees


The main article treats binary trees inherently as a nonlinear data structure; however, they can easily be mapped to a linear one (see https://en.wikipedia.org/wiki/Binary_tree#Arrays for a nice pictorial example). As a linear data structure, the traversal is trivial. This requires that the map to the linear array follows the desired traversal.

These mappings do not provide for fast (re)balancing. Groups of nodes are implied by the positions in the array, so moving a group generally requires moving the contents of all associated nodes (as opposed to generic binary trees).

In addition to an accelerated traversal, these maps provide controlled data locality, potentially improving performance in tree searches (which these structures are typically used for). Allocation on the fly guarantees no data locality at all, which reduces the efficiency of both tree searches and traversals.

As a result, linear mapped binary trees are particularly suited for fast tree searches and traversals on static binary search trees, where the tree structure is preserved (though member variables may change).

Qm physics (talk) 04:32, 10 March 2014 (UTC)[reply]
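
As a sketch of the idea (with details that are assumptions for the example, not taken from the comment above): using the usual 1-based array layout, slot i has its children in slots 2i and 2i+1, and a sentinel marks absent nodes. A plain left-to-right scan of the array then yields the level order encoded by that layout, while the depth-first orders fall out of the index arithmetic:

    #include <stdio.h>

    #define ABSENT (-1)   /* sentinel marking an unused array slot */

    /* In-order traversal of a binary tree stored in a 1-based array,
       where the children of slot i are slots 2*i and 2*i + 1.
       'size' is the highest valid index; slot 0 is unused. */
    void inorder_array(const int tree[], int size, int i) {
        if (i > size || tree[i] == ABSENT) return;
        inorder_array(tree, size, 2 * i);
        printf("%d ", tree[i]);
        inorder_array(tree, size, 2 * i + 1);
    }

For example, with int t[] = {0, 4, 2, 6, 1, 3, 5, 7}, the call inorder_array(t, 7, 1) prints 1 2 3 4 5 6 7.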

including recursively / including corecursively


The sentence containing these 2 phrases is quite clumsy. I'm not even sure what is really meant, but perhaps "using recursion" and "using corecursion" express the idea properly. — Preceding unsigned comment added by 68.183.92.210 (talk) 20:25, 14 May 2014 (UTC)[reply]

Morris threading


"It is more prone to errors when both the children are not present and both values of nodes point to their ancestors" is nonsensical. It implies that no (or maybe 1?) children are present, but the values point to something?! — Preceding unsigned comment added by 68.183.92.210 (talk) 21:27, 14 May 2014 (UTC)[reply]

Broken links

There are several broken links pointing to and within this page, e.g. https://en.wikipedia.org/wiki/Inorder_traversal#Inorder_traversal (not sure where I found that link). Additionally, links are broken within the page, e.g. "threading the tree".

I don't know enough about editing Wikipedia to know a way to find all these links but I thought I would point it out as there seems to have been some work done on this. Thanks!

Chris Alexander UK (talk) 19:10, 19 May 2014 (UTC)[reply]

New section "Non-recursive tree traversal by Iteration"


Preceding this were the talk section #Iterative traversal with parent pointers (see above), a new section in the article called "Iterative in-order and without callback" by user:Nomen4Omen, a revert by user:Altenmann with the edit summary "Rv primary source nonnotable. (TW)", and a user-talk-page discussion between user:Nomen4Omen and user:Altenmann:

In User talk:Altenmann#tree traversal:
Dear Altenmann, you reverted my new section in "Tree traversal" because "primary source nonnotable. (TW)". What do you mean by that: Is the reference to Ben Pfaff defective? What about the other pseudocode examples which do not carry any reference at all? And have a {{unreferenced section|date=June 2013}} since June 2013 and are still in the article? Pls answer in your user talk page. Best regards, --Nomen4Omen 08:07, 27 August 2015 (UTC)[reply]

In User talk:Nomen4Omen#tree traversal:
The article tree traversal is horrible, but this does not mean it is OK to increase the mess.
Wikipedia does not accept self-published references; see WP:RS.
There is other pseudocode, but it is mostly for standard descriptions found in books. If something is not from books, it must be deleted.
But there is a larger problem with the addition: its questionable notability. Any algorithm may be implemented in numerous ways. Wikipedia is an encyclopedia, not an FSF repository. It gives basic descriptions of the ideas of algorithms. The arguments from your addition (callbacks are cumbersome, tree traversed only in full) are about minor implementation tweaks, contributing nothing new to the idea of the algorithms themselves. If there is any significant algorithmic novelty, I didn't see it, however I admit I may be mistaken.
Finally, article content must be discussed in article talk pages, where other editors may see the discussion and join it, not in user talk pages. - üser:Altenmann 15:33, 27 August 2015 (UTC)

With the new edit I hope to give sufficient notability and sources. Sedgewick places the question into the context of binary search trees and stresses its importance on p.368. Maybe with search trees there arises the biggest interest. But the solution as such belongs without doubt into tree traversal. Maybe also: the pseudocode example could be suppressed and instead given a reference similar to that in #Iterative traversal with parent pointers. Open to any improvement. --Nomen4Omen (talk) 17:47, 28 August 2015 (UTC)[reply]

I forgot to mention that I would like to refer to the new section as a means of navigating in a binary search tree, the second most important navigation after binary search. The recursive approaches cannot be used for that purpose, so at least something of the nonrecursive (and single-step) stuff should survive for being referenced.
I also made some minor corrections in the article. --Nomen4Omen (talk) 08:42, 29 August 2015 (UTC)[reply]

Dear Altenmann, you reverted several edits and again the section "Nonrecursive tree traversal by iteration" in Tree traversal because "Rv unreferened original research". You certainly know that it is NOT original research − see the mentioned sources Pfaff and Sedgewick, although Sedgewick is not completely explicit. But there is also http://code.google.com/p/badvpn/source/browse/trunk/structure/BAVL.h#685. Would it help you adding this one? Best regards, --Nomen4Omen (talk) 08:00, 10 September 2015 (UTC)[reply]

Summary Depth-First wrong


"Trees can be traversed in pre-order, in-order, or post-order." No, these are standard ways of doing so, but nothing speaks against doing a reverse in-order (right branch, current, left branch), e.g. There are obviously 3! = 6 methods... — Preceding unsigned comment added by 217.33.202.98 (talk) 00:36, 9 February 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Tree traversal. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 13:13, 10 November 2016 (UTC)[reply]

Question mark meaning what?


In the section "Infinite trees", the text says: "for example, the breadth-first search of the depth 2 tree above will take ?·2 steps: ? for the first level, and then another ? for the second level."

What does "?" mean here? And what does ":" mean in "?·2 steps: ?"? I have never met neither ":" nor "?" in mathematics before, so they should be explained and defined, at least with a wikilink to their apparently rather specialized meaning. --Jhertel (talk) 17:26, 17 April 2017 (UTC)[reply]

Thanks, Alexei! --Jhertel (talk) 15:14, 13 June 2018 (UTC)[reply]

In-order traversal for non-binary trees


@Jochen Burghardt: IMHO ″In-order traversal cannot be generalized to trees with more than 2 children.″ is NOT true. Especially in-order traversal IS easily generalizable: It simply means to travel from lowest member to highest. Thus it is defined for 2-3-4 trees, 2-3 trees, general trees of any type. The tree simply must be able to support a linear order. It is, however, not so easy how to define a pre- or post-order traversal on such trees. --Nomen4Omen (talk) 15:44, 7 January 2018 (UTC)[reply]

@Nomen4Omen:: Maybe I'm wrong. I followed the article's explanation (steps 1.-4.), which refers to a left and a right subtree. The same applies to the pre- and post-order explanations, but e.g. for pre-order one can easily imagine the generalization "1. check for null; 2. display current node; 3./4. traverse all subtrees from lowest to highest member", so left and right subtrees aren't essentially required there. For in-order traversal, I wonder at which point in time the current node is displayed if there are more than 2 subtrees (your explanation doesn't seem to answer this question, does it?). This was my motivation for inserting the sentence you quoted. Best regards - Jochen Burghardt (talk) 18:24, 7 January 2018 (UTC)[reply]
@Jochen Burghardt: The term "traversal" has meaning beyond the article and cannot be confined to the article's subject. Maybe you mean some generalization of some implementation. But who cares? --Nomen4Omen (talk) 20:23, 7 January 2018 (UTC)[reply]
@Nomen4Omen: Since you know how to generalize in-order traversal, could you add it to the article? In its current form, it is rather confusing. For example, what is 'left subtree' generalized to? - Jochen Burghardt (talk) 06:53, 9 January 2018 (UTC)[reply]
@Jochen Burghardt: I looked for it in 2–3–4 tree and 2–3 tree, where there is no mention, and in B-tree, where there is some mention but no implementation. Until there is, I guess there is no need to present one here. Especially in the light of the sentence in the intro:
"The following algorithms are described for a binary tree, but they may be generalized to other trees as well."
If you like you may repeat this for every traversal: "Pre-order for binary tree", "In-order for binary search tree", "Post-order for binary tree". --Nomen4Omen (talk) 10:27, 9 January 2018 (UTC)[reply]
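
For the record, one way the generalization under discussion is usually pinned down for multiway search trees (2-3, 2-3-4, B-trees): a node with k keys has k + 1 children, and each key is visited between its two neighbouring child subtrees, which yields the keys in ascending order. A rough C sketch, with illustrative field names sized for a 2-3-4 node:

    #include <stdio.h>

    struct mnode {
        int nkeys;               /* number of keys k in this node (1..3) */
        int key[3];              /* keys in ascending order */
        struct mnode *child[4];  /* k + 1 children; all NULL in a leaf */
    };

    /* Generalized in-order: child[0], key[0], child[1], key[1], ..., child[k]. */
    void inorder_multiway(const struct mnode *n) {
        if (n == NULL) return;
        for (int i = 0; i < n->nkeys; i++) {
            inorder_multiway(n->child[i]);
            printf("%d ", n->key[i]);
        }
        inorder_multiway(n->child[n->nkeys]);
    }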

Tree duplication in pre- or in post-order?

In section "Applications", I changed in the sentence

Pre-order traversal while duplicating nodes and edges can make a complete duplicate of a binary tree.

the word "Pre-order" into "Post-order". This was reverted repeatedly, so I'll use my first edit summary to start an official discussion:

to duplicate, all child node need to be available when the parent node is handled.

A duplication routine in C can look like this:

#include <stdlib.h>
struct _tree {
    int key;
    struct _tree *left;
    struct _tree *right;
};

struct _tree *duplicate(
    const struct _tree *tree)
{
    if (tree == NULL) {
        return NULL;
    } else {
        struct _tree * const t = (struct _tree*)(malloc(sizeof(struct _tree)));
        t->key   = tree->key;
        t->left  = duplicate(tree->left);
        t->right = duplicate(tree->right);
        return t;
    }
}

Clearly, the right subtree has to be duplicated before the duplicate of the current node can be initialized completely. Therefore, I believe, "Post-order" is correct. - Jochen Burghardt (talk) 18:52, 16 November 2019 (UTC)[reply]

I agree completely with the rule "to duplicate, all child nodes need to be available (duplicated) before the parent node is finished". As far as I can see, the step "Display the data part of the current node" (in the article) does not catch the point. E.g. copying (duplicating) the key and data of the current node is commutative with each of the two recursive calls. As a result, tree duplication can be done in post-, pre-, or even in in-order mode. --Nomen4Omen (talk) 07:49, 19 February 2020 (UTC)[reply]
You have a point there. In the article, pre/in/post-order is defined only for algorithms that output the node data. If we stick to that definition, the paragraph about duplicating should be removed completely.
Alternatively, the definition could be generalized to an arbitrary subroutine that "visits" the node, like in the visitor design pattern. A call to visit(n,...) would then print n's data in most cases, but would duplicate the source node n in the tree duplication case. The else part of my above code could be changed to
struct _tree * const tl = duplicate(tree->left);
struct _tree * const tr = duplicate(tree->right);
return visit(tree,tl,tr);
with the additional subroutine definition:
struct _tree *visit(
    const struct _tree *tree,
    const struct _tree *tl,
    const struct _tree *tr)
{    
    struct _tree * const t = (struct _tree*)(malloc(sizeof(struct _tree)));
    t->key   = tree->key;
    t->left  = tl;
    t->right = tr;
    return t;
}
I admit that this looks quite rigged up to me. On the other hand, I think it is impossible to write a similar visitor-based duplication routine that fits into the in- or the pre-order scheme, because the visit subroutine will need the duplicated child nodes as parameters.
I suggest that unless we find a source for the generalized pre/in/post-order notion, we just delete the "duplication" paragraph, or we mention that pre/in/post doesn't apply in such cases. Would that be ok? - Jochen Burghardt (talk) 09:39, 19 February 2020 (UTC)[reply]
Of course, a reliable source would be the best. And you are right that the “essential part” of your (first) duplicate-subroutine is its else-part. This else-part requires the existence and the size of the node to be copied. Doesn't this mean that your (first) duplicate-subroutine is essentially pre-order? Because malloc has to come first and “look” at (i.e. visit) the node! All other operations commute, but have to follow look and malloc. --Nomen4Omen (talk) 15:03, 19 February 2020 (UTC)[reply]
I think I see your point: The duplication algorithm can as well be rigged up to fit the pre-order scheme, with the following else part:
struct _tree * const t = visit(tree);
t->left  = duplicate(tree->left);
t->right = duplicate(tree->right);
return t;
and with the additional subroutine definition:
struct _tree *visit(
    const struct _tree *tree)
{    
    struct _tree * const t = (struct _tree*)(malloc(sizeof(struct _tree)));
    t->key   = tree->key;
    return t;
}
So it seems that pre/in/post depends on how the visitor pattern is mapped to a (non-object-oriented) imperative programming language: Is visit allowed to have extra parameters (needed for post-order)? Is its result allowed to be used outside return (needed for pre-order)? If both is allowed, duplicate could even be made fit the in-order scheme, I guess. - Jochen Burghardt (talk) 07:46, 20 February 2020 (UTC)[reply]

I saw that the graphic "Sorted_binary_tree_ALL.svg" is your work. It shows that the difference between the 3 modes is mainly the resulting sequence of acting at the instances. I'm sure that it is allowed to use the result of visit outside return. But it is certainly also allowed to malloc the children first, save the resulting pointers in local variables, and then malloc the current node. It seems also to be obvious that visit means acting on the key and data of the current node only and does not include [as in your last proposal] looking at the children pointers, because this has to be done in all modes, pre-, in- and post-order. This means that every application where the sequence is irrelevant (as in our case) can be done in every order mode. --Nomen4Omen (talk) 10:26, 20 February 2020 (UTC)[reply]

@Jochen Burghardt: Sorry, you were absolutely right. Duplicating is inherently post-order − see changed article. --Nomen4Omen (talk) 07:04, 22 February 2020 (UTC)[reply]
@Jochen Burghardt: and @Nomen4Omen: in several references (textbooks or otherwise, e.g. https://qr.ae/pNsElh), as an alternative, the output of a pre-order traversal collects the values of the nodes in a list. That list is then subsequently used to duplicate/clone another binary tree. While other traversal outputs may also be used to clone a tree, they are neither common nor intuitive. I strongly recommend that we revert back to "Pre-order". - Kgashok (talk) 10:46, 6 August 2020 (UTC)[reply]
@Kgashok: Isn't https://qr.ae/pNsElh a post-order example, because the parent is finished only after all children are finished?
This is because return temp; transports the pointer to the cloned current node to its parent — the last essential action in the function. --Nomen4Omen (talk) 19:19, 6 August 2020 (UTC)[reply]
@Nomen4Omen: The way I see it, the parent's data is copied first (line 6), then the left and then the right. That's the classical pre-order sequence. - Kgashok (talk) 03:34, 7 August 2020 (UTC)[reply]

Iterative Inorder Traversal Redux v2


The current implementation of iterative in-order traversal seems to be correct and looks beautiful in its conciseness. Unfortunately there is one major flaw in it: the current node is pushed onto the stack before checking whether its left child is absent. That means every leaf node is pushed onto the stack and then immediately popped off again.

My idea is quite simple. We all know that everything is a trade-off: we trade speed for space, and vice versa, etc.; "pure" optimization is the elimination of unnecessary work or space. And given that we are talking about all the leaves here, the impact might be quite severe in some cases. Someone could argue that this implementation benefits from its current state, as being more comprehensible, or that it is provided here only for educational/introductory purposes and is not tied to an optimization problem. But correct me if I'm wrong: this topic is so general that this code will end up in many, many projects around the globe. Aren't we ruining not just the overall quality of the code, but also the coding hygiene of younger developers?

I ran a couple of tests and achieved a speedup of 17%+ just by eliminating this unnecessary push/pop. I checked some other places, like GeeksforGeeks, the first 5 videos on YouTube, etc., and this dirty code is all over the place! Am I missing something?

Current implementation, for reference:

iterativeInorder(node)
  s ← empty stack
  while (not s.isEmpty() or node ≠ null)
    if (node ≠ null)
      s.push(node)
      node ← node.left
    else
      node ← s.pop()
      visit(node)
      node ← node.right

Possible implementation, just to describe the rough idea (I didn't check this exact implementation thoroughly, as mine is a bit more complicated because it supports ranges):

s ← empty stack
fillStack(node)
  // descend to the leftmost node, pushing every node passed on the way
  while ((child ← node.left) ≠ null)
    s.push(node)
    node ← child
  return node
iterativeInorder(node)
  if (node = null)
    return
  node ← fillStack(node)
  while (true)
    visit(node)
    node ← node.right
    if (node = null)
      if (s.isEmpty())
        return
      node ← s.pop()   // the popped ancestor is the in-order successor
    else
      node ← fillStack(node)

DYefimov (talk) 01:59, 18 February 2020 (UTC)[reply]

In my opinion the most interesting (tree) traversal is the 1-step-next traversal to the next in-order neighbour either right or left of the current node (=up or down). This also means that visit(node) is completely to the hand of the caller. Does somebody know a source of such an approach? --Nomen4Omen (talk) 16:16, 19 February 2020 (UTC)[reply]


Vertical or Orthogonal Traversal is missing


There are applications of this in computational geometry as well, e.g. https://www.cs.princeton.edu/courses/archive/spr15/cos226/lectures/99GeometricSearch.pdf

Kgashok (talk) 13:13, 7 August 2020 (UTC)[reply]

Generic tree algorithm, in-order nodes visit after last child--bug?


The algorithm presented for the generic tree has it performing an in-order operation as well as a post-order operation after the last child. My expectation is that the in-order operations should occur between children, and only the post-order operation after the last child. Does anyone agree? It could just be an off-by-one error in the loop counter, a.k.a. a fencepost problem. — Preceding unsigned comment added by 2A00:23C4:8585:7D01:BD32:6CC3:3CC1:84D7 (talk) 19:07, 13 January 2021 (UTC)[reply]

Agree!
And changed.
In addition, better distinction between "visit", "perform", and "perform recursively".
But still a source is missing.
Nomen4Omen (talk) 09:44, 14 January 2021 (UTC)[reply]
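
As a sketch of the corrected structure (struct and field names are assumptions, not the article's pseudocode): the in-order action runs only between consecutive children, and the post-order action runs once, after the last child.

    #include <stdio.h>

    struct node {
        int value;
        int nchildren;
        struct node **child;   /* array of nchildren child pointers */
    };

    void traverse(const struct node *n) {
        if (n == NULL) return;
        printf("pre  %d\n", n->value);            /* pre-order action */
        for (int i = 0; i < n->nchildren; i++) {
            traverse(n->child[i]);
            if (i < n->nchildren - 1)
                printf("in   %d\n", n->value);    /* in-order action, between children */
        }
        printf("post %d\n", n->value);            /* post-order action */
    }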

What happened to the actually iterative traversal implementations?


All the current "iterative" traversal implementations use stacks, which makes them disguised recursive implementations. Why were the actual iterative ones (the ones requiring that each node keep a pointer to its parents in addition to children) replaced with those?Medinoc (talk) 10:58, 8 November 2022 (UTC)[reply]

There is an implementation in the threaded binary tree article, it probably got moved. Mathnerd314159 (talk) 16:08, 8 November 2022 (UTC)[reply]

Well I guess that isn't a full implementation. There was an implementation by Nomen4Omen that got deleted because it was unsourced; Nomen4Omen complained above but it doesn't seem to have been resolved. Mathnerd314159 (talk) 19:07, 8 November 2022 (UTC)[reply]
But if there were a reliably sourced implementation using the parent pointer, I would say it would go in the threaded article, because this article really seems to assume that pointers only go down. Mathnerd314159 (talk) 19:30, 8 November 2022 (UTC)[reply]