Genetic programming
In artificial intelligence, genetic programming (GP) is an evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. In essence, GP consists of a set of instructions and a fitness function that measures how well a candidate program performs the task. It is a specialization of genetic algorithms (GA) in which each individual is a computer program. It is a machine learning technique used to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task.
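The following is a minimal, self-contained sketch of such a run in Python. It is illustrative only: the function names `random_tree`, `evaluate`, `fitness`, `mutate`, and `evolve`, the nested-list program encoding, and the target function f(x) = x*x + x are all assumptions of this sketch, not part of any particular GP system. For brevity it uses mutation only; subtree crossover is sketched under Genetic operators below.

```python
import copy
import operator
import random

# A minimal GP sketch (illustrative only): evolve expression trees,
# stored as nested lists such as ["add", "x", 1], to fit the target
# function f(x) = x*x + x on a handful of training cases.
OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}
TERMINALS = ["x", 1, 2, 3]
CASES = [(x, x * x + x) for x in range(-5, 6)]   # the user-defined task

def random_tree(depth, rng):
    """Grow a random expression tree of at most the given depth."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    return [rng.choice(list(OPS)),
            random_tree(depth - 1, rng),
            random_tree(depth - 1, rng)]

def evaluate(tree, x):
    """Recursively evaluate a tree for a given value of x."""
    if isinstance(tree, list):
        return OPS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))
    return x if tree == "x" else tree

def fitness(tree):
    """Lower is better: total absolute error over the training cases."""
    return sum(abs(evaluate(tree, x) - y) for x, y in CASES)

def mutate(tree, rng):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, list) or rng.random() < 0.3:
        return random_tree(2, rng)
    tree = copy.deepcopy(tree)
    i = rng.randrange(1, 3)
    tree[i] = mutate(tree[i], rng)
    return tree

def evolve(pop_size=200, generations=30, seed=0):
    """The GP loop: rank by fitness, keep the better half, refill by mutation."""
    rng = random.Random(seed)
    population = [random_tree(3, rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:          # exact fit found
            break
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(rng.choice(survivors), rng)
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=fitness)

best = evolve()
print(best, fitness(best))
```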
History
GP has its roots in the evolutionary algorithms first applied by Nils Aall Barricelli in 1954 to evolutionary simulations.[1] In the 1960s and early 1970s, evolutionary algorithms became widely recognized as optimization methods. Ingo Rechenberg and his group were able to solve complex engineering problems through evolution strategies, as documented in his 1971 PhD thesis and the resulting 1973 book. John Holland was highly influential during the 1970s.
In 1964, Lawrence J. Fogel, one of the earliest practitioners of the GP methodology, applied evolutionary algorithms to the problem of discovering finite-state automata. Later GP-related work grew out of the learning classifier system community, which developed sets of sparse rules describing optimal policies for Markov decision processes. In 1981, Richard Forsyth evolved tree rules to classify heart disease.[2] The first statement of modern "tree-based" genetic programming (that is, procedural languages organized in tree-based structures and operated on by suitably defined GA-operators) was given by Nichael L. Cramer (1985).[3] This work was later greatly expanded by John R. Koza, a main proponent of GP who has pioneered the application of genetic programming in various complex optimization and search problems.[4] Gianna Giavelli, a student of Koza's, later pioneered the use of genetic programming as a technique to model DNA expression.[5]
In the 1990s, GP was mainly used to solve relatively simple problems because it is very computationally intensive. Recently GP has produced many novel and outstanding results in areas such as quantum computing, electronic design, game playing, sorting, and searching, due to improvements in GP technology and the exponential growth in CPU power.[6] These results include the replication or development of several post-year-2000 inventions. GP has also been applied to evolvable hardware as well as computer programs.
Developing a theory for GP has been very difficult and so in the 1990s GP was considered a sort of outcast among search techniques.[citation needed]
Program representation
GP evolves computer programs, traditionally represented in memory as tree structures.[3] Trees can be easily evaluated in a recursive manner. Every tree node has an operator function and every terminal node has an operand, making mathematical expressions easy to evolve and evaluate. Thus traditionally GP favors the use of programming languages that naturally embody tree structures (for example, Lisp; other functional programming languages are also suitable).
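As a concrete illustration of this representation, here is a small Python sketch (the class names `Node` and `Terminal` are inventions for this example, not a standard GP API) in which every internal node holds an operator function, every leaf holds an operand, and evaluation proceeds recursively:

```python
import operator

# Hypothetical minimal classes for illustration; not a standard GP API.
class Terminal:
    """A leaf node holding an operand: a constant or a variable name."""
    def __init__(self, value):
        self.value = value

    def evaluate(self, env):
        return env[self.value] if isinstance(self.value, str) else self.value

class Node:
    """An internal node holding an operator function and its child subtrees."""
    def __init__(self, op, children):
        self.op = op
        self.children = children

    def evaluate(self, env):
        # Recursive evaluation: evaluate each child, then apply the operator.
        return self.op(*(child.evaluate(env) for child in self.children))

# The expression (x + 3) * x as a tree, evaluated at x = 2.
tree = Node(operator.mul,
            [Node(operator.add, [Terminal("x"), Terminal(3)]),
             Terminal("x")])
print(tree.evaluate({"x": 2}))   # prints 10
```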
Non-tree representations have been suggested and successfully implemented, such as linear genetic programming, which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus uses automatic induction of binary machine code ("AIM")[7] to achieve better performance. µGP[8] uses directed multigraphs to generate programs that fully exploit the syntax of a given assembly language.
Genetic operators
The main operators used in evolutionary algorithms such as GP are crossover and mutation.
Crossover
Crossover is applied to an individual by swapping one of its nodes with a node from another individual in the population. With a tree-based representation, replacing a node means replacing the whole subtree rooted at it, which makes the crossover operator effective at exchanging meaningful program fragments. The expressions resulting from crossover can be very different from their parents.
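A sketch of subtree crossover in Python follows; the helper names `subtree_paths`, `get_subtree`, `set_subtree`, and `crossover` are assumptions of this example, and programs are encoded as nested lists for brevity rather than taken from any particular GP library.

```python
import copy
import random

# Programs encoded as nested lists: ["add", "x", 3] stands for x + 3.

def subtree_paths(tree, path=()):
    """List the paths of all subtrees; a path is a tuple of child indices."""
    paths = [path]
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            paths.extend(subtree_paths(child, path + (i,)))
    return paths

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def set_subtree(tree, path, new):
    if not path:                       # replacing the root
        return new
    parent = get_subtree(tree, path[:-1])
    parent[path[-1]] = new
    return tree

def crossover(parent_a, parent_b, rng=random):
    """Swap a randomly chosen subtree of one parent with one of the other."""
    child_a, child_b = copy.deepcopy(parent_a), copy.deepcopy(parent_b)
    path_a = rng.choice(subtree_paths(child_a))
    path_b = rng.choice(subtree_paths(child_b))
    sub_a = copy.deepcopy(get_subtree(child_a, path_a))
    sub_b = copy.deepcopy(get_subtree(child_b, path_b))
    return set_subtree(child_a, path_a, sub_b), set_subtree(child_b, path_b, sub_a)

a = ["mul", ["add", "x", 3], "x"]      # (x + 3) * x
b = ["sub", "x", ["mul", 2, "x"]]      # x - 2 * x
print(crossover(a, b, random.Random(0)))
```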
Mutation
Mutation affects a single individual in the population. It can replace a whole subtree of the selected individual, or it can change just the information stored in a node (for example, exchanging one operator or terminal for another). To maintain program integrity, the operation must either be fail-safe or take into account the type of information the node holds: for instance, a mutation applied to a binary operation node must preserve its two arguments, or the evaluator must be able to handle missing values.
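Below is a sketch of a type-aware point mutation under the same nested-list encoding used in the crossover example; the names `BINARY_OPS`, `TERMINALS`, and `point_mutate` are illustrative only. A binary operator is only ever replaced by another binary operator, and a terminal by another terminal, so the mutated tree remains a valid program.

```python
import copy
import random

BINARY_OPS = ["add", "sub", "mul"]     # every operator takes exactly two arguments
TERMINALS = ["x", 1, 2, 3]

def point_mutate(tree, rate=0.2, rng=random):
    """Point mutation: replace a node's information with information of the
    same kind, keeping operator arity and terminal positions intact."""
    tree = copy.deepcopy(tree)

    def walk(node):
        if isinstance(node, list):
            if rng.random() < rate:
                node[0] = rng.choice(BINARY_OPS)   # swap only the operator label
            for i in range(1, len(node)):
                node[i] = walk(node[i])
            return node
        # a terminal node
        return rng.choice(TERMINALS) if rng.random() < rate else node

    return walk(tree)

program = ["mul", ["add", "x", 3], "x"]            # (x + 3) * x
print(point_mutate(program, rng=random.Random(1)))
```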
Other approaches
The basic ideas of genetic programming have been modified and extended in a variety of ways:
- Extended Compact Genetic Programming (ECGP)
- Embedded Cartesian Genetic Programming (ECGP)
- Probabilistic Incremental Program Evolution (PIPE)
- Strongly Typed Genetic Programming (STGP)
Meta-Genetic Programming
Meta-Genetic Programming is the proposed meta-learning technique of evolving a genetic programming system using genetic programming itself. It suggests that, because chromosomes, crossover, and mutation themselves evolved in nature, their artificial counterparts should likewise be allowed to change on their own rather than being determined by a human programmer. Meta-GP was formally proposed by Jürgen Schmidhuber in 1987.[9] Doug Lenat's Eurisko is an earlier effort that may be the same technique. Meta-GP is a recursive but terminating algorithm, allowing it to avoid infinite recursion.
Critics of this idea often say this approach is overly broad in scope. However, it might be possible to constrain the fitness criterion to a general class of results, and so obtain an evolved GP that would more efficiently produce results for sub-classes. This might take the form of a meta-evolved GP for producing human walking algorithms, which is then used to evolve human running, jumping, etc. The fitness criterion applied to the meta-GP would simply be one of efficiency.
For general problem classes, there may be no way to show that meta-GP will reliably produce results more efficiently than a hand-designed algorithm, other than by exhaustive testing. The same holds for standard GP and other search algorithms.
See also
- Bio-inspired computing
- Gene expression programming
- Genetic representation
- Grammatical evolution
- Fitness approximation
- Inductive programming
- Linear genetic programming
- Multi expression programming
- Propagation of schema
References and notes
- ^ Barricelli, Nils (1954). "Esempi numerici di processi di evoluzione" [Numerical examples of evolution processes]. Methodos (in Italian). 6 (21–22): 45–68.
- ^ Kybernetes 1981. doi:10.1108/eb005587.
- ^ a b Cramer, 1985
- ^ genetic-programming.com-Home-Page
- ^ The Genetic Coding of Behavioral Attributes in Cellular Automata. Artificial Life at Stanford 1994 Stanford, California, 94305-3079 USA.
- ^ humancompetitive
- ^ Peter Nordin, 1997; Banzhaf et al., 1998, Sections 11.6.2–11.6.3.
- ^ MicroGP page on SourceForge, complete with tutorials and wiki
- ^ Schmidhuber, Jürgen (1987). 1987 thesis on learning how to learn, metalearning, meta genetic programming, credit-conserving machine learning economy.
External links
- Riccardo Poli, William B. Langdon, Nicholas F. McPhee, John R. Koza, "A Field Guide to Genetic Programming" (2008)
- Aymen S Saket & Mark C Sinclair
- The Hitch-Hiker's Guide to Evolutionary Computation
- GP bibliography