Inference
Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle (300s BCE). Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, contradistinguishing abduction from induction.
Various fields study how inference is done in practice. Human inference (i.e. how humans draw conclusions) is traditionally studied within the fields of logic, argumentation studies, and cognitive psychology; artificial intelligence researchers develop automated inference systems to emulate human inference. Statistical inference uses mathematics to draw conclusions in the presence of uncertainty. This generalizes deterministic reasoning, with the absence of uncertainty as a special case. Statistical inference uses quantitative or qualitative (categorical) data which may be subject to random variations.
Definition
The process by which a conclusion is inferred from multiple observations is called inductive reasoning. The conclusion may be correct or incorrect, or correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations.
This definition is disputable because of its lack of clarity; compare the Oxford English Dictionary: "induction ... 3. Logic the inference of a general law from particular instances." The definition given thus applies only when the "conclusion" is general.
Two possible definitions of "inference" are:
- A conclusion reached on the basis of evidence and reasoning.
- The process of reaching such a conclusion.
Examples
Example for definition #1
Ancient Greek philosophers defined a number of syllogisms, correct three-part inferences, that can be used as building blocks for more complex reasoning. We begin with a famous example:
- All humans are mortal.
- All Greeks are humans.
- Therefore, all Greeks are mortal.
The reader can check that the premises and conclusion are true, but logic is concerned with inference: does the truth of the conclusion follow from that of the premises?
The validity of an inference depends on the form of the inference. That is, the word "valid" does not refer to the truth of the premises or the conclusion, but rather to the form of the inference. An inference can be valid even if the parts are false, and can be invalid even if some parts are true. But a valid form with true premises will always have a true conclusion.
For example, consider the form of the following argument:
- All meat comes from animals.
- All beef is meat.
- Therefore, all beef comes from animals.
If the premises are true, then the conclusion is necessarily true, too.
Now we turn to an invalid form.
- All A are B.
- All C are B.
- Therefore, all C are A.
To show that this form is invalid, we demonstrate how it can lead from true premises to a false conclusion.
- All apples are fruit. (True)
- All bananas are fruit. (True)
- Therefore, all bananas are apples. (False)
A valid argument with a false premise may lead to a false conclusion (this and the following examples do not follow the Greek syllogism):
- All tall people are French. (False)
- John Lennon was tall. (True)
- Therefore, John Lennon was French. (False)
When a valid argument is used to derive a false conclusion from a false premise, the inference is valid because it follows the form of a correct inference.
A valid argument can also be used to derive a true conclusion from a false premise:
- All tall people are musicians. (Valid, False)
- John Lennon was tall. (Valid, True)
- Therefore, John Lennon was a musician. (Valid, True)
In this case we have one false premise and one true premise, from which a true conclusion has been inferred.
Example for definition #2
Evidence: It is the early 1950s and you are an American stationed in the Soviet Union. You read in the Moscow newspaper that a soccer team from a small city in Siberia starts winning game after game. The team even defeats the Moscow team. Inference: The small city in Siberia is not a small city anymore. The Soviets are working on their own nuclear or high-value secret weapons program.
Knowns: The Soviet Union is a command economy: people and material are told where to go and what to do. The small city was remote and historically had never distinguished itself; its soccer season was typically short because of the weather.
Explanation: In a command economy, people and material are moved where they are needed. Large cities might field good teams due to the greater availability of high quality players; and teams that can practice longer (possibly due to sunnier weather and better facilities) can reasonably be expected to be better. In addition, you put your best and brightest in places where they can do the most good—such as on high-value weapons programs. It is an anomaly for a small city to field such a good team. The anomaly indirectly described a condition by which the observer inferred a new meaningful pattern—that the small city was no longer small. Why would you put a large city of your best and brightest in the middle of nowhere? To hide them, of course.
Incorrect inference
An incorrect inference is known as a fallacy. Philosophers who study informal logic have compiled large lists of them, and cognitive psychologists have documented many biases in human reasoning that favor incorrect reasoning.
Applications
Inference engines
AI systems first provided automated logical inference, and these were once extremely popular research topics, leading to industrial applications in the form of expert systems and later business rule engines. More recent work on automated theorem proving has had a stronger basis in formal logic.
An inference system's job is to extend a knowledge base automatically. The knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by that system to extend the KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at are relevant to its task.
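As a minimal sketch of how a logic-based inference system can extend its KB (written in Prolog notation, which is introduced below; the predicates known/1, rule/2, all_true/1 and extend/0 are invented here purely for illustration):

:- dynamic known/1.
known(man(socrates)).                 % initial KB: one fact
rule(mortal(X), [man(X)]).            % "all men are mortal", stored as a rule with a body
all_true([]).                         % a list of conditions holds if each one is known
all_true([F|Fs]) :- known(F), all_true(Fs).
extend :- rule(Head, Body), all_true(Body),      % find a rule whose body is satisfied
          \+ known(Head), assertz(known(Head)).  % and record its new conclusion

After the query ?- extend. succeeds, the KB also contains known(mortal(socrates)), a conclusion added by a valid inference from what was already known.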
Additionally, the term 'inference' has also been applied to the process of generating predictions from trained neural networks. In this context, an 'inference engine' refers to the system or hardware performing these operations. This type of inference is widely used in applications ranging from image recognition to natural language processing.
Prolog engine
Prolog (for "Programming in Logic") is a programming language based on a subset of predicate calculus. Its main job is to check whether a certain proposition can be inferred from a KB (knowledge base) using an algorithm called backward chaining.
Let us return to our Socrates syllogism. We enter into our Knowledge Base the following piece of code:
mortal(X) :- man(X).
man(socrates).
(Here :- can be read as "if". Generally, if P → Q (if P then Q), then in Prolog we would code Q :- P (Q if P).)
This states that all men are mortal and that Socrates is a man. Now we can ask the Prolog system about Socrates:
?- mortal(socrates).
(where ?- signifies a query: can mortal(socrates) be deduced from the KB using the rules?) gives the answer "Yes".
On the other hand, asking the Prolog system the following:
?- mortal(plato).
gives the answer "No".
This is because Prolog does not know anything about Plato, and hence defaults to any property about Plato being false (the so-called closed world assumption). Finally
?- mortal(X). (Is anything mortal?) would result in "Yes" (and in some implementations: "Yes": X=socrates).
Prolog can be used for vastly more complicated inference tasks. See the corresponding article for further examples.
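For instance (a small sketch tied to the examples above; the predicate and constant names such as greek/1, a1 and b1 are chosen here purely for illustration), the valid Greek syllogism and the invalid apples-and-bananas form can both be encoded and tested:

human(X) :- greek(X).      % all Greeks are humans
mortal(X) :- human(X).     % all humans are mortal
greek(socrates).
fruit(X) :- apple(X).      % all apples are fruit
fruit(X) :- banana(X).     % all bananas are fruit
apple(a1).                 % one particular apple
banana(b1).                % one particular banana

Here ?- mortal(socrates). answers "Yes" by chaining the first two rules, while ?- apple(b1). answers "No": nothing in the knowledge base licenses the invalid step from "all bananas are fruit" to "bananas are apples".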
Semantic web
Automated reasoners have recently found a new field of application in the semantic web. Knowledge expressed using a variant of OWL, which is based upon description logic, can be logically processed, i.e., inferences can be made upon it.
Bayesian statistics and probability logic
Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find the best explanation. The Bayesian view has a number of desirable features—one of them is that it embeds deductive (certain) logic as a subset (this prompts some writers to call Bayesian probability "probability logic", following E. T. Jaynes).
Bayesians identify probabilities with degrees of belief, with certainly true propositions having probability 1, and certainly false propositions having probability 0. To say that "it's going to rain tomorrow" has a 0.9 probability is to say that you consider the possibility of rain tomorrow as extremely likely.
Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see Bayesian decision theory). A central rule of Bayesian inference is Bayes' theorem.
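In symbols (a brief illustration; the numerical values below are made up solely to show the arithmetic), Bayes' theorem reads P(H | E) = P(E | H) × P(H) / P(E), where H is a hypothesis and E the observed evidence. If, say, P(H) = 0.3, P(E | H) = 0.9 and P(E) = 0.45, then P(H | E) = 0.9 × 0.3 / 0.45 = 0.6, so observing E raises the degree of belief in H from 0.3 to 0.6.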
Fuzzy logic
Non-monotonic logic
A relation of inference is monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is non-monotonic. Deductive inference is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added.
By contrast, everyday reasoning is mostly non-monotonic because it involves risk: we jump to conclusions from deductively insufficient premises. We know when it is worth or even necessary (e.g. in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible—that new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of abduction, inference to the best explanation, etc.). More recently logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.
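A small sketch of non-monotonic behaviour (using Prolog's negation as failure; the bird/penguin example and the predicate names are assumptions introduced here for illustration, not drawn from the sources below):

flies(X) :- bird(X), \+ penguin(X).   % birds fly, unless known to be penguins
bird(tweety).
penguin(pingu).                       % a known penguin, so penguin/1 is defined

With this knowledge base, ?- flies(tweety). answers "Yes". Adding the single premise penguin(tweety). makes the same query answer "No": the extra premise undermines the earlier conclusion, which is exactly what a monotonic (deductive) inference relation would never allow.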
See also
- A priori and a posteriori – Two types of knowledge, justification, or argument
- Abductive reasoning – Inference seeking the simplest and most likely explanation
- Deductive reasoning – Form of reasoning
- Inductive reasoning – Method of logical reasoning
- Entailment – Relationship where one statement follows from another
- Epilogism
- Analogy – Cognitive process of transferring information or meaning from a particular subject to another
- Axiom system – Mathematical term; concerning axioms used to derive theorems
- Axiom – Statement that is taken to be true
- Immediate inference – Logical inference from a single statement
- Inferential programming
- Inquiry – Any process that has the aim of augmenting knowledge, resolving doubt, or solving a problem
- Logic – Study of correct reasoning
- Logic of information
- Logical assertion – Statement in a metalanguage
- Logical graph – Type of diagrammatic notation for propositional logic
- Rule of inference – Systematic logical process capable of deriving a conclusion from hypotheses
- List of rules of inference
- Theorem – In mathematics, a statement that has been proven
- Transduction (machine learning) – Type of statistical inference
References
- Fuhrmann, André. Nonmonotonic Logic (PDF). Archived from the original (PDF) on 9 December 2003.
Further reading
- Hacking, Ian (2001). An Introduction to Probability and Inductive Logic. Cambridge University Press. ISBN 978-0-521-77501-4.
- Jaynes, Edwin Thompson (2003). Probability Theory: The Logic of Science. Cambridge University Press. ISBN 978-0-521-59271-0. Archived from the original on 11 October 2004. Retrieved 29 November 2004.
- MacKay, David J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press. ISBN 978-0-521-64298-9.
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2
- Tijms, Henk (2004). Understanding Probability. Cambridge University Press. ISBN 978-0-521-70172-3.
Inductive inference:
- Carnap, Rudolf; Jeffrey, Richard C., eds. (1971). Studies in Inductive Logic and Probability. Vol. 1. The University of California Press.
- Jeffrey, Richard C., ed. (1980). Studies in Inductive Logic and Probability. Vol. 2. The University of California Press. ISBN 9780520038264.
- Angluin, Dana (1976). An Application of the Theory of Computational Complexity to the Study of Inductive Inference (Ph.D.). University of California at Berkeley.
- Angluin, Dana (1980). "Inductive Inference of Formal Languages from Positive Data". Information and Control. 45 (2): 117–135. doi:10.1016/s0019-9958(80)90285-5.
- Angluin, Dana; Smith, Carl H. (September 1983). "Inductive Inference: Theory and Methods" (PDF). Computing Surveys. 15 (3): 237–269. doi:10.1145/356914.356918. S2CID 3209224.
- Gabbay, Dov M.; Hartmann, Stephan; Woods, John, eds. (2009). Inductive Logic. Handbook of the History of Logic. Vol. 10. Elsevier. ISBN 978-0-444-52936-7.
- Goodman, Nelson (1983). Fact, Fiction, and Forecast. Harvard University Press. ISBN 9780674290716.
Abductive inference:
- O'Rourke, P.; Josephson, J., eds. (1997). Automated abduction: Inference to the best explanation. AAAI Press.
- Psillos, Stathis (2009). "An Explorer upon Untrodden Ground: Peirce on Abduction" (PDF). In Gabbay, Dov M.; Hartmann, Stephan; Woods, John (eds.). Inductive Logic. Handbook of the History of Logic. Vol. 10. Elsevier. pp. 117–152. doi:10.1016/B978-0-444-52936-7.50004-5. ISBN 978-0-444-52936-7.
- Ray, Oliver (December 2005). Hybrid Abductive Inductive Learning (Ph.D.). University of London, Imperial College. CiteSeerX 10.1.1.66.1877.
Psychological investigations about human reasoning:
- deductive:
- Johnson-Laird, Philip Nicholas; Byrne, Ruth M. J. (1992). Deduction. Erlbaum.
- Byrne, Ruth M. J.; Johnson-Laird, P. N. (2009). ""If" and the Problems of Conditional Reasoning" (PDF). Trends in Cognitive Sciences. 13 (7): 282–287. doi:10.1016/j.tics.2009.04.003. PMID 19540792. S2CID 657803. Archived from the original (PDF) on 7 April 2014. Retrieved 9 August 2013.
- Knauff, Markus; Fangmeier, Thomas; Ruff, Christian C.; Johnson-Laird, P. N. (2003). "Reasoning, Models, and Images: Behavioral Measures and Cortical Activity" (PDF). Journal of Cognitive Neuroscience. 15 (4): 559–573. CiteSeerX 10.1.1.318.6615. doi:10.1162/089892903321662949. hdl:11858/00-001M-0000-0013-DC8B-C. PMID 12803967. S2CID 782228. Archived from the original (PDF) on 18 May 2015. Retrieved 9 August 2013.
- Johnson-Laird, Philip N. (1995). Gazzaniga, M. S. (ed.). Mental Models, Deductive Reasoning, and the Brain (PDF). MIT Press. pp. 999–1008.
- Khemlani, Sangeet; Johnson-Laird, P. N. (2008). "Illusory Inferences about Embedded Disjunctions" (PDF). Proceedings of the 30th Annual Conference of the Cognitive Science Society. Washington/DC. pp. 2128–2133.
- statistical:
- McCloy, Rachel; Byrne, Ruth M. J.; Johnson-Laird, Philip N. (2009). "Understanding Cumulative Risk" (PDF). The Quarterly Journal of Experimental Psychology. 63 (3): 499–515. doi:10.1080/17470210903024784. PMID 19591080. S2CID 7741180. Archived from the original (PDF) on 18 May 2015. Retrieved 9 August 2013.
- Johnson-Laird, Philip N. (1994). "Mental Models and Probabilistic Thinking" (PDF). Cognition. 50 (1–3): 189–209. doi:10.1016/0010-0277(94)90028-0. PMID 8039361. S2CID 9439284.
- analogical:
- Burns, B. D. (1996). "Meta-Analogical Transfer: Transfer Between Episodes of Analogical Reasoning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 22 (4): 1032–1048. doi:10.1037/0278-7393.22.4.1032.
- spatial:
- Jahn, Georg; Knauff, Markus; Johnson-Laird, P. N. (2007). "Preferred mental models in reasoning about spatial relations" (PDF). Memory & Cognition. 35 (8): 2075–2087. doi:10.3758/bf03192939. PMID 18265622. S2CID 25356700.
- Knauff, Markus; Johnson-Laird, P. N. (2002). "Visual imagery can impede reasoning" (PDF). Memory & Cognition. 30 (3): 363–371. doi:10.3758/bf03194937. PMID 12061757. S2CID 7330724.
- Waltz, James A.; Knowlton, Barbara J.; Holyoak, Keith J.; Boone, Kyle B.; Mishkin, Fred S.; de Menezes Santos, Marcia; Thomas, Carmen R.; Miller, Bruce L. (March 1999). "A System for Relational Reasoning in Human Prefrontal Cortex". Psychological Science. 10 (2): 119–125. doi:10.1111/1467-9280.00118. S2CID 44019775.
- moral:
- Bucciarelli, Monica; Khemlani, Sangeet; Johnson-Laird, P. N. (February 2008). "The Psychology of Moral Reasoning" (PDF). Judgment and Decision Making. 3 (2): 121–139. doi:10.1017/S1930297500001479. S2CID 327124.