Longtermism

From Wikipedia, the free encyclopedia
{{Short description|Philosophical view which prioritises the long-term future}}
{{multiple image
| align = right
| total_width = 460
| image1 = Illustration of contemporary and past human populations Our World in Data.png
| caption1 = Comparing the number of human lives in the past and present
| image2 = Illustration of past, present and future population sizes (Our World in Data).png
| caption2 = Illustrating the potential number of future human lives
}}
'''Longtermism''' is the [[ethics|ethical view]] that positively influencing the long-term [[future]] is a key moral priority of our time. It is an important concept in [[effective altruism]] and a primary motivation for efforts that aim to reduce [[Global catastrophic risk|existential risks]] to humanity.<ref name=":10">{{Cite web |last=Steele |first=Katie |date=2022-12-19 |title=Longtermism – why the million-year philosophy can't be ignored |url=http://theconversation.com/longtermism-why-the-million-year-philosophy-cant-be-ignored-193538 |access-date=2024-07-22 |website=The Conversation |language=en-US}}</ref><ref name=":9" />

The key argument for longtermism has been summarized as follows: "[[Future generations|future people]] matter morally just as much as people alive today;{{Nbsp}}... there may well be more people alive in the future than there are in the present or have been in the past; and{{Nbsp}}... we can positively affect future peoples' lives."<ref>{{Cite web|last=Samuel|first=Sigal|date=2021-11-03|title=Would you donate to a charity that won't pay out for centuries?|url=https://www.vox.com/future-perfect/2021/11/3/22760718/patient-philanthropy-fund-charity-longtermism|work=Vox|accessdate=2021-11-13}}</ref><ref name=":11">{{Cite news |last=Setiya |first=Kieran |date=August 15, 2022 |title=The New Moral Mathematics |url=https://www.bostonreview.net/articles/the-new-moral-mathematics/ |work=Boston Review}}</ref> These three ideas taken together suggest, to those advocating longtermism, that it is the responsibility of those living now to ensure that future generations get to survive and flourish.<ref name=":11" />

== Definition ==
Philosopher [[William MacAskill]] defines ''longtermism'' as "the view that positively influencing the longterm future is a key moral priority of our time".<ref name=":10" /><ref name=":04" />{{Rp|page=4}} He distinguishes it from ''strong longtermism'', "the view that positively influencing the longterm future is ''the'' key moral priority of our time".<ref name=":1">{{Cite journal |last=MacAskill |first=William |date=2019-07-25 |title=Longtermism |url=https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism |journal=Effective Altruism Forum}}</ref><ref name=":9">{{Cite web |last=Samuel |first=Sigal |date=2022-09-06 |title=Effective altruism's most controversial idea |url=https://www.vox.com/future-perfect/23298870/effective-altruism-longtermism-will-macaskill-future |access-date=2024-07-14 |website=Vox |language=en-US}}</ref>

In his book ''[[The Precipice: Existential Risk and the Future of Humanity]]'', philosopher [[Toby Ord]] describes longtermism as follows: "longtermism{{Nbsp}}... is especially concerned with the impacts of our actions upon the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity's potential is one avenue for such a lasting impact and there may be others too."<ref name="precipice">{{Cite book |last=Ord |first=Toby |url=https://theprecipice.com/ |title=The Precipice: Existential Risk and the Future of Humanity |publisher=Bloomsbury Publishing |year=2020 |isbn=978-1-5266-0021-9 |location=London |oclc=1143365836 |author-link=Toby Ord}}</ref>{{rp|pages=52–53}} In addition, Ord notes that "longtermism is animated by a moral re-orientation toward the vast future that existential risks threaten to foreclose."<ref name="precipice" />{{rp|pages=52–53}}

Because it is generally infeasible to use traditional research techniques such as randomized controlled trials to analyze existential risks, researchers such as [[Nick Bostrom]] have used methods such as expert opinion elicitation to estimate their importance.<ref name="xrisk methods">{{cite journal |last1=Rowe |first1=Thomas |last2=Beard |first2=Simon |title=Probabilities, methodologies and the evidence base in existential risk assessments |journal=Working Paper, Centre for the Study of Existential Risk |date=2018 |url=http://eprints.lse.ac.uk/89506/1/Beard_Existential-Risk-Assessments_Accepted.pdf |access-date=26 August 2018 |archive-date=27 August 2018 |archive-url=https://web.archive.org/web/20180827075254/http://eprints.lse.ac.uk/89506/1/Beard_Existential-Risk-Assessments_Accepted.pdf |url-status=live }}</ref> Ord offered probability estimates for a number of existential risks in ''The Precipice''.<ref name="precipice" />{{Rp|page=167}}

== History ==
The term "longtermism" was coined around 2017 by Oxford philosophers William MacAskill and Toby Ord. The view draws inspiration from the work of Nick Bostrom, Nick Beckstead, and others.<ref name=":1" /><ref name=":10" /> While its coinage is relatively new, some aspects of longtermism have been thought about for centuries. The oral constitution of the Iroquois Confederacy, the [[Great Law of Peace|Gayanashagowa]], encourages all decision-making to “have always in view not only the present but also the coming generations”.<ref>[https://sourcebooks.fordham.edu/mod/iroquois.asp Constitution of the Iroquois Nations], 1910.</ref> This has been interpreted to mean that decisions should be made so as to be of benefit to the [[Seven generation sustainability|seventh generation in the future]].<ref>{{Cite web |last=Lyons |first=Oren |date=October 2004 |title=The Ice is Melting |url=https://centerforneweconomics.org/publications/the-ice-is-melting/ |url-status=live |archive-url=https://web.archive.org/web/20220511094507/https://centerforneweconomics.org/publications/the-ice-is-melting/ |archive-date=11 May 2022 |website=Center for New Economics}}</ref> These ideas have re-emerged in contemporary thought with thinkers such as [[Derek Parfit]] in his 1984 book ''[[Reasons and Persons]]'', and [[Jonathan Schell]] in his 1982 book ''[[The Fate of the Earth]]''.

== Community ==
Longtermist ideas have given rise to a community of individuals and organizations working to protect the interests of future generations.<ref>{{Cite web |last=Samuel |first=Sigal |date=2021-07-02 |title=What we owe to future generations |url=https://www.vox.com/future-perfect/22552963/how-to-be-a-good-ancestor-longtermism-climate-change |work=Vox |accessdate=2021-11-27}}</ref> Organizations working on longtermist topics include Cambridge University's [[Centre for the Study of Existential Risk]], the [[Future of Life Institute]], the Global Priorities Institute, the Stanford Existential Risks Initiative,<ref>{{Cite web |last=MacAskill |first=William |date=8 August 2022 |title=What is longtermism? |url=https://www.bbc.com/future/article/20220805-what-is-longtermism-and-why-does-it-matter |access-date=2024-07-14 |website=BBC |language=en-GB}}</ref> [[80,000 Hours]],<ref>{{Cite web |title=About us: what do we do, and how can we help? |url=https://80000hours.org/about/ |access-date=2021-11-28 |website=80,000 Hours |language=en-US}}</ref> [[Open Philanthropy (organization)|Open Philanthropy]],<ref>{{Cite web |date=2016-03-02 |title=Global Catastrophic Risks |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks |access-date=2021-11-28 |website=Open Philanthropy |language=en}}</ref> The Forethought Foundation,<ref>{{Cite web |date=2022 |title=About Us – Forethought Foundation |url=https://www.forethought.org/about-us |website=Forethought Foundation for Global Priorities Research}}</ref> and Longview Philanthropy.<ref>{{Cite web |last=Matthews |first=Dylan |date=2022-08-08 |title=How effective altruism went from a niche movement to a billion-dollar force |url=https://www.vox.com/future-perfect/2022/8/8/23150496/effective-altruism-sam-bankman-fried-dustin-moskovitz-billionaire-philanthropy-crytocurrency |access-date=2022-08-27 |website=Vox |language=en}}</ref>

== Implications for action ==
Researchers studying longtermism believe that we can improve the long-term future in two ways: "by averting [[Global catastrophic risk|permanent catastrophes]], thereby ensuring civilisation’s survival; or by changing civilisation’s trajectory to make it better while it lasts.{{Nbsp}}Broadly, ensuring survival increases the quantity of future life; trajectory changes increase its quality".<ref name=":04"/>{{Rp|page=|pages=35–36}}<ref name=":3">{{Cite book |last=Beckstead |first=Nick |url=https://drive.google.com/file/d/0B8P94pg6WYCIc0lXSUVYS1BnMkE/view?resourcekey=0-nk6wM1QIPl0qWVh2z9FG4Q |title=On The Overwhelming Importance of Shaping the Far Future |publisher=New Brunswick Rutgers, The State University of New Jersey |year=2013 |chapter=Chapter 1.1.2: How could we affect the far future?}}</ref>

=== Existential risks ===
{{Main|Global catastrophic risk|Global catastrophe scenarios}}
An [[Global catastrophic risk|existential risk]] is "a risk that threatens the destruction of humanity’s longterm potential",<ref name="precipice" />{{rp|page=59}} including risks which cause [[human extinction]] or permanent [[societal collapse]]. Examples of these risks include [[Nuclear warfare|nuclear war]], natural and engineered [[pandemic]]s, [[climate change and civilizational collapse]], stable global [[totalitarianism]], and emerging technologies like [[Existential risk from artificial general intelligence|artificial intelligence]] and [[nanotechnology]].<ref name="precipice" />{{rp|pages=213–214}} Reducing any of these risks may significantly improve the future over long timescales by increasing the number and quality of future lives.<ref name=":3" /><ref>{{Cite journal|last=Bostrom|first=Nick|date=2013|title=Existential Risk Prevention as Global Priority|url=https://doi.org/10.1111/1758-5899.12002|journal=Global Policy|volume=4|issue=1|pages=15–31|doi=10.1111/1758-5899.12002 }}</ref> Consequently, advocates of longtermism argue that humanity is at a crucial moment in its history where the choices made this century may shape its entire future.<ref name="precipice" />{{Rp|pages=3–4}}

Proponents of longtermism have pointed out that humanity spends less than 0.001% of the gross world product annually on longtermist causes (i.e., activities explicitly meant to positively influence the long-term future of humanity).<ref>{{Cite web |last=Moorhouse |first=Fin |date=2021 |title=Longtermism - Frequently Asked Questions: Is longtermism asking that we make enormous sacrifices for the future? Isn't that unreasonably demanding? |url=https://longtermism.com/faq#is-longtermism-asking-that-we-make-enormous-sacrifices-for-the-future-isnt-that-unreasonably-demanding |url-status=live |archive-url=https://web.archive.org/web/20220714205821/https://longtermism.com/faq |archive-date=14 July 2022 |website=Longtermism.com}}</ref> This is less than 5% of the amount spent annually on ice cream in the U.S., leading Toby Ord to argue that humanity should “start by spending more on protecting our future than we do on ice cream, and decide where to go from there”.<ref name="precipice" />{{Rp|pages=58, 63}}

Existential risks are extreme examples of what researchers call a "trajectory change".<ref name=":3" /> However, there might be other ways to positively influence how the future will unfold. Economist [[Tyler Cowen]] argues that increasing the rate of [[economic growth]] is a top moral priority because it will make future generations wealthier.<ref>{{Cite book|last=Cowen|first=Tyler|url=https://press.stripe.com/stubborn-attachments|title=Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals|publisher=Stripe Press|year=2018|isbn=978-1732265134}}</ref> Other researchers think that improving institutions like [[Sovereign state|national governments]] and [[international governance]] bodies could bring about positive trajectory changes.<ref>{{Cite book|last1=John|first1=Tyler M.|url=https://philpapers.org/go.pl?id=CARTLV-2&u=https%3A%2F%2Fphilpapers.org%2Farchive%2FCARTLV-2.pdf|title=The Long View|last2=MacAskill|first2=William|publisher=FIRST Strategic Insight|year=2021|isbn=978-0-9957281-8-9|editor-last=Cargill|editor-first=Natalie|location=UK|chapter=Longtermist institutional reform|editor-last2=John|editor-first2=Tyler M.}}</ref>

Another way to achieve a trajectory change is by changing societal values.<ref name=":7">{{Cite journal |last=Reese |first=Jacy |date=20 February 2018 |title=Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment |url=https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial |website=Effective Altruism Forum}}</ref> William MacAskill argues that humanity should not expect positive value changes to happen by default.<ref name=":11" /> He uses the abolition of slavery as an example, which historians like Christopher Leslie Brown consider to be a historical contingency rather than an inevitable event.<ref name=":11" /> Brown has argued that a moral revolution made slavery unacceptable at a time when it was still hugely profitable.<ref name=":6">{{Cite book |last=Leslie Brown |first=Christopher |title=Moral Capital: Foundations of British Abolitionism |publisher=UNC Press |year=2006 |isbn=978-0-8078-5698-7}}</ref> MacAskill suggests that abolition may be a turning point in the entirety of human history, with the practice unlikely to return.<ref name=":5">{{Cite web |last=MacAskill |first=William |author-link=William MacAskill |date=23 May 2022 |title=Will MacAskill on balancing frugality with ambition, whether you need longtermism, and mental health under pressure |url=https://80000hours.org/podcast/episodes/will-macaskill-ambition-longtermism-mental-health/ |website=80,000 Hours}}</ref> For this reason, bringing about positive value changes in society may be one way in which the present generation can positively influence the long-run future.

=== Living at a pivotal time ===
Longtermists argue that we live at a pivotal moment in human history. [[Derek Parfit]] wrote that we "live during the hinge of history"<ref>{{Cite book |last=Parfit |first=Derek |title=On What Matters, Volume 2 |publisher=Oxford University Press |year=2011 |location=Oxford |pages=611 |author-link=Derek Parfit}}</ref> and William MacAskill states that "the world’s long-run fate depends in part on the choices we make in our lifetimes"<ref name=":04">{{Cite book |last=MacAskill |first=William |title=What We Owe the Future |title-link=What We Owe the Future |publisher=Basic Books |year=2022 |isbn=978-1-5416-1862-6 |location=New York |author-link=William MacAskill}}</ref>{{Rp|page=6|pages=}} since "society has not yet settled down into a stable state, and we are able to influence which stable state we end up in".<ref name=":04"/>{{Rp|page=28|pages=}}

According to Fin Moorhouse, for most of human history, it was not clear how to positively influence the very long-run future.<ref name=":2">{{Cite web |last=Moorhouse |first=Fin |date=2021 |title=Longtermism - Frequently Asked Questions: Isn't much of longtermism obvious? Why are people only just realising all this? |url=https://longtermism.com/faq#isnt-much-of-longtermism-obvious-why-are-people-only-just-realising-all-this |url-status=live |archive-url=https://web.archive.org/web/20220714205821/https://longtermism.com/faq |archive-date=14 July 2022 |website=Longtermism.com}}</ref> However, two relatively recent developments may have changed this. Developments in technology, such as [[nuclear weapon]]s, have, for the first time, given humanity the power to annihilate itself, which would impact the long-term future by preventing the existence and flourishing of future generations.<ref name=":2" /> At the same time, progress made in the physical and social sciences has given humanity the ability to more accurately predict (at least some) of the long-term effects of the actions taken in the present.<ref name=":2" />


MacAskill also notes that our present time is highly unusual in that "we live in an era that involves an extraordinary amount of change"<ref name=":04"/>{{Rp|page=26}}—both relative to the past (where rates of economic and technological progress were very slow) and to the future (since current growth rates cannot continue for long before hitting physical limits).<ref name=":04"/>{{Rp|page=|pages=26–28}}


== Theoretical considerations ==


=== Moral theory ===
Longtermism has been defended by appealing to various moral theories.<ref>{{Cite web|last=Crary|first=Alice|date=2023|title=The Toxic Ideology of Longtermism|url=https://www.radicalphilosophy.com/commentary/the-toxic-ideology-of-longtermism|accessdate=2023-04-11|website=Radical Philosophy 214 |language=en}}</ref> [[Utilitarianism]] may motivate longtermism given the importance it places on pursuing the greatest good for the greatest number, with future generations expected to be the vast majority of all people to ever exist.<ref name=":9" /><ref>{{Cite web |last1=MacAskill |first1=William |author-link=William MacAskill |last2=Meissner |first2=Darius |date=2022 |title=Utilitarianism and Practical Ethics – Longtermism: Expanding the Moral Circle Across Time |url=https://www.utilitarianism.net/utilitarianism-and-practical-ethics#longtermism |url-status=live |archive-url=https://web.archive.org/web/20220616031428/https://www.utilitarianism.net/utilitarianism-and-practical-ethics |archive-date=16 June 2022 |website=An Introduction to Utilitarianism}}</ref> [[Consequentialism|Consequentialist moral theories]] such as utilitarianism may generally be sympathetic to longtermism since whatever the theory considers morally valuable, there is likely going to be much more of it in the future than in the present.<ref>{{Cite web |last=Todd |first=Benjamin |date=2017 |title=Longtermism: the moral significance of future generations |url=https://80000hours.org/articles/future-generations/ |website=80,000 Hours}}</ref>


However, other non-consequentialist moral frameworks may also inspire longtermism. For instance, Toby Ord considers the responsibility that the present generation has towards future generations as grounded in the hard work and sacrifices made by past generations.<ref name="precipice" /> He writes:<ref name="precipice" />{{Rp|page=42}} <blockquote>Because the arrow of time makes it so much easier to help people who come after you than people who come before, the best way of understanding the partnership of the generations may be asymmetrical, with duties all flowing forwards in time—paying it forwards. On this view, our duties to future generations may thus be grounded in the work our ancestors did for us when we were future generations.</blockquote>


=== Evaluating effects on the future ===
In his book ''[[What We Owe the Future]]'', William MacAskill discusses how individuals can shape the course of history. He introduces a three-part framework for thinking about effects on the future, which states that the long-term value of an outcome we may bring about depends on its ''significance'', ''persistence'', and ''contingency''.<ref name=":04"/>{{Rp|page=|pages=31–33}} He explains that significance "is the average value added by bringing about a certain state of affairs", persistence means "how long that state of affairs lasts, once it has been brought about", and contingency "refers to the extent to which the state of affairs depends on an individual’s action".<ref name=":04"/>{{Rp|page=32|pages=}} Moreover, MacAskill acknowledges the pervasive uncertainty, both moral and empirical, that surrounds longtermism and offers four lessons to help guide attempts to improve the long-term future: taking robustly good actions, building up options, learning more, and avoiding causing harm.<ref name=":04"/>


=== Population ethics ===
[[Population ethics]] plays an important part in longtermist thinking. Many advocates of longtermism accept the [[Average and total utilitarianism|total view]] of population ethics, on which bringing more happy people into existence is good, all other things being equal. Accepting such a view makes the case for longtermism particularly strong because the fact that there could be huge numbers of future people means that improving their lives and, crucially, ensuring that those lives happen at all, has enormous value.<ref name=":9" /><ref name=":4">{{Cite journal |last1=Greaves |first1=Hilary |last2=MacAskill |first2=William |date=2021 |title=The case for strong longtermism |url=https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/ |url-status=live |journal=Global Priorities Institute Working Paper |volume=5 |archive-url=https://web.archive.org/web/20220709010826/https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/ |archive-date=9 July 2022}}</ref>


=== Other sentient beings ===
{{See also|Speciesism|Animal welfare}}
Longtermism is often discussed in relation to the interests of future generations of humans. However, some proponents of longtermism also put high moral value on the interests of non-human beings.<ref>{{Cite web |last=Baumann |first=Tobias |date=2020 |title=Longtermism and animal advocacy |url=https://centerforreducingsuffering.org/longtermism-and-animal-advocacy/ |url-status=live |archive-url=https://web.archive.org/web/20220714215808/https://centerforreducingsuffering.org/longtermism-and-animal-advocacy/ |archive-date=14 July 2022 |website=Center for Reducing Suffering}}</ref> From this perspective, [[Moral circle expansion|expanding humanity's moral circle]] to other [[sentient beings]] may be a particularly important longtermist cause area, notably because a moral norm of caring about the suffering of non-human life might persist for a very long time if it becomes widespread.<ref name=":7" />
=== Time preference ===
Effective altruism promotes the idea of moral impartiality, suggesting that people’s worth does not diminish simply because they live in a different location. Longtermists like MacAskill extend this principle by proposing that "distance in time is like distance in space".<ref>{{Cite web |last=Godfrey-Smith |first=Peter |date=2022-11-12 |title=Is Longtermism Such a Big Deal? |url=https://foreignpolicy.com/2022/11/12/longtermism-william-macaskill-book-elon-musk-philosophy-ethics/ |access-date=2024-09-23 |website=Foreign Policy |language=en-US}}</ref><ref>{{Cite news |date=November 22, 2022 |title=What is long-termism? |url=https://www.economist.com/the-economist-explains/2022/11/22/what-is-long-termism |access-date=2024-09-23 |work=The Economist |issn=0013-0613}}</ref> Longtermists generally reject the notion of a [[pure time preference]], which values future benefits less simply because they occur later.

When evaluating future benefits, economists typically use a [[social discount rate]], under which the value of future benefits declines [[Exponential decay|exponentially]] with how far in the future they occur. In the standard Ramsey model used in economics, the social discount rate <math>\rho</math> is given by:
: <math>\rho = \eta g + \delta,</math>


where <math>\eta</math> is the [[elasticity of marginal utility of consumption]], <math>g</math> is the [[Economic growth|growth rate]], and <math>\delta</math> combines the "catastrophe rate" (discounting for the risk that future benefits won't occur) and [[pure time preference]] (valuing future benefits intrinsically less than present ones).<ref name="precipice" />{{rp|pages=240–245}}

Toby Ord argues that a nonzero pure time preference is arbitrary and illegitimate when applied to [[normative ethics]]. [[Frank Ramsey (mathematician)|Frank Ramsey]], the economist who devised the discounting model, likewise believed that while pure time preference might describe how people actually behave (favoring immediate benefits), it offers no normative guidance on what they should value. Furthermore, <math>\eta</math> applies only to monetary benefits, not moral benefits, since it is based on the [[diminishing marginal utility|diminishing marginal utility of consumption]]. Ord also argues that modeling the risk that a future benefit will fail to occur as a constant exponential decay poorly reflects reality, since some catastrophic risks may diminish or be mitigated over the long term.<ref name="precipice" />{{rp|pages=240–245}}
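As a purely illustrative calculation (the numbers here are hypothetical, not drawn from Ord), taking <math>\eta = 1.5</math>, a growth rate <math>g = 2\%</math>, a catastrophe rate of <math>0.1\%</math> per year, and zero pure time preference gives
: <math>\rho = 1.5 \times 0.02 + 0.001 = 0.031,</math>
so a monetary benefit arriving in 100 years would be discounted by a factor of about <math>e^{-0.031 \times 100} \approx 0.045</math>. On the view that only the catastrophe rate should apply to moral benefits, the factor would instead be a far milder <math>e^{-0.001 \times 100} \approx 0.90</math>.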


In contrast, Andreas Mogensen argues that a positive rate of pure time preference <math>\delta</math> can be justified on the basis of kinship. That is, common-sense morality allows us to be partial to those more closely related to us, so "we can permissibly weight the welfare of each succeeding generation less than that of the generation preceding it."<ref name="mogensen2019">{{cite web |last=Mogensen |first=Andreas |date=October 2019 |title='The only ethical argument for positive 𝛿'? |url=https://globalprioritiesinstitute.org/wp-content/uploads/Andreas-Mogensen_the-only-ethical-argument-for-positive-delta.pdf |access-date=30 December 2021 |publisher=Global Priorities Institute}}</ref>{{rp|page=9}} This view is called temporalism and states that "temporal proximity (...) strengthens certain moral duties, including the duty to save".<ref>{{Cite journal |last=Lloyd |first=Harry |date=2021 |title=Time discounting, consistency and special obligations: a defence of Robust Temporalism |url=https://globalprioritiesinstitute.org/time-discounting-consistency-and-special-obligations-a-defence-of-robust-temporalism-harry-r-lloyd-yale-university/ |journal=Global Priorities Institute Working Paper |volume=11}}</ref>


=== Unpredictability ===
One objection to longtermism is that it relies on predictions of the effects of our actions over very long time horizons, which is difficult at best and impossible at worst.<ref>{{Cite journal|last=Tarsney|first=Christian|date=2019|title=The epistemic challenge to longtermism|url=https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/|journal=GPI Working Paper|number=10–2019}}</ref> In response to this challenge, researchers interested in longtermism have sought to identify "value lock in" events—events, such as human extinction, which we may influence in the near-term but that will have very long-lasting, predictable future effects.<ref name=":9" />


=== Deprioritization of immediate issues ===
Another concern is that longtermism may lead to deprioritizing more immediate issues. For example, some critics have argued that considering humanity's future in terms of the next 10,000 or 10 million years might lead to downplaying the nearer-term effects of [[climate change]].<ref name=":0">{{Cite web|last=Torres|first=Phil|date=2021-10-19|editor-last=Dresser|editor-first=Sam|title=Why longtermism is the world's most dangerous secular credo|url=https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo|access-date=2021-11-02|website=Aeon|language=en}}</ref> They also worry that the most radical forms of strong longtermism could in theory justify atrocities in the name of attaining "astronomical" amounts of future value.<ref name=":9" /> Anthropologist [[Vincent Ialenti]] has argued that avoiding this will require societies to adopt a "more textured, multifaceted, multidimensional longtermism that defies insular information silos and disciplinary echo chambers".<ref>{{Cite web |title=Deep Time Reckoning |url=https://mitpress.mit.edu/9780262539265/deep-time-reckoning/ |access-date=2022-08-14 |website=MIT Press |language=en-US}}</ref>


Advocates of longtermism reply that the kinds of actions that are good for the long-term future are often also good for the present.<ref name=":4" /> An example of this is pandemic preparedness. Preparing for the worst case pandemics—those which could threaten the survival of humanity—may also help to improve public health in the present. For example, funding research and innovation in antivirals, vaccines, and personal protective equipment, as well as lobbying governments to prepare for pandemics, may help prevent smaller scale health threats for people today.<ref>{{Cite web |last=Lewis |first=Gregory |date=2020 |title=Risks of catastrophic pandemics |url=https://80000hours.org/problem-profiles/global-catastrophic-biological-risks/ |website=[[80,000 Hours]]}}</ref>


=== Reliance on small probabilities of large payoffs ===
A further objection to longtermism is that it relies on accepting low probability bets of extremely big payoffs rather than more certain bets of lower payoffs (provided that the expected value is higher). From a longtermist perspective, it seems that if the probability of some existential risk is very low, and the value of the future is very high, then working to reduce the risk, even by tiny amounts, has extremely high expected value.<ref name=":9" /> An illustration of this problem is [[Pascal's mugging|Pascal’s mugging]], which involves the exploitation of an expected value maximizer via their willingness to accept such low probability bets of large payoffs.<ref>{{Cite journal |last=Bostrom |first=Nick |date=2009 |title=Pascal's mugging |url=https://nickbostrom.com/papers/pascal.pdf |journal=Analysis |volume=69 |issue=3 |pages=443–445 |doi=10.1093/analys/anp062}}</ref>
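To illustrate the scale of the numbers involved (a hypothetical calculation, not an estimate endorsed by any particular author): if the long-term future could contain on the order of <math>10^{16}</math> lives, then an action that reduced the probability of extinction by just one in a billion would have an expected value of
: <math>10^{-9} \times 10^{16} = 10^{7}</math>
lives, seemingly outweighing interventions that help millions of people with near certainty.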


Advocates of longtermism have adopted a variety of responses to this concern. Some argue that, while unintuitive, it is ethically correct to favor infinitesimal probabilities of arbitrarily high-impact outcomes over moderate probabilities of moderately impactful outcomes.<ref name=":8">{{Cite journal |last=Wilkinson |first=Hayden |date=2020 |title=In defence of fanaticism |url=https://globalprioritiesinstitute.org/hayden-wilkinson-in-defence-of-fanaticism/ |journal=Global Priorities Institute Working Paper |volume=4}}</ref> Others argue that longtermism need not rely on tiny probabilities, as the probabilities of existential risks are within the normal range of risks that people routinely mitigate against—for example, by wearing a seatbelt in case of a car crash.<ref name=":9" />


== See also ==
* [[AI safety]]
* [[Consequentialism]]
* [[Effective altruism]]
* [[Futures studies]]
* [[Intergenerational equity]]
* [[Population ethics]]
* [[Social discount rate]]
* [[Space governance]]
* [[Suffering risks]]
* [[Timeline of the far future]]
* ''[[What We Owe the Future]]''
* {{Cite web| last = Todd| first = Benjamin| title = Longtermism: the moral significance of future generations| work = 80,000 Hours| accessdate = 2022-05-11| date = 2017
| url = https://80000hours.org/articles/future-generations/}}
* {{Cite web| last = Torres| first = Emile| title = Against longtermism| work = Aeon| accessdate = 2022-12-20| date = 2021
| url = https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo/}}
* {{Cite web| last = Townsend| first = Michael| title = Safeguarding the long-term future: Cause area profile| work = Giving What We Can | accessdate = 2022-08-08 | date = 2021
| url = https://www.givingwhatwecan.org/cause-areas/long-term-future}}
* {{Cite journal |last=O’Brien |first=Gary David |date=2024-01-19 |title=The Case for Animal-Inclusive Longtermism |url=https://brill.com/view/journals/jmp/aop/article-10.1163-17455243-20234296/article-10.1163-17455243-20234296.xml |journal=Journal of Moral Philosophy |volume=-1 |issue=aop |pages=1–24 |doi=10.1163/17455243-20234296 |issn=1740-4681|doi-access=free }}


== External links ==

* [https://longtermism.com/ Longtermism.com], Online introduction and resource compilation on longtermism
* [https://www.youtube.com/watch?v=LEENEFaVUzU The Last Human – A Glimpse Into the Far Future], a YouTube video by [[Kurzgesagt]] covering key longtermist ideas


{{Effective altruism}}
{{Sustainability}}


[[Category:Effective altruism]]

Latest revision as of 20:38, 21 November 2024

Comparing the number of human lives in the past and present

Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity.[1][2]

The key argument for longtermism has been summarized as follows: "future people matter morally just as much as people alive today; ... there may well be more people alive in the future than there are in the present or have been in the past; and ... we can positively affect future peoples' lives."[3][4] These three ideas taken together suggest, to those advocating longtermism, that it is the responsibility of those living now to ensure that future generations get to survive and flourish.[4]

Definition


Philosopher William MacAskill defines longtermism as "the view that positively influencing the longterm future is a key moral priority of our time".[1][5]: 4  He distinguishes it from strong longtermism, "the view that positively influencing the longterm future is the key moral priority of our time".[6][2]

In his book The Precipice: Existential Risk and the Future of Humanity, philosopher Toby Ord describes longtermism as follows: "longtermism ... is especially concerned with the impacts of our actions upon the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity's potential is one avenue for such a lasting impact and there may be others too."[7]: 52–53  In addition, Ord notes that "longtermism is animated by a moral re-orientation toward the vast future that existential risks threaten to foreclose."[7]: 52–53 

Because it is generally infeasible to use traditional research techniques such as randomized controlled trials to analyze existential risks, researchers such as Nick Bostrom have used methods such as expert opinion elicitation to estimate their importance.[8] Ord offered probability estimates for a number of existential risks in The Precipice.[7]: 167 

History


The term "longtermism" was coined around 2017 by Oxford philosophers William MacAskill and Toby Ord. The view draws inspiration from the work of Nick Bostrom, Nick Beckstead, and others.[6][1] While its coinage is relatively new, some aspects of longtermism have been thought about for centuries. The oral constitution of the Iroquois Confederacy, the Gayanashagowa, encourages all decision-making to “have always in view not only the present but also the coming generations”.[9] This has been interpreted to mean that decisions should be made so as to be of benefit to the seventh generation in the future.[10] These ideas have re-emerged in contemporary thought with thinkers such as Derek Parfit in his 1984 book Reasons and Persons, and Jonathan Schell in his 1982 book The Fate of the Earth.

Community


Longtermist ideas have given rise to a community of individuals and organizations working to protect the interests of future generations.[11] Organizations working on longtermist topics include Cambridge University's Centre for the Study of Existential Risk, the Future of Life Institute, the Global Priorities Institute, the Stanford Existential Risks Initiative,[12] 80,000 Hours,[13] Open Philanthropy,[14] The Forethought Foundation,[15] and Longview Philanthropy.[16]

Implications for action


Researchers studying longtermism believe that we can improve the long-term future in two ways: "by averting permanent catastrophes, thereby ensuring civilisation’s survival; or by changing civilisation’s trajectory to make it better while it lasts. Broadly, ensuring survival increases the quantity of future life; trajectory changes increase its quality".[5]: 35–36 [17]

Existential risks


An existential risk is "a risk that threatens the destruction of humanity’s longterm potential",[7]: 59  including risks which cause human extinction or permanent societal collapse. Examples of these risks include nuclear war, natural and engineered pandemics, climate change and civilizational collapse, stable global totalitarianism, and emerging technologies like artificial intelligence and nanotechnology.[7]: 213–214  Reducing any of these risks may significantly improve the future over long timescales by increasing the number and quality of future lives.[17][18] Consequently, advocates of longtermism argue that humanity is at a crucial moment in its history where the choices made this century may shape its entire future.[7]: 3–4 

Proponents of longtermism have pointed out that humanity spends less than 0.001% of the gross world product annually on longtermist causes (i.e., activities explicitly meant to positively influence the long-term future of humanity).[19] This is less than 5% of the amount that is spent annually on ice cream in the U.S., leading Toby Ord to argue that humanity “start by spending more on protecting our future than we do on ice cream, and decide where to go from there”.[7]: 58, 63 

Trajectory changes


Existential risks are extreme examples of what researchers call a "trajectory change".[17] However, there might be other ways to positively influence how the future will unfold. Economist Tyler Cowen argues that increasing the rate of economic growth is a top moral priority because it will make future generations wealthier.[20] Other researchers think that improving institutions like national governments and international governance bodies could bring about positive trajectory changes.[21]

Another way to achieve a trajectory change is by changing societal values.[22] William MacAskill argues that humanity should not expect positive value changes to happen by default.[4] He uses the abolition of slavery as an example, which historians like Christopher Leslie Brown consider to be a historical contingency rather than an inevitable event.[4] Brown has argued that a moral revolution made slavery unacceptable at a time when it was still hugely profitable.[23] MacAskill suggests that abolition may be a turning point in the entirety of human history, with the practice unlikely to return.[24] For this reason, bringing about positive value changes in society may be one way in which the present generation can positively influence the long-run future.

Living at a pivotal time


Longtermists argue that we live at a pivotal moment in human history. Derek Parfit wrote that we "live during the hinge of history"[25] and William MacAskill states that "the world’s long-run fate depends in part on the choices we make in our lifetimes"[5]: 6  since "society has not yet settled down into a stable state, and we are able to influence which stable state we end up in".[5]: 28 

According to Fin Moorhouse, for most of human history, it was not clear how to positively influence the very long-run future.[26] However, two relatively recent developments may have changed this. Developments in technology, such as nuclear weapons, have, for the first time, given humanity the power to annihilate itself, which would impact the long-term future by preventing the existence and flourishing of future generations.[26] At the same time, progress in the physical and social sciences has given humanity the ability to more accurately predict at least some of the long-term effects of actions taken in the present.[26]

MacAskill also notes that our present time is highly unusual in that "we live in an era that involves an extraordinary amount of change"[5]: 26 —both relative to the past (where rates of economic and technological progress were very slow) and to the future (since current growth rates cannot continue for long before hitting physical limits).[5]: 26–28 

Theoretical considerations


Moral theory


Longtermism has been defended by appealing to various moral theories.[27] Utilitarianism may motivate longtermism given the importance it places on pursuing the greatest good for the greatest number, with future generations expected to be the vast majority of all people to ever exist.[2][28] Consequentialist moral theories such as utilitarianism may generally be sympathetic to longtermism since whatever the theory considers morally valuable, there is likely going to be much more of it in the future than in the present.[29]

However, other non-consequentialist moral frameworks may also inspire longtermism. For instance, Toby Ord considers the responsibility that the present generation has towards future generations as grounded in the hard work and sacrifices made by past generations.[7] He writes:[7]: 42 

Because the arrow of time makes it so much easier to help people who come after you than people who come before, the best way of understanding the partnership of the generations may be asymmetrical, with duties all flowing forwards in time—paying it forwards. On this view, our duties to future generations may thus be grounded in the work our ancestors did for us when we were future generations.

Evaluating effects on the future


In his book What We Owe the Future, William MacAskill discusses how individuals can shape the course of history. He introduces a three-part framework for thinking about effects on the future, which states that the long-term value of an outcome we may bring about depends on its significance, persistence, and contingency.[5]: 31–33  He explains that significance "is the average value added by bringing about a certain state of affairs", persistence means "how long that state of affairs lasts, once it has been brought about", and contingency "refers to the extent to which the state of affairs depends on an individual’s action".[5]: 32  Moreover, MacAskill acknowledges the pervasive uncertainty, both moral and empirical, that surrounds longtermism and offers four lessons to help guide attempts to improve the long-term future: taking robustly good actions, building up options, learning more, and avoiding causing harm.[5]
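The significance-persistence-contingency framing lends itself to a rough back-of-the-envelope comparison. The multiplicative form and all the numbers below are illustrative assumptions, not a formal model MacAskill himself specifies:

```python
# Hedged sketch: scoring a long-term outcome by significance (average value
# added per year), persistence (years the state of affairs lasts), and
# contingency (how much it depended on the action taken). Treating the three
# factors as multiplicative is an illustrative assumption, not MacAskill's
# own formalism; the numbers are hypothetical.

def long_term_value(significance: float, persistence_years: float,
                    contingency: float) -> float:
    return significance * persistence_years * contingency

# A modest but durable, highly contingent change...
lasting = long_term_value(significance=1.0, persistence_years=10_000,
                          contingency=0.5)
# ...versus a larger but short-lived change that would likely have
# happened anyway.
fleeting = long_term_value(significance=100.0, persistence_years=10,
                           contingency=0.1)
print(lasting, fleeting, lasting > fleeting)
```

On this toy scoring, the small but persistent and contingent change dominates, which mirrors the framework's point that long-lasting, counterfactually dependent effects can matter more than momentarily larger ones.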

Population ethics

Illustrating the potential number of future human lives

Population ethics plays an important part in longtermist thinking. Many advocates of longtermism accept the total view of population ethics, on which bringing more happy people into existence is good, all other things being equal. Accepting such a view makes the case for longtermism particularly strong because the fact that there could be huge numbers of future people means that improving their lives and, crucially, ensuring that those lives happen at all, has enormous value.[2][30]

Other sentient beings


Longtermism is often discussed in relation to the interests of future generations of humans. However, some proponents of longtermism also put high moral value on the interests of non-human beings.[31] From this perspective, expanding humanity's moral circle to other sentient beings may be a particularly important longtermist cause area, notably because a moral norm of caring about the suffering of non-human life might persist for a very long time if it becomes widespread.[22]

Time preference


Effective altruism promotes the idea of moral impartiality, suggesting that people’s worth does not diminish simply because they live in a different location. Longtermists like MacAskill extend this principle by proposing that "distance in time is like distance in space".[32][33] Longtermists generally reject the notion of a pure time preference, which values future benefits less simply because they occur later.

When evaluating future benefits, economists typically use the concept of a social discount rate, which posits that the value of future benefits decreases exponentially with how far they are in time. In the standard Ramsey model used in economics, the social discount rate ρ is given by:

ρ = δ + ηg

where η is the elasticity of marginal utility of consumption, g is the growth rate, and δ combines the "catastrophe rate" (discounting for the risk that future benefits won't occur) and pure time preference (valuing future benefits intrinsically less than present ones).[7]: 240–245 

Toby Ord argues that a nonzero pure time preference applied to normative ethics is arbitrary and illegitimate. Economist Frank Ramsey, who devised the discounting model, also believed that while pure time preference might describe how people behave (favoring immediate benefits), it does not offer normative guidance on what they should value ethically. Furthermore, the ηg term only applies to monetary benefits, not moral benefits, since it is based on diminishing marginal utility of consumption. Ord also considers that modeling the uncertainty that the benefit will occur with an exponential decrease poorly reflects the reality of changing risks over time, particularly as some catastrophic risks may diminish or be mitigated in the long term.[7]: 240–245 
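As a numerical illustration of how the Ramsey rule ρ = δ + ηg behaves over long horizons (an illustrative sketch with hypothetical parameter values, not figures from the cited sources):

```python
# Illustrative sketch: the Ramsey social discount rate rho = delta + eta * g,
# and the resulting exponential discount factor over time. The parameter
# values are hypothetical, chosen only to show how quickly distant benefits
# shrink under even a modest positive rate.

def ramsey_rate(delta: float, eta: float, g: float) -> float:
    """delta: catastrophe rate plus pure time preference;
    eta: elasticity of marginal utility of consumption;
    g: consumption growth rate."""
    return delta + eta * g

def discount_factor(rate: float, years: float) -> float:
    """Present value today of one unit of benefit `years` years from now."""
    return (1 + rate) ** -years

rho = ramsey_rate(delta=0.01, eta=1.5, g=0.02)  # hypothetical values
for t in (10, 100, 1000):
    print(t, discount_factor(rho, t))
```

Under these assumptions, a benefit 1,000 years away is discounted to a near-zero present value, which is the feature of exponential discounting that Ord's criticism targets.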

In contrast, Andreas Mogensen argues that a positive rate of pure time preference can be justified on the basis of kinship. That is, common-sense morality allows us to be partial to those more closely related to us, so "we can permissibly weight the welfare of each succeeding generation less than that of the generation preceding it."[34]: 9  This view is called temporalism and states that "temporal proximity (...) strengthens certain moral duties, including the duty to save".[35]

Criticism


Unpredictability


One objection to longtermism is that it relies on predictions of the effects of our actions over very long time horizons, which is difficult at best and impossible at worst.[36] In response to this challenge, researchers interested in longtermism have sought to identify "value lock in" events—events, such as human extinction, which we may influence in the near-term but that will have very long-lasting, predictable future effects.[2]

Deprioritization of immediate issues


Another concern is that longtermism may lead to deprioritizing more immediate issues. For example, some critics have argued that considering humanity's future in terms of the next 10,000 or 10 million years might lead to downplaying the nearer-term effects of climate change.[37] They also worry that the most radical forms of strong longtermism could in theory justify atrocities in the name of attaining "astronomical" amounts of future value.[2] Anthropologist Vincent Ialenti has argued that avoiding this will require societies to adopt a "more textured, multifaceted, multidimensional longtermism that defies insular information silos and disciplinary echo chambers".[38]

Advocates of longtermism reply that the kinds of actions that are good for the long-term future are often also good for the present.[30] An example of this is pandemic preparedness. Preparing for the worst-case pandemics—those which could threaten the survival of humanity—may also help to improve public health in the present. For example, funding research and innovation in antivirals, vaccines, and personal protective equipment, as well as lobbying governments to prepare for pandemics, may help prevent smaller-scale health threats for people today.[39]

Reliance on small probabilities of large payoffs


A further objection to longtermism is that it relies on accepting low probability bets of extremely big payoffs rather than more certain bets of lower payoffs (provided that the expected value is higher). From a longtermist perspective, it seems that if the probability of some existential risk is very low, and the value of the future is very high, then working to reduce the risk, even by tiny amounts, has extremely high expected value.[2] An illustration of this problem is Pascal’s mugging, which involves the exploitation of an expected value maximizer via their willingness to accept such low probability bets of large payoffs.[40]

Advocates of longtermism have adopted a variety of responses to this concern. Some argue that, while unintuitive, it is ethically correct to favor infinitesimal probabilities of arbitrarily high-impact outcomes over moderate probabilities with moderately impactful outcomes.[41] Others argue that longtermism need not rely on tiny probabilities, as the probabilities of existential risks are within the normal range of risks that people seek to mitigate, such as wearing a seatbelt in case of a car crash.[2]
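The structure of this objection can be shown with a toy expected-value calculation. All the numbers are hypothetical, chosen only to exhibit the comparison:

```python
# Toy illustration of the "small probabilities of large payoffs" worry.
# The probabilities and payoffs are hypothetical; only the comparison matters.

def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

safe_bet = expected_value(0.99, 1_000)   # near-certain, modest payoff
long_shot = expected_value(1e-10, 1e16)  # one-in-ten-billion chance of a vast payoff

# A naive expected-value maximizer prefers the long shot, even though it
# almost certainly pays nothing: the pattern Pascal's mugging exploits.
print(safe_bet, long_shot, long_shot > safe_bet)
```

The long shot's expected value dwarfs the safe bet's, which is why critics worry that unrestricted expected-value reasoning over vast future payoffs can be exploited.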

See also


References

  1. ^ a b c Steele, Katie (2022-12-19). "Longtermism – why the million-year philosophy can't be ignored". The Conversation. Retrieved 2024-07-22.
  2. ^ a b c d e f g h Samuel, Sigal (2022-09-06). "Effective altruism's most controversial idea". Vox. Retrieved 2024-07-14.
  3. ^ Samuel, Sigal (2021-11-03). "Would you donate to a charity that won't pay out for centuries?". Vox. Retrieved 2021-11-13.
  4. ^ a b c d Setiya, Kieran (August 15, 2022). "The New Moral Mathematics". Boston Review.
  5. ^ a b c d e f g h i MacAskill, William (2022). What We Owe the Future. New York: Basic Books. ISBN 978-1-5416-1862-6.
  6. ^ a b MacAskill, William (2019-07-25). "Longtermism". Effective Altruism Forum.
  7. ^ a b c d e f g h i j k Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury Publishing. ISBN 978-1-5266-0021-9. OCLC 1143365836.
  8. ^ Rowe, Thomas; Beard, Simon (2018). "Probabilities, methodologies and the evidence base in existential risk assessments" (PDF). Working Paper, Centre for the Study of Existential Risk. Archived (PDF) from the original on 27 August 2018. Retrieved 26 August 2018.
  9. ^ Constitution of the Iroquois Nations, 1910.
  10. ^ Lyons, Oren (October 2004). "The Ice is Melting". Center for New Economics. Archived from the original on 11 May 2022.
  11. ^ Samuel, Sigal (2021-07-02). "What we owe to future generations". Vox. Retrieved 2021-11-27.
  12. ^ MacAskill, William (8 August 2022). "What is longtermism?". BBC. Retrieved 2024-07-14.
  13. ^ "About us: what do we do, and how can we help?". 80,000 Hours. Retrieved 2021-11-28.
  14. ^ "Global Catastrophic Risks". Open Philanthropy. 2016-03-02. Retrieved 2021-11-28.
  15. ^ "About Us – Forethought Foundation". Forethought Foundation for Global Priorities Research. 2022.
  16. ^ Matthews, Dylan (2022-08-08). "How effective altruism went from a niche movement to a billion-dollar force". Vox. Retrieved 2022-08-27.
  17. ^ a b c Beckstead, Nick (2013). "Chapter 1.1.2: How could we affect the far future?". On The Overwhelming Importance of Shaping the Far Future. New Brunswick Rutgers, The State University of New Jersey.
  18. ^ Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy. 4 (1): 15–31. doi:10.1111/1758-5899.12002.
  19. ^ Moorhouse, Fin (2021). "Longtermism - Frequently Asked Questions: Is longtermism asking that we make enormous sacrifices for the future? Isn't that unreasonably demanding?". Longtermism.com. Archived from the original on 14 July 2022.
  20. ^ Cowen, Tyler (2018). Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals. Stripe Press. ISBN 978-1732265134.
  21. ^ John, Tyler M.; MacAskill, William (2021). "Longtermist institutional reform". In Cargill, Natalie; John, Tyler M. (eds.). The Long View (PDF). UK: FIRST Strategic Insight. ISBN 978-0-9957281-8-9.
  22. ^ a b Reese, Jacy (20 February 2018). "Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment". Effective Altruism Forum.
  23. ^ Leslie Brown, Christopher (2006). Moral Capital: Foundations of British Abolitionism. UNC Press. ISBN 978-0-8078-5698-7.
  24. ^ MacAskill, William (23 May 2022). "Will MacAskill on balancing frugality with ambition, whether you need longtermism, and mental health under pressure". 80,000 Hours.
  25. ^ Parfit, Derek (2011). On What Matters, Volume 2. Oxford: Oxford University Press. p. 611.
  26. ^ a b c Moorhouse, Fin (2021). "Longtermism - Frequently Asked Questions: Isn't much of longtermism obvious? Why are people only just realising all this?". Longtermism.com. Archived from the original on 14 July 2022.
  27. ^ Crary, Alice (2023). "The Toxic Ideology of Longtermism". Radical Philosophy 214. Retrieved 2023-04-11.
  28. ^ MacAskill, William; Meissner, Darius (2022). "Utilitarianism and Practical Ethics – Longtermism: Expanding the Moral Circle Across Time". An Introduction to Utilitarianism. Archived from the original on 16 June 2022.
  29. ^ Todd, Benjamin (2017). "Longtermism: the moral significance of future generations". 80,000 Hours.
  30. ^ a b Greaves, Hilary; MacAskill, William (2021). "The case for strong longtermism". Global Priorities Institute Working Paper. 5. Archived from the original on 9 July 2022.
  31. ^ Baumann, Tobias (2020). "Longtermism and animal advocacy". Center for Reducing Suffering. Archived from the original on 14 July 2022.
  32. ^ Godfrey-Smith, Peter (2024-09-24). "Is Longtermism Such a Big Deal?". Foreign Policy. Retrieved 2024-09-23.
  33. ^ "What is long-termism?". The Economist. November 22, 2022. ISSN 0013-0613. Retrieved 2024-09-23.
  34. ^ Mogensen, Andreas (October 2019). "'The only ethical argument for positive 𝛿'?" (PDF). Global Priorities Institute. Retrieved 30 December 2021.
  35. ^ Lloyd, Harry (2021). "Time discounting, consistency and special obligations: a defence of Robust Temporalism". Global Priorities Institute Working Paper. 11.
  36. ^ Tarsney, Christian (2019). "The epistemic challenge to longtermism". GPI Working Paper (10–2019).
  37. ^ Torres, Phil (2021-10-19). Dresser, Sam (ed.). "Why longtermism is the world's most dangerous secular credo". Aeon. Retrieved 2021-11-02.
  38. ^ "Deep Time Reckoning". MIT Press. Retrieved 2022-08-14.
  39. ^ Lewis, Gregory (2020). "Risks of catastrophic pandemics". 80,000 Hours.
  40. ^ Bostrom, Nick (2009). "Pascal's mugging" (PDF). Analysis. 69 (3): 443–445. doi:10.1093/analys/anp062.
  41. ^ Wilkinson, Hayden (2020). "In defence of fanaticism". Global Priorities Institute Working Paper. 4.

Further reading
