
LONGTERMISM: The Disturbing Ideology of Tech-Billionaires

Have you ever wondered what “belief” system the well-known tech billionaires subscribe to? What motivates them? Where do their moral and ethical values stand? What is their ideology? More importantly, with the wealth and power they possess, what beliefs or ideologies are they using to shape the future of all humanity, including you and me?

A small group of theorists, mostly based in Oxford, have been busy promoting and working out the details of a new moral worldview called longtermism, which emphasizes how our actions affect the very long-term future of humans (thousands, millions, billions, and even trillions of years from now).

Simply put, their philosophy is to spread the human race throughout the cosmos and convert humanity into techno-biological machines and conscious digital avatars in interstellar simulations, as quickly as possible, before we become “extinct”!

Their belief is that trillions of future humans (even conscious simulated humans in metaworlds spread throughout the universe) have as much right to come into existence as we who already exist. The belief is that “more good” can be done for the totality of the human race long-term if we measure our actions today using surveillance technology, with AI controlling and adjusting our behavior. Their belief is in “maximizing” our “potential” as a species: subjugating all of nature, maximizing its output, colonizing all the planets and stars, and maximizing our individual outputs as parts of a collective.

What makes this most frightening is that our tech leaders are on board with this philosophy and have been donating billions of dollars to see it come true. Add to that the fact that the United Nations is holding a “Summit of the Future” in Sept 2023 addressing this issue, with plans to create a council that will stand as representatives of these unborn, uncreated, earth-originating “people”. This means that living, breathing humans today will have to compete for funds with hypothetical beings and entities of the very far future.

Remember the scene from Star Trek II: The Wrath of Khan in which Spock sacrifices himself for the crew of the ship and tells Kirk, “Logic clearly dictates that the needs of the many outweigh the needs of the few.” Captain Kirk answers, “Or the one.” Now imagine that the entirety of humanity going into the very far future, as calculated by mathematicians, scientists, and engineers, is declared to be at “existential risk” unless, say, AI or space colonization is advanced. And that funding currently allocated for living people today (funding that would feed a billion starving people) would supposedly be better spent advancing space travel and AI. This is the kind of insane logic we are dealing with.

 

Longtermism and existential risk are particularly influential ideologies among those who made fortunes in technology and in elite institutions. Elon Musk has cited the work of Nick Bostrom (who coined the term existential risk in 2002), saying “This is a close match for my philosophy,” and has donated millions to the Future of Humanity Institute and the Future of Life Institute, sister organizations. Jaan Tallinn, a founder of Skype worth an estimated $900 million in 2019, also cofounded the Centre for the Study of Existential Risk at Cambridge and has donated more than a million dollars to the Machine Intelligence Research Institute (MIRI). Vitalik Buterin, a cofounder of the Ethereum cryptocurrency, has donated extensively to MIRI as well. Peter Thiel, co-founder of PayPal and Palantir Technologies, delivered the keynote address at the 2013 Effective Altruism summit.

The longtermist Toby Ord has “advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science.” A recent report from the Secretary-General of the United Nations, which Ord contributed to, discusses “existential risks” and specifically references “long-termism.” The FTX crypto-billionaire Sam Bankman-Fried, a committed longtermist, funded the 2022 congressional campaign of fellow longtermist Carrick Flynn and said (before the collapse of FTX) that he could donate $1 billion to influence the outcome of the 2024 US presidential election. And the Effective Altruism movement itself has around $46.1 billion in committed funding.

In other words, we should not worry about the poor around the world, since they have no “impact” on future human existence, and should instead focus on advancing humanity so that it does not go extinct. Even if that means committing genocide. They have declared this their number one moral duty, giving themselves a green light to commit genocide for the sake of ensuring that trillions of digital conscious beings can live in simulations throughout the cosmos. These are THEIR WORDS!

Their idea is to advance the best of humanity, using technology, and extend it forward into the future and in space. It sounds like eugenics!

Oxford academic and leading “long-termist” Nick Bostrom proposes that everyone should permanently wear an Orwellian-sounding “freedom tag”: a device that would monitor everything you do, 24/7, for the rest of your life, to guard against the minuscule possibility that you might be part of a plot to destroy humanity.

They worry about human extinction but never consider that transhumanism or post-humanism is itself another form of extinction.

Longtermism is eugenics in disguise. And transhumanism is universal predators in action. That is what we are looking at. Subjugating all of nature for the false illusion of becoming gods! Pure insanity!

Below are excerpts and references. Please familiarize yourself with this ideology, because it is real and becoming more and more prominent in this “Great Reset” and “Fourth Industrial Revolution”.


What is longtermism?

Longtermism is an ideology that emerged from the so-called Effective Altruism movement over the past decade, and which claims that influencing the future—hundreds, thousands, millions, and even billions of years from now—is a key moral priority of our time, if not the key priority. The reason is that, as William MacAskill and Hilary Greaves argue, there could be vast numbers of future people—perhaps 10^45, that is, a 1 followed by 45 zeros—living in giant computer simulations running on planet-sized computers powered by Dyson spheres spread throughout the Milky Way galaxy, or beyond. Hence, if one wants to “do the most good,” it would be better to focus on these possible future people—e.g., by making sure that they exist in the first place—rather than on, say, helping the roughly 1.3 billion people living in poverty today…

READ FULL ARTICLE HERE…– Bantam Joe


