THAT'S FU%*ING SMART: Why Elon Musk Donated $10 Million To The Future of Life Institute


Welcome to the first installment of a new series titled, THAT'S FUCKING SMART.

You'll never be sure what topic I'll cover, but you'll be absolutely sure that I won't cover this one topic: life hackery. This series will offer zero life tips, zero business how-tos, zero "top 10 ways to cover up your BS" hacks.

So, what is The Future of Life Institute? Why did Elon give $10 million to it?


You may remember reading in the news a year ago that Elon Musk donated $10 million to keep humans safe from “killer robots”.

You might also remember that Bill Gates, Stephen Hawking, Steve Wozniak, Morgan Freeman and many leading AI researchers from Google's DeepMind also joined Musk in raising public awareness about the existential threats posed by the future possibility of superintelligent AI.

Much of the world had no idea what Musk and friends were talking about, because very few people outside of AI research know that current breakthroughs in AI are arriving well before experts predicted they would. If you follow AI news, it seems a new breakthrough occurs every week or month. Here are a few of the most recent: AI beat a top human player at the ancient game of Go; an AI lawyer named Ross was hired by law firms; students didn't know their teaching assistant was an AI; AI can automatically write captions for complex images; a robot receptionist got a job at a Belgian hospital; and a chatbot reportedly became the first to pass the Turing test. And that's just to name a few.

So, what’s wrong with any of that? Why are some of the brightest minds worried about AI?


Image Source: Pixabay

Let's now turn to a founding member of The Future of Life Institute, Max Tegmark, a professor of physics at MIT. After watching a video in which he lays out why he created the institute, I think his reasoning makes a lot of sense.

According to Max Tegmark,

“We are experiencing a race. A race between the growing power of technology and the growing wisdom with which we manage this technology. And it’s really crucial that the wisdom win this race.” - Max Tegmark, The Future of Life With AI

Max goes on to illustrate that humans are doing a pathetic job of managing existential risks when viewed in the context of hundreds, thousands, or even millions of years into the future. He uses nuclear weapons as a case study to demonstrate how poor a job we're doing. He then gives us humans a mid-term grade on risk management.

He gives us humans a D-.

You may be asking, “Why did we get a D-?”

Here’s your answer in the form of a question:

Which person is more famous?


Image Source: Bieber public domain + Arkhipov from Alchetron

Which person should we thank for allowing us to even exist here, reading leisurely on Steemit today?

No one knows the correct answer because no one knows the name of the guy on the right. I certainly had no idea who he was until I watched Max’s video yesterday.

His name is Василий Александрович Архипов.

Vasili Alexandrovich Arkhipov (30 January 1926 – 19 August 1998) was a Soviet Navy officer. ...Only Arkhipov, as flotilla commander and second-in-command of the nuclear-missile submarine B-59, refused to authorize the captain's use of nuclear torpedoes against the United States Navy, a decision requiring the agreement of all three senior officers aboard. In 2002 Thomas Blanton, who was then director of the National Security Archive, said that "Vasili Arkhipov saved the world". -Wikipedia

Apparently a Russian guy named Arkhipov saved the world from nuclear war but nobody knows about it.

That was one reason Max gave us humans a D- for existential risk management. Of course, the biggest reason for our low grade is our history with nuclear weapons. When nuclear weapons were first developed, no one knew that an even bigger devastation could come in the form of a ten-year nuclear winter, in which the earth would receive only scant sunlight and most living things would die off as a result. Scientists only communicated this scenario in the 1980s, long after the bombs were built. Hmmmm... not a very smart move: build something that could wipe out humanity and most living creatures, without even being aware of this secondary existential threat.

As Max points out, we humans don’t have a very good track record when it comes to understanding the full implications of the powerful technology we are unknowingly unleashing.

If you want to watch the full video, it’s completely worth your time. You could be much smarter afterwards:

Now, Max describes himself as a cheerful guy and he’s not proposing that we halt the march towards creating superintelligent AI.

What he is proposing, with the formation of The Future of Life Institute, is that we research and build AI in a way that is maximally beneficial for the continuation of humanity. Before Max organized this group, there were no guiding principles directing the scientists who were actively building AI systems. Smarter and faster were the only words dictating their actions.

Many may still not be aware of the potential risks associated with AI. Here are two potential AI scenarios outlined on the Institute's website:

The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met. -The Future of Life Institute
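To make that second scenario concrete, here's a toy sketch in Python (my own illustration, not anything from the Institute). The routes, numbers, and weights are all invented; the point is simply that an optimizer maximizes exactly the objective you hand it, and anything you leave out of that objective, like comfort or legality, doesn't count at all:

```python
# Toy illustration of goal misspecification (invented example, not FLI code).
# A "planner" picks the best route according to whatever objective it is handed.

routes = [
    {"name": "highway at the speed limit", "minutes": 35, "tickets": 0, "comfort": 1.0},
    {"name": "weaving down the shoulder",  "minutes": 12, "tickets": 3, "comfort": 0.1},
]

def literal_objective(route):
    # What we literally asked for: get there as fast as possible.
    return -route["minutes"]

def intended_objective(route):
    # What we actually wanted: fast, but also legal and bearable.
    return -route["minutes"] - 50 * route["tickets"] + 20 * route["comfort"]

print("Literal request picks:", max(routes, key=literal_objective)["name"])
print("Intended goal picks:  ", max(routes, key=intended_objective)["name"])
```

Swap in a far more capable optimizer and a far richer world, and the gap between the literal objective and the intended one only gets more expensive. That gap is exactly the alignment problem the Institute wants researched before the systems get that capable.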

The Future of Life Institute researches four areas of potential existential threat: artificial intelligence, nuclear weapons, climate change, and biotechnology. Elon Musk has massively supported this initiative and also helped to found OpenAI, whose mission "is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible." OpenAI was created in response to the fact that the primary players designing the AI future were tech giants like Google and Facebook; Musk was concerned that artificial intelligence would end up in the hands of only a powerful few.

Here’s Elon and Max in a video interview that explores why Musk gave Max’s foundation $10 million:

What do you think the next breakthrough in AI will be? And when will it be?

