A very thorough read on the implications of artificial intelligence. It starts out by providing solid terminology for what life and intelligence are, then asks many questions about what we want out of our future, and is remarkably thorough in describing the kinds of scenarios that could await us. What will life be like for humans in the age of artificial intelligence?

Notes

  • Cosmic history spans about 13.8 billion years.
  • Life first appeared on Earth about 4 billion years ago.
  • Humans appeared around 100,000 years ago.

The three stages of life

Life is a process which can retain its complexity and replicate.

What is replicated is not atoms, but information, which specifies how those atoms are arranged.

  • Life 1.0—where both hardware and software are evolved, rather than designed. The biological evolution.
  • Life 2.0—where hardware is evolved, but software is largely designed. The cultural evolution.
  • Life 3.0—which can design not only its software, but also its hardware. The technological evolution.

Life 3.0 is the master of its own destiny, finally free from its evolutionary shackles.


Intelligence enables control

By installing a software module that enables us to communicate through spoken language, we ensured that the most valuable information in one person’s brain could get copied over to other brains, potentially surviving the original brain’s death.

  • History is full of technological over-hyping.
  • Intelligence enables control: humans control tigers not because we’re stronger, but because we’re smarter.
  • A computation is a transformation of one memory state into another. It takes information and transforms it; it’s what mathematicians call a function.
  • If you can implement highly complex functions, then you can build an intelligent machine that’s able to accomplish highly complex goals.

Intelligence doesn’t require flesh, blood, or carbon atoms. Computation is a pattern in the spacetime arrangement of particles, and it’s not the particles themselves but the pattern that really matters. Hardware is the matter and software is the pattern.
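
To make the idea concrete, here is a minimal sketch of my own (not from the book): the same input-to-output function can be composed entirely from NAND gates, and whether those gates are transistors, neurons, or dominoes is irrelevant; only the pattern of transformations matters.

```python
# A minimal sketch (not from the book): computation as a function that
# transforms one memory state into another, independent of the substrate.

def nand(a: bool, b: bool) -> bool:
    """NAND is functionally complete: any Boolean function can be built from it."""
    return not (a and b)

def xor(a: bool, b: bool) -> bool:
    """A 'higher-level' function composed purely of NANDs."""
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# The truth table is the computation; what physically implements the gates
# is irrelevant to the pattern.
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(xor(a, b)))
```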

The ability to learn is arguably the most fascinating aspect of general intelligence.


Future scenarios

  • Libertarian utopia—Humans, cyborgs, uploads and superintelligences coexist peacefully thanks to property rights.
  • Benevolent dictator—Everybody knows that the AI runs society and enforces strict rules, but most people view this as a good thing.
  • Egalitarian utopia—Humans, cyborgs, and uploads coexist peacefully thanks to property abolition and guaranteed income.
  • Gatekeeper—A superintelligent AI is created with the goal of interfering as little as necessary to prevent the creation of another superintelligence. As a result, helper robots with slightly subhuman intelligence abound, and human-machine cyborgs exist, but technological progress is forever stymied.
  • Protector god—Essentially omniscient and omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control of our own destiny and hides well enough that many humans even doubt the AI’s existence.
  • Enslaved god—A superintelligent AI is confined by humans, who use it to produce unimaginable technology and wealth that can be used for good or bad depending on the human controllers.
  • Conquerors—AI takes control, decides that humans are a threat/nuisance/waste of resources, and gets rid of us by a method that we don’t even understand.
  • Descendants—AIs replace humans, but give us a graceful exit, making us view them as our worthy descendants, much as parents feel happy and proud to have a child who’s smarter than them, who learns from them and then accomplishes what they could only dream of—even if they can’t live to see it all.
  • Zookeeper—An omnipotent AI keeps some humans around, who feel treated like zoo animals and lament their fate.
  • 1984—Technological progress toward superintelligence is permanently curtailed not by an AI but by a human-led Orwellian surveillance state where certain kinds of AI research are banned.
  • Reversion—Technological progress toward superintelligence is prevented by reverting to a pre-technological society in the style of the Amish.
  • Self-destruction—Superintelligence is never created because humanity drives itself extinct by other means (say nuclear and/or biotech mayhem fueled by climate crisis).

Can a self-driving car hold car insurance? And if so, should machines also be able to own money and property? If AI systems eventually get better than humans at investing, that could lead to a situation where most of the economy is owned and controlled by machines.

  • Encourage children to go into professions that machines are currently bad at: those involving people, unpredictability, and creativity.
  • AI can make our legal systems more fair, consistent, and efficient if we can figure out how to make robojudges transparent and unbiased.
  • If we one day succeed in building human-level AGI, this may trigger an intelligence explosion, leaving us far behind.
  • The Catholic Church is the most successful organization in human history in the sense that it’s the only one to have survived for two millennia.

Our cosmic endowment

Humans could meet all of our current global energy needs by harvesting the sunlight striking an area smaller than 0.5% of the Sahara Desert.
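
A rough sanity check of that claim (my own back-of-envelope, not the book’s; the demand, insolation, efficiency, and area figures below are all approximate assumptions, and the result depends heavily on whether you count total primary energy or just electricity):

```python
# Back-of-envelope check with assumed round numbers (not from the book).

world_demand_w   = 3.0e12   # ~3 TW: roughly the world's average electricity demand
insolation_w_m2  = 250.0    # rough day/night average sunlight in the Sahara, W/m^2
panel_efficiency = 0.20     # typical photovoltaic efficiency
sahara_area_m2   = 9.2e12   # the Sahara covers roughly 9.2 million km^2

area_needed_m2 = world_demand_w / (insolation_w_m2 * panel_efficiency)
fraction_of_sahara = area_needed_m2 / sahara_area_m2

print(f"Area needed: {area_needed_m2 / 1e6:,.0f} km^2")        # ~60,000 km^2
print(f"Fraction of the Sahara: {fraction_of_sahara:.1%}")     # under 1%
```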

Dyson sphere—An artificial biosphere forming a shell surrounding the Sun, where people could live, flourish, and enjoy 100 billion times more biomass and a trillion times more energy than humanity uses today.

  • It is possible to harvest, or “milk”, black holes for energy.
  • The speed of light limits not only the spread of life, but also the nature of life. It puts constraints on communication, consciousness, and control.

A fly reacts faster than you because it’s smaller, so it requires less time for information to travel between its eyes, brain and muscles.
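
An order-of-magnitude illustration of this scaling argument (the path lengths and signal speeds below are rough assumptions of mine, not the book’s figures): reaction and thought speed are bounded by how long signals take to cross the system, which is why a planet-sized mind could not have globally coherent thoughts much faster than a few dozen per second.

```python
# Order-of-magnitude sketch: signal crossing time bounds how fast a system can react.
# All figures are rough assumptions for illustration.

systems = {
    "fly (eye to wing muscles)":  (1e-3, 1.0),     # ~1 mm path, ~1 m/s nerve conduction
    "human (eye to hand)":        (1.0,  100.0),   # ~1 m path, ~100 m/s nerve conduction
    "planet-sized AI (photons)":  (1.3e7, 3.0e8),  # Earth's diameter, speed of light
}

for name, (distance_m, speed_m_per_s) in systems.items():
    delay_ms = distance_m / speed_m_per_s * 1e3
    print(f"{name}: one-way signal delay ~ {delay_ms:.1f} ms")
```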

If life engulfs our cosmos, which form will it choose: simple and fast, or complex and slow?

The Great Filter—an evolutionary roadblock somewhere along the developmental path from non-living matter to space-settling life. Maybe almost all advanced civilizations self-destruct before they’re able to go cosmic?

Goals

Can machines have goals? Yes, because we design them that way. A mousetrap, for instance, has the goal of catching mice.

Most of what we’ve built so far exhibits only goal-oriented design, not goal-oriented behavior: a highway doesn’t behave, it just sits there.

A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

If we ever end up creating superintelligence, we should make sure it’s a “friendly AI”, an AI whose goals are aligned with ours.

Consciousness

A conscious person is simply food, rearranged, and food is simply a large number of quarks and electrons, arranged in a certain way. So which particle arrangements are conscious, and which aren’t?

The power of our technology is growing faster than the wisdom with which we manage it.

Highlights

It’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.

Location 5195

From this perspective, we see that although we’ve focused on the future of intelligence in this book, the future of consciousness is even more important, since that’s what enables meaning.

Location 5219

Do you want to be someone who interrupts all their conversations by checking their smartphone, or someone who feels empowered by using technology in a planned and deliberate way?

Location 5560