
How Robots Can Help Create A More Inclusive Future


Stephen Hawking, Bill Gates and Elon Musk have all said that machine intelligence is an existential threat to humanity’s future. Tristan Harris warns that technology is already co-opting humanity’s present. And the Stanford Institute for Human-Centered Artificial Intelligence is enlisting the brightest human minds to peek inside AI’s Pandora’s jar. These concerns are part of a larger cultural debate about humanity’s shared and potentially dystopian future with robots.

Our unease with machines has been with us far longer than you might think. Just as technologies like video calling were anticipated in Technicolor in The Jetsons, today’s concerns about machine domination were foreshadowed by stories such as Mary Shelley’s Frankenstein; or, The Modern Prometheus, Philip K. Dick’s Do Androids Dream of Electric Sheep? and the Wachowskis’ The Matrix. There is nothing new under the black hole sun.

Actually, there is. The most important story never told is the story of how our family values did not scale as we globalized. Once upon a time, all humans lived in kin hives where mutual kin skin in the game fostered social cohesion. After humans harnessed the Promethean fire of energy, human social entropy increased, accelerating the diaspora until we melded into a global village. Without the mutual kin skin in the game to protect against extractive behaviors, domestication of others became the rule in post-tribal communities. Extractive AI is nothing more than the latest incarnation in the long line of avatars—extractive governors, extractive capitalists and extractive technologists—who stand in for our prehistoric tribal stewards and prey upon the community.

Our fear of robots, too, is the latest incarnation of our long-standing fear of abuse at the hands of self-serving systems, humans or otherwise. The Terminator is the updated, industrial-alloy version of the beasts and monsters of ancient stories. The Matrix is the digital version of Animal Farm, an institutional superorganism that domesticates humanity’s bioalgorithms.

Yet, humans have long imagined good robots, too—ones that are kind, helpful and maybe even have a little sense of humor.

We are at a crossroads where we can choose between these futures. The most important question to ask is not, “What will intelligent bots do?” but, “Whom will intelligent bots serve?” If bots are trained to maximize corporate profits, the marketplace could favor algorithms that benefit the corporation even when doing so harms users. On the other hand, imagine algorithms trained to nurture the success of users and of society as a whole, not unlike the way mothers nurture their children. Imagine a robot of unconditional love: a mom bot.

Let’s pop up to a higher plane. Sentient robots could very well be part of our future. Our greatest responsibility as humans—for ourselves, for living creatures and even for robots—is to establish the first principle of ethics by which all sentient systems operate and cooperate. That principle is inclusive stakeholding: a mutually vested interest in each other’s success that mirrors the genetic inclusive fitness of kin skin in the game.

This is a radical departure from the prevailing wisdom of roboethics, a field that traces its roots at least to Isaac Asimov’s Three Laws of Robotics and that is grounded in rules (for example, the Prime Directive in Star Trek). These instincts are not unlike those that inspired the Code of Hammurabi and the never-ending variants and amendments that govern human conduct.

The principle of inclusive stakeholding presupposes no such rules. It merely provides an understanding of the importance of mutually vested interest in deterring extractive behaviors and incentivizing altruistic ones.

It is said that the one thing still remaining in Pandora’s jar is hope. The question of how we relate to robots is a fractal of larger questions about how all of us, in the broadest sense—including animals and robots—will live together in the future. We are now conscious of the reality that kin altruism has scaled poorly as the operating algorithm of human sociality in the global era. Yet the good news is that this has become addressable through technological progress. We make the case that blockchain is among the many emerging technologies that can be harnessed in service of the inclusive stakeholding revolution.

That is to say, rather than being our punishment for harnessing the Promethean fire, Pandora’s jar could turn out to be our gift. In a more optimistic vision of the future, the bioalgorithms of inclusive fitness—the genetic code of mutual vested interest—will be updated with more generalized social and technological algorithms of inclusive stakeholding to build a much better future for everyone.

Even robots.

This article originally appeared on Worth.com


Dr. Joon Yun is president and managing partner of Palo Alto Investors LP, a hedge fund founded in 1989 with $2 billion in assets invested in healthcare. Board certified in radiology, Yun served on the clinical faculty at Stanford from 2000 to 2006. Yun has served on numerous boards, and he is currently a trustee of the Salk Institute. Joon and his wife Kimberly launched the $1 million Palo Alto Longevity Prize and donated $2 million to support the National Academy of Medicine’s Longevity Grand Challenge. He received his M.D. from Duke Medical School and B.A. from Harvard College.
