Technology vs. Humanity
Gerd Leonhard

56 highlights

I argue that we must place human happiness and well-being at the heart of the decision-making and governance processes that will shape future investments in scientific and technological research, development, and commercialization because, in the end, technology is not what we seek, but how we seek.

Humanity will change more in the next 20 years than in the previous 300 years.

We can no longer adopt a wait-and-see attitude if we want to remain in control of our destiny and the developments that could shape it.

We must imagine an exponentially different tomorrow, and we must become stewards of a future whose complexity may well go far beyond current human understanding. In a way, we must become exponentially imaginative.

To safeguard humanity’s future, we must invest as much energy in furthering humanity as we do in developing technology.

Pretty much everything that can be digitized, automated, virtualized, and robotized probably will be, yet there are some things we should not attempt to digitize or automate—because they define what we are as humans.

The idea of giving machines the ability to “be” might well qualify as a crime against humanity.

the purpose of all technology and business in general should be to promote human flourishing.

Akin to the NRA’s “Guns don’t kill people, people kill people” stance, this strikes me as just a really cheap way of denying responsibility for what they facilitate.

We are witnessing a general lack of foresight and caution around the use and impact of technology. This is primarily because responsibility for what technology makes possible is still largely considered an externality by those who create and sell it—and that is a totally unsustainable attitude towards the future.

Technology, no matter how magical, is simply a tool that we use to achieve something: Technology is not what we seek, but how we seek!

We should not attempt to mend, fix, upgrade, or even eradicate what makes us human; rather, we should design technology to know and respect these differences—and protect them.

In other words, will we eventually prefer relationships with machines rather than with people?

If you think a Facebook “like” already gets your dopamine going, then how much deeper could the virtual high become?

If it can be done directly and/or peer-to-peer, it will be. Technology is making it a certainty.

Some pundits are calling this “platform capitalism” and “digital feudalism” because of the way Uber is treating its drivers as highly expendable commodities—a clear downside of the gig economy.

The bottom line is that, as we head into exponential change, we must also collaborate to address ethics, culture, and values. Otherwise, it is certain that technology will gradually, then suddenly, become the purpose of our lives rather than the tool to discover that purpose.

The US Bureau of Labor Statistics reports that—since 2011—overall US productivity increased significantly but employment and wages did not. As a result, corporate profits have risen since 2000.

the richest 62 people on the planet have now amassed more wealth than 50% of the world’s entire population.

A 2013 Oxford Martin School study suggests that up to 50% of jobs could be automated away in the next two decades.

Enterprise profits could then skyrocket because firms can decrease the number of people they employ globally.

We will eventually need to separate money from occupation, and that shift will challenge some very central assumptions about how we define our own values and identities.

Should businesses that invest aggressively in replacing humans with technology pay some kind of automation tax that goes to benefit those that no longer have a job?

Going along takes a lot less effort than going alone.

We don’t attempt to eat differently; instead, we take medication to help us deal with high blood pressure.

It is also becoming quite likely that, on such fully automated news and media platforms, we will no longer see things that another, possibly more knowledgeable person thought we should see. Instead, content will be selected by a bot, an AI approximating what we should see, based on hundreds of millions of facts and data crumbs, analyzed in real time.

My conversations with IoT proponents around the world suggest that, if it delivers on its promise, we could realize savings of 30–50% on global logistics and shipping costs; 30–70% of the costs of personal mobility and transportation; 40–50% of energy, heating, and air-conditioning expenses—and that’s just for starters.

My reply is always the same: Technology is neither good nor bad; it simply is. We must—here and now—decide and agree on which exact uses are evil and which are not.

Should we just yield to such a development and—as many technologists suggest—embrace the inevitable and complete convergence of man and machine, or should we take a more proactive role and really shape what we do or don’t create?

Welcome to pre-crime, the idea of being able to prevent crimes because our bots would know when intent emerges, even if it would not be obvious to the person involved.

Let’s make no mistake about this: Many of these devices, services, and platforms—whether openly and intentionally or inadvertently—seek to diminish or completely eradicate the difference between us (human nature) and them (second nature), because achieving that would make them utterly indispensable and extremely valuable in commercial terms.

In other words, we have increasingly more options at lower cost, but we are more worried about missing out, about “what we could have done”—all the time. Where is this going?

At the same time that our minds are gaining a kind of warp-speed because they are powered by Google et al., our arteries are clogged with all the junk that comes with these nonstop digital feasts, and our hearts are heavy with too many meaningless relationships and mediated connections that only exist on screens.

Do we now live inside the machine, or does the machine live inside of us?

Cisco predicts that by 2020, 52% of the global population will be connected to the Internet—around four billion human users.

One fears it is almost certain that technology will eventually trump humanity if we merely follow the reactive approach as set forth today.

The more we pretend that our data, and the artificial intelligence (AI) that learns from it, is 100% complete in a truly human way, the more misguided the system’s conclusions will be.

Back in 1968, US Senator Robert Kennedy was already flagging GDP as an ill-guided metric which “measures everything except that which makes life worthwhile.”

In particular, humans seem happiest when they have:
- Pleasure (tasty food, warm baths)
- Engagement (or flow, the absorption within an enjoyed yet challenging activity)
- Relationships (social ties have turned out to be an extremely reliable indicator of happiness)
- Meaning (a perceived quest or belonging to something bigger)
- Accomplishments (having realized tangible goals)

However, the key difference is that machines will never have a sense of being. They cannot be compassionate; they can only ever hope to simulate it well.

Right now the focus is very much on the wonders of efficiency and hyperconnectivity, while the unintended consequences and negative externalities don’t seem to be anybody’s concern.

The key message here is that technology, like money, is neither good nor bad. It merely exists as a means.

Pleasure is, and must remain, a side-effect or by-product, and is destroyed and spoiled to the degree to which it is made a goal in itself.

I often wonder whether exponential technological progress will generate exponential human happiness, beyond the 1% of those who will create, own, and profit from such brilliant miracle machines.

Technology has no ethics—but humanity depends on them.

Imagine an AI that drives your autonomous vehicle not knowing when it is and when it isn’t OK to kill an animal that’s on the road.

So now people assume that religion and morality have a necessary connection. But the basis of morality is really very simple and doesn’t require religion at all.

We need to define a set of bottom-line digital ethics—ethics that are fit for the Digital Age: open enough not to put the brakes on progress or hamper innovation, yet strong enough to protect our humanness.

Today, decisions about implementing technology are made largely on the basis of profitability and efficiency. What is needed is a new moral calculus.

Anonymity, mystery, serendipity, and mistakes are crucial human attributes we should not seek to remove.

What will we do about emotions, surprise, hesitation, uncertainty, contemplation, mystery, mistakes, accidents, serendipity, and other distinct human traits? Would they become undesirable because algorithms and machines are perfect: programmed not to make mistakes, able to work 24/7/365, free of unions, and by and large willing to do as they are told? (Well, at least the non-thinking kind will…).

exogenesis—pregnancy outside the womb, babies born in labs.

Going forward, the primary question in technology won’t be about if something can be done, but why, when, where, and by whom it should be done.

With machines doing all the hard work, increasing numbers of people are doing what they want to do rather than what pays the bills. The BIG (basic income guarantee) has become a key factor in societal happiness, fueling a new boom in arts and crafts, entrepreneurship, and public intellectualism.

I believe humanity is likely to change more in the next 20 years than in the previous 300 years.

Embrace technology but don’t become it.