How Changing Technology Impacts Executive Search: An Expert Q&A With Gerd Leonhard

A Discussion on Automation, Artificial Intelligence (AI) & Executive Assessments

Gerd Leonhard is a futurist who is listed by Wired Magazine as one of the top 100 most influential people in Europe. His work is concerned with the future of humanity and its relationship with technology, as outlined in his new book Technology vs Humanity. 

AESC spoke with Gerd in 2016 ahead of our European Conference titled Hybrid 2016: The Art & Science of Strategic Advising. In this Q&A discussion, Gerd explores how changing technology, including automation, artificial intelligence and executive assessments, affects the executive search profession.

AESC: Executive search and leadership consulting is increasingly a balance of art and science, a theme that runs throughout your new book Technology vs Humanity. How is technology changing society and what does it mean for our profession?

Gerd Leonhard: Technology now is basically able to do whatever we want it to do. The question isn’t so much whether technology can do something – which was a big question up until now, because the technology wasn’t good enough – the question is about purpose: what do we want technology to do? Algorithms are useful for judging and assessing people, but there are also so many things between the lines. If you treat an algorithm’s findings as if they were human findings, you can overdo the automation process.

AESC: The science behind executive assessments – of leadership, cultural fit, psychometrics etc. – is becoming more and more sophisticated. How much more can we understand about people through assessments?

GL: Well, it depends on what you believe. IBM’s CEO always says that intuition and knowledge will be replaced by data and analytics because that’s what they sell. The world-leading psychologist and Nobel Prize winner Daniel Kahneman says that the best cognition is embodied. We don’t think just with the brain, we think with the body. Emotions aren’t data and shouldn’t be treated as such. If we ignore assessments we definitely won’t be successful because we don’t have enough information to make decisions, but if we only pay attention to data then we will become a machine.

Learn more about executive assessments: The Role of Executive Assessments in the Search Process | AESC

AESC: If it’s true that we’re not too far away from a future where machines can talk like us, work like us and think like us, what makes us human?

GL: That’s the ultimate question. What humans do is, to a very high degree, not data. Basically, it will be a long time before we can understand the workings of the giant machine that is the human body. Things like compassion, values, feelings and emotions make us human. We can meet somebody for one second in a hallway and we will immediately know who that person is, which computers can’t do. 99% of what we are is not specific or algorithmic – at least not to the point that we can understand it.

Computers are very good at simulations. You have 62 facial muscles, and the computer can read all of them, learn all of their hundred million combinations and tell if you are lying. But the computer cannot actually lie. It can simulate lying, but it cannot be angry or lie.

The difference between humans and machines is that whatever is very simple for humans is complicated for computers, and vice versa. That’s called Moravec’s paradox. That’s where the value comes in for us. We need to give the machines the things that are easy for machines. It helps efficiency – it helps our business. Machines will be very good at anything that is routine.

AESC: How soon can we expect automation to have a significant impact on the labor market?

GL: We’re looking at the point where in roughly ten years all the things that were science fiction become possible. Nanobots in my bloodstream cleaning cholesterol, for example. It means we will find ways to bring more machines to the labor market. For example, rather than calling your assistant, you just speak to a device and it will book the flight when you want it to.

But this is also where my colleague Paul Saffo, who is a futurist in San Francisco, likes to say “we should not mistake a clear view for a short distance”. So yes, all of these things will be possible, but I think it will take longer for them to be commercially relevant. For example, people get confused about self-driving cars. We’re not too far away from being able to travel around cities at 20mph, but being in a self-driving car on a motorway is a long way away.

AESC: Technological unemployment and human de-skilling are two slightly scary-sounding phrases that come up in your book several times. Is there a corporate responsibility to consider how technological growth could begin to impact humanity on a wide scale? Will this require a shift in mindset for most business leaders?

GL: The decision-making process of business leaders is currently shaped most by financial concerns. Most companies are driven by revenue and efficiency. If you continue in that direction, most companies will become giant machines. So you would then say you’re going to automate 95% of the workforce because you can. Then you would become a machine run by people. Every business process and success is about culture, not technology. Having fantastic technology together with a strong culture is the winning combination.

The problem with technology is that people are barking up the wrong tree by saying it is all about efficiency and margins. It’s really not about that. It’s that once you’ve reached efficiency, what do you reach next?

If you don’t question ‘why?’ then you can say we will stop aging and cure cancer and have superhumans and brain implants; you end up in a place that is in technological overdrive, where humans don’t have any purpose. So how do we deal with the power we have and put it into context? How do we create rules about what is acceptable, and who is in charge? There is no way that we can roll back technology. We invented technology. We’re not going to go back and make it illegal to use AI. If we don’t regulate it and agree on norms of social conduct, it could end up very ugly. The job facing us is to create that framework, not to forbid it.

For more information on how to leverage emerging technologies and lead transformation in the new era, download Leading Transformation: Shaping the C-Suite for Business 4.0 Innovation.