HEC Debate Wards Off Fear of Robotization
HEC Paris Professor Augustin Landier and Afiniti Managing Director for Global Business Development Jérôme de Castries (H17) were at the heart of a lively debate, broadcast live, on the role of Big Data and Artificial Intelligence in the financial world. A full-house of entrepreneurs, journalists and students were treated to rare insights into this digital sector of the future and its relationship with current and future members of this business community.
Four years ago, the world-famous theoretical physicist Stephen Hawking warned BBC viewers that “the development of full artificial intelligence could spell the end of the human race…” Last year, Smithsonian magazine published a long study warning readers that almost half of US jobs will be fully automated “in a decade or two”, leading to mass unemployment.
Such alarmist forecasts were quickly dispelled by the two guest speakers at the HEC Paris Roland Garros debate, held in the first week of the 2018 Grand Slam tournament. Concerns over the dynamic development of AI and Big Data, and their links to the robotization of industry, were the first topic they wished to tackle. “Although these fears of technological transformation are understandable and recurrent,” said the youthful Jérôme de Castries, “we must overcome them, just as the business community did in the 19th century. At the time, industry resisted pressure from the Luddites, who feared automation and destroyed weaving machines in protest. Yet, like today, the system was able to create employment and free itself from the most menial tasks.”
“The ratio of humans to machines, and of humans to money invested, clearly indicates that robotization has not led to the disappearance of the human element in industry,” insisted Augustin Landier in his characteristically dynamic and didactic style. “Humans are needed to build the robots and to work constantly on their evolution. Robots are here to multiply our ability to act fast and to understand just as rapidly. In other words, a craftsmanship is sprouting up alongside automation, and it makes one thing clear: robotizing a sector does not make the human peripheral to our needs.”
Preparing Students for the Digital World
The tone was thus set for a comprehensive panel review of the impact AI has had over the past decades, in particular following its coupling with Big Data. For the hour-long debate, de Castries drew on the remarkable success story he has enjoyed at the helm of one of the most promising global AI companies on the market, Afiniti, to illustrate his human-centered approach to robotization. Meanwhile, Landier’s 15 years of research and teaching at some of the world’s top institutions have been a springboard to analyze the development of robotization and to publish works calling for automation to “save industry and therefore save jobs.”
“Nowadays,” he explained, “this evolution has opened up the sector to a new profile of worker who combines all the computer science tools available to leverage and increase our capacity for analysis. We at HEC Paris are preparing our students for this new profession. We professors blend our knowledge so they emerge with an education geared toward management and finance, but also toward engineering and a good deal of behavioral economics.” The academic is known for his unwavering analysis of economic clichés, as unveiled in the book he co-wrote, “Ten Ideas Which Are Sinking France”. He pursued his line of thinking on this new type of student: “We have to leave behind us the stereotype whereby it’s only the engineer who gets his/her hands dirty. Nowadays, there are plenty of tools available to analyze Big Data. So we as professors are encouraging students to have grounding in both managerial and economic skills and to be competent in data analysis and psychology.”
“As for us professors,” he went on, “these are exciting times. There is a huge number of open-source programs available, and we all have access to this incredible library on the Internet. So, in fact, there are very few of us who are at the frontier of this discipline. It’s no longer the top-down paradigm of professors using old software. That disappeared in the Nineties, to be replaced by Python, R for econometrics, and so on. It makes teaching so much more concrete and collaborative.”
Ethical and Normative Regulation
Jérôme de Castries urged business schools to teach the future managers of AI companies three managerial skills: a focus on using AI technology for “simple” tasks; the discovery and exploitation of appropriate tools to measure a business’s success rate; and a guarantee that it is the supplier, not the client, who takes the business risks in any venture. Equally important for the 26-year-old executive, however, is the ethical responsibility which comes with robotization: “Students must not underestimate the impact this new technology can have on society, and the responsibility that goes with it.” He elaborated: “Take, for example, the development of autonomous weapons which can execute people without direct human intervention. These must never be developed! In the past, we adopted Geneva Conventions banning chemical weapons. A similar debate and action must be instigated to save us from catastrophic harm.”
Augustin Landier complemented de Castries’ point on the regulatory measures necessary to prevent the abusive use of Big Data in business. But he played down the panicked approach certain commentators have adopted on the issue. “We’ve seen debates in the past over the use of DNA or genetics by insurance companies. Remember, there have been polemical discussions over the abusive use of predictive information from an individual’s DNA, which could encourage companies to offer lower or higher premiums according to the readings. As with that issue, normative regulation needs to be imposed on the use of AI and Big Data. And the academic world must provide the groundwork on which such regulation can be built. This is an interesting question, but it should be treated with a strong degree of serenity; there is no apocalyptic message behind such a debate.”
Artificial Intelligence Becomes Human Intelligence
The Roland Garros tournament was a more than appropriate location for the HEC exchange, on several levels. Firstly, the tournament’s namesake, the ace pilot Roland Garros, was an HEC graduate (H1908) whose name has graced the event for over 90 years. Garros famously defied the odds in aviation by modernizing his engines to break records worldwide, including the first-ever crossing of the Mediterranean in 1913.
Secondly, the tournament has been a permanent AI and Big Data laboratory for IBM for over three decades. Indeed, the multinational has been testing its programs on the event ever since it arrived in Paris in 1986, from Watson, famous for its Jeopardy! victory, to, most recently, Gary. These computer tools analyze and synthesize data not only to serve the insatiable appetite of the media and players for information on the tennis, but also to help Roland Garros officials ward off cyber-attacks and monitor traffic.
The pace of change has excited some, frightened others. It inspired IBM CEO Ginni Rometty to famously say: “Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.” But for business magnate Elon Musk, the risks are real, as he wrote on Edge.org: “The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast it is growing, at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.”
The jury is still out on which tendency will dominate the coming decade.