20th Innovation Forum, November 4, 2019
Economy and Innovation in the Age of AI
Artificial intelligence (AI) is a topic of high priority for many decision-makers, not only in companies but also in the political arena. Against this background, around 30 managers, scientists, and representatives of several charitable organizations came together for this anniversary Innovation Forum – the 20th such event – to discuss the consequences of this technology. In his welcoming address, Prof. Dr. Eckard Minx, Chairman of the Board of the Daimler and Benz Foundation, pointed to the need for clarification of the term AI: notions and definitions of AI, of its inherent nature, and of its capabilities diverge widely. Everything seems conceivable – from a cure for all woes to a recipe for disaster. The lack of a precise definition, combined with the widely divergent expectations raised by the countless algorithm-based systems we encounter today, generates hype that diverts attention from the important issues. “The promise of being able to eliminate all uncertainty and unforeseen circumstances by means of gigantic data volumes and large computing capacity is causing more and more decisions to be assigned to algorithms. Pure volumes of data are acquiring the status of facts.” A problem in this regard, Minx continued, is that these decisions are invariably founded on statistical correlations rooted in the past.
In his presentation “Artificial intelligence – What are we talking about, and what should we be talking about?” Matthias Spielkamp, Executive Director of AlgorithmWatch and Scientific Director of the Innovation Forum, first clarified some key concepts encountered in discussions of AI. He contended that in the debate over the societal consequences of methods based on the statistical evaluation of large volumes of data, the term “artificial intelligence” diverts attention from the specific challenges. For this reason, AlgorithmWatch uses the alternative expression “algorithmic decision-making” (ADM). First, the organization thereby seeks to counter the impression that such methods are comparable with human intelligence. Second, it must be made clear that decisions by natural and legal persons are modeled here and encoded in software, rather than a machine “deciding” for itself – even though data processing systems ultimately execute actions without, or at least without decisive, human intervention. The fundamental decision, however, is made when it is determined that an automated system will be used for a specific purpose and how the model is designed accordingly. All of this is decided by humans, who must then ultimately bear the responsibility. It is not widely known that in many European countries, systems are already in use that classify people and make far-reaching decisions on this basis: for example, whether therapies are approved, social benefits granted, or families monitored for possible child neglect. Although attempts to use data analysis to decide on future actions are justified in principle, Spielkamp continued, it is crucial that adequate structures be in place to ensure that these methods are employed with legitimate intentions and that the methods themselves fulfill appropriate quality requirements.
For the most part, supervisory authorities have so far been unable to meet these criteria, as they lack both capacity and expertise. As a first step, AlgorithmWatch is therefore calling for a requirement that public administrations publish registers documenting which ADM systems are used for which purposes, who develops them, and what logic the methods follow.
In her presentation “Human Resources Analytics – Fair and Free of Discrimination?” Dr. Katharina Simbeck, Professor of Business Informatics and Dean of the Faculty of Computing, Communication, and Business at the University of Applied Sciences HTW in Berlin, pointed out that company personnel departments are already making increasing use of automated methods, for example in the preselection of applicants. They use various elements of text analysis for this purpose. The underlying routines are not transparent, however, and are developing into a black box that can inadvertently discriminate. “Machine learning relies on large volumes of data acquired in the past; this can reproduce and at times even reinforce prejudices,” said Simbeck. A team of researchers from HTW analyzed automated evaluations of letters of application by the four suppliers Amazon, Google, Microsoft, and IBM. The supposedly objective assessment of the applicants’ aptitude turned out at times to be highly inconsistent: depending on the supplier and the chosen name, gender, or aristocratic title, identical letters of application were often evaluated very differently. The assumption that data-based assessments are fair and objective was thus shown to be erroneous. A major difficulty for users is that they have no information about the suppliers’ criteria and algorithms. As these systems are relatively new, however, it is highly likely that adjustments will be made and that customized solutions for individual companies will become more widespread. These systems are also expected to become more influential with increased use.
Dr. Julia Borggräfe, Manager of the “Digitalization and the Labour Market” department of the Federal Ministry of Labour and Social Affairs (BMAS), first presented her department as a think tank that enables the ministry to actively shape the digital transformation and its effects on the world of work and society, rather than merely reacting to developments. With new formats, ranging from labs to fellowship programs and co-creation processes, the think tank sets out to benefit from exchange with as many stakeholders as possible in order to develop strategies for meeting the challenges of digitalization. After all, “in actual fact we do not have any rules when it comes to discrimination in technical systems, but we urgently need them, because acceptance of AI systems is also a matter of trust,” said Borggräfe. To draw up these rules in a way that meets with general acceptance, the ministry must build up its own expertise – this too is a central task of the think tank, she continued, and it is all the more important because such expertise is also urgently needed in Europe. It is also necessary to promote large-scale training programs in AI, covering both technical and social competence. The think tank thus sees itself as an all-encompassing unit that makes its expertise available to other departments as well.
Peter Froeschle, Managing Director of ARENA2036, gave a presentation on “AI in Production.” He started by introducing the research campus co-founded by Daimler AG, Robert Bosch GmbH, and the University of Stuttgart and subsidized by the Federal Ministry of Education and Research (BMBF). This is an innovation platform on which more than 30 partners from industry and science can exchange, and above all try out, new ideas in a pre-competitive setting. In the 10,000 m² research factory the risks of developments can be shared, and topics can be investigated that lie between fundamental and application-oriented research. Added value is thus generated for all involved, said Froeschle. To tap further potential, the start-up accelerator STARTUP AUTOBAHN was also established under the umbrella of the research campus to promote the founder scene, enabling this innovation potential to be smoothly coupled with high-technology research. Fluid production, with its high complexity, is particularly well suited to the application of artificial intelligence. Two fundamental considerations must be taken into account here, however: Could the problem be solved more simply on the basis of a model? And is the optimizing AI approach transparent enough for humans to intervene at any time to optimize processes? At the same time, it is also clear that artificial intelligence will increasingly be used for support in individual cases.
At the conclusion of the Innovation Forum, the science journalist Dr. Manuela Lenzen spoke about “Confusion Machines.” In her presentation she pointed out that the use of helpful and even intelligent machines is an age-old dream of humankind. “When these machines then roll their eyes and speak to us, we are very easily tempted to ascribe human qualities to them,” said the philosopher, describing the confusion that can arise from this form of anthropomorphism in our dealings with robots and AI systems. But even though AI systems already act more effectively than humans in many areas, genuinely autonomous behavior – and human knowledge in particular – is still a long way off, because the application of such computer systems is limited to highly specific, narrowly defined tasks. “While many people are still waiting for the breakthrough, artificial intelligence is slowly but surely making inroads into our everyday lives,” said Lenzen. This is giving rise to fear among many people that machines will take power. Not least for this reason, society must be able to determine which functions machines should be taught. There is currently a lack of transparency as to how computers and algorithms arrive at their conclusions, which fuels precisely this uncertainty. But this can be put to good use: “There are also good sides to confusion, because it prompts intensive discussion of the decision-making processes of humans and machines. This reveals differences, but above all elements of uncertainty, in decision-making processes. So confusion machines can become precision machines,” said Lenzen in conclusion.
Matthias Spielkamp, AlgorithmWatch
Prof. Dr. Eckard Minx in conversation
Dr. Katharina Simbeck
Dr. Manuela Lenzen
Dr. Julia Borggräfe