
Visions, needs and requirements for Future Research Environments: An Exploration with Economist and Science Fiction Author Karl von Wendt
Katharina Flicker (TU Wien), Florina Piroi (TU Wien), Andreas Rauber (TU Wien), Karl von Wendt
This interview is also available for download: DOI 10.5281/zenodo.4506912
We live in remarkable times: the world is changing at an increasing pace, our societies face challenges that extend across national and geographical borders, and we are flooded with (dis)information. The scientific process has already changed extraordinarily over the past half century, with research environments evolving from isolated, loosely connected islands into dense networks of cooperation among researchers and institutions.
Still, the world keeps changing, and we need to ensure that science remains a global effort. Building a global network and the infrastructures to support that aim, however, takes time. We need to start such building processes now and – most importantly – we need to develop and explore visions for research, science and society that open up paths into desirable futures. Thus, we launched an exploration series to elaborate visions of how research will be conducted in the future and to explore different perspectives on research.
“The dialogue between science and the public is essential in order to establish open exchange of information”
TU Wien: Thank you for taking the time to talk to us. The reason for this series of discussions is to look at future research environments from as many different perspectives as possible. We believe we need some sort of broadening that allows us to explore various future scenarios.
KW: I studied business administration and did my PhD on artificial intelligence applications, but I am a writer and see my task as thinking a few steps ahead, dragging possible future horror scenarios into the light of day. Such scenarios include, for example, increasing inequality of knowledge and power, which is amplified by AI; a lack of understanding of AI, algorithms, and the technologies that we use all the time and that massively interfere with our lives; or the impossibility of complete data transparency. The sheer amount of data is beyond what the human brain can comprehend, which is why citizens cannot take responsibility for their data in the way the GDPR [General Data Protection Regulation] stipulates. It is unrealistic to expect individuals to understand what is happening with all their data.
It is unrealistic to expect individuals to understand what is happening with all their data
TU Wien: Let us explore your thoughts on the GDPR, then. What would we need, instead of – or in addition to – these regulations, to make individuals understand?
KW: For that, I think we could look to other areas such as the pharmaceutical or food sectors. For a long time now, there have been not only labeling requirements, but also regulations about what you are (not) allowed to do. Thus, we already have a relatively sophisticated system of safety measures. We need something similar in the information sector. We need to proactively control what a company is allowed to do with data. I do not think the industry will do this on its own. Then, of course, there are the bad guys: Internet criminals, dictators, or people who abuse such systems for whatever reason.
TU Wien: At the beginning, you also mentioned the lack of understanding of AI, algorithms and technology that affects large parts of the population. Which methods are best suited to counteract this?
KW: I see two possibilities here. One is to inform people, which I think is a very important approach. What should be communicated, however, are not the details of how AI works technically. People need to understand how AIs are being used and how the technology actually affects them. We also need to emphasize where information comes from and how it is used.
In the pharmaceutical industry, for example, testing phases are well established. Such procedures do not exist for all research infrastructures
The second approach is to establish regulations and restrictions on what companies are allowed to do with our data. Trust is also important in this context. Ultimately, AI means automating decisions. If I leave it to Google to select the search results, then I also trust Google to make this decision better than I do, and most of the time, that's true. Thus, the industry actually has an interest in maintaining that sort of trust. If we got into a situation where this trust was massively undermined because it suddenly became clear that companies simply wanted to trick us, then trust could be lost quickly. Accordingly, I believe that you can tell companies to play along if they want to stay in business. Then you have to make sure that certain things simply don't happen anymore, but there has to be pressure from the outside. Otherwise, it won't happen.
TU Wien: The same applies to researchers and research results. Here, too, there must be a certain basis of trust.
KW: Yes, that is a very important point, because we are currently seeing the opposite happen. People trust science less and less. Instead, it seems they are suddenly starting to believe fairy tales whenever it suits them. Therefore, researchers naturally also have a great interest in counteracting fake news and disinformation. Against this background, science communication is crucial. Especially in a democracy, I believe, we need to communicate as much as possible. The dialogue between science and the public is essential in order to establish an open exchange of information. Science communication could be promoted via public relations professionals, events, shows, perhaps also via YouTubers or other individuals. We cannot rely on our educational systems alone. Instead, we must be aware that there are movements that threaten to destroy everything that science has built up over two or three centuries, and we must stand up to them. Science can play a particularly important role here by making it clear that it serves humankind.
TU Wien: Having said all this, how would we achieve or support this through the future research infrastructures that we are building?
KW: I would start by analyzing existing infrastructures and related systems. In the pharmaceutical industry, for example, testing phases are well established. Such procedures do not exist for all research infrastructures, or for the digital market. Of course, there are individual studies, but as far as I know, there is no systematic survey in this regard. In any case, we would first have to understand exactly what the risks and side effects of technologies are. Then we could look at legal rules and regulations and take measures to limit those risks and side effects.
We would first have to understand exactly what the risks and side effects of technologies are
About Karl von Wendt
Karl von Wendt studied business administration at the University of Münster in Germany and is best known for the Boy in a White Room series, written under his pen name Karl Olsberg. He was nominated for the Kurd Laßwitz Award and the German Youth Literature Award.