
Visions, needs and requirements for Future Research Environments: An Exploration with Zoologist, Psychologist and Science Fiction Author Adrian Tchaikovsky
Katharina Flicker (TU Wien), Florina Piroi (TU Wien), Andreas Rauber (TU Wien), Adrian Tchaikovsky, Andrew Treloar (ARDC)
This interview is also available for download: DOI 10.5281/zenodo.4730660
We live in remarkable times: the world is changing at an increasing pace, our societies face challenges that extend across national and geographical borders, and we are flooded with (dis)information. The scientific process has already changed extraordinarily in the past half century, with research environments evolving from isolated and loosely connected islands into dense networks of cooperation between researchers and institutions.
Still, the world keeps changing, and we need to ensure that science remains a global effort. Building a global network and infrastructures to support that aim, however, takes time. We need to start such building processes now and – most importantly – we need to develop and explore visions for research, science and society that give us ways into desirable futures. Thus, we launched an exploration series to elaborate visions of how research will be conducted in the future and to explore different perspectives on research.
“Communicate the science from the very technical and difficult down to a level where the average person can understand it”
TU Wien: What do you think is most relevant in order to perform cutting-edge research now and in the future?
AT: The bigger question is: what do we think the purpose of people in society is? I think we are bound to the notion that this purpose is to work. Your work, the thing that you are employed to do, is the thing that gives you validity as a person. When we look at people purely from the point of view of employment, we don’t have a purpose for the majority of people. The majority of people probably do jobs that don’t really need to be done. Especially now, we are effectively automating intellectual tasks as well as many physical tasks. We will hit a societal crash point, where there will be a vast number of people who have literally nothing to do because what they were previously employed to do doesn’t need to be done by people anymore. At the same time, we are still trying to define people by their job. These two things are completely incompatible. Unless we get to the point of accepting that you don’t have to work, I don’t think we survive that point.
As long as you are asking questions such as who is the lead researcher, who owns the research, that in itself is setting up a hierarchy
TU Wien: How do we prevent that problem from manifesting itself in a catastrophic way, and how do we manage to survive such a societal crash point?
We need to re-prioritize how people see themselves and how they are defined by society
AT: I think that in order to make a change, we need to re-prioritize how people see themselves and how they are defined by society. You’re kind of going against the idea that people live to serve, and that if they are not serving they are not doing anything. Additionally, a remarkable culture of selfishness has built up. I think it is tied to the hierarchical nature of our western society – this being the society that I really have experience of – and the balance between our worth as dictated entirely by our internal indicators and our worth as dictated by society. The first would make you more selfish than the second, you’d think. But weirdly, it seems to work out the other way: when you look at your worth as measured by society, you are already measuring yourself against other people, updating your internal indicators. I mean, as long as you are asking questions such as who is the lead researcher, who owns the research, etc., that in itself is setting up a hierarchy where some people are better than other people.
Moreover, once you have that [set of indicators], an awful lot of your energy and time goes into the meta-game of fighting for position within that hierarchy. People who are doing the actual research will always lose out in the meta-game of climbing the hierarchy because their effort is being spent in other places. That is also going to cripple any attempt to build any kind of improved future society. All of the effort pushing in another direction will evaporate in this meta-game, and that is because of the nature of how hierarchies work. Whoever wins will be the one least suited to hold a position of authority.
Another thing we have seen recently is that little knowledge and horribly misconstrued pop science are dangerous. In our interconnected society, you can claim almost anything and it will attract people – particularly when you have a demagogue spreading it. A huge number of people will simply join a cause believing that what they are saying is science, because their understanding of science is just enough to know that science adds some weight to the credibility of a cause, but not enough to look into the genuine research. You just need an idea that turns up dressed vaguely like a scientist, and people will give it a huge amount of credence and never inquire further. So the actual basic scientific diligence of “Oh! Let us look at this” never occurs. It becomes merely a faith-based approach to science. I think that – if you could get science communication to a point where you are able to communicate the science from the very technical and difficult to grasp down to a level where the average person can understand it – that would do a huge amount of good.
TU Wien: Against this background, what kind of communication tools would be of help?
AT: What if an AI system were theoretically complex enough to take in this knowledge and present it to the public? A friendly sort of AI that tells you what science is saying today and asks whether you would like to know more. If I were writing a world where this [idea] was handled, I would have an AI that would somehow be able to communicate science in a way that everyone can get accurate information, at the level they are comfortable with, and the more you want to ask, the more information it can provide you with. That would be glorious. That would be a sort of scientific utopia, and AIs are a world away from that as far as I can work out. We are nowhere near that level of analytical process.
Their understanding of science is just enough to know that science adds some weight to the credibility of a cause, but not enough to look at the genuine research
TU Wien: Getting back to what you said before concerning the dangers of little knowledge and horribly misconstrued pop science, what mechanisms would we need to support a society that bases decisions on facts and science?
AT: I think you might have to change the way that teaching in general, and science teaching in particular, happens in schools, and then also very much change the way that scientific issues are reported on by the media. The way scientific subjects are presented in the media and the way they are presented at all levels of education is probably the most powerful tool you have for influencing how the next generation of people will actually approach things like science. Hopefully, the next generation of people will at least grow up with the idea of asking questions and finding out about issues of interest to them as a baseline approach to the world, rather than simply sitting there being told about things.
The way scientific subjects are presented in the media and the way they are presented at all levels of education is probably the most powerful tool
However, I think you might run into another problem in relation to science communication. Our society places disproportionate value on certainty. We’ll absolutely believe someone who is very confident and who seems to know a subject, in contrast to someone who knows a great deal and is in a role – a scientist, a medical advisor – where their opinion should carry quite a lot of weight, but who is tentative about what they are saying. Obviously, people are aware of this: persuasion techniques are thousands of years old. The idea of making yourself sound very sure and confident is an age-old way of getting people to follow you and making them do what you want them to do. I therefore think you potentially have to work against something very fundamental in people’s psychological makeup. I am not saying this is not doable by any means, but I think it is a struggle to get people to approach life from the point of view that uncertainty is a good thing, valuable in itself because it leaves room for development and change.
TU Wien: This is a most difficult matter since, with very few exceptions, there is hardly ever a situation where you have one solid truth. After all, a concept is valid for the time being, and then somebody comes along with another idea or a different concept that replaces the old one.
AT: I absolutely agree. That is, from my perspective, an indisputable facet of societies. The only thing we know is that we do not necessarily know.
About Adrian Tchaikovsky
Adrian Tchaikovsky studied Zoology and Psychology at the University of Reading in the UK and is best known for the Shadows of the Apt series. Tchaikovsky has won several awards for his work, including the Arthur C. Clarke Award and the British Fantasy Award.