2024.12.5., Thursday, 11:32

AI and brain-machine interfaces, the allure of a career in science and social harmony – interview with Dr Yanan Sui

The World Science Forum 2024, held in Budapest, brought together leading scientists and policymakers from around the globe to address pressing challenges facing the scientific community. While one of the central themes emerging from the discussions was the apparent decline in trust in science, this trend, as the conversations revealed, might not be as universal as often portrayed in Western media. The complexity of the global landscape of trust in science became particularly evident in an interview with Dr Yanan Sui, Associate Professor at Tsinghua University in Beijing, who offered a counterpoint to the prevailing narrative of eroding trust in science.

 

From Dr Sui’s perspective, trust in science in China remains remarkably robust, with scientific authority carrying significant weight in public discourse. “My observation and my feeling is that trust in science is at a very high level, and I think that’s true even compared to other times historically,” he observes, noting that this elevated status also stands out in comparison with other countries. The influence of scientific thinking extends even to the highest levels of governance, where policymakers actively invoke scientific principles to validate their decisions.

 

Unlike in many Western countries, where scientists often become scapegoats for unpopular policies implemented under the banner of “following the science”, the Chinese public’s respect for scientific authority appears more resilient. According to Dr Sui, even when policies do not achieve their intended outcomes, scientists are not typically blamed. Instead, the mere association with scientific methodology lends credibility to decision-making processes: “People have the general feeling that if you are doing things under the guidance of science, you are doing things in a scientifically correct way, then it has this legitimacy to be [the] correct [approach].”

 

This cultural reverence for science is deeply embedded in education and career aspirations. “For my generation, and still for many in the younger generations, we had a fair scientific education, and a lot of the young people have their aspirations to make scientific contributions to the society,” Dr Sui notes. Unlike in some other cultures where young people might dream of becoming athletes or entertainers, a career in science maintains its allure in China. Despite relatively modest financial compensation, scientists and educators command substantial respect for their roles in advancing knowledge and nurturing future generations.

 

AI, brain-machine interfaces and digital models of our muscles

 

The conversation with Dr Sui then shifted to his own research at the intersection of artificial intelligence and brain-machine interfaces – fields that have experienced dramatic transformations in recent years. Reflecting on the evolution of these disciplines, Dr Sui recalls a time when they were relatively niche areas. “When I was an undergraduate student, and even at the beginning of my PhD studies, those fields were not hot at all,” he remembers. The brain-machine interface community was particularly small, only gaining widespread attention in the past decade as its potential for understanding and treating nervous system disorders became more apparent.

 

The trajectory of AI research has been even more remarkable. Dr Sui describes how researchers in the field once avoided using the term “artificial intelligence”, preferring to identify their work with specific subfields like machine learning, computer vision or natural language processing. “Today we see that all those are sub-areas of AI, but we never said at that time that we do research in AI,” he notes. This reluctance stemmed from the field’s historical ups and downs, with AI falling in and out of favour with funding bodies and the broader scientific community over the past decades.

 

Dr Yanan Sui, Associate Professor, Tsinghua University

 

Dr Sui’s current research focuses on modelling the human neuro-musculo-skeletal system to create dynamically controllable digital representations of human beings. This work aims to understand fundamental aspects of human movement that remain surprisingly mysterious. “We still know very little about why we can, for example, sit still without any burden on our brain. And when we are standing, when we are walking, we are very stable. We are not falling. But that is actually a non-trivial control problem from the learning and control point of view,” he explains.
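
To give a sense of why quiet standing is such a non-trivial control problem, here is a minimal sketch – not Dr Sui’s model, and with purely illustrative parameters – that treats the body as a single inverted pendulum: left to gravity alone it topples, while a simple proportional-derivative feedback controller keeps it near upright.

```python
# Minimal sketch (not Dr Sui's model): standing balance as an inverted pendulum
# that falls unless a feedback controller keeps correcting the lean.
import math

def simulate(controlled, theta0=0.05, dt=0.001, t_end=3.0):
    """Integrate a single-link inverted pendulum; return the final lean angle in radians."""
    g, length = 9.81, 1.0          # gravity and pendulum length (illustrative values)
    kp, kd = 30.0, 8.0             # proportional-derivative gains (assumed, not tuned to humans)
    theta, omega = theta0, 0.0     # small initial lean, zero angular velocity
    for _ in range(int(t_end / dt)):
        torque = -(kp * theta + kd * omega) if controlled else 0.0
        # Gravity tips the pendulum away from upright; the control torque pushes it back.
        alpha = (g / length) * math.sin(theta) + torque
        omega += alpha * dt
        theta += omega * dt
    return theta

print(f"no feedback : final lean {simulate(False):+.2f} rad  (falls over)")
print(f"PD feedback : final lean {simulate(True):+.2f} rad  (stays near upright)")
```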

 

The goal of this research is to develop detailed digital models that can help understand how individual muscles work together to create stable movement. This digital representation offers unprecedented opportunities for research that would be impossible with human subjects. “With real humans we cannot do a lot of experiments to test how our muscles are behaving, how they are performing specific tasks. But if we have a digital representation, a digital model of our system, then we actually can learn a lot,” he notes. These models could help understand various aspects of human movement, from basic actions like sitting and standing to differences between healthy movement patterns and those affected by neurological disorders.

 

In the field of machine learning, Dr Sui’s work primarily focuses on reinforcement learning, which historically developed largely in parallel with supervised learning algorithms and the early versions of large language models. And while current language models operate purely in the virtual world, Dr Sui’s work bridges the digital and physical realms through human modelling and human-robot interaction.

One of his most significant contributions has been the development of “safe exploration” algorithms, which are particularly important for applying brain-machine interfaces safely in medical treatments.

 

These algorithms address a fundamental challenge in treating paralysed patients: how to safely control high-dimensional stimulating arrays implanted in patients’ spinal cords to restore movement. “Our collaborating physicians can implant a high dimensional stimulating array into the spinal cord of [paralysed] patients, but how to control that high dimensional array to provide information, to provide energy, to help the physical system [of the patient] to move again? That is largely unknown, and we need to develop efficient algorithms to learn that from scratch,” he explains.

 

The safety risks in such experiments are significant. Arbitrary stimulation of the spinal cord could cause discomfort or even injury to patients’ muscles, bones or joints. This led Dr Sui and his colleagues to develop their “safe exploration” algorithm about twelve years ago, which has become standard teaching material at prestigious institutions like Stanford University and Caltech, and has been incorporated into several university textbooks.

 

The algorithm’s core principle is conservative optimisation. “We are not doing arbitrary things or arbitrary search. We’re doing a type of conservative search and conservative optimisation [of our treatment protocols], which we call ‘safe optimisation’ or ‘safe exploration’,” Dr Sui explains. The process involves defining a safe range of experimental parameters within the decision space and making choices only within that boundary. The outcomes of decisions made at the boundary then inform whether the safe zone can be expanded in a particular direction. If a boundary proves to be hard – meaning that going further could lead to negative outcomes – the exploration shifts to other locations along the boundary.
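
As a rough illustration of this idea – a toy one-dimensional sketch, not Dr Sui’s published algorithm – the snippet below starts from one parameter known to be safe, evaluates only points inside the current safe set, expands the set around outcomes that stay above an expert-set safety threshold, and stops expanding where the threshold is violated. The response function, threshold and parameter grid are illustrative assumptions; a real implementation would work in a much higher-dimensional space and use a learned model with confidence bounds to judge safety before testing a new point.

```python
# Toy 1-D sketch of "safe exploration" as described above (illustrative assumptions throughout).
import numpy as np

def illustrative_response(x):
    # Hypothetical treatment response: peaks around x = 0.4 and becomes
    # unsafe (negative) for x above roughly 0.9.
    return 1.0 - 4.0 * (x - 0.4) ** 2

grid = np.linspace(0.0, 1.0, 21)      # candidate stimulation parameters (assumed 1-D grid)
safety_threshold = 0.0                # fixed in advance by domain experts (assumed value)
safe = {5}                            # start from one parameter already known to be safe
observed = {}

for _ in range(40):
    # Only untested parameters inside the current safe set may be tried; prefer its edges.
    frontier = [i for i in sorted(safe) if i not in observed]
    if not frontier:
        break
    i = frontier[0] if len(observed) % 2 else frontier[-1]   # alternate between the two edges
    outcome = illustrative_response(grid[i])
    observed[i] = outcome
    if outcome >= safety_threshold:
        # Safe outcome: cautiously grow the safe set to the immediate neighbours.
        safe.update(j for j in (i - 1, i + 1) if 0 <= j < len(grid))
    # Unsafe outcome: the boundary is "hard" here, so the safe set is not expanded further.

best = max(observed, key=observed.get)
print(f"tested {len(observed)} settings; best safe setting x = {grid[best]:.2f} "
      f"with response {observed[best]:.2f}")
```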

 

Crucially, this approach requires establishing clear safety criteria before beginning any experiments. These criteria are determined by specialists in the specific area, such as clinicians, who provide the thresholds and constraints that the algorithm must respect. And while the algorithm can make suggestions about potentially unsafe actions, the final decision always rests with human experts who can evaluate whether it is justifiable and ethical to proceed with uncertain explorations.
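
In code, that division of labour might look something like the small sketch below (the function names and the approval mechanism are assumptions, not a real clinical interface): the algorithm may propose a setting whose safety is uncertain, but nothing outside the verified safe range is applied without an explicit expert sign-off.

```python
# Illustrative sketch only: an uncertain suggestion from the algorithm is never applied
# without an explicit decision by a human expert. All names and thresholds are assumptions.
def apply_setting(parameter, is_verified_safe, expert_approves, stimulate):
    """Apply a proposed stimulation parameter only if it is already verified safe,
    or if a human expert explicitly approves the uncertain proposal."""
    if is_verified_safe(parameter) or expert_approves(parameter):
        stimulate(parameter)
        return True
    return False  # rejected: the system takes no action on its own

# Example wiring: the expert check could be an interactive prompt to a clinician.
approved = apply_setting(
    parameter=0.7,
    is_verified_safe=lambda p: p <= 0.5,                      # safety limit fixed in advance
    expert_approves=lambda p: input(f"Allow setting {p}? [y/N] ").strip().lower() == "y",
    stimulate=lambda p: print(f"applying stimulation parameter {p}"),
)
print("approved and applied" if approved else "not applied")
```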

 

This methodological approach might also offer valuable insights for managing the broader development of AI technologies. Rather than allowing unbounded exploration, Dr Sui advocates for establishing clear safety boundaries and expanding them gradually based on empirical evidence. He acknowledges, however, the challenge of defining safety criteria, particularly when dealing with technologies whose long-term implications remain uncertain.

 

Dr Yanan Sui, Associate Professor, Tsinghua University

 

Addressing the current global context of increasing international tensions and what some term a “polycrisis”, Dr Sui sees potential in science’s role as a universal language. He argues that science’s pursuit of truth and its reliance on testable criteria provide a common ground that transcends cultural and national boundaries. Nevertheless, he also recognises the challenges posed by rapid information exchange in the digital age, which can amplify differences rather than foster understanding.

 

Dr Sui’s suggested solution for managing global diversity in an interconnected world draws heavily from his neuroscientific background. He notes the difficulty humans face in reconciling contradicting opinions and suggests that technology might help restore a balance between local consistency and global diversity. “It is so hard to say that we guarantee that all the people in the world are thinking exactly the same way. But it is probably acceptable to say ‘Oh, I acknowledge there are differences, but those differences do not directly confront my everyday life in a considerable way’,” he reflects.

 

Looking to the future, Dr Sui advocates for the importance of developing proper reward systems, both in artificial intelligence and human society. Drawing parallels between reinforcement learning in AI and social systems, he suggests that one of the key challenges lies in designing mechanisms that appropriately recognise and reward positive contributions. As AI systems become more powerful and integrated into society, he argues that ensuring the assignment of fair credit could be crucial for maintaining social harmony and encouraging beneficial behaviours.

 

The conversation with Dr Sui spotlights his thinking about a complex landscape where trust in science, technological advancement and social harmony intersect. Combining insights from the specific Chinese context with expertise in cutting-edge technologies, he offers a unique perspective on navigating the challenges of our increasingly interconnected and technologically sophisticated world. As the global scientific community grapples with questions of trust, regulation and social impact, such ideas become ever more essential to our collective search for the right solutions to shared challenges in a world that, for all its interconnection, remains in many ways culturally very “local”.

 

 
