PUBLICATIONS

International Peer-Reviewed Articles & International Conference Papers

Abstract: 
The introduction of advanced new technologies is transforming the space industry. Artificial intelligence offers unprecedented possibilities for space-related activities because it enables space objects to gain autonomy. The increasing autonomy of space objects does not come without legal implications: the lack of human control challenges existing liability frameworks. This paper reviews the provisions of the Outer Space Treaty and the Liability Convention as the main legal instruments establishing the grounds for attributing liability for damage caused by autonomous space objects. Examining the limitations of these legal frameworks with respect to the attribution of liability, this paper identifies the conditions that could give rise to a liability gap. The amendment of the Liability Convention, the concept of “international responsibility” introduced by Article VI of the Outer Space Treaty and several international law principles are analysed as potential solutions for preventing the liability gap and mitigating the risks posed by autonomous space objects.

Abstract:
GNSS offer solutions for many sectors, including road traffic, aviation, emergency-response services, civil engineering and agriculture. Due to the latest technological developments, GNSS, including Galileo, are also being integrated as an essential component of AI systems with various automation levels, such as self-driving vehicles, drones and lane-keeping systems on highways. Despite their numerous benefits, GNSS are not risk-free. Even though it is unlikely that a loss of signal will lead to an accident caused by an AI system, this scenario cannot be entirely ignored. Recent incidents have revealed a series of vulnerabilities that need to be addressed before more AI systems relying on GNSS signals can become active participants in our societies. In this context, the most pressing issue is liability: who will be liable if an accident is caused by an AI system due to an absent or inaccurate GNSS signal at a critical point during navigation? Taking into consideration the debates concerning Galileo’s potential acceptance of liability, this paper investigates whether international space law can prevent potential liability gaps, thus avoiding situations where incidents occur and liability cannot be attributed.

Abstract:
Artificial intelligence (AI) is transforming the space industry by offering unprecedented possibilities for space-related activities. AI applications in space range from analysing large amounts of space data, enabling deep-space missions, mitigating the effects of climate change and combating space pollution, to assisting astronaut crews during their daily operations.
In the context of building future human societies on the Moon, AI systems will play a double role: an ex-ante preparation role and a societal consolidative role. AI systems used in the ex-ante preparation phase include various applications for exploration investigations, and they will be decisive for the scientific understanding of the future lunar habitat. For example, autonomous lunar rovers will be specifically designed for transporting astronauts on the lunar surface and for assisting them during their exploration missions. These rovers will use AI for navigating on the Moon, similarly to the rovers launched in the context of Mars exploration. AI capabilities will also be used for “vehicle system management”, meaning that they will effectively transform the astronauts' spacecraft into a robot capable of performing various tasks.
Because the Moon has an uneven surface, this will have a significant impact on the future locations of lunar habitats. In this respect, several types of AI systems are currently being used for identifying and mapping the locations of lunar craters and for creating lunar crater databases. Once the habitats are established, AI systems will contribute to their consolidation, playing, among others, an assistive role. In the form of social robots, AI systems will be able to offer support in daily activities, including mental health assistance. Similar systems have already been successfully deployed on the International Space Station for crew assistance.
The introduction of AI systems as part of future lunar habitats does not come without corresponding risks, especially from a legal perspective. Several legal challenges may arise in the context of a high reliance on these systems, such as: who will be liable if an AI system is involved in an accident causing economic losses or, even worse, loss of human life? What type of legal framework will be required to mitigate such risks? Will the existing body of international space law remain sufficient to address these challenges?
Therefore, the purpose of this paper is to critically analyse the above-mentioned legal risks and to propose corresponding mitigating measures for ensuring long-lasting future lunar societies.