Concurrency refers to the ability of multiple (possibly interacting) activities to overlap in time, so that one starts before another finishes. Digital systems are a prime example of this ability, exhibiting concurrent aspects at every scale: from computing clouds to modern CPUs. Concurrency Theory is the science of concurrency. This research area started as part of Theoretical Computer Science but extends far beyond it: concurrent features appear in many different fields, such as Biology, Economics, Medicine, and the Social Sciences. Building upon our group’s expertise in applied logic, programming languages, and software engineering, we continuously apply and advance the state of the art in concurrency. Above all, we focus on programming languages and tools to design, develop, and analyse complex concurrent systems.
The goal of Cybersecurity is to create secure computing systems and networks and to protect them against adversaries who try to compromise, damage, or steal components (e.g., software or hardware), data, and secret or personal information, or to disrupt or compromise provided services. Cybersecurity, therefore, affects all aspects of interconnected computing systems, e.g., their architecture, protocols, hard- and software components, as well as users and administrators. In this research field, we particularly focus on cryptography and the secure development of distributed applications. Cryptography protects, e.g., the integrity, authenticity, and privacy of digital communication. It relies on the theoretical assumption that certain mathematical or computational problems are hard for an attacker to solve and on the practical requirement that the implementations of cryptographic primitives are secure. In particular, we are looking into Post-Quantum Cryptography, i.e., alternative mathematical constructions that provide protection against upcoming quantum computers, as well as into the efficient and secure implementation of cryptographic schemes. For the construction of secure software, we advocate and research best practices to develop software that is secure by construction (e.g., choreographic programming, formal methods) and modern software engineering approaches such as Dev(Sec)Ops and Site Reliability Engineering, where security is embedded in every step of the software-development process.
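As a small illustration of how cryptography protects integrity and authenticity, the following Python sketch uses an HMAC (a keyed hash): a receiver who shares the key can detect tampering with the message. The key and message here are, of course, purely illustrative.

```python
import hashlib
import hmac

key = b"shared-secret"           # illustrative key, shared by both parties
message = b"transfer 100 EUR"

# Sender attaches a keyed hash (HMAC) protecting integrity and authenticity.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag; a mismatch reveals tampering or a wrong key.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(tag, expected)
```

Note the constant-time comparison (`compare_digest`), a small example of the "secure implementation" concerns mentioned above: a naive string comparison could leak information through timing.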
Knowledge representation is the subfield of AI that studies reasoning. In our group, we work in different subareas of this topic. We investigate techniques for online reasoning with incomplete information that arrives over time. We apply logic and logical techniques to specify and verify properties of programming models: by developing logics to write and prove interesting properties, by designing type systems that automatically ensure desirable properties of our systems, and by studying correspondences between logic proofs and programs that allow us to view the latter in insightful new ways. Finally, we also use logic and state-of-the-art solvers to optimize real-world problems such as scheduling, packing, timetabling, configuration, and logistics problems.
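As a toy illustration of solving such problems declaratively, the sketch below enumerates schedules for three tasks under two constraints. The tasks and constraints are purely illustrative, and exhaustive search stands in for the state-of-the-art solvers used in real applications, where only the constraint specification would be written by hand.

```python
from itertools import permutations

# Illustrative toy problem: assign tasks A, B, C to time slots 1-3.
tasks = ["A", "B", "C"]
slots = [1, 2, 3]

def satisfies(assign):
    # Declarative constraints: A must run before B; C may not use slot 2.
    return assign["A"] < assign["B"] and assign["C"] != 2

# A real solver would search this space cleverly; we enumerate it.
solutions = [dict(zip(tasks, perm))
             for perm in permutations(slots)
             if satisfies(dict(zip(tasks, perm)))]
```

The point of the solver-based approach is that only `satisfies` (the logical specification) changes from problem to problem; the search machinery is generic.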
Programming languages play a central role in solving computing problems, for they empower developers with building blocks for solving problems or classes of problems. Research on programming languages aims not just to find solutions to important problems, but to find the best expression of those solutions, typically in the form of a language, language extension, library, program analysis, or transformation. The aim is for simple, understandable solutions that are also general: by acting at the level of a language (or its toolchain), they apply to many sorts of programs and problems. Therefore, a rigorous approach to software behaviour is critical to prove that (classes of) programs enjoy desired properties and/or avoid undesired ones. Building upon our group’s expertise, we develop programming-language solutions to the challenges raised by the growing complexity of concurrent systems, especially in the wake of the digital transformation.
Software Engineering is the establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines. Software engineering is needed due to the high rate of change in user requirements and in the environment in which the software operates. Based on the application to be developed and the available resources and constraints, Software Engineering provides best practices and methodologies to ease and guide the task of developing and maintaining software, from the initial requirement specification through its entire evolution.
Choreographic Programming is an emerging paradigm for correct-by-construction concurrent programming based on message passing. In this paradigm, communications are specified using an Alice-to-Bob notation, rather than as two separate send and receive actions. As a consequence, protocols are guaranteed not to deadlock because of communication mismatches. Models based on choreographic programming have been successfully developed for different settings where concurrent programming is challenging, including service-oriented computing and cyber-physical systems.
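The Alice-to-Bob idea can be sketched in a few lines of Python: a protocol is written once as a global description, and each participant's local send/receive program is derived from it by projection, so the two endpoints cannot disagree. The `project` function and the protocol below are illustrative, not an actual choreographic language.

```python
# Global view of a protocol in Alice-to-Bob notation, written once:
#   Alice.request  -> Bob
#   Bob.response   -> Alice
protocol = [("Alice", "Bob", "request"),
            ("Bob", "Alice", "response")]

def project(chor, role):
    """Endpoint projection: derive one role's local send/receive
    actions from the global description."""
    local = []
    for sender, receiver, msg in chor:
        if sender == role:
            local.append(("send", receiver, msg))
        elif receiver == role:
            local.append(("recv", sender, msg))
    return local
```

Because both endpoints are projected from the same global description, every send has a matching receive by construction, which is the source of the deadlock-freedom guarantee mentioned above.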
In our group, we are extending choreographic programming by developing more expressive and powerful languages that can be used in real-world applications. The main challenge is to increase expressiveness while maintaining the good theoretical properties that made choreographic languages successful in the first place.
The increasing dependency on computers to perform everyday tasks is associated with the need to store larger and larger quantities of information, which must be efficiently processed and retrieved. One of the challenges involved in managing this data is ensuring its internal consistency, especially as the information being stored changes due to the evolution of the real-world data it models.
Modern-day database management systems address this problem by allowing the user to define integrity constraints – properties of the data that should hold at all times – and algorithms to repair the database when these properties no longer hold. Extending these techniques to more powerful reasoning tools, such as those used in the Semantic Web, is, however, an ongoing and complex process, which we address in our research.
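A minimal sketch of the idea in Python, with purely illustrative data: a referential integrity constraint is checked over a toy database, and a simple repair deletes the violating rows. Real systems support far richer constraints and repair strategies.

```python
# Illustrative data: the constraint requires that every employee's
# department actually exists (referential integrity).
departments = {"HR", "IT"}
employees = [("ana", "HR"), ("ben", "IT"), ("eva", "Sales")]

def violations(emps, depts):
    # Rows that break the integrity constraint.
    return [e for e in emps if e[1] not in depts]

def repair_by_deletion(emps, depts):
    # One simple repair strategy: drop the violating rows.
    return [e for e in emps if e[1] in depts]
```

Choosing among possible repairs (delete the employee row, or instead add the missing department?) is exactly where the problem becomes subtle, especially for the richer logics of the Semantic Web.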
DevOps is a collection of software engineering practices that links software development (Dev) with software operations (Ops). DevOps strongly advocates automation and monitoring at all steps of software construction, from integration, testing, and releasing to deployment and infrastructure management. By using the DevOps methodology it is possible to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality.
Computing systems are not always as visible as notebooks, PCs, servers, and smartphones: often, they are hidden within other devices, e.g., cars, trains, industrial appliances, and even refrigerators. Since these computing systems are embedded into other devices, they are called "embedded systems". Typically, embedded systems have only a limited amount of resources, e.g., small CPUs, little memory, and limited communication capabilities. Therefore, the integration of additional functionality comes at a cost. This can be a severe disadvantage for features that are not mandatory for the function of the device, such as cybersecurity. To prevent security features from being stripped for cost's sake, the cryptographic functions that enable cybersecurity must be implemented in an efficient manner. We conduct research on the efficient and secure implementation of cryptographic schemes for embedded systems based on, e.g., ARM Cortex-M4 and RISC-V CPUs.
The microservices architectural style has been recently proposed to cope with the problems of scaling and evolving software in a distributed setting. A microservice is a cohesive, independent process interacting via messages. Following the microservices architectural style, a network of interacting microservices corresponds to the interconnected modules of a distributed software system. Microservices are usually contrasted with traditional single executable artefacts, also called "monoliths". In monoliths, modules communicate via locally-available resources (memory, databases, files), making them tightly coupled and ultimately hindering both software scalability and evolvability. The principle of microservices assists project managers and developers: it provides a guideline for the design and implementation of distributed applications. Following this principle, developers focus on the implementation and testing of a few cohesive functionalities. This holds also for higher-level microservices, which are concerned with coordinating the functionalities of other microservices.
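The style can be sketched in a few lines of Python, with message passing modelled as plain function calls; the services, operations, and data below are purely illustrative.

```python
# Toy sketch: each "microservice" owns its own state and is reachable
# only via messages (modelled here as dictionaries and function calls).
def inventory_service(msg):
    stock = {"apple": 3}  # privately owned data, never shared directly
    if msg["op"] == "check":
        return {"available": stock.get(msg["item"], 0) > 0}

def order_service(msg, send):
    # A higher-level microservice: it coordinates another service
    # purely by exchanging messages, never by sharing memory.
    reply = send("inventory", {"op": "check", "item": msg["item"]})
    return {"accepted": reply["available"]}

# A trivial "network" that routes a message to the named service.
services = {"inventory": inventory_service}
def send(name, msg):
    return services[name](msg)
```

Replacing the function-call transport with HTTP or a message broker would not change either service's logic, which is precisely the loose coupling that distinguishes microservices from monolithic modules.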
Besides promising huge advances in drug design, traffic simulation, etc., quantum computing poses a severe threat to cybersecurity: a large and powerful quantum computer may soon be able to break currently used cryptography, rendering our modern interconnected world insecure. To counteract this threat, researchers in the field of Post-Quantum Cryptography (PQC) are looking into alternative cryptographic schemes that provide protection even against adversaries in possession of a large quantum computer. Current efforts in the research community focus on PQC schemes submitted to a standardization process run by the National Institute of Standards and Technology (NIST). We are co-authors of the submissions SPHINCS+ and Classic McEliece, which currently have good chances of being standardized. Besides this contribution, we are conducting research on the efficient and secure implementation of PQC schemes on resource-restricted embedded platforms.
Quantification of properties in complex systems is essential to improve the understanding of situations, to identify problems, and to support decision making. Quantitative models offer solid methodologies to conceptualise systems and to access this critical information. Nowadays, they are valuable assets to overcome the challenges raised by the growing complexity of systems, especially in the wake of the digital transformation. Building upon our group's expertise in quantitative methods, complex concurrent systems, and programming languages, we develop new tools and techniques to effectively quantify properties of choreographic programs. Above all, we focus on availability, efficiency, performance, reliability, robustness, safety, security, survivability, and timeliness.
Modern-day computer systems have to react in real time to information that they receive from, e.g., sensors, continuously producing results in an online fashion. This reasoning problem is known as continuous query answering. We investigate how to compute answers to continuous queries in an online fashion, by continuously updating the system's knowledge with incoming information, and how to quantify how likely it is that a particular answer will be produced. We also study how to integrate practical problems, such as data arriving out of order or being lost in transmission, into our theoretical framework, in order to develop usable tools.
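A minimal sketch of online query answering in Python: the query "how many readings in the last `window` ticks exceed `threshold`" is answered incrementally, updating the count as each reading arrives instead of recomputing from scratch. The class and its parameters are illustrative.

```python
from collections import deque

class ContinuousCount:
    """Incrementally answers: how many readings in the last
    `window` ticks exceed `threshold`?"""

    def __init__(self, window, threshold):
        self.window = window
        self.threshold = threshold
        self.events = deque()  # (timestamp, value) pairs still in the window
        self.count = 0

    def update(self, ts, value):
        # Incorporate the new reading.
        if value > self.threshold:
            self.count += 1
        self.events.append((ts, value))
        # Expire readings that have fallen out of the window.
        while self.events and self.events[0][0] <= ts - self.window:
            _, old_value = self.events.popleft()
            if old_value > self.threshold:
                self.count -= 1
        return self.count
```

The key property is that each `update` does work proportional to the readings that expire, not to the whole history; handling out-of-order or lost data would complicate exactly this expiry logic.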
Since researchers in 1976 proved a mathematical result (the four-color theorem) using a computer program, there has been increasing interest in using computers to do abstract mathematics, rather than simply viewing them as extremely powerful calculators. Although the classical results of Gödel and Turing tell us that there are limits to how much can be automated, computers have achieved notable success at proving safety and correctness properties of sensitive software, either on their own or interactively.
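To give the flavour of interactive theorem proving, here is a tiny machine-checked statement in Lean 4 syntax; it is an illustrative example, not tied to any specific project of ours.

```lean
-- Commutativity of addition on natural numbers.
-- Lean checks that the proof term really establishes the statement.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The same mechanism scales from textbook facts like this one to the safety and correctness properties of software mentioned above.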
Building upon our group's expertise in theorem proving, we are currently applying it to choreographic programming, evolving our mathematical constructions into full-fledged certified compilers that are guaranteed not to introduce errors in the code they generate.
Cloud computing is a paradigm that enables ubiquitous access to pools of computing resources that can be rapidly provisioned with minimal management effort. Since the launch of Amazon EC2 in 2006, cloud computing has revolutionized software development, allowing developers to partially abstract from the computing medium and to develop applications that can be deployed in a matter of minutes and can easily scale with the request load. More than 50% of organizations are using the cloud to message, store data, and compute.
The promise of Digital Transformation is to accelerate and improve business activities, processes, and models through digital automation. This process started decades ago, but it has recently become a crucial asset for companies in highly competitive markets. At the essence of Digital Transformation are two factors: integration and flexibility. Companies already own many digital systems, used to automate specific tasks. However, these systems are frequently silos: they are not meant to interact with other systems, and the data they manage is difficult to extract and share. Integration of these legacy systems (whether within the same company or belonging to different ones), as well as with recent applications, devices, and the Cloud, must go hand-in-hand with a high degree of flexibility to change how the integrated systems interact with each other. Together, these two factors lead to increased productivity, as well as better reactivity and system consistency.
In a world where we rely more and more on computers to perform tasks of ever-increasing responsibility, it is essential to be able to guarantee that these computers are doing their work correctly. It is unacceptable that a plane crashes because of a software fault, or that a patient in a hospital dies because a computer assigned them the wrong medication.
One of the goals of our group's research is to increase the reliability of computations, either by designing tools and techniques to generate code that is mathematically guaranteed to satisfy specific safety properties (correctness-by-construction) or by exploring methods to verify interesting properties of existing computer systems (hardware/software verification).