Agenda

December 2025

  • 14:00 – 17:00

    Jury members:

    - Robert ROSS, Deputy Division Director/Senior Computer Scientist, Argonne National Laboratory (United States) - Rapporteur
    - Jesus CARRETERO, Full Professor, Universidad Carlos III de Madrid (Spain) - Rapporteur
    - Gabriel ANTONIU, Director of Research, Inria - Rapporteur
    - Thomas HERAULT, Director of Research, Inria - Examiner
    - Vania MARANGOZOVA, University Professor, Université Grenoble Alpes - Examiner
    - Julien KUNKEL, Professor, University of Göttingen (Germany) - Examiner
    - Brice GOGLIN, Director of Research, Inria - Thesis Director

    Salle Ada Lovelace (Inria)
  • 15:30 – 18:30

    The thesis “Locomotion, step planning, and fall resistance in humanoid robots via reinforcement learning policies” studies how to equip humanoid robots with reliable walking and rapid recovery after a fall, without resorting to heuristics or preprogrammed trajectories. It addresses a central challenge in robotic autonomy: enabling real robots to act robustly in uncertain, contact-rich environments while respecting the constraints of embedded computing. Purely model-based approaches show their limits in terms of adaptation, while deep reinforcement learning (DRL) promises generalizable behaviors learned from data. The problem is therefore: how can we design DRL policies that are computationally light, transferable from simulation to real robots, and integrable into standard locomotion stacks?

    Methodologically, the thesis establishes the foundations of reinforcement learning applied to robotics, then proposes two main contributions, both trained in simulation with domain randomization and validated on small humanoids (a toy sketch of domain randomization follows this entry). FootstepNet is an efficient actor-critic step planner capable of producing continuous step placements while anticipating the number of steps needed to reach multiple local goals; it eliminates the dependence on discrete step sets and heuristics, runs with embedded inference, and matches or exceeds the quality of ARA* planning at a much lower computational cost, with validation in simulation and on a real robot at RoboCup 2023.

    FRASA, meanwhile, is a unified fall-recovery and stand-up agent: a single policy maps proprioceptive observations to motor commands that establish stabilizing contacts before standing up. By exploiting the CrossQ algorithm and the robot's symmetry, FRASA reduces training to approximately 30 minutes and transfers zero-shot to the real robot, surpassing a baseline based on preprogrammed trajectories and handling a wide variety of initial postures.

    In conclusion, this work shows that lightweight, modular, and safe DRL policies can be made practical for the onboard control of humanoids, significantly reducing downtime after a disturbance and paving the way for more general and robust autonomy in real-world conditions.

    Amphi 2 Bât 2A (IUT de Bx, Gradignan)
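
    Aside: the abstract credits sim-to-real transfer to training with domain randomization. The toy Python sketch below (all names and parameter ranges are invented for illustration, not taken from the thesis) shows the core idea: re-sampling the physics at every episode so the policy cannot overfit to a single simulator configuration.

        # Hypothetical illustration only: names and ranges are not from the thesis.
        import random

        PARAM_RANGES = {
            "mass": (0.8, 1.2),       # kg, scaled around the nominal value
            "friction": (0.6, 1.4),   # unitless multiplier on viscous friction
            "motor_delay": (0, 3),    # control steps of actuation latency
        }

        class ToyRobotSim:
            """A 1-D point mass standing in for the humanoid simulator."""

            def reset(self):
                # Re-sample the physics at every episode so the learned policy
                # cannot overfit to one parameter setting.
                self.mass = random.uniform(*PARAM_RANGES["mass"])
                self.friction = random.uniform(*PARAM_RANGES["friction"])
                self.delay = random.randint(*PARAM_RANGES["motor_delay"])
                self.queue = [0.0] * self.delay  # buffer of delayed actions
                self.x, self.v = 0.0, 0.0
                return (self.x, self.v)

            def step(self, action, dt=0.01):
                if self.delay:                   # apply the action late
                    self.queue.append(action)
                    action = self.queue.pop(0)
                accel = (action - self.friction * self.v) / self.mass
                self.v += accel * dt
                self.x += self.v * dt
                return (self.x, self.v)

        sim = ToyRobotSim()
        for episode in range(3):                 # each episode sees a new "robot"
            sim.reset()
            print(f"episode {episode}: mass={sim.mass:.2f}, delay={sim.delay}")
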
  • 14:00 – 17:00

    The adoption of electronic health records has considerably enhanced access to large volumes of clinical data. While this accessibility is invaluable for both healthcare delivery and research, it also introduces new challenges arising from the complexity of medical data: its implicitness (i.e., the need for domain expertise to interpret it), its imperfections (inconsistency, uncertainty, and incompleteness), and its inherently temporal nature. This thesis investigates how logic-based approaches can address these challenges.
    First, I investigated an ontology-driven approach to illustrate how ontologies can be used to evaluate medical data quality, with a focus on lung cancer phenotyping. This involved designing an ontology to capture essential domain knowledge and applying it to query the Clinical Data Warehouse of Bordeaux University Hospital. The work highlighted the benefits of ontologies for representing domain knowledge and identifying inconsistencies, as well as their limitations, particularly in handling temporally inconsistent healthcare data.
    Building on this experience, I then proposed a novel logic-based framework for inferring high-level events from temporal clinical data, in a way that better aligns with clinical reasoning and decision-making. The framework defines logical rules specifying the existence conditions of an event at a given time-point, along with optional termination conditions that signal its possible end, and introduces two aggregation methods to construct event intervals from these conditions. Furthermore, the formalism supports the definition of meta-events, obtained by combining or generalizing other events, and integrates confidence levels and a repair mechanism to handle imperfections in event detection. To validate the framework, I implemented its core components using Answer Set Programming, a declarative logic programming paradigm, and evaluated the resulting system, CASPER, on two medical use cases. The evaluation showed both computational feasibility and alignment with expert medical opinions. (A toy illustration of the interval construction follows this entry.)

    Amphi LaBRI
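
    Aside: the framework itself is implemented in Answer Set Programming, but the interval-construction idea (existence conditions at time-points, optional termination conditions, aggregation into intervals) can be illustrated with a plain-Python analogue. Everything below, including the glucose example, is invented for illustration and is not CASPER's actual encoding.

        def build_intervals(timepoints, exists, terminates):
            """Aggregate time-points into maximal event intervals.

            exists(t)     -- the event's existence condition holds at t
            terminates(t) -- a termination condition fires at t
            """
            intervals, start = [], None
            for t in timepoints:
                if exists(t) and start is None:
                    start = t                     # open a new interval
                if start is not None and (terminates(t) or not exists(t)):
                    intervals.append((start, t))  # close the current interval
                    start = None
            if start is not None:                 # event still open at the end
                intervals.append((start, timepoints[-1]))
            return intervals

        # Invented example: "hyperglycemia" holds while glucose > 1.26 g/L and
        # is terminated by an insulin administration at t = 3.
        glucose = {0: 1.0, 1: 1.4, 2: 1.5, 3: 1.3, 4: 0.9, 5: 1.6}
        insulin = {3}
        print(build_intervals(sorted(glucose),
                              exists=lambda t: glucose[t] > 1.26,
                              terminates=lambda t: t in insulin))
        # -> [(1, 3), (5, 5)]
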
  • 15:30 – 16:30

    Locomotion, step planning, and fall resistance in humanoid robots via reinforcement learning policies

    English
    Amphi 2 Bat A IUT Gradignan
  • 10:00 – 13:00

    High-performance computing refers to the use of supercomputers to solve complex problems requiring exceptional computing power, particularly in numerical simulations such as weather forecasting or fluid dynamics. These systems, organized into computing clusters, combine system administration, networking, hardware architecture, and software optimization. Supercomputers are composed of multiple computing nodes, each equipped with multi-core processors or even graphics cards, connected by a network that allows data exchange. Rather than executing a task on a single machine, problems are divided and parallelized, allowing simultaneous execution on multiple resources. There are two forms of parallelism: inter-node, where communication between nodes is critical but resources are vast, and intra-node, where processors share memory, facilitating communication but with more limited resources.

    In this context, streaming applications, particularly software-defined radio, take advantage of intra-node parallelism. Stream computing differs from batch processing: data is processed as it arrives, without accumulating input data. Processing filters are organized in a pipeline, each stage being executed by a different computing resource. This mechanism significantly increases throughput, which is essential for applications such as video or radio broadcasting. This thesis aims to optimize the automatic allocation of resources for streaming applications on multicore architectures, first homogeneous and then heterogeneous.

    The first part of this work focuses on task-chain scheduling on homogeneous multicore architectures. The problem is modeled as a pipelined workflow scheduling problem, and the objective is to maximize throughput by exploiting pipeline parallelism and task replication. Two algorithms are proposed: a dynamic programming approach that obtains an optimal solution, and OTAC, an optimal greedy algorithm that guarantees maximal throughput while minimizing resource usage. Experiments show that OTAC quickly produces optimal partitions with reduced resource usage (a toy dynamic program for the underlying chain-partitioning problem follows this entry).

    The emergence of hybrid processors composed of high-performance cores (P-cores) and energy-efficient cores (E-cores) introduces new challenges: execution times vary depending on the assignment. The objective becomes twofold: maximizing throughput while minimizing energy consumption, favoring the use of efficient cores. The second part therefore focuses on resource allocation for task chains on heterogeneous architectures. Three strategies are developed: two greedy heuristics (FERTAC and 2CATAC) and an optimal solution using dynamic programming (HeRAD). The results indicate that the heuristics achieve near-optimal performance while consuming very few additional resources.

    The last part of this work focuses on the management of multiple simultaneous streaming channels. In certain contexts, such as embedded systems, the IoT, or the cloud, multiple applications coexist on the same resources. The goal is to distribute resources intelligently among multiple pipelines, satisfying throughput constraints without wasting resources; allocation strategies adapted to multiple task chains or task graphs are explored. Thus, this thesis offers several contributions to the optimization of streaming systems on parallel architectures, covering optimal scheduling, adaptation to heterogeneous architectures, and the coexistence of multiple simultaneous streams, with a constant focus on performance and energy efficiency.

    Amphi LaBRI
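
    Aside: the underlying single-chain problem (split a chain of task costs into contiguous stages so that the bottleneck stage, which limits pipeline throughput, is as cheap as possible) admits a classic dynamic program. The Python sketch below illustrates that baseline only; it omits task replication and is not the thesis's OTAC or HeRAD implementation.

        from functools import lru_cache

        def best_bottleneck(costs, p):
            """Minimal achievable bottleneck cost (throughput = 1 / bottleneck)."""
            n = len(costs)
            prefix = [0]
            for c in costs:
                prefix.append(prefix[-1] + c)

            @lru_cache(maxsize=None)
            def solve(i, k):
                # Minimal bottleneck for tasks i..n-1 using at most k stages.
                if i == n:
                    return 0
                if k == 1:
                    return prefix[n] - prefix[i]  # one stage takes all that is left
                best = float("inf")
                for j in range(i + 1, n + 1):     # first stage = tasks i..j-1
                    stage = prefix[j] - prefix[i]
                    best = min(best, max(stage, solve(j, k - 1)))
                    if stage >= best:             # longer first stages only worsen
                        break
                return best

            return solve(0, p)

        print(best_bottleneck([4, 2, 7, 1, 5, 3], p=3))  # -> 8, e.g. [4,2] | [7,1] | [5,3]
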
  • 14:00 – 17:00

    In recent years, data production in biology has experienced unprecedented growth, driven by the development of high-throughput sequencing techniques, whose scope of application continues to expand. New sequencing technologies specifically targeting a single cell ("single-cell") are one example. In oncology, these new data are crucial for improving our understanding of tumor development and heterogeneity by identifying the different cell types (or states) that make up a tumor. At the patient level, an intelligible mapping of this heterogeneity paves the way for new personalized-medicine therapies. Characterizing this cellular heterogeneity requires automatic or manual methods to annotate an individual cell (or a group of similar cells) based on its gene expression. In this context, the aim of this thesis project is to develop new methods for annotating single-cell data at the intersection of several disciplines, such as bioinformatics and computer science (knowledge representation and visualization, in particular).

    Amphi LaBRI
  • 14:30 – 18:00

    Determining the conditions for culturing microorganisms is a difficult and recurring problem in microbiology, as it requires identifying nutritional requirements and characterising metabolism. Genome-scale metabolic networks (GSMNs) obtained from genomic data enable simulations of metabolic potential from a predefined environment. We developed methods that solve the inverse problem: predicting nutrient sources, or seeds, from a GSMN and a metabolic objective. The methods rely on hybrid models, combining a discrete, iterative Boolean approximation of metabolic activity with numerical flux balance analysis (FBA). Applied to GSMNs at the scale of an individual population, the logic modelling method is a good approximation of flux balance constraints. The problem was then extended to communities of microorganisms. At this scale, it is necessary to consider possible transfers between networks, which increases the combinatorial complexity of the problem. We hypothesise that it is relevant to identify, as a priority, minimal sets of nutrients that ensure the functionality of the metabolic network, which led us to prioritise optimisations over sets, first minimising seeds, then transfers. Three algorithms were developed, two of which guarantee subset minimality. Applying the methods to small communities of networks reveals the combinatorial complexity of the problem, but also the complementarity of the algorithms (a toy sketch of the Boolean approximation follows this entry).

    Amphi LaBRI
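
    Aside: the discrete Boolean approximation mentioned above can be illustrated by the classic network-expansion fixed point: a reaction fires once all its reactants are available, and producible metabolites are accumulated until nothing changes. The Python sketch below uses an invented three-reaction toy network; the actual methods couple such logic programs (in ASP) with flux balance analysis.

        REACTIONS = {                              # invented toy network
            "r1": ({"glc", "pi"}, {"g6p"}),        # (reactants, products)
            "r2": ({"g6p"}, {"pyr", "atp"}),
            "r3": ({"pyr", "o2"}, {"atp", "co2"}),
        }

        def producible(seeds):
            """Metabolites reachable from a seed set under the Boolean semantics."""
            scope = set(seeds)
            changed = True
            while changed:
                changed = False
                for reactants, products in REACTIONS.values():
                    if reactants <= scope and not products <= scope:
                        scope |= products          # the reaction fires
                        changed = True
            return scope

        # Do these nutrient sources support the objective (ATP production)?
        print("atp" in producible({"glc", "pi"}))  # True  (via r1 then r2)
        print("co2" in producible({"glc", "pi"}))  # False (o2 is missing)
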
  • 10:00 – 14:00

    This thesis comes at a time of unprecedented growth in humanoid robotics, driven by rapid technological advances and growing enthusiasm among private actors and the general public. In this context, recent progress in humanoid robot motor skills and their growing social acceptance seem to herald their imminent integration into real-world environments alongside humans. Faced with this ambition, a major challenge remains: ensuring the robustness and autonomy of locomotion in the dynamic contexts in which these robots will operate. To address it, this work combines physical modeling and reinforcement learning, exploiting the complementary advantages of the two paradigms: the stability guarantees offered by modeling and the adaptability provided by learning.

    The manuscript begins with a state-of-the-art review of control and learning methods applied to humanoid locomotion, aimed at identifying the most promising approaches for reconciling robustness, adaptability, and dynamic realism. On this foundation, the PlaCo software, dedicated to motion planning and robot control, is developed. It abstracts the complexity of the optimization formulations needed for trajectory generation while maintaining performance compatible with real-time execution. This framework is then used to design and deploy a walking controller based on the linear inverted pendulum model (LIPM) on the Sigmaban humanoid robot (a minimal numerical sketch of the LIPM follows this entry). This development highlights the model's ability to produce consistent trajectories in real time, while revealing the practical limitations encountered on a real platform.

    To overcome these limitations and enable dynamic adaptation to disturbances, a reinforcement learning agent dedicated to fall recovery is developed. Trained in simulation, this agent is successfully transferred to the real robot, demonstrating a significant gain in autonomy. The difficulty of this transfer nevertheless highlights the central issue of the gap between simulated and real environments. This observation leads to ways of minimizing the discrepancy by improving the accuracy of the simulation: an in-depth study of friction phenomena in servo-actuators shows how modeling these phenomena in more detail improves the quality of the simulation and of policy transfers.

    Salle 178 (LaBRI)
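
    Aside: the LIPM that the walking controller builds on reduces the horizontal CoM dynamics to x'' = (g / h) (x − p), with h the constant CoM height and p the support point. The Python sketch below integrates this equation numerically; the height, timings, and the capture-point remark are textbook material, not the thesis's controller.

        import math

        G, H = 9.81, 0.32          # gravity; CoM height of a small humanoid (assumed)
        OMEGA = math.sqrt(G / H)   # natural frequency of the pendulum

        def simulate(x, v, p, t, dt=0.002):
            """Integrate the LIPM forward for t seconds with support point p."""
            for _ in range(int(t / dt)):
                a = OMEGA ** 2 * (x - p)   # the defining LIPM equation
                v += a * dt
                x += v * dt
            return x, v

        # A CoM starting left of the support point falls away exponentially:
        x, v = simulate(x=-0.02, v=0.0, p=0.0, t=0.3)
        print(f"after 0.3 s: x = {x:+.3f} m, v = {v:+.3f} m/s")

        # Capture-point intuition: stepping to x + v / OMEGA brings the
        # pendulum asymptotically to rest over the new support point.
        print("suggested foothold:", x + v / OMEGA)
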
  • 11:00 – 12:00

    Hussein Kazemi (LaBRI)

    Title: Trajectory visibility at first sight

    Abstract:

    Let P be a simple polygon with n vertices, and let two moving entities q(t) and r(t) travel at constant (possibly distinct) speeds v_q and v_r along line-segment trajectories τ_q and τ_r inside P. We study the exact first-visibility time t* = min { t ≥ 0 : q(t)r(t) ⊆ P }, the earliest moment at which the segment joining q(t) and r(t) lies entirely within P.

    Prior work by Eades et al. focused on the decision version of this question in a simple polygon. They gave a one-shot decision algorithm running in O(n) time. For one stationary and one moving entity, they proposed a structure that, after O(n log n) preprocessing, answers the decision query in O(log n) time using O(n) space. For two moving entities, after O(n log⁵ n) preprocessing time, they construct a data structure with O(n^{3/4} log³ n) query time and O(n log⁵ n) space. Variants for polygonal domains with holes, or where the entities may cross the boundary of P, lie beyond our scope.

    In this work, we go beyond the decision problem and compute t* exactly under three models for a simple polygon P. When both trajectories are known in advance, we preprocess P in O(n) time and space and thereafter answer each query in O(log n) time. If one trajectory τ_r is fixed while τ_q is given as a query, we build a structure in O(n log n) time and space that computes t* in O(log² n) time per query. When the trajectories are not known in advance, we develop, for any fixed ε > 0, a randomized structure with O(n^{1+ε}) expected preprocessing time and O(n) space, achieving O(√n polylog(n)) expected query time. (A brute-force numeric baseline for t* follows this entry.)

    https://algodist.labri.fr/index.php/Main/GT

    English
    LaBRI 178
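
    Aside: a brute-force numeric baseline for t* can be written in a few lines by sampling t and testing the segment-in-polygon predicate (here via the shapely library). It is only a sanity-check foil for the exact structures presented in the talk; the polygon and trajectories below are invented.

        from shapely.geometry import Polygon, LineString

        def first_visibility(P, q0, q1, r0, r1, steps=10_000):
            """Earliest sampled t in [0, 1] with segment q(t)r(t) inside P, else None.

            Both entities move at constant speed along q0->q1 and r0->r1, so a
            shared parameter t in [0, 1] encodes both positions.
            """
            for i in range(steps + 1):
                t = i / steps
                q = (q0[0] + t * (q1[0] - q0[0]), q0[1] + t * (q1[1] - q0[1]))
                r = (r0[0] + t * (r1[0] - r0[0]), r0[1] + t * (r1[1] - r0[1]))
                # covers (not contains), so touching the boundary still counts
                if P.covers(LineString([q, r])):
                    return t
            return None

        # A U-shaped polygon: the entities first see each other once both have
        # descended below the notch (y = 2).
        P = Polygon([(0, 0), (10, 0), (10, 6), (6, 6), (6, 2), (4, 2), (4, 6), (0, 6)])
        print(first_visibility(P, q0=(1, 5), q1=(1, 1), r0=(9, 5), r1=(9, 1)))  # -> 0.75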