The world of High Performance Computing (HPC) continues to evolve, offering a vast and varied set of parallel programming models to meet the exascale requirements of computing power and fast processing. The most widely used parallel programming model is the Message Passing Interface (MPI), a standard that has been used in HPC for decades. Its point-to-point send/receive communication protocol, called two-sided MPI, is the most widespread and the one preferred in applications. An alternative to this point-to-point model is one-sided communication, mainly implemented in PGAS (Partitioned Global Address Space) languages and libraries. One-sided communication in MPI, also known as RMA (Remote Memory Access), has been part of the standard since MPI-2; the MPI-3 standard, introduced in September 2012, brought a significant update to it in order to improve performance and introduce new data access modes. However, the performance of one-sided communication is still far from what was expected. Developing a parallel program in shared memory is often more difficult, but it can perform better than exchanging data through send/receive transfers. Processes can communicate implicitly in shared memory, using synchronization mechanisms (locks, sequencers, etc.) to guarantee race-free access to the desired memory region.
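To make the contrast concrete, the minimal sketch below (a hypothetical example, not taken from any particular application) performs the same one-integer exchange twice: first with two-sided send/receive, where both processes take part in the transfer, then with a one-sided MPI_Put, where the target rank only participates in the synchronization. It assumes at least two MPI processes.

```c
#include <mpi.h>
#include <stdio.h>

/* Run with at least 2 processes, e.g. mpirun -np 2 ./a.out */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Two-sided: sender and receiver are both explicitly involved. */
    int value = 0;
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* One-sided (MPI-RMA): rank 0 writes directly into rank 1's
       window; rank 1 only joins the fence synchronization. */
    int buf = 0;
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);          /* open the access/exposure epoch  */
    if (rank == 0) {
        int payload = 42;
        MPI_Put(&payload, 1, MPI_INT, 1 /* target */, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);          /* close the epoch: Put is complete */

    if (rank == 1)
        printf("received %d (two-sided) and %d (one-sided)\n", value, buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```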
In this research work we focus mainly on the MPI-RMA distributed-memory programming model, whose principle is to expose a shared global virtual memory space on top of distributed-memory systems, through which processes can communicate. Although the PGAS model has existed for a very long time and promises more asynchrony, it remains little used by the HPC community for several reasons, including the synchronization modes required to make a program safe. Since the PGAS model brings shared-memory programming to distributed memory, the concurrency issues of shared-memory access apply to it as well. For this reason, PGAS programming can be very difficult: the user is responsible for explicitly managing all memory accesses to keep the program free of conflicting concurrent accesses, as illustrated by the sketch below. It is therefore valuable for application programmers to have tools that ease programming and help them develop correct and efficient codes.
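As an illustration of this synchronization burden, the hedged sketch below (a hypothetical fragment, assuming rank 0 exposes one slot per process) uses MPI's passive-target mode: the programmer must bracket every remote access with an explicit lock/unlock pair, and nothing in the model enforces this, so a forgotten or misplaced lock silently becomes a data race.

```c
#include <mpi.h>

/* Each rank deposits its id into its own slot of rank 0's window. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *slots = NULL;
    MPI_Win win;
    /* Rank 0 exposes size slots; the other ranks expose nothing. */
    MPI_Win_allocate(rank == 0 ? (MPI_Aint)(size * sizeof(int)) : 0,
                     sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD,
                     &slots, &win);

    /* The lock/unlock pair is the user's responsibility; a shared
       lock would suffice here since the slots are disjoint. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0 /* target */, 0, win);
    MPI_Put(&rank, 1, MPI_INT, 0, rank /* displacement */, 1, MPI_INT, win);
    MPI_Win_unlock(0, win);         /* the Put completes here */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```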
Within the framework of this research work, the main objective is to develop a dynamic analysis at runtime and a static analysis at compile time to check the codes of PGAS applications. This mixed analysis exploits the advantages of both approaches: the dynamic approach relies on concrete executions that depend on a single set of inputs, and is thus limited to detecting the errors present in the analyzed execution, while the static approach does not depend on the input set, offers a global view of the code, and considers all possible execution paths. During this thesis we developed RMA-Analyzer, a tool that combines both static and dynamic analysis. It has been designed to assist in the programming of MPI-RMA codes, in particular by offering advanced help to the user in detecting the concurrency errors, known as illegal accesses, that arise in MPI-RMA applications. The aim is to facilitate programming for users by providing dynamic feedback on the existence and origin of these possible errors in MPI-RMA programs.
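A typical illegal access of this kind is a local load or store to window memory while a conflicting remote access may still be in flight within the same epoch. The minimal sketch below (a hypothetical erroneous program written for illustration, assuming at least two processes) shows such a conflict: between the two fences, rank 0 writes remotely into rank 1's window while rank 1 writes the same location locally, a concurrent Put/store pair that the MPI standard forbids.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int win_buf = 0;
    MPI_Win win;
    MPI_Win_create(&win_buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0) {
        int v = 1;
        MPI_Put(&v, 1, MPI_INT, 1, 0, 1, MPI_INT, win);  /* remote write */
    }
    if (rank == 1) {
        win_buf = 2;   /* local write to the same location: illegal access */
    }
    MPI_Win_fence(0, win);  /* too late: the conflict already occurred */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```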