Anticipating application behavior and studying and designing algorithms are among the most important objectives of performance and correctness studies of HPC simulations and applications. Many frameworks have been designed to simulate large distributed computing infrastructures and the applications that run on them. At the node level, tools have also been proposed to simulate task-based parallel applications. However, a critical capability missing from this work is the ability to account for Non-Uniform Memory Access (NUMA) effects, even though almost all High Performance Computing (HPC) platforms today exhibit them. We model different shared-memory architectures by performing our own measurements to obtain their characteristics. In this thesis, we present a new simulator for parallel applications based on dependent tasks, which makes it possible to experiment with several models of data locality. It relies on recording a trace of a sequential execution of the target application through OMPT, the standard OpenMP Tools interface. We also introduce three performance models, two of which are locality-sensitive: a first model that accounts only for task execution times, a lightweight model that uses topology information to weight data transfers, and finally a more complex model that additionally accounts for data residency in the Last Level Cache (LLC, i.e., the L3 cache). We validate our models on dense linear algebra test cases and show that, on average, our simulator quickly and reproducibly predicts execution times with a small relative error, and that it enables the experimentation and study of various scheduling heuristics.
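
As context for the OMPT-based tracing mentioned above, the sketch below shows how a tool can register callbacks with the OpenMP runtime to observe task creation and task dependences, which is the kind of information a sequential-execution trace needs. The callback bodies here are illustrative stubs, not the thesis's actual trace recorder.

```c
#include <stdio.h>
#include <omp-tools.h>

/* Observe a task-creation event; a real tracer would append a trace record
 * instead of printing (stub, not the thesis's implementation). */
static void on_task_create(ompt_data_t *encountering_task_data,
                           const ompt_frame_t *encountering_task_frame,
                           ompt_data_t *new_task_data, int flags,
                           int has_dependences, const void *codeptr_ra) {
  fprintf(stderr, "task created (has_dependences=%d)\n", has_dependences);
}

/* Observe the data dependences declared by a task's depend clauses. */
static void on_dependences(ompt_data_t *task_data,
                           const ompt_dependence_t *deps, int ndeps) {
  fprintf(stderr, "task declares %d dependences\n", ndeps);
}

static int ompt_initialize(ompt_function_lookup_t lookup,
                           int initial_device_num, ompt_data_t *tool_data) {
  ompt_set_callback_t set_callback =
      (ompt_set_callback_t)lookup("ompt_set_callback");
  set_callback(ompt_callback_task_create, (ompt_callback_t)on_task_create);
  set_callback(ompt_callback_dependences, (ompt_callback_t)on_dependences);
  return 1; /* non-zero keeps the tool active */
}

static void ompt_finalize(ompt_data_t *tool_data) {
  /* a real tracer would flush and close its trace file here */
}

/* Entry point the OpenMP runtime looks up when the program starts. */
ompt_start_tool_result_t *ompt_start_tool(unsigned int omp_version,
                                          const char *runtime_version) {
  static ompt_start_tool_result_t result = {&ompt_initialize, &ompt_finalize,
                                            {.value = 0}};
  return &result;
}
```

Such a tool is typically built as a shared library and attached to an unmodified OpenMP program via the standard OMP_TOOL_LIBRARIES environment variable; running the application once sequentially then yields the task graph and dependence information the simulator replays under different machine and locality models.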