PhD students involved
- Meriem Basti
- Yosra Fatnassi
- Amira Mhenni
Objectives
Strengthening failure-prevention policies tends to raise awareness across all sectors, from scientific training to production processes. In Tunisia, many decrees aim to promote and implement guidelines for monitoring production systems in order to improve their safety. In line with this policy, our proposal focuses on improving system monitoring by training young researchers through the exploration of new tools and methods in the field of dependability for production systems.
The main objectives of this project are:
- to develop state-estimation and fault-diagnosis methods for poorly known systems, meaning systems with poorly defined models and uncertain measurements;
- to build a unified set of concepts in order to establish a coherent scientific framework related to the integrated design of dependable systems;
- to optimize existing methods and develop new approaches that meet the growing dependability requirements of increasingly complex systems.
Summary
Dependability is, by nature, interdisciplinary and covers a very broad spectrum, both in terms of the methods used and the application domains concerned. By characterizing the ability to provide a specified service, dependability is formally defined as the “quality of the service delivered by the system, such that users can place justified trust in it.” A dependable system prevents or eliminates danger and keeps the process in a failure-free operating state where the level of confidence remains maximal.
This project is structured around four main axes, whose actions are presented below.
Action 1: Diagnosis of nonlinear systems described using multimodels
Diagnosis is essential in many application areas, for example for monitoring industrial installations, or in the context of satellite autonomy.
Model-based monitoring and diagnosis methods relying on linear models have reached a certain maturity after about twenty years of development. However, assuming linearity for the process representation model is a strong hypothesis that limits the relevance of the results. A direct extension of methods developed for linear models to arbitrary nonlinear models is difficult. In contrast, interesting results have already been obtained when the modeling approach relies on using a set of simple-structure models, where each model describes the system behavior in a specific “operating region” (defined, for example, by input values or the system state). In this context, the multimodel approach, which builds a global model by interpolating local linear models, has already produced promising results.
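The interpolation mechanism at the heart of the multimodel approach can be sketched in a few lines. Everything below is an invented illustration: the two local models, the Gaussian validity functions, their centers and width are arbitrary choices, not tied to any particular process; the scheduling variable is simply taken to be the input.

```python
import numpy as np

# Illustrative multimodel: two local linear models x+ = A_i x + B_i u,
# blended by normalized Gaussian validity functions mu_i(z) of a
# scheduling variable z (here the input u itself).
A = [np.array([[0.9]]), np.array([[0.5]])]   # local dynamics (invented values)
B = [np.array([[0.1]]), np.array([[0.4]])]
centers, sigma = [0.0, 1.0], 0.4             # operating-region centers and width

def weights(z):
    w = np.exp(-0.5 * ((z - np.array(centers)) / sigma) ** 2)
    return w / w.sum()                        # convex combination: weights sum to 1

def multimodel_step(x, u):
    mu = weights(u)                           # interpolate the local models
    return sum(mu[i] * (A[i] @ x + B[i] * u) for i in range(2))

x = np.array([[1.0]])
for u in (0.0, 0.5, 1.0):                     # sweep across the operating regions
    x = multimodel_step(x, u)
```

Near a region center the corresponding local model dominates; between regions, the global model interpolates smoothly between the two local behaviors.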
Action 2: Operational diagnosis using data analysis without an a priori behavior model
Conventional methods for automating the monitoring of complex systems generally fall into two broad categories:
- approaches based on a behavior model built from system physics or human expertise (internal methods);
- approaches assuming that the available knowledge about the system is limited to past and present observations (external methods). These methods assume no model is available to describe cause-and-effect relationships (the process operating model). Knowledge relies only on measured signals collected from the monitored installation.
For internal methods, diagnostic performance in terms of fault detection and fault localization depends directly on the quality of the model used. To avoid difficulties related to model quality, an alternative is to use external methods based on measured signals from the monitored system. These are well suited to revealing (linear) relationships between system variables without explicitly formulating the model that links them. In addition, it seems easier to incorporate fault detectability and isolability criteria within this class of methods.
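A minimal sketch of such an external method, using PCA on synthetic data (all numbers below are invented for illustration): normal-operation measurements of three variables obey one linear relationship; the squared prediction error (SPE, or Q statistic) then flags a sample that violates it, without any explicit process model.

```python
import numpy as np

# External method sketch: learn linear relations between variables from
# normal-operation data, then flag samples whose reconstruction error
# (SPE / Q statistic) is abnormally large.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))                 # one hidden latent factor
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(200, 3))

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:1].T                                  # retain one principal direction

def spe(x):
    x = x - X.mean(axis=0)
    residual = x - P @ (P.T @ x)              # projection onto residual subspace
    return float(residual @ residual)

normal_sample = np.array([0.5, 1.0, -0.5])    # respects the learned relations
faulty_sample = np.array([0.5, 1.0, 2.0])     # third sensor violates them
```

In practice the detection threshold on the SPE is set statistically from the training data; here the contrast between the two samples is large enough to make the point.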
Action 3: Diagnosis and tolerance of critical faults in an automated system
This action relies on an application architecture that, beyond nominal system functions, implements fault detection, localization, and diagnosis functions, detection of operating-mode changes (especially those related to environmental behavior changes), as well as prognostics, fault or disturbance accommodation, and control or objective reconfiguration. These functions provide the desired responsiveness characteristics. The set of mechanisms intended to ensure dependability is commonly referred to as FDIR (Fault Detection, Isolation and Recovery) or FTC (Fault Tolerant Control).
A fault-tolerant system is characterized by its ability, in the presence of a malfunction, to maintain or recover performance (dynamic or static) close to that achieved under normal operating conditions. Many works aimed at guaranteeing some degree of fault “tolerance” stem from classical robust control techniques (so-called “passive” approaches). More recently, there has been strong interest in “active” approaches, characterized by the presence of a diagnosis module (FDI: Fault Detection and Isolation). Depending on fault severity, a new set of control parameters or a new control structure can be applied after the fault has been detected and localized.
In the literature, few works have considered delays associated with control computation time. After fault occurrence, the faulty system continues to operate under nominal control until the fault-tolerant control is computed and applied. During this period, the fault may cause severe performance loss and affect system stability.
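The interplay between fault occurrence, detection, computation delay, and reconfiguration can be sketched on a scalar example. All parameters below (plant, gains, fault magnitude, threshold, delay) are illustrative assumptions, and the reconfigured gain presumes the FDI module has estimated the effectiveness loss.

```python
# Active FTC sketch: a residual against the nominal model detects a loss
# of actuator effectiveness; the reconfigured gain is applied only after
# a computation delay, during which the nominal control remains in force.
def simulate(fault_step=50, delay=5, steps=200):
    a, b_nom, k, r = 0.8, 1.0, 0.5, 1.0       # plant x+ = a*x + b*u, setpoint r
    x, b, detected_at = 0.0, b_nom, None
    for t in range(steps):
        if t == fault_step:
            b = 0.4 * b_nom                   # fault: actuator loses 60% effectiveness
        k_used = k
        if detected_at is not None and t >= detected_at + delay:
            k_used = k * b_nom / b            # reconfigured gain (effectiveness assumed estimated)
        u = k_used * (r - x)
        residual = (b - b_nom) * u            # = true next state minus nominal-model prediction
        x = a * x + b * u
        if detected_at is None and abs(residual) > 1e-3:
            detected_at = t                   # FDI: fault detected and isolated
    return x, detected_at
```

Between detection and reconfiguration the loop runs degraded under the nominal gain, which is exactly the delay window whose effect on performance and stability the literature has largely left aside.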
Action 4: Study of uncertainties in performance analysis of high-integrity safety systems
At the design stage, significant advances focus on reducing risk when a hazardous situation occurs, through the implementation of active safety systems. This relies on using reliability databases, accounting for influence factors, and propagating uncertainties. A key point is addressing uncertainties related to component reliability data for dependability assessment, in particular using fuzzy set theories, possibility theory, or evidence theory. The main target of these studies has been Safety Instrumented Systems, for which dependability requirements are critical. The dependability performance analysis of high-integrity protection systems can be carried out using Markov models, which provide a sound formalization of the states these systems can take depending on encountered events (failure, test, maintenance, etc.) and studied parameters (failure rate, maintainability, common-cause failure, etc.).
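The simplest instance of such a Markov model is a repairable component with two states. The rates below are invented for illustration; the steady-state distribution of the chain directly yields the long-run availability, which here matches the textbook value μ/(λ+μ).

```python
import numpy as np

# Two-state Markov model of a repairable safety component:
# state 0 = operational, state 1 = failed; failure rate lam, repair rate mu.
lam, mu = 1e-4, 1e-2                          # illustrative rates (per hour)
Q = np.array([[-lam, lam],
              [mu, -mu]])                     # continuous-time generator matrix

# Steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0]                          # long-run fraction of time operational
```

Realistic models of Safety Instrumented Systems add states for detected versus undetected failures, periodic tests, and common-cause events, but are solved with exactly the same machinery.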
The project is divided into seven tasks, carried out sequentially and, for some of them, in parallel.
Task 1: Literature review
Goal: to stay informed about the state of scientific production in the target topic.
- Continuous monitoring of scientific production in system modeling, statistical data processing, measurement validation, and diagnosis.
- Participation in the scientific community’s working groups.
Task 2: Definition of a working method
Goal: to track progress of the work.
- Management of work progress and evaluation procedure.
- Frequency of progress reports.
- Participation in national working groups.
- Participation in national and international conferences.
Task 3: Modeling
Goal: to develop modeling tools for complex processes. The three proposed approaches are intentionally different, with the aim of exploring their complementary aspects.
- Black-box approach using the multimodel concept and uncertainty characterization.
- Black-box approach using neural networks.
- Statistical data-processing approach based on PCA.
Task 4: Generation of monitoring indicators
Goal: to build variables that indicate the presence of events in the data.
- Techniques based on model residuals.
- Techniques based on observers.
- Structuring residuals using fault detection and isolation criteria.
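An observer-based residual generator of the kind listed above can be sketched as follows. The model matrices, observer gain, fault time, and fault magnitude are all invented for the example: the innovation stays near zero while the model and measurements agree, then jumps when an additive sensor fault appears.

```python
import numpy as np

# Observer-based monitoring indicator: a Luenberger observer tracks the
# output of a discrete-time model; the innovation y - C @ x_hat serves
# as the residual and reacts to an additive sensor fault.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.2]])                  # observer gain (illustrative, stabilizing A - L C)

x = np.array([[1.0], [0.5]])                  # true state, unknown to the observer
x_hat = np.zeros((2, 1))
residuals = []
for t in range(60):
    fault = 0.5 if t >= 40 else 0.0           # additive sensor fault at t = 40
    y = C @ x + fault
    r = (y - C @ x_hat).item()                # residual (innovation)
    residuals.append(abs(r))
    x_hat = A @ x_hat + L * r                 # observer update
    x = A @ x
```

Structuring then consists in designing a bank of such observers so that each fault affects a distinct subset of residuals, which is what makes isolation possible.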
Task 5: Diagnosis
Goal: to develop tools for fault detection and fault characterization.
- Data and measurement validation.
- Event detection.
- Event analysis and anomaly detection.
- Diagnostic sensitivity analysis.
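The event-detection item above can be illustrated with a one-sided CUSUM test, a standard choice for detecting a mean shift in a residual sequence. The drift and threshold values, and the synthetic signals, are invented for the example.

```python
# Event detection sketch: a one-sided CUSUM accumulates deviations of a
# residual from its nominal zero mean and raises an alarm when the
# cumulative sum crosses a decision threshold h.
def cusum(signal, drift=0.1, h=1.0):
    s, alarm_at = 0.0, None
    for t, r in enumerate(signal):
        s = max(0.0, s + r - drift)           # one-sided cumulative sum
        if s > h and alarm_at is None:
            alarm_at = t                      # event detected at sample t
    return alarm_at

clean = [0.02] * 50                           # nominal residual, near zero
faulty = [0.02] * 30 + [0.4] * 20             # mean shift starting at sample 30
```

The drift parameter sets the smallest change considered significant, and the threshold h trades detection delay against false-alarm rate; tuning both is part of the sensitivity analysis listed above.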
Task 6: Case studies
Goal: to test the developed diagnosis methods on concrete processes. This may include lab prototypes, software-simulated processes, industrial pilot processes, or partner datasets. The key point is studying implementation conditions, formulating realistic hypotheses, and analyzing discrepancies between application and theory.
- Definition of a common benchmark.
- Identification of industrial applications in Tunisia and initiation of contacts to define the data-access protocol.
- Definition of protocols for comparing approaches.
- Analysis of pros and cons of the different approaches.
- Fusion of methods and results.
Task 7: Synthesis
Goal: to analyze and quantify the results obtained in terms of scientific output, idea exchange, and researcher training.
- Analysis of results (scientific contribution, publications, defended theses).
- Identification and analysis of unresolved points.
- Identification of potential industrial partners.
