Manifold Learning for Adversarial Robustness in Predictive Process Monitoring

In recent years, a wide range of predictive models has been successfully applied to predictive process monitoring, enabling tasks such as predicting the next activity, the remaining time, or the future state of a process instance (case). However, recent work has shown that these models are vulnerable to adversarial attacks, which cause them to make incorrect predictions. This paper addresses this issue by leveraging adversarial examples to evaluate the predictive performance of predictive process monitoring models under adversarial threats. Although augmenting training data with adversarial examples has proven effective in defending against specific adversarial attacks, it is often insufficient to mitigate vulnerabilities to other types of attacks. Our proposed approach explores the use of manifold learning techniques to restrict adversarial examples to the manifold of the data on which the model is trained. By learning from these specifically engineered (hidden) attacks, we seek to develop models that maintain accuracy on new, unseen data while effectively improving adversarial robustness against potential threats.
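To make the idea concrete, the following is a minimal sketch of on-manifold adversarial training, assuming PyTorch, fixed-length feature encodings of case prefixes, FGSM as a stand-in attack, and an autoencoder as the manifold learner. All identifiers (PrefixAutoencoder, fgsm_attack, project_on_manifold, adversarial_training_step) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PrefixAutoencoder(nn.Module):
    """Learns a compact representation (manifold) of encoded case prefixes.

    Hypothetical manifold learner; any model that reconstructs inputs from a
    low-dimensional code could stand in here.
    """
    def __init__(self, n_features: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def fgsm_attack(model: nn.Module, loss_fn, x: torch.Tensor,
                y: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Fast Gradient Sign Method: perturb inputs along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def project_on_manifold(autoencoder: nn.Module, x_adv: torch.Tensor) -> torch.Tensor:
    """Pass the attack through the autoencoder to pull it back toward the data manifold."""
    with torch.no_grad():
        return autoencoder(x_adv)

def adversarial_training_step(predictor, autoencoder, optimizer, x, y):
    """One training step mixing clean and on-manifold adversarial examples."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_attack(predictor, loss_fn, x, y)   # unrestricted attack
    x_on = project_on_manifold(autoencoder, x_adv)  # restricted to the training-data range
    optimizer.zero_grad()                           # clear gradients left over from the attack
    loss = loss_fn(predictor(x), y) + loss_fn(predictor(x_on), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The projection step discards the component of the perturbation that leaves the learned manifold, so under these assumptions the predictor is trained on attacks that remain plausible process behaviour rather than arbitrary noise.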

Methodological Pipeline