Speaker
Description
Established graph anomaly detection methods predominantly operate in an unsupervised or self-supervised manner, assuming minimal anomaly contamination and relying solely on data-driven signals to infer notions of (ab)normality. While these methods can, in theory, capture the relevant complex structural and attribute-based patterns, they typically do not allow prior knowledge to be incorporated as meaningful guidance. This, however, is crucial: alignment with existing prior knowledge helps ensure a suitable model of normality, makes detected deviations relevant, and allows guarantees to be given for known cases.
This talk presents our early work on integrating logical constraints into graph anomaly detection to address this limitation. Our approach begins by learning logic formulas that capture the normative behavior of nodes, effectively simulating constraints that domain experts might provide to guide the model. We then outline the next steps of our work, which will focus on incorporating these learned constraints into an established graph neural network framework for anomaly detection, guiding the detection process and providing guarantees for instances where the constraints apply.
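To make the idea concrete, the following is a minimal, hypothetical sketch of constraint-based flagging: a single hand-written logical rule over node attributes and local structure stands in for the formulas that would actually be learned from data. The graph, attributes, rule, and function names are all invented for illustration and are not part of the presented method.

```python
# Hypothetical sketch: flag nodes that violate a logical constraint
# over their attributes and local structure. The rule below is
# invented; in the presented approach such formulas would be learned
# or supplied by domain experts.

graph = {
    "A": {"neighbors": ["B", "C"], "verified": True},
    "B": {"neighbors": ["A"], "verified": True},
    "C": {"neighbors": ["A", "D", "E", "F"], "verified": False},
    "D": {"neighbors": ["C"], "verified": True},
    "E": {"neighbors": ["C"], "verified": True},
    "F": {"neighbors": ["C"], "verified": True},
}

def constraint_holds(attrs):
    # Example rule: "every node with degree >= 3 must be verified".
    return len(attrs["neighbors"]) < 3 or attrs["verified"]

def violations(graph):
    # Nodes violating the rule become candidate anomalies, each with
    # a human-readable justification (the violated formula) rather
    # than only an opaque anomaly score.
    return [n for n, a in graph.items() if not constraint_holds(a)]

print(violations(graph))  # node C has degree 4 but is not verified
```

Unlike a purely score-based detector, such a rule both guides detection and yields a guarantee on the nodes it covers: any node satisfying all constraints is, by construction, consistent with the encoded prior knowledge.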