CAT: Beyond Efficient Transformer for Content-Aware Anomaly Detection in Event Sequences

It is critical to detect anomalies in event sequences, which have become widely available in many application domains. Indeed, various efforts have been made to capture abnormal patterns from event sequences through sequential pattern analysis or event representation learning. However, existing approaches usually ignore the semantic information of event content. To this end, in this paper, we propose a self-attentive encoder-decoder transformer framework, Content-Aware Transformer (CAT), for anomaly detection in event sequences. In CAT, the encoder learns preamble event sequence representations with content awareness, and the decoder embeds sequences under detection into a latent space where anomalies are distinguishable. Specifically, the event content is first fed to a content-awareness layer, which generates a representation of each event. The encoder accepts the preamble event representation sequence and generates feature maps. In the decoder, an additional token is added at the beginning of the sequence under detection, denoting the sequence status. A one-class objective together with a sequence reconstruction loss is applied to train our framework under a label-efficiency scheme. Furthermore, CAT is optimized under a scalable and efficient setting. Finally, extensive experiments on three real-world datasets demonstrate the superiority of CAT.
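The abstract's description of the architecture and training objective can be made concrete with a minimal PyTorch sketch. This is not the authors' implementation: the content-awareness layer (modeled here as token embedding plus mean pooling), the hyperparameters, the learnable status token, and the Deep SVDD-style one-class center are all illustrative assumptions based only on the abstract.

```python
import torch
import torch.nn as nn

class CAT(nn.Module):
    """Sketch of a content-aware encoder-decoder transformer (assumed design)."""
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        # Content-awareness layer (assumed): embeds each event's content
        # tokens and pools them into one representation per event.
        self.content_embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        # Learnable status token prepended to the sequence under detection.
        self.status_token = nn.Parameter(torch.randn(1, 1, d_model))
        self.reconstruct = nn.Linear(d_model, d_model)

    def event_repr(self, content_tokens):
        # content_tokens: (batch, num_events, tokens_per_event)
        return self.content_embed(content_tokens).mean(dim=2)

    def forward(self, preamble_tokens, detect_tokens):
        # Encoder: feature maps from the preamble event sequence.
        memory = self.encoder(self.event_repr(preamble_tokens))
        # Decoder: status token + sequence under detection, attending
        # to the encoder's feature maps.
        x = self.event_repr(detect_tokens)
        status = self.status_token.expand(x.size(0), -1, -1)
        h = self.decoder(torch.cat([status, x], dim=1), memory)
        # h[:, 0] is the sequence-status embedding; h[:, 1:] is decoded
        # back to event representations for the reconstruction loss.
        return h[:, 0], self.reconstruct(h[:, 1:]), x

def cat_loss(status_emb, recon, target, center, lam=1.0):
    # One-class term pulls the status embedding of normal sequences
    # toward a fixed center; a reconstruction term is added as in the
    # abstract. The weighting lam is an assumption.
    one_class = ((status_emb - center) ** 2).sum(dim=1).mean()
    recon_err = ((recon - target) ** 2).mean()
    return one_class + lam * recon_err
```

Under this reading, the distance of the status embedding from the one-class center would serve as the anomaly score at detection time.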
