Performance Evaluation of Anomaly Detection
Blog posts about unsupervised ML evaluation
At Balabit (now One Identity), the data science team I was part of worked on unsupervised anomaly detection methods. We quickly noticed that performance evaluation, an essential part of any ML project, was far less standardized for this task than for supervised learning, with fewer practical resources available. To address this, I documented our experiences and the solutions we developed in a series of blog posts. The posts resonated with others facing similar challenges and led to follow-up feedback and questions. I'm particularly pleased that the series introduces a metric I designed (Part 3) and describes a property of ROC-AUC (Part 4) that I haven't seen discussed elsewhere.
Here you can read the series: