This website contains additional information about our paper “Dos and Don'ts of Machine Learning in Computer Security”. In the paper, we identify common pitfalls in the design, implementation, and evaluation of learning-based security systems. Please note that this website is still under construction; we will continuously add further information that we find helpful for the community.


With the growing processing power of computing systems and the increasing availability of massive datasets, machine learning algorithms have led to major breakthroughs in many different areas. Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance and render learning-based systems potentially unsuitable for security tasks and practical deployment.

In the paper, we look at this problem with a critical eye. First, we identify common pitfalls in the design, implementation, and evaluation of learning-based security systems. We conduct a study of 30 papers from top-tier security conferences within the past 10 years, confirming that these pitfalls are widespread in the current security literature. In an empirical analysis, we further demonstrate how individual pitfalls can lead to unrealistic performance and interpretations, obstructing the understanding of the security problem at hand. As a remedy, we propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible. Furthermore, we identify open problems when applying machine learning in security and provide directions for further research.
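One of the pitfalls discussed in the paper, the base-rate fallacy, can be illustrated with a short sketch. The detector rates and class ratios below are made up for illustration; the point is only that the same classifier yields very different precision depending on the class balance assumed in the evaluation:

```python
# Illustration of the base-rate fallacy: a detector with a fixed
# true-positive rate (TPR) and false-positive rate (FPR) looks far
# better on a balanced lab dataset than under a realistic base rate.

def precision(tpr, fpr, malware_ratio):
    """Precision = TP / (TP + FP) for a given base rate of malware."""
    tp = tpr * malware_ratio          # expected fraction of true positives
    fp = fpr * (1 - malware_ratio)    # expected fraction of false positives
    return tp / (tp + fp)

# Hypothetical detector: 95% TPR, 1% FPR.
# Balanced evaluation dataset (50% malware):
print(f"balanced:  {precision(0.95, 0.01, 0.50):.3f}")  # ~0.990
# Realistic deployment with a 1% malware base rate:
print(f"realistic: {precision(0.95, 0.01, 0.01):.3f}")  # ~0.490
```

With identical detection rates, precision drops from roughly 99% to below 50% once the realistic base rate is taken into account, which is why reporting results only on balanced data can be misleading.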

Important Note
Please note that our paper should not be interpreted as a finger-pointing exercise. On the contrary, it is a reflective effort that shows how subtle pitfalls, which also affect our own research, have a negative impact on actual progress, and how we—as a community—can mitigate them adequately.

Pitfalls & Recommendations

Below, you will find an overview of the identified pitfalls and their prevalence in the reviewed security literature. For details on each pitfall, please click on the corresponding symbol:

Present (but discussed)
Partly present
Partly present (but discussed)
Unclear from text
Does not apply
Not present


If you want to find out more about the identified pitfalls, you can read our publicly available paper. For interested readers, we also provide supplementary material.

Cite the Paper

To cite our paper, you can use the following BibTeX entry:

    @inproceedings{arp2022dos,
      author    = {Daniel Arp and Erwin Quiring and Feargus Pendlebury and Alexander Warnecke and
                   Fabio Pierazzi and Christian Wressnegger and Lorenzo Cavallaro and Konrad Rieck},
      title     = {Dos and Don'ts of Machine Learning in Computer Security},
      booktitle = {Proc. of USENIX Security Symposium},
      year      = {2022},
    }


Frequently Asked Questions

Below, we answer some questions you might have:

Does the presence of a pitfall invalidate a paper's contribution?

Not necessarily. While we demonstrate throughout our impact analysis that ignoring the pitfalls can affect the overall outcome significantly, the presence of a pitfall (or even multiple pitfalls) does not always diminish the contribution of a paper. Nonetheless, we should discuss as a community how to deal with these pitfalls in future research and minimize their impact whenever possible.

How did you select the 30 papers for your study?

We skimmed all papers from the top security conferences of the last 10 years and identified 30 papers that use machine learning prominently (e.g., mentioned in the abstract or introduction). Even though this selection process is neither exhaustive nor entirely free from bias, the identified papers are prototypical for this branch of research and often highly cited by other researchers.

Will you publish which pitfalls you assigned to each paper?

The intention behind our paper is to provide readers with an overview of common pitfalls that should be avoided when applying machine learning in computer security. We do not intend to blame or attack prior work. Therefore, we refrain from publishing the full list of assigned pitfalls. However, to allow other researchers to reproduce and scrutinize our work, we will share the assignments for individual papers if the authors agree.