Reports

Validation report 005: Food Image Classification

Published in Explainable Machine Learning 2023/2024 course, 2024

This study evaluates the resilience of the ‘nateraw/food’ Vision Transformer (ViT) food classification model against data manipulation attacks, using LIME and Attention Rollout to explain its predictions. The model withstands most transformations, but extreme photographic effects and overlays of key non-food features significantly alter its predictions. The findings confirm the model’s overall robustness while revealing specific vulnerabilities to strategic overlays and severe photographic distortions.
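
For context, a LIME explanation of this classifier can be reproduced roughly as follows. This is a minimal sketch rather than the report’s code; the image file, number of samples, and number of superpixels shown are placeholder assumptions.

```python
# Minimal sketch: LIME superpixel attributions for the 'nateraw/food' ViT.
# The image path and sampling parameters are illustrative placeholders.
import numpy as np
import torch
from PIL import Image
from lime import lime_image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("nateraw/food")
model = AutoModelForImageClassification.from_pretrained("nateraw/food")
model.eval()

def predict_proba(images: np.ndarray) -> np.ndarray:
    """Map a batch of HxWx3 arrays to class probabilities, as LIME expects."""
    pil_images = [Image.fromarray(img.astype(np.uint8)) for img in images]
    inputs = processor(images=pil_images, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1).numpy()

image = np.array(Image.open("pizza.jpg").convert("RGB"))  # placeholder image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_proba,
                                         top_labels=3, num_samples=500)
overlay, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                               positive_only=True,
                                               num_features=5)
```

Rerunning the same setup on transformed inputs (blur, overlays, colour shifts) then shows whether the attributions stay on the food regions or drift toward the introduced artifacts.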

Recommended citation: Tomasz Silkowski. (2024). "Vulnerabilities in Food Image Classification." Github: ModelOriented/CVE-AI.
Download Paper

Validation report 004: Breast Cancer Detector Model

Published in Explainable Machine Learning 2023/2024 course, 2024

This study conducts a Red Teaming analysis of a CNN-based Breast Cancer Detector Model, using XAI techniques to assess its reliability and uncover vulnerabilities. The analysis found that while the model is generally robust, subtle vulnerabilities can be exposed through data augmentation and out-of-distribution samples. LIME and SHAP attributions showed which image regions drive the predictions, emphasizing the need for high-quality input data to ensure the model’s reliability in clinical applications.
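
The detector and data are not public, but the augmentation probe described above typically follows a pattern like the sketch below. The tiny CNN and random tensor are stand-ins for the report’s actual detector and mammograms; only the probing pattern is illustrated.

```python
# Hypothetical sketch of an augmentation probe: compare the detector's output
# probabilities on an image and on perturbed variants of it. The tiny CNN and
# the random tensor stand in for the (non-public) detector and data.
import torch
import torch.nn as nn
import torchvision.transforms as T

model = nn.Sequential(                       # stand-in for the real CNN
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(8 * 8 * 8, 2),                 # benign vs malignant logits
).eval()

image = torch.rand(1, 1, 64, 64)             # stand-in mammogram tensor
augmentations = {
    "original": nn.Identity(),
    "rotated": T.RandomRotation(degrees=(15, 15)),
    "blurred": T.GaussianBlur(kernel_size=5),
    "noisy": lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0, 1),
}

with torch.no_grad():
    for name, aug in augmentations.items():
        probs = torch.softmax(model(aug(image)), dim=-1)
        print(f"{name:9s} P(malignant) = {probs[0, 1].item():.3f}")
```

Large prediction shifts under mild, clinically plausible perturbations are the kind of vulnerability the report flags, which the LIME and SHAP attributions then help localise.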

Recommended citation: Mikolaj Drzewiecki, Monika Michaluk. (2024). "Red Teaming analysis of the Breast Cancer Detector Model." Github: ModelOriented/CVE-AI.
Download Paper

Validation report 003: YOLOv5-license-plate exploration with SHAP

Published in Explainable Machine Learning 2023/2024 course, 2024

This project investigates the robustness of a fine-tuned YOLOv5 model for license plate detection against DPatch adversarial attacks and explores the interpretability of its predictions through SHAP analysis. The study found that DPatch attacks reduce the model’s detection rates, with discrepancies from previously reported results attributed to the model’s fine-tuning and architectural advances. SHAP analysis showed that the model focuses on specific regions, such as the license plate itself, providing insight into its decision-making process.
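
Once a patch has been trained (e.g. with the DPatch implementation in the Adversarial Robustness Toolbox), its effect on detection counts can be measured with a sketch like the one below. The weights path, test image, patch file, paste position, and confidence threshold are placeholder assumptions, not the report’s setup.

```python
# Rough sketch: count YOLOv5 detections before and after overlaying a patch.
# Weights, images, and patch placement are illustrative placeholders.
import torch
from PIL import Image

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="license_plate_yolov5.pt")  # placeholder weights
model.conf = 0.25                                       # confidence threshold

def num_detections(img: Image.Image) -> int:
    results = model(img)
    return len(results.xyxy[0])          # boxes for the first (only) image

clean = Image.open("car.jpg").convert("RGB")     # placeholder test image
patch = Image.open("dpatch.png").convert("RGB")  # e.g. a trained DPatch
attacked = clean.copy()
attacked.paste(patch, (10, 10))                  # overlay patch in a corner

print("clean detections:   ", num_detections(clean))
print("attacked detections:", num_detections(attacked))
```

Aggregating these counts over a test set gives the detection-rate drop that the report compares against previously published DPatch results.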

Recommended citation: Robert Laskowski, Szymon Sadkowski. (2024). "Interpreting License Plate Detection Model: A SHAP-based Analysis and Adversarial Attack Exploration." Github: ModelOriented/CVE-AI.
Download Paper

Validation report 002: Go Policy Networks

Published in Explainable Machine Learning 2023/2024 course, 2024

This report identifies shortcomings of using convolutional architectures as Go policy networks by comparing them to Transformer-based policies. The findings show that while convolutional networks excel at capturing local features, they struggle with global phenomena, which can be detrimental in games like Go. Transformers, with their flexible attention mechanisms, better incorporate both local and global understanding, suggesting them as a promising direction for future research on large positional games.
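
The locality contrast the report builds on can be illustrated with two PyTorch layers over a 19x19 board encoding. This is an illustrative sketch, not the report’s networks; the eight feature planes are an arbitrary choice.

```python
# Illustration: a 3x3 convolution mixes only adjacent intersections of a
# 19x19 Go board, while one self-attention layer connects all 361 points.
import torch
import torch.nn as nn

board = torch.randn(1, 8, 19, 19)           # batch, feature planes, board

conv = nn.Conv2d(8, 64, kernel_size=3, padding=1)
local_features = conv(board)                 # each output sees a 3x3 window

tokens = board.flatten(2).transpose(1, 2)    # (1, 361, 8): one token per point
attn = nn.MultiheadAttention(embed_dim=8, num_heads=1, batch_first=True)
global_features, weights = attn(tokens, tokens, tokens)
print(weights.shape)                         # (1, 361, 361): all-pairs mixing
```

Stacking convolutions grows the receptive field only gradually, whereas a single attention layer already relates opposite corners of the board, which is the property the report credits for the Transformer policies’ better global understanding.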

Recommended citation: Antoni Hanke, Michal Grotkowski. (2024). "Comparative Analysis of Convolutional and Transformer Architectures in Go Policy Networks." Github: ModelOriented/CVE-AI.
Download Paper

Validation report 001: MIDI-to-score Conversion Model

Published in Explainable Machine Learning 2023/2024 course, 2024

This project explores advances in automatically transcribing performance MIDI streams into musical scores, focusing on the paper “Performance MIDI-to-score Conversion by Neural Beat Tracking” by Liu et al. (2022). The study reveals artifacts in the score generation model, particularly concerning velocity contributions and robustness to time signatures, and highlights the need for explainable AI (XAI) methods tailored to symbolic music data to improve model interpretability.
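
A velocity probe of the kind described can be set up by rewriting the input MIDI before transcription, as in the hedged sketch below; the file names and the `transcribe` step are placeholders for whatever MIDI-to-score pipeline is under test.

```python
# Sketch: flatten note velocities in a performance MIDI file to probe how much
# the score model's output depends on dynamics. File names are placeholders.
import pretty_midi

pm = pretty_midi.PrettyMIDI("performance.mid")     # placeholder input
for instrument in pm.instruments:
    for note in instrument.notes:
        note.velocity = 64                         # constant dynamics
pm.write("performance_flat_velocity.mid")

# Hypothetical downstream comparison with the model under test:
# score_original = transcribe("performance.mid")
# score_perturbed = transcribe("performance_flat_velocity.mid")
# compare(score_original, score_perturbed)
```

Analogous rewrites of the time signature and beat grid give the robustness probes whose artifacts the report discusses.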

Recommended citation: Mateusz Szymanski. (2024). "Performance MIDI To Score Automatic Transcription." Github: ModelOriented/CVE-AI.
Download Paper