Trustworthy AI

State: Open
Published: 2020-11-10

Cyberattacks increasingly target IoT-based scenarios. To protect these scenarios from unseen cyberattacks (also known as zero-day attacks), a considerable number of anomaly detectors based on Machine and Deep Learning (ML/DL) have been proposed in recent years. The idea behind these solutions is to model the benign behaviour of the IoT device and detect the deviations produced by malware attacks. However, some of the existing models suffer from important drawbacks such as data leakage, methodological flaws, data mismatch, or vulnerable software, among others. As a consequence, the predictions produced by these models are not sufficiently trusted. To address this challenge, the main objective of this thesis is to design, implement, and validate a trust algorithm able to measure and quantify the trust level of different ML/DL-based models that detect anomalies created by malware affecting an IoT-based scenario.
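The benign-behaviour modelling idea above can be sketched with a one-class anomaly detector. This is a minimal illustration, not part of the thesis itself: the feature values are hypothetical device telemetry (e.g., syscalls per second and bytes sent per second), and scikit-learn's IsolationForest stands in for whichever ML/DL model the thesis would evaluate:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical benign telemetry: two features per sample,
# roughly stable while the IoT device operates normally.
benign = rng.normal(loc=[100.0, 50.0], scale=[5.0, 3.0], size=(500, 2))

# Fit on benign behaviour only: no attack samples are needed,
# which is what makes this approach applicable to zero-day attacks.
detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)

# Score new observations: +1 = consistent with the benign profile,
# -1 = deviation, i.e., a potential malware-induced anomaly.
normal_sample = np.array([[101.0, 49.0]])
attack_sample = np.array([[300.0, 400.0]])  # hypothetical malware burst
print(detector.predict(normal_sample))  # [1]
print(detector.predict(attack_sample))  # [-1]
```

The open question the thesis targets starts exactly here: the detector emits a verdict, but nothing about how much that verdict should be trusted.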


[1] In AI We Trust? Factors That Influence the Trustworthiness of AI-infused Decision-Making Processes —> https://arxiv.org/pdf/1912.02675.pdf

[2] Factsheets for AI Services —> https://www.ibm.com/blogs/research/2018/08/factsheets-ai/

[3] To Trust Or Not To Trust A Classifier —> https://arxiv.org/pdf/1805.11783.pdf
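Reference [3] is a natural starting point for the trust-quantification part: it defines a trust score as the ratio between a test point's distance to the nearest training class other than the predicted one and its distance to the predicted class (higher ratio = more reason to trust the prediction). A simplified sketch on synthetic data, omitting the paper's high-density filtering step:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def trust_score(X_train, y_train, x, predicted_label):
    """Simplified trust score in the spirit of reference [3]:
    distance to the closest *other* class divided by the distance
    to the *predicted* class. Values > 1 suggest a trustworthy
    prediction; values < 1 suggest the model may be wrong."""
    d_pred = d_other = np.inf
    for label in np.unique(y_train):
        nn = NearestNeighbors(n_neighbors=1).fit(X_train[y_train == label])
        dist = nn.kneighbors(x.reshape(1, -1))[0][0, 0]
        if label == predicted_label:
            d_pred = dist
        else:
            d_other = min(d_other, dist)
    return d_other / d_pred

# Two well-separated synthetic classes (e.g., benign vs. malware traffic).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

x = np.array([0.0, 0.0])          # a point deep inside class 0
print(trust_score(X, y, x, 0))    # high score: prediction 0 looks reliable
print(trust_score(X, y, x, 1))    # score < 1: prediction 1 looks suspect
```

Note that this score is computed independently of the classifier's own confidence, which is precisely what makes it useful as an external trust measure.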

20% Literature Review, 35% Design, 20% Implementation, 15% Evaluation, 10% Documentation
Basic knowledge of Machine and Deep Learning

Supervisors: Dr Alberto Huertas
