Cyberattacks increasingly affect IoT-based scenarios. With the goal of protecting these scenarios from unseen cyberattacks (also known as zero-day attacks), a considerable number of anomaly detectors based on Machine and Deep Learning (ML/DL) have been proposed in recent years. The idea behind these solutions is to model the benign behaviour of the IoT device and detect deviations produced by malware attacks. However, some of the existing models suffer from important drawbacks such as data leakage, methodological flaws, data mismatch, or vulnerable software, among others. As a consequence, the predictions produced by those models are not trusted enough. To address this challenge, the main objective of this thesis is to design, implement and validate a trust algorithm able to measure and quantify the trust level of different ML/DL-based models that detect anomalies created by malware affecting an IoT-based scenario.
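To make the benign-behaviour modelling idea concrete, the following is a minimal sketch of such an anomaly detector: it profiles benign device behaviour with per-feature mean and standard deviation, then flags samples whose deviation (z-score) exceeds a threshold. The feature choices and threshold are illustrative assumptions, not part of the thesis proposal.

```python
from statistics import mean, stdev

class BenignProfile:
    """Models benign IoT device behaviour as per-feature (mean, std)
    statistics and flags samples that deviate too far as anomalies."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # max tolerated z-score (assumed value)
        self.stats = []             # one (mean, std) pair per feature

    def fit(self, benign_samples):
        # benign_samples: equal-length feature vectors (e.g. packets/s,
        # CPU usage) captured while the device is known to be clean.
        for feature in zip(*benign_samples):
            self.stats.append((mean(feature), stdev(feature) or 1e-9))
        return self

    def score(self, sample):
        # Largest per-feature z-score: how far the sample deviates
        # from the benign profile.
        return max(abs(x - m) / s for x, (m, s) in zip(sample, self.stats))

    def is_anomalous(self, sample):
        return self.score(sample) > self.threshold
```

For example, after fitting on benign vectors such as `[[10, 0.2], [12, 0.25], [11, 0.22], [9, 0.18]]`, a sample like `[11, 0.2]` stays within the profile, while `[300, 0.9]` is flagged as an anomaly. Real detectors in this setting typically replace the z-score with ML/DL models (e.g. autoencoders), but the benign-profile-plus-deviation structure is the same.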
 In AI We Trust? Factors That Influence the Trustworthiness of AI-infused Decision-Making Processes —> https://arxiv.org/pdf/1912.02675.pdf
 Factsheets for AI Services —> https://www.ibm.com/blogs/research/2018/08/factsheets-ai/
 To Trust Or Not To Trust A Classifier —> https://arxiv.org/pdf/1805.11783.pdf
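The third link above proposes a trust score for classifier predictions. A heavily simplified version of that idea, sketched here for illustration only (the real method uses high-density subsets of each class), is the ratio of the distance to the nearest training sample of any other class over the distance to the nearest sample of the predicted class:

```python
import math

def trust_score(x, train_X, train_y, predicted_label):
    """Simplified nearest-neighbour trust score (after the paper above):
    ratio of the distance to the closest training sample of a *different*
    class over the distance to the closest sample of the *predicted* class.
    Scores above 1 suggest the prediction agrees with the data geometry;
    scores below 1 suggest it should not be trusted."""
    d_pred = min(math.dist(x, xi)
                 for xi, yi in zip(train_X, train_y) if yi == predicted_label)
    d_other = min(math.dist(x, xi)
                  for xi, yi in zip(train_X, train_y) if yi != predicted_label)
    return d_other / max(d_pred, 1e-12)  # guard against zero distance
```

With training points `[(0, 0), (0, 1), (5, 5), (5, 6)]` labelled `benign, benign, malware, malware`, the sample `(0.2, 0.5)` gets a score above 1 when predicted `benign` and below 1 when predicted `malware`, matching the intuition that the second prediction contradicts the training data.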
Supervisors: Dr Alberto Huertas