For trustworthy AI

Open-Source

Our tools build on established open-source toolkits. We apply state-of-the-art software developed by leading research institutions and refine these tools to ensure a smooth user experience and simple operation.

Fairness & ethics

We analyze both the training data and the predictions of AI systems for possible discrimination, unfair imbalances, and bias. You will receive a comprehensive report and an easy-to-understand explanation of the results.
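To give a sense of what such an analysis can look like, here is a minimal sketch that breaks a model's accuracy and selection rate down by a protected attribute. The choice of the open-source Fairlearn library and the synthetic data are assumptions made for this illustration; they do not describe our actual audit pipeline.

```python
# Illustrative sketch only: the text does not name a specific fairness toolkit,
# so this example assumes the open-source Fairlearn library and synthetic data.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)       # ground-truth labels
y_pred = rng.integers(0, 2, size=1000)       # predictions of the model under audit
group = rng.choice(["A", "B"], size=1000)    # protected attribute, e.g. gender

# Break accuracy and selection rate down per group to expose imbalances.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)

# A single headline number: how far apart the groups' positive rates are.
print("demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

A real report would cover many more metrics and attributes; the point here is simply that fairness checks reduce to comparing such group-wise statistics.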

Robustness

Machine learning models can be vulnerable to so-called “adversarial attacks,” in which very small (often imperceptible) perturbations of the inputs cause the model to make an incorrect prediction. We use the “Adversarial Robustness Toolbox” developed by IBM to simulate such attacks and test the robustness of the AI system.
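The following minimal sketch shows how such an attack can be simulated with the Adversarial Robustness Toolbox. The scikit-learn model, the Iris data, and the attack parameters are placeholders chosen for illustration, not part of an actual audit.

```python
# Minimal sketch: probe a simple scikit-learn model with the Fast Gradient Method
# from IBM's Adversarial Robustness Toolbox (ART) and compare accuracies.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
X = X.astype(np.float32)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the trained model so ART can query it for predictions and gradients.
classifier = SklearnClassifier(model=model)

# Craft small perturbations (eps controls their size) and re-evaluate accuracy.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

print(f"accuracy on clean inputs:     {model.score(X, y):.2f}")
print(f"accuracy on perturbed inputs: {model.score(X_adv, y):.2f}")
```

A noticeable drop in accuracy on the perturbed inputs indicates that the model is sensitive to small input changes and thus not robust against this type of attack.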

GDPR

To date, no established tools exist for checking whether training data contains personal data covered by the General Data Protection Regulation (GDPR). Tools we have developed in-house automatically scan data sets and tag entries that may be personal.
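Since these in-house tools are not public, the sketch below only illustrates the basic idea of scanning and tagging, using two simple, assumed regex patterns for e-mail addresses and phone numbers.

```python
# Purely illustrative stand-in for a GDPR data scan: flag records whose free
# text matches simple patterns for potentially personal data.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def tag_personal_data(records):
    """Return, per record, the categories of potentially personal data found."""
    findings = []
    for i, text in enumerate(records):
        hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
        if hits:
            findings.append((i, hits))
    return findings

rows = [
    "Order 4711 shipped on 2023-05-02",
    "Contact jane.doe@example.org or +43 316 1234567 for details",
]
print(tag_personal_data(rows))   # -> [(1, ['email', 'phone'])]
```

Real personal-data detection goes well beyond such patterns (names, addresses, identifiers, context), but the output is the same in spirit: a tagged list of records that require a closer GDPR review.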

Cybersecurity

Cybercrime is a major challenge when assessing AI systems. Conventional static testing is not sufficient. We are exploring fundamentally new safety engineering concepts to obtain continuous attestation of AI systems’ resilience against cyberattacks.

Ethics & Law

Does your AI application sufficiently take societal values into account and meet all legal requirements?
The Business Analytics and Data Science Center of the University of Graz investigates how data-driven technologies can be used in business and what societal impact they have. The BANDAS Center contributes its expertise in software validation, auditing, and documentation of AI, with particular attention to the ethical and legal aspects of AI. The Institute of Interactive Systems and Data Science at TU Graz covers the area of “Ethics by Design.”