We provide consulting, planning, testing and certification in the development and use of information and communication technologies.
Active change in society through the use of digital technologies and methods.
Algorithmically controlled systems and decision-making systems (e.g. credit approval/rejection, automatic ratings): analysis of transparency (cf. Art. 12 GDPR) and fairness.
Machine Learning and Artificial Intelligence
Ethical Guidelines for Trustworthy AI
Ethics certification based on the European Commission’s Ethics Guidelines for Trustworthy AI:
- Respect for human autonomy
- Harm prevention
- Fairness and explainability
The catalogue of requirements and tests for evaluations and audits was developed jointly with the Vienna University of Technology.
EU General Data Protection Regulation (GDPR)
- Analysis of the current state
- Identification and definition of necessary measures
- Implementation and certification
Biometrics and Forensics
- Forensic biometrics: face recognition, person recognition, …
- (Biometric) access control: fingerprint recognition, face recognition, …
- Video and image analysis: identity verification, authenticity verification
Practical and Applied Informatics
Expert's Reports and Reviews
Court Expert Opinion
As sworn and court-certified experts, we prepare IT and (image) forensic reports.
Expert Opinions and Official Documents
With more than two hundred scientific publications on our core topics over the last 25 years, our expertise is well documented. We support you in your scientific dissemination, e.g. within a funded project.
As civil engineers, we draw up public deeds and, similar to notaries, are authorised to issue them as certificates. Certificates for GDPR compliance, for example, are becoming increasingly important.
Project 1: Fairness, Bias and Transparency of Algorithms
Data-driven technologies are designed to distinguish, classify and differentiate according to certain characteristics. A distinction must be made, however, between desirable forms of differentiation and unfair discrimination.
Internal structures and decision-making mechanisms of an algorithm can also be unfair and/or non-transparent. This can concern the following areas, for example:
- Decisions on creditworthiness or eligibility
- Access to health services
- Access to employment
- Access to education (e.g. university admission)
Beyond the transparency principle of the GDPR, transparency of systems is crucial for legal experts and software developers as well as for regular users. The latter benefit when an algorithm’s decisions can be traced, which builds trust in the system.
The goals of the project* are:
- Creating basic knowledge: What is the benefit of Explainable AI (XAI) methods? Which definitions of transparency exist, and into which levels can transparency be subdivided? Analysis of methods for explaining algorithms.
- Development of a framework: design criteria for a fair and transparent algorithm, potential sources of error, metrics.
- Data from practice: assessing the degree of transparency and analysing explainability while taking confidential information into account.
*Research project Fairalgos (FFG Bridge 878730) under the direction of Dr. Kampel at the TU Vienna.
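To illustrate the kind of fairness metric such a framework might include, here is a minimal sketch (our own example, not the project’s actual methodology): the demographic-parity difference measures how much a model’s positive-outcome rate differs between groups, e.g. in credit approval. The function name and data are hypothetical.

```python
def demographic_parity_difference(predictions, groups, positive=1):
    """Absolute gap in positive-prediction rates between groups.

    predictions: model outputs per individual (e.g. 1 = credit approved)
    groups:      group label per individual (e.g. "A", "B")
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(1 for p in members if p == positive) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: group "A" is approved in 3 of 4 cases, group "B" in 1 of 4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 would indicate equal approval rates; real audits combine several such metrics, since no single one captures all notions of fairness.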
Project 2: Algorithmic Governance of Care
Care work in long-term care is a human-centred activity that requires empathy, physical closeness and trusting relationships between caregivers and care recipients. In recent years, artificial intelligence (AI) has been increasingly used in care systems to support professional caregivers in their daily activities and to provide care recipients with an additional level of safety. Despite the increasing prevalence of AI in care, few studies have addressed the bias of algorithmic systems in this field. By linking multiple Big Data sets, AI can set in motion unfair and non-transparent decision-making processes that lead to discriminatory care practices.
The aim of the project* is to investigate the potential bias of algorithm-driven care technologies in their impact on long-term care. Findings from qualitative case studies in long-term care provide the basis for a differentiated understanding of the effects and needs of care in relation to AI systems.
Based on these findings, the project investigates the utility of XAI (explainable AI) methods (trustworthiness, fairness, transparency) and different levels of transparency with regard to their applicability to care systems.
*Research project AlgoCare (WWTF project) under the direction of Dr. Kampel at the TU Vienna.
Project 3: Trusted AI – Certification process
To ensure trustworthy AI, we conduct audits and assess AI applications and algorithms using professional checklists and guidelines. In the analysis, we distinguish between Pre-Processing, In-Processing and Post-Processing.
Pre-processing methods examine the training data with regard to bias. In-processing describes techniques that analyse the network architecture. The goal of post-processing is to understand the result of a learning algorithm, provided that the trained model is a black box and the training or learning algorithm cannot be changed. The transparency of the AI system is of particular importance.
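As a simplified illustration of the pre-processing step described above (our own sketch, not the certification checklist itself), one basic check compares the base rate of the target label across protected groups in the training data before any model is trained; a large gap is an early warning sign that a model may inherit bias. The function name and sample data are hypothetical.

```python
from collections import defaultdict

def label_base_rates(rows, group_key, label_key):
    """Return the fraction of positive labels per group in a dataset."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for row in rows:
        counts[row[group_key]][1] += 1
        counts[row[group_key]][0] += row[label_key]
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical training data for a credit-approval model.
training_data = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]
rates = label_base_rates(training_data, "group", "approved")
# Group A is approved in 2 of 3 cases, group B in only 1 of 3 -
# a gap the audit would flag for closer examination.
```

Such a check addresses only the data; in-processing and post-processing analyses then examine the model architecture and its black-box outputs, respectively.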
The result is documented in a certificate attesting to the reliability, transparency and security of the AI application and its algorithms.
The basis of certification for trustworthy AI is the “Ethics Guidelines for Trustworthy AI” of the European Commission’s independent High-Level Expert Group on AI (AI HLEG). For the testing and certification of the reliability and trustworthiness of AI systems, reference is also made to the ethical framework “Malta towards Trustworthy AI”, among others.
Assoc. Prof. Dr. Martin Kampel,
Engineering Consultant for Informatics
Dr. Kampel, ZT, is the founder and managing partner of app informatics zt GmbH. He is also a Senior Scientist at the Institute for Visual Computing & Human-Centered Technology, Vienna University of Technology.
He studied data technology and computer science, obtained his doctorate with distinction, and habilitated in practical computer science at the Faculty of Informatics of the Vienna University of Technology. As a computer scientist at the interface between research and development, Dr. Kampel specialises in interdisciplinary issues of practical computer science, especially visual computing and artificial intelligence, as well as ethics and digital transformation.
He is an international reviewer and examiner of scientific and commercial projects, an engineering consultant for computer science, and a sworn and court-certified expert for ICT.
Prof. Dr. Robert Sablatnig
Prof. Sablatnig is the founder and shareholder of app informatics zt GmbH. He is also a member of the board of the Institute for Visual Computing & Human-Centered Technology at the Vienna University of Technology, where he is active in research and teaching in the field of Computer Vision & AI.
He studied computer science with a focus on visual computing at the Vienna University of Technology, where he has been an associate professor for computer vision since 2003. From 2005 to 2017 he was head of the Institute for Computer-Aided Automation. Since 2010 he has headed the Computer Vision Lab, part of the Institute for Visual Computing & Human-Centered Technology founded in 2018, which he has led since 2019. By leading and coordinating more than 30 industry-relevant and academic research projects, he applies basic research in an application-oriented way. He is an international reviewer and examiner of scientific and commercial projects, actively involved in the international research landscape, a board member of the Austrian Association for Pattern Recognition, represents Austria on the governing board of the International Association for Pattern Recognition, and is a sworn and court-certified expert in computer vision.