artificial intelligence & ethics
software engineering & humanism
scientific research & responsibility
app informatics zt GmbH is a state-authorised and sworn civil engineering office for information technology.

We advise on, plan, test and certify the development and use of information and communication technologies.

Artificial Intelligence

Groundbreaking technology with substantial risks

Bias and Fairness

An AI system should be designed and deployed to deliver results fairly and without bias.

Digital Transformation

Active changes in society through the use of digital technologies and techniques.

Algorithmic Governance

Technical, ethical and legal challenges posed by algorithmically controlled systems

Data Protection

The protection of personal data is regulated by data protection laws and ensured by technical and organisational measures

Transparent AI

Explainability and transparency (cf. Art. 12 GDPR) are the foundations of reliable and trustworthy AI

Themes

Algorithmic Governance

Algorithmically controlled systems and decision-making systems (e.g. credit approval/rejection, automatic ratings): analysis of transparency (cf. Art. 12 GDPR) and fairness.

Machine Learning and Artificial Intelligence
Reliable and trustworthy AI: independent verification of an AI solution by a sworn and court-certified expert.
Ethical Guidelines for Trustworthy AI

Ethics certification based on the European Commission’s Ethics Guidelines for Trustworthy AI:

  • Respect for human autonomy
  • Harm prevention
  • Fairness and explainability

The catalogue of requirements and tests for an evaluation or for conducting an audit was developed together with the Vienna University of Technology.

EU General Data Protection Regulation (GDPR)
  • Analysis of the current state
  • Identification and definition of necessary measures
  • Implementation and certification
Biometrics and Forensics
  • Forensic biometrics: face recognition, person recognition,…
  • (Biometric) access control: fingerprint recognition, face recognition, …
  • Video and image analysis: identity verification, authenticity verification
Practical and Applied Informatics
Planning, implementation and review of IT projects on selected topics

Methods

Expert's Reports and Reviews
With our experience as national and international experts and auditors of ICT projects, we evaluate your project or project idea in terms of feasibility, degree of innovation and technology readiness level (TRL). In doing so, we take into account technical and scientific as well as commercial criteria.
GDPR Audits
We review, assess and evaluate the effectiveness of your technical and organisational measures to ensure the security of the processing of your personal data. We certify the compliance of your data processing with the GDPR.
Ethical Audits
The increased use of algorithms and artificial intelligence in our everyday lives can lead to unfair or non-transparent decision-making processes. Based on an assessment scheme developed at TU Vienna according to scientific methods, we evaluate your AI solution with regard to fairness, bias and transparency.
Funding Projects
We take over the application, project support or coordination of your funding project in our core topics. With 20 years of experience in the implementation and management of national and international funding projects, we are an experienced applicant and competent technology partner.
Court Expert Opinion

As sworn and court-certified experts, we prepare IT and (image) forensic reports.

Expert Opinions and Official Documents
We draw up deeds and expert opinions in accordance with the Civil Technician Act. Civil engineers are persons vested with public trust: public documents they issue within the scope of their authority are treated by the administrative authorities as if they had been issued by public authorities.
Feasibility Study
We assess your technical project or project idea from scientific and commercial perspectives in terms of feasibility, effort and novelty. Together with you, we specify your project.
Technical Statements
Has an objection been raised against your research tax credit claim? We analyse your activities and document your research work.
Presentations
We are happy to deliver a keynote at your workshop or corporate event. As university professors at the Vienna University of Technology, we are used to presenting and explaining computer science topics.
Publications

With more than two hundred scientific publications on our core topics over the last 25 years, we can demonstrate our expertise. We support you in your scientific dissemination, e.g. in a funding project …

Scientific Studies
The planning, implementation and coordination of scientific studies is one of our core areas and is derived from our research activities at the Vienna University of Technology.
Certification

As civil engineers, we draw up public deeds and, similar to notaries, are authorised to issue them as certificates. Certificates for GDPR compliance, for example, are becoming increasingly important.

Project 1: Fairness, Bias and Transparency of Algorithms

Data-based technologies aim to discriminate, classify and differentiate according to certain characteristics. However, a distinction must be made between desirable and unfair forms of discrimination.

Internal structures and decision-making mechanisms of an algorithm can also be unfair and/or non-transparent. This can concern the following areas, for example:

  • Decisions on creditworthiness or eligibility
  • Access to health services
  • Access to employment
  • Access to education (e.g. university admission)

Apart from the transparency principle under the GDPR, the transparency of systems is crucial for legal experts and software developers as well as for regular users. The latter benefit when the decisions of an algorithm can be traced, which creates more trust in the system.

The goals of the project* are:

  1. Creating basic knowledge: What is the benefit of Explainable AI (XAI) methods? Which definitions of transparency exist, and into which levels can it be subdivided? Analysis of methods for explaining algorithms.
  2. Development of a framework: design criteria for a fair and transparent algorithm, potential sources of error, metrics.
  3. Data from practice: What is the degree of transparency? Analysis of explainability, taking confidential information into account.

*Research project Fairalgos (FFG Bridge 878730) under the direction of Dr. Kampel at the TU Vienna.
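A fairness metric of the kind examined in goal 2 can be illustrated with a short sketch. Demographic parity difference is one common measure; all function names, variable names and data below are purely illustrative and not taken from the Fairalgos project:

```python
# Illustrative sketch: demographic parity difference, a common fairness
# metric comparing positive-decision rates across groups.

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between groups.

    decisions: list of 0/1 outcomes (e.g. 1 = credit approved)
    groups:    list of group labels, one per decision
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    lowest, highest = min(rates.values()), max(rates.values())
    return highest - lowest

# Example: group "A" is approved 3 times out of 4, group "B" once out of 4
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A value of 0 would mean both groups receive positive decisions at the same rate; the larger the value, the stronger the disparity.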


Project 2: Algorithmic Governance of Care

Care work in long-term care is a human-centred activity that requires empathy, physical closeness and trusting relationships between caregivers and care recipients. In recent years, artificial intelligence (AI) has increasingly been used in care systems to support professional caregivers in their daily activities and to provide care recipients with an additional level of safety. Despite the increasing prevalence of AI in care, few studies have addressed the bias of algorithmic systems in this field. By linking multiple Big Data sets, AI can set in motion unfair and non-transparent decision-making processes that lead to discriminatory care practices.

The aim of the project* is to investigate the potential bias of algorithm-driven care technologies and their impact on long-term care. Findings from qualitative case studies in long-term care provide the basis for a differentiated understanding of the effects and needs of care in relation to AI systems. Based on these findings, the project investigates the utility of XAI (explainable AI) methods (trustworthiness, fairness, transparency) and different levels of transparency in their applicability to care systems.

*Research project AlgoCare (WWTF project) under the direction of Dr. Kampel at the TU Vienna


Project 3: Trusted AI – Certification process

To ensure trustworthy AI, we conduct audits and assess AI applications and algorithms using professional checklists and guidelines. In the analysis, we distinguish between Pre-Processing, In-Processing and Post-Processing.

Pre-processing methods examine the training data with regard to bias. In-processing describes techniques that analyse the network architecture. The goal of post-processing is to understand the result of a learning algorithm, provided that the trained model is a black box and the training or learning algorithm cannot be changed. The transparency of the AI system is of particular importance.
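As an illustration of the pre-processing stage, the following is a minimal sketch of a training-data bias check; the function, field names and data are purely illustrative and not part of our audit tooling:

```python
# Illustrative pre-processing check: before training, inspect whether
# positive labels are distributed evenly across a protected attribute.

from collections import Counter

def label_balance(records, protected_key, label_key):
    """Return the positive-label rate per value of the protected attribute."""
    totals, positives = Counter(), Counter()
    for r in records:
        group = r[protected_key]
        totals[group] += 1
        positives[group] += r[label_key]
    return {g: positives[g] / totals[g] for g in totals}

training_data = [
    {"gender": "f", "label": 1}, {"gender": "f", "label": 0},
    {"gender": "m", "label": 1}, {"gender": "m", "label": 1},
]
print(label_balance(training_data, "gender", "label"))
# {'f': 0.5, 'm': 1.0} -> an imbalance worth investigating before training
```

A strong skew in these rates does not prove unfairness by itself, but it flags training data that warrants closer examination in the audit.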
The result is certified in a certificate attesting to the reliability, transparency and security of the AI application and its algorithms.

The basis of certification for trustworthy AI is the “Ethics Guidelines for Trustworthy AI” of the European Commission’s independent High-Level Expert Group on AI (AI HLEG). For the testing and certification of the reliability and trustworthiness of AI systems, reference is also made to the ethical framework “Malta Towards Trustworthy AI”, among others.


Team

Martin Kampel

Assoc. Prof. Dr. Martin Kampel

Engineering Consultant for Informatics

Dr. Kampel, ZT, is the founder and managing partner of app informatics zt GmbH. He is also a Senior Scientist at the Institute for Visual Computing & Human Centered Technology, Vienna University of Technology.

He studied data technology and computer science, obtained his doctorate with distinction and habilitated in practical computer science at the Faculty of Computer Science at the Vienna University of Technology. As a computer scientist at the interface between research and development, Dr Kampel specialises in interdisciplinary issues of practical computer science, especially visual computing and artificial intelligence, as well as ethics and digital transformation.

He is an international reviewer and examiner of scientific and commercial projects, an engineering consultant for computer science, and a sworn and court-certified expert for ICT.

Robert Sablatnig

Prof. Dr. Robert Sablatnig

Prof. Sablatnig is the founder and a shareholder of app informatics zt GmbH. He is also a member of the board of the Institute for Visual Computing & Human Centered Technology at the Vienna University of Technology, where he is active in research and teaching in the field of Computer Vision & AI.

He studied computer science with a focus on visual computing at the Vienna University of Technology, where he has been an associate professor for computer vision since 2003. From 2005 to 2017 he headed the Institute for Computer-Aided Automation. Since 2010 he has headed the Computer Vision Lab, which became part of the Institute for Visual Computing & Human Centered Technology founded in 2018, an institute he has led since 2019. By leading and coordinating more than 30 industry-relevant and academic research projects, he applies basic research in an application-oriented way. He is an international reviewer and examiner of scientific and commercial projects, is actively involved in the international research landscape, serves on the board of the Austrian Working Group for Pattern Recognition, represents Austria on the board of the International Working Group for Pattern Recognition, and is a sworn and court-certified expert in computer vision.

 

European Commission
ERSTE
FH Johanneum
KFV