Guides

AI Act 2026 for recruiting - high-risk checklist for German HR teams

From August 2026 the high-risk duties of the EU AI Act apply to recruiting too. Here's what that concretely means, what to do now, and what can wait until 2027.

AI Act
Compliance
Guide
Finn Glas
Co-Founder + Engineering
February 11, 2026
4 min read

Key takeaways

AI Act in force since August 2024; high-risk obligations apply from August 2, 2026.
Recruiting AI is explicitly high-risk (Annex III, point 4(a)).
Obligations apply regardless of company size - no SME exemption in the recruiting context.
Fines up to €35M or 7% of global annual turnover (Art. 99(3)).
Step by step

1. Inventory - which AI do you use?

List: ATS-embedded screening AI, external resume-parsing services, AI ad generators, conversational bots on the careers page, active-sourcing AI. Even 'we sometimes use ChatGPT for email drafts' qualifies if the emails relate to applications.


2. Add transparency to the application form

Before clicking submit, it must be clear that AI pre-sorting is used, that a human decides, and that the right to information applies. Example: 'Your application will be pre-sorted by AI (score + reasoning). A human always decides. You have the right to information, correction, and deletion at any time.'


3. Activate audit log - at least 6 months retention

Who changed what, and when. KI BMS does this by default; in other systems, check and activate it. Store: date, user, stage transition, AI score (if relevant), reasoning text (if AI-generated).
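The fields above can be sketched as a minimal append-only log. This is an illustrative sketch, not a KI BMS API; the function and field names are assumptions.

```python
from datetime import datetime, timezone

# Illustrative append-only audit log; in production this would be a
# database table, not an in-memory list.
audit_log = []

def log_stage_change(user, application_id, stage_from, stage_to,
                     ai_score=None, reasoning=None):
    """Record who changed what and when, plus the AI score if relevant."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "application_id": application_id,
        "stage_from": stage_from,
        "stage_to": stage_to,
        "ai_score": ai_score,      # only set when AI was involved
        "reasoning": reasoning,    # AI-generated reasoning text, if any
    }
    audit_log.append(entry)
    return entry

entry = log_stage_change("hr@example.com", "app-123",
                         "screening", "interview",
                         ai_score=72, reasoning="Strong match on must-haves")
```

One entry per stage transition keeps the 6-month retention question simple: delete rows older than the cutoff, nothing else.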


4. Make override functionality clear

HR must be able to override an AI score in one click; the system must not 'auto-reject below score 40'. If your current configuration auto-rejects: disable it until a human reviews each case.
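The no-auto-reject rule can be sketched as a single decision function, assuming a hypothetical pipeline where the AI score only proposes and a human decision always wins. Names and the threshold of 40 are illustrative.

```python
# Sketch: a low AI score flags the application for human review
# instead of rejecting it; an explicit human decision overrides everything.
def next_action(ai_score, human_decision=None):
    """Return the next pipeline step for an application."""
    if human_decision is not None:
        return human_decision        # human override always wins
    if ai_score < 40:
        return "flag_for_review"     # NOT "reject" - no auto-rejection
    return "advance_to_screening"

next_action(25)                                            # → "flag_for_review"
next_action(25, human_decision="advance_to_screening")     # → "advance_to_screening"
```

The design point: "reject" never appears as an automatic outcome; it can only enter via the `human_decision` parameter.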


5. Bias monitoring: check score distribution against gender / age / origin

At least quarterly: aggregate the AI scores of the last 50 applications and compare means across sensitive attributes (anonymised). If male-coded names show an advantage of 8+ points, you have a bias signal - revise the AI prompt.
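The quarterly check can be sketched in a few lines; the sample scores, group labels, and the 8-point threshold below are illustrative assumptions.

```python
from statistics import mean

def score_gaps(scores_by_group):
    """Gap between each group's mean AI score and the best group mean."""
    means = {g: mean(s) for g, s in scores_by_group.items()}
    best = max(means.values())
    return {g: round(best - m, 1) for g, m in means.items()}

# Anonymised sample: scores grouped by a sensitive attribute proxy
sample = {
    "group_a": [72, 68, 75, 70],   # e.g. male-coded names
    "group_b": [61, 64, 59, 63],   # e.g. female-coded names
}
gaps = score_gaps(sample)          # → {"group_a": 0.0, "group_b": 9.5}

# A gap of 8+ points is a bias signal worth investigating
flagged = [g for g, gap in gaps.items() if gap >= 8]
```

Run it on the last 50 applications per quarter; a flagged group means the AI prompt needs revising, not the candidates re-scoring by hand.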


6. Technical documentation - one page per role

Per role, record: which AI prompt was used, which requirements were mandatory / knockout, and what score threshold applies. A one-page doc per role is enough; retain it for 5 years after the role closes.
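As a sketch, the one-page record can live as a small structured object; the field names and example values are assumptions, not a format the AI Act prescribes.

```python
from datetime import date

# Illustrative one-page technical record for a single role
role_doc = {
    "role": "Backend Engineer (m/f/d)",
    "closed_on": date(2026, 3, 31).isoformat(),
    "ai_prompt_version": "screening-prompt-v3",
    "knockout_requirements": ["work permit for Germany", "3+ years Python"],
    "score_threshold": 40,  # below this: flag for human review
    "retain_until": date(2031, 3, 31).isoformat(),  # 5 years after closure
}
```

Keeping it as structured data rather than free text makes the 5-year retention and later audits trivially searchable.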


7. Check the DPA with the AI vendor

The AI vendor (e.g. us, if you use KI BMS) acts as processor. A DPA under Art. 28 GDPR must be signed, and the vendor must provide the technical documentation the AI Act requires (Annex IV). Check: hosting region, subprocessor list, audit rights, deletion at contract end.


8. Train HR staff

Everyone working with AI should understand once: what the AI does, what it doesn't, how to override it, and what a bias signal looks like. A 30-minute training with practice applications is enough. Document the training date.

What the AI Act concretely requires in recruiting

The EU AI Act classifies AI systems by risk tier. Recruiting AI falls under Annex III (high-risk), point 4(a): 'AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates'. That is exactly what a modern ATS with AI pre-sorting does.

High-risk duties under Art. 8-15: risk management system, data governance, technical documentation, logging, user-facing transparency, human oversight, and accuracy, robustness and cybersecurity. That sounds heavy; in practice it boils down to the eight concrete actions above.

Mandatory from Aug 2, 2026: user-facing transparency, logging duty (audit log), human oversight, bias monitoring. Strongly recommended: a documented external risk assessment, bias tests per role, and a data protection impact assessment per role.

Using AI in recruiting while ignoring these duties risks: (1) fines of up to 7% of turnover; (2) discrimination claims with an eased burden of proof (the AGG lowers the bar when auto-sorting is documented); (3) reputational damage when candidates learn of the AI involvement only after the fact.

What 'human oversight' really means

Three aspects. One: the human must be able to override an automated decision - auto-rejection without an override is prohibited. Two: the human must be able to understand the automated decision - black-box AI without per-application reasoning is risky. Three: the human must have the authority to decide against an automated decision. If HR can override de jure but de facto never does because of time pressure, oversight is nominal, not real.



Try KI BMS

Free plan, no credit card. We host in Germany. You can export and delete everything self-serve.

Written by

Finn Glas

Co-Founder + Engineering

Finn is one of the Co-Founders. He owns the engineering side, the infrastructure, and most of the late-night fixes that ship before anyone notices.

finn.glas at aicuflow dot com · LinkedIn · Website