Data model

Evaluations

A scorecard tied to an application (optionally pinned to a specific interview). It carries score dimensions (skills / culture / communication) on a 1-5 scale, a recommendation (strong yes → strong no), and a note. There can be multiple evaluations per application, but only one per interviewer.

Model name: evaluation
Endpoints: 5
Max page size: 200

Fields

Per-field validation rules. Values that violate any constraint are rejected with HTTP 400 before they reach the database.

| Field | Type | Constraints |
| --- | --- | --- |
| summary | string | max length 4000 |
| concerns | string | max length 4000 |
| highlights | string | max length 4000 |
| interview_id | string | max length 64; ref → interview |
| skills_score | number | - |
| culture_score | number | - |
| overall_score | number | - |
| application_id | string | max length 64; ref → application |
| interviewer_id | string | max length 64 |
| recommendation | enum | strong_yes \| yes \| neutral \| no \| strong_no |
| potential_score | number | - |
| communication_score | number | - |
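Because values violating any constraint are rejected with a 400 before reaching the database, it can be cheaper to catch violations client-side first. A minimal Python sketch of the rules in the table above (the field names and limits come from the table; the helper itself is hypothetical, not part of the API):

```python
# Constraints from the fields table: string max lengths and the
# recommendation enum. Number fields have no documented constraints.
MAX_LEN = {"summary": 4000, "concerns": 4000, "highlights": 4000,
           "interview_id": 64, "application_id": 64, "interviewer_id": 64}
RECOMMENDATIONS = {"strong_yes", "yes", "neutral", "no", "strong_no"}

def validate_evaluation(body: dict) -> list:
    """Return a list of constraint violations (empty means valid)."""
    errors = []
    for field, limit in MAX_LEN.items():
        value = body.get(field)
        if value is not None and len(value) > limit:
            errors.append(f"{field}: exceeds max length {limit}")
    rec = body.get("recommendation")
    if rec is not None and rec not in RECOMMENDATIONS:
        errors.append(f"recommendation: must be one of {sorted(RECOMMENDATIONS)}")
    return errors
```

A passing body returns an empty list; each violation produces one human-readable entry.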

Mutability

Which fields can you send, and when? Anything without a marker is server-managed - sending it isn't an error, it's silently ignored.

Create-only: read from the POST body only. Patchable: read from the PATCH body. Server-managed: ignored on the body.

| Field | Create | Patch |
| --- | --- | --- |
| summary | | |
| concerns | | |
| highlights | | |
| interview_id | | |
| skills_score | | |
| culture_score | | |
| overall_score | | |
| application_id | | |
| interviewer_id | | |
| recommendation | | |
| potential_score | | |
| communication_score | | |

Fields marked create-only but not patchable are immutable after creation. Server-managed fields include id, timestamps, ownership, and status.

Filtering & sorting

Combinable on list endpoints. Repeating a filter key produces an IN clause; prefixing a sort key with `-` reverses direction. Example: `?status=open&status=blocked&sort=-created_at`.

Filter keys

| Filter key | Maps to |
| --- | --- |
| application_id | data__application_id |
| interview_id | data__interview_id |
| interviewer_id | data__interviewer_id |
| recommendation | data__recommendation |
| status | status |
| is_archived | is_archived |
| owned_by | owned_by |
| created_by | created_by |

Sort keys

| Sort key | Maps to |
| --- | --- |
| created_at | created_at |
| overall_score | data__overall_score |

Default sort: `created_at`
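The filter and sort semantics above can be sketched as a small query-string builder in Python. This is a hypothetical client helper, not part of the API; it only relies on the documented behavior that a repeated key produces an IN clause:

```python
from urllib.parse import urlencode

def build_query(filters, sort=None):
    """Build a list-endpoint query string.

    Each filter value may be a single string or a list of strings;
    lists are emitted as repeated keys (the server turns them into
    an IN clause). `sort` is passed through, e.g. "-created_at".
    """
    pairs = []
    for key, value in filters.items():
        values = value if isinstance(value, list) else [value]
        pairs.extend((key, v) for v in values)
    if sort:
        pairs.append(("sort", sort))
    return urlencode(pairs)
```

For instance, `build_query({"status": ["open", "blocked"]}, sort="-created_at")` reproduces the example query string above.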

Endpoints

Each endpoint below lists its HTTP method, path, and the PAT scope it needs. Code samples cover curl, JavaScript, TypeScript, Python, Rust, Java, and WebSocket.

`GET /xapi2/data/evaluation` (scope `evaluation:list`)

List objects

Returns a paginated list of objects you can read. Default page size is 20; pass `?limit=` to change it (capped per type; 200 for evaluations). Use `?after=<id>` for keyset pagination on created_at-sorted lists, or `?offset=` for offset paging.

```shell
curl -H "Authorization: Bearer pat_…" \
  "https://www.ki-bewerber-management.de/xapi2/data/evaluation?limit=20"
```
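The keyset pagination described above can be sketched in Python. The `fetch_page` callable is a stand-in for the HTTP GET (an assumption for illustration); the sketch only relies on the documented `?after=<id>` semantics and on each object carrying a server-managed `id`:

```python
def paginate(fetch_page, limit=20):
    """Iterate all objects via keyset pagination on a created_at-sorted list.

    `fetch_page(limit, after)` stands in for
    GET /xapi2/data/evaluation?limit=...&after=... and returns a list.
    A short page signals the end of the collection.
    """
    after = None
    while True:
        page = fetch_page(limit=limit, after=after)
        yield from page
        if len(page) < limit:
            break
        after = page[-1]["id"]  # cursor for the next request

# Stub fetcher simulating 5 stored objects, for demonstration:
items = [{"id": f"ev_{i}"} for i in range(5)]

def fake_fetch(limit, after):
    start = 0 if after is None else next(
        i for i, o in enumerate(items) if o["id"] == after) + 1
    return items[start:start + limit]
```

With `limit=2`, `paginate(fake_fetch, limit=2)` walks all five stub objects across three requests.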
`GET /xapi2/data/evaluation/{id}` (scope `evaluation:read`)

Read one

Returns the object by id. 404 if it does not exist or you cannot read it (the two cases are intentionally conflated).

```shell
curl -H "Authorization: Bearer pat_…" \
  https://www.ki-bewerber-management.de/xapi2/data/evaluation/OBJECT_ID
```
`POST /xapi2/data/evaluation` (scope `evaluation:create`)

Create

Creates a new object. Body is a flat JSON dict of field values. Server-side fields (id, timestamps, ownership) are filled automatically; only fields listed below as creatable are read from the body.

```shell
curl -H "Authorization: Bearer pat_…" \
  -H "Content-Type: application/json" \
  -X POST https://www.ki-bewerber-management.de/xapi2/data/evaluation \
  -d '{"summary": "…"}'
```
`PATCH /xapi2/data/evaluation/{id}` (scope `evaluation:update`)

Update

Partial update. Only fields included in the body are touched; everything else is preserved. Same allow-list as create, minus the fields that are immutable post-create.

```shell
curl -H "Authorization: Bearer pat_…" \
  -H "Content-Type: application/json" \
  -X PATCH https://www.ki-bewerber-management.de/xapi2/data/evaluation/OBJECT_ID \
  -d '{"summary": "…"}'
```
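Because PATCH touches only the fields present in the body, a thin helper that drops locally-unset fields can prevent accidental overwrites. A sketch (hypothetical helper; it assumes you use `None` locally to mean "leave untouched" — how the API treats an explicit JSON null is not specified here):

```python
def patch_body(**changes):
    """Build a PATCH body containing only the fields you intend to change.

    Fields omitted from the body are preserved server-side, so dropping
    None-valued entries avoids accidentally blanking a field.
    """
    return {k: v for k, v in changes.items() if v is not None}
```

For example, `patch_body(summary="Updated", concerns=None)` produces a body that updates `summary` and leaves `concerns` untouched.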
`DELETE /xapi2/data/evaluation/{id}` (scope `evaluation:delete`)

Delete

Removes the object. It vanishes from every default list immediately and stops being returned by read / list.

```shell
curl -H "Authorization: Bearer pat_…" \
  -X DELETE https://www.ki-bewerber-management.de/xapi2/data/evaluation/OBJECT_ID
```

Use in CLI

The same endpoints are also exposed via the KI BMS CLI. For scripts, CI, and bulk imports it's usually the faster path.

```shell
atscli evaluation list --limit 5
atscli evaluation get <id>
atscli evaluation create --application-id "Hello"
atscli evaluation upsert --unique application_id --csv items.csv
atscli evaluation schema   # fields & limits
```

Full command reference, profiles, CSV import, auto-retry, NDJSON streaming → /docs/cli