AI in hiring: the 2026 line between helpful and creepy
Resume parsing, interview scoring, cohort matching — what AI is good at, what it isn't, and where to draw the line.
There is a version of AI-assisted hiring that helps both sides. It speeds up the boring parts of recruiting: parsing a resume, normalising titles across companies, finding the people in your existing pipeline who match a brand-new role. It surfaces qualified candidates who would have been buried under hundreds of other resumes. It gives candidates faster answers and less ghosting.
There is also a version that scrapes a candidate's public social media for "vibes," scores their face on a video interview, and silently rejects them because a sentence in their cover letter sounded like an LLM. Same technology, different choices. The line between the two is mostly about consent and disclosure.
What AI is genuinely good at, in 2026
- Structured extraction from unstructured documents. A modern LLM can pull experience, skills, and education from a resume better than the regex-and-template tools that ATSes have relied on for twenty years.
- Normalisation. "Senior SWE", "Senior Software Engineer", and "SE3" are the same role. AI can collapse those without a 50,000-row mapping table.
- Cohort matching. Given a job description, find the people in your pipeline whose backgrounds genuinely match (a rough sketch follows this list). Boring and useful.
- Translation. Job descriptions written in industry jargon are hostile to candidates from other industries. AI can produce a candidate-facing rewrite that doesn't lose meaning.
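To make the cohort-matching point concrete, here is a minimal sketch of the idea: embed the job description and each candidate profile, then rank by cosine similarity. The `embed()` function below is a hashing placeholder standing in for a real embedding model, and the pipeline data is made up; this is an illustration of the shape of the thing, not a production matcher.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real sentence-embedding model: hashed character
    trigrams in a fixed-size vector, so the sketch runs with no API calls."""
    vec = np.zeros(256)
    lowered = text.lower()
    for i in range(len(lowered) - 2):
        vec[hash(lowered[i:i + 3]) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def rank_pipeline(job_description: str, candidates: dict[str, str]) -> list[tuple[str, float]]:
    """Rank existing candidates by cosine similarity to a new job description."""
    jd_vec = embed(job_description)
    scored = [(name, float(embed(profile) @ jd_vec)) for name, profile in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical pipeline: the profiles would normally come from your ATS.
pipeline = {
    "Candidate A": "Senior Software Engineer, payments infrastructure, Go and Postgres",
    "Candidate B": "Marketing manager, B2B SaaS, lifecycle campaigns",
    "Candidate C": "SE3, distributed systems, ledger and reconciliation services",
}
for name, score in rank_pipeline("Staff engineer for a payments ledger team", pipeline):
    print(f"{score:.2f}  {name}")
```

Swap the placeholder for any real embedding model and the logic doesn't change, which is exactly why this is the boring, useful end of the spectrum.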
What AI is bad at, still
- Predicting whether someone will be a good employee. The best models can't do this. Don't let anyone tell you their model can.
- Scoring video interviews on "personality fit." This is pseudoscience and at least three lawsuits in the US have made it expensive pseudoscience.
- Reasoning about gaps in a resume. The gap itself tells a model nothing: a pause for caregiving, illness, layoff recovery, or a sabbatical all look the same on the page.
- Catching its own bias. If your training data underweights candidates from regional schools, your model will too. Audits aren't optional; a minimal check is sketched after this list.
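One concrete form of audit is an adverse-impact check: compare the selection rate of each group to the highest group's rate and flag ratios below the four-fifths threshold that US selection guidelines use as a rule of thumb. The group labels and counts below are hypothetical; the point is only how little code a basic check requires.

```python
from collections import Counter

def adverse_impact(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group divided by the highest group's rate
    (the 'four-fifths rule' ratio used in adverse-impact analysis)."""
    totals, selected = Counter(), Counter()
    for group, advanced in outcomes:
        totals[group] += 1
        if advanced:
            selected[group] += 1
    rates = {group: selected[group] / totals[group] for group in totals}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Each tuple: (group label, did the screening model advance this candidate?)
screening = (
    [("regional school", True)] * 12 + [("regional school", False)] * 38
    + [("target school", True)] * 30 + [("target school", False)] * 20
)
for group, ratio in adverse_impact(screening).items():
    flag = "  <-- below 0.8, investigate" if ratio < 0.8 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

A ratio below 0.8 isn't proof of discrimination, but it is exactly the kind of signal a team should be forced to look at before the model rejects anyone on its own.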
A simple test
When you're evaluating an AI-assisted hiring tool, ask whether the candidate could see what the system saw and infer how it ranked them. If the answer is yes, you're probably on the helpful side of the line. If the answer is "we couldn't explain it if we tried" — that's the creepy side. The technology hasn't changed; the willingness to say what you're doing has.