Using AI tools ethically in interviews
When AI assistance is fine, when it is a gray area, and when it crosses a line. A practical framework for candidates and a note for hiring teams.
AI interview assistance sits in the same category as Grammarly, calculators, IDEs, and search engines: tools that augment human ability. The ethical question is not whether to use them, but where the line is. This post lays out a framework.
**Clearly fine.** Using AI to prepare before an interview — practicing answers, researching the company, formatting your resume. Nobody disputes this.
**Also clearly fine.** Using AI as a real-time accessibility tool. Live transcription helps candidates who are hard of hearing or who interview in a non-native language. Drafting tools help candidates with dyslexia or social anxiety formulate answers under pressure.
**Gray area.** Using AI to draft full answers during a live conversation. The candidate is still saying the words, still engaging with the role, still showing up. But the substance came from a model, not from recall. Whether this is acceptable depends on what the interviewer is actually testing for: fluency under pressure, or the reasoning behind the answer.
**Crosses a line.** Misrepresenting credentials. Lying about experience. Submitting AI-generated take-home projects without disclosure when the assignment was meant to evaluate your unaided ability. These were unethical before AI and remain so.
For candidates: the cleanest position is to treat AI as a second brain, useful for memory, formatting, and unfamiliar terminology, but not a substitute for the knowledge and judgment the role actually requires.
For hiring teams: if you want to evaluate a candidate's unassisted ability, design the interview to do that — supervised technical assessments, on-site whiteboarding, structured behavioral interviews with follow-up probes. AI assistance is a fact of the modern workplace; pretending otherwise leads to hiring decisions based on whether the candidate happened to be using a tool, not on the quality of their work.