Tag: Interviews

  • Anthropic on AI-Resistant Interviews

    It’s ironic that Anthropic is working to keep AI from beating its own interviews, but aren’t we all? Here is a list of key principles from “Designing AI-resistant technical evaluations”.

    1. Test the process, not just the output

    Many tasks can be solved well by AI, so the final answer alone is no longer a reliable signal. Evaluations should:

    • Capture intermediate reasoning steps
    • Require explanations of trade‑offs
    • Examine decision‑making, not just correctness (a rubric sketch follows this list)
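
    As a rough illustration, here is what a process-weighted rubric might look like in Python. The dimensions, weights, and names are invented assumptions for the sketch, not Anthropic’s actual rubric:

    ```python
    from dataclasses import dataclass

    @dataclass
    class RubricDimension:
        name: str
        weight: float  # share of the total score
        description: str

    # Hypothetical rubric: the final answer is deliberately a minority of the score.
    PROCESS_RUBRIC = [
        RubricDimension("final_answer", 0.25, "Is the solution correct and complete?"),
        RubricDimension("reasoning_steps", 0.30, "Did the candidate surface intermediate reasoning?"),
        RubricDimension("tradeoff_analysis", 0.25, "Were alternatives and trade-offs explained?"),
        RubricDimension("decision_quality", 0.20, "Were key decisions justified, not just correct?"),
    ]

    def score(ratings: dict[str, float]) -> float:
        """Weighted total from per-dimension ratings in [0, 1]."""
        return sum(d.weight * ratings[d.name] for d in PROCESS_RUBRIC)

    # A candidate with a weak final answer but strong process still lands at about 0.66.
    print(score({"final_answer": 0.2, "reasoning_steps": 0.9,
                 "tradeoff_analysis": 0.8, "decision_quality": 0.7}))
    ```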

    2. Use tasks that require contextual judgment

    AI is good at pattern‑matching and known problem types. It struggles more with:

    • Ambiguous requirements
    • Real‑world constraints
    • Messy or incomplete information
    • Prioritization under uncertainty

    Evaluations should lean into these, as in the sketch below.
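
    To make that concrete, an evaluation task could bake the mess in on purpose. A minimal sketch of such a task brief, with an entirely invented scenario:

    ```python
    # Hypothetical task brief that leans into ambiguity and real-world constraints.
    TASK_BRIEF = {
        "scenario": "A nightly batch job 'sometimes' misses its SLA; logs are incomplete.",
        "givens": [
            "Partial metrics for 3 of the last 30 runs",
            "A vague bug report from one internal user",
        ],
        "constraints": [
            "No budget for new infrastructure this quarter",
            "The on-call rotation is already overloaded",
        ],
        "ask": "Propose and prioritize next steps; say what you would do first and why.",
        # Deliberately no single right answer: graders score the candidate's
        # clarifying questions and prioritization rationale, not a solution key.
    }
    ```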

    3. Incorporate novel or unseen problem types

    If a task is widely available online, an AI model has probably been trained on it. Stronger evaluations:

    • Use fresh, unpublished tasks
    • Introduce domain‑specific constraints
    • Require synthesis across multiple knowledge areas (one templating approach is sketched below)
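
    One practical pattern here is to generate each candidate a fresh variant from a parameterized template, so no fixed version of the task exists online. A minimal sketch, with invented domains and constraints:

    ```python
    import random

    DOMAINS = ["healthcare claims", "fleet telemetry", "ad auction logs"]
    CONSTRAINTS = [
        "data arrives out of order",
        "the upstream schema changes weekly",
        "PII must never leave the region",
    ]

    def fresh_task(seed: int) -> str:
        """Sample an unpublished task variant; answering well requires
        synthesis across the sampled domain and constraints."""
        rng = random.Random(seed)
        domain = rng.choice(DOMAINS)
        a, b = rng.sample(CONSTRAINTS, k=2)
        return (
            f"Design an ingestion pipeline for {domain} where {a} and {b}. "
            "Sketch the architecture and call out which trade-offs depend on "
            "facts you would need to verify."
        )

    print(fresh_task(seed=42))
    ```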

    4. Look for human‑specific signals

    Anthropic highlights qualities that AI still struggles to fake:

    • Personal experience
    • Tacit knowledge
    • Real‑time collaboration
    • Values‑driven reasoning
    • Long‑horizon planning with incomplete data

    Evaluations can intentionally probe these.

    5. Design for partial AI assistance

    Instead of pretending AI doesn’t exist, assume candidates will use it. Good evaluations:

    • Allow AI for some steps
    • Restrict AI for others
    • Measure how well a person integrates AI into their workflow (see the policy sketch below)

    This mirrors real‑world work more accurately than a blanket ban would.
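
    A minimal sketch of how per-step AI policies might be encoded so graders can score AI integration separately; the stage names and policies are assumptions, not a published format:

    ```python
    from enum import Enum

    class AIPolicy(Enum):
        ALLOWED = "allowed"        # candidate may use AI freely
        RESTRICTED = "restricted"  # AI off; tests the candidate's own judgment
        REQUIRED = "required"      # candidate must drive AI and vet its output

    # Hypothetical stage plan for a single evaluation.
    STAGES = {
        "scaffolding": AIPolicy.ALLOWED,        # boilerplate is fair game
        "design_review": AIPolicy.RESTRICTED,   # unaided trade-off reasoning
        "refactor_with_ai": AIPolicy.REQUIRED,  # scored on prompting and verification
    }

    for stage, policy in STAGES.items():
        print(f"{stage}: AI {policy.value}")
    ```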