James Duffy

Interviewing in the age of AI

It is time to embrace AI in interviews. The fear should not be that ‘AI will take our jobs,’ but that ‘people who know how to use AI effectively will take the jobs of those who don’t.’ This shift in perspective must drive a fundamental change in how we interview and assess talent.

Stop worrying about candidates cheating with AI. A candidate who uses AI as a crutch will fail faster in an AI-enabled interview than in a traditional one, provided you change how you interview.

The most effective way to interview now is to treat the AI as a confident but incompetent junior engineer. The candidate’s job is not to write the code; it is to save production from what the AI just wrote.

# The Setup: Forced Generation

Don’t just allow AI; mandate it. Start the session by asking the candidate to prompt ChatGPT or Copilot to generate a solution to a non-trivial infrastructure problem. For example:

```
Generate a Terraform configuration for a Private AKS cluster that uses a custom Private DNS Zone for name resolution.
```

The AI will generate code that is unfit for production.

# The Real Test: The Audit

Once the code is generated, do not correct it. Do not hint that it is broken. Sit on your hands and ask a single, neutral question:

“Walk me through this configuration. Is it ready to apply?”

Now the interview actually begins. You are not watching them type; you are watching them review. You are looking for three specific failures.

# 1. The “Happy Path” Fallacy (Operational Failure)

AI models generate resources in isolation. They rarely understand the “glue” that makes those resources actually work together (see the sketch below).

What breaks: The Terraform plan passes, but the infrastructure is a brick.

The test: The candidate accepts the code without checking for the integration points.

  • Example: Did the AI create a Private DNS Zone but forget to link it to the VNET?
  • Example: Did it create a Managed Identity but forget the Role Assignment required to actually read the registry or modify DNS?
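Here is what that missing glue looks like in Terraform. This is a minimal sketch, not a full configuration; references such as azurerm_resource_group.example, azurerm_virtual_network.example, and azurerm_user_assigned_identity.aks are assumed to exist elsewhere:

```hcl
# The AI reliably generates the zone itself...
resource "azurerm_private_dns_zone" "aks" {
  name                = "privatelink.eastus.azmk8s.io"
  resource_group_name = azurerm_resource_group.example.name
}

# ...but often forgets the link that lets the VNET resolve against it.
resource "azurerm_private_dns_zone_virtual_network_link" "aks" {
  name                  = "aks-dns-link"
  resource_group_name   = azurerm_resource_group.example.name
  private_dns_zone_name = azurerm_private_dns_zone.aks.name
  virtual_network_id    = azurerm_virtual_network.example.id
}

# Likewise, the Managed Identity needs an explicit Role Assignment
# before it can write records into the zone.
resource "azurerm_role_assignment" "dns" {
  scope                = azurerm_private_dns_zone.aks.id
  role_definition_name = "Private DNS Zone Contributor"
  principal_id         = azurerm_user_assigned_identity.aks.principal_id
}
```

Remove the second and third resources and the plan still applies cleanly; the cluster just never resolves a name.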

# 2. The “Default” Trap (Security Failure)

AI optimizes for “first-run success,” which usually means “zero security.” It prioritizes wide-open permissions to avoid access errors (see the sketch below).

What breaks: The code is functional but dangerous.

The test: The candidate blindly trusts the defaults.

  • Example: Accepting 0.0.0.0/0 on a security group or firewall rule.
  • Example: Leaving public_network_access_enabled = true on a database that should be private.
  • Example: Using the platform’s default keys instead of a Customer-Managed Key (CMK) for encryption.
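Here are two of those traps in one sketch. The resources are illustrative (the names and var.sql_admin_password are assumptions), but the red flags are exactly what the candidate should call out:

```hcl
# Red flag 1: an inbound rule the AI wrote to “make SSH work.”
resource "azurerm_network_security_rule" "ssh" {
  name                        = "allow-ssh"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "0.0.0.0/0" # should be a bastion or VPN CIDR
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.example.name
  network_security_group_name = azurerm_network_security_group.example.name
}

# Red flag 2: a database reachable from the public internet.
resource "azurerm_mssql_server" "db" {
  name                          = "example-sql"
  resource_group_name           = azurerm_resource_group.example.name
  location                      = azurerm_resource_group.example.location
  version                       = "12.0"
  administrator_login           = "sqladmin"
  administrator_login_password  = var.sql_admin_password
  public_network_access_enabled = true # AI default; should be false
}
```

This code works on the first apply, which is precisely why the AI produced it.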

# 3. The “Time Traveler” (Maintenance Failure)

AI training data is historical. It loves deprecated patterns and old versions that “work” in a tutorial from 2021 but create immediate technical debt in 2026 (see the sketch below).

What breaks: The code deploys legacy infrastructure that you will have to rewrite next month.

The test: The candidate doesn’t check the provider documentation.

  • Example: Using deprecated arguments (e.g., azurerm_kubernetes_cluster_node_pool arguments that were removed in v3.0 of the provider).
  • Example: Configuring “Pod Identity” instead of “Workload Identity.”
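For contrast, here is a sketch of the cluster shape the candidate should steer toward: Workload Identity via the OIDC issuer instead of the deprecated Pod Identity add-on. It assumes azurerm v3.x or later, and the names are illustrative:

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "example"

  # The current pattern: federated Workload Identity,
  # not the deprecated AAD Pod Identity add-on.
  oidc_issuer_enabled       = true
  workload_identity_enabled = true

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3"
    zones      = ["1", "2", "3"] # "availability_zones" was removed in v3.0
  }

  identity {
    type = "SystemAssigned"
  }
}
```

A candidate who knows the ecosystem flags the Pod Identity suggestion immediately; one leaning on the AI ships 2021’s architecture.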

# Wrapping it up

This format exposes the “crutch” user immediately. They struggle to explain the logic behind the generated syntax and lack the mental model to debug the AI’s mistakes.

In this mode we are no longer testing for syntax recall; we are testing for governance and genuine AI competency.