Interview Design and Question Banks
Design structured interview processes with role-specific behavioral questions. Build question banks that reduce bias and actually predict job performance.
The Unstructured Interview Problem
In the previous lesson, we explored candidate screening and evaluation. Now let’s build on that foundation. Most interviews are friendly conversations that feel productive but reveal almost nothing about whether a candidate will succeed in the role.
Research from decades of organizational psychology is clear: unstructured interviews are one of the least predictive hiring methods. They’re heavily influenced by first impressions, similarity bias, and how charismatic the candidate happens to be on that particular day.
Structured interviews, by contrast, are among the most predictive tools available. The difference isn’t about being robotic – it’s about being consistent.
What You’ll Learn
By the end of this lesson, you’ll design structured interview processes with role-specific behavioral questions, build question banks you can reuse across similar roles, and create interviewer guides that ensure fair, consistent evaluation.
From Screening to Interviews
In Lesson 3, you built screening rubrics based on specific evaluation dimensions. Those same dimensions now drive your interview design. The competencies you screened for on paper are the competencies you’ll probe for in person. This creates a coherent hiring process where each stage builds on the last.
Structured Interview Basics
A structured interview has three components:
1. Consistent questions. Every candidate for the same role gets asked the same core questions. You can follow up differently based on their answers, but the starting point is identical.
2. Competency focus. Each question targets a specific competency you’re evaluating. No fishing expeditions.
3. Scoring rubric. Before the interview, you’ve defined what strong, adequate, and weak answers look like for each question.
This doesn’t mean reading from a script in a monotone voice. You still build rapport, have natural conversation, and follow interesting threads. The structure is the backbone, not a straitjacket.
Designing Behavioral Questions
Behavioral questions ask candidates to describe specific past experiences. The format is: “Tell me about a time when you [situation relevant to the role].”
Why behavioral over hypothetical? Because anyone can describe what they would do. What someone actually did, in real situations with real constraints, is far more revealing.
The STAR framework for writing questions:
- Situation: What context should the candidate describe?
- Task: What was their specific responsibility?
- Action: What did they personally do?
- Result: What was the outcome?
Good behavioral questions naturally prompt all four elements.
Building Questions with AI
Here’s a prompt template:
Generate 5 behavioral interview questions for a [Job Title] role,
focusing on the competency of [specific competency].
For each question, include:
1. The main question (behavioral, starting with
"Tell me about a time...")
2. 2-3 follow-up probes to get depth
3. What a strong answer includes (specific indicators)
4. What a weak answer looks like (red flags)
5. The STAR elements you're listening for
Context about the role:
- [Key challenge 1]
- [Key challenge 2]
- [Team structure]
Example output for “Conflict Resolution” in a Project Manager role:
Main question: “Tell me about a time when two stakeholders on a project had conflicting priorities, and you had to find a path forward.”
Follow-up probes:
- “How did you identify the root cause of the disagreement?”
- “What specific steps did you take to align them?”
- “What would you do differently in hindsight?”
Strong answer indicators:
- Describes a specific, real situation (not hypothetical)
- Identifies the root cause of the conflict, not just the symptoms
- Took direct action rather than escalating immediately
- Achieved a resolution that addressed both parties’ core concerns
- Reflects on what they learned
Weak answer indicators:
- Stays vague (“I usually just talk to both sides”)
- Takes credit for resolution without explaining their specific actions
- Can’t describe the outcome
- Blames one party entirely
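If you hire for similar roles repeatedly, it is worth filling this template programmatically instead of retyping it for each requisition. Below is a minimal sketch in Python; the role details are hypothetical placeholders, and the output is meant to be pasted into whichever AI assistant you use:

```python
# Fill the behavioral-question prompt template for a given role and competency.
# The role, competency, and context values below are hypothetical placeholders.

PROMPT_TEMPLATE = """Generate 5 behavioral interview questions for a {job_title} role,
focusing on the competency of {competency}.

For each question, include:
1. The main question (behavioral, starting with "Tell me about a time...")
2. 2-3 follow-up probes to get depth
3. What a strong answer includes (specific indicators)
4. What a weak answer looks like (red flags)
5. The STAR elements you're listening for

Context about the role:
{context}
"""

def build_prompt(job_title: str, competency: str, context_points: list[str]) -> str:
    """Return a ready-to-paste prompt for one role/competency pair."""
    context = "\n".join(f"- {point}" for point in context_points)
    return PROMPT_TEMPLATE.format(job_title=job_title, competency=competency, context=context)

print(build_prompt(
    job_title="Project Manager",
    competency="Conflict Resolution",
    context_points=[
        "Coordinates work across engineering and marketing",
        "Frequent scope changes from executive stakeholders",
        "Team of 8, mostly remote",
    ],
))
```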
Quick Check
Think about the last interview you conducted. Did you ask every candidate the same core questions? Did you have a scoring rubric? If not, your interviews were likely more influenced by rapport and gut feeling than by actual competency evaluation.
Designing the Interview Process
For most roles, you need 3-4 interview stages, each with a specific purpose:
| Stage | Format | Focus | Duration |
|---|---|---|---|
| Phone screen | 1-on-1, phone/video | Basic fit, motivation, communication | 20-30 min |
| Technical/skills | 1-on-1 or panel | Role-specific competencies | 45-60 min |
| Behavioral | 1-on-1 or panel | Collaboration, problem-solving, culture | 45-60 min |
| Final/team | Meet the team | Culture add, team dynamics | 30-45 min |
Key principle: Each stage should evaluate different competencies. Don’t ask about conflict resolution in three separate interviews. Assign competencies to specific rounds.
Example competency mapping for a Customer Success Manager:
| Competency | Assessed In |
|---|---|
| Communication quality | Phone screen |
| Client relationship management | Technical/skills round |
| Problem-solving under pressure | Behavioral round |
| Empathy and emotional intelligence | Behavioral round |
| Product/technical aptitude | Technical/skills round |
| Team collaboration | Final/team round |
Building Reusable Question Banks
Instead of creating new questions for every requisition, build a question bank organized by competency. Then mix and match based on the role.
Use AI to generate the bank:
Create an interview question bank organized by competency.
Include 4 behavioral questions for each of these competencies:
1. Leadership and influence
2. Problem-solving and analytical thinking
3. Communication and stakeholder management
4. Adaptability and learning agility
5. Collaboration and teamwork
For each question, include:
- The behavioral question
- 2 follow-up probes
- 3-sentence description of what "strong" looks like
- 1-sentence red flag to watch for
Over time, you’ll refine this bank based on which questions actually produce the most useful signal. Some questions will consistently reveal differences between strong and average candidates. Keep those. Ditch the ones where everyone gives similar answers.
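The bank is easier to reuse if it lives in structured data rather than a document, so you can pull a round's questions by competency. Here is a minimal sketch in Python; the competencies, questions, and notes are hypothetical examples, not a recommended bank:

```python
# A question bank keyed by competency. Each entry keeps the question together
# with its probes and scoring notes so interviewers get the full package.
QUESTION_BANK = {
    "Conflict Resolution": [
        {
            "question": "Tell me about a time when two stakeholders had conflicting priorities.",
            "probes": ["How did you identify the root cause?", "What would you do differently?"],
            "strong": "Specific situation, root cause identified, resolution addressed both parties.",
            "red_flag": "Stays vague or blames one party entirely.",
        },
    ],
    "Adaptability": [
        {
            "question": "Tell me about a time a project's direction changed late and you had to adjust.",
            "probes": ["What did you deprioritize?", "How did you communicate the change?"],
            "strong": "Concrete trade-offs, clear communication, measurable outcome.",
            "red_flag": "Describes frustration but no action.",
        },
    ],
}

def questions_for_round(competencies: list[str], per_competency: int = 1) -> list[dict]:
    """Pull questions for the competencies assigned to one interview round."""
    selected = []
    for competency in competencies:
        selected.extend(QUESTION_BANK.get(competency, [])[:per_competency])
    return selected

# Example: assemble the behavioral round for a Customer Success Manager.
for q in questions_for_round(["Conflict Resolution", "Adaptability"]):
    print(q["question"])
```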
The Interviewer Guide
Interviewers need more than a list of questions. Create a one-page guide for each interview round:
Create an interviewer guide for the [round name] interview
for a [Job Title].
Include:
- The 2-3 competencies this round assesses
- 3-4 core questions with follow-up probes
- A scoring rubric (1-4 scale with behavioral anchors)
- Do's and don'ts for the interviewer
- How to document feedback (specific format)
- Time allocation (how long for each section)
Interviewer do’s:
- Take notes during the interview (specific quotes and examples)
- Score independently before discussing with other interviewers (see the scoring sketch after these lists)
- Ask follow-up probes when answers are vague
- Give the candidate time to think before answering
Interviewer don’ts:
- Don’t share your impression with other interviewers before they’ve scored
- Don’t evaluate “culture fit” based on personal similarity
- Don’t ask about protected characteristics (even indirectly)
- Don’t let one impressive answer inflate scores on unrelated competencies
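The "score independently, then discuss" rule is easier to enforce when every interviewer records scores in the same structure and aggregation happens only after all scores are in. Here is a minimal sketch in Python, assuming the 1-4 scale from the guide above; the interviewer names and scores are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Each interviewer submits scores (1-4) per competency before any debrief.
# Names, competencies, and scores here are hypothetical.
submissions = [
    {"interviewer": "A. Rivera", "scores": {"Problem-solving": 4, "Communication": 3}},
    {"interviewer": "J. Chen",   "scores": {"Problem-solving": 3, "Communication": 3}},
    {"interviewer": "M. Okafor", "scores": {"Problem-solving": 4, "Communication": 2}},
]

def aggregate(submissions: list[dict]) -> dict[str, float]:
    """Average each competency across interviewers after all scores are in."""
    by_competency = defaultdict(list)
    for sub in submissions:
        for competency, score in sub["scores"].items():
            by_competency[competency].append(score)
    return {c: round(mean(scores), 2) for c, scores in by_competency.items()}

print(aggregate(submissions))
# A spread of more than one point on a competency is worth raising in the debrief.
```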
Handling Common Interview Challenges
The over-prepared candidate: They’ve rehearsed answers to common behavioral questions. Solution: Go deeper. “You mentioned you improved the process. Walk me through exactly how you identified the bottleneck.” Rehearsed candidates have surface-level stories. Strong candidates can go three layers deep.
The nervous candidate: Interview nerves are a poor predictor of job performance (unless the job requires constant public speaking). Solution: Start with easy warm-up questions. Give them permission to think. “Take a moment – there’s no rush.” Evaluate what they say, not how comfortably they say it.
The rambling candidate: They tell long, winding stories without getting to the point. Solution: Redirect politely. “That’s helpful context. Can you zero in on what you specifically did and the outcome?”
The “we” candidate: Everything is “we did” without clarity on their individual contribution. Solution: Ask directly. “I appreciate that it was a team effort. What was your specific role and contribution?”
Exercise: Design an Interview Round
Pick a role and one competency. Design a complete 15-minute interview segment:
- Write one behavioral question with the STAR elements you’re looking for
- Write 3 follow-up probes
- Define what scores of 4, 3, 2, and 1 look like
- Identify one red flag that should concern you regardless of the overall score
Key Takeaways
- Structured interviews predict job performance far better than unstructured conversations
- Behavioral questions reveal more than hypothetical ones – past behavior predicts future behavior
- Assign specific competencies to specific interview rounds – don’t evaluate everything everywhere
- Build reusable question banks organized by competency and refine them over time
- Create interviewer guides with scoring rubrics so every interviewer evaluates consistently
- Probe for depth – strong candidates can go three layers deep on their experiences
Next lesson: the candidate accepted your offer. Now let’s make sure their first 90 days don’t fall apart.