AI Test Case Generation
Learn to generate comprehensive test cases from requirements and user stories using AI tools like testRigor and mabl — turning plain English into executable tests in minutes.
From Requirements to Tests in Minutes
Writing test cases is one of the most time-consuming parts of QA. A typical feature with 10 acceptance criteria might need 40-60 test cases covering happy paths, edge cases, boundary values, and error scenarios. Writing those by hand takes days.
AI test generation tools do it in minutes.
The shift is as fundamental as the one from hand-writing code to using an IDE with autocomplete. You’re not removing the human — you’re removing the tedious part so the human can focus on what matters.
How AI Test Generation Works
AI test generators follow a three-step process:
1. Input: You provide requirements in natural language — user stories, acceptance criteria, API specs, or even screenshots of the UI.
2. Analysis: The AI parses the requirements, identifies testable behaviors, infers edge cases from patterns (like form fields needing boundary tests), and maps out the test scenarios.
3. Output: The tool generates executable test cases — either in plain English (testRigor), as coded test scripts (Katalon), or as visual test flows (mabl).
Here’s what this looks like in practice:
Input (user story):
As an online shopper, I want to apply a discount code at checkout
so I can save money on my purchase.
Rules:
- Codes are case-insensitive
- Each code can only be used once per account
- Expired codes show an error message
- Codes cannot be combined (max 1 per order)
AI-generated test cases (plain English):
1. Apply valid discount code → verify price reduces correctly
2. Apply same code with different casing → verify it still works
3. Apply valid code twice on same account → verify second attempt shows "already used" error
4. Apply expired code → verify error message displays
5. Apply two different valid codes → verify only one can be active
6. Apply code to empty cart → verify appropriate error
7. Apply code, remove items below minimum → verify code behavior
8. Apply code, then refresh page → verify code persists
From a two-line user story and four business rules, AI produced eight test scenarios, including edge cases (empty cart, page refresh) that many testers would miss on a first pass.
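To make "executable" concrete, here is what the first three scenarios might look like once exported to a coded framework: a minimal pytest sketch, in which the checkout_page fixture and its helpers (apply_discount, total, discount_applied, error_message) are hypothetical stand-ins for whatever test harness your application exposes.

import pytest

def test_valid_code_reduces_price(checkout_page):
    # Scenario 1: a valid code reduces the total by the advertised amount
    original = checkout_page.total()
    checkout_page.apply_discount("SAVE20")   # hypothetical helper
    assert checkout_page.total() == pytest.approx(original * 0.80)

def test_code_is_case_insensitive(checkout_page):
    # Scenario 2: "save20" must behave exactly like "SAVE20"
    checkout_page.apply_discount("save20")
    assert checkout_page.discount_applied()

def test_reused_code_is_rejected(checkout_page):
    # Scenario 3: second use on the same account shows an "already used" error
    checkout_page.apply_discount("SAVE20")
    checkout_page.apply_discount("SAVE20")
    assert "already used" in checkout_page.error_message().lower()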
✅ Quick Check: Why does AI test generation excel at edge cases? Because AI systematically analyzes every rule and generates tests for both the valid case AND the violation of that rule. Humans tend to focus on the happy path first and may not get to all edge cases before the sprint ends.
The Leading Tools
testRigor: Plain English Testing
testRigor lets you write tests in plain English that execute against real browsers:
login as "customer@test.com"
click on "Products"
add "Blue Widget" to cart
go to checkout
enter "SAVE20" in "Discount Code"
click "Apply"
check that "Total" decreased by 20%
No code. No selectors. No XPaths. The AI figures out which elements to interact with based on what they look like and what they do.
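For contrast, here is a sketch of the same flow written by hand in Playwright for Python. The URL and every selector below are assumptions invented for illustration, and that is precisely the point: each one breaks when the markup changes, which is the brittleness the plain-English layer abstracts away.

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://shop.example.com/login")   # URL is an assumption
    page.fill("#email", "customer@test.com")      # every selector here is a guess
    page.fill("#password", "secret")
    page.click("button[type=submit]")
    page.click("text=Products")
    page.click("[data-product='blue-widget'] >> text=Add to cart")
    page.goto("https://shop.example.com/checkout")
    page.fill("#discount-code", "SAVE20")
    page.click("text=Apply")
    browser.close()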
Best for: Teams without deep automation expertise, or teams that want business stakeholders to read and validate tests.
mabl: The Agentic Tester
mabl takes a different approach — it watches you use the application and generates tests from your behavior. Click through a user flow once, and mabl creates a reusable, self-maintaining test.
It calls itself an “agentic tester” — meaning it acts as a digital teammate that:
- Learns from how real users interact with your app
- Explores new paths autonomously
- Flags visual regressions and broken flows
Best for: Teams that want AI to actively discover issues, not just verify predefined scenarios.
Katalon with AI Assist
Katalon combines traditional test automation with AI-powered generation. Its StudioAssist feature lets you describe what you want to test, and it generates the test scripts in your preferred framework.
Best for: Teams already using Katalon or those who want AI-generated tests in standard frameworks (Selenium, Appium, Playwright).
✅ Quick Check: When would you choose testRigor over mabl? testRigor is ideal when you want to write tests from requirements in plain English. mabl is better when you want AI to learn from actual user behavior and discover issues autonomously. Choose based on whether your starting point is specifications or user interactions.
Writing Effective Prompts for Test Generation
AI test generation is only as good as the input you provide. Here’s a framework for getting comprehensive test cases:
The RBCE Framework
- R — Requirements: State exactly what the feature does.
- B — Boundaries: Define limits, minimums, maximums, and thresholds.
- C — Constraints: List business rules, dependencies, and prerequisites.
- E — Exceptions: Describe what should happen when things go wrong.
Weak input:
Test the login page.
Strong input (RBCE):
Requirements: Users log in with email and password.
Boundaries: Email max 254 chars. Password 8-128 chars.
Constraints: Account locks after 5 failed attempts for 30 minutes.
Two-factor auth required for admin accounts. Session expires after 2 hours.
Exceptions: Show "invalid credentials" (not which field is wrong — security).
Show "account locked" with unlock time. Handle SSO redirect failures.
The weak input generates 3-5 basic tests. The strong input generates 20-30 tests covering security, edge cases, and error handling.
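The framework also translates directly into a prompt template if you are prompting a general-purpose LLM rather than a dedicated tool. A minimal sketch using the OpenAI Python client; the model name and the exact prompt wording are assumptions to tune for your own setup.

from openai import OpenAI

RBCE_TEMPLATE = """Generate a numbered list of test cases for this feature.
Requirements: {requirements}
Boundaries: {boundaries}
Constraints: {constraints}
Exceptions: {exceptions}
Cover the happy path, every boundary value, each constraint violation,
and every exception scenario."""

def generate_test_cases(requirements, boundaries, constraints, exceptions):
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = RBCE_TEMPLATE.format(
        requirements=requirements, boundaries=boundaries,
        constraints=constraints, exceptions=exceptions)
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: use whatever model your team standardizes on
        messages=[{"role": "user", "content": prompt}])
    return response.choices[0].message.content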
From Generation to Execution
AI-generated tests aren’t finished tests — they’re a first draft. Here’s the workflow that works:
- Generate: Feed requirements to AI → get test cases
- Review: QA engineer reviews for domain-specific gaps
- Augment: Add scenarios AI missed (complex business logic, cross-system workflows)
- Prioritize: Tag as smoke/regression/deep (AI can help with this too; see the marker sketch below)
- Execute: Run in your CI/CD pipeline
- Maintain: AI updates tests when the app changes (more on this in Lesson 4)
The human’s job shifts from “write every test from scratch” to “review, augment, and validate” — which is faster, less tedious, and produces better results.
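The prioritize step maps cleanly onto test markers in most frameworks. A sketch using pytest markers, assuming the smoke/regression/deep tags from the list above (register the marker names in pytest.ini to avoid warnings):

import pytest

@pytest.mark.smoke          # run on every commit:  pytest -m smoke
def test_valid_code_reduces_price():
    ...

@pytest.mark.regression     # run nightly:          pytest -m regression
def test_code_is_case_insensitive():
    ...

@pytest.mark.deep           # run before release:   pytest -m deep
def test_code_survives_page_refresh():
    ...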
Key Takeaways
- AI generates test cases from natural language requirements in minutes, not days
- The RBCE framework (Requirements, Boundaries, Constraints, Exceptions) produces the most comprehensive test suites
- testRigor uses plain English, mabl learns from user behavior, Katalon generates traditional scripts with AI assist
- AI-generated tests cover 60-80% of what you need — human review adds the domain-specific scenarios that make the suite production-ready
- The workflow shifts from “write everything” to “generate, review, augment, prioritize”
Up Next: You’ll learn how AI-powered code review catches bugs at the cheapest point in the development cycle — before code even reaches the QA environment.