QA Engineer

Updated for 2026: QA Engineer interview questions and answers covering core skills, tools, and best practices for roles in the US, Europe & Canada.

18 Questions
Medium · qa-test-strategy

How do you create a test strategy for a new product or feature?

A good test strategy is risk-based. Start by mapping user flows and failure impact, then define:
- Test scope (what’s in/out)
- Test levels (unit, integration, E2E)
- Environments and test data
- Release gates and acceptance criteria

Prioritize high-risk areas (payments, auth, data loss) and automate stable, repeatable checks first.
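The prioritization step above can be sketched as a simple risk score. This is an illustrative sketch, not a prescribed formula; the area names and weights are assumptions:

```python
# Minimal risk-based prioritization sketch: score = impact x likelihood,
# then test (and automate) the highest-risk areas first.
# Area names and ratings below are illustrative, not from a real product.

def risk_score(impact: int, likelihood: int) -> int:
    """Both inputs on a 1-5 scale; higher means riskier."""
    return impact * likelihood

areas = [
    {"name": "payments", "impact": 5, "likelihood": 4},
    {"name": "auth",     "impact": 5, "likelihood": 3},
    {"name": "profile",  "impact": 2, "likelihood": 2},
]

# Sort so the riskiest areas come first in the test plan.
prioritized = sorted(
    areas,
    key=lambda a: risk_score(a["impact"], a["likelihood"]),
    reverse=True,
)
```

Even a crude score like this makes the "prioritize payments and auth" conversation explicit instead of implicit.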

Testing · Strategy · Quality Assurance
Easy · qa-test-plan-vs-test-case

Test plan vs test case vs test suite: what’s the difference?

A test plan defines approach and scope. A test case is a single check with steps and expected results. A test suite groups related test cases for execution. Clear structure improves communication and prevents missing coverage during releases.
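The hierarchy can be made concrete with a minimal data model. A sketch with illustrative names; the plan itself (approach and scope) usually lives in a document, not in code:

```python
from dataclasses import dataclass, field

# Sketch: a suite groups cases; each case is one check with steps
# and an expected result. Names and content are illustrative.

@dataclass
class TestCase:
    title: str
    steps: list
    expected: str

@dataclass
class TestSuite:
    name: str
    cases: list = field(default_factory=list)

login_case = TestCase(
    title="Login with valid credentials",
    steps=["Open login page", "Enter valid user/pass", "Submit"],
    expected="User lands on dashboard",
)
smoke_suite = TestSuite(name="Smoke", cases=[login_case])
```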

Testing · Process · QA
Easy · qa-bug-reporting

What makes a high-quality bug report that developers can act on quickly?

A good bug report is reproducible and specific. Include:
- Clear title and severity/priority
- Steps to reproduce
- Expected vs actual behavior
- Environment details (device/browser/build)
- Logs/screenshots/recording

Also note frequency and scope (all users vs a specific segment) to help triage.

Bug Reporting · Communication · QA
Easy · qa-severity-vs-priority

Severity vs priority in QA: what’s the difference?

Severity describes technical impact (crash, data loss). Priority describes how urgently it should be fixed. Example: a cosmetic typo is low severity but could be high priority if it’s on a marketing landing page before a launch.

Triage · Process · QA
Medium · qa-regression-testing

What is regression testing and how do you choose what to regression test?

Regression testing verifies that new changes didn’t break existing functionality. Choose regression coverage by:
- Core user journeys (login, checkout)
- High-risk modules (payments, auth)
- Recently changed areas

Automate stable regressions and keep manual regression focused on high-value exploratory checks.
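The selection rule above can be sketched as a tag intersection: run tests that touch changed modules, plus the always-run core journeys. Test names and tags here are hypothetical:

```python
# Sketch: select regression tests whose tags intersect the changed modules,
# and always include core user journeys. All names are illustrative.

CORE = {"login", "checkout"}

tests = {
    "test_login":         {"login"},
    "test_checkout":      {"checkout", "payments"},
    "test_avatar_upload": {"profile"},
    "test_refund":        {"payments"},
}

def select_regression(changed: set) -> set:
    """Return tests covering changed areas or core journeys."""
    return {
        name for name, tags in tests.items()
        if tags & changed or tags & CORE
    }

# A payments-only change still runs the core login/checkout journeys.
selected = select_regression({"payments"})
```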

Regression · Testing · QA
Easy · qa-smoke-vs-sanity

Smoke testing vs sanity testing: what’s the difference?

Smoke tests are quick checks to confirm a build is stable enough for deeper testing. Sanity tests validate a specific change or small area works after a minor update. Both help catch obvious issues early and avoid wasting time on broken builds.

Testing · Release · QA
Medium · qa-exploratory-testing

What is exploratory testing and how do you do it effectively?

Exploratory testing is learning and testing at the same time. Do it well by:
- Setting a time-box and mission
- Taking notes and capturing evidence
- Varying inputs, roles, and devices
- Focusing on edge cases and workflows

It complements automation by finding issues scripts don’t anticipate.

Exploratory Testing · QA · Quality
Medium · qa-test-automation-what-to-automate

What should you automate in QA and what should stay manual?

Automate stable, repeatable checks with clear expected outcomes.

Automate:
- Regression for critical flows
- API contract checks
- Data validation

Keep manual:
- Exploratory testing
- UX and visual nuance
- New features early on

Goal: maximize signal and minimize flaky, high-maintenance tests.

Automation · Testing · QA
Hard · qa-flaky-tests

What causes flaky tests and how do you fix them?

Flaky tests fail intermittently without code changes. Causes:
- Timing/race conditions
- Shared test data
- Unstable selectors
- Environment variability

Fix by stabilizing data, using deterministic waits, improving selectors, isolating dependencies, and adding retries only as a last resort (with investigation).
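The "deterministic waits" fix can be sketched as a bounded poll for an explicit condition, instead of a fixed `sleep` that guesses at timing. A minimal sketch; real frameworks (e.g. Playwright) have this built in:

```python
import time

# Replace fixed sleeps with a bounded poll for an explicit condition.
# This removes the timing guesswork behind many intermittent failures.

def wait_until(condition, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll `condition` until it returns True or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# Usage sketch: wait on a job-done flag rather than sleeping 3 seconds.
state = {"done": False}
state["done"] = True
assert wait_until(lambda: state["done"])
```

Polling an observable condition also makes the failure mode explicit: the test reports "condition never became true within 5s" instead of failing on an arbitrary race.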

Automation · Reliability · Testing
Medium · qa-api-testing

How do you test APIs effectively (functional, contract, negative tests)?

API testing should cover correctness and robustness. Include:
- Happy path tests
- Validation and error shapes
- Auth/authz checks
- Rate limiting
- Contract tests (schema)

Also run negative tests (invalid inputs) and ensure consistent status codes and error messages.
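Happy-path plus negative testing can be sketched against a stub. The `create_user` endpoint below is hypothetical, standing in for a real API so the status-code and error-shape assertions are visible:

```python
# Sketch of functional + negative API checks against a stubbed handler.
# `create_user` is a hypothetical endpoint, not a real API.

def create_user(payload: dict) -> tuple:
    """Stub returning (status_code, body) the way a real endpoint would."""
    if "@" not in str(payload.get("email", "")):
        return 422, {"error": "invalid_email"}
    return 201, {"id": 1, "email": payload["email"]}

# Happy path: valid input yields 201 and the created resource.
status, body = create_user({"email": "a@example.com"})
assert status == 201 and body["email"] == "a@example.com"

# Negative tests: invalid inputs yield a consistent error shape, not a 500.
for bad in ({}, {"email": "not-an-email"}):
    status, body = create_user(bad)
    assert status == 422 and body["error"] == "invalid_email"
```

The point of the negative loop is consistency: every invalid input should map to the same status code and error shape, which is exactly what a contract test pins down.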

API Testing · QA · Automation
Medium · qa-e2e-testing-tools

How do you choose E2E testing tools (Playwright vs Cypress vs Selenium)?

Choose based on reliability, debugging, browser support, and CI performance. Playwright offers strong cross-browser support and modern tooling. Cypress has a great developer experience but different architecture constraints. Selenium is flexible but can be heavier to maintain. Keep E2E scope small and focus on critical paths.

E2E · Tooling · QA
Hard · qa-test-data-management

How do you manage test data for reliable automated tests?

Test data must be deterministic. Approaches:
- Seeded fixtures
- API-based setup/teardown
- Isolated test accounts
- Synthetic data generation

Avoid shared state across tests and keep cleanup reliable. Data drift is a common cause of flaky failures.
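Two of the approaches above, seeded synthetic data and isolated accounts, can be sketched in a few lines. The field names are illustrative:

```python
import random
import uuid

# Sketch: deterministic synthetic data via a fixed seed, plus per-test
# unique account ids so parallel tests never share state.

def make_users(n: int, seed: int = 42) -> list:
    """Same seed, same data, every run: a local RNG avoids global state."""
    rng = random.Random(seed)
    return [{"name": f"user{i}", "age": rng.randint(18, 80)} for i in range(n)]

def isolated_account() -> str:
    """Unique account per test, so cleanup failures don't leak across tests."""
    return f"qa-test-{uuid.uuid4().hex}"

assert make_users(3) == make_users(3)            # deterministic
assert isolated_account() != isolated_account()  # isolated
```

Using `random.Random(seed)` rather than the module-level `random` functions keeps determinism local to the fixture, so one test's data generation can't be perturbed by another's.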

Test Data · Automation · QA
Medium · qa-performance-testing-basics

What is performance testing and what should QA measure?

Performance testing measures speed and stability under load. Measure:
- Latency percentiles (p95/p99)
- Throughput
- Error rate
- Resource usage

Use realistic scenarios and ramp patterns. Performance issues often come from DB bottlenecks, N+1 queries, and missing caching.
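To make the percentile metrics concrete, here is how p95 is computed from raw latency samples using the nearest-rank method (real load tools report this for you; the sample values are made up):

```python
# Sketch: latency percentiles from raw samples via the nearest-rank method.

def percentile(samples, p: float) -> float:
    """Smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = -(-len(ordered) * p // 100)  # ceil(n * p / 100)
    return ordered[max(int(rank), 1) - 1]

# Illustrative samples: mostly fast, with two slow outliers.
latencies_ms = [12, 15, 11, 200, 14, 13, 16, 18, 17, 950]

p50 = percentile(latencies_ms, 50)  # typical request
p95 = percentile(latencies_ms, 95)  # tail a user actually notices
```

This is why percentiles beat averages here: the mean of these samples is dragged up by two outliers, while p50 vs p95 shows both typical behavior and the tail.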

Performance Testing · QA · Reliability
Medium · qa-security-testing-basics

What security checks can QA add without being a security engineer?

QA can add practical security checks:
- Auth/authz regression tests
- Input validation and error handling
- Basic OWASP checks for critical endpoints
- Dependency and config sanity checks

QA should partner with security for deeper testing, but can catch many common issues early with automated checks.

Security · QA · Testing
Medium · qa-ci-quality-gates

How do you set quality gates in CI to prevent bad releases?

Quality gates are automated checks required to merge or deploy. Include:
- Unit tests and linters
- Smoke tests on staging
- API contract checks
- Critical E2E flows

Make gates fast and reliable. Slow or flaky gates get bypassed and reduce trust.
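The gate logic itself is simple: every required check must be present and passing. A sketch with illustrative gate names; in practice this lives in CI configuration, not a script:

```python
# Sketch of a merge/deploy gate: every required check must pass.
# Gate names are illustrative; real gates run inside the CI system.

REQUIRED_GATES = ["unit_tests", "lint", "smoke_tests", "contract_tests"]

def release_allowed(results: dict) -> bool:
    """Block the release if any required gate is missing or failing."""
    return all(results.get(gate) is True for gate in REQUIRED_GATES)
```

Note the `is True` check: a gate that never reported (missing key) blocks the release the same way a failing gate does, which avoids "green by omission".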

CI/CD · Quality Gates · QA
Easy · qa-metrics

What QA metrics are useful and which ones are misleading?

Useful metrics measure outcomes:
- Defect escape rate
- Time to detect/resolve
- Flaky test rate
- Coverage of critical flows

Misleading metrics: raw bug counts or test case counts without context. Focus on improving reliability and user experience, not vanity numbers.
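Defect escape rate, the first metric above, is just the share of defects that reached production. A minimal sketch of the calculation:

```python
# Sketch: defect escape rate = bugs found in production / all bugs found.
# A falling rate suggests pre-release testing is catching more issues.

def escape_rate(found_in_prod: int, found_before_release: int) -> float:
    total = found_in_prod + found_before_release
    return found_in_prod / total if total else 0.0

# Example: 5 production bugs vs 45 caught before release -> 10% escaped.
assert escape_rate(5, 45) == 0.1
```

Unlike a raw bug count, this ratio has context built in: finding more bugs before release pushes it down, which is the behavior you actually want to reward.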

Metrics · Quality · Process
Easy · qa-acceptance-criteria

How do you write acceptance criteria that are testable and reduce ambiguity?

Acceptance criteria should be specific and measurable. Use:
- Given/When/Then format
- Clear edge cases
- Error states and permissions
- Non-functional constraints when relevant

Good criteria prevent late surprises and make automation and regression planning easier.
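A Given/When/Then criterion maps directly onto a test's arrange/act/assert structure. A sketch with a hypothetical `login` helper, for the illustrative criterion "Given a logged-out user, when they submit valid credentials, then they are logged in":

```python
# Sketch: one acceptance criterion mapped to one test.
# `login` is a hypothetical stand-in for the real login flow.

def login(session: dict, user: str, password: str) -> dict:
    if password == "correct-horse":
        session["user"] = user
    return session

def test_login_with_valid_credentials():
    session = {}                            # Given: a logged-out user
    login(session, "ada", "correct-horse")  # When: valid credentials submitted
    assert session.get("user") == "ada"     # Then: the user is logged in

test_login_with_valid_credentials()
```

When the criterion is written this way up front, the test practically writes itself, which is what "testable acceptance criteria" means in practice.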

Requirements · QA · Process
Medium · qa-release-readiness

What does release readiness look like for QA before shipping?

Release readiness is evidence-based. Checklist:
- Critical tests passed
- Known issues documented and accepted
- Monitoring and rollback plan ready
- No major regressions in core flows

QA should communicate risk clearly and recommend go/no-go based on impact and confidence, not feelings.

Release · QA · Risk