AT*SQA Micro-Credentials - AI Introduction for Testers

AT*SQA Software Testing Micro-Credential


Artificial Intelligence (AI) refers to technology that enables machines to perform tasks typically requiring human intelligence, such as learning, reasoning, and decision-making. Generative AI is a subset of AI that creates new outputs like text, images, or code by recognizing patterns in training data and applying them in new contexts. At their simplest, AI systems rely on machine learning models and neural networks trained on large datasets to analyze inputs and produce predictions or results. This AI Introduction for Testers micro-credential shows you have a fundamental understanding of AI concepts so you can begin to evaluate AI-based systems and use AI to support testing.

Learn AI Introduction for Testers through AT*Learn Training

AT*SQA AI Introduction for Testers Body of Knowledge (Syllabus)

Register for the AI Introduction for Testers Micro-Credential Exam


AI in software testing accelerates test design, execution, and analysis while improving consistency and coverage. Used with traditional automation, it can generate and prioritize test ideas, detect patterns in logs and interfaces, and summarize results. Human oversight remains essential so that quality decisions are traceable and defensible.

AT*SQA’s AI Introduction for Testers micro-credential syllabus gives QA teams a concise foundation. It explains core concepts of AI and machine learning for testers, clarifies the difference between AI and generative AI, and outlines model types that matter in practice, including foundation, instruction-tuned, reasoning, multimodal, and vision-language models.

In practical terms, the syllabus shows how teams can use AI to help with testing. Examples include drafting test cases, generating test automation, recording results, and automating many manual aspects of the testing process. Vision-language models can compare screenshots to requirements, highlight differences, and draft defect reports. These uses speed repetitive work and expand what a small team can cover.
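Many teams start test-case generation by prompting a general-purpose model. The sketch below shows only the prompt-assembly step, in Python; the function name, wording, and sample requirement are illustrative assumptions, not part of the syllabus, and the model call itself is left to whatever chat-style API a team uses.

```python
def build_test_case_prompt(requirement: str, n_cases: int = 5) -> str:
    """Assemble a prompt asking a generative model to draft test cases.

    The returned string could be sent to any chat-style model API;
    the actual call is intentionally omitted here.
    """
    return (
        f"You are a software tester. Draft {n_cases} test cases for the "
        f"requirement below. For each case, give a title, preconditions, "
        f"steps, and an expected result.\n\n"
        f"Requirement: {requirement}"
    )

# Hypothetical requirement, for illustration only.
prompt = build_test_case_prompt("Users can reset their password via an email link")
print(prompt)
```

A tester would review and edit the model's draft before adding anything to the test suite, in line with the syllabus's emphasis on human oversight.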

Tooling options in the syllabus range from open-source frameworks such as TensorFlow and Apache MXNet to pre-trained models and AI-as-a-service from major providers. The right choice depends on requirements, budget, and team skills. Hardware needs can vary for training versus running models, so plan accordingly.

Benefits come with responsibilities. The syllabus recommends clear acceptance criteria, evaluation with defensible metrics such as accuracy, precision, recall, F1, and the confusion matrix, and adding these results to defect reporting. It also calls for monitoring production models for drift and being ready to re-tune or retrain. With guardrails for privacy, bias, and traceability, AI helps teams release faster without sacrificing reliability.
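The metrics the syllabus names can all be computed from a binary confusion matrix. A minimal Python sketch, using hypothetical counts (say, an AI defect classifier judged against a human-labeled set):

```python
# Hypothetical confusion-matrix counts for a binary classifier:
# tp/fp = true/false positives, fn/tn = false/true negatives.
tp, fp, fn, tn = 40, 10, 5, 45

total = tp + fp + fn + tn
accuracy = (tp + tn) / total                      # overall agreement
precision = tp / (tp + fp)                        # of flagged items, how many were real defects
recall = tp / (tp + fn)                           # of real defects, how many were flagged
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

Tracking these numbers over successive evaluation runs is one simple way to spot the drift the syllabus warns about: a falling precision or recall on fresh data signals it may be time to re-tune or retrain.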

AT*SQA's free body of knowledge provides helpful insights into AI and generative AI for software testers. For those who prefer to watch a presentation on AI, AT*SQA also offers the $7.99 per month AT*Learn software testing training area.