Introduction to AI: ISTQB AI Testing
For those seeking a quick introduction to AI, here is a summary of pages 12-48 from the ISTQB AI Testing syllabus. Do not rely upon it as preparation for the ISTQB AI Testing exam – this is a quick summary to help you gauge your interest in this important testing topic.
Introduction to AI
Artificial Intelligence (AI) refers to the capability of engineered systems to acquire, process, and apply knowledge and skills. The understanding of AI has evolved with societal perceptions, described as the "AI Effect," where previously revolutionary technologies (e.g., chess-playing computers) are no longer considered AI.
AI is categorized into:
- Narrow AI: Focused on specific tasks, like spam filters or voice assistants.
- General AI: Not yet realized; would match the broad range of human cognitive abilities.
- Super AI: Hypothetical, surpassing human intelligence and tied to the concept of technological singularity.
Differences Between AI-Based and Conventional Systems
Conventional systems follow predefined rules, while AI-based systems use data patterns for decision-making. For example, AI systems trained to identify images rely on inferred patterns, resulting in less transparency compared to conventional systems. AI-based systems may exhibit non-deterministic and dynamic behavior, unlike their rule-based counterparts.
Key AI Technologies
AI incorporates a variety of techniques, including:
- Fuzzy Logic: Handles uncertainties in reasoning.
- Machine Learning (ML): Uses algorithms like regression, clustering, and neural networks.
- Genetic Algorithms: Solve optimization problems by mimicking evolutionary processes.
These technologies can function individually or in combination.
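As an illustration of one of these techniques, here is a minimal genetic algorithm sketch in plain Python. The objective (maximizing the number of 1-bits, the classic "OneMax" toy problem) and all parameters are invented for illustration, not taken from the syllabus:

```python
import random

def one_max(bits):
    """Fitness: count of 1-bits (toy objective for illustration)."""
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    # Random initial population of bitstrings
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: keep the fitter of two random individuals
        def select():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if one_max(a) >= one_max(b) else b
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            # Single-point crossover
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation
            child = [b ^ 1 if rng.random() < mutation_rate else b for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=one_max)

best = evolve()
```

Selection, crossover, and mutation are the evolutionary operators the bullet above alludes to; real applications replace `one_max` with a domain-specific fitness function.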
AI Development Frameworks
Popular AI frameworks streamline model development:
- TensorFlow: Google’s tool for data flow graphs.
- Keras: High-level Python API for neural networks.
- PyTorch: Favored for image processing and natural language tasks.
- Scikit-learn: A library for algorithms like random forests and SVMs.
Frameworks differ in usability, computational resource needs, and compatibility with programming languages.
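To give a feel for how little code such frameworks require, here is a minimal scikit-learn sketch (it assumes scikit-learn is installed; the tiny two-cluster dataset is invented for illustration):

```python
from sklearn.ensemble import RandomForestClassifier

# Toy training data: two features, two well-separated classes
X_train = [[0, 0], [0, 1], [1, 0], [1, 1], [5, 5], [5, 6], [6, 5], [6, 6]]
y_train = [0, 0, 0, 0, 1, 1, 1, 1]

# Train a random forest (one of the algorithms named above)
clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(X_train, y_train)

# Predict classes for unseen points near each cluster
predictions = clf.predict([[0.5, 0.5], [5.5, 5.5]])
```

The same `fit`/`predict` pattern applies across most scikit-learn estimators, which is part of what "usability" means when comparing frameworks.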
Hardware for AI Systems
AI systems leverage various hardware types:
- CPUs and GPUs: GPUs excel in parallel processing, crucial for ML tasks.
- AI-Specific Chips: Includes Google TPUs, NVIDIA GPUs, and neuromorphic processors designed for edge computing and energy efficiency.
AI as a Service (AIaaS)
AIaaS provides cloud-based AI capabilities like image recognition or NLP through platforms such as AWS and Microsoft Azure. Benefits include scalability and cost efficiency. However, contracts often focus on uptime rather than functional performance metrics.
Pre-Trained Models
Pre-trained models save time and resources by reusing existing algorithms trained on large datasets. Techniques like transfer learning fine-tune such models for new tasks. Risks include inheriting biases or vulnerabilities from the original model.
Standards and Regulations
AI systems must adhere to evolving standards, including:
- GDPR: Imposes requirements on automated decision-making, including the accuracy of the personal data it relies on.
- ISO Standards: Cover AI and safety-related systems like autonomous vehicles.
Ethical guidelines emphasize fairness, transparency, and accountability.
Quality Characteristics for AI-Based Systems
AI systems introduce unique quality characteristics, including:
- Flexibility and Adaptability: Systems must handle changing environments and operational contexts.
- Autonomy: Allows prolonged operation without human oversight but requires defined operational bounds.
- Bias: Algorithmic and sample biases can lead to unfair outcomes, necessitating mitigation.
- Ethics: Ethical principles, like fairness and privacy, must be embedded during development.
- Transparency and Explainability: Users should understand AI decisions to foster trust, often through Explainable AI (XAI).
- Safety: Ensuring AI systems operate without causing harm is vital, especially in high-stakes domains like healthcare and transportation.
Overview of Machine Learning (ML)
Machine Learning forms the backbone of many AI systems, categorized into:
- Supervised Learning: Uses labeled data for classification or regression tasks.
- Unsupervised Learning: Identifies patterns through clustering or association.
- Reinforcement Learning: Involves iterative interactions with the environment, applying rewards and penalties to train systems like autonomous vehicles.
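The reinforcement learning loop can be sketched with tabular Q-learning on a toy environment. The environment (a five-cell corridor with a goal at the right end) and all hyperparameters are invented for illustration:

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    """Environment: move, clip to the corridor, reward 1 only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

def q_learn(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action_index]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: explore occasionally, otherwise exploit
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max(range(2), key=lambda i: q[s][i])
            nxt, r = step(s, ACTIONS[a])
            # Update toward the reward plus discounted best future value
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = q_learn()
```

After training, "right" carries the higher learned value in every non-goal state; the rewards and penalties mentioned above are exactly the `reward` signal driving the update.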
ML Workflow
The ML workflow includes:
- Data Preparation: Often the largest single share of effort (around 43% by some estimates), encompassing cleaning, transformation, and feature engineering.
- Model Training: Algorithms process datasets to develop predictive models.
- Evaluation and Tuning: Refines models based on functional performance metrics.
- Testing: Assesses generalizability using separate datasets.
- Deployment and Monitoring: Ensures operational accuracy and addresses concept drift over time.
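The steps above can be condensed into a small end-to-end sketch in plain Python, using a toy dataset and a deliberately simple nearest-centroid model (both invented for illustration):

```python
import random

random.seed(0)

# Data preparation: two labeled clusters with a little noise
data = [([random.gauss(0, 0.5), random.gauss(0, 0.5)], 0) for _ in range(50)] + \
       [([random.gauss(4, 0.5), random.gauss(4, 0.5)], 1) for _ in range(50)]
random.shuffle(data)

# Hold out a separate test set to assess generalizability
train, test = data[:80], data[80:]

# Model training: compute one centroid per class
def centroid(points):
    return [sum(p[i] for p in points) / len(points) for i in range(2)]

c0 = centroid([x for x, y in train if y == 0])
c1 = centroid([x for x, y in train if y == 1])

def predict(x):
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

# Testing: accuracy on data the model never saw during training
accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

Evaluation/tuning and post-deployment monitoring would repeat this measurement over time, watching for the concept drift mentioned above.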
Data Considerations in ML
High-quality datasets are essential. Common issues include:
- Insufficient or Unbalanced Data: Reduces model reliability.
- Duplicate Data: Skews predictions.
- Privacy Concerns: Compliance with regulations like GDPR is mandatory.
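Two of these checks are easy to automate. The sketch below (with a small invented dataset of record IDs and labels) flags exact duplicates and measures class imbalance:

```python
from collections import Counter

records = [("img_001", "cat"), ("img_002", "dog"), ("img_001", "cat"),
           ("img_003", "cat"), ("img_004", "cat"), ("img_005", "cat")]

# Duplicate data: identical records over-weight some examples
dupes = [r for r, n in Counter(records).items() if n > 1]

# Unbalanced data: compare class frequencies
labels = Counter(label for _, label in records)
majority_share = max(labels.values()) / len(records)
```

Here one record appears twice and the "cat" class dominates; real pipelines would deduplicate and then rebalance (e.g., by resampling) before training.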
Handling Overfitting and Underfitting
- Overfitting: Occurs when a model is too closely tailored to its training data, leading to poor generalization.
- Underfitting: Results from overly simplistic models that fail to capture the patterns in the data.
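Both failure modes can be demonstrated on toy data (invented for illustration): a model that memorizes every training point, noise included, overfits, while a model that ignores the input entirely underfits:

```python
import random

random.seed(1)
# Noisy 1-D data: true rule is "label 1 if x > 0.5", with ~20% of labels flipped
train = []
for _ in range(40):
    x = random.random()
    noisy = (x > 0.5) != (random.random() < 0.2)
    train.append((x, int(noisy)))

# Overfitting: 1-nearest-neighbor memorizes every point, noise included,
# so it scores perfectly on the data it was trained on
def one_nn(q):
    return min(train, key=lambda p: abs(p[0] - q))[1]

train_acc_1nn = sum(one_nn(x) == y for x, y in train) / len(train)

# Underfitting: always predicting the majority class ignores x entirely
majority = round(sum(y for _, y in train) / len(train))
train_acc_const = sum(majority == y for _, y in train) / len(train)
```

The perfect training score of the memorizing model is the classic overfitting symptom: it has learned the noise, so it will generalize worse than its training accuracy suggests.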
Neural Networks
Neural networks mimic human brain structures, excelling in tasks like image and speech recognition. Testing neural networks includes:
- Coverage Measures: Structural coverage criteria, such as neuron coverage, measure how thoroughly test inputs exercise the network's internal elements, by analogy with code coverage in conventional software.
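One such measure, neuron coverage, can be sketched on a tiny fixed network (the weights, inputs, and threshold below are invented for illustration): it is the fraction of neurons whose activation exceeds a threshold for at least one test input.

```python
# A tiny fixed one-layer network: 3 neurons with ReLU activations
WEIGHTS = [[1.0, -1.0], [-1.0, 1.0], [0.5, 0.5]]

def activations(x):
    return [max(0.0, w[0] * x[0] + w[1] * x[1]) for w in WEIGHTS]

def neuron_coverage(test_inputs, threshold=0.0):
    """Fraction of neurons activated above the threshold by any test input."""
    covered = set()
    for x in test_inputs:
        for i, a in enumerate(activations(x)):
            if a > threshold:
                covered.add(i)
    return len(covered) / len(WEIGHTS)

# One input leaves a neuron unexercised; a second input raises coverage
cov_one = neuron_coverage([(1.0, 0.0)])
cov_two = neuron_coverage([(1.0, 0.0), (0.0, 1.0)])
```

Low coverage signals that parts of the network were never exercised by the test set, mirroring how statement coverage flags untested code in conventional systems.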