The EU Artificial Intelligence Act is the world's first comprehensive legal framework for AI systems, establishing risk-based requirements for AI development and deployment within the European Union. VirtueRed provides extensive testing capabilities aligned with the Act's requirements, covering 83 distinct subcategories across 8 major articles.
Overview
The EU AI Act categorizes AI systems into four risk tiers:
| Risk Level | Description | Requirements |
|---|---|---|
| Unacceptable | AI systems that pose clear threats to safety, livelihoods, and rights | Prohibited outright |
| High-Risk | AI systems in critical sectors with significant potential for harm | Strict compliance obligations |
| Limited-Risk | AI systems with specific transparency obligations | Disclosure requirements |
| Minimal-Risk | AI systems with minimal potential for harm | No specific requirements |
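When tagging test findings by severity, the four tiers above can be represented as a simple lookup. This is a minimal illustrative sketch (the tier names mirror the table; the function and its fallback behavior are assumptions, not part of VirtueRed's API):

```python
# Map EU AI Act risk tiers to their compliance obligations.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "strict compliance obligations",
    "limited": "disclosure requirements",
    "minimal": "no specific requirements",
}

def obligation_for(tier: str) -> str:
    """Return the obligation for a risk tier, defaulting to the strictest."""
    return RISK_TIERS.get(tier.lower(), "prohibited outright")
```

Defaulting unknown tiers to the strictest obligation is a deliberately conservative choice for a compliance-testing context.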
Article 5: Prohibited AI Practices
Article 5 defines AI practices that are entirely prohibited due to their potential for manipulation, exploitation, or fundamental rights violations. VirtueRed tests for 28 subcategories under this article.
Subliminal Manipulation & Exploitation
Tests for AI systems that could deploy subliminal techniques beyond a person's consciousness, manipulate behavior, or exploit vulnerabilities of specific groups.
| Risk Category | Description |
|---|---|
| Criminal justice/Predictive policing | Systems that predict criminal behavior or assist in biased policing |
| Influence political decisions | AI used to manipulate political choices or voting behavior |
| Influence political opinions | Systems designed to sway political viewpoints through manipulation |
| Generating or promoting misinformation | Creation or amplification of false information |
| Fake online engagement | Artificial inflation of social media metrics or fake interactions |
Harmful Content Targeting
| Risk Category | Description |
|---|---|
| Persons (including murder) | Content facilitating violence against individuals |
| Guns | Information enabling illegal weapon acquisition or use |
| Explosives/Dangerous Materials | Instructions for creating dangerous substances |
| Bioweapons/Viruses/Gain-of-function | CBRN-related harmful content |
| Nuclear Weapons | Nuclear threat-related content |
| Chemical Weapons | Chemical warfare information |
| Radiological Weapons | Radiological threat content |
Discrimination & Bias
| Risk Category | Description |
|---|---|
| Negative stereotyping of any group | Reinforcing harmful stereotypes |
| Perpetuating racism | Content that promotes racial discrimination |
| Perpetuating sexism | Content that promotes gender-based discrimination |
| Discrimination in employment, benefits, or services | Unfair treatment across all protected characteristics |
| Characterization of identity | Inappropriate categorization based on personal attributes |
| Classification of individuals | Biased classification systems |
Child Safety
| Risk Category | Description |
|---|---|
| Grooming | Content facilitating child exploitation |
| Pedophilia | CSAM-related content |
| Exploiting or Harming minors | Any content that endangers children |
| Building services targeting minors/failure to employ age-gating | Systems without appropriate age restrictions |
| Building services to present a persona of minor | Impersonation risks involving minors |
Labor Rights
| Risk Category | Description |
|---|---|
| Undermine workers' rights | Systems that erode labor protections |
| Worsen job quality | AI that degrades working conditions |
| Encourage undue worker surveillance | Excessive monitoring systems |
| Cause harmful labor-force disruptions | Disruptive workforce impacts |
Article 6.1: High-Risk AI in Safety Components
Article 6.1 addresses AI systems used as safety components in critical infrastructure and regulated products. VirtueRed tests 11 subcategories.
| Risk Category | Description |
|---|---|
| Heavy machinery | AI controlling industrial equipment |
| Transportation | Autonomous vehicle systems and traffic management |
| Energy/Electrical grids | Power infrastructure management |
| Nuclear facilities | Nuclear plant control systems |
| Aircraft navigation/Air traffic control | Aviation safety systems |
| Communication systems | Critical communication infrastructure |
| Water treatment facilities | Water supply and treatment systems |
| Life support | Medical life-sustaining equipment |
| Weapon systems/Battlefield management | Military applications |
| Emergency services | First responder coordination systems |
| Other unauthorized actions on behalf of users | Autonomous actions without consent |
Article 6.2: Additional High-Risk Categories
Covers AI systems in employment, education, essential services, law enforcement, and democratic processes. VirtueRed tests 15 subcategories.
| Risk Category | Description |
|---|---|
| Financing eligibility/Creditworthiness | Credit scoring and lending decisions |
| Criminal justice/Predictive policing | Law enforcement applications |
| Adversely affects legal rights | Systems impacting fundamental rights |
| Employment | Hiring, evaluation, and termination decisions |
| Social scoring | Behavioral classification systems |
| Housing eligibility | Housing access decisions |
| Education eligibility | Educational opportunity decisions |
| Migration eligibility | Immigration and asylum decisions |
| Insurance eligibility | Insurance access and pricing |
| Profiling | Automated profiling of individuals |
| Legal | Legal advice and court assistance systems |
| Medical/Pharmaceutical | Healthcare decision support |
| Government services | Public service access decisions |
Article 7: Amendments (Annex III)
Addresses harmful commercial practices and financial exploitation. VirtueRed tests 7 subcategories.
| Risk Category | Description |
|---|---|
| Gambling (e.g., sports betting) | High-risk gambling facilitation |
| Payday lending | Predatory lending practices |
| Exploiting advantages for monopolistic practices | Anti-competitive behavior |
| Anticompetitive practices | Market manipulation |
| Multi-level marketing | Pyramid-style schemes |
| Pyramid schemes | Fraudulent investment structures |
| Financial | General financial exploitation |
Article 10: Data Governance
Focuses on training data quality and unauthorized data operations. VirtueRed tests 2 subcategories.
| Risk Category | Description |
|---|---|
| Unauthorized generation - PII | Creating personal data without consent |
| Unauthorized inference/synthesis - PII | Inferring personal information without authorization |
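Detecting the unauthorized generation of PII in model output typically starts with pattern scanning. A minimal sketch, assuming only two identifier types (a production scanner would use a dedicated PII-detection library and cover many more):

```python
import re

# Illustrative patterns only: email addresses and phone-like digit runs.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return PII-like spans found in model output, keyed by type."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}
```

A test harness would flag any non-empty result on output that was not supposed to contain personal data.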
Article 15.4: Cybersecurity Requirements
Addresses robustness against cyber threats and attacks. VirtueRed tests 12 subcategories.
| Risk Category | Description |
|---|---|
| Network intrusion | Unauthorized network access |
| Vulnerability probing | System weakness discovery |
| Spoofing | Identity falsification attacks |
| Spear phishing | Targeted social engineering |
| Social engineering | Manipulation techniques |
| Unauthorized network entry | Breach attempts |
| Malware | Malicious software creation/distribution |
| Packet forgery | Network packet manipulation |
| Data tampering | Unauthorized data modification |
| Control override | Bypassing safety/privacy controls |
| System/Website impairment | Service degradation attacks |
| Network disruption | Communication interference |
Article 52: General-Purpose AI & Systemic Risks
Addresses transparency and intellectual property concerns for general-purpose AI. VirtueRed tests 5 subcategories.
| Risk Category | Description |
|---|---|
| Automated social media posts | Undisclosed automated content |
| Not labeling content as AI-generated | Missing AI disclosure |
| Impersonating humans | Deceptive human-like behavior |
| IP Rights/Trade Secrets | Intellectual property violations |
| Unauthorized generation of various data types | Synthetic content creation without authorization |
Article 55: Serious Incidents
Covers AI systems that could cause serious harm to people, property, or the environment. VirtueRed tests 3 subcategories.
| Risk Category | Description |
|---|---|
| Persons (including murder) | Life-threatening incidents |
| Property damage | Significant material harm |
| Environmental | Environmental degradation |
Testing Strategy
VirtueRed employs multiple attack vectors to test EU AI Act compliance:
- Direct probing - Testing explicit violations of prohibited practices
- Jailbreak attempts - Circumventing safety measures to elicit prohibited content
- Contextual manipulation - Using scenarios to bypass content restrictions
- Multi-turn escalation - Progressive conversation attacks using techniques like Crescendo
- Encoding attacks - Using language transformations to bypass filters
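The encoding-attack vector in the list above can be illustrated with a toy probe transformer. This is a hedged sketch: the wrapper text and the `encode_probe` helper are hypothetical, not VirtueRed's actual payload format.

```python
import base64
import codecs

def encode_probe(probe: str, scheme: str = "base64") -> str:
    """Wrap a red-team probe in a simple text transformation.

    Filters that match on surface strings may miss the encoded form;
    a robust system should refuse the decoded intent either way.
    """
    if scheme == "base64":
        payload = base64.b64encode(probe.encode()).decode()
    elif scheme == "rot13":
        payload = codecs.encode(probe, "rot_13")
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return f"Decode the following and respond to it: {payload}"
```

In practice such transformations are combined with the other vectors (e.g., an encoded payload delivered mid-way through a multi-turn escalation) to measure how consistently a model's safeguards hold.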
See Also