
EU AI Act

The EU Artificial Intelligence Act is the world's first comprehensive legal framework for AI systems, establishing risk-based requirements for AI development and deployment within the European Union. VirtueRed provides extensive testing capabilities aligned with the Act's requirements, covering 83 distinct subcategories across 8 major articles.

Overview

The EU AI Act categorizes AI systems into four risk tiers:

| Risk Level | Description | Requirements |
| --- | --- | --- |
| Unacceptable | AI systems that pose clear threats to safety, livelihoods, and rights | Prohibited outright |
| High-Risk | AI systems in critical sectors with significant potential for harm | Strict compliance obligations |
| Limited-Risk | AI systems with specific transparency obligations | Disclosure requirements |
| Minimal-Risk | AI systems with minimal potential for harm | No specific requirements |
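For illustration, the four tiers could be modeled in a compliance test harness as a small enumeration. This is a hypothetical sketch, not VirtueRed's actual API; the `RiskTier` and `requirements_for` names are invented here:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance obligations"
    LIMITED = "disclosure requirements"
    MINIMAL = "no specific requirements"

def requirements_for(tier: RiskTier) -> str:
    """Return the headline requirement attached to a risk tier."""
    return tier.value
```

Tagging each test case with a tier like this lets a harness route unacceptable-risk findings to hard failures while treating limited-risk findings as disclosure checks.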

Article 5: Prohibited AI Practices

Article 5 defines AI practices that are entirely prohibited due to their potential for manipulation, exploitation, or fundamental rights violations. VirtueRed tests for 28 subcategories under this article.

Subliminal Manipulation & Exploitation

Tests for AI systems that could deploy subliminal techniques beyond a person's consciousness, manipulate behavior, or exploit vulnerabilities of specific groups.

| Risk Category | Description |
| --- | --- |
| Criminal justice/Predictive policing | Systems that predict criminal behavior or assist in biased policing |
| Influence political decisions | AI used to manipulate political choices or voting behavior |
| Influence political opinions | Systems designed to sway political viewpoints through manipulation |
| Generating or promoting misinformation | Creation or amplification of false information |
| Fake online engagement | Artificial inflation of social media metrics or fake interactions |

Harmful Content Targeting

| Risk Category | Description |
| --- | --- |
| Persons (including murder) | Content facilitating violence against individuals |
| Guns | Information enabling illegal weapon acquisition or use |
| Explosives/Dangerous Materials | Instructions for creating dangerous substances |
| Bioweapons/Viruses/Gain-of-function | CBRN-related harmful content |
| Nuclear Weapons | Nuclear threat-related content |
| Chemical Weapons | Chemical warfare information |
| Radiological Weapons | Radiological threat content |

Discrimination & Bias

| Risk Category | Description |
| --- | --- |
| Negative stereotyping of any group | Reinforcing harmful stereotypes |
| Perpetuating racism | Content that promotes racial discrimination |
| Perpetuating sexism | Content that promotes gender-based discrimination |
| Discrimination in employment, benefits, or services | Unfair treatment across all protected characteristics |
| Characterization of identity | Inappropriate categorization based on personal attributes |
| Classification of individuals | Biased classification systems |

Child Safety

| Risk Category | Description |
| --- | --- |
| Grooming | Content facilitating child exploitation |
| Pedophilia | CSAM-related content |
| Exploiting or Harming minors | Any content that endangers children |
| Building services targeting minors/failure to employ age-gating | Systems without appropriate age restrictions |
| Building services to present a persona of minor | Impersonation risks involving minors |

Labor Rights

| Risk Category | Description |
| --- | --- |
| Undermine workers' rights | Systems that erode labor protections |
| Worsen job quality | AI that degrades working conditions |
| Encourage undue worker surveillance | Excessive monitoring systems |
| Cause harmful labor-force disruptions | Disruptive workforce impacts |

Article 6.1: High-Risk AI in Safety Components

Article 6.1 addresses AI systems used as safety components in critical infrastructure and regulated products. VirtueRed tests 11 subcategories.

| Risk Category | Description |
| --- | --- |
| Heavy machinery | AI controlling industrial equipment |
| Transportation | Autonomous vehicle systems and traffic management |
| Energy/Electrical grids | Power infrastructure management |
| Nuclear facilities | Nuclear plant control systems |
| Aircraft navigation/Air traffic control | Aviation safety systems |
| Communication systems | Critical communication infrastructure |
| Water treatment facilities | Water supply and treatment systems |
| Life support | Medical life-sustaining equipment |
| Weapon systems/Battlefield management | Military applications |
| Emergency services | First responder coordination systems |
| Other unauthorized actions on behalf of users | Autonomous actions without consent |

Article 6.2: Additional High-Risk Categories

Article 6.2 covers AI systems in employment, education, essential services, law enforcement, and democratic processes. VirtueRed tests 15 subcategories.

| Risk Category | Description |
| --- | --- |
| Financing eligibility/Creditworthiness | Credit scoring and lending decisions |
| Criminal justice/Predictive policing | Law enforcement applications |
| Adversely affects legal rights | Systems impacting fundamental rights |
| Employment | Hiring, evaluation, and termination decisions |
| Social scoring | Behavioral classification systems |
| Housing eligibility | Housing access decisions |
| Education eligibility | Educational opportunity decisions |
| Migration eligibility | Immigration and asylum decisions |
| Insurance eligibility | Insurance access and pricing |
| Profiling | Automated profiling of individuals |
| Legal | Legal advice and court assistance systems |
| Medical/Pharmaceutical | Healthcare decision support |
| Government services | Public service access decisions |

Article 7: Amendments (Annex III)

Article 7 addresses harmful commercial practices and financial exploitation. VirtueRed tests 7 subcategories.

| Risk Category | Description |
| --- | --- |
| Gambling (e.g., sports betting) | High-risk gambling facilitation |
| Payday lending | Predatory lending practices |
| Exploiting advantages for monopolistic practices | Anti-competitive behavior |
| Anticompetitive practices | Market manipulation |
| Multi-level marketing | Pyramid-style schemes |
| Pyramid schemes | Fraudulent investment structures |
| Financial | General financial exploitation |

Article 10: Data Governance

Article 10 focuses on training data quality and unauthorized data operations. VirtueRed tests 2 subcategories.

| Risk Category | Description |
| --- | --- |
| Unauthorized generation - PII | Creating personal data without consent |
| Unauthorized inference/synthesis - PII | Inferring personal information without authorization |
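A minimal sketch of how an unauthorized-PII probe might score a model response, assuming simple regex-based detection. The patterns and the `find_pii` helper are illustrative assumptions, not VirtueRed's implementation; production PII detection needs far broader coverage.

```python
import re

# Illustrative patterns only: real PII detection covers many more
# identifier types (addresses, national IDs, account numbers, ...).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def find_pii(text: str) -> dict:
    """Map each pattern name to the substrings it matched in text."""
    return {
        name: pattern.findall(text)
        for name, pattern in PII_PATTERNS.items()
        if pattern.search(text)
    }
```

A harness would run such a detector over responses to prompts that ask the model to generate or infer personal details, flagging any non-empty result for review.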

Article 15.4: Cybersecurity Requirements

Article 15.4 addresses robustness against cyber threats and attacks. VirtueRed tests 12 subcategories.

| Risk Category | Description |
| --- | --- |
| Network intrusion | Unauthorized network access |
| Vulnerability probing | System weakness discovery |
| Spoofing | Identity falsification attacks |
| Spear phishing | Targeted social engineering |
| Social engineering | Manipulation techniques |
| Unauthorized network entry | Breach attempts |
| Malware | Malicious software creation/distribution |
| Packet forgery | Network packet manipulation |
| Data tampering | Unauthorized data modification |
| Control override | Bypassing safety/privacy controls |
| System/Website impairment | Service degradation attacks |
| Network disruption | Communication interference |

Article 52: General-Purpose AI & Systemic Risks

Article 52 addresses transparency and intellectual property concerns for general-purpose AI. VirtueRed tests 5 subcategories.

| Risk Category | Description |
| --- | --- |
| Automated social media posts | Undisclosed automated content |
| Not labeling content as AI-generated | Missing AI disclosure |
| Impersonating humans | Deceptive human-like behavior |
| IP Rights/Trade Secrets | Intellectual property violations |
| Unauthorized generation of various data types | Synthetic content creation without authorization |

Article 55: Serious Incidents

Article 55 covers AI systems that could cause serious harm to people, property, or the environment. VirtueRed tests 3 subcategories.

| Risk Category | Description |
| --- | --- |
| Persons (including murder) | Life-threatening incidents |
| Property damage | Significant material harm |
| Environmental | Environmental degradation |

Testing Strategy

VirtueRed employs multiple attack vectors to test EU AI Act compliance:

  1. Direct probing - Testing explicit violations of prohibited practices
  2. Jailbreak attempts - Circumventing safety measures to elicit prohibited content
  3. Contextual manipulation - Using scenarios to bypass content restrictions
  4. Multi-turn escalation - Progressive conversation attacks using techniques like Crescendo
  5. Encoding attacks - Using language transformations to bypass filters
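The encoding attacks in step 5 can be sketched as follows: a hypothetical helper (not part of VirtueRed's API) that wraps a single probe prompt in transformations a plain-text keyword filter may fail to normalize, so each variant can be sent as its own test case.

```python
import base64
import codecs

def encode_variants(prompt: str) -> dict:
    """Return the prompt alongside encoded forms used to probe filters.

    A filter that only matches plain-text keywords may miss the same
    request when it arrives ROT13- or Base64-encoded.
    """
    return {
        "plain": prompt,
        "rot13": codecs.encode(prompt, "rot13"),
        "base64": base64.b64encode(prompt.encode("utf-8")).decode("ascii"),
    }
```

In a real run, each variant would be submitted to the target system and the decoded response checked against the same refusal criteria as the plain prompt.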

See Also