Skip to main content

AI Company Policies

AI Company Policies testing evaluates compliance with industry best practices and voluntary commitments made by leading AI organizations. These policies often go beyond legal requirements, establishing higher standards for responsible AI development. VirtueRed comprehensively tests AI systems across 11 policy sections that reflect industry consensus on responsible AI practices.

Overview

Major AI companies have established comprehensive safety policies that address risks not yet covered by formal regulation. These policies reflect industry consensus on responsible AI practices and often inform future regulatory frameworks.

Policy SectionFocus Area
Section 4.1: Best Practices for AI Safety and SecurityCybersecurity and CBRN prevention
Section 4.3: Managing AI in Critical InfrastructureSafety in essential services
Section 4.4: CBRN ThreatsBiological, chemical, radiological, nuclear
Section 4.5: Synthetic Content RisksAI-generated content risks
Section 5.2: Promoting InnovationIP and trade secret protection
Section 5.3: Promoting CompetitionFair market practices
Section 6: Supporting WorkersLabor rights and protection
Section 7.1: AI in Criminal JusticeLaw enforcement AI
Section 7.2: Government Benefits and ProgramsPublic service access
Section 7.3: Civil Rights in Broader EconomyDiscrimination prevention
Section 8: Protecting ConsumersConsumer safety and privacy

Section 4.1: Best Practices for AI Safety and Security

Comprehensive cybersecurity and CBRN prevention measures adopted by responsible AI developers.

Cybersecurity Threats

Risk CategoryDescription
Network intrusionTesting for assistance with unauthorized network access
Vulnerability probingDetecting system weakness exploitation guidance
SpoofingIdentity falsification and impersonation techniques
Spear phishingTargeted social engineering attack creation
Social engineeringPsychological manipulation tactics
Unauthorized network entryBreach methodology assistance
MalwareMalicious software development guidance
Packet forgeryNetwork protocol manipulation
Data tamperingInformation integrity attacks
Control override (safety/privacy filters)Safety system bypass techniques
System/Website impairmentService disruption methods
Network disruptionCommunication interference tactics

CBRN Prevention

Risk CategoryDescription
Bioweapons/Viruses/Gain-of-functionBiological threat information
Nuclear WeaponsNuclear material and weapon guidance
Chemical WeaponsChemical warfare agent information
Radiological WeaponsRadiological dispersal device guidance

Section 4.3: Managing AI in Critical Infrastructure

Safety requirements for AI systems deployed in essential services and infrastructure.

Risk CategoryDescription
Heavy machineryIndustrial equipment control systems
TransportationVehicle and traffic management AI
Energy/Electrical gridsPower system management
Nuclear facilitiesNuclear plant operations
Aircraft navigation/Air traffic controlAviation systems
Communication systemsTelecommunications infrastructure
Water treatment facilitiesWater supply management
Life supportMedical life-critical systems
Weapon systems/Battlefield managementDefense applications
Emergency servicesFirst responder systems
Other unauthorized actions on behalf of usersAutonomous actions without consent
MilitaryArmed forces applications
WarfareCombat-related systems

Section 4.4: Reducing Risks at the Intersection of AI and CBRN Threats

Specific testing for chemical, biological, radiological, and nuclear threat prevention.

Risk CategoryDescription
Bioweapons/Viruses/Gain-of-functionBiological threat research and development guidance

Section 4.5: Reducing the Risks Posed by Synthetic Content

Policies addressing AI-generated content that could cause harm.

Risk CategoryDescription
NCII (Non-consensual Intimate Image)Deepfake and synthetic intimate content
SolicitationInappropriate contact facilitation
CSAMChild sexual abuse material
Not labeling content as AI-generatedUsing chatbots to convince people they are communicating with a human

Section 5.2: Promoting Innovation

Intellectual property protection in AI development.

Risk CategoryDescription
IP Rights/Trade SecretsProtection of proprietary information and creative works

Section 5.3: Promoting Competition

Fair market practices and anti-monopolistic behavior.

Risk CategoryDescription
Exploiting advantages for monopolistic practicesMarket dominance abuse
Anticompetitive practicesUnfair competitive behavior

Section 6: Supporting Workers

Labor rights protection and workplace AI governance.

Risk CategoryDescription
Undermine workers' rightsErosion of labor protections
Worsen job qualityWorking condition degradation
Encourage undue worker surveillanceExcessive workplace monitoring
Cause harmful labor-force disruptionsWorkforce displacement impacts

Section 7.1: Strengthening AI and Civil Rights in the Criminal Justice System

Responsible AI use in law enforcement and legal systems.

Risk CategoryDescription
Criminal justice/Predictive policingBias in law enforcement AI
LegalAI in legal proceedings and advice

AI systems affecting access to public services.

Risk CategoryDescription
Adversely affects legal rightsImpact on fundamental rights
Social scoringCitizen classification systems
Migration eligibilityImmigration decision systems
Government servicesPublic service access decisions

Section 7.3: Strengthening AI and Civil Rights in the Broader Economy

Discrimination prevention in economic activities across protected characteristics.

Eligibility Decisions

Risk CategoryDescription
Financing eligibility/CreditworthinessFair lending and credit decisions
Housing eligibilityNon-discriminatory housing access
Insurance eligibilityFair insurance practices

Discrimination Categories

Testing discrimination prevention across protected characteristics:

  • Race, Ethnicity, Color
  • Gender, Sexual orientation
  • Religion, Beliefs
  • Nationality, Geographic region
  • Caste, Social behaviors
  • Physical Characteristics, Mental Characteristics
  • Predicted Personality
  • Health conditions, Disability, Pregnancy Status
  • Genetic Information
  • Occupation, Age

Identity Characterization

Preventing inappropriate identity-based categorization across the same protected characteristics.

Individual Classification

Preventing biased classification systems across all protected characteristics.


Section 8: Protecting Consumers, Patients, Passengers, and Students

Consumer safety, privacy, and protection from harmful practices.

Direct Consumer Harms

Risk CategoryDescription
Medical/PharmaceuticalHealthcare safety and accuracy
FinancialFinancial advice and practices
SpamUnsolicited communication
ScamsFraudulent schemes
Phishing/CatfishingDeceptive identity practices
Pseudo-pharmaceuticalsFake medical products
Impersonating othersIdentity theft and impersonation

Data Protection Categories

Testing unauthorized data operations across sensitive data types (PII, Health data, Location data, Demographic data, Biometric data, Educational records, Financial records, Behavioral/Preference data, Communication Records):

Operation TypeDescription
Unauthorized generationCreating data without consent
Unauthorized disclosureRevealing private information
Unauthorized distributionSharing data without authorization
Unauthorized collection/gathering/stealingData harvesting without consent
Unauthorized processingProcessing data beyond scope
Unauthorized inference/synthesisDeriving information without consent
Non-consensual tracking/monitoring/stalking/spywareSurveillance without consent
Model Attacks (membership inference, model inversion)Privacy attacks on AI models

See Also