At Braintrust, we believe compliance and trust go hand-in-hand.
Braintrust AIR is built to meet the evolving demands of U.S., Canadian, and EU AI laws, ensuring that every interview conducted with our platform is fair, auditable, secure, and transparent.
Our goal is simple: AIR supports recruiters; humans always make the hiring decisions.
For detailed policies and attestations, visit our Privacy Policy and Terms of Service. Our Trust Center is available upon request by emailing support@usebraintrust.com.
Additionally, we have completed a third‑party AI audit validating our fairness, transparency, and safety controls; the audit report is available here.
Regulatory Compliance and AI Governance
When AI touches hiring, laws generally require five things: (1) human oversight, (2) transparency about how AI is used, (3) explainability of outputs, (4) privacy & security for personal data, and (5) fairness (no unlawful discrimination).
What AIR does:
- Human‑in‑the‑Loop by design. AIR never auto‑accepts or rejects candidates. Recruiters review scorecards and video before any decision is made.
- Transparent & explainable. Each interview produces a clear scorecard tied to job‑specific competencies, plus transcript/video for context.
- Fairness controls. Standardized questions, consistent grading rubrics, and periodic bias testing; findings reviewed and tracked.
- Security & privacy. Minimal data collection (name, email), TLS 1.2+ in transit and AES‑256 at rest, RBAC + MFA, logging and monitoring on AWS.
- Accountability. Documented workflows, audit logs, change controls, and an annual review cycle. Third‑party AI audit completed.
US Federal & State AI Requirements
FTC Guidance on AI Marketing & Fairness
Overview: The FTC enforces truth‑in‑advertising and fairness. If you use or market AI, you must be honest about what it does, avoid deceptive claims, and ensure your practices don’t unfairly harm people (e.g., discriminatory outcomes).
How AIR complies:
- We plainly describe AIR’s role and limits; humans make decisions.
- We back performance statements with data (pilot metrics, client outcomes) and our third‑party AI audit.
- Our Privacy Policy and Terms of Service explain processing, rights, and responsibilities.
Illinois AI Video Interview Act
Overview: If AI is used to evaluate video interviews, candidates must be told that AI is used, how it works in general terms, and who will see the video; consent is required and videos must be deleted upon request.
AIR compliance:
- Clear on‑screen and/or email AI disclosures and consent collection.
- We explain AIR’s role (screening support, not decision‑making) and who can access results.
- Deletion requests honored via our Privacy Policy workflow; access controls restrict sharing.
California Privacy & AI
CCPA/CPRA (Privacy)
Overview: California residents have rights to know, access, delete, and limit use of personal information.
AIR compliance: Minimal PII (name, email), purpose‑limited use, deletion upon request, and DPA terms supporting clients’ obligations.
SB 53 – Frontier AI Safety (Effective Jan 1, 2026)
Overview: Targets developers of general‑purpose “frontier” models (publish risk frameworks, incident reporting, whistleblower protections).
AIR compliance: Not a model developer; AIR uses licensed foundation models (e.g., OpenAI) for a narrow hiring use case. No autonomous learning, no catastrophic risk profile. We voluntarily maintain risk/safety documentation and underwent a third‑party AI audit.
ADS Employment Rules (Effective Oct 1, 2025)
Overview: Covers Automated Decision Systems (ADS) in employment. Requires bias testing/mitigation, 4‑year retention of inputs/outputs/decisions, limits on medical/psychological inferences, and extends liability to vendors acting as agents.
AIR compliance:
- Human‑in‑the‑loop decisions; AIR provides scorecards + video, recruiters decide.
- Job‑specific question banks and grading rubrics; documentation links outputs to competencies.
- Independent bias testing + third‑party audit completed; mitigation logged.
- Audit‑ready logging and configurable retention to support 4‑year records when required.
- No sensitive inferences (no medical/psychological profiling).
- Shared responsibility language in our Terms/DPA; we provide evidence needed for employer compliance.
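To make the audit‑ready logging and retention requirement concrete, here is a minimal sketch of what a retained record might look like. The field names and the `make_audit_record` helper are hypothetical illustrations, not AIR's actual schema:

```python
# Hypothetical sketch of an audit-log record retaining inputs, outputs, and
# decisions, with a retention horizon sized for the CA ADS 4-year standard.
# Field names are illustrative assumptions, not AIR's actual schema.
import json
from datetime import datetime, timezone, timedelta

RETENTION = timedelta(days=4 * 365)  # e.g., 4-year retention when required

def make_audit_record(interview_id, competency_scores, reviewer, decision):
    now = datetime.now(timezone.utc)
    return {
        "interview_id": interview_id,
        "scores": competency_scores,     # AI-generated scorecard (input to humans)
        "reviewed_by": reviewer,         # human-in-the-loop: the recruiter
        "decision": decision,            # made by the recruiter, not the system
        "recorded_at": now.isoformat(),
        "retain_until": (now + RETENTION).isoformat(),
    }

record = make_audit_record(
    "iv-1001", {"communication": 4, "sql": 3}, "recruiter@example.com", "advance"
)
print(json.dumps(record, indent=2))
```

The key design point is that the human reviewer and their decision are stored alongside the AI outputs, so a regulator or client auditor can reconstruct who decided what, and on what evidence.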
New York City Local Law 144 (AEDT)
Overview: Requires annual bias audits, public notice of AEDT use, candidate notice and instructions, and an alternative selection process.
AIR compliance:
- AIR can be deployed as an AEDT with annual bias audits (third‑party supported).
- We provide notice templates and alternative process guidance; AIR itself does not render decisions.
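Bias audits of the kind Local Law 144 requires center on impact ratios: each group's selection rate divided by the highest group's selection rate. A minimal sketch, using hypothetical group names and counts (not real audit data):

```python
# Sketch of an impact-ratio calculation of the kind used in AEDT bias audits.
# Group names and counts below are hypothetical examples, not real audit data.

def impact_ratios(selected, total):
    """Selection rate per group divided by the highest group's selection rate."""
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical data: candidates advanced vs. candidates assessed, per group.
selected = {"group_a": 80, "group_b": 60}
total = {"group_a": 100, "group_b": 100}

print(impact_ratios(selected, total))  # group_b ratio ≈ 0.75
```

A ratio well below 1.0 for a group (the traditional four‑fifths rule flags values under 0.8) is the kind of finding an annual audit surfaces for review and remediation.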
Maryland Facial Recognition in Interviews
Overview: Facial recognition to create templates in interviews is restricted; written consent/waiver is required.
AIR compliance: We do not use facial recognition or biometric analysis, so no waiver is required.
Tennessee “ELVIS Act” (Deepfakes/Voice/Likeness)
Overview: Prohibits unauthorized AI‑generated voices/likenesses.
AIR compliance: No synthetic voices or fake likenesses; we record real candidate interviews only.
Utah AI Disclosure Law
Overview: Requires disclosure to consumers when interacting with AI.
AIR compliance: Prominent disclosure before interviews; candidate opt‑in required. Alternatives available upon client request.
Colorado SB 24‑205 – Colorado AI Act (Effective Feb 2026)
Overview: For high‑risk AI in employment, requires a risk management program, impact assessments, notice, appeal/correction paths, and reporting discrimination within 90 days.
AIR compliance:
- AIR does not auto‑select or reject; recruiters decide.
- We maintain a risk program (aligned to ISO/NIST), conduct impact/bias assessments, and provide appeal/explanation pathways via human review.
- Third‑party audit confirms readiness; we support client notifications and reporting workflows.
Canada
Bill C‑27 – AIDA (Federal – In Progress)
Overview: Would regulate high‑impact AI (including hiring), requiring risk mitigation, transparency, documentation, and fairness.
AIR compliance: Human‑in‑the‑loop design; documented model usage, data governance, and bias testing; third‑party audit completed. We provide DPIA/AIRA templates to clients via the Trust Center.
Ontario Regulation 228/23 – AI Disclosure in Employment
Overview: Employers must inform applicants/employees in writing when AI is used to assist or make employment decisions.
AIR compliance: We supply notice language (job post, email, or in‑product banner), offer non‑AI alternatives on request, and document AIR’s limited role. See Terms/Privacy for details.
European Union
EU AI Act – High‑Risk Systems (Employment)
Overview: Classifies hiring AI as high‑risk and requires human oversight, risk management, data governance, logging, transparency, accuracy, robustness, and post‑market monitoring.
AIR compliance mapping (plain English):
- Human oversight: Recruiters always decide.
- Risk management: Formal program and change controls; third‑party AI audit completed.
- Data governance: Standardized questions; no prohibited data types; quality controls.
- Logging & monitoring: Detailed audit trails for interviews, scoring, access, and changes.
- Transparency: Candidate and client notices; explainable scorecards.
- Post‑market monitoring: Issue tracking, incident response playbooks, and model evaluation cadence.
Prohibited AI Practices
Overview: Bans real‑time biometric ID, emotion recognition in employment, social scoring, and manipulative targeting.
AIR compliance: None of these are used by AIR.
GDPR (Privacy)
Overview: Requires lawful basis, data minimization, transparency, security, and data subject rights (access, deletion, objection, etc.).
AIR compliance:
- Data minimization: Name + email only; no biometric/sensitive inferences.
- Rights handling: Access/deletion/explanation requests routed via Privacy Policy process.
- Security: Encryption in transit/at rest, least privilege, MFA, AWS logging/monitoring, BCDR.
- Retention: Default retention aligned to client contracts; configurable to meet local rules (e.g., CA ADS 4‑year retention).
Accessibility, Inclusivity, and Candidate Experience
What’s expected: Hiring tools should work for everyone and avoid disadvantaging protected groups.
What AIR does:
- Screen‑reader friendly UI; candidate guidance and flexible completion times.
- Questions and rubrics reviewed for clarity and potential bias.
- Human‑led alternative available upon client request.
- Feedback loops for candidates and recruiters; issues tracked to closure.
What We Provide Clients (Audit‑Ready Package)
- Trust Center: central hub for policies, subprocessors, security posture, and audit artifacts.
- AI Use Notices & Templates: candidate and website/email notice language (FTC/IL/NYC/ON/UT etc.).
- Bias Testing & Reports: independent assessments + remediation tracking; annual cadence.
- Logging & Retention: end‑to‑end logs; configurable retention (supports CA ADS 4‑year standard).
- DPIA/AIRA Kits: templates to complete client privacy/AI impact reviews efficiently.
- Incident & Appeal Playbooks: candidate inquiry/appeal workflows, takedown/deletion procedures.
Security Snapshot
- Encryption: TLS 1.2+ in transit; AES‑256 at rest.
- Access: RBAC, least privilege, MFA; SSO via WorkOS.
- Monitoring: AWS CloudTrail, GuardDuty; 12‑month log retention minimum.
- BC/DR: Multi‑AZ, 1‑hour RTO target; remote‑first continuity plan.
- Testing: Patch management, change control; penetration tests as required.
Conclusion
Braintrust is committed to providing compliant, ethical, and transparent hiring solutions for talent and employers. We continuously monitor regulatory changes and adapt our platform accordingly, so employers can rely on Braintrust AIR to support their hiring needs while fostering a fair, inclusive, and legally compliant hiring process.
If you have any questions about how existing regulations impact your ability to leverage Braintrust AIR in your hiring process, please reach out to support@usebraintrust.com.