Hiring Under The Algorithm: AI Employment Decisions And The New Compliance Domain

Solari

Across the European Union and the United States, AI systems used to recruit, screen, rank, and select workers have moved to a defined high-risk category.

THE BRIEF

AI is now a routine input into hiring decisions. The Society for Human Resource Management's 2025 Talent Trends survey, conducted in February 2025, found that 43% of organizations use AI in HR functions, up from 26% in 2024; among AI-using organizations, 64% apply it to recruiting, interviewing, or hiring [1].

The regulatory environment around those systems has changed faster than the technology. The EU AI Act classifies AI used for recruitment, candidate evaluation, and workforce decision-making as high-risk under Annex III, point 4, with full compliance obligations applicable on August 2, 2026 [2][3].

In the United States, the statutory landscape has fragmented at the state level. Illinois HB 3773 became effective on January 1, 2026, amending the Illinois Human Rights Act to address AI in employment decisions [4]. California's Civil Rights Department Automated-Decision Systems regulations took effect on October 1, 2025 [5]. Colorado's original Artificial Intelligence Act (SB 205), first postponed from February 2026 to June 2026, was paused by federal court order in April 2026 and effectively replaced by a narrower notice-and-transparency bill (SB 189) in May 2026, with a proposed effective date of January 1, 2027 [6][17][18]. Texas enacted the Responsible Artificial Intelligence Governance Act effective January 1, 2026 [7].

Enforcement is no longer hypothetical. The U.S. Equal Employment Opportunity Commission announced a USD 365,000 consent decree resolving its discrimination suit against iTutorGroup, an algorithmic-screening case, on September 11, 2023 [8]. The Mobley v. Workday class action was conditionally certified as an Age Discrimination in Employment Act collective on May 16, 2025; the court estimated the collective could include hundreds of millions of applicants based on Workday's filings [9].


I. THE ADOPTION REALITY

The deployment of AI in employment decisions is no longer concentrated among large technology employers; it has become a routine feature of the broader labor market. SHRM's 2025 Talent Trends survey, conducted in February 2025 across U.S. organizations, found that 43% of organizations now use AI in human resources functions, up from 26% in 2024 [1]. The same survey found that publicly traded for-profit organizations lead adoption at 58%, with private for-profits at 45%, nonprofits at 38%, state and local governments at 35%, and federal agencies at 19% [1]. Among organizations using AI in HR, 64% apply it specifically to recruiting, interviewing, or hiring [1].

These figures describe the scale at which algorithmic systems are entering decisions that have historically been subject to extensive anti-discrimination law. Recruitment, candidate screening, ranking, interview scoring, and workforce performance evaluation are now functions in which AI systems are routinely deployed, often by vendors whose models are trained on proprietary datasets and whose decision logic is not visible to the deploying employer.

The regulatory response, until recently, lagged the deployment. That gap has now closed. By the second half of 2026, every employer deploying AI in employment decisions in the European Union or in major U.S. jurisdictions faces a structured set of statutory, regulatory, and enforcement requirements that did not exist three years earlier.


II. WHERE THE GAP BECAME VISIBLE: ENFORCEMENT AND LITIGATION

Two enforcement developments illustrate the transition from regulatory aspiration to active accountability.

The first is the U.S. Equal Employment Opportunity Commission's consent decree with iTutorGroup Inc., announced by the EEOC on September 11, 2023, resolving an EEOC suit filed in the Eastern District of New York. Under the decree, iTutorGroup agreed to pay USD 365,000 to applicants who were automatically rejected by tutor-application software programmed to reject female applicants aged 55 or older and male applicants aged 60 or older [8]. The EEOC alleged that the screening logic resulted in the automatic rejection of more than 200 qualified U.S.-based applicants in 2020 solely on the basis of age, in violation of the Age Discrimination in Employment Act [8]. The decree also imposed continuing training requirements, an anti-discrimination policy, and a five-year EEOC monitoring period should iTutorGroup resume U.S. hiring [8].

The second is the Mobley v. Workday litigation in the U.S. District Court for the Northern District of California (Case No. 23-cv-00770-RFL). The plaintiff, Derek Mobley, alleged that Workday's AI-based applicant recommendation system produced disparate outcomes for applicants on the basis of race, age, and disability, and that Workday is liable not as an employer but as an "agent" of the employers who deploy its tools [9]. On May 16, 2025, the court granted conditional collective certification of the Age Discrimination in Employment Act claim, finding that Mobley had adequately alleged a unified policy: the use of Workday's AI recommendation system to score, sort, rank, or screen applicants [9]. Based on Workday's filings indicating that 1.1 billion applications were rejected using its software during the relevant period, the collective could include hundreds of millions of members [9]. The court's willingness to consider an AI vendor as a potential agent of the employer is the central legal question being tested; its resolution will materially shape the allocation of liability between employers and AI vendors operating in employment decisions.

These developments establish that the legal exposure of AI in employment is not contingent on the enactment of a new statute; it operates within existing anti-discrimination frameworks. The Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act, and the Equal Pay Act apply to algorithmically mediated employment decisions on the same terms as they apply to human-mediated decisions, a position the EEOC has formally articulated since 2022 [10][11].


III. THE REGULATORY ARCHITECTURE AS IT NOW STANDS

The compliance environment for AI in employment decisions is now structured across three layers: a comprehensive EU framework, a fragmenting state-level U.S. landscape, and a body of federal guidance and enforcement under existing anti-discrimination statutes.

At the EU level, the AI Act (Regulation (EU) 2024/1689), which entered into force on August 1, 2024, classifies as high-risk under Annex III, point 4, AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates [2]. Also classified as high-risk are AI systems intended to be used to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, the allocation of tasks based on individual behavior or personal traits, or the monitoring and evaluation of the performance and behavior of persons in such relationships [2]. Full applicability of the high-risk regime falls on August 2, 2026 [3].

Article 26 of the Regulation sets out the obligations of deployers of high-risk AI systems: use in accordance with provider instructions, assignment of competent human oversight, monitoring of system operation, log retention for at least six months, notification of providers and authorities upon identification of risks, and prior notification of workers' representatives and affected workers where the system is used in the workplace [12]. Article 27 imposes a fundamental rights impact assessment obligation on certain deployers, in particular bodies governed by public law and private entities providing public services, prior to first use of a high-risk AI system [13]. Maximum administrative fines under the Regulation reach EUR 35 million or 7% of global annual turnover for prohibited practices and EUR 15 million or 3% of global annual turnover for other specified violations [3].

At the U.S. state level, the landscape is moving in multiple directions simultaneously. New York City Local Law 144, in effect since July 5, 2023, requires employers using an automated employment decision tool (AEDT) to conduct a bias audit within the year preceding use, to publish a summary of the audit on the employer's website, and to provide notice to candidates that AEDTs will be used in the decision [14]. A December 2025 audit by the New York State Comptroller concluded that the complaint-handling process of the Department of Consumer and Worker Protection (DCWP), the law's enforcing agency, is "ineffective"; the Comptroller's office reviewed the same 32 companies that DCWP had surveyed and identified at least 17 instances of potential non-compliance with LL 144, compared with the single instance DCWP had identified [15]. The audit also found that DCWP did not use the formal procedures established under its memorandum of understanding with the NYC Office of Technology and Innovation and did not conduct additional educational outreach after May 2023 [15]. DCWP agreed to implement the majority of the Comptroller's recommendations [15].

Illinois HB 3773, which took effect on January 1, 2026, amends the Illinois Human Rights Act to make it a civil rights violation for an employer to use AI that has the effect of subjecting employees to discrimination on the basis of a protected class, and to fail to notify employees and applicants of the employer's use of AI in recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or terms, privileges, or conditions of employment [4]. California's Civil Rights Council finalized Automated-Decision Systems regulations effective October 1, 2025; the regulations apply to California employers using AI, machine-learning, or other data processing to facilitate employment decisions covered by the Fair Employment and Housing Act, treat use of an ADS that produces adverse impact as actionable absent demonstration that the selection practice is job-related and consistent with business necessity, and require retention of personnel and automated-decision-system records for at least four years [5]. The California regulations also extend liability to an employer's "agent," defined to include parties exercising functions traditionally exercised by the employer through the use of automated decision systems [5].

Colorado's Artificial Intelligence Act (SB 24-205), signed by Governor Polis on May 17, 2024, was originally set to take effect on February 1, 2026; on August 28, 2025, Governor Polis signed SB 25B-004, postponing the Act's effective date to June 30, 2026 [6]. The Act applied to developers and deployers of high-risk AI systems that make, or are a substantial factor in, consequential decisions including employment, imposing a duty of reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination and contemplating risk management programs, impact assessments, and consumer notifications [6]. Enforcement was paused by federal court order on April 27, 2026, however, and between May 7 and 9, 2026, the Colorado legislature passed SB 189, a replacement bill that drops the original risk management programs, annual impact assessments, and extensive algorithmic discrimination duties in favor of a narrower notice-and-transparency framework, with a proposed effective date of January 1, 2027 [17][18]. The Colorado experience illustrates both the regulatory momentum and the political complexity surrounding AI employment legislation.

Texas HB 149, the Responsible Artificial Intelligence Governance Act, was signed by Governor Abbott on June 22, 2025 and takes effect on January 1, 2026. The law prohibits the development or deployment of AI systems with the intent to unlawfully discriminate against a protected class; disparate impact alone cannot establish intent to discriminate under the statute [7]. Enforcement is exclusive to the Texas Attorney General, with no private right of action; civil penalties range from USD 10,000 to USD 12,000 for curable violations, USD 80,000 to USD 200,000 for uncurable violations, and USD 2,000 to USD 40,000 per day for continuing violations [7].

At the federal level, the EEOC's May 18, 2023 technical assistance document, "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964," applies the existing Uniform Guidelines on Employee Selection Procedures to algorithmic decision-making tools and confirms that employers may bear Title VII liability for adverse impact produced by vendor-developed tools they deploy [10]. The May 12, 2022 EEOC technical assistance on the Americans with Disabilities Act addressed AI screening tools that may exclude candidates with disabilities or fail to accommodate them [11]. The National Institute of Standards and Technology's AI Risk Management Framework (AI RMF 1.0), published in January 2023, and its Generative AI Profile (NIST AI 600-1), published on July 26, 2024, articulate a four-function model (Govern, Map, Measure, Manage) that has become the most widely referenced AI governance standard in the United States [16].


IV. OPERATIONAL AND FINANCIAL CONSEQUENCES

The convergence of these regimes produces three operational consequences that are now material for any organization deploying AI in employment decisions.

The first is the redistribution of vendor liability. The Mobley v. Workday court's recognition that an AI vendor may be sued directly as an "agent" of the employing organization, combined with the EEOC's position that employers may bear Title VII responsibility for vendor-developed tools, means that the question "who is liable when an AI hiring tool produces discriminatory outcomes" no longer has a single defensible answer [9][10]. Vendor contracts that exclude AI bias indemnities, that disclaim warranties of non-discrimination, or that limit data access required to perform bias audits now carry materially higher legal exposure than they did before 2024.

The second is the documentation burden. Article 26 of the EU AI Act requires deployers to maintain logs for at least six months, monitor system operation against provider instructions, and document fundamental rights impact assessments where Article 27 applies [12][13]. California's CRD regulations require retention of audits, decision logs, and related records for at least four years [5]. NYC Local Law 144 requires annual bias audits and public summaries [14]. Illinois HB 3773 requires notice to employees and applicants whenever AI is used to influence employment decisions [4]. Texas HB 149 requires demonstration that AI use was not undertaken with intent to discriminate [7]. None of these regimes is satisfied by retroactive documentation produced in response to inquiry; each requires governance infrastructure operating continuously from the point of system deployment.

The third is the cost-of-incident asymmetry. The iTutorGroup settlement, at USD 365,000, was modest by federal employment-litigation standards; the operational and reputational costs of the settlement included a five-year monitoring period, training requirements, and revised hiring policies [8]. The Mobley v. Workday litigation, certified as a collective action, carries materially higher potential exposure both in damages and in industry-wide signaling [9]. EU AI Act maximum penalties of EUR 15 million or 3% of global turnover for non-compliance with high-risk system obligations exceed the cost of building a defensible governance program by an order of magnitude for any organization above the SME threshold [3]. The financial calculus that treats governance investment as a cost center no longer holds.
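The penalty arithmetic above can be made concrete. The Regulation expresses its fine ceilings as the higher of a fixed amount and a share of worldwide annual turnover; the sketch below applies the cited caps (the function name and sample turnover figures are illustrative, not drawn from the Regulation's text):

```python
def eu_ai_act_fine_cap(global_turnover_eur: float,
                       prohibited_practice: bool = False) -> float:
    """Maximum administrative fine under Regulation (EU) 2024/1689:
    the higher of the fixed amount and the turnover percentage [3]."""
    if prohibited_practice:
        # EUR 35 million or 7% of global annual turnover, whichever is higher
        return max(35_000_000, 0.07 * global_turnover_eur)
    # EUR 15 million or 3% of global annual turnover, whichever is higher
    return max(15_000_000, 0.03 * global_turnover_eur)

# A deployer with EUR 2 billion in turnover faces a EUR 60 million cap
# for high-risk system violations -- well above typical governance budgets.
print(eu_ai_act_fine_cap(2_000_000_000))  # 60000000.0
```

For any organization whose 3%-of-turnover figure exceeds EUR 15 million, the turnover-based branch controls, which is why the cost asymmetry scales with organizational size.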


V. WHAT GOVERNANCE INFRASTRUCTURE NOW REQUIRES

The compliance architecture for AI in employment decisions cannot be assembled retroactively. It is, by the design of the regimes that govern it, a function of decisions made before deployment.

An AI inventory is the first prerequisite. An organization that does not know which systems it deploys, in which functions, on which categories of applicants and employees, cannot perform the classification analysis required by the EU AI Act, the impact assessment originally required by Colorado's AI Act (since narrowed by SB 189), the bias audit required by NYC Local Law 144, or the documentation contemplated by California's CRD regulations. The inventory must include vendor-developed systems used through procurement; the regulations attach to deployers, not only to developers.

A defensible bias-audit methodology is the second. NYC Local Law 144's standard is independent auditing within the preceding year; California's CRD regulations contemplate ongoing audit retention [5][14]. Where audits are vendor-supplied, the deploying employer must understand the auditor's independence, methodology, and protected-class coverage; an audit that tests for sex but not race, or that uses synthetic rather than empirical data, may not satisfy the regulatory standard.
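One widely used screening heuristic behind such audits is the four-fifths rule from the Uniform Guidelines, which the EEOC's technical assistance applies to algorithmic tools [10]: each group's selection rate is compared against the highest group's rate, and a ratio below 0.8 flags potential adverse impact. A minimal sketch (group labels and counts are illustrative; the rule is a screening heuristic, not a legal conclusion, and a defensible audit also requires statistical testing and adequate sample sizes):

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, sel in outcomes if sel)
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below the threshold
    multiple of the highest group's selection rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical screening outcomes: group A selected at 40%, group B at 20%.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)
rates = selection_rates(outcomes)
print(four_fifths_flags(rates))  # group B's impact ratio is 0.5, below 0.8
```

An audit limited to this computation for a single protected characteristic would illustrate exactly the gap described above: it says nothing about the characteristics it never tested.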

A vendor-management discipline is the third. EU AI Act Article 26 imposes on deployers the obligation to use systems in accordance with provider instructions, monitor operation, and notify providers of risks identified during use [12]. California's regulations attach record-keeping obligations to vendors as well as employers [5]. The Mobley v. Workday agent theory raises the practical question of whether vendor contracts adequately allocate liability and provide audit cooperation in the event of regulatory inquiry [9]. Procurement contracts written before 2024 likely do not.

A human-oversight architecture is the fourth. Article 26 of the EU AI Act requires designated human oversight of high-risk AI systems, and the EEOC's 2023 technical assistance contemplates that employers cannot delegate Title VII responsibility to algorithms [10][12]. Human oversight is not satisfied by the presence of a human reviewer in the workflow if that reviewer cannot in practice override the algorithmic determination. Oversight must be substantive: documented, trained, and authorized to act.

A notification and remediation system is the fifth. Illinois HB 3773 requires notice to employees and applicants when AI is used in employment decisions [4]. Colorado's original AI Act required consumer notice when high-risk AI systems make or substantially inform consequential decisions, though these requirements have been narrowed under the replacement bill SB 189 [6][17]. The EU AI Act requires notification of workers' representatives and affected workers prior to workplace deployment [12]. These notice obligations interlock with the documentation burden; an employer that cannot identify which decisions an AI system contributed to cannot perform the notice required by law.


VI. THE LEADERSHIP IMPERATIVE

Board-level and executive accountability for AI in employment decisions follows from the structure of the legal exposure. The EEOC's 2023 technical assistance places the employer, not the vendor, at the center of Title VII liability for adverse impact in AI-mediated selection [10]. The Mobley v. Workday litigation extends, but does not displace, that employer exposure; even a successful agent claim against the vendor does not relieve the employer of its independent Title VII duties [9]. EU AI Act fines accrue to the deploying organization; the Regulation imposes liability on deployers as organizations, with administrative enforcement conducted through Member State competent authorities [3][12].

Boards and senior leadership are now responsible for ensuring that AI deployment decisions are made within a governance architecture capable of producing the documentation, the impact assessments, and the audit trails on which compliance depends. The relevant questions are not technical. They are governance questions: who in the organization is authorized to deploy a new AI system in an employment function, what conditions must be satisfied before that authorization is granted, who maintains the inventory and the documentation, who reviews the vendor relationship annually, and who reports to the board on the state of the program.

These are decisions that cannot be delegated to procurement or to talent operations alone. They sit at the intersection of legal, HR, technology, and risk functions and they require explicit ownership at the executive level. The organizations that have built that ownership structure before August 2, 2026 will navigate the EU AI Act's high-risk applicability date as a planned compliance milestone. The organizations that have not will encounter it as an enforcement exposure.


VII. CONCLUSION

The transition of AI in employment decisions from an unregulated efficiency tool to a defined high-risk compliance domain has occurred in less than three years. The EEOC's iTutorGroup consent decree was announced in September 2023 [8]. The EU AI Act entered into force in August 2024 [2]. NYC Local Law 144 began operating in July 2023 [14]. The Mobley v. Workday conditional collective certification followed in May 2025 [9]. Illinois, California, and Texas all enacted or activated AI-in-employment regimes between October 2025 and January 2026; Colorado enacted comprehensive AI legislation in 2024 but has since narrowed its approach through replacement legislation in May 2026 [4][5][6][7][17]. The EU AI Act's high-risk obligations become fully applicable on August 2, 2026 [3].

For organizations deploying AI in employment, the question after May 2026 is no longer whether governance infrastructure is required. The regulatory architecture, the enforcement record, and the active litigation have answered it. The question is whether the governance infrastructure now in place can survive August 2026, and whether the documentation it produces can withstand the inquiry that the law now contemplates.

The institutions that have built that infrastructure as a function of how they hire, rather than as a retrofit applied after a complaint, will find the new environment manageable. The institutions that have not will find that what looked like an efficiency tool has become a category of legal exposure they did not budget for.


RECOMMENDATIONS

Within 30 days:

Conduct an AI inventory across talent acquisition, workforce management, and performance evaluation functions. Identify every system that produces, ranks, scores, filters, or substantially informs an employment decision, including systems deployed by vendors and integrated through HRIS or applicant tracking platforms. The inventory is a prerequisite for every subsequent compliance step.

Within 60 days:

Review existing vendor contracts for AI-bias indemnification, audit cooperation, data access for impact assessments, and notice rights to the deployer of identified risks. Where the contracts do not adequately allocate liability or provide access, document the gap and prepare amendment proposals for the next renewal cycle.

Within 90 days:

Complete a fundamental rights impact assessment, or its U.S.-equivalent risk assessment, for each high-risk AI system identified in the inventory. The assessment must be specific to the deployer's use, not a generic vendor-provided document. Document the human-oversight architecture, the notice protocol for affected workers and applicants, the audit cadence, and the escalation path for adverse outcomes.

Within six months:

Establish executive ownership of the AI-in-employment governance program. Identify the responsible executive (chief people officer, general counsel, or chief risk officer, depending on organizational design), define the reporting cadence to the board, and align the program with existing enterprise risk management.

Continuously:

Track regulatory developments. The EU AI Act's high-risk obligations apply on August 2, 2026 [3]. Colorado's original AI Act has been replaced by SB 189, a narrower notice-and-transparency framework with a proposed effective date of January 1, 2027 [17][18]. The NYC enforcement environment is changing in response to the December 2025 Comptroller audit [15]. State-level activity has not stabilized.

Benchmarks that should change the recommendation:

A final judgment in Mobley v. Workday on the agent question; enactment or gubernatorial veto of Colorado SB 189; revisions to the EU AI Act's high-risk classification scheme; or enactment of comprehensive federal AI legislation. Each would materially shift the allocation of obligation between employer and vendor.


CAVEATS

Survey methodology: The SHRM 2025 Talent Trends figures cited in this analysis reflect a February 2025 U.S. survey. The figures describe self-reported adoption and may not capture all AI use in employment decisions, particularly where AI features are embedded within broader HRIS or applicant tracking system functionality that respondents do not separately identify as "AI."

Litigation status: Mobley v. Workday is in active discovery and has not been adjudicated on the merits. The court's certification rulings establish that the case may proceed as a collective action; they do not constitute findings of liability against Workday. The "agent" theory of vendor liability is a contested legal question and may evolve at the trial or appellate level.

Regulatory dates: Effective dates and applicability schedules for the EU AI Act, Colorado AI Act, Illinois HB 3773, California CRD regulations, Texas HB 149, and NYC Local Law 144 reflect the state of the regulatory record as of May 2026. Colorado's original AI Act (SB 205) was postponed, then paused by federal court order, and effectively replaced by SB 189 in May 2026; if signed by the Governor, SB 189 would take effect January 1, 2027 with a significantly narrower scope than the original legislation [17][18]. Regulatory frameworks may be further modified before their cited effective dates.

Federal U.S. landscape: This analysis covers federal anti-discrimination enforcement and state-level AI-in-employment legislation. It does not exhaustively cover sector-specific federal financial-services or healthcare regulatory regimes that may also apply to AI deployment within those sectors.

Cross-jurisdictional analysis: Organizations operating across multiple U.S. states and the EU face overlapping and occasionally divergent compliance obligations. This article identifies the principal regimes but does not substitute for jurisdiction-specific compliance review.




REFERENCES

[1] Society for Human Resource Management. "The Role of AI in HR Continues to Expand" (2025 Talent Trends). SHRM Research, 2025. (Survey conducted February 2025.) https://www.shrm.org/topics-tools/research/2025-talent-trends/ai-in-hr


[2] European Parliament and Council of the European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)." Official Journal of the European Union, July 2024. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng


[3] DLA Piper. "Latest Wave of Obligations Under the EU AI Act Take Effect: Key Considerations." DLA Piper Client Alert, August 2025. https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect


[4] Healy, M. Claire; Parker, Kathleen D.; Rigney, Erinn L. (K&L Gates LLP). "Illinois Anti-Discrimination Law to Address AI Goes Into Effect on 1 January 2026." K&L Gates Cyber Law and Cybersecurity Alert, 1 May 2025 (republished in National Law Review). https://natlawreview.com/article/illinois-anti-discrimination-law-address-ai-goes-effect-1-january-2026


[5] Kourinian, Arsen; Zadikany, Ruth; Merritt, Remy N. (Mayer Brown LLP). "California Adopts New Employment AI Regulations Effective October 1, 2025." Mayer Brown Insights, 2 September 2025. https://www.mayerbrown.com/en/insights/publications/2025/08/california-adopts-new-employment-ai-regulations-effective-october-1-2025


[6] Glasser, Nathaniel M.; Chung, Eleanor T.; Forman, Adam S.; Snyder Good, Rachel; Shah, Alaap B. (Epstein Becker Green). "Colorado's Historic AI Law Survives Without Delay (So Far)." Workforce Bulletin, 13 May 2025; updated 28 August 2025. https://www.workforcebulletin.com/colorados-historic-ai-law-survives-without-delay-so-far


[7] Hockaday, Brent D.; Lewis, Gregory T. (K&L Gates LLP). "Pared Back Version of the Texas Responsible Artificial Intelligence Governance Act Signed Into Law." K&L Gates US Labor, Employment, and Workplace Safety Alert, 25 June 2025. https://www.klgates.com/Pared-Back-Version-of-the-Texas-Responsible-Artificial-Intelligence-Governance-Act-Signed-Into-Law-6-24-2025


[8] U.S. Equal Employment Opportunity Commission. "iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit." EEOC Press Release 09-11-2023, 11 September 2023. https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit


[9] Brenner, Guy; Slowik, Jonathan; Morrison, Dixie (Proskauer Rose LLP). "AI Bias Lawsuit Against Workday Reaches Next Stage as Court Grants Conditional Certification of ADEA Claim." Law and the Workplace, 11 June 2025. https://www.lawandtheworkplace.com/2025/06/ai-bias-lawsuit-against-workday-reaches-next-stage-as-court-grants-conditional-certification-of-adea-claim/


[10] U.S. Equal Employment Opportunity Commission. "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964." EEOC Technical Assistance Document, 18 May 2023. https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial


[11] U.S. Equal Employment Opportunity Commission. "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees." EEOC Technical Assistance Document, 12 May 2022. https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence


[12] European Parliament and Council of the European Union. Regulation (EU) 2024/1689, Article 26 — Obligations of Deployers of High-Risk AI Systems. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng


[13] European Parliament and Council of the European Union. Regulation (EU) 2024/1689, Article 27 — Fundamental Rights Impact Assessment for High-Risk AI Systems. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng


[14] New York City Department of Consumer and Worker Protection. "Automated Employment Decision Tools." NYC Rules, in effect 5 July 2023. https://rules.cityofnewyork.us/rule/automated-employment-decision-tools-2/


[15] Office of the New York State Comptroller. "Enforcement of Local Law 144 – Automated Employment Decision Tools." Audit Report 2024-N-6, 2 December 2025. Audit covered July 2023 through June 2025. https://www.osc.ny.gov/state-agencies/audits/2025/12/02/enforcement-local-law-144-automated-employment-decision-tools


[16] National Institute of Standards and Technology. "AI Risk Management Framework (AI RMF 1.0)." NIST AI 100-1, January 2023; "Generative AI Profile," NIST AI 600-1, 26 July 2024. https://www.nist.gov/itl/ai-risk-management-framework


[17] Troutman Pepper Locke. "Colorado Legislature Passes Bill to Repeal and Replace Colorado AI Act." Troutman Privacy, May 2026. https://www.troutmanprivacy.com/2026/05/colorado-legislature-passes-bill-to-repeal-and-replace-colorado-ai-act/


[18] Baker McKenzie. "Colorado Two-Step: Already Facing Potential Amendments, a Federal Court Pauses Enforcement of Colorado's Forthcoming AI Law." Connect on Tech, April 2026. https://connectontech.bakermckenzie.com/colorado-two-step-already-facing-potential-amendments-a-federal-court-pauses-enforcement-of-colorados-forthcoming-ai-law/


[19] Cooley LLP. "State AI Laws: Where Are They Now?" Cooley Insights, April 2026. https://www.cooley.com/news/insight/2026/2026-04-24-state-ai-laws-where-are-they-now


© 2026 Solari LLC | All Rights Reserved