EB-2 NIW for AI Safety & Responsible AI

October 30, 2025, by Neonilla Orlinskaya

EB-2 NIW for AI Safety & Responsible AI — who this is for

The EB-2 National Interest Waiver (NIW) lets a qualified applicant self-petition without a job offer or PERM labor certification when the proposed work advances the U.S. national interest under the three-prong Matter of Dhanasar test. For AI safety, Responsible AI, and AI governance professionals, the evidence maps cleanly onto the NIST AI Risk Management Framework (AI RMF) and its four functions: Govern, Map, Measure, Manage.

This page provides: a matrix “AI RMF → evidence → NIW prongs”, clear KPIs, an interactive chart to plug in your numbers, a filing checklist, and a curated list of official U.S. government sources (.gov) with brief explanations.


Terms: red-teaming — stress-testing models with adversarial scenarios; interpretability — techniques that explain model behavior; Model Risk Management — lifecycle processes that identify, assess, mitigate, and monitor model risks.

NIW essentials — the three prongs in plain English

Prong 1 — Substantial merit and national importance

  • Explain public benefits: protecting users and critical infrastructure, financial stability, healthcare safety, civil rights.
  • Align objectives with NIST AI RMF and federal directives on safe and responsible AI (OMB memoranda, GenAI profile).
  • Translate risk reduction into measurable metrics (see the matrix and metrics sections below).

Prong 2 — Well positioned to advance the endeavor

  • Show credentials: publications, standards work, Responsible-AI policies you implemented, launched red-team programs, audit results.
  • Show outcomes: closed critical vulnerabilities, lower mean time to resolve (MTTR), broader Model/System-Card coverage, robustness/fairness improvements.

Prong 3 — On balance, waive the job offer/PERM

  • Argue that faster diffusion of safety practices yields outsized U.S. benefits compared with waiting for PERM.
  • Document ecosystem impact: open tools, workforce training, participation in NIST/AISIC and standardization efforts.

What to actually submit

  • Responsible-AI policies, risk registers, RMPs, incident response plans.
  • Red-team reports with severity levels and closure dates.
  • Drift/robustness/fairness metrics, independent validation results.
  • Model/System Cards, release gates, post-deployment monitoring logs.

For every document, name the AI RMF risk it addresses, the action you took, and the number that moved.

AI RMF → evidence → NIW prongs

Use this matrix to attach each exhibit to one AI RMF function and state which Dhanasar prong it satisfies.

  • Govern. Evidence: Responsible-AI policies; RACI; MRM procedures; risk register; audit trail; incident response plan. Prongs: 1 (national importance via systemic-risk reduction), 2 (implementation maturity), 3 (scalable U.S. benefit).
  • Map. Evidence: Model/System Cards; data descriptions; threat modeling; impact/harms mapping; model limitations. Prongs: 1 (connection to public safety and rights), 2 (early risk-discovery expertise).
  • Measure. Evidence: drift/robustness metrics; fairness assessments; independent validation; red-team reports and closures. Prongs: 1 (verified risk reduction), 2 (safety KPIs achieved).
  • Manage. Evidence: Risk Management Plan (RMP); release gates; post-deployment monitoring; incident reports; staff training. Prongs: 2 (sustained results), 3 (faster diffusion across the U.S. ecosystem).
  • GenAI Profile. Evidence: jailbreak tests; toxicity evaluations; data-leakage safeguards; deepfake risks; socio-economic impact review. Prongs 1–3: user/infrastructure protection, specialized practices, alignment with federal priorities.
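For applicants who track their dossier in a script or spreadsheet, the matrix above can be sketched as a small data structure. This is a hypothetical bookkeeping aid, not a USCIS requirement; the exhibit names and prong assignments below are illustrative placeholders.

```python
# Hypothetical sketch: index each exhibit by AI RMF function and record
# which Dhanasar prongs it supports, so no prong goes unreferenced.

EVIDENCE_MATRIX = {
    "Govern":  {"exhibits": ["Responsible-AI policy", "risk register"], "prongs": {1, 2, 3}},
    "Map":     {"exhibits": ["Model/System Cards", "threat model"],     "prongs": {1, 2}},
    "Measure": {"exhibits": ["red-team report", "fairness assessment"], "prongs": {1, 2}},
    "Manage":  {"exhibits": ["RMP", "post-deployment monitoring logs"], "prongs": {2, 3}},
}

def uncovered_prongs(matrix):
    """Return the Dhanasar prongs not yet supported by any exhibit."""
    covered = set().union(*(entry["prongs"] for entry in matrix.values()))
    return sorted({1, 2, 3} - covered)

def exhibits_for_prong(matrix, prong):
    """List every exhibit that supports a given prong."""
    return [e for entry in matrix.values() if prong in entry["prongs"]
            for e in entry["exhibits"]]
```

A quick pass with `uncovered_prongs` before filing flags any prong left without documentary support.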

Metrics that matter

Enter “before” and “after” values for these four indicators in the interactive chart:

  • P1–P2 closure, %: share of critical red-team findings closed within 30/90 days.
  • MTTR, hours: mean time to resolve a model incident.
  • Drift alerts/qtr: number of drift triggers per quarter.
  • Coverage, %: share of releases covered by Model/System Cards and controls.
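As a worked example, the first two indicators reduce to plain arithmetic over your finding and incident records. The field names (`severity`, `opened`, `closed`, `resolved`) are assumptions for illustration; adapt them to whatever your tracker exports.

```python
from datetime import datetime
from statistics import mean

def closure_rate(findings, window_days=30):
    """Share (%) of P1-P2 red-team findings closed within `window_days`."""
    critical = [f for f in findings if f["severity"] in ("P1", "P2")]
    if not critical:
        return 0.0
    closed_in_window = [
        f for f in critical
        if f.get("closed") is not None
        and (f["closed"] - f["opened"]).days <= window_days
    ]
    return 100.0 * len(closed_in_window) / len(critical)

def mttr_hours(incidents):
    """Mean time to resolve a model incident, in hours."""
    return mean(
        (i["resolved"] - i["opened"]).total_seconds() / 3600.0
        for i in incidents
    )
```

Values computed this way map directly onto the chart's “before” and “after” fields, and the stated calculation method can be attached as an exhibit.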

Evidence dossier and filing strategy

Include in your packet

  • Cover brief: three-prong sections with the “AI RMF → evidence → prongs” table and KPI references.
  • Forms: I-140 (EB-2 with National Interest Waiver request) plus EB-2 qualification proof (advanced degree or exceptional ability).
  • Technical exhibits: Responsible-AI policies, risk registers, Model/System Cards, red-team reports, RMPs, independent validation results.
  • Support letters: U.S. experts and organizations; quantify effects (e.g., “reduced MTTR by 58%”).
  • Charts and tables: dashboard exports with release dates and calculation methods.

Practical steps

  1. Collect KPIs from the last 3–6 releases; populate the interactive chart and save the export.
  2. For each risk, list the control (policy/test) and the measurable outcome (metric/threshold/time-to-close).
  3. Tie the work to U.S. priorities on safe AI (sources below) and show ecosystem impact: open-source, training, NIST participation.
  4. Define terms so a non-technical adjudicator can follow the logic end-to-end.
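Support letters read stronger when every percentage is reproducible. A minimal sketch of the before/after arithmetic behind a claim like “reduced MTTR by 58%” (the helper name and rounding are illustrative, not a prescribed method):

```python
def pct_improvement(before, after, lower_is_better=True):
    """Percent improvement to quote in a brief or support letter.

    `lower_is_better=True` fits MTTR or drift alerts; set it to False
    for coverage- or closure-style metrics where higher is better.
    """
    if before == 0:
        raise ValueError("baseline must be non-zero")
    delta = (before - after) if lower_is_better else (after - before)
    return round(100.0 * delta / before, 1)
```

For example, cutting MTTR from 48 hours to 20 hours is a 58.3% improvement; raising card coverage from 62% to 91% of releases is a 46.8% gain over baseline.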

Official U.S. government sources (.gov) with brief explanations

Use these as primary references in the brief; cite specific sections and tie them to your exhibits and KPIs.

  • USCIS Policy Manual — EB-2 / National Interest Waiver (Vol. 6, Part F, Chapter 5)
    Authoritative criteria for NIW under Matter of Dhanasar: substantial merit/national importance, ability to advance, and balance test.
    https://www.uscis.gov/policy-manual
  • USCIS — Policy Alert (Jan 2025) on employment-based petitions and NIW
    Current USCIS guidance on evaluating public benefit and documentary support in employment-based filings, including NIW.
    https://www.uscis.gov
  • NIST — Artificial Intelligence Risk Management Framework (AI RMF 1.0)
    Voluntary, measurable risk management framework with Govern, Map, Measure, Manage functions applicable across the AI lifecycle.
    https://www.nist.gov/itl/ai-risk-management-framework
  • NIST — Generative AI Risk Management Profile
    Profile with tasks and controls specific to generative models: jailbreak testing, toxicity, data leakage, deepfakes, societal impact.
    https://www.nist.gov/itl/ai-risk-management-profile-generative-ai
  • NIST — U.S. Artificial Intelligence Safety Institute and its consortium (AISIC)
    Initiatives and working groups on red-teaming, evaluations, and safety benchmarks; documented participation is evidence of standards and governance engagement.
    https://www.nist.gov/artificial-intelligence-safety-institute
  • White House / OMB — memoranda on responsible AI in federal agencies (e.g., M-24-10, M-24-18)
    Federal priorities and implementation guidance for safe, rights-respecting AI; useful to connect your work with national objectives.
    https://www.whitehouse.gov/omb
  • USCIS — Administrative Appeals Office (AAO) decisions
    Representative NIW decisions applying Dhanasar; cite reasoning patterns for public-benefit and ability-to-advance findings.
    https://www.uscis.gov/administrative-appeals/aao-decisions

Pre-filing checklist

  • Matrix “AI RMF → evidence → prongs” is complete and each item has numbers and dates.
  • The interactive chart reflects your KPIs; exports attached to exhibits.
  • Support letters quantify outcomes (metrics, closed risks, implemented controls).
  • All technical terms are defined plainly; .gov sources are listed in the appendix.

USCIS — U.S. Citizenship and Immigration Services; NIST — National Institute of Standards and Technology.

Arvian Law Firm
California 300 Spectrum Center Dr, Floor 4 Irvine CA 92618
Missouri 100 Chesterfield Business Pkwy, Floor 2 Chesterfield, MO 63001
+1 (213) 838 0095
+1 (314) 530 7575
+1 (213) 649 0001
info@arvianlaw.com

Copyright © Arvian Law Firm LLC 2025