Ethics and Philosophy of AI as Cognitive Prosthetic for ADHD Developers
“Cognitive liberty is the right to self-determination over our brains and mental experiences.” — Nita Farahany, Duke University (2023)
1. Disability Rights Frameworks Applied to ADHD + AI
Three Competing Models
| Framework | View of ADHD | View of AI Tools | Implication for Developers |
|---|---|---|---|
| Medical Model | Disorder requiring treatment; deficit in executive function, attention regulation | Therapeutic intervention; compensatory tool for impairment | AI is prescribed accommodation — implies brokenness requiring fix |
| Social Model | Impairment exists, but disability is caused by environmental mismatch (inflexible workplaces, linear workflows, memorization-heavy coding culture) | Environmental modification; removes disabling barriers rather than “fixing” the person | AI restructures the environment, not the person — analogous to ramps, not surgery |
| Neurodiversity 2.0 / Interactionist Model | Natural cognitive variation with both genuine impairments AND genuine strengths; disability arises from person-environment interaction | Universal cognitive infrastructure that benefits all brains differently; ADHD brains may benefit disproportionately due to pre-existing scaffolding habits | AI is neither cure nor accommodation — it is cognitive infrastructure that shifts which traits are advantageous |
The Core Tension
The neurodiversity movement faces an unresolved dialectic:
- “ADHD is a disability requiring accommodation” — necessary for legal protections (ADA coverage), workplace accommodations, medical treatment access, and insurance coverage for medication
- “ADHD is neurodiversity requiring acceptance” — necessary for identity, self-worth, recognition of genuine cognitive strengths, and challenging deficit-only framings
These are not mutually exclusive but create practical friction. The social model of disability resolves this partially by distinguishing between impairment (the person’s cognitive difference) and disability (the social consequences of that impairment in an unaccommodating environment) (Disability Wales; Jillian Enright, Neurodiversified).
How AI Disrupts All Three Models
AI tools like Claude Code and Copilot do not fit neatly into any framework:
- Medical model users see AI as assistive technology compensating for executive dysfunction — similar to hearing aids or screen readers
- Social model advocates see AI as evidence that the problem was always the environment (requiring memorization, sequential planning, documentation) rather than the person
- Neurodiversity 2.0 proponents see AI as revealing that ADHD traits (divergent thinking, comfort with ambiguity, iterative exploration) were always valuable but previously suppressed by environmental demands that AI now handles
The interactionist approach, documented in Human Development (Karger Publishers, 2023), acknowledges contributions of both individual characteristics and society to disability — avoiding the false choice entirely.
Key citation: den Houting, J. (2019). “Neurodiversity: An insider’s perspective.” Autism, 23(2), 271-273. First major academic articulation of the neurodiversity paradigm’s tension with medical approaches.
2. AI as Accommodation vs. Universal Design
ADA/EEOC Guidance on AI in the Workplace
The EEOC issued formal guidance in May 2022 on AI and the Americans with Disabilities Act, establishing that:
- Employers using AI decision-making tools must provide reasonable accommodations to applicants and employees whose disabilities may cause those tools to rate them inaccurately (EEOC, “Artificial Intelligence and the ADA”)
- AI tools that screen out individuals with disabilities violate the ADA unless the criteria are job-related and consistent with business necessity (ADA.gov, “Algorithms, AI, and Disability Discrimination in Hiring”)
- Employers can be liable even when using third-party AI vendors — liability does not transfer to the tool provider
- EEOC data shows rising disability discrimination charges involving neurodivergence (Ogletree Deakins, 2024)
Individual Accommodation vs. Universal Design
| Dimension | Individual Accommodation | Universal Design |
|---|---|---|
| Who benefits | Specific person with documented disability | Everyone, with disproportionate benefit to disabled users |
| Trigger | Formal request + documentation | Built into environment by default |
| Legal basis | ADA reasonable accommodation | Proactive inclusion (no request needed) |
| AI example | ADHD developer gets AI assistant as workplace accommodation | Entire team uses AI assistants; ADHD developer benefits most |
| Stigma level | High (requires disclosure) | None (everyone uses it) |
| Scalability | Low (case by case) | High (systemic) |
The Curb-Cut Effect
The curb-cut effect — named after sidewalk ramps designed for wheelchair users that benefit everyone (parents with strollers, delivery workers, travelers with luggage) — is the strongest argument for universal AI deployment rather than individual accommodation (Stanford Social Innovation Review).
AI as curb cut examples:
- Code completion designed for executive dysfunction helps all developers avoid context-switching costs
- AI documentation generation designed for ADHD working memory limitations benefits entire teams
- Natural language code explanation designed for learning differences improves onboarding universally
- Quiet rooms originally for neurodiverse employees now help anyone needing a mental reset
- Closed captions created for deaf users now used by 80%+ of viewers in noisy environments
“When we design with access in mind, we make life easier for more people without making it harder for anyone.” — Curb-cut effect principle
Legal Implications
If AI coding assistants become standard workplace tools (universal design), then denying them becomes the accommodation issue — employers who restrict AI access may create new barriers for neurodivergent employees. This inverts the traditional accommodation framework: the accommodation is not providing the AI tool but ensuring it is not taken away.
An employer who standardizes AI tooling and then removes it from a specific employee could face ADA liability if that employee has a documented disability that the tool was effectively accommodating.
3. Algorithmic Bias Against Neurodivergent People
AI Hiring Tool Discrimination
The HireVue Controversies
HireVue’s AI video interview platform analyzes facial expressions, speech patterns, and word choice to score candidates. This system creates systematic bias against neurodivergent applicants:
- Atypical eye contact (common in autism, ADHD) misread as disengagement
- Speech pattern differences (pauses, non-linear responses) penalized
- Facial expression variations scored lower against neurotypical baselines
- Time limits and lack of clarification disproportionately affect ADHD candidates
March 2025 complaint: The ACLU and Public Justice filed a complaint with the Colorado Civil Rights Division and EEOC against Intuit and HireVue, alleging their AI hiring technology discriminated against a deaf Indigenous woman and others. The complaint specifically noted that HireVue’s system cannot accurately analyze speech of deaf applicants and struggles with non-white speakers (Public Justice, 2025; HR Dive, 2025).
The Aon/ACLU Cases
The ACLU filed EEOC and FTC complaints against Aon, alleging its AI hiring assessments discriminate based on race and disability:
- ADEPT-15 personality test: Questions overlap significantly with clinical autism screening tools — meaning the test effectively screens for autistic traits rather than job-relevant skills
- gridChallenge gamified cognitive assessment: Penalizes atypical processing styles
- Aon’s AI-infused video interviewing system: “Likely to discriminate based on race and disability” (ACLU complaint)
- The ACLU filed on behalf of a biracial autistic job applicant and a similarly situated class
- Aon’s personality test assessed traits like “positivity, emotional awareness, liveliness, ambition, and drive” — not job-related, but correlated with neurotype (Fisher Phillips, 2025; Bloomberg Law, 2025)
Scale of the problem: Nearly 70% of companies and 99% of Fortune 500 companies now use AI tools in their hiring processes.
Resume Screening Bias
A 2024 University of Washington study found that GPT-based resume screening tools rank resumes mentioning autism-related awards or memberships lower than identical applications without such credentials (Glazko et al., 2024, ACM FAccT).
Additional biases against neurodivergent candidates:
- Employment gaps (common with ADHD burnout cycles) penalized by pattern-matching algorithms
- Job-hopping patterns (ADHD-related role changes) flagged as instability
- Non-linear career paths filtered out by algorithms expecting conventional progression
- Unconventional resume formats (ADHD/dyslexia-related) rejected by parsing algorithms
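One concrete way to audit a screener for these biases is paired (counterfactual) testing: score two resumes identical except for a single neurodivergence-related line and flag any score gap. A minimal sketch, where every name (`audit_paired_resumes`, the toy `biased_screener`) is hypothetical and the toy screener exists only to show the audit firing:

```python
# Counterfactual audit sketch (all names hypothetical): score paired resumes
# that differ only in one neurodivergence-related line, then flag screeners
# whose score gap exceeds a tolerance.

def audit_paired_resumes(score_fn, base_resume, marker_line, tolerance=0.05):
    """Return (gap, flagged) for one counterfactual pair.

    score_fn: callable mapping resume text -> score in [0, 1]
              (stands in for the screening model under audit).
    marker_line: the credential to toggle, e.g. an autism-related award.
    """
    with_marker = base_resume + "\n" + marker_line
    gap = score_fn(base_resume) - score_fn(with_marker)
    return gap, gap > tolerance


# Toy screener that penalizes the marker, so the audit visibly fires.
def biased_screener(text):
    return 0.9 - (0.2 if "Autism" in text else 0.0)

gap, flagged = audit_paired_resumes(
    biased_screener,
    "Python developer, 5 years experience",
    "Leadership award, Autism Advocacy Network",
)
print(round(gap, 2), flagged)  # 0.2 gap -> flagged
```

Real audits would run many such pairs across varied base resumes and report the distribution of gaps, not a single comparison.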
LLM Communication Style Bias
Large language models systematically favor neurotypical communication patterns:
- LLMs trained on data “predominantly authored by those who are neurotypical” produce outputs that do not capture neurodivergent thought processes (Zheng et al., 2024, arXiv)
- Neurodivergent users report requiring many rounds of prompting to get outputs matching their communication style
- Negative associations embedded in models: Word embeddings show negative associations between autism-related terms and positive traits like honesty, “despite honesty being a common strength of autistic individuals” (Brandsen et al., 2024, Autism Research; Duke Center for Autism and Brain Development)
- Sentences describing disabilities like “I have autism” produce stronger negative associations than “I am a bank robber” in some embedding models
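Embedding-association findings like these are typically measured with WEAT-style tests: compare a target term’s cosine similarity to a set of positive attributes versus a set of negative ones. A toy sketch with made-up 2-D vectors arranged to mimic the reported bias (real audits would use actual model embeddings, and `association` is an illustrative name):

```python
# WEAT-style association sketch (toy vectors, NOT a real embedding model):
# measure whether a target term sits closer to negative than positive attributes.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(target, positive, negative):
    """Mean cosine to positive attributes minus mean cosine to negative ones.
    A negative result means the target leans toward the negative set."""
    pos = sum(cosine(target, p) for p in positive) / len(positive)
    neg = sum(cosine(target, n) for n in negative) / len(negative)
    return pos - neg

# Toy geometry where the "autism" vector leans toward the negative cluster.
autism = [0.2, -0.9]
positive_attrs = [[1.0, 0.1], [0.9, 0.2]]    # e.g. "honest", "reliable"
negative_attrs = [[0.1, -1.0], [0.3, -0.9]]  # e.g. stigma-laden terms

score = association(autism, positive_attrs, negative_attrs)
print(score < 0)  # True: the toy geometry encodes a negative association
```

The same function applied to real word vectors is essentially how the Brandsen et al. findings above are quantified.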
Facial Recognition and Emotion Detection
- Emotion recognition AI is “particularly bad at labeling the emotions of people with disabilities, people with neurodivergences, and people of color” — entrenching ableist emotional norms (EPIC, EU AI Act comments)
- Neurodivergent facial expressions systematically misclassified as negative emotions
- The EU AI Act explicitly recognizes that “expression of emotions vary considerably across cultures and situations, and even within a single individual”
Emergent Ableism
Kate Glazko and colleagues coined the term “emergent ableism” — discrimination that arises when pattern-matching algorithms encounter human cognitive diversity. Unlike intentional bias, emergent ableism is a structural property of systems trained on neurotypical-majority data (TechPolicy.Press, 2024).
Legislative Protections
| Legislation | Jurisdiction | Key Provision | Neurodivergent Impact |
|---|---|---|---|
| EU AI Act (Article 5(1)(f)) | European Union | Bans emotion recognition AI in workplaces and education (effective Feb 2, 2025) | Protects neurodivergent workers from facial expression scoring |
| Illinois BIPA | Illinois, USA | Requires consent for biometric data collection; private right of action | Protects against unconsented facial geometry analysis in hiring |
| Illinois HB 3773 | Illinois, USA | Prohibits AI that discriminates based on protected classes in employment (effective Jan 1, 2026) | Explicit prohibition on AI hiring discrimination |
| Colorado Anti-Discrimination Act | Colorado, USA | Basis for HireVue/Intuit complaint | Applied to AI video interview discrimination |
| ADA + EEOC Guidance | Federal, USA | Employers liable for discriminatory AI even via third-party vendors | Covers neurodivergent candidates affected by AI screening |
4. Neurodivergent Representation in AI Governance
WEF: Neurodivergent Minds Humanize AI Governance
The World Economic Forum published a 2025 analysis arguing that neurodivergent professionals are essential to AI governance:
“Neurodivergent individuals could be AI’s most important architects, yet most AI frameworks reflect neurotypical assumptions, excluding the very people who could help them break through the noise.” — WEF, July 2025
Key WEF findings:
- Neurodivergent cognition improves AI systems’ accuracy AND enhances ethical/human oversight
- Neurodivergent professionals identify algorithmic biases and logical blind spots that neurotypical reviewers miss
- A Temple University study found neurodivergent professionals produce “diverse annotations that are valuable for employers in digital data annotation work” — enriching training sets and mitigating bias
- Disability:IN 2025 framework reported measurable productivity gains when neurodivergent professionals were embedded in logic-based workflows like data annotation and model validation
Deloitte’s Productivity Research
Deloitte’s research on neurodiversity and innovation establishes:
- Teams with neurodivergent professionals are up to 30% more productive in innovation-focused roles (Deloitte Insights, 2022)
- UiPath internal analysis: Autistic team members were 150% more productive at AI data labeling and training tasks compared to neurotypical peers
- Companies like Microsoft, SAP, and Dell have redesigned hiring pipelines for neurodivergent AI talent
- Auticon (autistic-majority IT consulting firm) and similar companies demonstrate neurodivergent excellence in pattern recognition, data quality, and anomaly detection
“Nothing About Us Without Us” in AI
The disability rights principle “Nothing about us without us” — originating from South African disability activists in the 1990s — is increasingly applied to AI development:
- Mozilla Foundation advocates for disabled people at the table when AI systems are created and deployed: “If disabled people are at the table when AI systems are created and deployed, they can help account for the needs of all” (Mozilla Foundation)
- Autistic Self Advocacy Network (ASAN) calls for neurodivergent inclusion in technology policy
- Recommendations include establishing standing neurodivergent advisory councils with compensated, ongoing roles in AI development (TechPolicy.Press, 2025)
Practical Evidence: Neurodivergent Bias Detection
In AI development contexts, neurodivergent professionals have demonstrated specific advantages:
- Autistic analysts at SAP, IBM, and Auticon have tracked down gender, racial, and socioeconomic biases in training data for hiring and health prediction systems that persisted despite standard bias testing
- Neurodivergent testers identified fairness crises, hypothesis drift, and model collapse that non-neurodivergent teams overlooked
- Pattern recognition differences associated with autism and ADHD enable identification of data anomalies invisible to neurotypical reviewers
- Organizations report higher process accuracy and retention when neurodivergent professionals are embedded in AI validation workflows (Disability:IN, 2025)
The Stress-Testing Argument
Deloitte and others argue that neurodivergent people should be systematically included in AI stress-testing because:
- Different pattern recognition — neurodivergent brains notice different anomalies
- Literal interpretation — autistic testers catch ambiguities that neurotypical users “fill in” unconsciously
- Hyperfocus on inconsistency — ADHD interest-driven attention excels at finding things that “don’t fit”
- Experience with system failures — neurodivergent people have lifelong experience with systems not built for them, giving them intuitive understanding of exclusion patterns
5. Autonomy and Consent
AI Scaffolding vs. AI Substitution
The critical ethical distinction for ADHD developers is between:
| Dimension | Scaffolding (Ethical) | Substitution (Concerning) |
|---|---|---|
| Metaphor | Training wheels that come off | Wheelchair for someone who can walk |
| User’s role | Director, decision-maker | Passenger, approver |
| Learning | Skills develop over time | Skills atrophy over time |
| Dependency | Decreasing with mastery | Increasing over time |
| Autonomy | Enhanced | Diminished |
| AI behavior | High support initially, progressive withdrawal | Constant full assistance regardless of user capability |
Enhanced Cognitive Scaffolding embraces the principle of AI providing high assistance initially but progressively encouraging more user autonomy — the AI challenges the user appropriately and then steps back as the user masters the task (arXiv, 2025). This mirrors effective educational scaffolding where support is “offered when needed, withdrawn when not, and always subject to user override.”
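The progressive-withdrawal principle can be sketched as a simple policy: assistance drops as demonstrated mastery rises, and a user override always wins. All names and thresholds below are illustrative assumptions, not any vendor’s API:

```python
# Progressive-scaffolding sketch (hypothetical policy, not a real product):
# assistance level falls as demonstrated mastery rises; the user's explicit
# override always takes precedence (autonomy over automation).

LEVELS = ["full_solution", "guided_hints", "nudges_only", "off"]

def suggest_level(mastery, override=None):
    """mastery is a score in [0, 1]; override, if given, wins."""
    if override is not None:
        return override          # user autonomy trumps the policy
    if mastery < 0.25:
        return "full_solution"   # high support early on
    if mastery < 0.5:
        return "guided_hints"
    if mastery < 0.75:
        return "nudges_only"
    return "off"                 # scaffold withdrawn at mastery

print(suggest_level(0.1))                            # full_solution
print(suggest_level(0.8))                            # off
print(suggest_level(0.8, override="guided_hints"))   # guided_hints
```

The substitution pattern in the table above is, in these terms, a policy that returns "full_solution" regardless of mastery.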
The 17% Skill Atrophy Finding
Anthropic’s January 2026 randomized controlled trial is the most rigorous evidence for AI-induced skill atrophy:
- Study: 52 mostly junior Python engineers unfamiliar with Trio (async library)
- Finding: AI-assisted group scored 17% lower on comprehension tests (~2 letter grades) than manual coding group
- Worst affected: Debugging ability, followed by code reading and conceptual understanding
- Critical nuance: Six distinct AI interaction patterns identified — three scored under 40% (failing), three scored 65-86% (strong)
- Key insight: How you use AI determines whether you learn or lose; the variable is cognitive engagement, not AI use per se
(Anthropic Research, Jan 2026; Shen & Tamkin, 2026)
Why ADHD developers face elevated risk:
- Stronger temptation to offload (AI provides faster dopamine rewards than struggling)
- “Progressive offloading” maps to ADHD’s path-of-least-resistance tendency
- Already weaker executive function scaffolding means less buffer before dependency
- Less experienced learners (many ADHD people take non-traditional paths) are most susceptible
But also potential protection:
- ADHD developers often already have meta-cognitive strategies for managing external tools
- Comfort with “not knowing” may reduce anxiety-driven offloading
- Interest-driven hyperfocus can override efficiency-seeking shortcuts when engaged
Additional Deskilling Evidence
- Microsoft Research / Carnegie Mellon (2025): Knowledge workers reported AI made tasks cognitively easier, but researchers found they were ceding problem-solving expertise to the system
- Lancet Gastroenterology & Hepatology (2025): Endoscopists using AI had detection rates drop from 28.4% to 22.4% when AI was removed — demonstrating skill atrophy after routine AI dependence
- The Deskilling Paradox (ACM, 2025): Senior engineers gain productivity while junior engineers lose skill development — employment for developers aged 22-25 has fallen ~20% since late 2022 while positions for developers over 26 remain stable
Privacy and Surveillance with Neuroadaptive AI
Emerging neuroadaptive AI systems that detect attention drift, emotional state, and cognitive overload raise acute privacy concerns for ADHD users:
- Brain activity monitoring in workplaces is already happening for attention and fatigue levels (Farahany, 2023)
- ADHD-specific AI tools that track focus patterns, attention cycles, and distraction frequency create intimate cognitive profiles
- Such data could be used for: performance evaluation, insurance risk assessment, employment decisions, or behavioral prediction
- Neurodivergent individuals experience “heightened vulnerability in digital settings” — privacy breaches and behavioral data misuse can exacerbate stigma
Digital body doubling systems designed for ADHD represent both promise and risk: supporting attention as a “fluctuating, relational state” while potentially normalizing continuous cognitive surveillance (arXiv, 2025).
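A privacy-by-design response to this surveillance risk is data minimization: raw attention samples never leave the device, and at most a coarse, user-approved summary is transmitted. A hypothetical sketch (function name and buckets are assumptions, not an existing system):

```python
# Data-minimization sketch (hypothetical design): per-minute focus samples
# stay on device; only a coarse, opt-in session summary is ever shared.

def summarize_on_device(samples, share_opt_in):
    """samples: per-minute focus scores in [0, 1], kept local.
    Returns the coarse summary to transmit, or None if the user opted out."""
    if not share_opt_in:
        return None                     # nothing leaves the device
    avg = sum(samples) / len(samples)
    # Bucket the average so fine-grained attention patterns (medication
    # timing, ADHD severity) cannot be reconstructed downstream.
    return "focused" if avg >= 0.5 else "needs_break"

print(summarize_on_device([0.9, 0.8, 0.7], share_opt_in=True))   # focused
print(summarize_on_device([0.2, 0.1], share_opt_in=True))        # needs_break
print(summarize_on_device([0.9], share_opt_in=False))            # None
```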
Consent Models for Brain-Computer Interfaces
As BCIs move from medical devices to consumer neurotechnology (EEG headbands, focus-tracking wearables), consent frameworks must address:
- Dynamic informed consent — BCI capabilities evolve, requiring ongoing consent processes rather than one-time agreements
- Vulnerability considerations — neurodivergent users may face pressure to adopt BCIs for “productivity enhancement”
- Five core ethical issues in BCI research: specificity, vulnerability, autonomy, comprehensiveness, and uncertainty (PLOS Biology, 2024)
- Privacy by design — neural data should never be collected, stored, or transmitted without consent
- Rethinking consent frameworks to empower users with BCI literacy around collection, use, sharing, and retention of neurodata (Future of Privacy Forum)
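These requirements imply a consent model that is scoped per purpose, revocable at any time, and invalidated when device capabilities change. A hypothetical data-model sketch under those assumptions (not any standard’s schema):

```python
# Dynamic-consent sketch (hypothetical data model): consent is granted per
# purpose, revocable, and voided when the device's capabilities change.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    purposes: dict = field(default_factory=dict)  # purpose -> granted bool
    capability_version: str = "v1"                # version consented to

    def grant(self, purpose):
        self.purposes[purpose] = True

    def revoke(self, purpose):
        self.purposes[purpose] = False

    def allows(self, purpose, capability_version):
        # New capabilities invalidate prior consent: re-consent is required.
        if capability_version != self.capability_version:
            return False
        return self.purposes.get(purpose, False)

c = ConsentRecord()
c.grant("focus_tracking")
print(c.allows("focus_tracking", "v1"))  # True
print(c.allows("focus_tracking", "v2"))  # False: capability changed
c.revoke("focus_tracking")
print(c.allows("focus_tracking", "v1"))  # False: revoked
```

The capability-version check is the code-level analogue of dynamic informed consent: a one-time agreement never covers features that did not exist when it was signed.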
6. The Romanticization Problem
The Toxic Positivity Critique
The “ADHD superpower” narrative crosses from optimism to toxic positivity when it:
- Invalidates genuine suffering — romanticizing life-altering symptoms as superpowers “diminishes the struggles of children and adults fighting against ADHD myths and stigma” (ADDitude Magazine)
- Creates self-blame — “The superpower message suggests you should naturally excel, but when this doesn’t happen, you might blame yourself rather than recognizing that success with ADHD requires specific strategies and support”
- Reinforces ableist standards — disability scholars note superpower language “can unintentionally uphold ableist standards by suggesting that only those who demonstrate exceptional abilities are deserving of accommodations or respect”
- Creates false binaries — forcing a “superpower versus disability” frame that “ignores the spectrum of experiences”
- Enables denial of accommodations — if ADHD is a superpower, why would anyone need help?
The “Superpower Narrative” vs. Lived Reality
| Superpower Claim | Lived Reality |
|---|---|
| “Hyperfocus is a superpower!” | Hyperfocus is involuntary, often misdirected, and followed by crash/burnout |
| “ADHD makes you creative!” | Creativity without ability to execute is frustrating, not empowering |
| “Divergent thinking is an advantage!” | Not when you can’t converge on a solution to ship by deadline |
| “You think differently — that’s special!” | Different thinking in hostile environments produces suffering, not success |
| “ADHD entrepreneurs are fearless!” | Risk-taking from impulsivity is not the same as strategic courage |
The Both/And Framework
The intellectually honest position is a both/and framework: ADHD creates real cognitive advantages AND real suffering simultaneously. This is not contradiction — it is the nature of a cognitive profile optimized for a different environment than the one most people inhabit.
Evidence for genuine strengths:
- Higher divergent thinking scores (fluency, flexibility, originality) associated with ADHD symptoms in general population (Frontiers in Psychiatry, 2022)
- 2025 designer study: ADHD participants generated more novel ideas but fewer high-quality ideas — however, they selected ideas of high novelty AND quality (International Journal of Design Creativity and Innovation, 2026)
- Deliberate mind wandering may explain heightened creativity; spontaneous mind wandering mediates functional impairments (European College of Neuropsychopharmacology Congress, 2025)
- Self-reported strengths: hyperfocus, divergent thinking, non-conformism, high energy, creativity, empathy (Cambridge University Press, Psychological Medicine)
Evidence for genuine impairment:
- Executive dysfunction is not romantic — it means missing deadlines, losing possessions, failing to start known-important tasks
- Emotional dysregulation causes real relationship damage and career consequences
- Time blindness creates genuine life-affecting problems that no amount of reframing resolves
- Most studies find divergent thinking benefits at subclinical ADHD symptom levels but not necessarily at clinical levels — severity matters
Application to the AI Thesis
This knowledge base must resist the romanticization trap. The claim is NOT “ADHD is a superpower and AI proves it.” The claim is:
AI tools restructure the programming environment in ways that reduce the impact of ADHD’s genuine weaknesses (executive dysfunction, working memory limitations, sequential processing demands) while amplifying the expression of ADHD’s genuine strengths (divergent thinking, pattern recognition across domains, comfort with ambiguity, iterative exploration).
This is an environmental change, not a personality validation. The same person may still suffer profoundly with ADHD in non-AI-augmented contexts.
7. Cognitive Liberty
Farahany’s Framework
Nita Farahany, Robinson O. Everett Distinguished Professor of Law and Philosophy at Duke University, articulates cognitive liberty through three interconnected rights in The Battle for Your Brain (St. Martin’s Press, 2023):
| Right | Protection | Threat |
|---|---|---|
| Mental Privacy | Right to keep brain data and mental states private; covers “all mental and affective functions” | Workplace brain monitoring, neuroadaptive AI tracking attention/emotion, consumer neurotechnology data harvesting |
| Freedom of Thought | Right to think without interference, manipulation, or punishment; covers “complex thoughts and visual imagery” | AI systems that infer intent from neural signals, emotion recognition penalizing neurodivergent expression, social credit systems |
| Self-Determination | Positive right to access information about your own brain and make changes to it | Restrictions on neurotechnology access, employer control over cognitive enhancement tools, prohibition of self-directed brain modification |
“Cognitive liberty should be recognized as both a legal and a societal norm and reflected in international human rights law by updating the definition of privacy to include mental privacy, and updating freedom of thought to include freedom from interception, manipulation, and punishment of thoughts.” — Farahany (TIME, 2023)
Implications for ADHD Developers
For ADHD developers using AI tools that may evolve toward neural monitoring:
- Mental Privacy: ADHD-specific AI tools that track focus patterns, attention drift, and distraction frequency generate sensitive cognitive data. Under cognitive liberty, this data must remain under the individual’s control — not the employer’s.
- Freedom of Thought: AI coding assistants that monitor attention to nudge focus represent a form of thought interference. Even benevolent nudging (reminding distracted developers to return to task) risks becoming thought policing if employer-controlled.
- Self-Determination: The right to use or refuse cognitive tools (AI assistants, neurostimulation, focus-tracking wearables) must remain with the individual. Employers cannot mandate brain monitoring as a condition of employment.
Neurorights Legislation
Chile: Constitutional Neurorights Pioneer (2021)
Chile became the first country to constitutionally protect brain-related rights in 2021. The Chilean Supreme Court subsequently issued a unanimous decision ordering Emotiv (US neurotechnology company) to erase brain scanning data collected on former Senator Guido Girardi — establishing that neural data has constitutional protection (Frontiers in Psychology, 2024).
Colorado: US Neural Data Privacy (2024)
HB 24-1058 (signed April 17, 2024; effective August 7, 2024) makes Colorado the first US state with targeted neural data privacy legislation:
- Amends the Colorado Privacy Act to include neural data as “sensitive data”
- Neural data defined as: “information generated by the measurement of the activity of an individual’s central or peripheral nervous systems that can be processed by or with the assistance of a device”
- Biological data that can reveal “health, mental states, emotions, and cognitive functioning” receives heightened protections
- Developed in collaboration with the Neurorights Foundation
(Colorado General Assembly, HB24-1058; Hunton Andrews Kurth)
EU AI Act: Emotion Recognition Ban (2025)
Article 5(1)(f) of the EU AI Act, effective February 2, 2025, prohibits:
“The placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use is intended for medical or safety reasons.”
Prohibited examples include:
- Call centers using webcams and voice recognition to track employee emotions
- Education institutions using emotion recognition to infer student attention
- Emotion recognition AI systems used during recruitment
This directly protects neurodivergent workers whose emotional expressions are systematically misclassified by such systems (Wolters Kluwer, 2025).
Expanding Legislative Landscape
Additional jurisdictions advancing neurorights legislation:
- Brazil, Mexico, Uruguay — active legislative proposals
- Minnesota — BCI-specific privacy legislation
- JMIR (2025) published a comprehensive analysis of the “controversial push for new brain and neurorights” internationally
The ADHD Developer Cognitive Liberty Scenario
Consider this near-future scenario for ADHD developers:
- An employer provides neuroadaptive AI coding tools that use EEG headbands to detect attention drift and automatically adjust task difficulty, provide focus prompts, or restructure workflow
- The tool dramatically improves ADHD developer productivity (environmental accommodation via technology)
- But the tool also generates continuous data about the developer’s attention patterns, emotional states, cognitive load, and executive function fluctuations
- This data could reveal: ADHD severity, medication timing and effectiveness, stress triggers, optimal/suboptimal work periods, and cognitive decline patterns
Cognitive liberty demands:
- The developer controls this data, not the employer
- The developer can refuse neural monitoring without losing access to AI coding tools
- The developer’s neural data cannot be used in performance reviews, promotion decisions, or employment status
- The developer has the right to understand exactly what neural data is collected and how it is processed
- The developer can delete neural data without employment consequences
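These demands can be read as an access-control policy: evaluative employer purposes are categorically denied, and everything else requires the user’s consent. A hypothetical sketch (the rule set is an illustration of the principles above, not a legal implementation):

```python
# Access-policy sketch (hypothetical rules, not a legal implementation):
# neural data requests for evaluative employer purposes are always denied;
# all other access requires the user's affirmative consent.

FORBIDDEN_PURPOSES = {"performance_review", "promotion", "employment_status"}

def allow_neural_data_access(requester, purpose, user_consented):
    if purpose in FORBIDDEN_PURPOSES:
        return False                 # never usable against the user
    if requester == "employer":
        return False                 # the data stays under user control
    return bool(user_consented)      # user-directed, consented access only

print(allow_neural_data_access("employer", "performance_review", True))  # False
print(allow_neural_data_access("user", "self_insight", True))            # True
print(allow_neural_data_access("user", "self_insight", False))           # False
```

Note that consent alone is insufficient here: even a consenting employee cannot authorize evaluative uses, mirroring the view that some neural-data uses should be non-waivable.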
Synthesis: The Ethical Framework for AI as Cognitive Prosthetic
Key Principles
- Environmental change, not personal validation — AI restructures the coding environment; it does not prove ADHD is “better” or “fixed”
- Universal design over individual accommodation — AI tools should be available to all developers (curb-cut effect) rather than requiring disability disclosure for access
- Scaffolding, not substitution — ethical AI use maintains progressive autonomy; the goal is cognitive partnership, not cognitive replacement
- Cognitive liberty as foundation — ADHD developers must retain sovereignty over their neural data, attention patterns, and cognitive tool choices
- Nothing about us without us — neurodivergent developers must be included in AI governance, tool design, bias testing, and policy formation
- Both/and honesty — ADHD creates genuine advantages AND genuine suffering; AI helps with both but cures neither; romanticization is as harmful as pathologization
- Algorithmic accountability — AI hiring tools, LLM outputs, and workplace monitoring systems must be audited for neurodivergent bias, with legal enforcement mechanisms
The Philosophical Position
AI as cognitive prosthetic for ADHD developers is ethically justified when it:
- Reduces environmental barriers (social model)
- Preserves and develops the user’s own capabilities (scaffolding principle)
- Remains under the user’s control (cognitive liberty)
- Is available to all, not just those who disclose disability (universal design)
- Does not generate surveillance data that can be used against the user (mental privacy)
- Is designed with neurodivergent input (participatory design)
AI as cognitive prosthetic becomes ethically problematic when it:
- Replaces rather than augments cognitive capacity (substitution)
- Creates dependency that worsens without the tool (skill atrophy)
- Generates intimate cognitive data controlled by employers (surveillance)
- Is used to justify removing other forms of support (“AI is your accommodation now”)
- Reinforces the idea that ADHD people need to be “fixed” to be productive (medical model overreach)
- Is designed exclusively by neurotypical developers without neurodivergent input (exclusion)
Sources
Primary Research and Legal Documents
- Anthropic Research. (2026). “How AI Assistance Impacts the Formation of Coding Skills.” Shen & Tamkin.
- Brandsen, S. et al. (2024). “Prevalence of bias against neurodivergence-related terms in artificial intelligence language models.” Autism Research. Wiley.
- Farahany, N.A. (2023). The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. St. Martin’s Press.
- Glazko, K. et al. (2024). “Identifying and Improving Disability Bias in GPT-Based Resume Screening.” ACM FAccT Conference.
- Colorado General Assembly. (2024). HB 24-1058: Protect Privacy of Biological Data.
- EU AI Act. (2024). Article 5(1)(f): Prohibited AI Practices — Emotion Recognition.
- EEOC. (2022). “Artificial Intelligence and the ADA.”
- ADA.gov. (2022). “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring.”
Policy and Industry Reports
- World Economic Forum. (2025). “How Neurodivergent Minds Can Humanize AI Governance.”
- Deloitte Insights. (2022). “Neurodiversity and Innovation: Unleashing Innovation with Neuroinclusion.”
- Disability:IN. (2025). Neuroinclusive Management Framework.
- Mozilla Foundation. “Nothing About Us Without Us: Disability Justice and AI.”
- Stanford Social Innovation Review. “The Curb-Cut Effect.”
Academic and Legal Analysis
- den Houting, J. (2019). “Neurodiversity: An insider’s perspective.” Autism, 23(2).
- Zheng, R. et al. (2024). “Exploring Large Language Models Through a Neurodivergent Lens.” arXiv.
- Chilean Supreme Court. (2024). Ruling on Protection of Brain Activity. Frontiers in Psychology.
- Public Justice / ACLU. (2025). Complaint Against Intuit and HireVue.
- ACLU. (2025). EEOC and FTC Complaints Against Aon.
- TechPolicy.Press. (2024). “When Algorithms Learn to Discriminate: The Hidden Crisis of Emergent Ableism.”