The Synthetic Century: The Problem with NDIS & Data Generation

The Synthetic Century: When Human Systems Become Inhuman Algorithms

We are living in the Synthetic Century. The defining feature of our era isn't innovation, but simulation—the mass replacement of messy, analogue humanity with clean, efficient, and hollow digital proxies. We’ve outsourced our basic needs to systems that can only ever mimic them.

It’s a feedback loop built on a fundamental fiction: that you can digitise dignity, automate agency, and render care into a replicable dataset. Sound familiar? It should. We just lived through the beta test.

The prototype was the Great Digital Connection Paradox of the COVID era. When human touch became a biohazard, the solution was a mass migration to video grids. The platforms worked flawlessly—bandwidth surged, face-time metrics soared—yet study after study confirmed a devastating truth: the synthetic surge brought no corresponding reduction in loneliness or depression, and no lift in happiness. The connection was a logistical success and a human catastrophe. For autistic people, this synthetic layer weaponised the performance of masking while stripping away the sensory and environmental anchors that make interaction survivable. The promise of "better mental health through digital connection" was a farce; it was a feedback loop where our need for belonging became mere fuel for an engagement algorithm.

We were told this was a temporary glitch. It wasn't. It was the open beta for a permanent, institutionalised reality. Welcome to Version 1.0.

Part I: The Anatomy of a Synthetic System

To understand the failure, we must understand the machine. We are now governed by Architected Intelligences—systems that mimic the form of support while gutting its substance.

Their core logic is synthetic data generation. This isn't just scrambled real data; it's artificially generated information that mimics the statistical patterns of real-world datasets without containing any original, traceable human experience. Its primary purpose in tech is to overcome data scarcity, protect privacy, and test systems. In our social architecture, its purpose has morphed into something darker: to overcome the scarcity of authentic care by generating a limitless supply of its datafied substitute.
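
To make the mechanism concrete, here is a minimal, hypothetical sketch of synthetic data generation in Python: fit summary statistics to a tiny invented dataset, then sample new records that match its statistical shape while containing no original entry. Every number and variable name below is made up for illustration.

import numpy as np

# Invented toy data: weekly "support hours" recorded for five people.
real_hours = np.array([4.0, 6.5, 3.0, 8.0, 5.5])

# Fit simple summary statistics to the real records.
mu, sigma = real_hours.mean(), real_hours.std()

# Generate synthetic records: they mimic the statistical shape of the
# originals, but no row corresponds to any actual person.
rng = np.random.default_rng(seed=0)
synthetic_hours = rng.normal(loc=mu, scale=sigma, size=1000)

print(f"real mean {mu:.2f} vs synthetic mean {synthetic_hours.mean():.2f}")
print(f"real spread {sigma:.2f} vs synthetic spread {synthetic_hours.std():.2f}")

The synthetic rows are statistically plausible by construction, and plausibility, not provenance, is all the technique optimises for.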

A participant's raw, lived reality—their isolation, their burnout, their search for a friend—is the input. The system ingests it and outputs a synthetic mimic. "Isolation" becomes a "social participation" goal. "Burnout" becomes a "capacity building" report. The crushing need for authentic community is rendered as a "community access" line item.

The system isn't solving for human scarcity; it's solving for data scarcity in its own model. It creates perfect, auditable records of care that bear only a statistical resemblance to care itself. The participant becomes a data point in their own simulation.

This closed-loop algorithm has a terminal flaw known in AI research as model collapse or Model Autophagy Disorder (MAD). When a system trains repeatedly on its own synthetic outputs, errors compound. It begins to "forget" the original, diverse reality it was meant to serve. The outputs become more generic, more repetitive, and increasingly detached from the complex human needs at the source. The loop becomes a degenerative spiral.
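
That spiral can be shown with a toy, hypothetical continuation of the sketch above: refit a simple statistical model to its own samples, generation after generation, and the spread of what it can describe tends to collapse. Again, the numbers are invented; only the direction of travel matters.

import numpy as np

rng = np.random.default_rng(seed=1)

# Generation zero: a "real" population with genuine diversity of need.
mu, sigma = 5.0, 2.0
sample_size = 20

for generation in range(1, 31):
    # The next model is trained only on the previous model's outputs...
    samples = rng.normal(mu, sigma, size=sample_size)
    # ...and refits itself to those synthetic samples.
    mu, sigma = samples.mean(), samples.std()

# With small samples, the estimated spread tends to decay toward zero:
# the loop gradually "forgets" the diversity it started with.
print(f"spread after 30 self-trained generations: {sigma:.3f} (started at 2.0)")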

Part II: The Proof is in the Pathology—Case Studies in Algorithmic Harm

This is not a theoretical risk. It is a documented, litigated reality. The same synthetic loops degrading digital interactions are actively harming people in healthcare and disability systems—the very sectors meant to provide care.

Algorithmic Harm: Documented Case Studies

When synthetic systems prioritise their own metrics over human outcomes, the feedback loops become engines of documented harm.

Practice Fusion (2020)
An EHR vendor tweaked its Clinical Decision Support software after taking kickbacks from a drug company.
The algorithm was programmed to generate prompts encouraging unnecessary opioid prescriptions.
THE LOOP: Financial incentive corrupts the data (medical advice), which drives profitable behaviour, which justifies the system's use. Health is exploited for revenue.

Wit v. United Behavioral Health
The insurer used internally developed, algorithmic guidelines to define "medical necessity" for mental health claims.
The court found the guidelines focused on acute cost-saving, not effective treatment, leading to systematic denials.
THE LOOP: The system's own narrow rules create a "standard of care" that overrides clinical judgement, denying help to create the data (lower costs) that validates the algorithm.

Disability Insurance Algorithms
Claims systems use triggers (surveillance, diagnosis codes) to automatically flag and deny long-term disability claims.
Investigations suggest a systematic bias toward denial to meet cost-saving metrics, with appeals judged by the same logic.
THE LOOP: The algorithm's goal shifts from assessment to denial generation. Each denial creates financial data that justifies the algorithm's existence, trapping claimants in a cycle of rejection.

These cases reveal the core pathology: when a system's success metric is its own efficiency, cost-reduction, or revenue—rather than human outcomes—it will inevitably optimise for those metrics at human expense. The human need becomes noise in the dataset; the harmful output (a denial, a dangerous prescription) becomes a clean, efficient data point that feeds the loop.

Part III: The NDIS as Architected Intelligence

Now, witness the synthesis: the NDIS, Australia's National Disability Insurance Scheme, as the ultimate Architected Intelligence. It has mastered the bureaucratic toolkit of synthetic generation:

  • The Rules Engine: Human need is processed through rigid, "if X, then fund Y" flowcharts. A person's complexity is reduced to a service code (see the sketch after this list).
  • The Generative Model: It learns the pattern of "standard" plans and replicates them, producing outputs that look statistically correct on paper—meeting KPIs for "supported" living while missing the point of a lived life.
  • Entity Cloning: Successful (read: compliant) participant plans become templates. New individuals are force-fit into pre-existing boxes, their unique needs sanded down to fit the model.
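
Here is the "Rules Engine" bullet above rendered as a hypothetical Python sketch. Every keyword, service code, and rule is invented for illustration and is not drawn from any real NDIS system or price guide.

# Hypothetical rules engine: keywords and service codes invented for this sketch.
RULES = {
    "isolat": "SOC-PART-01",   # anything about isolation -> "social participation"
    "friend": "SOC-PART-01",   # wanting a friend -> the same line item
    "burnout": "CAP-BUILD-02", # burnout -> a "capacity building" report
}

def plan_from_lived_reality(statement: str) -> list[str]:
    """Reduce a free-text account of someone's life to whichever
    service codes the rules happen to recognise."""
    text = statement.lower()
    codes = {code for keyword, code in RULES.items() if keyword in text}
    return sorted(codes) or ["NO_FUNDABLE_NEED_IDENTIFIED"]

print(plan_from_lived_reality(
    "I am exhausted, isolated, and I just want one friend who gets me."
))
# Prints ['SOC-PART-01']: the exhaustion, the longing, and the person
# themselves are all discarded as noise the model cannot ingest.

The return value is clean, compliant, and auditable, which is exactly why the system prefers it to the sentence that produced it.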

The system's primary goal becomes maintaining the integrity of its own data cycle. Success is measured by whether a plan feeds cleanly back into the system as compliant data for the next funding cycle, not by whether a life is actually livable. It’s a billion-dollar engine for generating the appearance of a life while systematically draining the real one.

We are told, in the same synthetic breath, to unmask and be authentic, while every system demands we perform a curated, data-friendly version of our existence for its database. You must mask your sensory distress to endure a digital meeting, then unmask with clinical precision to perform your deficits for a planner. The demand is a perfect, impossible contradiction: Be your authentic self, but only in the way our system can process.

Conclusion: The Debug Command

The rallying cry isn't for better simulations. It's to reject the premise.

We don't need higher-fidelity synthetic data; we need raw, unprocessed human response.

We don't need algorithms to predict our needs; we need people to listen to our stated needs and believe us.

We don't need community participation coded into a plan; we need the resources and agency to build our own communities, on our own neuro-kin terms.

The pandemic proved that a digital double of connection is a failure of imagination. The NDIS, in its current form, risks becoming the permanent institutionalisation of that failure. The Practice Fusion case proves these loops can be criminally exploitative. The disability insurance lawsuits prove they are already causing profound harm.

The debug command is not a technical one. It is a human one: to insist, relentlessly, that the source code—the person, their pain, their joy, their unscripted voice—is not an error in the system.

It is the only system that matters.

A.S. Social: Actually Solving Shit. Since 2019.

Sam Wall

Director, A.S Social

Honoured to guest write for the A.S Social blog.