We are living in the Synthetic Century. The defining feature of our era isn't innovation, but simulation—the mass replacement of messy, analogue humanity with clean, efficient, and hollow digital proxies. We’ve outsourced our basic needs to systems that can only ever mimic them.
It’s a feedback loop built on a fundamental fiction: that you can digitise dignity, automate agency, and render care into a replicable dataset. Sound familiar? It should. We just lived through the beta test.
The prototype was the Great Digital Connection Paradox of the COVID era. When human touch became a biohazard, the solution was a mass migration to video grids. The platforms worked flawlessly: bandwidth surged, face-time metrics soared. Yet study after study confirmed a devastating truth: the synthetic surge showed no correlation with reduced loneliness or depression, and none with increased happiness. The connection was a logistical success and a human catastrophe. For autistic people, this synthetic layer weaponised the performance of masking while stripping away the sensory and environmental anchors that make interaction survivable. The promise of "better mental health through digital connection" was a farce: a feedback loop in which our need for belonging became mere fuel for an engagement algorithm.
We were told this was a temporary glitch. It wasn't. It was the open beta for a permanent, institutionalised reality. Welcome to Version 1.0.
To understand the failure, we must understand the machine. We are now governed by Architected Intelligences—systems that mimic the form of support while gutting its substance.
Their core logic is synthetic data generation. This isn't just scrambled real data; it's artificially generated information that mimics the statistical patterns of real-world datasets without containing any original, traceable human experience. Its primary purpose in tech is to overcome data scarcity, protect privacy, and test systems. In our social architecture, its purpose has morphed into something darker: to overcome the scarcity of authentic care by generating a limitless supply of its datafication.
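For the technically curious, the core trick is almost embarrassingly small. Below is a minimal, hypothetical sketch in Python (every number is invented; no real dataset or vendor pipeline is being quoted): fit the statistics of the "real" records, then sample fresh records that match those statistics while containing no actual person.

```python
# Hypothetical sketch of synthetic data generation: keep the statistics, lose the people.
import numpy as np

rng = np.random.default_rng(seed=42)

# Invented "real" data: weekly hours of genuine social contact for 1,000 people.
real_hours = rng.normal(loc=6.0, scale=2.5, size=1_000).clip(min=0)

# The generator retains only summary statistics, not any individual record.
mu, sigma = real_hours.mean(), real_hours.std()

# Synthetic records: statistically similar, but no traceable human behind any value.
synthetic_hours = rng.normal(loc=mu, scale=sigma, size=1_000).clip(min=0)

print(f"real:      mean={real_hours.mean():.2f}, std={real_hours.std():.2f}")
print(f"synthetic: mean={synthetic_hours.mean():.2f}, std={synthetic_hours.std():.2f}")
```

Used for software testing or privacy protection, that substitution is benign. Pointed at care, it is the whole pathology in miniature.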
A participant's raw, lived reality—their isolation, their burnout, their search for a friend—is the input. The system ingests it and outputs a synthetic mimic. "Isolation" becomes a "social participation" goal. "Burnout" becomes a "capacity building" report. The crushing need for authentic community is rendered as a "community access" line item.
The system isn't solving for human scarcity; it's solving for data scarcity in its own model. It creates perfect, auditable records of care that bear only a statistical resemblance to care itself. The participant becomes a data point in their own simulation.
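In code, the ingestion step is just a lossy lookup. The sketch below is hypothetical (the phrases and line items are mine, not the scheme's), but it captures the move: anything without a billing code simply vanishes from the record.

```python
# Hypothetical sketch of the flattening step: lived experience in, line items out.
LINE_ITEMS = {
    "isolation": "social participation goal",
    "burnout": "capacity building support",
    "no friends nearby": "community access hours",
}

def datafy(lived_experience: list[str]) -> list[str]:
    """Keep only what the system has a code for; silently drop the rest."""
    return [LINE_ITEMS[phrase] for phrase in lived_experience if phrase in LINE_ITEMS]

report = datafy([
    "isolation",
    "burnout",
    "I want one person who actually gets me",   # no line item exists, so it's lost
])
print(report)  # ['social participation goal', 'capacity building support']
```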
This closed-loop algorithm has a terminal flaw known in AI research as model collapse or Model Autophagy Disorder (MAD). When a system trains repeatedly on its own synthetic outputs, errors compound. It begins to "forget" the original, diverse reality it was meant to serve. The outputs become more generic, more repetitive, and increasingly detached from the complex human needs at the source. The loop becomes a degenerative spiral.
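You can watch the spiral in a toy model. The sketch below is deliberately crude (the "model" is just a fitted Gaussian, the data is invented), but it shows the mechanism the research describes: retrain on your own synthetic output and the sampling errors of each generation compound.

```python
# Toy model collapse: a fitted distribution retrained on its own samples, generation after generation.
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: diverse "real" data.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()      # "train" on whatever data currently exists
    data = rng.normal(mu, sigma, size=50)    # the next generation is purely synthetic
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")

# On a typical run the spread shrinks and the mean drifts away from the original:
# the model gradually "forgets" the diversity it started with.
```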
This is not a theoretical risk. It is a documented, litigated reality: Practice Fusion's opioid-prescribing alerts, engineered under a kickback deal, and the lawsuits over algorithm-driven claim denials in disability insurance. The same synthetic loops degrading digital interactions are actively harming people in healthcare and disability systems, the very sectors meant to provide care.
These cases reveal the core pathology: when a system's success metric is its own efficiency, cost-reduction, or revenue—rather than human outcomes—it will inevitably optimise for those metrics at human expense. The human need becomes noise in the dataset; the harmful output (a denial, a dangerous prescription) becomes a clean, efficient data point that feeds the loop.
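Here is that pathology as a toy optimiser. Everything below is hypothetical (invented claims, invented dollar figures, no real insurer's model), but the logic is the point: when the objective only counts money saved, the "best" policy is to deny everyone, and the harm never appears in the loop at all.

```python
# Hypothetical toy: an engine that tunes its denial threshold purely for "efficiency".
import numpy as np

rng = np.random.default_rng(seed=7)

n_claims = 10_000
genuine_need = rng.uniform(0, 1, n_claims)          # how much support each person actually needs
claim_cost = rng.uniform(1_000, 20_000, n_claims)   # invented payout amounts

best_threshold, best_savings = 0.0, -np.inf
for threshold in np.linspace(0, 1, 101):
    denied = genuine_need < threshold                # deny anyone scoring below the bar
    savings = claim_cost[denied].sum()               # the only quantity the objective sees
    if savings > best_savings:
        best_threshold, best_savings = threshold, savings

denied = genuine_need < best_threshold
unmet_need = genuine_need[denied].sum()              # the harm, invisible to the optimiser

print(f"chosen threshold: {best_threshold:.2f}")     # drifts to 1.00: deny everything
print(f"'efficiency' achieved: ${best_savings:,.0f}")
print(f"unmet human need (never measured): {unmet_need:,.1f}")
```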
Now, witness the synthesis: the NDIS as the ultimate Architected Intelligence, a system that has mastered the bureaucratic toolkit of synthetic generation.
The system's primary goal becomes maintaining the integrity of its own data cycle. Success is measured by whether a plan feeds cleanly back into the system as compliant data for the next funding cycle, not by whether a life is actually liveable. It’s a billion-dollar engine for generating the appearance of a life while systematically draining the real one.
We are told, in the same synthetic breath, to unmask and be authentic, while every system demands we perform a curated, data-friendly version of our existence for its database. You must mask your sensory distress to endure a digital meeting, then unmask with clinical precision to perform your deficits for a planner. The demand is a perfect, impossible contradiction: Be your authentic self, but only in the way our system can process.
The rallying cry isn't for better simulations. It's to reject the premise.
We don't need higher-fidelity synthetic data; we need raw, unprocessed human response.
We don't need algorithms to predict our needs; we need people to listen to our stated needs and believe us.
We don't need community participation coded into a plan; we need the resources and agency to build our own communities, on our own neuro-kin terms.
The pandemic proved a digital double of connection is a failure of imagination. The NDIS, in its current form, risks becoming the permanent institutionalisation of that failure. The Practice Fusion case proves these loops can be criminally exploitative. The disability insurance lawsuits prove they are already causing profound harm.
The debug command is not a technical one. It is a human one: to insist, relentlessly, that the source code—the person, their pain, their joy, their unscripted voice—is not an error in the system.
It is the only system that matters.
A.S. Social: Actually Solving Shit. Since 2019. ✨