OpenResearch's Experiment and the Hidden Dynamics of Power
Examining the Flaws, Conflicts of Interest, and Broader Implications of a Tech-Driven Guaranteed Income Model
I know it’s hard to keep up with everything going on in a week, especially an Olympic opening week. However, I can't help but ponder the results released by OpenResearch, the research arm of OpenAI.
According to the research, the study officially began in 2019. They issued monthly checks to individuals between the ages of 21 and 40 living in Texas and Illinois. To qualify, their family income in 2019 had to be below 300% of the federal poverty line; that would be $77,250 for a family of four, or $37,470 for an individual. The average family income of participants in 2019 was approximately $30,000. One thousand people were randomly assigned to the treatment group and received $1,000 per month, while another 2,000 were part of a control group that received $50 per month.
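The eligibility rule above is simple arithmetic, and the thresholds the article cites follow directly from the published 2019 federal poverty guidelines for the contiguous United States ($12,490 for one person, plus $4,420 per additional household member). A minimal sketch, with hypothetical function names of my own:

```python
# Illustrative sketch of the study's income-eligibility rule.
# The 2019 guideline figures are the published HHS numbers for the
# 48 contiguous states; the functions themselves are hypothetical.

def poverty_line_2019(household_size: int) -> int:
    """2019 federal poverty guideline for the contiguous U.S."""
    return 12_490 + 4_420 * (household_size - 1)

def eligible(family_income_2019: float, household_size: int) -> bool:
    """Qualify if 2019 family income was below 300% of the poverty line."""
    return family_income_2019 < 3 * poverty_line_2019(household_size)

# The thresholds cited in the article:
assert 3 * poverty_line_2019(1) == 37_470   # individual
assert 3 * poverty_line_2019(4) == 77_250   # family of four
```

Note that the $30,000 average participant income sits well below both cutoffs, which is consistent with the sampling description.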
To get the project off the ground, Altman says he spent $14 million of his own money to fund it. Another $10 million came from OpenAI, $15 million from Jack Dorsey's public fund for global COVID-19 relief, and $6.5 million from Sid Sijbrandij, founder of the open-source software platform GitLab. The rest came from foundations, federal grants, and personal and anonymous donations.
The study aligns with the wishes expressed by Sam Altman himself. Earlier this year, Altman also proposed "another kind of" basic income plan, which he called "universal basic computation." In this scenario, Altman said people would get a "portion" of the computational resources of the large language model GPT-7, which they could use as they wished.
However, this is an idea that has been on Altman's mind since his time as president of the startup accelerator Y Combinator. In a blog post nearly a decade ago, he made a unique call to researchers: "We would like to fund a study on basic income," he wrote. "The idea has intrigued me for a while and, although there has been a lot of discussion, there is quite little information about how it would work."
The problem, as almost always happens, is the narrative. Once they released the results, the media was quick to publish headlines like "Here’s What Happens When You Give People Free Money," "A Sam Altman-Backed Group Studied Universal Basic Income For 3 Years. Here’s What They Found" and "What if AI puts everyone out of work? This software company funded research on universal basic income".
Of course, I don't blame them, not only because journalism is going through an era of decline, but because OpenResearch itself billed the experiment as "Sam Altman’s universal basic income experiment (OpenResearch)".
Their initial findings (more are due to be published) reveal that recipients tended to spend the money on basic needs, healthcare, and helping others. Future articles will focus on topics such as children, mobility, crime, and politics.
Theirs is not the first attempt to measure the benefits of a guaranteed income, but the OpenResearch study is one of the largest of several dozen pilot programs around the world. The largest is a 12-year trial in Kenya that began in 2017 and is funded by the philanthropic organization GiveDirectly. Countries like the United States and Canada have also flirted with the concept. Since the 1980s, Alaska residents have received annual payments generated by the state's oil and gas royalties. And last year, California launched its first state-funded guaranteed income test, which will focus on young people who have been in foster care.
But what's wrong with that?
The study isn't about universal basic income
First and foremost, universal basic income (UBI) is not the same as a minimum income scheme. The book "En defensa de la renta básica: Por qué es justa y cómo se financia" (In Defense of Basic Income: Why It Is Just and How It Is Financed) by Jordi Arcarons Bullich, Julen Bollain Urbieta, et al., explains it best:
"However, there is a significant conceptual difference between minimum incomes—or conditional subsidies in general—and basic income, which is expressed in terms of freedom. Minimum income programs help people once they have 'failed.' Moreover, they offer ex post assistance in exchange for some form of consideration for the benefits received. It is precisely this mere ex post assistance that inevitably leads to the loss of effective freedom for those who live on a salary, forcing them to accept the status quo or to submit to forms that are particularly harmful to their interests in the political configuration of the markets. With basic income, on the other hand, being a monetary benefit that would be received by the entire population as a right of citizenship, material existence would be guaranteed from the outset. In this way, the unconditional logic of measures that act ex ante is embraced, preventing a large number of people from being forced to behave as 'submissive supplicants.' He who begs is not free. Moreover, the fact of guaranteeing material existence ex ante increases the bargaining power of the majority of the population who are not strictly rich, by increasing their effective freedom. While basic income is the language of human rights, conditional subsidies are the language of 'help' and compensation for the 'failed'. According to the definition, basic income is a simple and straightforward idea: an income paid by the State, as a right of citizenship, to every full member or resident of society."
In this way, we can see that the OpenResearch experiment does not meet the conditions of UBI: universality, individuality, and unconditionality.
OpenResearch: A clear conflict of interest
The lack of transparency that has characterized OpenAI is once again evident in this study. While they explain how the sampling was conducted, nothing is said about an independent team responsible for peer-reviewing or cross-checking the analysis and results. As we know, a researcher acting as both judge and jury in a scientific study represents a serious conflict of interest. This compromises the objectivity and validity of the results obtained and undermines the fundamental principles of scientific research. To date, Elizabeth Rhodes, who holds a joint doctorate in social work and political science and directed the study for OpenResearch, has not made any statement on the matter.
The TESCREAL narrative is on the rise
TESCREAL is an acronym coined by computer scientist Timnit Gebru and philosopher Émile P. Torres. This term combines several schools of thought that are shaping the future vision of many people, especially in the technological realm.
What does each letter stand for?
Transhumanism: The idea that humanity can improve itself through technology, overcoming biological limitations.
Extropianism: A philosophy that promotes indefinite technological progress and the improvement of the human condition through reason and science.
Singularitarianism: The belief in the coming of a technological singularity, a point in the future where technological progress accelerates exponentially and fundamentally changes human civilization.
Cosmism: A philosophy that seeks the expansion of life and consciousness throughout the cosmos.
Rationalism: The reliance on reason as the primary source of knowledge and the best guide for action.
Effective altruism: A philosophical approach that seeks to maximize positive impact on the world, using evidence and reasoning to identify the most important causes and the most effective interventions.
Longtermism: An ethical stance that prioritizes the interests of future human and non-human generations.
Leaders of the AGI (Artificial General Intelligence) movement subscribe to this set of ideologies, which emerged directly from the modern eugenics movement and have thus inherited similar ideals. These harmful ideals have given rise to systems that perpetuate inequality, centralize power, and harm the very groups that were the target of the first-wave modern eugenics movement.
Am I against UBI?
Of course not; in fact, I also support the abolition of work. My point here is that the owners of tech companies occupy a privileged place in the architecture of contemporary power. These corporations, through their platforms, algorithms, and networks, have not only transformed economic dynamics but have also reconfigured subjectivity and social relations. The algorithms designed by these companies are not neutral; they are imbued with the logic of capital and accumulation. Allowing the managers of these companies to control UBI would perpetuate a power dynamic that has already proven to be deeply unequal and exploitative.
For me, the proposal of some tech magnates to fund and manage UBI may seem, at first glance, an altruistic gesture. However, this approach conceals a strategy of legitimization and control. By positioning themselves as the benefactors of humanity, it seems that these entrepreneurs seek to divert attention from the exploitative practices and value extraction that are the basis of their wealth. This is the true face of cognitive capitalism: a network of biopower that manages life under the guise of benevolence.
UBI managed by tech companies would be nothing more than another mechanism of control. The same entities that have stripped workers of their autonomy through precarization and digital surveillance would now present themselves as those very workers' saviors.
Under this structure, UBI would become a tool to maintain the status quo, where individuals are seen as mere cogs in the machinery of capital. The supposed economic independence provided by UBI would, in reality, be an even greater dependence on the platforms and services of these corporations. UBI must be a tool for emancipation, not for subjugation. This implies democratic and collective control over resources and the distribution of wealth. The management of UBI must be public, transparent, and participatory, allowing communities to define their own needs and priorities.
The struggle for a truly emancipatory UBI is also a struggle for freedom. The growing automation and digitalization should not lead to greater alienation but to an opportunity to reimagine work and life.