Learning from Timnit Gebru

In this blog post, I share my takeaways from Dr. Timnit Gebru’s talk, ‘Eugenics and the Promise of Utopia through Artificial General Intelligence’.
Author

Kent Canonigo

Published

April 26, 2023

Introduction: Dr. Timnit Gebru

On April 24th, 2023, Dr. Timnit Gebru gave a talk titled “Eugenics and the Promise of Utopia through Artificial General Intelligence” at Middlebury College. Dr. Gebru, a renowned computer scientist who specializes in data mining and algorithmic bias, has long advocated for the ethical use of AI. She is also the founder of DAIR (the Distributed AI Research Institute), an independent AI research institute created to counter the pervasive influence of large tech corporations and the injustices they perpetuate. At Google, where she began co-leading a team focused on ethical artificial intelligence in 2018, Dr. Gebru warned of the dangers of large language models (which underpin the Google search engine) through her findings in the paper “On the Dangers of Stochastic Parrots.” However, she faced pushback from Google executives, who pressured her to withdraw the paper and remove her name and those of other Google team members from the list of co-authors. What ensued was a sequence of back-and-forth disputes that ended her time at Google. Throughout her career, Dr. Gebru has paved the way for the ethical use of AI, particularly in her and Joy Buolamwini’s groundbreaking Gender Shades study, which exposed facial recognition technologies’ inability to accurately recognize Black women. Needless to say, the 2020 protests during the Black Lives Matter movement further emphasized the harms and dangers of facial recognition technology for vulnerable communities of color.

Dr. Gebru’s voice in AI ethics underscores the need to break down the systemic structures that allow AI to exploit communities of color. How long must the development of AI go unchecked while vague language about the transparency of AI technology blurs the line between right and wrong?

Tutorial on Fairness, Accountability, Transparency, and Ethics (FATE) in Computer Vision

In her tutorial at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR), Dr. Gebru brings up a quote by Mimi Onuoha to emphasize that it is “imperative to remember on both sides we have human beings” in data sets involving people as subjects or objects. I found this quote effective because, later on, Dr. Gebru recounts an anecdote from Seeta Peña Gangadharan reflecting on the abstraction of human beings into mathematical equations; the anecdote underscores that the people behind data sets are human subjects, not merely a means of gathering statistics and mathematical computations. As researchers abstract the reality that people live in into simple data, they fail to acknowledge the social and power structures that enable and amplify algorithmic bias. Dr. Gebru cites several studies, such as the Gender Shades project, a wedding-classification model, and gender-classification systems premised on rigid gender norms.

Additionally, the data sets on which models are trained may themselves be biased, which raises important questions about class imbalance. While there is a clear need to diversify data sets, there are fundamental questions about how data should be ethically obtained. Dr. Gebru observes that companies have crossed the boundaries of acceptable data collection, infringing on people’s civil liberties by retrieving online data and using it for AI models (e.g., facial recognition) without their consent. Surveillance is a big part of the push to develop facial recognition technologies, and the idea of policing citizens’ identities is consistent with the idea of supremacy and order in society. Federal regulations must therefore keep pace to check the power of large companies to infringe on users’ consent and privacy. Additionally, it’s important that AI models be designed inclusively, in collaboration with communities of color.
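To make the class-imbalance point concrete, one practice associated with work like Gender Shades is disaggregated evaluation: reporting a model’s performance per subgroup instead of as a single aggregate number. Below is a minimal Python sketch of that idea; the records, labels, and group names are hypothetical, invented purely for illustration, not drawn from any real benchmark.

```python
from collections import Counter, defaultdict

# Hypothetical model outputs: (true_label, predicted_label, subgroup).
records = [
    ("female", "female", "darker-skinned women"),
    ("female", "male",   "darker-skinned women"),
    ("female", "male",   "darker-skinned women"),
    ("male",   "male",   "lighter-skinned men"),
    ("male",   "male",   "lighter-skinned men"),
    ("female", "female", "lighter-skinned women"),
]

# Class balance check: a heavily skewed Counter is an early warning sign
# that aggregate metrics will be dominated by the majority group.
print(Counter(group for _, _, group in records))

# Disaggregated accuracy: per-group hit rates rather than one global number.
hits, totals = defaultdict(int), defaultdict(int)
for true, pred, group in records:
    totals[group] += 1
    hits[group] += int(true == pred)

for group, n in totals.items():
    print(f"{group}: {hits[group] / n:.0%} accuracy ({n} samples)")
```

In this toy example the aggregate accuracy is about 67%, but the disaggregated view shows one group at roughly 33%; this is exactly the kind of disparity a single global number hides and that Gender Shades documented in commercial systems.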

Lastly, Dr. Gebru discusses the ethical implications of computer vision as it’s used today. The continuing record of corporations publishing flawed algorithms that harm the historically marginalized emphasizes the need to regulate these algorithms so that they are more ethically aligned. It’s also important to note that the lack of procedures for testing algorithms and systems is not exclusive to AI or tech; the same failure can be traced through the history of medical research. The development of computer vision benefits the large corporations that have the manpower, resources, and finances to support the systems of weaponry, policing, and oppression run by the military and government agencies.

tl;dr: Computer vision as it’s used today bends the designer’s intent toward perpetuating the systems of oppression we live in, furthering the harms done to those historically marginalized and excluded.

Questions

Here is the question I would like to ask Dr. Gebru:

What role should the designer play in the way AI processes are created, maintained, and commercialized? I ask this in the context of design, specifically the design of AI and its role in perpetuating systems that entrench racial and gender biases.

Part 2: Dr. Timnit Gebru’s Talk at Middlebury College

In-Class Virtual Talk

As a precursor to Dr. Gebru’s virtual talk at Middlebury College, our class had a chance to speak with her through questions my peers had written up. Some important takeaways from this brief introduction to Dr. Gebru were that:

  1. Large corporations have no incentive when publishing flawed algorithms other than maximizing profit.
  2. The development of larger and larger language models pushes all company resources toward the next “big” thing rather than toward smaller alternatives.
  3. The military is the BIGGEST investor in AI. So, when the foundation of what you’re building is driven by military applications, the main motivation becomes: how can we better weaponize and strategize our military?
  4. The rapid development of AI is grounded in worker exploitation, with laborers doing the ‘classifying’ of data sets for little to no compensation.
  5. Human agency to participate in online activity is taken away from us: either you participate, or you get flagged as ‘high-risk’ for not participating.

Listed above are just some of the takeaways I found important in grounding what the virtual talk would feel like later that night. I found it most shocking, yet so understandable, that the military, of all institutions, would be the biggest financial supporter of the development of AI technologies. It just made sense that the U.S., whose federal spending on national defense accounts for roughly one-sixth of total national spending, is pushing billions of dollars into technologies that remain largely unregulated, maximizing AI applications in warfare, surveillance, and autonomous vehicles.

Hillcrest Virtual Talk

Dr. Gebru’s virtual talk addresses a central question about the development of Artificial General Intelligence (AGI): who decides what utopia looks like, and for whom? AGI as a term is not well defined, and in generalizing what AGI could do for humanity, its development comes to resemble the building of some sort of god. She argues that the utopian rhetoric surrounding the advancement of AGI, as promoted by key figures in the tech industry such as Sam Altman and Elon Musk, is grounded in dangerous eugenics-based ideologies: a second wave of eugenics she calls the TESCREAL bundle. As a result, what AGI could actually be is exaggerated into something out of a sci-fi movie, distracting from the real problems of worker exploitation, fraudulent data sets, and concentrations of power.

Dr. Gebru opens the talk with the history and myths of eugenics. Before her talk, if I heard the word ‘eugenics’, I would think of the Holocaust, the Nazis, and the forced sterilization of disabled people, Black people, and other vulnerable communities. However, eugenics did not begin with the Nazis, nor did it simply go away after WWII. She explains that in the first wave of eugenics, powerful statisticians, physicians, and scientists paved the way in defining eugenics as a means of improving the “human stock.” Eugenics can be classified into two categories: positive and negative. Negative eugenics is the idea of getting rid of undesirable traits (e.g., the Nazis in WWII). The rhetoric of eugenics then reinvented itself, forming a second wave that offered supposedly positive ways to “improve the human stock”: rather than sterilizing groups of people, scientific advancements would give people the ability to “design” their own children. I thought about the movie Gattaca, set in a dystopian future where genomic interventions have become fully reliable for “designing” the genetic makeup of children; the film exposes the discrimination between its “legacy” humans and “post”-humans and explores the ideology of genetic determinism that traces back to Francis Galton.

Dr. Gebru then elaborates on the reinvention of eugenics through the TESCREAL bundle (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism). While I won’t define each ideology in the bundle, it’s important to note that they all have a direct line back to first-wave eugenics, with motivations of “transcending” humanity. Additionally, they all want to “radically transform and modify human organism through AI.” By “transcending” humanity, AGI would create a utopian paradise in which it can do anything for everyone: again, a god.

However, there are major ethical and practical issues with building an “AGI paradise,” as Dr. Gebru notes:

  1. An AGI paradise in the form of ever-larger language models centralizes power and monopolizes every service offered through the AGI utopia. This centralization risks further separating the classes: the rich can see real doctors, while the poor are left to this AGI paradise to solve their problems. Yet even an AGI utopia cannot fix, for instance, the plight of climate refugees forcibly displaced from their homes by circumstances beyond their control, driven by the burning of fossil fuels and the emission of greenhouse gases. An AGI utopia treats every problem anyone has as fixable, ignoring historically rooted structures of oppression.
  2. The labor and computing costs of developing an AGI paradise will exceed what is humanly possible. Horrendous worker exploitation will continue, and the demand for ever-larger data sets vacuums funding away from smaller institutions doing real work to improve their local communities. Furthermore, an AGI paradise is the opposite of a well-scoped system with narrow, well-defined use cases.
  3. “Safe AGI” is unsustainable; it is rhetorical manipulation used to justify developing an AGI supposedly aligned with humanity’s values. Additionally, a universal, one-model-fits-all design approach oversimplifies what different users actually need.
  4. The unregulated field of AGI is re-visioning technology: what was once considered non-normal becomes normal. There is also an assumed “truth” in the tech industry that whatever is put out will and should work, even though no proof is ever demanded that it actually does.

Dr. Gebru’s talk left me pondering for the rest of the night how powerful language and rhetoric are in shifting public understanding of AGI and its potential uses. While it’s heavily controversial to claim that the development of AGI is in line with eugenics, I agree with her argument to some extent. Regulation of AGI development and accountability for Big Tech are long overdue, and it’s quite surprising how long corporations have evaded responsibility and punishment for simply unacceptable actions, such as stealing data from small artists. One thing people should take from Dr. Gebru’s discussion is that there is no real future in suggesting that AGI could transcend humanity and fix every problem out there; we should reclaim agency over what is considered “good” for humanity’s advancement. As Dr. Gebru put it, “It is not an equitable place for corporations to choose what type of problems they want their language models to tackle.”

Reflection on Dr. Gebru’s Work

From this experience, I grew to appreciate Dr. Gebru’s work even more, beyond what I once believed computer science engineers do: a 9-5 job building software without a care for its applications. Learning from Dr. Gebru has taught me the importance of thinking critically about who benefits from software and why they would want to pursue its development. Diving into the intersection of ethics and technology has been a personally frustrating experience, because there is just so much work to do. Yet it has definitely piqued my interest in being more intentional with large language models and in informing myself of the ethical implications of AGI.