
Black in AI Safety and Ethics (BASE) – Paper Reading Vol 3.

This paper reading session will examine the systemic risks posed by incremental advances in artificial intelligence, developing the concept of "gradual disempowerment" in contrast to the abrupt takeover scenarios commonly discussed in AI safety. The paper analyzes how even incremental improvements in AI capabilities can undermine human influence over the large-scale systems that society depends on, including the economy, culture, and nation-states. As AI increasingly replaces human labor and cognition in these domains, it can weaken both explicit human control mechanisms (such as voting and consumer choice) and the implicit alignment with human interests that often arises from societal systems' reliance on human participation to function.

For those in US time zones, the paper reading takes place on Thursday, October 30, at 5:30 PM PST. The paper is Designing Artificial Intelligence: Exploring Inclusion, Diversity, Equity, Accessibility, and Safety in Human-Centric Emerging Technologies.

To register: https://luma.com/viugpgnx

Next
November 6

Black in AI Safety and Ethics (BASE) – Paper Reading Vol 4.