OpenAI Deletes Ban on Military Use of Its AI, Draws Protesters Opposing AGI Development and Military Involvement

Photo of protesters: Unsplash/Colin Lloyd

Dozens of protesters gathered outside OpenAI headquarters in San Francisco on Monday evening, opposing the company's artificial intelligence (AI) development.

The demonstrations were organized by two groups, Pause AI and No AGI, which urged OpenAI engineers to stop working on advanced AI systems like the chatbot ChatGPT, halt development of artificial intelligence that could surpass human intelligence, known as artificial general intelligence (AGI), and avoid any further military involvement.

The event was organized in response to OpenAI deleting language from its usage policy last month that prohibited the use of its AI for military purposes. Shortly after this change, OpenAI was reported to have begun working with the Pentagon as a client.

An Event Demanding That OpenAI Refuse Military Clients

Event organizers announced on February 12 that they would demand OpenAI terminate its ties with the Pentagon and refuse military clients. "If their ethical and safety boundaries can be revised out of convenience, they cannot be trusted," the event description read.

No AGI's Goals

VentureBeat interviewed protest organizers to understand their goals for the demonstration and what success would mean for each organization. Sam Kitchener, head of No AGI, replied that the organization aims to raise awareness of the dangers of developing AGI, advocating alternatives like whole brain emulation that keep human thought at the forefront of intelligence.

Pause AI's Goals

Holly Elmore, lead organizer of Pause AI (U.S.), told VentureBeat that her group seeks a global, indefinite halt on AGI development until it is deemed safe, emphasizing that ending the relationship with the military is a crucial boundary.

Distrust Around AI Development

The protest occurs at a crucial moment in discussions about AI ethics. OpenAI's changes to its usage policy and collaboration with the Pentagon have ignited debates about AI militarization and its possible repercussions.

The protesters are mainly worried about AGI, which could complete any intellectual task humans can, but at incomprehensible speed and scale. Their concern isn't just job loss or autonomous warfare but also how AGI could fundamentally alter power dynamics and decision-making in society.

Elmore emphasized the necessity of external regulation, citing OpenAI's pattern of retracting promises. She pointed to Sam Altman's conflicting statements about board authority: in June, Altman boasted that the board could fire him, yet in November the board proved unable to do so. Elmore sees the same pattern in the changes to the usage policy and the military contract, questioning the value of policies that do not actually limit what OpenAI can do.

Pause AI and No AGI's Different Approaches

Pause AI and No AGI both aim to stop AGI development but differ in approach: Pause AI is willing to accept AGI if its safety can be ensured, while No AGI opposes its creation outright, citing psychological threats and a loss of meaning in human lives. Both groups indicate that this is probably not their final protest.

Others worried about AI risks can engage through the groups' websites and social media. For now, though, Silicon Valley continues its journey toward an uncertain AI future.
