Ex-OpenAI Board Members Cast Doubt on Altman's Leadership, Say Profit-Driven Pressures Necessitate External Oversight

OpenAI (Photo: Unsplash/Mariia Shalabaieva)

Two former OpenAI board members argue that artificial intelligence (AI) companies cannot be trusted to regulate themselves, making external oversight necessary to ensure accountability.

Helen Toner and Tasha McCauley contend that self-governance cannot withstand profit-driven pressures, adding to the controversy surrounding OpenAI, especially after the recent concerns involving CEO Sam Altman.

AI Self-Governance Versus Profit-Driven Pressures

Former OpenAI board members Toner and McCauley resigned in November amid a turbulent effort to oust CEO Sam Altman, who was quickly reinstated as CEO and rejoined the board about five months later. In a recent op-ed in The Economist, Toner and McCauley reiterated their stance on removing Altman, referencing allegations from senior leaders accusing him of fostering a toxic culture of dishonesty and engaging in behavior that could be described as psychological abuse.

Since Altman's return to the board in March, OpenAI's dedication to safety has been under scrutiny, especially after the company used an AI voice resembling actress Scarlett Johansson in ChatGPT. Toner and McCauley asserted that OpenAI, under Altman's leadership, cannot be relied upon to hold itself accountable. They also expressed concerns about his reinstatement to the board and the departure of key safety-focused personnel, suggesting that these developments bode poorly for OpenAI's attempt at self-governance.

The former board members emphasized the importance of government intervention in creating robust regulatory frameworks. They argued that achieving OpenAI's mission to benefit all of humanity is impossible without external oversight. While they initially believed in OpenAI's capacity for self-governance, their experience showed that self-regulation is not consistently resilient against profit-oriented pressures.

Ex-OpenAI Board Directors Call for AI Regulation

While government regulation is essential, Toner and McCauley recognized that poorly crafted laws could hinder "competition and innovation" by placing excessive burdens on smaller companies. They emphasized the importance of policymakers acting independently from major AI companies when formulating new regulations. They stressed the need for vigilance against loopholes, regulatory barriers that protect early adopters from competition, and the risk of regulatory capture.

Altman's Vision: An All-Knowing AI

Sam Altman has faced additional scrutiny due to other controversies. According to Vox, employees departing from OpenAI encountered extensive and extremely restrictive exit agreements, under which they risked losing their vested equity in the company if they declined to sign. Such a practice is unusual and stringent even by Silicon Valley standards; it essentially forced ex-employees to choose between giving up potentially significant sums of earned equity or agreeing to an indefinite non-disparagement clause.

READ ALSO: OpenAI Modifies Departure Process, Releases Internal Memo About Retracting The Controversial NDA

The reported news sparked considerable turmoil within OpenAI, a private firm valued at around $80 billion. Like many Silicon Valley startups, OpenAI compensates its employees significantly through equity, which employees typically expect to keep once it vests under their contracts.

Altman's vision of AI has also made headlines recently, as he imagines a future where AI becomes deeply embedded in our daily lives. In an interview with MIT Technology Review, Altman described the ideal AI as a "super-competent colleague" that knows every aspect of our lives, including our emails and conversations. He stressed that this AI would be proactive, swiftly handling simple tasks and tackling more complex ones with little need for user input. Altman's vision goes beyond chatbots, envisioning AI that actively accomplishes real-world tasks.

RELATED ARTICLE: OpenAI Disbands Superalignment Team Responsible for Controlling AI Risks Amid Leadership Friction
