
Since OpenAI’s founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They’ve touted the company’s unusual corporate structure as a way of proving the purity of its motives: OpenAI was a nonprofit controlled not by its CEO or its shareholders, but by a board with a single mission — keep humanity safe.

But this week, the news broke that OpenAI will no longer be controlled by the nonprofit board. OpenAI is turning into a full-fledged for-profit benefit corporation. Oh, and CEO Sam Altman, who had previously emphasized that he didn’t have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.

In timing that hardly seems coincidental, chief technology officer Mira Murati announced shortly before the news broke that she was leaving the company. Employees were reportedly so blindsided that many reacted to her abrupt departure with a “WTF” emoji in Slack.

WTF indeed.

  • tal@lemmy.today

    I don’t know whether Altman or the board is better from a leadership standpoint, but I don’t think it makes sense to rely on boards to avoid existential dangers to humanity. A board runs one company. If that board takes an action that is a good move in terms of existential risk to humanity but disadvantageous to the company, the company will tend to be outcompeted and replaced by competitors whose boards don’t act that way. Anyone taking on that role has to be in a position to span multiple companies. I doubt that market regulators in a single market could even do it; that’s getting into international treaty territory.

    The only way a board could do that effectively is if one company — theirs — held a monopoly on all AI development that could pose such a risk.