I'd been planning some kind of deep-dive take on the firing/rehiring of Sam Altman at OpenAI a week or so ago, as it's become a big tasty stew of ludicrous hype, pseudo-religions, blinkered philosophies, charismatic leaders, bad hair, "really, THAT guy?" moments, vast amounts of money and influence, and some actually very useful if flawed technology and software - but now that the dust has settled I've found a few articles that do a much better job of it than I ever would:
A lot of the conflict at OpenAI seems to be about differing views of the power of AI and how best to approach and develop it, which Silicon Valley hype has turned into breathless religiosity. Emily Gorcenski has a good deep backgrounder, "Making God", which goes back to the Middle Ages and the United States' sense of God-given Manifest Destiny before tying it all up with the new extra-nihilistic versions of "The Californian Ideology":
GenAI solved two challenges that other Singularity-aligned technology failed to address: commercial viability and real-world relevance. The only thing standing in its way is a relatively small and disempowered group of responsible technology protestants, who may yet possess enough gravitas to impede the technology’s unrestricted adoption. It’s not that the general public isn’t concerned about AI risk. It’s that their concerns are largely misguided, worrying more about human extinction and less about programmed social inequality.
Molly White has a more specific backgrounder on the "Effective Altruism" movement, which would be funnier if it weren't so popular among so many wealthy, powerful people:
Both ideologies ostensibly center on improving the fate of humanity, offering anyone who adopts the label an easy way to brand themselves as a deep-thinking do-gooder. At the most surface level, both sound reasonable. Who wouldn’t want to be effective in their altruism, after all? And surely it’s just a simple fact that technological development would accelerate given that newer advances build off the old, right?
But scratching the surface of both reveals their true form: a twisted morass of Silicon Valley techno-utopianism, inflated egos, and greed.
Same as it always was.
Max Read gives a detailed overview of everything that's happened so far in "The Interested Normie's Guide To OpenAI", with some extra background on the key players and the technologies, delivered with an appropriate level of sass and snark:
There are plenty of places you can go to ascertain some of these facts. But only Read Max, writing from its brother-in-law’s place in Atlanta, can communicate to you the vibe. Are the people involved cool? (No.) Is anything about this important? (Not really.) Is everything that has happened so far actually very funny? (Yes.)
That article was mentioned in an even breezier and snarkier summary from Rusty Foster at Today in Tabs:
Altman has had a career that any Silicon Valley exec would absolutely kill for, and he hasn’t accomplished a single worthwhile thing. He founded one failed company, ran a factory that bought predatory amounts of equity for virtually nothing from every Stanford dropout with an idea for software to replace something Mommy used to do, then founded a company to build a product that he himself believes could eventually destroy humanity. He managed to do such a bad job organizing that company that he nearly got it taken away from him by the guy who runs the Christian babies Q&A site and Joseph Gordon-Levitt’s wife before he was rescued by a nightmare blunt rotation of Microsoft, Marc Benioff’s Number One Boy, and Larry Fucking Summers. And the Quora guy even got to stick around!
Ever since the days of suck.com (RIP), some of the best content on the internet has been snarky takes on current events. I also love deep, nerdy specialist dives that synthesize multiple historical trends to bring a different angle to current situations. So far, Large Language Models haven't been very good at either kind of thing. Until they are - if they ever are - I think at least my corner of the internet is pretty safe.