3 Replies to “We Had A Good Run”

  1. I ran across an Atlantic article the other day arguing that, when it comes to the elimination of white-collar jobs, it all boils down to how quickly AI is adopted. If adoption is slow, over a decade or more, we can adapt and deal with it. The author mentions the introduction of ATMs, Photoshop, and Excel, technologies that were believed at the time to spell the end for bank tellers, graphic designers, and accountants. (In fact, they actually CREATED more jobs in those fields.)

    But if AI adoption goes fast, we’re well and truly fucked. Also not helping is the fact that the government is completely ignoring this issue.

  2. Shumer’s viral post is getting some blowback.

    As of Friday morning, Shumer’s post has been viewed more than 80 million times on X alone. In a Substack post expanding on his criticisms, Marcus called Shumer’s post “weaponized hype.”

    “The general impression that he conveys is basically that the sky is falling now, and at most, I think what’s really happening is the junior people are under some threat, and I think that threat is actually exaggerated,” Marcus told Business Insider.

    Marcus said that the more likely outcome in the short-term is not that AI will replace junior employees but rather that executives think it’s capable of doing so — and make what could ultimately prove to be a costly gamble.

    “The biggest thing I think junior people have to worry about right now is a misapprehension by the C-suite that these techniques work better than they actually do,” Marcus said.

    “AI can do a small subset of the tasks, and that sometimes speeds up human beings and things like that, but it rarely does all of what a human being can do in any particular domain,” he told Business Insider. “This will change over time, just to be clear. It is likely that AI will replace most human labor over the next century, but it’s not likely that it will over the next year or two.”

    Companies that move too quickly to replace jobs with AI are likely to find themselves in a similar position as Klarna, Marcus said. In 2024, Klarna touted an AI assistant that could do the equivalent work of 700 people. By May 2025, CEO Sebastian Siemiatkowski, long a proponent of AI, said that “as cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality.” He added that “investing in the quality of the human support is the way of the future for us.”

  3. I’m inherently suspicious of apocalyptic talk. We’ve seen predictions of the imminent end of humanity a number of times in history.

    Anyway, right now I think the danger of AI isn’t so much that it becomes smarter than humans. I’ve read some compelling arguments on the limitations of large language models (don’t remember where, I’ll try to link later). I think it’s more a case of “a little education is a dangerous thing.” AI is great at organization, but not much beyond that. As such, the danger is that it could be like that happy-go-lucky ex-chicken farmer, Heinrich Himmler: great at organizing and administrating, but susceptible to whatever noxious political ideas are floating around.

    As for the alleged limitations of LLMs: they seem likely to be a dead end because thought is more than language. That’s my dumbed-down take on it anyway.
