Everyone's Pretending AI Isn't Changing Everything
Remember that feeling when you first saw ChatGPT write a coherent essay? That slight chill down your spine, the sudden realisation the ground beneath your professional feet might not be as solid as you thought? We're all living through that feeling now, stretched out over years instead of moments. It's exhausting. It's exhilarating. And most of all, it's relentless.
Heraclitus got there first, of course. Twenty-five hundred years ago, he told us we can't step into the same river twice. But here's what the old Greek didn't mention: sometimes the river speeds up. Sometimes it becomes a torrent. And right now, with AI reshaping everything from how we write emails to how we think about thinking itself, we're not just unable to step in the same river twice. We're struggling to find our footing at all.
We tell ourselves that this is just another technological shift, like the internet or mobile phones. We adapted to those, didn't we? But this feels different because it strikes at something more fundamental. When a machine can write your reports, code your programmes, create your art and increasingly even reason through your problems, the question isn't just about job security. It's about identity. What makes us uniquely valuable when our unique abilities keep getting replicated and surpassed?
I've been watching many tie themselves in knots over this. One day we're extolling the promise, the next the risks, and the day after that? Nothing at all, to be fair. The irony would be funny if it weren't so revealing. We're simultaneously racing toward AI integration and desperately trying to maintain the old boundaries. We want the productivity gains without the existential crisis.
We want transformation without actually changing. But change doesn't negotiate. It doesn't care about our comfort zones or our carefully constructed professional identities. And here's the thing nobody wants to admit: our resistance to AI isn't really about the technology. It's about what psychologists call loss aversion. We're wired to feel potential losses twice as intensely as equivalent gains.
So when we look at AI, we don't see the possibilities first. We see what we might lose. Our expertise. Our relevance. Our sense of being special. This psychological weight is why so many of us are stuck in what I call the middle ground of doom.
We're not fully embracing AI's potential, but we're not ignoring it either. We're dabbling. We're hedging. We're doing just enough to say we're "AI-aware" while secretly hoping this will all blow over. It won't.
Guillaume Thierry, Professor of Cognitive Neuroscience at Bangor University, writing for The Conversation UK: We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity. But here's the truth: it possesses none of those qualities. It is not human.
And presenting it as if it were? That’s dangerous. Because it’s convincing. And nothing is more dangerous than a convincing illusion. This essay articulates what many of us are experiencing: the tension between excitement and existential uncertainty. The insight that resonated most?
The future belongs to those who master continuous transformation, not just AI tools. When machines can code, write, and analyze, our "soft skills" become our crucial assets:
- Wisdom to know what's worth creating.
- Judgment to evaluate outputs.
- Emotional intelligence to lead through uncertainty.
The liberal arts skills we used to defend are rapidly coming to look prophetic.
The future belongs to those who pair technical literacy with human wisdom. Worth a read. https://lnkd.in/e8RrBTXV I’ve been thinking a lot about generative AI lately. It’s kind of hard not to with the latest ChatGPT announcement. The technologies we’re witnessing are powerful, impressive, and developing fast.
Everything you've been seeing on screen, for example, is footage from OpenAI's new video-generating model, Sora. But let's peel back the algorithmically patterned wallpaper for a moment, and take a hard look at the structure behind it. The AI of the 2020s isn't new. But its consequences are. If you're watching this, they've already affected you. So how should we, the public, respond to tools that rely upon more data than we could ever fathom?
How can they change our relationship to work? And…do we need to panic? A lot of people are both excited and scared about the state of AI right now, and rightfully so. One of my goals with this channel, though, is to provide you with reasons to remain optimistic. Today, I’m going to try to put the recent explosion of interest in AI into context. Before we get into it, I want to be clear.
When I use the word "AI," I'm specifically referring to generative AI. That includes large language models, or LLMs, like ChatGPT, and image generators like Midjourney. Basically, these programs are meant to perform specific tasks. And to describe the way they work as simply as possible, they identify patterns. When they find patterns in a given input that match the data they've been trained on, they use that data as a springboard to form a new output. Or at least that's the idea.
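That "find patterns, then use them as a springboard" idea can be made concrete with a deliberately tiny sketch. This is not how an LLM actually works internally (real models learn statistical weights over tokens with neural networks, not lookup tables), but a toy bigram model shows the same shape: record which word tends to follow which in the training data, then generate new text by repeatedly sampling a plausible next word. All names here are illustrative, not from any real library.

```python
from collections import defaultdict
import random

def train_bigrams(text):
    """Record the 'patterns': which word follows which in the training data."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Use the learned patterns as a springboard to produce a new sequence."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no pattern learned for this word; stop generating
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every pair of adjacent words the sketch emits was seen somewhere in the training text, yet the overall sentence may never have appeared there. That is the springboard: recombining learned patterns into new output, which is also why such systems can produce fluent sequences that are novel but not grounded.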
What's key is that these tools are not examples of artificial general intelligence (AGI), or the Marvins and HALs of sci-fi spaceships. They're far more narrow than that. Overeager or not, tech companies do recognize that AGI is still only a goal. So keep that Oliver Cheatham choreography in your back pocket for now. New technologies almost always create lots of problems and challenges for our society. The invention of farming caused local overpopulation. Industrial technology caused pollution.
Nuclear technology enabled superweapons capable of destroying civilization. New media technologies arguably cause social unrest and turmoil whenever they’re introduced. And yet how many of these technologies can you honestly say you wish were never invented? Some people romanticize hunter-gatherers and medieval peasants, but I don’t see many of them rushing to go live those lifestyles. I myself buy into the argument that smartphone-enabled social media is largely responsible for a variety of modern social ills, but I’ve always maintained that eventually, our social institutions will evolve in ways that... In general, when we look at the past, we understand that technology has almost always made things better for humanity, especially over the long haul.
But when we think about the technologies now being invented, we often forget this lesson — or at least, many of us do. In the U.S., there have recently been movements against mRNA vaccines, electric cars, self-driving cars, smartphones, social media, nuclear power, and solar and wind power, with varying degrees of success. The difference between our views of old and new technologies isn't necessarily irrational. Old technologies present less risk — we basically know what effect they'll have on society as a whole, and on our own personal economic opportunities. New technologies are disruptive in ways we can't predict, and it makes sense to be worried about the risk that we might personally end up on the losing end of the upcoming social and... But that still doesn't explain changes in our attitudes toward technology over time.