Earlier today, in a particularly enthusiastic scroll-sesh, I came across a reply to an article about AI use that said something to the effect of:
Bluesky has a toxic AI discourse problem…
And man, I actually feel you. Hard-lining doesn’t always feel useful. Especially when you find yourself open-armed, open-eyed, willing to approach this burgeoning space with unbridled optimism, only to return to the village and find many quite upset with the fruits of your labor. That must be especially frustrating.
But I think rather than “people are too emotional or angry about AI”, the more salient point to arrive at is something else. Love it or hate it, a lot of the tech under the AI umbrella has well-documented trade-offs and outright harms. It’s fairly reasonable, then, to accept that most folks insist upon “solutions that don’t knowingly create new problems”, especially given that the world looks *gestures wildly at everything* like this, these days.
AI data centres can warm surrounding areas by up to 9.1°C
Hundreds of millions of people live close enough to data centres used to power AI to feel warmer average temperatures in their local area
Let us take you at your word, that you are someone who believes the value of AI/LLMs/etc. is the solutions they bring to People. If you see that many People feel emotionally reactive toward these technologies, and your value system believes in People, wouldn’t you be better served by widening your scope to include those People? That is, wouldn’t you be better off treating “solutions that don’t knowingly create new problems” as a creative constraint on the way you solve problems?
I value you, person-who-believes-AI-creates-good, for thy optimistic spirit. I just also believe you ought to spend your energy in realms where that spirit goes further. There is something noble about the work of common-ground reframing: considering the possibility that you might do more good by inventing solutions that people also want than by inventing ones they don’t, and then criticizing the doubters for their lack of nuance.
I’ll say this: I’m frankly not a wise enough person to magically solve the schism here. I reckon there is a bit, and maybe even a lot, of alignment between folks who think they can make the world better using AI, and folks who don’t use AI because they don’t want the world to get any worse. I just mean to encourage those big-hearted folks with technological optimism to spare to reassess their thinking: “making the world better with AI” should be more about the world than about AI.
Further reading
I’m sure it’s clear from this note, but I’m generally AI-skeptical (though I do try not to be dogmatic about that position). If you’re into that point of view, I’ve written more about the implications of technologies like LLMs, specifically from a labor angle, elsewhere on the blog.