I have a friend who is a futurist. He helps businesses analyze possible outcomes and adjust decision-making accordingly. They call it “strategic foresight.”
The other night, at an event he hosted, I participated in a group activity: we considered four possible futures, then divided into small groups for 45 minutes of brainstorming and drafted presentations on our strategic approach to navigating the future we were randomly assigned.
It being 2025, someone in my group suggested we run our challenge through ChatGPT. They snapped a picture of the handout — which outlined specific steps our mini think tank should undertake — and then uploaded it for the AI to take a crack at.
A minute later the guy[1] said, “Look what ChatGPT came up with,” and passed around his phone. The app had hit the key points and by all accounts did a tremendous job addressing the challenge. Forty-two minutes left and our task was complete. One of my teammates wrote out the LLM’s key takeaways on poster board while we chatted loosely around the edges of the topic. We had fundamentally abdicated our responsibility and the machine did a fine job in our stead. Good, even, viewed from just the right angle.
One by one, the other tables rose to give their presentations. Each passed with flying colors. Their explanations were sensible, creative, interesting. They’d clearly put their heads together and dived deep. They developed nuanced solutions to the challenges at hand. I learned from each of them.
Then it was our turn.
Our AI guy was the presenter. (I want to reiterate that he was a lovely human being who did a very nice job.) Our presentation sounded good and checked off all the required boxes. We did the thing we were instructed to do.
But when it was over, all I could think was, “Task failed successfully.”
Everything about what the AI spit out was totally appropriate. It had done the work, and if we’d been graded on the presentation we certainly would have passed. So, by most metrics, it was a rousing success.
But by the most important metrics, in my opinion, it was an abject failure.
Yes, the presentation seemed to make sense, but ultimately there was no there there. It was essentially meaningless. Just surface, nothing deep. Certainly nothing half as interesting as what the humans in the other groups had come up with. It was complete, and we had passed, but had we really done anything?
My gut insists no.
I honestly can’t remember what our takeaways were. This, I think, is a side effect of the fact that the AI’s key insights weren’t especially insightful, and that we didn’t meaningfully discuss the challenge at hand. We didn’t participate, so we didn’t learn shit.[2]
I’m guessing, sincerely, that the people who listened to our presentation didn’t learn much either. They probably felt like oh that was nice enough, it sounded right, but in the grand scheme of things there wasn’t much to hold onto. It’s like our presentation looked good from a distance, if you squint a little. It would’ve been perfect to watch while lying on the couch thumbing through your phone.
That’s when it struck me: in no way whatsoever is this better. In fact I think it’s notably worse.
As the evening wrapped, I wrote in my notebook:
“Used AI. Done first. Sounded good. Least meaningful.”
I’m fairly certain everyone in the room would agree with my assessment. Acceptable, but in no way notable.
It simply didn’t work. And I have a theory as to why.
There’s something I’ve been thinking about a lot lately, and this experience absolutely reinforced it.
AI is not better. It’s just faster.
This is particularly true when it comes to creative endeavors.
“Oh I use it for ideas, for inspiration!”
Sure, we hear that a lot. But you know what else works wonders for inspiration? Going for a walk. Throwing pebbles in a pond. Staring at the sea. Doing the things humans have done for millennia in search of inspiration. These human things work incredibly well; they just take more time than the machine.
Some of you are surely shouting, “But faster is better!”
I don’t believe so. At least not when it comes to creative work. Art doesn’t care about speed. Faster only matters in commerce.
Of course, AI is great at formulaic tasks, tedious problem solving, fitting data together. In those methodical cases, I will concede that faster might be deemed better.
But nobody ever says, “Man, that Van Gogh sure painted quickly.”
In artistic endeavors — and I use this term broadly, where any sort of “creative thinking” is employed — it seems like AI is only faster. Because the results are often worse. Maybe not a lot worse, but still, even if they’re 98% as good… that’s worse. It’s astonishing that a machine can do it in five seconds, but it’s still worse.
So the way I interpret it is to say this: when you need “possibly good enough” and “immediately,” AI works great.
I also recognize that this equation is likely to change. I’m not so stubborn as to insist I’ve got it all figured out.[3] But I’ve been keeping an eye out for “faster, not better” for several weeks now, and the evidence keeps piling up. Even the companies selling AI apps and services are typically touting a faster mousetrap, not a better one.
I think we humans get excited and confused by the party trick of AI. We consider “acceptable and incredibly fast” way better than “incredibly good and kind of slow.” I suppose it depends on your needs.[4]
This is why business owners and management types are so into AI. It’s faster, which means greater output and higher productivity. A profiteer’s dream.
Which brings me back to creative industries, never known for “really fast but fairly plain.” The opposite is the goal, mostly. Slow and special. It takes a long time to make a great movie, but it’s worth it.
Same goes for a great American novel. Or even a thoughtful magazine profile, a lovely haiku, or a life-size butter sculpture of a Jersey cow. They keep saying that, any day now, AI is gonna be better than people at all these things and more. I just don’t see it.
Faster? Sure. Better? No way.
We can’t beat the machines on speed. But, at least in my world, we have them beat on quality. We humans just have to keep them in check. We do that by not confusing “fast” with “good.”

I am not anti-AI. But given its constant threat to my livelihood I think about it a lot. And at this point I think it’s fair to say I’ve become a bit of a skeptic. I know AI can do amazing things, and I know it’s getting better quickly. But I also know the devil is in the details, and we are glossing over a whole lot of details. We can’t keep waving problems away with, “Oh, they’ll fix that in no time.” They might. Or it might be that AI can’t do everything better than humans. And the thing it currently can’t do better than us is be an outlier: an outlier in terms of quality, an outlier in terms of creativity. It can’t be truly innovative, because everything it does has to be rooted in something that’s been done before. Human innovation, by contrast, is often special precisely because we’ve never seen anything like it.[5]
This innovation conundrum makes me wonder about one possible future of AI.
Since every AI result we get is based on existing knowledge, every LLM output is fundamentally derivative. Ask a question and, in seconds, ChatGPT mashes together an answer from everything it has already ingested — an answer that is often surprisingly good, particularly when speed is factored in.
But because those results are based on existing data, be it text or visuals, each new AI result is a little bit like a photocopy — a reasonable facsimile, done quickly, and just a little bit worse.
For example, let’s say you ask an LLM a question about the alphabet and it gets 25 letters correct, omits one, and hallucinates a 27th letter. If I incorporate this newly incorrect alphabet into whatever I’m working on, and whatever I’m working on goes back into the pool of input data… Can you see where I’m going? If enough of our “content” becomes AI-generated, soon the input will be too.
With each iteration, then, we’re getting farther from actual human knowledge, from meaningful data, from actual facts. We’ll be corrupting our knowledge base. It’s like using a photocopier over and over, making a copy of a copy of a copy. In the end we’re several generations removed from actual quality, actual data, actual fact.
We will replace hard-won human knowledge with a discounted, “sounds about right” AI approximation.
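To make the compounding concrete, here’s a minimal back-of-the-envelope sketch (a toy illustration with made-up numbers, not a claim about how any real model is trained): if each generational “copy” preserves, say, 98% of its source’s fidelity — the “98% as good” from earlier — the loss multiplies with every pass through the copier.

# Toy model only: a hypothetical per-copy fidelity, compounding geometrically.
def fidelity_after(generations: int, per_copy_fidelity: float = 0.98) -> float:
    """Fraction of the original fidelity left after repeated copying."""
    return per_copy_fidelity ** generations

for g in (1, 10, 50, 100):
    print(f"generation {g:>3}: {fidelity_after(g):.1%} of the original remains")
# generation   1: 98.0% of the original remains
# generation  10: 81.7% of the original remains
# generation  50: 36.4% of the original remains
# generation 100: 13.3% of the original remains

Whether real systems actually degrade this way, or whether careful curation offsets it, is exactly the open question; the sketch only shows why “a little bit worse, every time” isn’t harmless.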
My foresight experience last week reinforces my belief that AI sacrifices nuance in its quest for speed. We were done first, but we did the worst. Lesson learned.
When I told my wife about it, her response was not to blame AI but to call out our own “human failing.” I see her point. But because it’s humans who put AI to use, I think my argument still stands. We’re going to misuse it, unless we’re exceedingly specific about accurately defining its capabilities.[6]
She suggested that if we had made better use of our human brains in relation to what the AI generated — treating its output as only a jumping-off point, for instance — maybe we could have found deeper meaning or done a better job. And whatever a given AI application is good at, it’s only as good as we make it. Whatever it gives us will always get better when we apply our human-ness to it. We have to take its output and use it as a starting point to find deeper meaning, or to become truly creative in how we address a challenge. We can’t rely on the machines to do these things for us. We can only count on them to do some of the things, slightly worse than we would do them, but a whole lot faster.
[1] Who I am not trying to throw under the bus, truly. He did what a lot of people are doing.
[2] If this is what students in classrooms are doing, we’re cooked.
[3] My brand, frankly, is not having much of anything figured out and sharing my struggles in hopes we can all smartify together.
[4] If you’re poring over tons of data, or working on something formulaic, faster (without sacrificing accuracy) is better.
[5] The painter Piet Mondrian drew a distinction between “abstraction” and “pure creation.” It’s a similar thing here. AI’s “new” output is largely abstraction of existing things. LLMs seem to struggle with “1 + 2 = Frog” and similar non sequitur nonsense.
[6] Have you noticed how AI results are incredibly impressive in everything except the area of your expertise? If you don’t know much about the subject, the machine’s surface-level talking points seem impressive. But when your knowledge is vast, the AI results are often a letdown. Which makes me wonder: what kind of “mostly correct” information are we all accepting as gospel truth because the machine said so? Oh god, it’s going to make us even dumber, isn’t it?
Pencil, paper, and the slide rule got us from the Wright brothers to jet-propelled flight in 30-40 years. Home computers went from 5 MB hard drives to multi-terabyte storage and greater speed in about the same time. Computers are now, in some sense, inventing themselves faster than the human mind could.
At a barbeque last week, many guests were denizens of the photo industry: some retired, some winding down, and some who had lost jobs to computer processing. Some were cranking out imaging for later processing by computers, numbing creativity. And AI? It keeps getting better as computational operations reinvent themselves faster than we can.
While I felt at the mercy of this process, I held out hope, really, that the innate ability of humans to appreciate essence and the moment, to perceive as only humans can and to express from that experience, would always distinguish our process from the computational process. The human process would always be superior, elegant, and unique.
2 things:
Our marketplace is being eroded by clients who only need so much quality. This is not new; at the frayed edge of our work we have often been displaced by a secretary (now an admin) with a camera (now a cell phone) who could produce something that was "good enough." Apps and algorithms, freely available online, can do the needed processing. You and I can see the difference, but the immediacy and economy of the processes now available are more attractive. As such, our market has shrunk.
I recently read a novel by Berkeley's Ursula K. Le Guin, The Dispossessed, in which a physicist describes a model of a rock thrown at a wall: it first covers half the distance, then half of what remains, and so on. While it never actually reaches the wall, it gets really, really close. This is Zeno's paradox. For many purposes, that was good enough. Does AI get us halfway "there," and where will "there" be after innumerable iterations?
•••
I recall back in the 90s, when computational processes were rapidly being applied to imaging, image processing, and delivery. A mentor, at the end of his career, said (and I paraphrase): "I won’t have to deal with this. Now it is in your hands." I think that we did it ably and sustained our craft.
I am retiring on June 30, just a few weeks away, after a long career of analogue and digital imaging and production, and years of teaching and bringing a Photo Department and its Studio to its zenith. I won’t have to deal with this. Now it is in your hands. I can only wonder.
Really interesting. I like that you bring in Mondrian. His work was pretty much out of left field. It had never been done before, so it would never have been contemplated by AI. I view AI as linear: if 'A', then 'B', and if 'B', then 'C', etc. What artists can do is go from 'A' to 'X' in one leap, and that is why AI will not get there. In part that's because those using AI will not accept the leap, given that AI is a 'black box': you don't actually see how it goes from 'A' to 'B' to 'C'; you just see the logical conclusion. If 'X' is not logical, it will be rejected by the person consulting the AI, just as it will be rejected by the AI itself. That is why we must love an artist's mind, exactly because it is non-linear and can go straight to 'X'. Does that make sense?