A friend of mine was at a party here in Austin recently, chatting with a data scientist from Medium about AI-generated content.
“Technology can’t really detect it yet,” was the basic gist, “but humans seem to be able to.”
Fascinating.
What this person meant, I gather, is that any individual human has barely a coin-flip’s chance of spotting AI content. Tech isn’t much better yet. But if you step back, and look at how large groups of people interact with human- and AI-generated content, it’s different.
In other words, it sounds like data teams are finding novel ways to detect AI content.
II.
There’s been a lot of debate over whether platforms like Google will punish writers for using AI once they can reliably spot it.
My opinion: Of course they will.
So far, most platforms have taken a relatively soft stance, advocating for “the appropriate use of AI or automation,” as Google puts it.
I see some people interpret this as though Google is indifferent to or even supportive of AI-gen content.
But I think the smarter interpretation is to view this as a “tactical retreat.”
AI could completely undermine the trust and utility Google has built with users. They have a strong incentive to control its appearance in search.
But they can’t reliably detect it yet. A stronger policy would be meaningless without the ability to enforce it, so it looks to me like they’ve traded space for time while they hone the systems needed to deal with this new technology.
Once those exist – and they will – I believe we’ll see platforms like Google get much bolder about cracking down on AI content.
III.
Medium offers an interesting example.
Early in 2023, they updated their terms to say that they “welcome the responsible use of AI-assistive technology,” so long as writers were transparent about it.
Later that year, as they got a better feel for the negative effects AI had on users’ experience, they took a firmer stance:
“Medium is for human writing, full stop,” they wrote. AI augmentation would be allowed – technically, even using a tool like Grammarly counts – but not welcome.
Humans were the priority.
Behind the scenes, they’ve put enormous resources into refining their distribution system to limit the reach of 100% AI-generated content.
Fascinatingly, that has meant re-introducing humans into the loop. Their boost program uses two layers of human review to help decide what gets increased visibility on the platform.
Then, a couple of weeks ago, they came out and said that any partner using AI in their paywalled content – disclosed or otherwise – will be removed from the partner program.
IV.
So what does all this mean for you?
There’s no doubt AI plays a role in the future of human creativity and content marketing. Personally, I enjoy Brett Hurt’s take on this – something he’s calling Renaissance 2.0.
But never forget the incentives of the platforms.
Their goal is to serve up the best content for end-users. Full stop. So experiment with AI, but only to the extent that it helps you create stuff that’s better than what’s out there now.
And be careful not to get over-invested in it.
More than one great company has been killed overnight by Facebook or Google choking off distribution. And as platforms find new ways to spot AI content, they’ll be bolder about eliminating it.
The future is a lot more human than you think.