When people dismiss AI-generated content as "not good enough," they're implicitly saying two things.
First, they're claiming they could produce better content than AI if only they had the time. This is the good old Dunning-Kruger effect at play: our tendency to overestimate our own intelligence or skill, especially relative to a faceless algorithm. Admitting that AI might be equally capable feels like a blow to the ego, diminishing our sense of uniqueness or superiority.
Second, this critique clings to the signaling pattern of the pre-AI world. There's a well-understood pecking order: a piece from Ben Thompson naturally carries more weight than something I produce, and mine more than that of an intern. Trust in content correlates with the perceived credibility of its source.
But here's the twist: AI is arming the rebels. Equipped with the right knowledge and context, AI allows me to craft content as compelling as Ben's, and empowers an intern to surpass what I could typically deliver. The traditional pecking order is disrupted; credentials and reputations matter less when intelligence is democratized by algorithms.
The question isn't whether AI is "good enough." It's whether we're ready to acknowledge how deeply it challenges our assumptions about intelligence, authority, and value.
Cheers
Rohit