Stop Complaining AI Isn't Perfect
Start asking if it’s better than your "Best Available Human Option"
The Ongoing Double Standard
I was moderating a panel on Higher Education, and the subject of AI and writing arose. One professor was concerned that AI was getting so good that it was difficult to tell if papers being submitted by students were self-authored or written by AI.
A panelist dismissed the concern out of hand, saying, “AI is never going to write like Shakespeare, so I think your fears are overblown.”
Putting aside the fact that AI is actually pretty decent at writing like Shakespeare, the panelist made another error I think is more important: judging AI against perfection or an ideal standard rather than against what the average human can produce. In this example, we’re not judging students by “Can they write as well as Shakespeare?” but by “Can they write as well as most students at their level of education?” By applying a double standard, we mistakenly write off what AI can do.
When AI hallucinates, we say, “See? You can’t trust AI.” Yet we don’t do that with humans. When people we trust make a rare mistake, we excuse it with, “Well, we’re all human.” By expecting perfection from AI but giving humans a pass, we distort how we think about AI’s value, its risk, and its adoption.
That is the heart of the argument I am making. Clearly, AI is not perfect. But that is the wrong standard to hold it to. The right standard is, “Is relying on it better than relying on the Best Available Human Option (BAHO)?”
AI should not be judged against an ideal of perfect output. It should be judged against the Best Available Human Option in that specific situation.
Why We Default to “Perfection”
Our instinct to demand perfection from AI is rational and historically grounded. Ever since calculators and computers were developed, we’ve expected them to provide the right answer. If they failed to, we would examine the software and hardware, find the bug, and fix it. Perfection was built in and could be depended on.
AI runs on computers, so we expect perfection. However, AI is a fundamentally different animal from traditionally programmed computers. As Dan Barclay, Executive Director of the Center for Humane Technology, said, “One essential fact of AI development is that AI systems are not built so much as grown.”
AI is probabilistic, unlike traditional deterministic computer programs. It is created by accumulating knowledge from myriad sources and then trained by humans. In that sense, it is much more like us than like traditional computer programs.
Given that, we need to update our mental model. We don’t expect perfection from humans, and we should not expect it from AI.
The Right Benchmark: The Best Available Human Option
What is the right standard? It’s not the ideal human expert with unlimited time, perfect information, and no competing demands. That person does not exist in the real world of work and personal problems.
The benchmark is the realistic human substitute given your constraints, such as time, cost, availability, and expertise. Let’s call this the Best Available Human Option, or BAHO.
A personal example: our Homeowners Association is repaving our road. There is a dispute about how to pay, by owner or by lot, and the discussion is getting a little heated. I could spend hundreds of dollars talking to a lawyer to understand my options, but that would cost about as much as the paving itself no matter which option we take. Or I could get some advice from AI. I chose the latter.
This is not to say I would always go the AI route. There are high-stakes issues where I would want solid legal advice from an excellent lawyer rather than AI. Whether to use AI is situational and can’t be generalized; it depends on the circumstances.
What Changes When We Use BAHO
AI Looks Better Than You Think
When we replace the perfection standard with the BAHO standard, AI starts looking quite capable across a wide range of contexts. It’s not because AI itself has improved but because we’re now judging it more appropriately.
In many realistic work and personal situations, AI outperforms the BAHO on access, speed, cost and breadth of knowledge. That’s why so many people are using it for advice on relationships, work problems, finances, and more.
We Understand AI Risk Better
This framework doesn’t require us to assume AI is always right. It often isn’t, as we know from hallucination data. But neither are humans. Both make mistakes, albeit for different reasons.
When we think about AI in terms of the BAHO, we shift from “Can I trust AI?” to “How do I manage AI risk relative to the risks I would ordinarily accept from a human?” This is something we do every day with the people we live and work with, and we can apply the same approach to AI.
Adoption Decisions Become Clearer
Perhaps the biggest practical benefit of the BAHO framework is that it clarifies decisions about how and when to use AI. When we ask, “Will AI always give me the right answer?” the honest answer is no, and we pass up the opportunity to use it. When we ask, “Is AI better than my best human option in this situation?” we can make a reasoned judgment about whether to use it.
The question we should be asking is not “Do I trust AI?” but “Do I trust AI more than I trust the best human alternative I currently have?”
Where BAHO Breaks Down
Now let’s not get carried away and take this argument too far. There are many high-stakes situations in which we must aim for perfection as best we can. Air safety. Nuclear power. Certain medical procedures. Criminal sentencing.
Here, the stakes are so high and tolerance for error so low that we likely need layers of checks and balances to ensure safety. In these cases, we may end up with humans and AI working together to prevent bad outcomes.
There are other situations where AI use may not be appropriate. For example, if I’m penning a poem to my wife to celebrate an anniversary, I’m not going to use AI to create it. I am going to write it.
The question really is, “Can AI, either by itself or in concert with humans, deliver the outcomes we want?” The BAHO framework still applies but sometimes the answer will be that we still need the human in the loop or (as in writing poems) “only a human in the loop.”
The Right Question
AI is neither a calculator nor a traditional computer; it makes mistakes, just as humans do. But compared with us, it can often be faster, cheaper, more accessible, and more knowledgeable. This means we need to reprogram ourselves to judge AI against the right comparative benchmark for the situation: the Best Available Human Option.
The people and organizations that get this right will not be the ones that wait for AI to be perfect. Instead, they will be the ones that look honestly at their current human-driven processes, compare them fairly with AI, and decide when to leverage AI’s strengths while keeping in mind its risks.
The question is not whether AI is perfect. It’s whether it’s better than what you or another human would have done instead. Answer that question appropriately, and the path forward becomes a lot clearer.
Full disclosure: I’ve had this idea percolating in my mind for a while, but an audience member helped crystallize it by giving it a name. He called it “the Best Average Available Human.” I tweaked the name a bit to come up with “Best Available Human Option,” which I felt was a more precise descriptor and a better acronym.
Mark McNeilly is a Professor of the Practice at UNC Kenan-Flagler Business School and chairs the UNC Provost’s AI Committee. He writes on AI, leadership, and strategy at Mimir’s Well on Substack.
This article was written by me and improved with additional input from AI.