My boss wants his team to use AI more. “Copilot is everything,” he says. “Be more efficient,” he says. “Leverage Copilot,” he says. “Let it help you think,” he says.

A few days later, he demonstrated the perceived power of AI efficiency. He created our 2026 objectives using Copilot. His email explained it all: “Copilot wrote our objectives.”

I was awestruck. They were flawless. Perfectly structured. Impeccably worded. Strategically aligned. Action-oriented. Outcome-focused. Metric-adjacent.

They meant absolutely nothing.

If you’ve never used AI to write professional objectives, let me explain how this works. You type something vague like: “Create objectives for a role focused on collaboration, risk management, and stakeholder engagement.” Copilot might respond with something like:

  • Drive cross-functional alignment to enhance enterprise risk posture
  • Leverage strategic partnerships to optimize compliance outcomes
  • Deliver measurable value through proactive engagement models

The irony is that AI has mastered the dialect of leadership without understanding the duty of it. You provide the nouns (‘Risk,’ ‘Compliance,’ ‘Stakeholder’) and the AI provides the high-octane verbs (‘Leverage,’ ‘Optimize,’ ‘Synergize’). The result is a linguistic soufflé: it looks beautiful on the plate, but the moment you try to put a fork in it, it collapses into hot air. When my boss hit ‘send’ on those objectives, he wasn’t leading; he merely curated a list of sophisticated-sounding vibrations.

At the team meeting, everyone nodded. Yet no one asked what it meant: who is doing what, why it matters, or how any of this makes our company (or team) better. I wanted to ask, “Do you really believe this replaces critical thinking?” It sounds impressive. And yet, sounding impressive has become a substitute for thinking. If everyone is nodding at a hallucination, who is left to steer the ship?

We’ve traded our internal compass for a GPS that only tells us what we want to hear in a voice that sounds exactly like a promotion. We feel productive because the inbox is empty and the slides are pretty. But beauty without substance isn’t a strategy—it’s a eulogy. We are ‘leveraging’ ourselves right into irrelevance.

The problem is that AI does not experience friction. It does not experience failure. It does not care whether the objective works in the real world. AI is oblivious to the kind of mental callusing that happens when you wrestle with a difficult problem. You sweat over the wording of an objective because you are trying to reconcile two competing realities. That struggle is the work. When we bypass that struggle, our mental muscles atrophy. Bypassing it turns authors into editors: we spend our days tweaking punctuation on ideas we didn’t have, to solve problems we no longer fully understand. AI doesn’t know what happened in that last Q3 budget meeting. It doesn’t know why the last ‘strategic alignment’ initiative ended in a pile of redirected emails and frustrated workers.

By outsourcing the ‘What,’ we’ve effectively murdered the ‘Why.’ We aren’t just managing data; we are managing people’s lives and health. When an objective is ‘Metric-adjacent’ but ‘Context-absent,’ it doesn’t just fail the team; it fails the mission. We are replaced by a ghost in the machine that can spell ‘efficiency’ but can’t define ‘accountability.’ When my boss used AI to create objectives, he outsourced thought and absorbed output. He accepted the output sight unseen. There was no testing. There was no vetting. There was no discussion of whether the ideas advanced our vision. He accepted AI’s vision without question.

We’re losing cognitive ability, and many of us are doomed to fail when we and our jobs lie in AI’s dustbin. When AI writes your objectives (or your life), you look productive without having done the hard work. And in the end, you’ll have outsourced your own ability to think.