    Ladies and gentlemen,

    Today I want to talk about something many people are excited about: artificial intelligence. AI can help us write emails, summarize reports, generate ideas, and yes—draft speeches. It’s a powerful tool. But like any powerful tool, it reveals something important about us: technology can assist judgment, but it cannot replace it.

    That brings me to a very public example: Pete Hegseth.

    If you’ve been paying attention to recent public discourse, you may have seen speeches and statements associated with him that sparked debate—not just about their content, but about how they may have been written. Many people suspect that AI tools were involved. And when those speeches fall flat, contradict themselves, or sound oddly mechanical, critics jump to one conclusion: “AI wrote this.”

    But here’s the truth we should understand: bad speeches are not a failure of AI. They’re a failure of the human using it.

    AI can generate structure, language, and ideas, but it cannot replace authenticity, judgment, or responsibility. A strong speech comes from clarity of thought, understanding of the audience, and a genuine message. If someone simply copies and pastes machine-generated words without reflection, editing, or ownership, the result will sound hollow—no matter how advanced the technology is.

    So when people say that certain speeches are a “terrible advertisement for AI,” they’re actually pointing to something deeper. AI doesn’t stand at a podium. AI doesn’t decide what values to defend or what message to send. Humans do.

    The lesson isn’t that AI makes communication worse. The lesson is that AI magnifies the communicator.

    A thoughtful speaker can use AI to research faster, refine language, and test ideas. A careless speaker will use it as a shortcut—and the audience will hear that shortcut immediately.

    Public speech has always required responsibility. The tools change—typewriters, teleprompters, word processors, and now AI—but the core requirement remains the same: the speaker must mean what they say.

    So instead of blaming the technology when a speech fails, we should remember a simple principle:

    AI can help you write words. But it cannot help you believe them.

    And the audience always knows the difference.

    Thank you.

    (sorry, I can’t resist replying to posts like that with AI-generated examples of what they’re complaining about; in this case, the above was generated by ChatGPT)