Charlotte-Ann Schwark

AI Isn’t the Problem. Low-Quality Thinking Is.

AI can produce polished, professional output in seconds. But when speed replaces thinking, something important gets lost. This piece explores the idea of “AI workslop” and why the real risk for small nonprofits isn’t the technology itself, but the quiet erosion of clarity, judgment, and meaningful work behind it.

I was listening to an episode of Harvard Business Review’s IdeaCast recently when they used a phrase that has stayed with me ever since: AI workslop.

They were not talking about something malicious or intentionally deceptive. They were talking about something subtler than that, and perhaps more dangerous because of how easily it blends in. AI workslop is the kind of output that looks polished, sounds competent, and appears productive, but lacks depth, originality, and real thought. It creates the appearance of progress while quietly hollowing out the substance underneath.

The more I have sat with that phrase, the more I have come to believe that this is one of the biggest risks facing small nonprofits as they begin adopting AI. Not because AI itself is the problem, but because it makes it very easy to confuse output with insight, speed with clarity, and completion with understanding.

What concerns me most is that AI workslop rarely looks bad. A grant draft can sound professional and still say nothing new. A strategic plan can be filled with perfectly acceptable language and still fail to reflect any real strategic thinking. Social media posts can be polished, consistent, and technically correct while never actually connecting with another human being. Reports can summarize information without ever analyzing it. The surface improves while the depth disappears.

For small nonprofits, that risk is amplified. Large institutions can often absorb inefficiency, redundancy, or layers of low-value output. Small organizations cannot. When you only have one to three staff members, limited time, and constant pressure to produce, AI becomes a multiplier. But it multiplies whatever is already there. If your thinking is clear, it can extend your capacity. If your thinking is rushed, vague, or disconnected, it can generate more noise at a faster pace.

And yet, my own experience with AI while working through this article only reinforced for me that the answer is not to reject it. In fact, the very mistake that happened while I was trying to think through this piece became a perfect example of the more nuanced point I am trying to make.

I do a great deal of voice-to-text because it works well for how my brain operates. My thoughts move faster than my typing does, and speaking helps me get ideas out before they disappear. While working through my thoughts for this article, I was trying to ask a practical question about my own writing process. I was thinking about how to prioritize my time between the LinkedIn articles I am writing and the book I am working on. More specifically, I was trying to ask whether it makes sense to let AI do more of the surface-level writing work on my LinkedIn articles so that I can protect more of my deep thinking time for the book.

At one point, the transcription shifted so far from what I intended that it created an entirely different question. What I was trying to work through became something else entirely.

A small misinterpretation that led to a clearer idea.

Somehow, voice-to-text misunderstood what I said and turned it into something about regulating my AI tools through LinkedIn. I am still not entirely sure how it got there. It was not even close to what I meant. But in that mistake, something interesting happened. The wrong transcription created a strange side path in the conversation, and by following it, I got closer to the heart of what I had been trying to articulate all along.

That is what feels important to me.

Because this is not just a story about flawed technology. It is also a story about flawed humanity, about unfinished thinking, about accidents and misfires and misinterpretations that become something more when we remain open to them. It is the artist who accidentally spills paint on a canvas and, rather than scrapping the work, lets that accident open a deeper conversation about the nature of the piece itself. It is the novelist who begins with a plan and then discovers that the characters have taken on a life of their own, so the writer stops forcing the original outline and follows where the story wants to go. It is the ancient Greek idea of the muses, that strange and intangible sense that creation is not always a linear act of control, but also an act of listening, noticing, and receiving. It is the history of mistaken experiments becoming world-changing discoveries: Alexander Fleming’s accidental observation that led to penicillin, or the failed adhesive experiment that eventually became the Post-it Note. Human progress has never been built on perfection alone. It has often been built on our willingness to notice possibility inside the unexpected.

That is part of why perfection stunts growth. We cannot account for every possibility in advance, and the more tightly we grip our need for clean execution, the more likely we are to miss what is emerging at the edges. If we remain open, however, we sometimes find that the thing that looked like an error was actually an invitation to think differently.

To me, AI is another tool humanity has created to explore that intangible space. Not to replace human thought, and certainly not to replace human judgment, but to interact with it. Sometimes that interaction is efficient and straightforward. Sometimes it is messy. Sometimes it misunderstands us. Sometimes it reflects our own confusion back to us in ways that force us to clarify what we really mean. And sometimes, in that imperfect exchange between human intention and machine approximation, something useful and even wonderful emerges.

But that only happens if the human remains present.

That is the line I keep coming back to. AI can do a remarkably close approximation of how humans write, organize, and communicate. It can recognize patterns, generate language, and mimic coherence. But it cannot connect the dots the way a human being can, because it does not possess a human life. It does not have emotional memory, embodied experience, relational context, or years of social understanding. It does not know what it feels like to live inside uncertainty, grief, joy, responsibility, or hope. It can imitate the patterns of those things in language, but it cannot originate from them.

That is why AI can support deep work, but it cannot replace it.

It is also why the distinction between deep work and surface work matters so much. Surface work is often where AI shines. It can help organize, summarize, translate, reformat, and structure. In my own writing, that is exactly how I use it. I use voice-to-text to get my thoughts out quickly. I review the transcription to make sure I was understood clearly, or to correct where I was misunderstood. Then I use AI to help translate my rambling or nonlinear thoughts into something more coherent so that other people can follow the thread. As someone with ADHD, that matters a great deal. It helps me bridge the gap between how my mind moves and how ideas need to be communicated to others. In that sense, AI is helping me communicate more clearly with other human beings. It is a support tool. It is not the source of the thought itself.

The danger comes when we use AI to replace the deep human work instead of supporting it. When we stop questioning the output. When we let speed take priority over discernment. When we accept something because it sounds complete rather than because it is true, meaningful, or grounded. Leadership, especially in small nonprofits, is not merely a matter of producing more words more quickly. It is deciding what matters, interpreting what is happening in a community, making judgment calls with incomplete information, and staying anchored in values while navigating complexity. That kind of thinking cannot be automated away without losing something essential.

So no, I do not think the answer is to avoid AI. I think the answer is to use it with intention, with boundaries, and with an honest understanding of what it can and cannot do. It can accelerate surface work. It can help translate, refine, and structure. It can even, through its imperfections, create unexpected openings that move our thinking somewhere better than where it began. But it cannot substitute for reflection, discernment, or the slow and often uncomfortable process of working through an idea until it becomes real.

That is the deeper issue underneath the conversation about AI workslop. The problem is not that AI exists. The problem is that it can tempt us to disengage from the very human work that gives our words meaning in the first place.

And for small nonprofits, where clarity, trust, and thoughtful leadership matter so deeply, that is not a small risk.

AI is just another tool humanity has created to explore an experience that has always been part of creation: the strange interplay between intention and accident, control and discovery, structure and emergence. Used carelessly, it can flood our work with noise. Used thoughtfully, it can help us communicate more clearly, think more expansively, and remain open to possibilities we would not have found alone.

But only if we do the deep work.

The organizations that will benefit most from AI will not be the ones that use it most aggressively. They will be the ones that know how to pair it with judgment, restraint, humanity, and thought.

Because AI is not the problem.

Low-quality thinking is.
