Charlotte-Ann Schwark

When Strategy Meets Reality

Your strategy isn’t failing. It is being asked to operate inside an environment that was never designed to support it.

You may have found yourself at the end of a long week thinking, “If I just had a better plan, things would actually move forward.” That thought feels logical, especially when you have already put in the work to build a strategy. You have likely spent hours clarifying priorities, aligning with your board, documenting direction, and translating your mission into something actionable. On paper, it makes sense. It is clear. It is something you believe in.

And then the day-to-day work returns.

What follows is not a breakdown in understanding, but a slow erosion in execution. Your time becomes fragmented across emails, meetings, and constant small interruptions. You begin to second-guess decisions that were already made, not because they were wrong, but because the context around you keeps shifting. Priorities change depending on who is asking or what feels most urgent in the moment. Work starts, but it is rarely completed without interruption.

Even if you maintain a to-do list, whether in a system or in your head, it often no longer reflects the priorities outlined in your strategic plan. By the end of the day, you have been busy, responsive, and engaged, yet there is a persistent sense that nothing truly moved forward.

This is the point where strategy and environment come into direct conflict. Strategy defines direction. It sets the goals and outlines what matters most. But environment determines what actually happens on a daily basis. It shapes how time is spent, how decisions are reinforced or undone, and whether work is carried through to completion.

In most small nonprofit settings, the environment is not intentionally designed for execution. It is shaped by immediacy, responsiveness, and the constant pull of competing needs. It rewards reaction over focus, availability over follow-through. Within that kind of system, even the clearest strategy will struggle to hold.

The issue, then, is not a lack of planning. It is that execution is being filtered through a system that was never structured to support it. Strategy is typically created in a space of clarity, often removed from the pressures of daily operations. Execution, on the other hand, happens in the middle of interruption, context switching, and limited capacity. The gap between those two environments is where progress begins to unravel.

When that unraveling happens, it is easy to misdiagnose the problem. It appears as though the strategy itself is insufficient. The natural response is to refine the plan, to make it more detailed, or to introduce new tools that promise better alignment. Over time, this often leads to a collection of disconnected systems. One platform manages grants, another tracks fundraising, another holds marketing efforts. Each one serves a purpose, but none of them are fully integrated into how work actually flows.

The result is not increased clarity, but increased friction. Instead of executing on priorities, leaders and teams spend more time navigating between systems, re-establishing context, and revisiting decisions that were already made. The more fragmented the environment becomes, the harder it is for any strategy to remain intact.

Part of this challenge stems from how strategies are built. Many strategic plans operate at a high level, offering strong direction but limited translation into daily execution. They define what should happen, but not how that work is supported, reinforced, or measured in real time. Without that connection, there is no mechanism to catch misalignment early. There is no structure to bring work back on track before it drifts too far from the original intent.

What this reveals is that the problem is rarely the strategy itself. The issue is the absence of a foundation that can carry it forward. Without a stable environment, there is nothing anchoring the plan in place. There are no guardrails to prevent drift, no systems that reinforce decisions once they have been made, and no consistent way to ensure that daily work aligns with long-term direction.

A strategic plan is meant to function as a map, but a map is only useful if the conditions around you allow you to follow it. When the environment is unstable, reactive, or fragmented, even the most well-designed plan loses its ability to guide.

Before rewriting a strategy, it is worth stepping back to examine the environment it is expected to operate within. Where is time being pulled away from focused work? Where are decisions being reopened instead of carried through? Where is work consistently interrupted before completion? These are not peripheral issues. They are the conditions that determine whether a strategy can be executed at all.

Execution does not fail because of a lack of clarity. It fails when there is no system strong enough to support that clarity over time.

This is where most conversations about tools, including AI, begin in the wrong place. The question is not what to add, but what the environment can actually support. When the underlying system is not designed to hold execution, introducing new tools often adds another layer to manage rather than reducing the load. Used well, they can reinforce structure and reduce friction. Used without that foundation, they simply accelerate the same patterns that were already breaking progress.

Charlotte-Ann Schwark

AI Isn’t the Problem. Low-Quality Thinking Is.

AI can produce polished, professional output in seconds. But when speed replaces thinking, something important gets lost. This piece explores the idea of “AI workslop” and why the real risk for small nonprofits isn’t the technology itself, but the quiet erosion of clarity, judgment, and meaningful work behind it.

I was listening to an episode of Harvard Business Review’s IdeaCast recently when they used a phrase that has stayed with me ever since: AI workslop.

They were not talking about something malicious or intentionally deceptive. They were talking about something subtler than that, and perhaps more dangerous because of how easily it blends in. AI workslop is the kind of output that looks polished, sounds competent, and appears productive, but lacks depth, originality, and real thought. It creates the appearance of progress while quietly hollowing out the substance underneath.

The more I have sat with that phrase, the more I have come to believe that this is one of the biggest risks facing small nonprofits as they begin adopting AI. Not because AI itself is the problem, but because it makes it very easy to confuse output with insight, speed with clarity, and completion with understanding.

What concerns me most is that AI workslop rarely looks bad. A grant draft can sound professional and still say nothing new. A strategic plan can be filled with perfectly acceptable language and still fail to reflect any real strategic thinking. Social media posts can be polished, consistent, and technically correct while never actually connecting with another human being. Reports can summarize information without ever analyzing it. The surface improves while the depth disappears.

For small nonprofits, that risk is amplified. Large institutions can often absorb inefficiency, redundancy, or layers of low-value output. Small organizations cannot. When you only have one to three staff members, limited time, and constant pressure to produce, AI becomes a multiplier. But it multiplies whatever is already there. If your thinking is clear, it can extend your capacity. If your thinking is rushed, vague, or disconnected, it can generate more noise at a faster pace.

And yet, my own experience with AI while working through this article only reinforced that the answer is not to reject it. In fact, a mistake that happened while I was thinking through this piece became a perfect example of the more nuanced point I am trying to make.

I rely heavily on voice-to-text because it works well for how my brain operates. My thoughts move faster than my typing does, and speaking helps me get ideas out before they disappear. While working through my thoughts for this article, I was trying to ask a practical question about my own writing process: how to prioritize my time between the LinkedIn articles I am writing and the book I am working on. More specifically, I wanted to know whether it makes sense to let AI do more of the surface-level writing work on my LinkedIn articles so that I can protect more of my deep thinking time for the book.

At one point, the transcription shifted so far from what I intended that it created an entirely different question. What I was trying to work through became something else entirely.

Somehow, voice-to-text misunderstood what I said and turned it into something about regulating my AI tools through LinkedIn. I am still not entirely sure how it got there. It was not even close to what I meant. But in that mistake, something interesting happened. The wrong transcription created a strange side path in the conversation, and by following it, I actually got closer to the heart of what I was trying to articulate in the first place.

That is what feels important to me.

Because this is not just a story about flawed technology. It is also a story about flawed humanity, about unfinished thinking, about accidents and misfires and misinterpretations that become something more when we remain open to them. It is the artist who accidentally spills paint on a canvas and, rather than scrapping the work, lets that accident open a deeper conversation about the nature of the piece itself. It is the novelist who begins with a plan, discovers that the characters have taken on a life of their own, and stops forcing the original outline in order to follow where the story wants to go. It is the ancient Greek idea of the Muses, that strange and intangible sense that creation is not always a linear act of control, but also an act of listening, noticing, and receiving. It is the history of mistaken experiments becoming world-changing discoveries: Alexander Fleming’s accidental observation that led to penicillin, or the failed adhesive experiment that eventually became the Post-it Note. Human progress has never been built on perfection alone. It has often been built on our willingness to notice possibility inside the unexpected.

That is part of why perfection stunts growth. We cannot account for every possibility in advance, and the more tightly we grip our need for clean execution, the more likely we are to miss what is emerging at the edges. If we remain open, however, we sometimes find that the thing that looked like an error was actually an invitation to think differently.

To me, AI is another tool humanity has created to explore that intangible space. Not to replace human thought, and certainly not to replace human judgment, but to interact with it. Sometimes that interaction is efficient and straightforward. Sometimes it is messy. Sometimes it misunderstands us. Sometimes it reflects our own confusion back to us in ways that force us to clarify what we really mean. And sometimes, in that imperfect exchange between human intention and machine approximation, something useful and even wonderful emerges.

But that only happens if the human remains present.

That is the line I keep coming back to. AI can do a remarkably close approximation of how humans write, organize, and communicate. It can recognize patterns, generate language, and mimic coherence. But it cannot connect the dots the way a human being can, because it does not possess a human life. It does not have emotional memory, embodied experience, relational context, or years of social understanding. It does not know what it feels like to live inside uncertainty, grief, joy, responsibility, or hope. It can imitate the patterns of those things in language, but it cannot originate from them.

That is why AI can support deep work, but it cannot replace it.

It is also why the distinction between deep work and surface work matters so much. Surface work is often where AI shines. It can help organize, summarize, translate, reformat, and structure. In my own writing, that is exactly how I use it. I use voice-to-text to get my thoughts out quickly. I review the transcription to make sure I was understood clearly, or to correct where I was misunderstood. Then I use AI to help translate my rambling or nonlinear thoughts into something more coherent so that other people can follow the thread. As someone with ADHD, that matters a great deal. It helps me bridge the gap between how my mind moves and how ideas need to be communicated to others. In that sense, AI is helping me communicate more clearly with other human beings. It is a support tool. It is not the source of the thought itself.

The danger comes when we use AI to replace the deep human work instead of supporting it. When we stop questioning the output. When we let speed take priority over discernment. When we accept something because it sounds complete rather than because it is true, meaningful, or grounded. Leadership, especially in small nonprofits, is not merely a matter of producing more words more quickly. It is deciding what matters, interpreting what is happening in a community, making judgment calls with incomplete information, and staying anchored in values while navigating complexity. That kind of thinking cannot be automated away without losing something essential.

So no, I do not think the answer is to avoid AI. I think the answer is to use it with intention, with boundaries, and with an honest understanding of what it can and cannot do. It can accelerate surface work. It can help translate, refine, and structure. It can even, through its imperfections, create unexpected openings that move our thinking somewhere better than where it began. But it cannot substitute for reflection, discernment, or the slow and often uncomfortable process of working through an idea until it becomes real.

That is the deeper issue underneath the conversation about AI slop. The problem is not that AI exists. The problem is that it can tempt us to disengage from the very human work that gives our words meaning in the first place.

And for small nonprofits, where clarity, trust, and thoughtful leadership matter so deeply, that is not a small risk.

AI is just another tool humanity has created to explore an experience that has always been part of creation: the strange interplay between intention and accident, control and discovery, structure and emergence. Used carelessly, it can flood our work with noise. Used thoughtfully, it can help us communicate more clearly, think more expansively, and remain open to possibilities we would not have found alone.

But only if we do the deep work.

The organizations that will benefit most from AI will not be the ones that use it most aggressively. They will be the ones that know how to pair it with judgment, restraint, humanity, and thought.

Because AI is not the problem.

Low-quality thinking is.
