AI adoption starts with problems, not tools
A framework for aligning AI strategy with organizational needs
My article last week shared initial learnings from developing an app for mission-driven organizations that helps automate various bottlenecks.
One of my takeaways was the looming question of whether these tools would actually be adopted by organizations.
To be sure, it was a helpful exercise in breaking through on the technical front (e.g., coding, UX, site workflow, etc.), but it soon became clear that to effectively build useful tools, more upfront understanding of the end user was needed.
This is not a new concept. Karen Spinner articulates this very well (do take some time to read through her experience) and many others understand this intimately.
It bothered me enough that I paused my build and turned my attention this week to thinking more upstream:
What would an assessment tool (and facilitation process) look like to help organizations identify problem-based AI solutions, and ultimately, begin to develop an AI strategy?
First, a few important principles that help frame the process, particularly as leadership teams are engaged in the early stage conversations:
Start now, refine later
The hype train around AI can make it difficult to accept, and it's easy in these early stages to defer integration. But the hype is not unfounded: AI will fundamentally change how we operate. Organizations must begin exploring how AI can be integrated into their operations, even if the end goal right now is still unclear. The worst thing an organization can do right now is not commit to learning and some level of experimentation.
Toolbox over silver bullet
AI integration isn’t about finding the one solution that’s going to solve everything, but about focusing on smaller, lighter-weight AI solutions that functionally help staff address bottlenecks in key workstreams. This approach allows organizations to test and learn about AI integration without making large up-front investments. Organizations will be in a much better position to refine their position and adoption of AI after meaningful engagement with the technology.
AI concerns are real
There are very genuine, real concerns about AI adoption: data security, generic content, environmental tolls, unplanned costs. These concerns need to be formalized and explored at the outset. These conversations will help set the general parameters of an organization’s policy posture on integrating AI, and will also help the organization set protocols for when and how AI can be used within those parameters.
Staff engagement as problem solvers
Staff at all levels of the organization must be consulted and engaged as problem solvers. Leadership teams can guide big picture decision making and strategic direction, but managers, associates, and interns understand an organization’s workflow details better than anybody. They will be best positioned to identify practical bottlenecks and genuine solutions.
Being seduced by AI
AI is an incredible technology that appears to only be getting stronger. It is easy to be seduced by its power. But AI is ultimately a tool driven by humans, and in this context, one that supports real-world organizational priorities. Any AI strategy and tool development must maintain a human-centric approach in all phases of design, testing, and deployment.
AI is an iterative process
Even once AI tools are developed and integrated, continuous learning and adaptation will be needed. Much like social media continues to evolve, AI tools will improve and look different than they did five years ago. Organizations that commit to AI integration now are tied to a long-term relationship with the technology.
Systematic learning
Organizations will get the most out of AI over the long run if they can be systematic and disciplined in their learning about AI. The technology will continue moving at breakneck speeds and many of the AI tools on the market won’t be applicable to an organization’s needs. Those organizations that can maintain a clear understanding of their needs through continual, systematic learning about AI integration will be able to avoid bad investments in tools that aren’t needed.
The process
Here are the primary outcomes of this process as I see it:
Formalize concerns, red lines, or gaps in understanding around AI integration
Identify and prioritize existing, non-AI workflows and bottlenecks within the organization and across departments / teams, as well as existing use of AI
Identify and prioritize possible AI integrations with clear justifications and linkages that demonstrate how automation will add value
Prioritize solutions and establish clear protocols for carrying forward AI outputs into the wider workstream (e.g., manual review, edits, and approvals)
Formalize a functional AI strategy / policy / set of guidelines
Begin build and testing (or explore existing tools)
Let’s look at an illustrative example of how this might play out from an organizational communications perspective of a donor-funded organization (a very real problem based on my experience).
Concern: “I’m worried that AI writing outputs are too generic and won’t reflect our carefully crafted brand perspective.”
Existing workflows & bottleneck: “In my role as a communications lead, I often help our business development team compile case studies or qualifications vignettes for proposals. But this process eats up so much of my time because they are time-sensitive and we don’t have a strong database of this content. Even though we have a decent digital storage system for previous reports and other assets [that make up the basis for the summaries], they are in so many different formats, often long reports or presentations. The time I spend repackaging these materials for proposals is tremendous, and often to the detriment of my routine communications responsibilities.”
Potential solutions: “What if AI could help us generate summaries of reports, presentations, talking points, etc. that use a standard format we build ourselves? The idea is to build a database of these materials that we can easily take and further tailor for proposals. Each summary would have key data, talking points, context, and language on the overall impact. If set up well, this tool would save hours a week of manual writing and would allow me to proactively build the database, as opposed to responding only when a proposal is due. I think I’m okay with AI-generated responses for this use because they reflect existing content and because we will review and update for final integration into the proposal response.”
Wider workstream integration: “My team should be able to commit an hour each week to working through our archive of materials over the next three months. Additionally, as new assets are developed, we have a standard procedure to generate a summary, archive it, and share it with the wider team. In terms of integrating with the business development team, I will work with them on a regular basis to understand their pipeline and identify potential comms summaries, as needed. We can collectively flag important reports or qualifications and work together to manually polish any AI-generated summary. I’ll commit to a quarterly presentation to the wider team on new comms summaries and a discussion on how to continue improving this process.”
Initial building: “Here is what we would need the tool to do for us to adopt it (e.g., structure, workflow, opportunities to prompt AI to improve output, final output formatting).”
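To make the “standard format” idea concrete, here is a minimal sketch of what the communications lead’s summary template and prompt assembly might look like. Everything here is hypothetical: the `CommsSummary` structure, its fields, and the `build_summary_prompt` helper are illustrative names, and the actual AI model call is deliberately left out since it depends on the organization’s chosen stack and data-security posture.

```python
from dataclasses import dataclass, field

# Hypothetical standard format for a proposal-ready summary.
# Fields mirror the example above: key data, talking points,
# context, and overall-impact language.
@dataclass
class CommsSummary:
    title: str
    key_data: list[str] = field(default_factory=list)
    talking_points: list[str] = field(default_factory=list)
    context: str = ""
    impact_statement: str = ""

def build_summary_prompt(source_text: str) -> str:
    """Assemble a prompt asking an AI model to restructure a source
    report or presentation into the standard sections. The model call
    itself (hosted API or local model) is left to the organization."""
    sections = "title, key data, talking points, context, impact statement"
    return (
        "Summarize the following material into our standard format with "
        f"these sections: {sections}. Keep our brand voice, and flag any "
        "claims that need human verification.\n\n" + source_text
    )
```

A structure like this supports the protocols above: every AI output lands in the same reviewable shape, so manual polishing and archiving become routine steps rather than ad-hoc work.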
Do you have insights or experience in this process?
I’d love to hear from you.
-the AI civilian

