Don't Worry, AI will write it . . .
Unpacking the workflows behind the simplest AI promise shows it's not so simple
During a planning meeting for a piece of software designed to help non-experts navigate a complex set of decisions in a science-y context, I put on my product management hat and asked: “who’s going to write the copy for this?” We had some experts on hand who were quite good at communicating science to a general audience and I wanted to make sure we had their time and they could hit the deadlines we needed for review and testing. The answer was jarring:
“Don’t worry about that, AI will write it for us”
I was only on the hook for strategy and some design work. Pushing back on the issue would only make me a buzzkill, so I moved on.
As you might guess, the copy didn’t get written by AI, and the process wound up being completely unsupported by AI.
Writing copy is a very simple task, frequently cited in the LLM/ChatGPT discourse. Unpacking the workflow against AI tools is useful, especially in situations where the core team doesn’t have a writer allocated for writing or editing. So let’s lay it out.
Start with the players on a team. I’ll keep it general so that it can apply to any organization or business:
Project Lead – the person ultimately responsible for the delivery of the product in which the copy lives
Domain Experts – the person or people who ensure that the information in the copy is accurate and finds the right balance of technical precision and general audience readability.
Brand/Voice Caretakers – those who ensure that copy is ‘on brand’ and appropriate to the readership. This may fall to the project lead or domain experts, but getting that voice has to be sorted multiple times: through reviews, rewrites and revisions.
Prompt Writer/AI Caretakers – the person who writes the prompts, engages the tool, and decides when to circulate the AI-generated copy for review.
Rewrite/Revision person – Once the brand/voice, domain, and lead people have weighed in, there likely comes a point at which the AI is too blunt an instrument for the final drafts. This step is usually overlooked when people write about productivity gains from their personal AI stack, because they eventually take the draft over and tweak it to perfection themselves. On the team described here, it’s not clear who that person is.
Once this is thought out properly (and it should really only take a couple of minutes), three questions emerge:
Do we need a writer for the 5th role?
Who is the prompt writer and what are the attributes that person needs to do it right?
How much time and money have we saved with or without the writer?
Think about the two workflows:
Writer does a draft, gets feedback, revises, repeats until done.
Someone writes a prompt, circulates the resulting copy, distills feedback into prompts, shares revisions and relies on feedback-prompt-copy until the final draft.
Friends and colleagues are starting to share sessions with AI tools in which they are yelling at the AI through text to get the tool focused properly. If any of you have used these tools for copy purposes, you know it can be really hard to keep the voice of a piece consistent while tweaking individual sections within it. You also realize that taking individual comments on individual sentences, word choices, and tone throughout a document and working them out through a prompt would really suck.
In another post, I describe how to write real use cases upon which you can make decisions on investment and implementation of technology. I reference three attributes: is it real? is it actionable? is it measurable? Let’s test this one.
Is it real? Barely. Sure, AI can synthesize text that looks like copy and, certainly, having a thousand words of copy written from a 50-word prompt seems valuable. But can it happen in your business? Notice the lack of agency in the use case “AI will write it” – who in the business will make it happen? Which AI tool? How will it get circulated, and how will feedback be processed?
Is it actionable? Nope, not even close. After writing the prompt, there’s a whole lot of action needed that hasn’t been thought out – who will review the first response? Who will write the subsequent prompts? Who can explain how we got to the final result if there are questions? Who will take this across the finish line?
Is it measurable? There are a variety of metrics that come into play. Quality and appropriateness of voice are usually the most important. In general, AI “use cases” assume the quality is equal or superior to the (presumably) more expensive human-written copy. Quality is a soft metric, but time and money saved are very tangible. Do we have enough specificity in the workflow to assess whether we saved money or time? A harder question to ask, but one to track, would be: do we know that this is equal or better quality? Or have AI hypesters made us so awe-struck at a truly impressive technology that we’ve lost objectivity, unconsciously lowered our standards, and turned off our editorial thinking? Has our commitment to living the AI dream given us bad incentives to convince ourselves that “it’s all working great!”?
The point: AI won’t ever write new and engaging copy for you from first draft to final product. (In fact, it will never write new copy, only string together existing copy snippets and patterns.) You will always need a person with a good handle on writing to get it across the finish line, and maybe even through every draft after the first one. (This applies to code as well.) Those extra steps need to be measured and priced – you may find that the effort needed to clean up a draft in versions 2 through x costs more than just having the person write version 1 and shepherd it through all the stakeholders noted above.
Hypesters will say “it will get better,” “it will get cheaper,” “someday it will.” The first two are always true, but until it is better or cheaper, it’s a leap of faith to rely on “AI will do it.” “Someday it will” is simply fantasy, and should be ignored.
It doesn’t matter to your work or business that the tech will get better; what matters is what it does now. You need better tools and processes than “don’t worry, AI will do it.” AI might help, but that’s a weak, weak use case.