Note that this version includes red-pen edits, in the interest of transparency about how our policy has changed as the AI landscape evolves. Click here for a clean version without edits.
Dragonfly Editorial created our first policy on ethical AI use in May 2023. Since its inception, the ethical AI landscape has evolved, along with our thinking on the promises and pitfalls — and possible best practices — of generative AI use in creative and editorial work.
As we all adjust and adapt to some aspects of AI, and hold firm on others, we wanted to update our policy and show the evolution of our thinking.
It’s going to be a little messy, and we’re not experts — yet. But we’re all in it together.
For the current, clean version of our policy, click here. The following inline discussion provides points from the prior version (in black), with notes about our current thinking (in blue).
Dragonfly Editorial is powered by human creativity, human identity, and each of our unique human lived experiences.
We still agree.
We know that generative AI has the potential to assist us in our work. If it helps us work more efficiently, saving our clients time and money, then we’re all in.
We still agree.
But we think it’s important to approach AI with a clear understanding of its limits and a healthy skepticism about its ability to create rather than regurgitate. For example, although AI produces content in seconds, that content is often inaccurate and dripping with falsehoods. Because it relies on outdated data sets, it often produces content that doesn’t jibe with today’s realities. And because it draws exclusively from already-published material, it often outputs content that is rife with plagiarism and bereft of fresh ideas.
We note the frequent use of the word “often” in this section. Phrases like “dripping with falsehoods,” “rife,” and “bereft” also imply an extreme, all-or-nothing situation that looked more nuanced by 2024. Fortunately, errors in accuracy or currency aren’t as frequent as they were when this was written. AI is getting better, but it’s still not perfect, so skepticism is still warranted. Even if “often” is now more like “sometimes,” and even if AI models now reference current, internet-based data sets, it’s still critical to double-check the results that AI gives us.
It’s also still true that AI’s foundation lies in published material. However, as we’ve learned more about AI’s analytical capabilities, we have seen situations where AI can combine existing material from multiple sources and suggest something new. In our work, we’ve seen this use case applied more often to content analysis than to content creation. (For example, we’re not sure an AI model could take last year’s Dragonfly field guide on the Chicago Manual of Style and update it, accurately, with this year’s additions. But let’s revisit that one next year.)
At the end of the day, echoing the Marketing AI Institute’s Responsible AI Manifesto, we believe that AI technologies should be assistive, not autonomous. We believe that the best content doesn’t just summarize past publications, but rather combines novel research and newly born thoughts. And we believe, with all our heart, that anything worth creating is best created with human soul, spirit, and intellect intertwined.
We still agree. And we’ve heard more frequently this year from our marketing and editorial colleagues that true thought leadership (content that articulates a specific viewpoint or opinion, or that incorporates original research) cannot, by definition, be created with AI. We couldn’t agree more. Could AI improve such a piece by fixing grammatical errors, suggesting a headline, or sketching a graphic? Sure. But humans need to be the drivers of thought leadership.
Taking all this into consideration, here are our policies on the use of AI in our organization:
We will never use AI to:
- Replace our staff or contract Dragonflies
- Produce complete copy for our customers
- Create entire images, designs, or videos for our customers
- Edit content for our customers without careful review of all suggested changes
- Replace credible sources of information, such as original research articles, personal interviews, or trusted news outlets
- Analyze content that is internal, confidential, or proprietary to our clients
- Analyze any client-provided content without their permission
We may use AI to:
- Generate story ideas
- Generate an outline for a story
- Suggest social media posts
- Suggest headlines
- Do a pre-edit check for punctuation, spelling, and simple grammar so our editors can focus on readability, clarity, and conciseness
- Summarize long reports
- Provide SEO recommendations
- Modify images, designs, or videos for our customers
- Provide basic information about a new topic, knowing that the information may be untrustworthy and that we will need to vet it in the same way we would vet information from Wikipedia
- Suggest ways to clarify confusing sentences
- Suggest possible interview questions we may have missed
- Transcribe interviews (but not without the client’s specific permission to upload the original audio file)
The AI tools we use are private to Dragonfly, and the data is not used to train global AI models. Any suggestions provided to us by AI will be carefully evaluated by our human creatives.
Additional resources
For reference, here is how other organizations we respect are addressing the use of large language models in generating content.
AI AND CONTENT WRITING
- WIRED: How WIRED Will Use Generative AI Tools
- Marketing AI Institute: Responsible AI Manifesto
- Harvard Business Review: How Generative AI Can Augment Human Creativity
AI AND SCIENTIFIC WRITING
- Nature, along with all Springer Nature journals: Tools such as ChatGPT threaten transparent science; here are our ground rules for their use
AI AND HEALTH CARE COMMUNICATIONS
- CommunicateHealth: This Wasn’t Written by a Bot
AI AND CREATIVE WRITING
- Justine Bateman, writer and filmmaker: AI in the Arts Is the Destruction of the Film Industry
AI AND LEGAL WRITING
- The New Republic: ChatGPT Fought the Law, and the Law Won: Here is an object lesson in how not to use generative A.I.
MORE INFO ON THE PROMISE AND THREAT OF AI
- Center for AI Safety: What is AI Risk?
- Center for AI Safety: Statement on AI Risk
- Nielsen Norman Group: AI: First New UI Paradigm in 60 Years
- arXiv (open-access preprint repository): The Curse of Recursion: Training on Generated Data Makes Models Forget
- VentureBeat: The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content