Looking for a red-pen edit of the previous versions of our AI policy? Check it out here.
Dragonfly Editorial is powered by human creativity, human identity, and each of our unique human lived experiences.
We know that generative AI has the potential to assist us in our work. If it helps us work more efficiently, saving our clients time and money, then we’re all in.
But we think it’s important to approach AI with a clear understanding of its limits and a healthy skepticism about its accuracy and its ability to create rather than regurgitate. For example, although AI produces content in seconds, that content is sometimes inaccurate. AI models trained on data with a fixed cutoff date will often produce outdated content that doesn’t jibe with today’s realities. And because it draws exclusively from already-published material, AI-driven output is often rife with plagiarism and bereft of fresh ideas. Further, true thought leadership – content driven by a specific viewpoint, an informed opinion, or original research – cannot, by definition, be created by AI.
At the end of the day, echoing the Marketing AI Institute’s Responsible AI Manifesto, we believe that AI technologies should be assistive, not autonomous. We believe that the best content doesn’t just summarize past publications, but rather combines original research with fresh thinking. And we believe, with all our heart, that anything worth creating is best created with human soul, spirit, and intellect intertwined.
Taking all this into consideration, here are our policies on the use of AI in our organization:
We will never use AI to:
- Replace our staff or contract Dragonflies
- Produce complete copy for our customers
- Create entire images, designs, or videos for our customers
- Edit content for our customers without careful review of all suggested changes
- Replace credible sources of information, such as original research articles, personal interviews, or trusted news outlets
- Analyze content that is internal, confidential, or proprietary to our clients
- Analyze any client-provided content without their permission
We may use AI to:
- Generate story ideas
- Generate an outline for a story
- Suggest social media posts
- Suggest headlines
- Do a pre-edit check for punctuation, spelling, and simple grammar so our editors can focus on readability, clarity, and conciseness
- Summarize long reports
- Provide SEO recommendations
- Modify images, designs, or videos for our customers
- Provide basic information about a new topic, knowing that the information may be untrustworthy and that we will need to vet it in the same way we would vet information from Wikipedia
- Suggest ways to clarify confusing sentences
- Suggest possible interview questions we may have missed
- Transcribe interviews (but not without the client’s specific permission to upload the original audio file)
The AI tools we use are private to Dragonfly, and the data is not used to train global AI models. Any suggestions provided to us by AI will be carefully evaluated by our human creatives.
Additional resources
For reference, here is how other organizations we respect are addressing the use of large language models in generating content.
AI AND CONTENT WRITING
- WIRED: How Wired will Use Generative AI Tools
- Marketing AI Institute: Responsible AI Manifesto
- Harvard Business Review: How Generative AI Can Augment Human Creativity
AI AND SCIENTIFIC WRITING
- Nature, along with all Springer Nature Journals: Tools such as ChatGPT threaten transparent science: here are our ground rules for their use
- Cureus: Artificial Hallucinations in ChatGPT: Implications in Scientific Writing
AI AND HEALTH CARE COMMUNICATIONS
- CommunicateHealth: This Wasn’t Written by a Bot
AI AND CREATIVE WRITING
- Justine Bateman, writer and filmmaker: AI in the Arts Is the Destruction of the Film Industry
AI AND LEGAL WRITING
- The New Republic: ChatGPT Fought the Law, and the Law Won: Here is an object lesson in how not to use generative A.I.
MORE ON THE PROMISE AND THREAT OF AI
- Center for AI Safety: What is AI Risk?
- Center for AI Safety: Statement on AI Risk
- Nielsen Norman Group: AI: First New UI Paradigm in 60 Years
- arXiv, an open-access preprint repository: The Curse of Recursion: Training on Generated Data Makes Models Forget, and The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content