Dragonfly’s Policy on Ethical AI Use 

“My worst fear is we cause significant harm to the world. If this technology goes wrong, it can go quite wrong.”

Sam Altman, CEO of OpenAI, the company responsible for the large language model GPT-4, during his May 2023 testimony before US lawmakers.

Dragonfly Editorial is powered by human creativity. The copy our writers pore over, the words our editors choose, and the images our designers craft are all informed by our lived experience as human beings. What we create is inextricably tied to the color of our skin, the gender we embrace, the streets we grew up on, the books we read under the covers late at night, the loves we’ve drowned in, the bones we’ve broken, and the scars we’ve learned to live with. 

We know that generative AI has the potential to assist us in our work. If it helps us work more efficiently, saving our clients time and money, then we’re all in. 

But we think it’s important to approach AI with a clear understanding of its limits and a healthy skepticism about its ability to create rather than regurgitate.

For example, although AI produces content in seconds, that content is often riddled with inaccuracies and outright falsehoods. Because it relies on outdated data sets, it often produces content that doesn’t jibe with today’s realities. And because it draws exclusively from already-published material, its output is often rife with plagiarism and bereft of fresh ideas.

Taking all this into consideration, here are our policies on the use of AI in our organization:

We will never use AI to:

      • Produce copy for our customers

      • Create images, designs, or videos for our customers

      • Edit content for our customers 

      • Replace credible sources of information, such as original research articles, personal interviews, or trusted news outlets

We may use AI to:

      • Generate story ideas

      • Suggest social media posts

      • Suggest headlines

      • Summarize long reports

      • Provide SEO recommendations

      • Provide basic information about a new topic, knowing that the information may be untrustworthy and that we will need to vet it in the same way we would vet information from Wikipedia

      • Suggest ways to clarify confusing sentences

      • Transcribe interviews

Any suggestions provided to us by AI will be carefully evaluated by our human creatives.

In fact, we’re already doing that today — and you probably are too. We’re already “checking behind an AI” every time we accept a wording suggestion made by Grammarly, reject a goofy layout idea from PowerPoint, or accept an autofill from Gmail.

At the end of the day, echoing the Marketing AI Institute’s Responsible AI Manifesto, we believe that AI technologies should be assistive, not autonomous. We believe that the best content doesn’t just summarize past publications, but rather combines novel research and newly born thoughts. And we believe, with all our heart, that anything worth creating is best created with human soul, spirit, and intellect intertwined.

Additional resources

For reference, here is how other organizations and individuals we respect are addressing the use of large language models in generating content.

      • AI and Content Writing

      • AI and Scientific Writing

      • AI and Health Care Communications

      • AI and Creative Writing

      • AI and Legal Writing

      • More info on the promise and threat of AI
