Amid increasing concerns about the use of Artificial Intelligence (AI) in news media globally, Christian Daily International (CDI) has published a policy on its website outlining what the news outlet considers permissible use of AI tools by its editorial staff, and where it draws clear lines.
Earlier this week, Reuters reported that “global concerns about the use of AI in news production and misinformation are growing.” Surveys of nearly 100,000 people across 47 countries found “that consumers are suspicious about the use of AI to create news content, particularly for sensitive subjects such as politics.”
“According to the survey, 52% of U.S. respondents and 63% of UK respondents said they would be uncomfortable with news produced mostly with AI. The report surveyed 2,000 people in each country, noting that respondents were more comfortable with behind-the-scenes uses of AI to make journalists' work more efficient,” according to Reuters.
Recognizing both the value and risk of using AI in journalism, the CDI editorial team has decided to make its policy publicly accessible to its readers.
“Artificial Intelligence (AI) continues to bring about revolutionary changes to many areas in society as well as to our personal and professional lives. Debates are ongoing regarding the proper use of AI, potential risks, and how to draw healthy boundaries while recognizing that AI is here to stay,” the introduction says.
“In journalism, AI can bring great benefits in increasing efficiency in some areas, while posing great risks to accuracy and credibility when applied wrongly. Therefore, Christian Daily International’s editorial team wants to be transparent about what is permissible and what is not permissible use of AI when it comes to news reporting on its website,” it continues.
The policy then lists several areas where AI tools can assist journalists, such as transcribing audio recordings, summarizing lengthy reports for personal use, translating content available only in foreign languages, and similar work. AI can also be used to improve everyday tasks unrelated to reporting.
On the other hand, the policy does not permit using AI tools to write articles from scratch, in part because of the “inherent risk that AI may use copyrighted material and plagiarize from existing sources without proper attribution.” Furthermore, any information provided by AI must be verified for accuracy, “as AI can ‘hallucinate’ and make things up that do not exist.”
The policy also touches on photography, noting that AI image creation can be useful for certain very limited purposes, such as visualizing abstract issues. It states, however, that “AI may not be used to create visuals that could mislead the audience, such as visuals that resemble photographs of real people or situations or to alter real photographs beyond simple corrective improvements (such as contrast or brightness).”