Tehran Times CEO calls for concerted efforts to adapt media to AI advancements

September 16, 2024 - 16:29

MOSCOW – Mohammad Mahdi Rahmati, CEO of the Tehran Times, urged media outlets in the Asia-Pacific region to adapt their approaches to better serve the evolving preferences and requirements of their audiences in the face of rapid advancements in artificial intelligence (AI).

Rahmati made the remarks on Sunday during the 53rd meeting of the Executive Board of the Organization of Asia-Pacific News Agencies (OANA), held in Moscow, Russia.

Below is the full text of his speech:

Hello to all colleagues and distinguished guests! I’m pleased to gather once again for the OANA meetings, where we can discuss the issues and challenges we face as news agency managers from our respective countries. As this session focuses on artificial intelligence (AI), I’d like to take the opportunity to share my thoughts on this fascinating and challenging topic.

Let’s begin with a story that may be familiar to some of you. In 2016, Microsoft introduced a chatbot named Tay. The idea behind Tay was simple: it was supposed to learn from interactions with users on Twitter and improve its conversations the more it talked with people. Its bio read: “The more you talk the smarter Tay gets.” However, things did not go as planned. Within just 16 hours, Microsoft had to shut Tay down because the chatbot had started posting racist and offensive messages. Tay had learned them from user interactions, turning the experiment into a complete disaster. This example shows us that when AI is not properly guided, it can quickly lead to ethical problems that harm society.

Another concerning issue is the rise of deepfake technology. Each of us, in our own country, has faced this problem in some form. For example, a while ago an interview with a medical specialist, who was also a former member of the Iranian Parliament, was shared on social media concerning the health of the Supreme Leader. In the video, the specialist claimed that the Supreme Leader’s health was in danger and that there were signs of a recurrence of his previous illnesses. The interview, however, was a deepfake: no such statement was ever made, and the claim was untrue.

In another case, I would like to show you a video that is both humorous and interesting. Using deepfake technology, Jennifer Lopez appears, wearing a hijab, in 'Khaneh Be Doosh' ('Homeless' in English), an old and popular Iranian TV series, replacing one of the original actors and delivering their lines!

So, what is AI, and how is it used in the media? AI refers to the use of computer systems and algorithms to perform tasks that usually require human intelligence. In the media industry, AI is increasingly common, with applications such as:

- Detecting fake news

- Personalizing content for readers

- Managing social media

- Targeting advertisements

- Optimizing search and content discovery

While these are useful applications (the first of which is sketched in code below), we must also be aware of the challenges and risks associated with AI, especially from an ethical perspective.
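To make the first of these applications a little more concrete, here is a minimal, illustrative sketch of a fake-news classifier: TF-IDF text features feeding a linear model, assuming the scikit-learn library. The headlines, labels, and model choice are my own assumptions for illustration, not a description of any news agency's actual system.

```python
# Illustrative sketch only: a toy fake-news classifier, not a production system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny hand-labeled corpus (1 = likely fake, 0 = likely genuine); invented data.
headlines = [
    "Miracle cure discovered, doctors hate it",
    "Parliament passes annual budget bill",
    "Secret celebrity clone spotted in capital",
    "Central bank announces new interest rate",
    "Aliens endorse presidential candidate",
    "News agency opens new bureau in Moscow",
]
labels = [1, 0, 1, 0, 1, 0]

# TF-IDF features feeding a logistic regression: a simple, transparent baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Score an unseen headline; the output is a probability, not a verdict.
test_headline = "Miracle diet secret the government is hiding"
print(model.predict_proba([test_headline])[0][1])  # estimated probability of "fake"
```

In practice, of course, such a signal is only one input: real newsroom systems combine far larger training sets with human editorial review.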

Here’s a quote from Henry Kissinger that I’d like to share. In his book "The Age of AI: And Our Human Future," co-written with Eric Schmidt and Daniel Huttenlocher, Kissinger says:

“AI not only expands the capacity to process and analyze data in unprecedented ways, but it also fundamentally challenges how humans make decisions, our understanding of the world, and even our definition of what it means to be human. This technology shows us that human creativity, experience, and judgment may need to be redefined, as machines surpass us in tasks once exclusively human. This raises the question: In a world where AI plays a central role, what makes humans distinct from non-humans?”

Additionally, the book raises concerns about the dangers of AI, including:

1. Surveillance and Privacy:

The extensive use of AI in surveillance can lead to privacy violations.

2. Automated Decision-Making:

AI may make decisions without regard for human and ethical considerations.

3. Bias and Injustice:

AI algorithms might make biased decisions because of incorrect data or biases inherent in the data.

4. Warfare and Security:

The use of AI in warfare and security could lead to increased violence and instability. Recently, representatives of some 60 countries gathered in South Korea to endorse an agreement limiting the use of AI in sensitive military and intelligence domains in order to avoid AI errors.

Some contemporary scholars even consider the philosophy and very existence of AI fundamentally flawed and malevolent. For instance, Alexander Dugin, a Russian political scientist, remarked in a meeting with the Iranian scholar Ayatollah Mirbagheri:

“Modernity, which includes advanced technologies like AI, is an incomplete and evil ideology for me. These technologies can lead to the destruction of traditions and human values.”

This statement by Mr. Dugin is quite thought-provoking and serves as a warning to those of us in the cultural and media fields: the danger of erasing traditions and human values! This topic merits a detailed discussion, for which we do not have time today.

Having looked at some examples of AI misuse and failure, let’s turn to the ethical issues arising from AI use in the media, which pose serious challenges for us:

1. Algorithmic Transparency:

AI systems often function as a "black box," meaning even experts do not always understand how they reach specific decisions. This lack of transparency can reduce public trust in AI, especially when it is used to deliver news or information.

2. Bias and Discrimination:

AI systems learn from data, and if that data contains biases, AI can reflect and even amplify them. For instance, AI systems may inadvertently spread racial, gender, or political biases in the content they produce, leading to unfair and harmful outcomes (a toy illustration follows after this list).

3. Privacy Issues:

AI systems often collect and analyze large amounts of personal data. While this can help media companies offer more personalized content, it also raises privacy concerns. Companies must be very careful about how they use this data to avoid violating individuals’ privacy rights.

4. Trust in AI-Generated News:

AI can generate news articles quickly, but speed does not make the information reliable. People may start doubting the credibility of news if they know it was generated by a machine rather than a human journalist.

5. Over-Reliance on AI:

As AI takes on more tasks, there is a risk that we become too dependent on it, losing the critical thinking, creativity, and human analysis that journalists provide. This could lead to a decline in overall media quality.

6. Spread of Misinformation:

AI systems, if not carefully monitored, can spread misinformation rapidly, especially during crises. This can have dangerous consequences, such as creating panic or confusion among people.

7. Legal and Accountability Issues:

If an AI system makes a mistake, such as spreading fake news or discriminatory content, who is responsible? The AI developer, the media company, or someone else? There are currently no clear legal frameworks to answer these questions.

8. Job Losses:

As AI takes on more tasks in the media industry, many jobs may be lost, especially those of journalists and content creators. This can affect the diversity of viewpoints in the media and the quality of the content produced.

9. Influencing Public Opinion:

Media organizations may use AI to create content designed to influence or manipulate public opinion, undermining democracy and public trust.
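Point 2 above, bias amplification, is easy to demonstrate. Below is a deliberately simplified sketch, again assuming scikit-learn, in which a model is trained on synthetic historical editorial decisions that favored one region regardless of story quality; the trained model then reproduces that bias for two otherwise identical stories. Every feature and number here is invented for illustration.

```python
# Synthetic demonstration of learned bias; all data below is invented.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Past editorial decisions: stories from "region A" were approved 90% of the
# time and stories from "region B" only 30%, independent of actual quality.
# Feature vector: [is_region_a, quality_score].
X, y = [], []
for _ in range(1000):
    region_a = random.random() < 0.5
    quality = random.random()                # true newsworthiness, 0..1
    approve_rate = 0.9 if region_a else 0.3  # the historical bias
    X.append([1.0 if region_a else 0.0, quality])
    y.append(1 if random.random() < approve_rate else 0)

model = LogisticRegression().fit(X, y)

# Two stories of identical quality, differing only in region:
print("approval prob, region A:", model.predict_proba([[1.0, 0.7]])[0][1])
print("approval prob, region B:", model.predict_proba([[0.0, 0.7]])[0][1])
# The model has learned the historical bias, not newsworthiness.
```

Run as-is, the two probabilities diverge sharply even though the stories differ only in region: the model has faithfully learned the prejudice in its training data, not the quality of the journalism.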

Given these challenges, the big question is: What can we do to address the ethical concerns of AI in the media?

We have the opportunity, as a global community, and especially within the framework of the Organization of Asia-Pacific News Agencies (OANA), to play a significant role in defining and creating ethical guidelines. This work should include:

1. Establishing Global Ethical Guidelines:

It is essential to set clear rules and regulations for the ethical use of AI in the media. OANA can help its member agencies share knowledge and best practices to ensure AI is used responsibly and transparently.

2. Training Media Professionals:

Journalists, editors, and other media professionals need training on the ethical use of AI. OANA can take a leading role in organizing workshops and training sessions in member countries to raise awareness about responsible AI use.

3. Monitoring AI Systems:

We need systems that continuously monitor AI for potential biases or errors. This kind of international oversight, supported by OANA, can help ensure the accuracy and fairness of these systems.

4. Developing Common Policies:

OANA can provide a platform for developing shared policies among news agencies to promote ethical AI use in the media. By working together, we can create a media environment where AI helps improve quality, trust, and accountability.

In conclusion, AI has immense potential to transform the media industry, making it more efficient and responsive to audience needs. However, as we’ve seen, it also comes with serious risks, especially from an ethical standpoint. That’s why international cooperation is crucial. By collaborating through platforms like OANA, we can develop the necessary rules and systems to ensure the responsible use of AI, protecting both the media industry and the public.

Together, we can take significant steps to overcome the serious risks of AI. Therefore, I emphasize the importance of close cooperation, such as sharing experiences, setting regulations for AI oversight, and organizing joint training sessions among news agencies. By working together, we can better guide our newsrooms toward greater awareness of AI and ethical ways to use it.

Additionally, as I did at the previous summit, I would like to honor the memory of the oppressed people of Gaza. These are people who, under the most brutal and inhumane attacks and genocide, continue their resistance and still hold on to hope for days of freedom and peace. We will not forget Gaza!

Thank you for your attention.
