Artificial Intelligence (AI), Risk of False Information, and Call for New Measures to Control Misleading Information


Azher Hameed Qamar, Ph.D.

Artificial Intelligence is becoming famous, useful, and dangerous all at once. Though it has several pros and cons, many of them debatable, in this blog post I am interested in the use of AI tools and the risk of false information. In today's world, the internet is the most popular source of information, and people across the globe, from all fields of life and work, read, listen to, and watch the information widely available there. Moreover, spreading information is also a source of income for the individuals or groups who provide it. Blogs, videos, pictures, websites, and any other virtual presence of information can be monetized, with pay-per-view, ads, affiliate marketing, and similar schemes serving as means of earning. Even if the earning rate (in US dollars or euros) is low, it still benefits those uploading this information from countries where the currency is relatively weak. Hence, they can build a considerable fortune in passive income, and that makes it all the more attractive.

However, in the past, they had to work a little harder to find and collect information and present it in an attractive format, and they made at least some effort to verify the information before posting it. Today, AI has solved several 'issues' and made it 'easy' to earn passively and more quickly than before. If you have watched many short videos or other content explaining how to earn quickly and passively using AI, you will notice a recurring pattern of 'motivation' in these videos, built on 'quick, easy, and passive earning'. For example, one YouTuber argued in a video how easy it is to take content available under a Creative Commons licence on YouTube, use an AI tool to generate a script that can be overlaid in a video editing tool, and have new content ready to post and monetize. Similarly, there are several other ways people are using AI-generated text, pictures, and other forms of information in their blogs, websites, videos, and so on. It is happening quickly, and it is becoming popular because it is 'working'.

Now, sites like ChatGPT and other AI tools do not claim that the information they generate is correct or can be used as authentic content. But that does not matter when people use and spread it without any protocol for validating or verifying the content. Here comes the risk of 'false information' that may spread like wildfire and may be accepted (or even applied or practiced) by the many people whose only source of information is the internet. False information may be misleading, deceptive, and even harmful to individuals and societies. In this short post, I am only trying to convince the readers of this blog to see, reflect, and talk about this issue in whatever position they hold: researcher, blogger, content writer, teacher, worker, and so on.
I am still thinking and working on this issue, but I have realized by now that there is an urgent need for new measures to control misleading information, particularly where AI is being used as a tool to create content for public use. Perhaps a study could be conducted on how YouTubers and other content creators are using AI tools; a critical analysis might then tell us more about the issue I have briefly discussed here.

