
Lessons in bias and transparency with Generative AI

Asa Butcher, Senior Editor, Spoon Finland


Generative artificial intelligence (AI) has exploded into the public consciousness since the turn of 2023. Its ability to generate images, compose songs, make videos and write scientific papers has captured the imagination of millions, with ChatGPT setting a record for the fastest-growing user base. Alongside the overly optimistic hype, however, come the pessimistic naysayers who claim that AI technology will destroy the world.

The reality lies somewhere between these extremes. As with so many technological advances, most reactions are based on partial knowledge, underlining the importance of user education. While AI researchers struggle to fully understand what happens inside the ‘black box’ of these proprietary systems, more should be discussed openly to raise general awareness of how the ‘AI sausages’ are being made.

Predicting the risks of Generative AI

“There are advantages and disadvantages to using Generative AI. It’s not that you shouldn’t use it; of course you should, but caution is needed. We must educate people,” begins Chirag Shah, a professor at the University of Washington and Co-Director of Responsibility in AI Systems & Experiences (RAISE).

“There are two categories in machine learning: discriminative and generative. Discriminative models look at existing material and classify or rank it. In contrast, generative models use training datasets to predict what would fill the gap, which could be the next word, sentence, or pixel. They can create things that don’t necessarily exist because of pattern learning from publicly available data,” he adds.
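To make the distinction concrete, here is a minimal sketch in Python (a purely illustrative toy, not drawn from the article or from any of the systems it discusses). It uses the same tiny corpus in two ways: a discriminative step that only scores and ranks sentences that already exist, and a generative step that learns which word tends to follow which and can then produce text that never appeared verbatim in the data.

```python
# Toy illustration (assumed example, not from the article or any real system):
# the same small corpus used discriminatively (rank existing text) and
# generatively (predict the next word and produce new text).
from collections import Counter, defaultdict
import random

corpus = [
    "calibration improves measurement accuracy",
    "calibration reduces measurement uncertainty",
    "accurate measurement improves safety",
]

def rank(query: str, docs: list[str]) -> list[str]:
    """Discriminative-style use: order existing documents by word overlap with the query."""
    q = set(query.split())
    return sorted(docs, key=lambda d: len(q & set(d.split())), reverse=True)

# Generative-style use: learn which word tends to follow which (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def generate(start: str, length: int = 4) -> str:
    """Sample a short word sequence; it may never have appeared verbatim in the corpus."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(rank("measurement accuracy", corpus)[0])  # retrieves an existing sentence
print(generate("calibration"))                  # may produce a sentence that is not in the corpus
```

At the scale of real systems, the corpus becomes a large slice of the public internet and the lookup table becomes a neural network with billions of parameters, which is exactly why the questions about training data, bias and transparency discussed below carry so much weight.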

These systems also hallucinate. Plenty of examples are being shared online, from fictional academic references to claims that Leonardo da Vinci painted the “Mona Lisa” in 1815 or accounts of a record for walking across the English Channel on foot. These delusions are beneficial when using DALL·E or Stable Diffusion to create fantasy images, but they become problematic and potentially harmful when somebody is seeking a medical diagnosis, for example.

The average user can mistake these systems for doctors or knowledgeable healthcare professionals, yet they excel at making up authentic-sounding information and citing legitimate-looking sources. ChatGPT feels trustworthy because it talks to us like a human in natural language, warns Shah: “When blind trust in these systems is combined with bias and a lack of transparency, you realise what a dangerous mix it can be. The more confidence we have in these systems, the less we understand their limitations.”

Transparency can fill the gaps in trust

Understanding how these tools work relies upon transparency, but generative AI systems are elevating opacity to a new level. While ChatGPT’s creator OpenAI claims its mission is to ‘create safe AGI [artificial general intelligence] that benefits all of humanity’, it is still primarily a commercial business with no incentive to reveal where its training data originated.

Sweden-based technologist Paulina Modlitba, who studies human-computer interaction and the psychology of technology, wants to see regulations regarding data transparency because she is concerned about the social implications of generative AI: “The competition between these companies is pushing AI into a problematic stage. For example, Google and Microsoft have dismissed their in-house AI ethicists, who could warn about any worrying ways that data is being used.”

She describes the reaction to generative AI as ‘almost euphoric’. “Users excitedly upload sensitive work documents to see if ChatGPT can improve and develop them, without considering what will happen to the data afterwards. Ultimately, the companies must take responsibility, but that requires regulation.”

The proposed European Union AI Act would be a strong first step in protecting our privacy and personal data, but transparency is the more immediate goal, making it easier for people to opt out and see how their data is being used. GDPR is a good example of politicians taking concrete action on privacy regulation, but more must be done.

“The fact that most countries don’t have a dedicated technology minister or political department is an issue. The situation is still too dependent on specific politicians being interested or having a tech background,” Modlitba observes, adding that we must reshape our political structures and society to navigate this revolution. 

Among her many concerns is the widening digital divide, which will now separate people who know how to utilise AI tools and work alongside them from those who don’t. She also expresses disquiet about the neo-colonialism that technology and AI support in developing countries: “Industrial countries in the West are using Africans, for example, as cheap labour to train and reinforce these AI systems. We should use these technologies to decrease inequality, but we do the opposite.”

Shah compares the journey ahead to the fight against tobacco consumption: “It has taken decades to go from ‘cigarettes are good for you’ to banning smoking. It will be the same with dataset transparency, eliminating bias and educating users about the dangers.” Modlitba agrees, concluding, “We need AI to solve issues like sustainability, the climate crisis and ageing populations. However, we must also become much more realistic regarding the negative effects.” 

About the author

Asa Butcher, Senior Editor at Spoon Finland, has over 20 years of writing experience spanning multiple disciplines, from journalism and copyediting to digital content and B2B communications. Born in England, he moved to Finland in 2002 as a freelance writer before joining an international media company focused on China. His eye for detail enables him to produce high-quality work that engages, informs and resonates with readers. He loves reading, movies and running.
