
'Safety labels' in AI apps to clearly state risks, testing in discussion: Josephine Teo
Mrs Josephine Teo speaking at the Personal Data Protection tech conference at Sands Expo and Convention Centre on July 15, 2024.
PHOTO: The Straits Times

SINGAPORE - Users of generative artificial intelligence (AI) apps may soon see labels that clearly state how the AI should be used, its risks and how it was tested, as part of upcoming guidelines to make the technology easier to understand.

Likening the practice to safety labels on medication or household appliances, Minister for Digital Development and Information Josephine Teo said the effort aims to standardise how tech companies communicate the transparency and testing of their AI products.

Creators and deployers of generative AI should be clear with users about the data used, any risks and limitations of the model, and how their systems have been tested, said Mrs Teo in an opening speech at Personal Data Protection Week, held from July 15 to 18 at the Sands Expo and Convention Centre at Marina Bay Sands.

"We will recommend that developers and deployers be transparent with users by providing information on how the generative AI models and apps work," said Mrs Teo, who is also Minister-in-charge of Smart Nation and Cybersecurity.

Explaining the guidelines, Mrs Teo said: "This is a little bit like when you open a box of over-the-counter medication. There is always a sheet of paper to tell you about how the medication is supposed to be used and what are some of the side effects you may face."

"This level of transparency is needed for AI models built using generative AI. That's the recommendation."

The guidelines will set out safety benchmarks that AI systems should be tested against before deployment, such as the risks of producing falsehoods, toxic statements and biased content. Generative AI refers to AI that can create new content such as text and images, and is less predictable than traditional AI.

Mrs Teo added: "This is like when we buy household appliances, and they come with a label that says it has been tested because you cannot be expected to know whether the appliance is safe to use."

The Infocomm Media Development Authority (IMDA) will start consultations with the industry on the guidelines, said Mrs Teo, without giving a date for when the guidelines are expected to be released.

Separately, Mrs Teo said that in early 2025, businesses in ASEAN will have a guide on data anonymisation to facilitate the secure transfer of data across the region.

The guide is one of the outcomes of a February meeting of the region's technology officials, who discussed ways the nations can develop a secure global digital ecosystem.

IMDA also released a guide on privacy-enhancing technologies in AI, which Mrs Teo said will help address the growing demand for data to train AI without compromising users' privacy.

She referred to how the technology can help to protect personally identifiable information so that businesses can share data more securely.

Synthetic data, in particular, shows promise as a solution, as it creates realistic data for AI model training without using the actual sensitive data, said Mrs Teo. She echoed experts' concerns that AI innovation could lag because of a shortage of good training data, caused in part by privacy concerns.

The guide pointed to synthetic data as a solution because it is modelled closely on real-world data, which can help speed up innovation while mitigating concerns about cybersecurity incidents.

Managing data in generative AI poses even more challenges for the industry than traditional AI, which is more predictable, said IMDA assistant chief executive and Personal Data Protection Commission deputy commissioner Denise Wong. She was speaking during a panel discussion on AI and data privacy alongside representatives from tech organisations, including consulting firm Accenture and ChatGPT developer OpenAI.

Ms Wong said: "The next challenge is how do we do that in a generative AI space and what are the relevant principles? This is something the team and I will be discussing in consultation with the industry."

OpenAI's head of privacy legal Jessica Gan Lee said data protection safeguards need to be developed at all stages of AI, from training and development to deployment.

When asked about the risks of generative AI tools like ChatGPT, she said the key is to train AI models on diverse datasets "from all corners of the world", incorporating multiple cultures, languages and sources, while finding ways to reduce the processing of personal information.

Panellist Irene Liu, who is the regional strategy and consulting lead for finance, risk and compliance practice at Accenture, said: "A lot of the conversations are always centred around how organisations protect the data that they collect, but I feel a lot more focus has to be on the consumers themselves being responsible for the data that they provide."

Not many are aware of the implications of sharing information online, such as when downloading programmes or accepting cookies, she said. She suggested: "Can we make sure there is a level of education to consumers who are sharing this information, and understanding the implications of why they are sharing it as well?"


This article was first published in The Straits Times. Permission required for reproduction.
