There’s no doubt that Google’s generative AI platform Gemini has faltered badly. A lot better was expected from a tech major that employs top global talent. However, inadequate data sets and improper training led the platform to generate several responses that were highly biased and inaccurate. Google has been prompt in acknowledging its mistake and has apologised, stating that the model is still under development. In the process, however, it has done a great disservice to smaller firms, particularly startups that are developing AI-based models. It is thus amply clear that an industry which boasts such marquee names should behave more responsibly.


But one had clearly expected better from the government as well. The advisory issued over the weekend warning platforms against generating biased content was uncalled for, for several reasons. The government could simply have asked for mandatory labelling and disclaimers about the possibility of the content being unreliable. Instead, it issued a directive that platforms must seek its permission before deploying generative AI models or algorithms. The startup world rightly went into a tizzy, as this would slow the pace at which innovation takes place in the tech world. The language of the advisory was also so general that it confused the industry about its applicability, forcing the minister of state for electronics and IT to clarify that it would apply not to startups but only to large platforms.

The clarification, of course, did not help, because the advisory does not define what a startup is. Nor does it answer the question of what happens if a generative AI model developed by a startup also throws up biased responses, as Google’s Gemini did. The communications & IT minister later clarified that the advisory is restricted to social media platforms and does not cover platforms that develop models for sectors like health, agriculture, etc. If that is the case, the advisory should have said so clearly. It is also unclear which government body would decide whether to grant permission, what criteria it would use, and so on. The government getting so worked up over algorithmic bias seems rather strange.


Artificial intelligence is not exactly about intelligence; it is about training. Just as a child is trained not to touch a candle’s flame, models need to be trained to answer queries that are subjective in nature, which was clearly not done in the case of Google’s Gemini. The government, however, should have known better than to expect such models to be foolproof. It certainly needs to be vigilant ahead of the parliamentary elections, as biased content can vitiate the electoral atmosphere. But misinformation can be countered by providing correct information. Just as the government has been able to sensitise people that what circulates on WhatsApp groups should not be treated as authentic and needs to be verified, the same can be done for AI-generated content.

The IT Act gives the government sweeping powers to block or remove unlawful and undesirable content. Intermediaries that do not follow its directives risk losing the legal immunity provided to them under the safe harbour clause and can be criminally prosecuted. Armed with such extensive powers, the government has no need for an advisory mandating prior permission to launch models. It should withdraw this part.


Don’t hyper-regulate: Prior govt permission to launch AI-based models is uncalled for as it will only stifle innovation

The Financial Express, 06.03.2024
