Generative AI refers to algorithms capable of creating new content, including text, images, audio, and videos. Despite its allure, it raises ethical concerns around transparency, misinformation, accountability, and gaps in existing ethical and legal frameworks.
The Indian Express interviewed AI experts to discuss the ethical implications of Generative AI.
Generative AI tools can be seen as ‘RoughDraft AI’: productivity aids that give those with domain expertise a draft response to refine. Ethical use demands that AI solutions be accessible and transparent.
When discussing ethical AI use, Abhivardhan, Chairperson and Managing Trustee of the Indian Society of Artificial Intelligence and Law, highlighted concerns about data processing transparency, unclear algorithmic functions, and sector-specific impacts. He noted that promises about Gen AI tools often lack accessibility and transparency.
Dr. Azahar Machwe, AI SME at Lloyds Banking Group, emphasized the importance of attributing content sources and implementing checks for hate, aggression, or profanity while ensuring compliance with legal frameworks.
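The automated checks Machwe mentions could, in a toy form, be a filter applied to model output before it reaches the user. The blocklist and the `passes_safety_check` helper below are hypothetical simplifications; production systems use trained classifiers rather than keyword lists:

```python
# Hypothetical sketch: screening generated text against a small
# blocklist before it is shown to the user. Real moderation pipelines
# use trained classifiers, not literal word matching.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not real vocabulary

def passes_safety_check(text: str) -> bool:
    """Return True if no blocklisted term appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

print(passes_safety_check("A perfectly polite reply."))  # True
```

A deployment would typically run such a check on both the user's prompt and the model's response, logging failures for review.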
Megha Mishra, an Internet sociologist and Gen AI ethics expert, noted that new technology always has positives and negatives, and there is a global effort to uphold community values.
Bias can never be fully eliminated.
Machwe mentioned that bias is sometimes necessary, for example, in discussions on pollution or natural disasters. He suggested carefully directing bias by adjusting prompts but acknowledged that it can spread myths, biases, and falsehoods.
Mishra emphasized the need for quality data, as poor data leads to poor output. She also highlighted the importance of understanding AI’s limitations and the biases of those building these models.
Accountability lies with the creator.
Abhivardhan argued that the entities and technology teams that own and maintain AI systems should be accountable. Core AI model teams should be liable when their policies are inaccessible, and companies deploying those models should share accountability. Operators of high-risk AI systems, as defined by EU AI Act standards, should also be held liable.
Machwe stated that the creator of the artefacts should initially be responsible, followed by the user. This distinction highlights the need for individuals to be accountable for both the content they create and disseminate.
Mishra suggested that tech companies making AI tools cannot shirk their responsibility. Both developers and users of these tools should be held accountable.
The most-needed feature: Gen AI that can detect Gen AI.
To ensure transparency in Gen AI models, Abhivardhan suggested that technology teams and companies must explain their data governance policies. Standardizing AI practices is crucial for ethical and economic accountability.
Machwe noted the need for Gen AI to detect other Gen AI, potentially through watermarking, to ensure source attribution. He also emphasized clear use of training data sets and avoiding IP lock-in. Mishra agreed that flagging the source could help ensure transparency.
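The watermarking idea Machwe describes could, in a toy form, look like the sketch below. The zero-width marker and both helper functions are hypothetical; real watermarking schemes embed statistical signals in the model's token choices rather than literal characters:

```python
# Hypothetical sketch: tagging generated text with an invisible
# watermark so other tools can attribute it to a Gen AI source.
ZERO_WIDTH_MARK = "\u200b\u200c\u200b"  # arbitrary zero-width sequence

def embed_watermark(text: str) -> str:
    """Append an invisible marker to AI-generated text."""
    return text + ZERO_WIDTH_MARK

def is_ai_generated(text: str) -> bool:
    """Detect the marker; real schemes are statistical and more robust."""
    return ZERO_WIDTH_MARK in text

sample = embed_watermark("This paragraph was drafted by a model.")
print(is_ai_generated(sample))                      # True
print(is_ai_generated("Human-written paragraph."))  # False
```

A character-level mark like this is trivially stripped by copy-editing, which is why the experts' point about standardized, robust attribution matters.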
Clear policies are needed.
Generative AI affects privacy and requires transparent policies. AI systems should explain why particular prompts or data are needed, and companies must follow data-law principles such as privacy, consent, and data quality.
Machwe emphasized the need for a ‘right to forget’ feature to ensure user data is scrubbed and not used for training. He also suggested anonymizing interactions with Gen AI to prevent data profiling and providing clear data use statements with user indemnification against future copyright claims.
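An anonymization step of the kind Machwe proposes might, in a minimal sketch, redact obvious identifiers before a prompt is stored or reused for training. The `anonymize` helper and its regex patterns are illustrative assumptions, not a complete PII solution:

```python
import re

# Hypothetical sketch: scrubbing common identifiers from a user's
# prompt before logging, in the spirit of a 'right to forget' pipeline.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def anonymize(prompt: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(anonymize("Contact me at jane.doe@example.com or +91 98765 43210"))
# → Contact me at [EMAIL] or [PHONE]
```

Real deployments would pair redaction like this with deletion workflows, so a user's request to be forgotten removes stored interactions as well.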
Balancing the need for data with the right to privacy.
Abhivardhan suggested that companies’ business models should balance profit from non-sensitive data without intrusive practices. Licensing models such as the GNU General Public License (GPL) or BSD can help maintain that balance.
Machwe proposed revenue-sharing partnerships with data providers and using a mix of real and synthetic data.
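Machwe's real-plus-synthetic blend could be sketched as a simple sampling routine; the `build_training_mix` helper and its parameters below are hypothetical, intended only to show how a target synthetic share might be enforced:

```python
import random

# Hypothetical sketch: blending real records with synthetic ones so a
# training set hits a target synthetic fraction, reducing reliance on
# personal data.
def build_training_mix(real, synthetic, synthetic_fraction=0.5, seed=42):
    """Combine all real records with enough synthetic ones to reach
    the requested synthetic share, then shuffle."""
    rng = random.Random(seed)
    n_synth = int(len(real) * synthetic_fraction / (1 - synthetic_fraction))
    n_synth = min(n_synth, len(synthetic))  # cap at what is available
    mix = list(real) + rng.sample(list(synthetic), n_synth)
    rng.shuffle(mix)
    return mix

real = [f"real_{i}" for i in range(6)]
synthetic = [f"synthetic_{i}" for i in range(10)]
mix = build_training_mix(real, synthetic)
print(len(mix))  # 12: six real records plus six synthetic ones
```

In practice the synthetic records would come from a generator validated against the real distribution, and the mix ratio would be tuned per task.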
Mishra emphasized proper data selection, cleaning, and inclusivity to balance AI training needs with privacy rights.
Combating misinformation and deepfakes.
Abhivardhan noted that the Ministry of Electronics and Information Technology (MeitY) issued an advisory on AI models and deepfakes. Though the advisory’s language is unclear, MeitY could regulate deepfakes by creating an open-source repository of detection methods and sensitizing users to the risks.
Mishra stressed the importance of massive literacy efforts to tackle misinformation and deepfakes.
Long-term ethical considerations of Gen AI.
Abhivardhan highlighted three considerations: maintaining data flow mapping, making company policies accessible and understandable, and examining business models for data privacy considerations.
Machwe pointed out the challenges of controlling AI development as it becomes more accessible. Long-term ethical considerations remain relevant, and it’s crucial to teach AI ethical considerations.
Mishra noted that ‘social exclusion’ could be a long-term ethical consideration.
Preparing society for the broader impacts of Generative AI.
Abhivardhan suggested educating people about Gen AI tools as productivity enhancers, encouraging AI use despite professional risks, clarifying consent dynamics, informing users about AI’s non-deterministic nature, and addressing misconceptions about AI accuracy.
Machwe emphasized the immediate impacts of Gen AI, focusing on understanding the future of work, determining ownership rights, and staying updated on technology.
Legal and ethical frameworks: the way forward.
Machwe mentioned existing frameworks such as the EU AI Act, the UK PRA’s SS1/23, and the US AI Bill of Rights, alongside GDPR. He emphasized the importance of AI systems understanding these laws and self-regulating: just as a model learns to recognize the defining characteristics of a dog, AI should learn to recognize the boundaries set by legal and ethical frameworks.
This extensive discussion highlights the importance of ethical considerations in the development and deployment of Generative AI, emphasizing transparency, accountability, and the need for robust legal and ethical frameworks.