Bangladesh needs a comprehensive framework to address the risks of AI

03 November, 2023, 04:55 pm
Last modified: 03 November, 2023, 05:28 pm
It is essential to develop comprehensive and ethically grounded frameworks, legislation and responsible practices about AI
Illustration: Bloomberg

In the age of artificial intelligence, ensuring transparency and accountability has become increasingly important, particularly in countries like Bangladesh, because AI systems generate predictions and can cause harm, or carry out immoral or unlawful actions, without anyone being held responsible. 

Accountability ensures that AI systems are fair and act according to agreed principles. Transparency helps reduce bias and discrimination in AI-assisted decision-making, improving both accountability and trustworthiness. 

However, Bangladesh faces a complex problem when AI, false information and accountability come together. 

The digital age has changed how people communicate, share information and interact, making the threat of false information a serious issue. The spread of disinformation in Bangladesh is shaped by many factors, and the misuse of artificial intelligence has intensified the problem further. 

Bangladesh's cultural and linguistic diversity gives rise to distinct languages, communities and cultural echo chambers, which in turn help disinformation spread. Addressing it requires prioritising digital literacy and deploying language-sensitive interventions that build critical thinking skills. 

Political and religious sensitivities in Bangladesh further exacerbate the problem, as AI-driven disinformation campaigns target vulnerable communities and spread divisive propaganda. Digital inequality also poses a significant barrier to equitable access to technology and information, impeding equal participation and information accessibility. 

Promoting digital inclusion in Bangladesh requires transparency and accountability measures that bridge these gaps. Social media platforms need more stringent regulation and credible content-verification processes. It is also crucial to establish independent fact-checking organisations that are properly vetted and cited in reporting.

In 2020, Bangladesh introduced its National Strategy for Artificial Intelligence to address ethical issues and emphasise the need for a strong legal and ethical foundation for AI, including fairness, cybersecurity and transparency. 

The strategy emphasises AI safety to minimise mishaps and unintentional harm from poorly constructed AI systems. It promotes responsible AI governance and a national AI ecosystem with information frameworks and AI applications. 

The action plan promotes legislation and policy formulation to build an ethical and legal framework for AI. It assists policymakers in developing AI projects, but it is non-binding and relies on government goodwill and stakeholder participation. 

Bangladesh has no binding legal framework or national policy for governing AI technology. Although the country has AI technology professionals, it lacks legal expertise in technology ethics, which makes developing a regulatory framework challenging.

However, various legal measures in Bangladesh can combat disinformation, aiming to uphold social cohesion, safeguard public stability and ensure the distribution of accurate and reliable information. The Penal Code of 1860, the Bangladesh Telecommunication Act of 2001 and the Digital Security Act of 2018, now the Cybersecurity Act, are significant legal measures in Bangladesh. 

These laws can be used to protect public order, uphold democracy and ensure the distribution of accurate and reliable information. However, implementing these laws faces several challenges, including vague and broad legal definitions, concerns about freedom of expression and the need for a well-defined legal framework. 

Experts emphasise the importance of precision in drafting legal language that clearly outlines restrictions on fraudulent information and protects individuals' rights to express their opinions. 

Capacity and resources are also a challenge in Bangladesh. Law enforcement and regulatory entities lack the resources and expertise to enforce disinformation legislation. Collaboration between law enforcement, technology specialists and civil society organisations is crucial to bridge this gap and ensure the effective implementation of legal frameworks in Bangladesh. 

Balancing innovation and regulation is crucial for Bangladesh, as the spread of false information has become increasingly prevalent due to technology. AI has the potential to revolutionise higher education and empower individuals to thrive in the digital era. However, ethical considerations and transparency are essential for adequately utilising AI. 

A comparative legal analysis of artificial intelligence helps illustrate why transparency and responsibility matter in AI deployment. The UN Educational, Scientific and Cultural Organisation's "Recommendation on the Ethics of Artificial Intelligence", adopted in 2021, emphasises the importance of AI transparency and responsibility in respecting human rights, fundamental freedoms and ethical principles. 

Member states should invest in digital, media and information literacy skills to combat disinformation, misinformation and hate speech. The non-binding document offers guidance, grounded in ethical principles and human rights, to member states and other stakeholders, urging them to take it into account when developing AI policies and regulations. 

The European Union's Artificial Intelligence Act (AI Act) is set to be the world's first comprehensive AI law, aiming to establish rules for AI use in the EU and minimise potential risks. The draft legislation prioritises data quality, transparency, human oversight and accountability, addressing both ethical and implementation issues in AI technology. 

It requires a conformity assessment of high-risk AI systems before they can be placed on the market, and imposes transparency obligations on AI systems that interact with humans, detect emotions, categorise people on the basis of biometric data, or generate or manipulate content (deepfakes). The regulation also increases openness without unduly affecting intellectual property rights. 

AI regulation in the US is still in its early stages, with the "Blueprint for an AI Bill of Rights" serving as a non-binding guide. It calls for protection against algorithmic discrimination, the option to opt out of automated systems, and principles of transparency, public participation, fairness and safety. Because the blueprint is only guidance, companies are urged, rather than compelled, to explain their AI systems and automated decisions and to obtain consent before using consumers' personal data. 

China is set to submit a draft of its AI law to lawmakers this year, promoting AI safety and innovation in line with socialist values, social morality and public order. The regulation obliges service providers to help users scientifically understand and rationally apply generative AI content without harming others' reputations, rights or interests. 

Users can report offending content to the cyberspace administration or other applicable authorities. Existing regulations penalise providers who violate these measures: network information departments can issue warnings, order corrections, suspend or cancel generative AI services, and impose fines of between 10,000 and 100,000 yuan in cases not covered by other rules or regulations.

Thus, the global debate on regulating AI is becoming increasingly important as new technologies pose risks to businesses and society. Transparency and responsibility are essential in the fight against false information, especially where AI is involved. 

Algorithmic transparency in automated decision-making is crucial for addressing bias and accountability. However, regulating AI systems that can execute complex tasks without human oversight is difficult, both because AI itself is hard to define and because moral and legal responsibility is hard to assign. AI is already increasing surveillance, discrimination and misinformation, yet much of the public debate has focused on futuristic fears of it becoming sentient and turning against humanity. 

To address the situation, Bangladesh must adopt social and educational approaches, develop digital literacy programs, and collaborate with governmental entities and technology corporations to develop constructive strategies to counteract the growing danger posed by disinformation. 

To combat misinformation, it is essential to establish clear guidelines and frameworks for AI development and deployment that serve all parts of society and reduce public fear. AI professionals must be involved in shaping the regulatory framework, and international collaboration among governments must be strengthened to address the cross-border spread of fake news.

Challenges such as vague legal language, concerns about freedom of expression and lack of capacity and resources for enforcement must be addressed for effective implementation. 

By nurturing responsible AI development and promoting media literacy, Bangladesh can harness the power of technology to counter misinformation while safeguarding democratic values and social cohesion. International and cross-border challenges are also significant, as misinformation is often digital and cross-border, making enforcement difficult. 

To address these challenges, it is essential to develop comprehensive and ethically grounded frameworks, legislation and responsible practices about AI. Collaboration between governments, industry stakeholders and researchers is crucial in setting unambiguous principles and standards such as privacy, transparency, fairness and accountability.


Farjana Yesmin is an Assistant Professor at the Department of Law, University of Chittagong


Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions and views of The Business Standard
