Crafting a human-centred AI framework for the social sector
The social sector can emerge as a leader in ethical AI innovation, set a valuable precedent for other industries, and propel AI forward in a manner that aligns with the principles of social justice and the greater good
In an era marked by the rapid spread of artificial intelligence (AI), the social sector plays a crucial role in shaping the future of this transformative technology. Leaders in the social sector have a unique opportunity to bring a critical social impact perspective to AI development and application, ensuring that AI is used for the betterment of society.
However, to effectively fulfil this role, leaders need the knowledge and tools to leverage AI to enhance their organisations and influence the broader technological landscape. The social sector—including nonprofit organisations, governmental bodies, and community groups—must adopt a human-centred AI strategy that prioritises individual and community needs, values, and rights.
This is a critical moment, calling for a comprehensive educational initiative tailored to the social sector. Such an initiative should combine cutting-edge research with practical applications of human-centred AI and social systems design frameworks. By fostering a close-knit community of leaders, this initiative can facilitate the sharing of best practices, collaborative problem-solving, and the creation of innovative solutions.
Such an initiative should also demystify AI by improving professionals' understanding of AI tools, the data underlying them, and their implications for organisational processes. In doing so, leaders in the social sector can strengthen their institutions and contribute to developing an ethically sound and socially responsive AI ecosystem.
The urgency of adopting human-centred AI in the social sector
AI has become increasingly prevalent in the social sector, offering powerful tools for data analysis, resource allocation, and program evaluation. However, implementing AI technologies without a human-centred approach poses significant risks, such as exacerbating existing inequalities, infringing on privacy, and marginalising the very populations these technologies are meant to benefit.
A human-centred AI strategy is therefore essential: one that prioritises the development of systems that are not only technically robust but also ethically sound and socially responsible. This approach requires a deliberate focus on inclusivity, transparency, and accountability, ensuring that AI technologies are accessible, understandable, and fair. Within the social sector, AI tools must be designed and implemented in ways that uphold the dignity, autonomy, and rights of all individuals, with particular attention to marginalised and vulnerable communities.
Core principles of a human-centred AI strategy
A human-centred AI strategy is grounded in several core principles that ensure its alignment with ethical and social goals.
Firstly, inclusivity and equity demand that AI systems be designed with the input and needs of diverse stakeholders, particularly under-represented communities such as low-income populations, racial and ethnic minorities, and individuals with disabilities. Engaging these groups ensures that AI mitigates, rather than perpetuates, systemic inequalities.
Secondly, transparency and explainability are equally critical. AI systems must operate transparently and explain their decisions clearly, fostering accountability and trust among key stakeholders, including beneficiaries, donors, and the public.
Thirdly, ethical governance is essential to safeguarding individuals and communities by implementing strong data protection measures, ensuring fairness in algorithmic decision-making, and actively addressing potential biases in AI. Finally, continuous evaluation and adaptation are necessary. The dynamic nature of the social sector requires AI tools to evolve with changing community needs and intervention effectiveness, allowing organisations to assess AI's impact and refine systems to improve outcomes.
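To make the fairness principle above concrete, consider how an organisation might audit an AI system's decisions. A simple, widely used check is the demographic parity gap: the difference in favourable-outcome rates between groups. The sketch below is purely illustrative; the group names, decision data, and tolerance threshold are hypothetical, not drawn from any real programme or standard.

```python
# Minimal sketch of a fairness audit: demographic parity difference.
# All data and the 0.2 tolerance below are hypothetical, for illustration.

def demographic_parity_difference(outcomes):
    """Difference between the highest and lowest favourable-outcome
    rates across groups.

    outcomes maps each group name to a list of binary decisions
    (1 = favourable outcome, 0 = unfavourable).
    """
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical programme-eligibility decisions, grouped by community.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 favourable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 favourable
}

gap = demographic_parity_difference(decisions)
print(f"Favourable-outcome gap between groups: {gap:.3f}")
if gap > 0.2:  # illustrative tolerance, not an established benchmark
    print("Gap exceeds tolerance - flag the system for bias review.")
```

A routine check of this kind, run as part of continuous evaluation, is one way an organisation can turn the commitment to fairness into an auditable practice rather than a stated intention.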
The responsibility of social sector leaders
Social sector leaders play a pivotal role in shaping the development of AI. They must actively advocate for the integration of human-centred principles into AI systems and ensure their organisations have the knowledge and tools to implement these principles effectively. This requires investing in capacity building and training, collaborating with AI experts, and developing best practices for ethical AI use.
These leaders should also support policies that promote the ethical deployment of AI and advocate for regulatory frameworks that protect individuals and communities. By taking a proactive and strategic approach, social sector leaders can influence AI development to align with their core values of social justice, equity, and human dignity.
In addition to advocating for ethical AI, social sector leaders have other vital responsibilities. They must foster inclusive dialogue and engage stakeholders to ensure that marginalised communities are represented in the AI development process.
This helps ensure that AI technologies address the specific needs of vulnerable populations and do not exacerbate existing inequalities. Leaders must also prioritise risk management and ethical oversight by establishing frameworks to identify and mitigate potential harms, such as bias, discrimination, and data privacy concerns. Implementing ethical audit mechanisms and robust data governance policies can further promote responsible AI use.
Another responsibility is advancing AI literacy within their organisations and communities. Leaders should create opportunities for continuous learning about AI, including its capabilities, limitations, and social impacts, so that staff and stakeholders are well-informed and prepared to engage critically with AI systems.
Finally, social sector leaders should collaborate and share knowledge with other sectors, such as academia, industry, and government, through multi-sector partnerships. This collaboration is essential for shaping a responsible AI ecosystem that aligns with broader social impact goals.
AI governance through a policy perspective
To address the use of AI in the social sector, policymakers need to take a comprehensive approach that prioritises ethical governance, inclusivity, and accountability. One crucial aspect is the development of regulatory frameworks that require transparency in AI decision-making processes and ensure that AI systems are explainable and that all stakeholders, including marginalised communities, can understand their impacts.
Additionally, policies should promote fairness in algorithmic design to prevent bias and discrimination while protecting individuals' rights to privacy and data security.
Government and social sector leaders should collaborate to establish standards and guidelines for the ethical use of AI, focussing on promoting social equity and the public good. It is also essential to incorporate capacity-building initiatives into policy to encourage ongoing learning and promote AI literacy across organisations.
Finally, policies should incentivise partnerships and knowledge sharing between different sectors to ensure that AI technologies are developed and applied in ways that enhance organisational effectiveness and align with the goals of social justice and human dignity.
The path forward
As AI continues to advance, the social sector is presented with both opportunities and challenges. To fully capitalise on the potential of AI while safeguarding the rights and well-being of communities, it is crucial to adopt a human-centred AI strategy.
This approach requires a commitment to inclusivity, transparency, ethical governance, and continuous learning. By embracing these guiding principles, the social sector can ensure that AI catalyses positive social change, promoting fairness and sustainability while upholding fundamental values.
However, implementing a human-centred AI approach is a complex undertaking that requires dedicated efforts from all stakeholders.
Nevertheless, by taking on this responsibility, the social sector can emerge as a leader in ethical AI innovation, set a valuable precedent for other industries, and propel AI forward in a manner that aligns with the principles of social justice and the greater good.
Dr Munshi Muhammad Abdul Kader Jilani is an Assistant Professor at the Bangladesh Institute of Governance and Management (BIGM), affiliated with the University of Dhaka. Email: [email protected]
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions and views of The Business Standard