Safe and Trusted AI: Reflections from the Impact AI Summit Panel

At the Impact AI Summit in Delhi, much of the energy centred on innovation, scale and the transformative potential of artificial intelligence. But in our panel on Safe and Trusted AI, AI for Economic Growth and Social Good, and Inclusion for Social Empowerment, the focus shifted to a more foundational question: how do we ensure that AI innovation is aligned with human rights, accountability and the long-term public interest?

My central message was simple. Responsible AI is not a constraint on innovation. It is a condition for sustainable innovation.

The theme of this year’s Summit, “People, Planet and Progress”, framed the discussion well. AI sits at the intersection of all three:

  • People: If AI systems entrench bias, enable surveillance without safeguards, or undermine job quality through opaque algorithmic management, they directly impact human rights and dignity. Responsible AI requires companies to conduct meaningful human rights due diligence, assess risks before deployment, and ensure remedy where harms occur.

  • Planet: AI is increasingly embedded in climate and energy systems, supply chain optimisation and financial decision making. If it accelerates misinformation, weakens environmental governance, or is deployed without transparency in high-impact sectors, it can undermine planetary outcomes. Governance must therefore integrate climate, nature and social risks rather than treat them in silos.

  • Progress: Economic growth driven by AI will not be sustainable if it lacks accountability. Investors, regulators and boards must scrutinise AI governance structures, oversight mechanisms and disclosure practices. Without transparency and clear lines of responsibility, trust erodes, and so does long-term value.

I also emphasised that investor engagement has a critical role to play. Capital allocation can incentivise responsible innovation. Investors should ask harder questions about AI governance, human rights impact assessments, board oversight and implementation, not just about high-level principles.

Finally, multi-stakeholder collaboration is essential. Governments, companies, civil society and investors each see different parts of the risk landscape. Bringing these perspectives together is not easy, but it is necessary if AI is to accelerate social good rather than deepen inequality.

The conversation at the Summit reinforced a growing consensus: the real race in AI is not only about technological capability. It is about building systems that are trustworthy, rights-respecting and accountable by design.

Watch the recording here: 
