February 24, 2025

AI Unveiled: A Comprehensive Guide to Navigating Current AI Policy Debates

Artificial Intelligence is the buzz

Today, AI (Artificial Intelligence) is the buzzword around the world. A major summit, dubbed the Paris AI Action Summit, took place in the French capital on February 10–11. The Summit brought together a wide spectrum of people, including heads of state and government, tech titans, researchers, scientists, private sector companies, and civil society organisations from about 100 countries around the world. Issues discussed included governance, public interest, ethics, safety, and inclusion in AI. In this article, our analysts break down the debate on key issues on the AI global governance agenda today.

Beyond daily cursory encounters with the word, Artificial Intelligence (AI) perhaps does not yet mean much to the average Ugandan. On the global scene – especially in the developed world – however, AI is currently the buzz. Not only is AI the next frontier for global leadership and dominance, it is also a technological revolution unfolding right in front of our eyes. The world is on the cusp of the end of one era (reliance on human intelligence) and the beginning of another (reliance on machine intelligence).

What is Artificial Intelligence?

Intelligence has hitherto been associated only with human beings (natural intelligence). Artificial Intelligence is the simulation of human intelligence by computer systems, machines and technologies, enabling machines to perform tasks until now accomplished only by human beings. These tasks include learning, reasoning, perception and self-correction.

Learning is the ability to improve one’s understanding and performance using information or experience gained from exposure to similar conditions or circumstances; machines learn through algorithms that learn from data. Reasoning is the ability to use established rules to reach conclusions, solve problems and make decisions, following processes such as logic and inference. Perception is the ability to receive, organise and interpret sensory data, such as images, to derive meaning. Self-correction is the ability to compare, contrast, adjust and adapt based on newly available information.
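For readers curious about what “learning from data” looks like in practice, the following is a minimal, purely illustrative sketch (the data and numbers are invented for this example, not drawn from any real system). A machine starts with a guess, measures its error against the data, and self-corrects until its predictions improve:

```python
# Toy data: hours studied -> test score (roughly score = 10 * hours).
data = [(1, 10), (2, 20), (3, 30), (4, 40)]

weight = 0.0          # the machine's initial guess at the relationship
learning_rate = 0.01  # how aggressively it self-corrects

for _ in range(1000):                 # repeated exposure to the data
    for hours, score in data:
        prediction = weight * hours
        error = prediction - score            # perceive the mistake
        weight -= learning_rate * error * hours  # self-correct

print(round(weight, 2))  # the machine has "learned" the pattern: ~10.0
```

No rule saying “multiply by 10” was ever written by a human; the machine inferred it from examples, which is the essence of machine learning.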

Categories of AI

AI is generally divided into three categories: narrow, general and superintelligent. Narrow AI is designed to perform narrow, limited tasks, such as voice assistants, game playing or recommendation systems. General AI can understand, learn, apply intelligence and undertake any task that a human being can; the potential in this area is just beginning to emerge. Superintelligent AI is envisioned to surpass human intelligence in what has ordinarily been the human forte, such as creativity, social skills and general wisdom. This is the area speculated to one day overtake human abilities and subsume the human person, and it is the focus of much of the debate about ethics and the future of humanity.

What is the big deal about AI?

AI is of great interest to the world because it is a revolution like no other since the Industrial Revolution in the 18th century. It is changing the paradigm of the world as we have known it. It is fundamentally changing how people and their communities live, work, relate and trade – literally every facet of human life is on the cusp of a revolution. AI will affect people economically, socially, politically and culturally. And it is a double-edged sword: it comes with huge opportunities for empowerment, progress and prosperity, but also with huge risks and threats if not well managed.

What are the key contentious issues regarding AI?

  • Global Governance

Although there is a rapid explosion in AI innovations and technologies, AI governance is currently still piecemeal. Some countries and regional bodies (e.g. the European Union) have developed their own governance frameworks. There are also parallel, uncoordinated initiatives that provide some form of framework, such as the G7 Hiroshima AI Process, the Bletchley Park Summit in 2023, and the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, signed by private companies alone in Munich in February 2024.

The core questions of AI development are, however, multi-national and cross-border in nature. These include questions about safety, ethics, environmental impact, energy, equity and access, competition, cultural diversity, data protection, interoperability of standards, military use, integrity of information, open technical standards and actors’ sovereignty. They call for well-developed, structured and shared international frameworks and standards. Efforts, including those of the Paris AI Action Summit, have been directed at bringing all stakeholders together to start discussions and coalesce thinking towards a collective, coordinated, inclusive global AI governance framework – one that addresses the challenges and answers the restive national and global questions around AI. Potentially, a race to develop AI technology could lead to a lack of global cooperation, with nations seeking AI superiority, which might result in an AI arms race.

There is, however, contention over how to harmonise different national policies, especially amid varying geopolitical interests. This includes discussions on how multilateral cooperation can be achieved, particularly in addressing AI’s environmental impact.

  • Ethical AI and Public Interest

There is an argument that the current trajectory of AI development is championed mainly by well-resourced private sector companies and powerful billionaires with specific commercial interests. The argument is that if this trajectory is maintained, three key problems could arise and will be propagated by AI. Firstly, it will perpetuate current inequalities around the world, especially between those who develop and control AI and those who merely use it. Secondly, AI development is currently concentrated in the hands of a small circle of private actors. This jeopardises both the participation of a diversity of actors and the sovereignty of countries that lack leverage in the technology. At this rate, rather than close the gap, AI will continue to widen the divide between poor and rich countries. Thirdly, if left on its current trajectory, the technology’s potential to close gaps in global development will be squandered, because fragmented, non-inclusive interests will prevail over the public interest.

There are several potential ethical issues related to AI development. These include bias and unfairness: AI systems learn from data, and if the data contains biases (racial, gender, etc.), these biases can be perpetuated or even amplified by AI decisions, leading to unfair outcomes in areas like hiring, lending or law enforcement. There is also the question of moral responsibility: namely, who is responsible for decisions made by AI systems, especially where AI leads to harm or negative outcomes. Who polices privacy? AI technologies, particularly those involving machine learning, often require vast amounts of personal data, raising significant privacy issues around data collection, usage and storage. And the development of autonomous weapon systems could lead to ethical dilemmas about the nature of warfare, decision-making in combat, and the potential for misuse or escalation.
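To make the bias point concrete, here is a deliberately simplified, hypothetical illustration (the groups, records and numbers are invented, and a real system would be far more complex). A naive model trained on skewed historical hiring records simply reproduces the skew:

```python
# Hypothetical historical hiring records: (group, was_hired).
# The history itself is biased: group_a was hired far more often.
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def learned_hire_rate(group):
    """A naive 'model': learn each group's historical hire rate and
    use it as the likelihood of recommending a new candidate."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

print(learned_hire_rate("group_a"))  # 0.75 - the bias in the data...
print(learned_hire_rate("group_b"))  # 0.25 - ...is reproduced by the model
```

Nothing in the code is malicious; the unfairness comes entirely from the data, which is why debates about AI ethics focus so heavily on what systems are trained on.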

These and other concerns are driving the discussion around ethics and the public interest (promoting economic, social and environmental progress), sustainability, inclusivity and the equitable distribution of AI benefits.

  • AI Safety and Misuse

Much as AI brings huge potential for development and progress, it also comes with huge risks and safety questions. In cybersecurity, AI can potentially be deployed for harmful purposes, such as executing highly sophisticated cyber-attacks. Yet AI systems themselves can be vulnerable to attack, for example where inputs are manipulated to cause incorrect outputs. Another risk is the malicious misuse of AI in deepfakes, surveillance, or the creation of autonomous weapons.

There is significant focus on risks such as misinformation, bias, and the potential for AI-controlled weapons. The urgency of implementing AI safety measures is heightened by reports indicating that AI models are approaching professional-level cybersecurity skills. Discussions also touch on the potential for AI misuse, with some advocating open-source AI for transparency – so that it is available and visible to all – while others express concerns about the risks associated with open-source policies, an objection voiced most notably by the United States of America.

  • Regulation vs. Innovation

A major point of contention is the balance between regulation and innovation. It became clear at the just-concluded Paris AI Action Summit that Europe, particularly through the EU AI Act, is pushing for stringent regulation and oversight focused on ethical AI deployment, whereas there is a contrasting push for less regulation from the U.S. The Trump administration in particular seems to favour speed over stringent rules to maintain a competitive edge against China. At the Summit, both the US and UK declined to sign the communiqué, citing national interests but also, critically, making the case against regulation that stifles innovation.

  • Open vs. Closed AI Systems

The debate on whether AI development should be open-source or closed is intense. China’s approach, with models like DeepSeek, emphasises open source for transparency and broader access, while others warn of potential misuse if AI technologies are too openly accessible.

  • AI’s Impact on Labor

This is one of the most debated questions about AI. The development of AI is expected to fundamentally rewrite work: jobs, working methods and labour markets. The Paris Summit sought to foster discourse around containing AI-associated risks and developing working tools that enhance the productivity, safety and wellbeing of people at work. The Summit’s Future of Work track aims to promote socially responsible use of artificial intelligence through sustained social dialogue.

Discussions on the future of work are prominent, with concerns about how AI will transform job markets and the nature of work, and about the need for policies to manage these changes. There is a push for international dialogue on how AI can support productivity and wellbeing without displacing workers. The counterpoint is that machines bring better efficiency and more productivity and reduce risks to human safety; hence, machines should take over most of the functions that humans, with all their limitations, currently perform.

  • Geopolitical Tensions and AI Leadership

There are broader geopolitical rivalries at play, particularly between the U.S., China and Europe. Competition is emerging over who will lead in AI technology and how that leadership should be managed to ensure global wellbeing, safety and ethical standards. The EU believes it is the current leader, given its forward-looking regulatory frameworks. During the Summit, however, US Vice President JD Vance emphasised his country’s current administration’s intention to “continue to lead” on AI, while China is making big strides in the development of AI technologies.

These issues underline the Paris Summit’s role as a critical point of discussion, where the direction of AI policy and development could be significantly influenced by the outcomes and agreements reached. There is also contention over the actual commitments and actions proposed in the summit’s declarations, with some dissatisfaction regarding their alignment with previous international consensus on AI risks – underscored by the refusal of the US and UK, citing “national interest,” to sign the closing communiqué.

The Infrastructure Magazine prides itself on providing depth, context, insight and perspective on industry issues. Is there an issue you would like given depth, insight, context and perspective? Contact our partnerships team: [email protected] or WhatsApp: +256 752 665 775