Responsible AI: are governments and corporations giving up?

Dr Philip Inglesant, SGR, warns that the recent upsurge in enthusiasm for Artificial Intelligence (AI) downplays the increasing dangers which are arising due to the erosion of safeguards.

Article from Responsible Science journal, no.7; advance online publication: 13 February 2025
 

The current ‘AI boom’ threatens to run like a steamroller over responsible innovation. The announcement of the UK AI Opportunities Action Plan, which pushes aside societal concerns so that Britain can become an ‘AI maker’ rather than an ‘AI taker’, and proposals in the USA for a ‘Manhattan Project’ for Artificial General Intelligence, are only the latest manifestations of this trend.

The AI Action Summit, which took place on 10-11 February in Paris, illustrates the issues at stake. The final statement on inclusive and sustainable AI was endorsed by over 60 nations but, notably, not by the USA or the UK.

This article provides a brief introduction to the rapidly developing situation.
 

Key issues in AI

AI seems to be everywhere. It makes decisions that affect our lives in large and small ways, from whom we see as friends on social media, to the medical treatment we receive, to whether we are given a bank loan. One type of AI – Large Language Models – can give us answers to questions phrased in ordinary language, and even compose letters and poems for us.

However, there is a far bigger picture:

  • AI as a key enabler for innovation and future prosperity – but bringing major societal changes, not necessarily shared equitably;
  • AI carrying out tasks autonomously, without human intervention – removing human agency, even in life-and-death decisions; and
  • The quest for Artificial General Intelligence (AGI) – AI surpassing humans in all cognitive domains, perhaps beyond human control.

While there is some recognition of the need to address basic issues of AI safety, such as privacy, security, and potential misuse, the wider issues are ignored or denigrated as ‘anti-business’.
 

From ‘Regulated AI’ to ‘Responsible AI’

AI is hardly a new idea, but the current boom is enabled by extremely powerful computers with vastly increased processing speeds and the ability to handle huge volumes of data using innovative mathematical methods [1].

There is some acknowledgement of the risks, for example, in moves over the past few years in both the UK and USA to set up AI Safety Institutes, and the World Economic Forum AI Governance Alliance [2]. In his speech announcing the UK AI Opportunities Action Plan [3], Prime Minister Keir Starmer said, “We will test and understand AI before we regulate it… we will make sure this technology is safe”. However, in this view, regulation must be “proportionate and grounded in the science… because of fears of a small risk… Too often you miss the massive opportunity”.

This narrow view of regulation and safety ignores the broad societal implications, which require genuinely responsible AI development and deployment. The more urgent risks from AI are not that it will fail to work ‘safely’, but rather that it works as intended – with consequences that are unintended but not unforeseen.

In an article in MIT Technology Review [4], Nathan E Sanders and Bruce Schneier argue that “tech’s darling is artificial intelligence”, with the potential to change the world in many ways. However, they say, we have been here before. In 2011, social media was feted for its role in enabling the democratic uprisings known as the Arab Spring. Now social media is widely blamed for spreading misinformation, harming mental health, and perhaps throwing elections. “Let’s not make the same mistakes with AI that we made with social media”, they say.

Unfortunately, with vast amounts of money potentially at stake, and the consequent competition between governments to be the ‘number one’ country for AI, responsible AI is being pushed aside as easily as a steamroller flattens newly laid asphalt.
 

The AI steamroller

In contrast to Hollywood scenarios of an evil ‘Skynet-style’ superintelligence taking over the world, the clear and present danger is that so many aspects of our lives will soon be controlled by AI that it will no longer be possible for humans to have adequate oversight of the decisions being made.

The ‘AI steamroller’ is driven by governments and by very large, mainly US, corporations. Geopolitics is also an important factor. In response to what some are calling the New Cold War and the perceived threat of China overtaking the USA in key technologies, the US-China Economic and Security Review Commission recommended [5, 6] that Congress establish and fund a programme along the lines of the Manhattan Project of World War II, dedicated to “racing to and acquiring an Artificial General Intelligence capability” in order to maintain US leadership in AGI.

The original Manhattan Project developed the atomic bomb. Scientists were seriously worried that a nuclear explosion would ignite the atmosphere, until calculations showed that this was highly unlikely or impossible [7]. Similar control problems surround AI but, according to pioneer Geoffrey Hinton, in this case the danger is far more likely to materialise [8].

There is now an ‘AI Arms Race’ [9]. One of President Trump’s first actions on 21 January 2025 was the announcement of the ‘Stargate Project’, a new company with (eventually) $500 billion of investment from funders including SoftBank, OpenAI, Oracle, and Emirati investment firm MGX [10]. At the Paris Summit, President Macron announced a total of €109 billion of private-sector investment in French AI developments [11]. The UK has promised to follow, in keeping with its AI Opportunities Action Plan [12, 13]. Meanwhile, China has recently shaken US AI hegemony with its open-source DeepSeek AI model [14].

It is telling that the UK Action Plan was written by technology entrepreneur Matt Clifford. One of the plan’s core principles is to “be on the side of innovators: In every element of the Action Plan, the government should ask itself: does this benefit people and organisations trying to do new and ambitious things in the UK? If not, we will fail to meet our potential” [12].

This is a high-risk strategy. AI should be judged not on how much it benefits business and ‘innovators’ but on how much it benefits humanity. If there is any hope of stopping or slowing the AI and AGI steamroller, it must come from international cooperation or, as Shahab Hasan argues in Medium, from “governments, researchers, and the global community” working together [15]. Future of Life Institute president Max Tegmark is blunter: an AGI race “would be a suicide race”, since AGI by its nature cannot be controlled [16].
 

Big tech abandons its principles?

Early in 2024, the corporation OpenAI, which has a stated mission to develop “safe and beneficial” AGI, softened its prohibitions on the use of its models for weapons development or military applications; by October, it had announced that it would now be prepared to work on national security “in a way that stays true to our mission” [17]. OpenAI argues that this will help to keep AI leadership with “democratic countries… guided by values like freedom, fairness, and respect for human rights”. But a careful reading shows how far this departs from OpenAI’s mission, stated in its charter, to ensure that “artificial general intelligence… benefits all of humanity” [18].

Despite the fine words, OpenAI has been in partnership with Microsoft since 2019, including as provider of the Azure AI models which form a large share of Microsoft’s services to the Israeli military in the war on Gaza [19]. Microsoft is a major investor and has access to OpenAI intellectual property; despite recent changes to their partnership, the OpenAI programming interface remains exclusive to Microsoft’s Azure cloud platform [20].

Then, in December 2024, it was announced that OpenAI was joining military technology company Anduril in a strategic partnership to develop counter-unmanned aircraft (anti-drone) systems [21]. This, they claimed, was also a response to the “accelerating race between the United States and China to lead the world in advancing AI”. Work for the military is lucrative and, no doubt, politically astute – but “when your customer is the US military, tech companies do not get to decide how their products are used” [22].
 

Towards the future

Despite the AI steamroller and the rapidly changing world geopolitical situation, moves to unwind and abandon responsible AI are not going unchallenged. There are interventions by community organisations such as the Future of Life Institute [23] (although Elon Musk is, according to their website, still an FLI External Advisor), by academic leaders in AI including Yoshua Bengio, Stuart Russell, and Geoffrey Hinton [24], and government actions such as the EU AI Act [25] and the UK AI Safety Institute [26] (while the fate of its opposite number, the US AI Safety Institute, must now be in considerable doubt).

Although the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, SB 1047, was vetoed by governor Gavin Newsom [27], the reported reasons had more to do with concerns over the bill’s narrow targeting of only very large AI models, and the difficulty of regulating a technology that is still in its infancy, than with opposition to regulation as such. Most recently, in his keynote speech to the Paris AI Summit, US Vice-President JD Vance warned that “excessive regulation of the AI sector could kill a transformative sector just as it’s taking off” [28]. But the challenges of regulating an emerging technology are not exclusive to AI; this is an old problem, and it must not become an excuse to do nothing [29].

The Paris AI Summit concluded with the ‘Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet’ [30]. The refusal by the USA, along with the UK, to sign the statement is partly founded on resistance to regulation in any form, but also on the insistence that “to safeguard America’s advantage, the Trump administration will ensure that the most powerful AI systems are built in the US, with American-designed and manufactured chips” [28]. Conservative America has also long made clear its opposition to regulatory attempts to combat misinformation, which it views as censorship [31].

The opposing camps are, at least, becoming clearer [9]. On one side is the USA, which is aggressively against any form of AI regulation and promotes ‘America First’. On the other are the European Union, the African Union, the United Arab Emirates, African nations including South Africa, India (co-host of the Summit), and even China, which collectively declare their support for AI that is diverse, multi-stakeholder, human rights based, human-centric, ethical, safe, secure, and trustworthy.

This is a fast-moving area of concern, and this short introduction does not pretend to cover all the issues. For example, there has not been space here to discuss the enormous power requirements of AI, or explore some of the US AI initiatives in more depth. Further SGR articles will examine these developments.
 

Dr Philip Inglesant teaches and researches Responsible Innovation in areas including AI, quantum computing, and information technologies more broadly. He is an Advisor to SGR’s Board of Directors.
 

References

[1] Wikipedia (2025). History of Artificial Intelligence. https://en.wikipedia.org/wiki/History_of_artificial_intelligence

[2] World Economic Forum (2024). AI Governance Alliance. https://initiatives.weforum.org/ai-governance-alliance/home 

[3] Starmer K (2025). PM speech on AI Opportunities Action Plan. https://www.gov.uk/government/speeches/pm-speech-on-ai-opportunities-action-plan-13-january-2025

[4] Sanders N, Schneier B (2024). Let’s not make the same mistakes with AI that we made with social media. MIT Technology Review. 13 March. https://www.technologyreview.com/2024/03/13/1089729/lets-not-make-the-same-mistakes-with-ai-that-we-made-with-social-media/

[5] US-China Economic and Security Review Commission (2024). 2024 Report to Congress: Executive Summary and Recommendations. https://www.uscc.gov/sites/default/files/2024-11/2024_Executive_Summary.pdf 

[6] Snider S (2024). The New Cold War: US Urged to Form ‘Manhattan Project’ for AGI. InformationWeek. 21 November. https://www.informationweek.com/machine-learning-ai/the-new-cold-war-us-urged-to-form-manhattan-project-for-agi

[7] Konopinski E, Marvin C, Teller E (1946). Ignition of the atmosphere with nuclear bombs. Report LA-602. Los Alamos Laboratory.

[8] Brown S (2023). Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI. MIT Sloan School of Management. https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai

[9] Editorial Board (2025). The new AI arms race. Financial Times. 13 February. https://www.ft.com/content/8daa9dd3-3ced-47b2-ad42-df5eb50fd062

[10] OpenAI (2025). Announcing The Stargate Project. https://openai.com/index/announcing-the-stargate-project/ 

[11] Reuters (2025). Macron signals investments of 109 billion euros in French AI by private sector. Reuters. 9 February. https://www.reuters.com/technology/artificial-intelligence/france-invest-109-billion-euros-ai-macron-announces-2025-02-09/

[12] Clifford M (2025). AI Opportunities Action Plan. Department for Science, Innovation, and Technology. https://www.gov.uk/government/publications/ai-opportunities-action-plan?lang=en-gb

[13] Hornstein O (2025). ‘We’ll follow that quickly’: UK AI minister on Macron’s €109bn AI plan. UK Tech News. 13 February. https://www.uktech.news/ai/well-follow-that-quickly-uk-ai-minister-on-macrons-e109bn-ai-plan-20250213 

[14] DeepSeek (2025). DeepSeek: Into the unknown. https://www.deepseek.com/

[15] Hasan S (2024). The US government’s AGI Project: The AGI Manhattan Project. Medium. 25 November. https://medium.com/@ShahabH/the-u-s-governments-agi-project-the-agi-manhattan-project-932686da932c

[16] Tegmark M (2024). On AGI Manhattan Project. https://futureoflife.org/statement/agi-manhattan-project-max-tegmark/

[17] OpenAI (2024). OpenAI’s approach to AI and national security. 24 October. https://openai.com/global-affairs/openais-approach-to-ai-and-national-security/

[18] OpenAI (undated). OpenAI Charter. https://openai.com/charter/

[19] Grim R, Ahmed W (2025). The Israeli Military Is One of Microsoft’s Top AI Customers, Leaked Documents Reveal. Drop Site News. 23 January. https://www.dropsitenews.com/p/microsoft-azure-israel-top-customer-ai-cloud

[20] Microsoft Official Blog (2025). Microsoft and OpenAI evolve partnership to drive the next phase of AI. 21 January. https://blogs.microsoft.com/blog/2025/01/21/microsoft-and-openai-evolve-partnership-to-drive-the-next-phase-of-ai/

[21] Anduril Industries (2024). Anduril Partners with OpenAI to Advance US Artificial Intelligence Leadership and Protect US and Allied Forces. 4 December. https://www.anduril.com/article/anduril-partners-with-openai-to-advance-u-s-artificial-intelligence-leadership-and-protect-u-s/

[22] O’Donnell J (2024). OpenAI’s new defense contract completes its military pivot. MIT Technology Review. 4 December. https://www.technologyreview.com/2024/12/04/1107897/openais-new-defense-contract-completes-its-military-pivot/

[23] Future of Life Institute (undated). Steering transformative technology towards benefitting life and away from extreme large-scale risk. https://futureoflife.org/

[24] Bengio Y et al (2024). Managing extreme AI risks amid rapid progress. Science, vol.384, no.6698.

[25] European Parliament (2023). EU AI Act: first regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[26] AI Safety Institute (undated). Rigorous AI research to enable advanced AI governance. Department for Science, Innovation, and Technology. https://www.aisi.gov.uk/

[27] Newsom G (2024). Veto of California Senate Bill 1047. Office of the Governor, California. https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf

[28] Vance J (2025). Transcript of JD Vance remarks at Paris AI Summit. https://gist.github.com/lmmx/b373b9819318d014adfdc32182ab17ff

[29] Collingridge D (1982). The social control of technology. Palgrave Macmillan.

[30] President of France (2025). Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet. Élysée Palace, Paris. https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet

[31] Trump D (2025). Restoring Freedom of Speech and Ending Federal Censorship. The White House, Washington DC. https://www.whitehouse.gov/presidential-actions/2025/01/restoring-freedom-of-speech-and-ending-federal-censorship/ 


Image credit: Gerd Altmann via Pixabay