Nexpressdaily.com

World

There is a global consensus for AI safety despite Paris Summit backlash, new report finds

Nexpressdaily
Last updated: May 8, 2025 7:30 am

The last global gathering on artificial intelligence (AI) at the Paris AI Action Summit in February saw countries divided, notably after the US and UK refused to sign a joint declaration for AI that is “open, inclusive, transparent, ethical, safe, secure, and trustworthy”.

AI experts at the time criticised the declaration for not going far enough, calling it “devoid of any meaning”; the countries that declined to sign cited this, rather than opposition to AI safety, as their reason for not joining the pact.

The next global AI summit will be held in India next year, but rather than wait until then, Singapore’s government held a conference called the International Scientific Exchange on AI Safety on April 26.

“Paris [AI Summit] left a misguided impression that people don’t agree about AI safety,” said Max Tegmark, MIT professor and contributor to the Singapore report.

“The Singapore government was clever to say yes, there is an agreement,” he told Euronews Next.

Representatives from leading AI companies, such as OpenAI, Meta, Google DeepMind, and Anthropic, as well as leaders from 11 countries, including the US, China, and the EU, attended.  

The result of the conference was published in a paper released on Thursday called ‘The Singapore Consensus on Global AI Safety Research Priorities’. 

The document lists research proposals to ensure that AI does not become dangerous to humanity. 

It identifies three aspects of promoting safe AI: assessing AI systems, developing them to be trustworthy, and controlling them. The systems in question include large language models (LLMs), multimodal models that can work with multiple types of data, often including text, images, and video, and, lastly, AI agents.

Assessing AI

Under assessment, the document's main research priorities are the development of risk thresholds to determine when intervention is needed, techniques for studying current impacts and forecasting future implications, and methods for rigorous testing and evaluation of AI systems.

Some of the key areas of research listed include improving the validity and precision of AI model assessments and finding methods for testing dangerous behaviours, which include scenarios where AI operates outside human control. 

Developing trustworthy, secure, and reliable AI

The paper calls for a definition of boundaries between acceptable and unacceptable behaviours.

It also says that AI systems should be built on truthful and honest systems and datasets.

And once built, these AI systems should be checked to ensure they meet agreed safety standards, such as tests against jailbreaking.

Control

The final area the paper advocates for is the control and societal resilience of AI systems.

This includes monitoring, kill switches, and non-agentic AI serving as guardrails for agentic systems. It also calls for human-centric oversight frameworks. 

As for societal resilience, the paper said that infrastructure against AI-enabled disruptions should be strengthened, and it argued that coordination mechanisms for incident responses should be developed. 

‘Not in their interest’

The release of the report comes as the geopolitical race for AI intensifies and AI companies rush out their latest models to beat the competition.

However, Xue Lan, Dean of Tsinghua University, who attended the conference, said: “In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future”.

Tegmark added that there is a consensus for AI safety between governments and tech firms, as it is in everyone’s interest.

“OpenAI, Anthropic, and all these companies sent people to the Singapore conference; they want to share their safety concerns, and they don’t have to share their secret sauce,” he said.

“Rival governments also don’t want nuclear blow-ups in opposing countries; it’s not in their interest,” he added.

Tegmark hopes that before the next AI summit in India, governments will treat AI like any other powerful tech industry, such as biotech, whereby there are safety standards in each country and new drugs are required to pass certain trials.

“I’m feeling much more optimistic about the next summit now than after Paris,” Tegmark said.
