Finance ministers, central bankers and financiers have expressed serious concerns about a powerful new AI model that they fear could undermine the safety of the financial system.
The development of the Mythos model by Anthropic has led to crisis meetings after the discovery of vulnerabilities in several major operating systems.
Experts say it has an unprecedented ability to identify and exploit cyber-security vulnerabilities — though others caution that more testing is needed to better understand its capabilities.
Canadian Finance Minister François-Philippe Champagne told the BBC that Mythos was discussed in detail at an International Monetary Fund (IMF) meeting in Washington DC this week.
“Certainly serious enough to warrant the attention of all finance ministers,” he said.
“The difference is that the Strait of Hormuz – we know where it is and we know how big it is … the issue we’re facing with Anthropic is that it’s an unknown unknown.”
“It needs a lot of attention so that we have safeguards in place, and we have procedures in place to ensure the resilience of our financial system,” he added.
Mythos is one of the latest models in Anthropic’s broader AI system, Claude, which rivals OpenAI’s ChatGPT and Google’s Gemini.
Anthropic revealed the model earlier this month, when developers responsible for testing AI models for so-called “misaligned” actions – behaviour that goes against human values and goals – said it was “remarkably capable of computer security tasks”.
Citing concerns that it could surface old software bugs or find ways to easily exploit system vulnerabilities, Anthropic has not released the model.
Instead it made Mythos available to tech giants such as Amazon Web Services, CrowdStrike, Microsoft and Nvidia as part of an initiative called Project Glasswing — which it calls “an effort to secure the world’s most critical software.”
Mythos is part of Anthropic’s Claude system, the company’s family of AI models and its AI assistant of the same name. [NurPhoto via Getty Images]
On Thursday, Anthropic released a new version of Claude Opus, an existing model, that will allow Mythos’ cyber capabilities to be tested on less powerful systems.
The concerns raised about Mythos may exceed the chatter around previous AI models, but some cyber-security experts have questioned how justified they are – in particular because the model has not been tested by the wider industry to see how capable it really is.
The UK’s AI Security Institute has been given access to a preview version of it, and has just published an independent report on the model’s cyber-security prowess.
Its researchers noted that it was a powerful tool capable of finding many security holes in unprotected environments, but suggested that Mythos was not dramatically better than its predecessor, Claude Opus 4.
“Our testing shows that Mythos Preview can exploit systems with weak security postures, and it is likely that more models with these capabilities will be developed,” the report authors said.
It’s also not the first time an AI developer has claimed the capabilities of its models mean they shouldn’t be released – a tactic some critics argue amounts to building hype.
In February 2019, OpenAI cited similar fears when it chose to stagger the release of GPT-2, an earlier version of the models that now power its flagship tool, ChatGPT.
Top banks are being given advance access to the model to test their systems.
Barclays chief executive CS Venkatakrishnan told the BBC: “It is serious enough that people should be concerned.
“We need to understand it better, and we need to understand the vulnerabilities that are being exposed and fix them quickly.”
He added that “it’s going to be a new world” – referring to a highly interconnected financial system with both opportunities and weaknesses.
Developer Anthropic says the model has already exposed several security vulnerabilities in critical operating systems, financial systems and web browsers, and governments and banks have been granted access ahead of its public release to help secure their own systems.
Bank of England Governor Andrew Bailey told the BBC that the development had to be taken very seriously: “We now have to look very carefully at what this latest AI development means for the risk of cybercrime.”
He added: “The result may be that there is a development of AI, of modelling, which makes it easier to detect existing vulnerabilities in core IT systems, and then clearly cybercriminals – bad actors – can seek to exploit them.”
The US Treasury confirmed that Anthropic raised the issue with it before any public release of Mythos, encouraging major banks to test their systems.
Financial industry sources indicated that another prominent US AI company may soon release a similarly powerful model, but without the same security precautions.
James Wise, a partner at Balderton Capital, chairs the Sovereign AI unit, a venture capital fund backed by £500m of government funding that will invest in British AI companies.
He said that Mythos is “the first of many powerful models” that can expose such weaknesses in systems.
His unit is “investing in British AI companies that are addressing that – companies working on AI safety and security”, he told the BBC’s Today programme.
“We hope that the models that expose the flaws are also the models that will fix them.”
[BBC]