The UK’s competition watchdog said there was a “real risk” that developments in the artificial intelligence industry could end up with just a few companies dominating the market and consumers being bombarded with harmful information.
In a report released on September 18, the Competition and Markets Authority examined the foundation models underpinning artificial intelligence and concluded that while the technology has the potential to change the way people live and work, those changes could happen quickly and have a significant impact on competition and consumers.
The competition watchdog has warned that in the short term, consumers could face a surge in disinformation or AI fraud if competition is weak or developers fail to comply with consumer protection laws.
In the long term, a small number of companies could end up gaining or consolidating market power, which could lead to them failing to offer the best products or services, or charging high prices, the report said.
The CMA stressed that it is important these outcomes do not occur. Chief executive Sarah Cardell added:
“There remains a real risk that the use of AI develops in a way that undermines consumer trust, or is dominated by a small number of players who use their market power to prevent the whole economy from benefiting fully.”
To address the issue, the regulator has proposed several “guiding principles” aimed at ensuring consumer protection and healthy competition while allowing the economy to benefit fully from the technology.
The principles appear to focus on increasing access and transparency, particularly when it comes to preventing firms from using AI models to gain an unfair advantage.

The UK competition regulator said it will publish an update on the principles and their adoption in early 2024, along with its view of how the AI ecosystem is developing. It added that it has been engaging with AI developers and businesses deploying the technology.
This is not the first time the UK has sounded the alarm over the rapid development of artificial intelligence. In June, Matt Clifford, an adviser to the Prime Minister’s AI task force, said the technology would need to be regulated and controlled within the next two years to curb significant existential risks.
Also in June, Japan’s privacy regulator issued a warning to ChatGPT’s parent company, OpenAI, over its data collection methods.