A Br(AI)ghter Future for the EU? Shaping EU Strategy in the Era of General Artificial Intelligence
- In 2025, the European Union adopted an AI development strategy (AI Continent Action Plan, InvestAI) which, despite emphasising “European values”, largely replicates the American model. The strategy focuses on building computing infrastructure and on the pursuit of artificial general intelligence (AGI), a form of AI characterised by the ability to perform a variety of tasks in the real world with efficiency and effectiveness comparable to human intelligence.
- The example of the United States shows that it is primarily investment expenditure that influences the shape of the technology being developed, and in the EU, this investment is limited. Between 2013 and 2023, American companies attracted more than six times as much private capital as companies in the EU (approximately $76 billion in the EU compared to $486 billion in the US).
- A realistic alternative for the EU is adopting a strategy focused on AI applications in specific sectors of the economy, with measurable performance indicators within specific time frames, as well as the development of specialised AI models based on high-quality data sets.
Artificial General Intelligence
Artificial general intelligence (AGI), which currently exists only in theory, is an advanced form of artificial intelligence distinguished by its ability to perform a variety of tasks in the real world with efficiency and effectiveness comparable to human intelligence. This would also distinguish it from existing AI models, which are still based on patterns learned from training data, without the ability to reason independently or flexibly adapt to completely new, unforeseen situations in a manner comparable to human intellect. Some in the AI industry believe that increasing the amount of data with which existing artificial intelligence models are trained would lead to a breakthrough in AI capabilities and thus to the emergence of AGI.
However, it is not certain whether this method is appropriate, or whether AGI can be achieved at all. A 2025 survey of 475 AI researchers conducted by the Association for the Advancement of Artificial Intelligence (AAAI) found that the majority of respondents (76%) believe that “scaling current approaches to AI” is “unlikely” or “very unlikely” to achieve AGI. Scaling rests on the assumption that continuously enlarging models and data resources automatically leads to improved capabilities. In practice, however, simply increasing the volume of data on which language models are trained does not produce a proportional increase in their capabilities. Nor does it solve fundamental problems with AI, such as “hallucinations” (the phenomenon in which AI models generate false or misleading information and present it as fact), logical errors, or the inability to perform anything resembling genuine reasoning. Furthermore, extensive use of low-quality data – often without verification of its origin or reliability – creates the risk of training models that perpetuate and further replicate errors and misinformation. These findings suggest that current machine learning paradigms may be insufficient to achieve artificial general intelligence.
Despite these doubts, the strategies adopted by the US and the EU show a desire to develop existing methods of training artificial intelligence by increasing the amount of training data and providing greater computing power (including the construction of new data centres and AI factories). However, while scaling improves the performance of AI models, it does not necessarily lead to results resembling understanding. Nor will it necessarily lead to a technological breakthrough.
The US approach to AI development and regulation
The Biden administration took extensive measures to promote the development of AI. It invested more than $50 billion of public funds in the domestic production of the semiconductors needed to build systems for training artificial intelligence models, and restricted the export of advanced chips to countries considered particularly hostile or competitive in this industry (including China, Russia and Venezuela). At the same time, Biden’s policy vision for AI also encompassed the ethical development of the technology. Key in this regard was Executive Order 14110, which introduced, among other things, a requirement to label AI-generated content so that users would know when they were dealing with material created by artificial intelligence. The order also introduced consumer and privacy protections against potential harms resulting from the use of artificial intelligence, as well as safeguards for employees.
Donald Trump’s return to the US presidency brought a clear shift in American policy towards AI. The current administration treats the development of artificial intelligence as a race – whoever is the first to develop AI models that reach the level of AGI will gain a lasting global political and economic advantage.
The Trump administration sees overly far-reaching AI regulations as a threat to innovation and has therefore repealed some of the measures introduced by the previous administration, including Executive Order 14110, which was aimed at ensuring the safe development of the technology. This has become one of the main points of tension with the EU, which has introduced its own regulations in the field of AI. The US is demanding that the EU regulations not be enforced against American corporations. The culmination of the US’s efforts was the adoption of America’s AI Action Plan in July 2025. This plan is based on three pillars: accelerating innovation by removing regulations, building infrastructure, and technological diplomacy, i.e. a programme to export complete AI packages to allies. The document envisages the extensive construction of data centres and AI factories, as well as an entire support infrastructure system (data centre cooling systems, energy facilities, chip production). To accelerate these investments, numerous administrative barriers, including environmental ones, have been removed. In practice, this means that data centres can now also be located in areas that were previously protected for their valuable natural assets.
The US also aims to maintain its position as a leading exporter of AI solutions by offering not just individual chips but entire infrastructure packages, which prevents buyers from building their own independent systems. Exports are to be subject to individual negotiations, in which access to AI would be conditional on regulatory concessions. This practice risks undermining the EU’s attempts to regulate AI, as it may encourage Member States to favour bilateral negotiations with the United States.
The US approach focuses on extensively increasing computing power – more data centres, more AI chips, larger data sets – assuming that scale alone will deliver a technological breakthrough. That is why the US strategy is largely focused on the development of generative AI models.
The EU’s approach to AI development and regulation
The EU, too, recognised early on the importance of artificial intelligence for economic growth and for the competitiveness of its economy vis-à-vis the US and China. In 2018, the European Commission published the Coordinated Plan on Artificial Intelligence, followed by a revised version in 2021. Among other things, the plan envisaged the creation or updating of national AI strategies, as well as the effective use by Member States of EU funding, in particular the Recovery and Resilience Facility (RRF), to implement AI.
Table 1. Overview of national AI strategies
The RRF, a key component of the €806.9 billion NextGenerationEU recovery instrument, has played a groundbreaking role in accelerating the development of artificial intelligence in all 27 EU Member States. With a mandatory target of 20% of funds allocated to digitalisation (approximately €134 billion), the RRF has provided unprecedented funding for AI-related investments, although approaches vary significantly between countries. This funding has doubled the resources allocated to AI development in the EU compared to previous years.
In 2024, the EU adopted the Artificial Intelligence Act (AI Act), one of the fundamental objectives of which is to create conditions for the development of ethical AI. The Act classifies AI models according to their level of risk to people and society – from solutions with negligible risk (e.g. spam filters), through limited and high risk (e.g. systems used in recruitment), to unacceptable risk (e.g. social scoring, i.e. systems that rate citizens).
As AI development accelerated, the EU began to link the development of artificial intelligence with the possibility of increasing the competitiveness of the EU economy. This was directly reflected, among other things, in Mario Draghi’s report published in 2024, which became one of the reference points for the EC’s economic policy directions for the 2024-2029 term. The report pointed to the relatively low involvement of the EU economy in the development and implementation of new technologies, including artificial intelligence, which could lead to a weakening of the competitiveness of EU industry. It therefore recommended wider use of AI in strategic sectors.
The EU’s AI Continent Action Plan, presented in April 2025, was developed in the context of changing US policy and, in practice, is a response to the shift made by the Trump administration. In terms of priorities, it is similar to the US strategy. It also envisages the construction of large-scale AI infrastructure, including so-called AI factories and gigafactories. Specific plans for their construction are included in the Apply AI Strategy, which complements the Plan. It identifies 10 sectors in which the implementation of artificial intelligence should be accelerated: health, transport and automotive, robotics, manufacturing, engineering and construction, climate and environment, energy, agri-food, defence, electronic communications and media. The EU also plans to increase its computing power by developing a European cloud. These measures aim to reduce dependence on both Chinese and American technologies, as the EU is focusing on technological autonomy and building its own production capacity in the field of AI infrastructure. This could strengthen the European market and the potential of the private sector, facilitating the development of companies such as France’s Mistral AI and Poland’s Bielik AI.
Focus on Poland
Poland is also taking steps to build its own artificial intelligence capabilities. The Bielik language model, developed by a community centred around the SpeakLeash Foundation, is one of the most recognisable Polish AI projects – a large language model optimised for the Polish language. In February 2025, the Ministry of Digital Affairs released PLLuM (Polish Large Language Model) – a family of Polish language models created by a consortium of six scientific institutions, including the Wrocław University of Science and Technology, NASK and the University of Łódź. PLLuM, trained on Polish organic data, is adapted to the specifics of the Polish language and public administration terminology, and its planned applications include, among others, a virtual assistant in the mObywatel application. In February 2025, the government also adopted an updated AI Development Policy in Poland for 2025–2030, setting out the directions for state action in this area. Poland also aspires to attract investments in computing infrastructure that could support the development of domestic AI solutions.
The EU continues to emphasise the importance of developing ethical AI models. At the same time, despite attempts to maintain its vision of AI (e.g. through the continued implementation of the AI Act), the EU has partially adopted the American narrative that regulation poses a threat to innovation, and in response is changing its approach to AI development. Moreover, the plan for extensive AI factory construction seems to replicate the American approach, which considers generative models – i.e. those that create relatively new content (such as images or text, e.g. ChatGPT) on the basis of their training data – to be strategically key. These are general-purpose models trained on very large data sets.
AI funding in the US and the EU
The fundamental difference between the European and American models of AI funding lies in the proportions between public and private funds. Between 2013 and 2023, American companies in the AI industry attracted more than six times as much private capital as companies in the European Union (approximately $76 billion in the EU vs. $486 billion in the US), creating an exceptionally favourable environment for the development of innovation in the field of AI. This is because in the US, private capital, driven by venture capital funds and tech giants, dominates, while in the EU, public funds play a key role. Under the Horizon Europe programme, the EU has allocated €6.4 billion to AI development between 2021 and 2024. The European Innovation Council (EIC), with a budget of over €1.2 billion for the development of new technologies in 2024, has invested only €150 million in AI-related projects (a total of over €400 million between 2021 and 2024). Importantly, some EU Member States, such as Greece, Cyprus, Slovenia and Poland, are heavily dependent on EU funds to finance their AI investments, which exacerbates inequalities within the EU. This is due to the low level of expenditure by these countries themselves on AI development, as well as the low level of private investment.
According to the Stanford AI Index Report, private investment in AI in the US reached $109.1 billion in 2024, almost 12 times more than in China ($9.3 billion) and 24 times more than in the UK ($4.5 billion). Pitchbook Data also reflects these differences: total AI funding is estimated at $97 billion in the US, compared to $13.5 billion in Europe and $13.7 billion in Asia. The difference is also evident in generative AI, where US private investment exceeded the combined total of China, the European Union and the UK by $25.4 billion. It is worth noting, however, that the amounts invested by China and Europe (the EU and the UK) are similar.
In 2025, the EU launched the InvestAI project, which aims to mobilise €200 billion over five years, including €50 billion from public funds and €150 billion from the private sector. A key element of the plan is the construction of four AI gigafactories at a cost of €20 billion, each with computing power equivalent to approximately 100,000 advanced AI chips – a level comparable to that currently pursued by the largest AI companies, such as OpenAI and Anthropic. At the same time, in the US, the Trump administration has launched the Stargate project, which involves the immediate deployment of $100 billion, with the prospect of increasing this to $500 billion over four years (most of the funds will be provided by OpenAI, Oracle, SoftBank and MGX).
Table 2. AI funding in the US, EU and UK
Source: Our World in Data, 2024
However, most private investment in AI worldwide does not focus on specific applications of this technology in particular industries, such as medicine, education or the military. The vast majority of these funds are invested in the infrastructure needed to train advanced artificial intelligence models (in particular, generative AI), followed by data processing. The aim is to continuously increase the amount of training data and computing power.
Table 3. Global private investment in AI by field/area of application
Source: Stanford AI Index Report, 2025
European AI factories
European AI factories, a key element of the AI Continent Action Plan, are in theory intended to support the development of AI in specific sectors of the economy, including healthcare, automotive, energy, agriculture and defence. In practice, however, most of them take a general approach (training AI on large, general data sets, without specialisation), moving away from the original vision. An analysis of thirteen AI factories conducted by the Interface think tank shows that nine of them are focused on at least five different sectors simultaneously, rarely taking into account the strengths of their local industrial ecosystem. Only one factory – HammerHAI – is focused exclusively on industries that match the profile of the region, such as the automotive industry in Stuttgart. The others are effectively offering general-purpose computing infrastructure rather than specialising in line with local competitive advantages. Furthermore, while global computing infrastructure for AI is dominated by the private sector, European AI factories are dominated by academic and research institutions. This structure calls into question the actual ability of these factories to transform research into commercial applications and strengthen the competitiveness of European industry.
Risks and limitations of a data centre-based strategy for general AI models
The expansion of infrastructure for large language models involves significant risks. The first stems from the energy intensity of data centres. According to the International Energy Agency, a single large AI factory currently consumes as much electricity as 100,000 households. In several US states, data centres already account for more than 10% of total electricity consumption, and in Ireland, as much as 20%. By comparison, a single AI gigafactory, as envisaged in the EU’s artificial intelligence development strategy, could consume as much electricity as several million households. The development of data centres for AI will therefore increase electricity demand in the EU and the US, while electricity prices for industry in the EU have remained on average 50% higher than in the US over the last decade (see Table 4). These higher costs may make it difficult for the European Union to compete economically with the US in the mass development of AI and in encouraging data centre developers to locate in the EU.
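The household comparisons above can be checked with a rough back-of-envelope calculation. The figures below are illustrative assumptions, not from the source: a large AI factory is taken to draw roughly 100 MW continuously, and an average household to consume roughly 8,800 kWh of electricity per year (approximately the benchmark behind the IEA-style “100,000 households” comparison).

```python
# Back-of-envelope check of the data-centre energy comparisons in the text.
# Both input figures are assumptions for illustration only.

FACTORY_POWER_MW = 100            # assumed continuous draw of one large AI factory
HOURS_PER_YEAR = 8760             # 365 days x 24 hours
HOUSEHOLD_KWH_PER_YEAR = 8800     # assumed average annual household consumption

factory_kwh_per_year = FACTORY_POWER_MW * 1000 * HOURS_PER_YEAR
equivalent_households = factory_kwh_per_year / HOUSEHOLD_KWH_PER_YEAR

print(f"A ~{FACTORY_POWER_MW} MW AI factory uses as much electricity "
      f"as roughly {equivalent_households:,.0f} households per year")
```

Under these assumptions the result lands close to the 100,000-household figure cited above; a gigafactory an order of magnitude larger (in the gigawatt range and beyond) scales the same arithmetic towards the millions of households mentioned in the text.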
Table 4. Average electricity prices for industry in the EU and the US
Source: European Commission, Study on energy prices and costs
Due to the energy intensity of data centres, the implementation of the EU’s AI strategy may also conflict with the EU’s climate goals. Reducing greenhouse gas emissions requires the electrification of transport and industry, for example, which will increase the EU economy’s demand for electricity, including Poland’s, by at least several dozen per cent by 2050. Renewable energy sources, intended to meet this demand in a carbon-free manner, are not yet being deployed at the scale required. Meeting the energy demand of traditional sectors of the economy is already a challenge, and under the EU’s AI strategy, data centres will become another major consumer of electricity. It is unclear whether the development of carbon-free sources will be able to keep pace with such a large total energy demand, and the AI Continent Action Plan does not answer this question. There is therefore a risk that, in order to implement the EU’s AI development strategy, it will become necessary to return to generating energy from fossil fuels, such as natural gas, or to opt for large-scale investment in nuclear energy.
Sticking to the AI development strategy set out in the AI Continent Action Plan may further increase the EU’s dependence on energy supplies from the US (LNG and oil). Faced with a possible energy shortage caused by the massive development of energy-intensive data centres and the lack of its own energy resources, the EU will have to import even more energy than before. In such a situation, the EU will probably turn to the relatively less emission-intensive natural gas. The US remains the leading exporter of gas to the EU (in 2024, it accounted for 16.5% of gas supplies), and President Donald Trump has pressed EU countries to import even more American gas in exchange for a more lenient tariff policy towards the Union. Following the path of AI development set by the US may therefore, in the long term, serve US interests and, at the same time, threaten the EU’s strategic autonomy.
The supply of key components for the construction of AI infrastructure also poses a significant risk. The model of AI development through data volume scaling relies heavily on advanced chips, the production of which is almost entirely located outside the EU. Taiwan, through TSMC, controls most of the world’s production of the most advanced semiconductors needed to build the infrastructure for training and running large AI models. The strategy of developing key technology by importing essential components from such a politically sensitive area – located in a zone of potential conflict with China – poses a threat to European technological autonomy. Although the EU has attempted to reduce this dependence by adopting the European Chips Act, which provides for €43 billion in investment in domestic semiconductor production, the effects of these measures will not be visible for several years. Meanwhile, the implementation of the EU’s AI strategy is based on the assumption of uninterrupted access to chips produced outside the EU, which makes the entire initiative vulnerable to disruptions in global supply chains, export restrictions or changes in the trade policies of key manufacturers.
Table 5. Largest semiconductor manufacturers by revenue
Source: Trendforce, 2024
Conclusions and recommendations
While the EU has adopted a comprehensive AI Act imposing a range of requirements on developers and suppliers of AI systems in terms of safety, transparency and respect for fundamental rights, the US has deliberately rejected the regulatory path, focusing on minimising restrictions on innovation. This regulatory asymmetry has serious consequences for the European AI market. American technology corporations, operating in an environment with significantly lower regulatory compliance requirements, can develop products faster and more cheaply, and then offer them on a European market that demands local companies meet much more stringent standards. Furthermore, the US administration is exerting direct pressure on the EU to relax or not enforce the AI Act regulations on American suppliers. This situation presents EU regulators with a dilemma: maintaining high standards may slow down the implementation of AI in Europe and discourage investors, while relaxing them undermines the fundamental purpose of regulation – to protect citizens and build trust in technology.
The EU’s AI development strategy is increasingly shaped by external factors that limit its freedom to choose its own technological path. The US administration is openly pressuring its allies to limit technological cooperation with China, accept solutions from American cloud providers, and adapt their regulations to the interests of Silicon Valley companies. Threats of US tariffs and the open linking of energy security issues to technology policy decisions put the EU in a much weaker negotiating position than its economic potential would suggest. At the same time, China is stepping up its efforts to attract European scientists and entrepreneurs with offers of unlimited access to data and state capital. In this competitive field, the European Union is less able to set and implement its own standards, and the AI Continent Action Plan does not propose tools to effectively counter this dynamic, treating the development of artificial intelligence not as a tool for achieving specific goals but rather as an end in itself.
In view of the above challenges, the following actions are worth considering:
- Linking AI development strategies to energy sovereignty. Advanced data centres are highly energy-intensive. Currently, a single large AI factory consumes as much electricity as 100,000 households; in the case of gigafactories, the energy consumption could be equivalent to that of several million households. Energy sovereignty is therefore a prerequisite for digital sovereignty. The European Union should therefore develop a concrete plan that takes into account not only the construction of AI factories (while considering how many to build and for what purpose), but also how to provide them with adequate energy resources. For this reason, the EU should consider large-scale investments in nuclear energy.
- Reorienting AI strategy from general-purpose infrastructure to sectoral applications. The European Union should move away from replicating the American model of AI development based on extensive scaling of computing infrastructure and instead focus on specialised AI systems in sectors with a high impact on productivity. The production of new general-purpose generative AI models should not be the EU’s primary goal; the creation of specialised models will instead allow it to remain competitive with China and the US. Furthermore, the time frame for building the planned infrastructure raises questions about its adequacy for future technological needs – it is not certain that generative AI in its current form will still be the main axis of competitive advantage in a few years. An effective AI policy should therefore focus on specific goals and a vision for development: only on the basis of a clear strategic vision can investment expenditure be allocated so that it delivers the expected results over specific periods (5, 10, 15 years). The Apply AI Strategy identifies 10 sectors in which the implementation of artificial intelligence should be accelerated. Priority sectors should then be selected – e.g. healthcare, industry, public administration, energy, defence and logistics – for which specialised AI models and solutions would be developed. At least 30% of InvestAI funds (approximately €60 billion) should be allocated to such sectoral solutions.
- Investment in quantum computers. Instead of replicating the American model of extensive computing infrastructure development, the EU could consider a “technological leap” strategy – similar to how developing countries bypassed traditional banking and moved directly to fintech. In the context of AI, such a leap could be investing in quantum computers, which could render current data centres obsolete within a decade. Member States with advanced quantum programmes – Germany, France, the Netherlands and Finland – should coordinate the development of this technology to avoid duplication of investment and achieve economies of scale in negotiations with suppliers (IQM, Pasqal, IBM).
- Building competitive advantage through data quality. Europe needs its own data centres, which should not only be technologically advanced but should also operate on high-quality data, drawing on data sets subject to double verification. This offers European AI systems the potential to stand out in terms of quality compared to American ones: strategic value depends on the quality of the data processed, not just on computing power. At the same time, the EU should focus on building an advantage in AI micro-models – specialised, energy-efficient systems tailored to specific sectoral applications. The EU’s key strengths remain data quality and human capital, which enable the creation of precise models; building competitive advantage through rigorously verified, high-quality data sets may prove a more sustainable strategy than the race for “raw” computing power. Establishing a process of double verification of data sets for AI is a task for the European Artificial Intelligence Authority.
- Specialisation of AI factories. European AI factories, a key element of the AI Continent Action Plan, have in practice deviated from the original vision of supporting AI development in specific sectors of the economy. The European Commission (DG CNECT) and EuroHPC should ensure that factories built under InvestAI meet requirements of sectoral specialisation. This would ensure that AI models trained in European AI factories are not only based on high-quality data but are also specialised – the EU would thus move away from creating further general-purpose generative AI models towards specialised AI solutions, thereby building its competitiveness. It is also important for the EU to first decide why it wants to build more factories, which require large financial and energy investments – their construction should not be an end in itself.
- Ensuring data sovereignty. Recent testimony by Microsoft representatives before the French Senate’s investigative committee showed that technology companies cannot guarantee that EU citizens’ data will never be transferred to the US authorities without the explicit consent of EU countries. The same applies to generative artificial intelligence systems and the data that feeds into them. For this reason, it is necessary to introduce requirements for the storage of sensitive data exclusively within European jurisdiction. This is a key task for the European Data Protection Board, which should update its guidelines on data localisation requirements for AI systems processing sensitive data (public and private).
- Mobilising the European private sector. An analysis of financial expenditure on AI development shows that private capital and public-private partnerships are key in this area. EU actions and public investment programmes should therefore focus on mobilising private capital, as is the case with the construction of AI gigafactories, where the European Commission assumes that part of the amount planned for the construction of gigafactories should be financed by the private sector.
- Poland as the leader of regional AI strategy in Central and Eastern Europe. The Central and Eastern European region has significant potential in the form of a high number of ICT specialists, but has struggled with insufficient funding for the development of artificial intelligence. Poland, as the largest economy in the region, should initiate a joint AI strategy for CEE countries, focusing on specialised sectoral solutions in the areas of energy, cybersecurity, industrial production, healthcare and public administration. Such coordination would avoid duplication of investments and allow for more effective competition for funds from the InvestAI programme.
- Poland should invest in domestic, specialised AI models, e.g. focused on supporting public administration. Existing domestic AI models, such as Bielik AI and PLLuM, provide the basis for the systematic development of high-quality, verified data sets in Polish. In this way, Poland can build a lasting advantage, strengthening its position also by the fact that, according to research, Polish is one of the most precise languages for issuing AI commands.
- The Polish government has decided that an AI gigafactory will be built in Poland. It is important that it has a clear sectoral specialisation corresponding to the priorities of the region, particularly in cybersecurity, where CEE countries have both urgent needs and growing competences.
