Governments across Europe, the Middle East and Asia are beginning to confront a problem that until recently sat largely in the background of digital policy: where, and under whose control, their most sensitive data is processed.
For years, public-sector technology strategies relied heavily on external cloud providers and commercially developed software. That model delivered efficiency and scale. But as artificial intelligence becomes more deeply embedded in national security, infrastructure and economic systems, concerns about dependency are becoming more explicit.
A growing number of policymakers now view AI infrastructure not simply as a technical choice, but as a matter of sovereignty.
Among the companies operating in this space is DREAM, founded by Israeli entrepreneur Shalev Hulio and former Austrian Chancellor Sebastian Kurz. The company focuses on building AI systems designed to operate within government-controlled environments, including on-premise and air-gapped systems.
The premise is straightforward: in certain sectors, particularly defense, finance and critical infrastructure, data cannot leave national borders or secure networks. That constraint has limited the use of many of the most advanced AI tools, which are typically cloud-based.
“Governments cannot simply take a model built in the U.S. or China, upload sensitive national data, and run it,” Hulio explains. “Data cannot leave. Systems are on-prem. Some are air-gapped. There are strict regulations.”
This challenge is not unique to any one country. Across Europe in particular, reliance on foreign technology providers has become a subject of increasing debate, especially since the start of the war in Ukraine.
“In the past, outsourcing was efficient,” Kurz notes. “Today AI is part of national capability. If your models, compute, or infrastructure are controlled outside your jurisdiction, your independence is limited. What happens if the supplier decides, or is told to, just switch it off?”
The concern extends beyond questions of control to questions of speed and coordination. As cyber threats grow more sophisticated, governments are often dealing with fragmented systems and siloed data.
“When data is fragmented today, it is not just inefficient,” Hulio warns. “It means you cannot connect signals fast enough. And when adversaries are automated and coordinated, speed becomes survival.”
Hulio sees this as a structural gap. “When we saw that governments are among the most attacked entities in the world and almost no one builds specifically for them,” he explains, “it became clear this was not a product gap. It was structural.”
Kurz describes the consequences in operational terms. “If adversaries are automated and governments remain fragmented and manual, the imbalance compounds,” he says. “This shows up in real places: Cyber attacks on critical infrastructure. Supply chain disruptions. Energy instability. For citizens, it means lights going out. Services disrupted.”
He recalls conversations with European leaders who expressed a similar concern: “They fear cyber attacks on infrastructure more than tanks crossing their borders.”
These concerns have prompted growing government interest in training and deploying AI models entirely within national infrastructure.

DREAM is one of several companies attempting to build systems around that constraint. Its approach focuses on structuring large volumes of government data and developing models that operate locally rather than relying on external cloud environments.
“Most earlier platforms were dashboards and tools for analysts,” Hulio says. “They were not sovereign AI systems designed for national scale and leadership-level decision making. They improved visibility. They did not create unified decision capability.”
The company’s architecture reflects the trade-offs inherent in government systems. “You cannot optimize only for speed or growth,” Hulio says. “You must optimize for security, accountability, and responsibility. That changes architecture fundamentally.”
In practice, that means building models tailored to specific national contexts and threat environments, rather than relying on standardized global systems.
“Infrastructure can be similar. Data is national. Models are trained locally,” Kurz says. “Threat environments differ. Capability can be standardized. Sovereignty remains local.”
The broader trend points to a shift in how governments define technological power. Where national security was once measured in physical assets, it is increasingly tied to control over data, computation and decision-making systems.
“You no longer need tanks or aircraft to cause damage,” Hulio says. “Today the most lethal and lasting attacks can be through cyber. Electricity, water, oil and gas, financial systems, supply chains; they’re all vulnerable. Now with AI, attackers have the ability to process enormous amounts of data and build predictions.”
As AI capabilities continue to advance, the question for governments is no longer whether to adopt them, but how to do so without increasing strategic dependence.
“Cyber, AI, and quantum are becoming structural elements of power,” Kurz concludes. “If you do not invest in infrastructure and an ecosystem, you will depend on others. Governments need AI that runs inside their own environment, on their own data, under their own control. That is what we build.”