News

The UNE 0087 standard defines for the first time in Spain the key principles and requirements for creating and operating in data spaces

On 17 July, the UNE 0087 Specification "Definition and Characterisation of Data Spaces" was officially published, the first Spanish standard to establish a common framework for these digital environments.

This milestone has been possible thanks to the collaboration of the Data Space Reference Centre (CRED) with the Spanish Association for Standardisation (UNE). The specification, approved on 20 June 2025, defines three key pillars for data spaces: interoperability, governance and value creation, with the aim of offering legal certainty, trust and a common technical language for the data economy.

For its creation, three working groups were formed, with more than 50 participants from public and private entities contributing their knowledge to define the principles and key characteristics of these collaborative systems. The working groups were organised as follows:

  • WG1: Definition of Data Spaces and Maturity Model.
  • WG2: Technical and Semantic Interoperability.
  • WG3: Legal and Organisational Interoperability.

The specification is therefore a reference document for the creation of secure and reliable data spaces, applicable across all productive sectors and serving as a basis for future guidance documents.

The UNE 0087:2025 specification thus offers guidelines that facilitate the implementation and development of data spaces, establishing an inclusive frame of reference that guides organisations in taking full advantage of their information within an environment of regulatory compliance and digital sovereignty. Its publication brings a number of benefits:

  • Accelerating the deployment of data spaces across all sectors of the economy.
  • Supporting the sustainability and growth of data-sharing ecosystems.
  • Promoting public-private collaboration and ensuring convergence with Europe.
  • Advancing technological autonomy and data sovereignty in these ecosystems.
  • Fostering the discovery of new, innovative business opportunities by encouraging collaboration and the creation of strategic alliances.

Within the specification, data spaces are defined, their key characteristics of interoperability, governance and value generation are established, and the benefits of joining them are set out. The specification has now been published and, although UNE documents normally carry a download fee, it is available free of charge thanks to the sponsorship of the General Directorate of Data.

With this tool, Spain takes a firm step in the consolidation of cohesive, secure data spaces aligned with the European framework, facilitating the implementation of cross-cutting projects in different sectors.

News

Spain is taking a key step towards the data economy with the launch of the Data Spaces Kit, an aid programme that will subsidise the integration of public and private entities in sectoral data spaces.

Data spaces are secure ecosystems in which organisations, both public and private, share information in an interoperable way, under common rules and with privacy guarantees. They allow new products to be developed, decision-making to be improved and operational efficiency to be increased in sectors such as health, mobility or agri-food, among others.

Today, the Ministry for Digital Transformation and Public Function, through the Secretary of State for Digitalisation and Artificial Intelligence, has published in the Official State Gazette the rules governing the granting of aid to entities interested in effectively joining a data space.

This programme, which is called the "Data Spaces Kit", will be managed by Red.es and will subsidise the costs incurred by the beneficiary entities to achieve their incorporation into an eligible data space, i.e. one that meets the requirements set out in the bases, from the day of their publication.

Recipients and Funding

This aid plan is aimed at public and private entities with economic activity, as well as Public Administrations. Among the beneficiaries of these grants are the participants: entities that seek to integrate into these ecosystems to share and take advantage of data and services.

For the execution of this plan, the Government has launched aid of up to 60 million euros, which will be distributed as follows, depending on the type of entity and the level of integration:

  • Private and public entities with economic activity will receive aid of up to €15,000 under the effective incorporation regime, or up to €30,000 if they join as a supplier.
  • Public Administrations, meanwhile, will receive funding of up to €25,000 if they are effectively incorporated, or up to €50,000 if they do so as a supplier.

The incorporation of companies from different sectors into data spaces will generate benefits both for business and for the national economy, such as increased innovation capacity in the beneficiary companies, the creation of new products and services based on data analysis, and improved operational efficiency and decision-making.

The call is expected to be published during the fourth quarter of 2025. The subsidies will be applied for on a non-competitive basis, on a first-come, first-served basis and until the available funds are exhausted.

The publication of these regulatory bases in the Official State Gazette (BOE) aims to boost the data ecosystem in Spain, strengthen the competitiveness of the economy at the global level and consolidate the financial sustainability of innovative business models.

More information:

Regulatory bases in the BOE.

Data Space Reference Center LinkedIn page.

Blog

Just a few days ago, the Directorate General of Traffic published the new Framework Programme for the Testing of Automated Vehicles which, among other measures, contemplates "the mandatory delivery of reports, both periodic and final and in the event of incidents, which will allow the DGT to assess the safety of the tests and publish basic information [...] guaranteeing transparency and public trust."

Advances in digital technology are enabling the transport sector to face an unprecedented revolution: autonomous driving, which offers significant improvements in road safety, energy efficiency and mobility accessibility.

The final deployment of these vehicles depends to a large extent on the availability, quality and accessibility of large volumes of data, as well as on an appropriate legal framework that ensures the protection of the various legal assets involved (personal data, trade secrets, confidentiality, etc.), traffic security and transparency. In this context, open data and the reuse of public sector information are essential elements for the responsible development of autonomous mobility, in particular when it comes to ensuring adequate levels of traffic safety.

Data Dependency on Autonomous Vehicles

The technology that supports autonomous vehicles is based on the integration of a complex network of advanced sensors, artificial intelligence systems and real-time processing algorithms, which allows them to identify obstacles, interpret traffic signs, predict the behaviour of other road users and, collaboratively, plan routes completely autonomously.

In the autonomous vehicle ecosystem, the availability of quality open data is strategic for:

  • Improving road safety: real-time traffic data can be used to anticipate dangers, avoid accidents and optimise safe routes based on massive data analysis.
  • Optimising operational efficiency: access to up-to-date information on the state of roads, works, incidents and traffic conditions allows for more efficient planning of journeys.
  • Promoting sectoral innovation: open data facilitates the creation of new digital tools for mobility.

Specifically, ensuring the safe and efficient operation of this mobility model requires continuous access to two key categories of data:

  • Variable or dynamic data, which offers constantly changing information such as the position, speed and behaviour of other vehicles, pedestrians, cyclists or weather conditions in real time.
  • Static data, which includes relatively permanent information such as the exact location of traffic signs, traffic lights, lanes, speed limits or the main characteristics of the road infrastructure.

The prominence of the data provided by public entities

The sources from which such data come are certainly diverse, which is highly relevant to the conditions under which the data will be made available. Specifically, some of the data are provided by public entities, while in other cases they originate from private companies (vehicle manufacturers, telecommunications service providers, developers of digital tools...) with their own interests, or even from individuals using public spaces, devices and digital applications.

This diversity requires a different approach to facilitating the availability of data under appropriate conditions, in particular because of the difficulties that may arise from a legal point of view. In relation to Public Administrations, Directive (EU) 2019/1024 on open data and the reuse of public sector information establishes clear obligations that would apply, for example, to the Directorate General of Traffic, the Administrations owning public roads or municipalities in the case of urban environments. Likewise, Regulation (EU) 2022/868 on European data governance reinforces this regulatory framework, in particular with regard to the guarantee of the rights of third parties and, in particular, the protection of personal data.

Moreover, some datasets should be provided under the conditions established for dynamic data, i.e. those "subject to frequent or real-time updates, due in particular to their volatility or rapid obsolescence", which should be available "for re-use immediately after collection, through appropriate APIs and, where appropriate, in the form of a bulk download."

One might even think that the high-value data category is of particular interest in the context of autonomous vehicles, given its potential to:

  • Promote technological innovation, making it easier for manufacturers, developers and operators to access reliable and up-to-date information, essential for the development, validation and continuous improvement of autonomous driving systems.
  • Facilitate monitoring and evaluation from a safety perspective, as the transparency and accessibility of such data are essential prerequisites for oversight.
  • Boost the development of advanced services, since data on road infrastructure, signage, traffic and even the results of tests carried out under the aforementioned Framework Programme constitute the basis for new mobility applications and services that benefit society as a whole.

However, this condition is not expressly included for traffic-related data in the definition made at European level, so that, at least for the time being, public entities should not be required to disseminate the data relevant to autonomous vehicles under the specific conditions established for high-value data. Nevertheless, at this time of transition towards the deployment of autonomous vehicles, it is essential that public administrations publish, and keep updated under conditions appropriate for automated processing, certain datasets, such as those relating to:

  • Road signs and vertical signage elements.
  • Traffic light states and traffic control systems.
  • Lane configuration and characteristics.
  • Information on works and temporary traffic alterations.
  • Road infrastructure elements critical for autonomous navigation.

The recent update of the official catalogue of traffic signs, which comes into force on 1 July 2025, incorporates signs adapted to new realities, such as personal mobility. However, greater specificity is still needed regarding the availability of data on these signs under such conditions, which will require the intervention of the authorities responsible for road signage.

The availability of data in the context of the European Mobility Area

Based on these conditions and the need to have mobility data generated by private companies and individuals, data spaces appear as the optimal legal and governance environment to facilitate their accessibility under appropriate conditions.

In this regard, the initiatives for the deployment of the European Mobility Data Space, created in 2023, constitute an opportunity to integrate into its design and configuration measures that support the access to data required by autonomous vehicles. Thus, within the framework of this initiative, it would be possible to unlock the potential of mobility data, and in particular to:

  • Facilitate the availability of data under conditions specific to the needs of autonomous vehicles.
  • Promote the interconnection of various data sources linked to existing means of transport, but also emerging ones.
  • Accelerate the digital transformation of autonomous vehicles.
  • Strengthen the digital sovereignty of the European automotive industry, reducing dependence on large foreign technology corporations.

In short, autonomous vehicles may represent a fundamental transformation of mobility as it has been conceived until now, but their development depends, among other factors, on the availability, quality and accessibility of sufficient and adequate data. The Sustainable Mobility Bill currently before Parliament is a great opportunity to strengthen the role of data in facilitating innovation in this area, which would undoubtedly favour the development of autonomous vehicles. To this end, it will be essential, on the one hand, to have a data-sharing environment that makes access to data compatible with appropriate guarantees for fundamental rights and information security; and, on the other, to design a governance model that, as emphasised in the Programme promoted by the Directorate-General for Traffic, facilitates the collaborative participation of "manufacturers, developers, importers and fleet operators established in Spain or the European Union", which poses significant challenges for data availability.


Content prepared by Julián Valero, Professor at the University of Murcia and Coordinator of the Research Group "Innovation, Law and Technology" (iDerTec). The contents and points of view reflected in this publication are the sole responsibility of its author.

Blog

Data is a fundamental resource for improving our quality of life because it enables better decision-making processes to create personalised products and services, both in the public and private sectors. In contexts such as health, mobility, energy or education, the use of data facilitates more efficient solutions adapted to people's real needs. However, in working with data, privacy plays a key role. In this post, we will look at how data spaces, the federated computing paradigm and federated learning, one of its most powerful applications, provide a balanced solution for harnessing the potential of data without compromising privacy. In addition, we will highlight how federated learning can also be used with open data to enhance its reuse in a collaborative, incremental and efficient way.

Privacy, a key issue in data management

As mentioned above, the intensive use of data requires increasing attention to privacy. For example, in eHealth, secondary misuse of electronic health record data could violate patients' fundamental rights. One effective way to preserve privacy is through data ecosystems that prioritise data sovereignty, such as data spaces. A data space is a federated data management system that allows data to be exchanged reliably between providers and consumers. In addition, the data space ensures the interoperability of the data used to create value-generating products and services. In a data space, each provider maintains its own governance rules, retaining control over its data (i.e. sovereignty over its data) while enabling its re-use by consumers. This implies that each provider should be able to decide what data it shares, with whom and under what conditions, ensuring compliance with its interests and legal obligations.

Federated computing and data spaces

Data spaces represent an evolution in data management, related to a paradigm called federated computing, in which data is reused without any data flow from providers to consumers. In federated computing, providers transform their data into privacy-preserving intermediate results before sending them to data consumers. In addition, this enables other privacy-enhancing technologies (PETs) to be applied. Federated computing aligns well with reference architectures such as Gaia-X and its Trust Framework, which sets out the principles and requirements to ensure secure, transparent and rule-compliant data exchange between data providers and data consumers.

Federated learning

One of the most powerful applications of federated computing is federated machine learning (federated learning), an artificial intelligence technique that allows models to be trained without centralising data. That is, instead of sending the data to a central server for processing, each participant sends only the model trained locally on its own data.

These models are then combined centrally to create a global model. As an example, imagine a consortium of hospitals that wants to develop a predictive model to detect a rare disease. Every hospital holds sensitive patient data, and open sharing of this data is not feasible due to privacy concerns (including other legal or ethical issues). With federated learning, each hospital trains the model locally with its own data, and only shares the model parameters (training results) centrally. Thus, the final model leverages the diversity of data from all hospitals without compromising the individual privacy and data governance rules of each hospital.

Training in federated learning usually follows an iterative cycle:

  1. A central server initialises a base model and sends it to each of the participating distributed nodes.
  2. Each node trains the model locally with its data.
  3. Nodes return only the parameters of the updated model, not the data (i.e. raw data never leaves the node).
  4. The central server aggregates the parameter updates (the training results from each node) and updates the global model.
  5. The cycle is repeated until a sufficiently accurate model is achieved.


Figure 1. Visual representation of the federated learning training process. Source: own elaboration

This approach is compatible with various machine learning algorithms, including deep neural networks, regression models, classifiers, etc.
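The iterative cycle described above can be sketched in a few lines of code. The following is a minimal illustration of federated averaging (FedAvg) on a toy linear-regression task; the three nodes, their synthetic datasets and all hyperparameters are invented for demonstration, not drawn from any real deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Step 2: each node trains the model locally with its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w -= lr * grad
    return w

# Three nodes, each holding a private synthetic dataset that is never shared.
true_w = np.array([2.0, -1.0])
nodes = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    nodes.append((X, y))

# Step 1: the central server initialises a base model.
global_w = np.zeros(2)

# Step 5: the cycle repeats until the model is sufficiently accurate.
for _ in range(20):
    # Step 3: nodes return only updated parameters, never raw data.
    local_ws = [local_update(global_w, X, y) for X, y in nodes]
    # Step 4: the server aggregates updates, weighted by dataset size.
    sizes = np.array([len(y) for _, y in nodes])
    global_w = np.average(local_ws, axis=0, weights=sizes)

print(global_w)  # should land close to true_w
```

Note that the server only ever sees model parameters: the aggregation step is a dataset-size-weighted average of the locally trained weights, which is the core idea behind the FedAvg algorithm.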

Benefits and challenges of federated learning

Federated learning offers multiple benefits by keeping data at its source. The most notable are:

  1. Privacy and compliance: by remaining at source, data exposure risks are significantly reduced and compliance with regulations such as the General Data Protection Regulation (GDPR) is facilitated.
  2. Data sovereignty: Each entity retains full control over its data, which avoids competitive conflicts.
  3. Efficiency: avoids the cost and complexity of exchanging large volumes of data, speeding up processing and development times.
  4. Trust: facilitates frictionless collaboration between organisations.

There are several use cases in which federated learning proves especially valuable, for example:

  • Health: Hospitals and research centres can collaborate on predictive models without sharing patient data.
  • Finance: banks and insurers can build fraud detection or risk-sharing analysis models, while respecting the confidentiality of their customers.
  • Smart tourism: tourist destinations can analyse visitor flows or consumption patterns without the need to unify the databases of their stakeholders (both public and private).
  • Industry: Companies in the same industry can train models for predictive maintenance or operational efficiency without revealing competitive data.

While its benefits are clear in a variety of use cases, federated learning also presents technical and organisational challenges:

  • Data heterogeneity: local data may have different formats or structures, making training difficult. In addition, the distribution of this data may change over time, which adds further difficulty.
  • Unbalanced data: Some nodes may have more or higher quality data than others, which may skew the overall model.
  • Local computational costs: Each node needs sufficient resources to train the model locally.
  • Synchronisation: the training cycle requires good coordination between nodes to avoid latency or errors.

Beyond federated learning

Although the most prominent application of federated computing is federated learning, many additional applications in data management are emerging, such as federated data analytics (federated analytics). Federated data analysis allows statistical and descriptive analyses to be performed on distributed data without the need to move the data to the consumers; instead, each provider performs the required statistical calculations locally and only shares the aggregated results with the consumer according to their requirements and permissions. The following table shows the differences between federated learning and federated data analysis.

 

| Criteria | Federated learning | Federated data analysis |
| --- | --- | --- |
| Target | Prediction and training of machine learning models. | Descriptive analysis and calculation of statistics. |
| Task type | Predictive tasks (e.g. classification or regression). | Descriptive tasks (e.g. means or correlations). |
| Example | Training disease diagnosis models using medical images from various hospitals. | Calculating health indicators for a health area without moving data between hospitals. |
| Expected output | Trained global model. | Aggregated statistical results. |
| Nature | Iterative. | Direct. |
| Computational complexity | High. | Medium. |
| Privacy and sovereignty | High. | Medium. |
| Algorithms | Machine learning algorithms. | Statistical algorithms. |

Table 1. Comparative table. Source: own elaboration
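As a minimal illustration of federated data analysis, the following sketch computes a global average indicator across several providers by sharing only local aggregates; the node names and values are invented placeholders, not real health data.

```python
# Each provider holds its own records; raw values never leave the node.
node_data = {
    "hospital_a": [68, 72, 75, 80],
    "hospital_b": [90, 85, 88],
    "hospital_c": [70, 74],
}

# Each provider computes its aggregates locally...
local_results = [(sum(vals), len(vals)) for vals in node_data.values()]

# ...and only these (sum, count) pairs reach the consumer, which combines them.
total, count = map(sum, zip(*local_results))
global_mean = total / count

print(global_mean)  # prints 78.0
```

Each provider shares only a partial aggregate, never individual records, yet the consumer obtains exactly the same statistic it would have computed over the pooled data.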

Federated learning and open data: a symbiosis to be explored

In principle, open data resolves privacy issues prior to publication, so one would think that federated learning techniques would not be necessary. Nothing could be further from the truth. The use of federated learning techniques can bring significant advantages in the management and exploitation of open data. In fact, the first aspect to highlight is that open data portals such as datos.gob.es or data.europa.eu are federated environments. Therefore, in these portals, the application of federated learning on large datasets would allow models to be trained directly at source, avoiding transfer and processing costs. On the other hand, federated learning would facilitate the combination of open data with other sensitive data without compromising the privacy of the latter. Finally, the nature of a wide variety of open data types is very dynamic (such as traffic data), so federated learning would enable incremental training, automatically considering new updates to open datasets as they are published, without the need to restart costly training processes.

Federated learning, the basis for privacy-friendly AI

Federated machine learning represents a necessary evolution in the way we develop artificial intelligence services, especially in contexts where data is sensitive or distributed across multiple providers. Its natural alignment with the concept of the data space makes it a key technology to drive innovation based on data sharing, taking into account privacy and maintaining data sovereignty.

As regulation (such as the European Health Data Space Regulation) and data space infrastructures evolve, federated learning, and other types of federated computing, will play an increasingly important role in data sharing, maximising the value of data, but without compromising privacy. Finally, it is worth noting that, far from being unnecessary, federated learning can become a strategic ally to improve the efficiency, governance and impact of open data ecosystems.


Jose Norberto Mazón, Professor of Computer Languages and Systems at the University of Alicante. The contents and views reflected in this publication are the sole responsibility of the author.

Blog

Today's industry is facing one of the biggest challenges in its recent history. Market demands, pressure to meet climate targets, consumer demand for transparency and technological acceleration are converging in a profound transformation of the production model. This transformation is not only aimed at greater competitiveness, but also at more resilient, flexible, efficient and sustainable production.

In this context, industrial digitisation - driven by technologies such as the Internet of Things (IoT), artificial intelligence, edge computing, or cyber-physical systems - is generating massive amounts of operational, environmental and logistical data. However, the mere existence of this data does not in itself guarantee value. The key is to govern it properly, i.e. to establish principles, processes, roles and technologies that ensure that this data is reliable, accessible, useful and secure. In other words, that the data is fit to be harnessed to improve industrial processes.

This is why industrial data governance is positioned as a strategic factor. It is not just a matter of ‘having data’, but of turning it into a critical infrastructure for decision-making, resource optimisation, intelligent automation and ecological transition. Without data governance, there is no Industry 4.0. And without Industry 4.0, the challenges of sustainability, energy efficiency or full traceability are intractable.

In this article we explore why data governance is essential in industry, what concrete benefits it brings to production processes and how initiatives such as the National Industry Data Space can accelerate this transformation. We then analyse its impact at different levels, from the most relevant use cases to the collaborative frameworks emerging in Spain.

Why is data governance key in industry?

Industrial data comes from a multitude of distributed sources: IoT sensors, SCADA systems, automated assembly lines, maintenance platforms, ERP or Manufacturing Execution Systems (MES), among others. This heterogeneity, if not properly managed, can become a barrier to the integration and useful analysis of information.

Data governance overcomes these barriers by establishing the rules of the game for data capture, storage, quality, interoperability, use, protection and disposal. This enables not only operational efficiency but also long-term sustainability. How?

  • Reducing operational inefficiencies: by having accurate, up-to-date and well-integrated data between systems, tasks can be automated, rework avoided, and unplanned downtime reduced. For example, a plant can adjust the speed of its production lines in real time based on the analysis of performance and demand data.
  • Improving sustainability: robust data management can identify patterns of energy consumption, materials or emissions. With this information, processes can be redesigned to be more sustainable, eco-design can be applied and the environmental footprint can be reduced. Data, in this case, acts as a compass towards decarbonisation.
  • Ensuring regulatory compliance and traceability: from ISO 9001 to the new circular economy regulations or the Digital Product Passport, industries must demonstrate compliance. This is only possible with reliable, traceable and auditable data.
  • Facilitating interoperability between systems: data governance acts as the ‘glue’ that binds together the different technological silos of an organisation: quality, production, logistics, maintenance, purchasing, etc. The standardisation and semantic alignment of data allows for more agile flows and better informed decisions.
  • Boosting the circular economy: thanks to the full traceability of a product's life cycle, from design to recycling, it is possible to identify opportunities for reuse, material recovery and sustainable design. This is supported by data that follows the product throughout its life.

What should data governance look like in the industrial environment?

A data governance model adapted to this context should include:

Specific roles: a defined team is needed, in which everyone's responsibilities and tasks are clear. Some essential roles are:

  • Data owners: responsible for the use of the data in their area (production, quality, maintenance...).
  • Data stewards: ensure the consistency, completeness and accuracy of the information.
  • Data governance team: coordinates the strategy, defines common policies and evaluates compliance.

Structured processes: as with the roles, the various phases and operations to be carried out need to be defined. These include the following:

  • Classification and cataloguing of data assets (by type, criticality, use).
  • Data quality control: definition of validation rules, cleaning of duplicates, exception management.
  • Data life cycle: from its creation on the machine to its archiving or destruction.
  • Access and security: privilege management, usage audits, traceability.
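As an illustration of the data quality control process listed above, validation rules can be expressed as machine-readable checks that are run against every incoming record. The field names and thresholds below are hypothetical, invented for this sketch rather than taken from any standard.

```python
from datetime import datetime, timezone

# Hypothetical quality rules for one industrial sensor record.
RULES = {
    "temperature_c": lambda v: isinstance(v, (int, float)) and -40.0 <= v <= 150.0,
    "machine_id":    lambda v: isinstance(v, str) and v != "",
    "timestamp":     lambda v: isinstance(v, datetime) and v <= datetime.now(timezone.utc),
}

def validate(record: dict) -> list:
    """Return the list of fields that violate a quality rule."""
    errors = [f for f in RULES if f not in record]  # completeness check
    errors += [f for f, rule in RULES.items()
               if f in record and not rule(record[f])]  # accuracy check
    return errors

# A record with a physically implausible temperature reading.
record = {
    "temperature_c": 300.0,
    "machine_id": "press-01",
    "timestamp": datetime(2024, 1, 1, tzinfo=timezone.utc),
}
print(validate(record))  # prints ['temperature_c']
```

Centralising rules like these in a catalogue, rather than scattering them across individual systems, is what lets data stewards audit and evolve quality criteria consistently.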

Organisational policies: to ensure interoperability and data quality, it is necessary to have standards, norms and guidelines to guide users. Some examples are:

  • Standards for nomenclature, formats, encoding and synchronisation.
  • Standards for interoperability between systems (e.g. use of standards such as OPC UA or ISA-95).
  • Guidelines for ethical and legally compliant use (such as the Data Regulation, GDPR or environmental legislation).

This approach makes industrial data an asset managed with the same rigour as any physical infrastructure.

Industrial use cases enabled by data governance

The benefits of data governance in industry are realised in multiple practical applications. Some of the most representative use cases are:

1. Predictive maintenance

One of the great classics of Industry 4.0. By combining historical maintenance data with real-time sensors, organisations can anticipate machine failures and avoid unexpected downtime. But this is only possible if the data is governed: if its capture frequency, format, responsible parties, quality and availability have been defined.

2. Complete product traceability

From raw material to end customer, every event in the value chain is recorded and accessible. This is vital for sectors such as food, automotive or pharmaceuticals, where traceability is both an added value and a regulatory obligation. Data governance ensures that this traceability is not lost, is verifiable and meets the required interoperability standards.

3. Digital twins and process simulation

For a digital twin - a virtual replica of a physical process or system - to work, it needs to be fed with accurate, up-to-date and consistent data. Data governance ensures synchronisation between the physical and virtual worlds, and allows the generation of reliable simulation scenarios, from the design of a new production line to the optimisation of the factory layout, i.e. of the different elements within the plant.

4. Energy monitoring and emission control

Real-time monitoring of energy, water or gas consumption can reveal hidden inefficiencies and opportunities for savings. Through intelligent dashboards and KPIs defined on governed data, industrial plants can reduce their costs and advance their environmental sustainability goals.

5. Automation and intelligent quality control

Machine vision systems and machine learning algorithms trained on production data make it possible to detect defects in real time, adjust parameters automatically and improve final quality. Without good data quality (accuracy, completeness, consistency), these algorithms can fail or produce unreliable results.
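The quality dimensions mentioned (accuracy, completeness, consistency) can be turned into automated gates applied before training. The following Python sketch is purely illustrative — field names, thresholds and rules are invented assumptions, not part of any real plant's schema:

```python
def quality_report(records, required=("part_id", "width_mm", "station")):
    """Basic quality gates (completeness, validity, consistency) applied
    before production data is used to train a defect-detection model."""
    issues = []
    seen = {}
    for n, rec in enumerate(records):
        # Completeness: every required field must be present and non-empty
        for field in required:
            if rec.get(field) in (None, ""):
                issues.append((n, f"missing {field}"))
        # Validity: a plausible physical range for the measurement
        w = rec.get("width_mm")
        if isinstance(w, (int, float)) and not (0 < w < 1000):
            issues.append((n, "width_mm out of range"))
        # Consistency: the same part must not report two different widths
        key = rec.get("part_id")
        if key in seen and seen[key] != w:
            issues.append((n, "inconsistent width for part"))
        seen[key] = w
    return issues

records = [
    {"part_id": "A1", "width_mm": 12.5, "station": "S1"},
    {"part_id": "A1", "width_mm": 99.0, "station": "S2"},   # inconsistent
    {"part_id": "A2", "width_mm": -3.0, "station": "S1"},   # invalid
    {"part_id": "A3", "station": "S1"},                     # incomplete
]
print(quality_report(records))
```

Records failing these gates would be quarantined or corrected before reaching the training pipeline, rather than silently degrading the model.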

The National Industry Data Space: key to collaboration and competitiveness

For industrial data governance to transcend the scope of each company and become a real lever for sectoral transformation, infrastructures are needed that facilitate the secure, reliable and efficient sharing of data between organisations. The National Data Space for Industry, framed within the Plan for the Promotion of Sectoral Data Spaces promoted by the Ministry for Digital Transformation and the Civil Service, moves in precisely this direction.

This space aims to create an environment of trust where companies, associations, technology centres and administrations can share and reuse industrial data in an interoperable manner, in accordance with ethical, legal and technical principles. Through this framework, the aim is to enable new forms of collaboration, accelerate innovation and reinforce the strategic autonomy of the national productive fabric.

The industrial sector in Spain is enormously diverse, with an ecosystem made up of large corporations, SMEs, suppliers, subcontractors, clusters and R&D centres. This diversity can become a strength if it is articulated through a common data infrastructure that facilitates the integration and exchange of information in an orderly and secure manner. Moreover, these industrial data can be complemented with open data published by public bodies, such as those available in the National Catalogue of Open Data, thus extending the value and possibilities of reuse for the sector as a whole.

This common infrastructure makes it possible to:

  • Detect synergies along the value chain, such as industrial recycling opportunities between different sectors (e.g. plastic waste from one chemical industry used as raw material in another).
  • Reduce entry barriers to digitisation, especially for SMEs that lack the resources to deploy advanced data analytics solutions but could access shared services or data within the space.
  • Encourage open innovation models where companies share data in a controlled way for the joint development of solutions based on artificial intelligence or predictive maintenance.
  • Promote sectoral aggregate indicators, such as shared carbon footprints, energy efficiency levels or industrial circularity indices, which allow the country as a whole to make more coordinated progress towards sustainability and competitiveness objectives.

The creation of the National Industrial Data Space can be a true lever of modernisation for the Spanish industrial fabric:

  • Increased international competitiveness, by facilitating compliance with European market requirements, such as the Data Regulation, the Digital Product Passport, and sustainability standards.
  • Regulatory agility and improved traceability, allowing industries to respond quickly to audits, certifications, or regulatory changes.
  • Proactive capacity, thanks to the joint analysis of production, consumption, or market data that allows for the prediction of disruptions in supply chains or the demand for critical resources.
  • Creation of new business models, based on the provision of products as a service, the reuse of materials, or the shared leasing of industrial capacities.

The deployment of this national data space not only seeks to improve the efficiency of industrial processes. It also aims to strengthen the country's technological and data sovereignty, enabling a model where the value generated by data remains within the companies, regions, and sectors themselves. In this sense, the National Industrial Data Space aligns with European initiatives such as GAIA-X and Manufacturing-X, but with an approach adapted to the context and needs of the Spanish industrial ecosystem.

Conclusions

Data governance is a fundamental pillar for the industry to move toward more efficient, sustainable, and resilient models. Having large volumes of information is not enough: it must be managed properly to generate real value.

The benefits are clear: operational optimization, improved traceability, a boost to the circular economy, and support for technologies such as artificial intelligence and digital twins. But the real leap forward comes when data is no longer managed in isolation and becomes part of a shared ecosystem.

The National Industrial Data Space offers this framework for collaboration and trust, facilitating innovation, competitiveness, and technological sovereignty. Investing in its development means investing in a more connected, intelligent industry that is prepared for the challenges of the future.


Content prepared by Dr. Fernando Gualo, Professor at UCLM and Data Governance and Quality Consultant. The content and point of view reflected in this publication are the sole responsibility of its author.

Blog

With 24 official languages and more than 60 regional and minority languages, the European Union is proud of its cultural and linguistic diversity. However, this richness also represents a significant challenge in the digital and technological sphere. Advances in artificial intelligence (AI) and natural language processing have been dominated by English, creating a noticeable imbalance in the availability of language resources for most European languages.

This imbalance has direct consequences, for example:

  • Asymmetric technology development: Companies and researchers have difficulty creating AI solutions adapted to specific languages because resources are limited.
  • Technological dependence: Europe risks becoming dependent on language solutions developed outside its cultural and normative context.

Addressing this gap is not only a matter of inclusion, but also represents a large-scale economic opportunity, capable of generating huge gains in both trade and technological innovation. To address these challenges, the European Commission has launched the European Language Data Space (LDS), a decentralised infrastructure that promotes the secure and controlled exchange of language data among multiple actors in the European ecosystem.

Unlike a simple centralised repository, the LDS functions as a language data marketplace that allows participants to share, sell or license their data under clearly defined conditions and with full control over the use of the data.

The European Language Data Space (LDS), with a beta version operational, represents a decisive step towards democratising language technologies across all languages of the European Union. We tell you the keys to this project and the next steps.

How does this platform work?

LDS is based on a decentralised peer-to-peer (P2P) architecture that allows users to interact directly with each other, without the need for a central server or single authority, where each participant maintains control of its own data. The key elements of LDS operation are:

1. Decentralised and sovereign architecture

Each participant (whether data provider or data consumer) can locally install the LDS Connector, software that allows them to interact directly with other participants without the need for a central server. This approach ensures:

  • Data sovereignty: owners retain full control over who can access their data and under what conditions of use.

  • Trust and security: only eligible and authorised participants (legal entities registered in the EU) can be part of the ecosystem.

  • Interoperability: the connector is compatible with other European data spaces, following common standards.

2. Data exchange flow

The exchange process follows a structured flow between two main actors:

  • The providers describe their linguistic datasets, establish access policies (licences, prices) and publish these offers in the catalogue.
  • The consumers explore the catalogue, identify resources of interest and, through their connectors, initiate negotiations on the terms of use.

If both parties reach an agreement, a contract is established and the data transfer takes place securely between the connectors.
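The flow described above can be illustrated with a simplified Python sketch. This is not the real LDS Connector API — class names, policies and payloads are invented for illustration — but it captures the sequence: publish an offer, negotiate terms, establish a contract, then transfer:

```python
class Catalogue:
    """Optional catalogue where providers publish dataset offers."""
    def __init__(self):
        self.offers = {}

    def publish(self, dataset_id, policy):
        self.offers[dataset_id] = policy

class Provider:
    def __init__(self, datasets):
        self.datasets = datasets

    def negotiate(self, dataset_id, accepted_licence):
        # A contract is established only if the consumer accepts the policy
        policy = self.datasets[dataset_id]["policy"]
        if accepted_licence == policy["licence"]:
            return {"dataset_id": dataset_id, "licence": accepted_licence}
        return None

    def transfer(self, contract):
        # Data moves connector-to-connector only under a valid contract
        if contract:
            return self.datasets[contract["dataset_id"]]["data"]
        raise PermissionError("no contract agreed")

# Illustrative flow (names and payloads are invented for the example)
catalogue = Catalogue()
provider = Provider({
    "corpus-es": {"policy": {"licence": "CC-BY-4.0", "price": 0},
                  "data": ["hola mundo", "buenos días"]},
})
catalogue.publish("corpus-es", provider.datasets["corpus-es"]["policy"])

# Consumer side: discover the offer, negotiate, then receive the data
offer = catalogue.offers["corpus-es"]
contract = provider.negotiate("corpus-es", offer["licence"])
print(provider.transfer(contract))
```

The key design point mirrored here is that the catalogue only advertises offers; the data itself never passes through it, travelling directly between the two parties' connectors once a contract exists.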

3. Supporting infrastructure

Although the exchange is decentralised, the LDS includes supporting elements such as:

  • Participant registration: ensures that only verified entities participate in the ecosystem.

  • Optional catalogue: facilitates the publication and discovery of available resources.

  • Hub of vocabularies: a service that centralises controlled vocabularies and maintains lists of values, definitions, relationships between terms, mappings between lists, etc.

  • Monitoring service: allows you to monitor the overall operation of the system.

Added value for the European data ecosystem

The LDS brings significant benefits to the European digital landscape:

  • Boosting multilingual AI

By facilitating access to quality linguistic data in all European languages, the LDS contributes directly to the development of more inclusive AI models adapted to Europe's multilingual reality.  This is especially relevant at a time when large language models (LLMs) are transforming human-machine interaction.

  • Strengthening the data economy

It is estimated that true digital language integration could generate enormous economic benefits in both trade and technological innovation. The LDS creates a marketplace where language data becomes valuable by incentivising its collection, processing and availability under fair and transparent conditions.

  • Preservation of linguistic diversity

By promoting technological development in all European languages, the LDS contributes to preserving and revitalising the continent's linguistic heritage, ensuring that no language is left behind in the digital revolution.

The crucial role of industry and public administrations

The success of the LDS depends crucially on the active participation of various actors:

  • Fresh, quality data

The platform especially seeks to attract "fresh" data from industry (media, publishing, customer services) and the public sector, needed to train and improve current language models. Particularly valued are:

  • Multimodal data (text, audio, video).

  • Specific content from various professional domains.

  • Up-to-date and relevant language resources.
  • Participation open to all ecosystem actors

The LDS is designed to be inclusive, allowing both private organisations and public entities to participate, as long as they are legal entities registered in the EU. Both types of organisations can act as providers and/or consumers of data.

Participation is formalised through a validation process by the governance board, ensuring that all eligible organisations can benefit from this common language data marketplace.

How can you take part?

The beta version of the LDS is now operational and open to new participants. Organisations interested in participating in this initiative can:

  1. Join the test and focus groups: contribute to the development and improvement of the platform, here.
  2. Test the LDS connector: experiment with the technology in controlled environments.
  3. Provide technical feedback: help define key aspects such as metadata, licensing or exchange mechanisms.
  4. Identify relevant data: assess which language resources could be shared through the platform.

The future of the LDS

While LDS currently focuses on data exchange, its medium-term vision envisages the possibility of integrating language services and AI model hosting within the same ecosystem, thus reinforcing Europe's role in the development of language technologies. A pre-final version of LDS is expected to be available in July 2025 and the finalised version in January 2026.

All these aspects were discussed in a free online seminar held by the European open data portal, "Data spaces: experience from the European Language Data Space". You can watch the webinar again here.

In a global context where technological sovereignty has become a strategic priority, the European Language Data Space represents a decisive step towards ensuring that the AI revolution does not leave Europe's linguistic richness behind.

Blog

Access to financial and banking data is revolutionising the sector, promoting transparency, financial inclusion and innovation in economic services. However, the management of this data faces regulatory challenges in balancing openness with security and privacy.

For this reason, there are different ways of accessing this type of data, as we will see below.

Open Banking and Open Finance versus Open Data

These terms, although related, have important differences.

The term Open Banking refers to a system that allows banks and other financial institutions to securely and digitally share customer financial data with third parties. This requires the customer's express approval of the data-sharing conditions. This consent can be withdrawn at any time at the customer's discretion.

Open Finance, on the other hand, is an evolution of Open Banking which embraces a broader range of financial products and services. When we talk about Open Finance, in addition to banking data, data on insurance, pensions, investments and other financial services are included.

In both Open Banking and Open Finance, the data is not open (Open Data): it can only be accessed by those previously authorised by the customer. The exchange of data is done through an application programming interface (API), which guarantees the agility and security of the process. All of this is regulated by the European directive on payment services in the internal market (known as PSD2), although the European Commission is working on updating the regulatory framework.
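The consent-centred logic described above can be sketched in Python. This is a hypothetical illustration, not a real PSD2 or bank API — the names, fields and truncated IBAN are invented — showing that access works only while an explicit, revocable consent is in force:

```python
# Hypothetical account-information request, in the spirit of PSD2-style
# APIs: access is only possible with an explicit, revocable customer consent.
def get_accounts(consents, customer_id, consent_id):
    consent = consents.get(consent_id)
    if not consent or consent["customer"] != customer_id or consent["revoked"]:
        raise PermissionError("no valid customer consent")
    return consent["accounts"]  # only the data the customer agreed to share

consents = {
    "c-123": {"customer": "ana", "revoked": False,
              "accounts": [{"iban": "ES91...", "balance": 1520.75}]},
}

print(get_accounts(consents, "ana", "c-123"))   # allowed while consent holds
consents["c-123"]["revoked"] = True             # the customer withdraws consent
# get_accounts(consents, "ana", "c-123") would now raise PermissionError
```

Real Open Banking implementations add strong customer authentication and signed API requests on top of this consent check, but the principle is the same: no consent, no data.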

  • Applications of Open Banking and Open Finance:

The purpose of these activities is to provide access to new services based on information sharing. For example, they facilitate the creation of apps that unify access to all the bank accounts of a customer, even if they are from different providers. This improves the management and control of income and expenditure by providing an overview in a single environment.

Another example of use is that they allow providers to cross-check information more quickly. For example, by allowing access to a customer's financial data, a dealer could provide information on financing options more quickly.

Open data platforms on banking

While private banking data, like all types of personal data, is strictly regulated and cannot be openly published due to privacy protection regulations, there are sets of financial data that can be freely shared. For example, aggregate information on interest rates, economic indicators, historical stock market data, investment trends and macroeconomic statistics, which are accessible through open sources.

This data, in addition to boosting transparency and confidence in markets, can be used to monitor economic trends, prevent fraud and improve risk management globally. In addition, fintech companies, developers and entrepreneurs can take advantage of it to create solutions such as financial analysis tools, digital payment systems or automated advice.

Let's look at some examples of places where open data on the banking and financial sector can be obtained.

International sources

Some of the most popular international sources are:

  • European Central Bank: provides statistics and data on euro area financial markets, through various platforms. Among other information, users can download datasets on inflation, bank interest rates, balance of payments, public finances, etc.

  • World Bank: provides access to global economic data on financial development, poverty and economic growth.

  • International Monetary Fund: provides simplified access to macroeconomic and financial data, such as the outlook for the global or regional economy. It also provides open data from reports such as its Fiscal Monitor, which analyses the latest developments in public finances.

  • Federal Reserve Economic Data (FRED): focuses on US economic data, including market indicators and interest rates. This repository is created and maintained by the Research Department of the Federal Reserve Bank of St. Louis.

National sources

Through the National Open Data Catalogue of datos.gob.es a large number of datasets related to the economy can be accessed. One of the most prominent publishers is the Instituto Nacional de Estadística (INE), which provides data on defaults by financial institution, mortgages, etc.

In addition, the Banco de España offers various products for those interested in the country's economic data:

  • Statistics: the Banco de España collects, compiles and publishes a wide range of economic and financial statistics. It includes information on interest and exchange rates, financial accounts of the institutional sectors, balances of payments and even household financial surveys, among others.
  • Dashboard: the Banco de España has also made available to the public an interactive viewer that allows quarterly and annual data on external statistics to be consumed in a more user-friendly way.

In addition, the Banco de España has set up a secure room for researchers to access data that is valuable but cannot be opened to the general public due to its nature. In this sense we find:

  • BELab: the secure data laboratory managed by the Banco de España, offering on-site (Madrid) and remote access. These data have been used in various projects.
  • ES_DataLab: restricted microdata laboratory for researchers developing projects for scientific and public interest purposes. In this case, it brings together micro-data from various organisations, including the Bank of Spain.

Data spaces: an opportunity for secure and controlled exchange of financial data

As we have just seen, there are also options to facilitate access to financial and banking data in a controlled and secure manner. This is where data spaces come into play, an ecosystem where different actors share data in a voluntary and secure manner, following common governance, regulatory and technical mechanisms.

In this respect, Europe is pushing for a European Financial Data Facility (EEDF), a key initiative within the European Data Strategy. The EEDF consists of three main pillars:

  • Public reporting data ("public disclosures"): collects financial reporting data (balance sheets, revenues, income statements), which financial firms are required by law to disclose on a regular basis. In this area is the European Single Access Point (ESAP) initiative, a centralised platform for accessing data from over 200 public reports from more than 150,000 companies.
  • Private customer data of financial service providers: encompasses those data held by financial service providers such as banks. In this area is the framework for access to financial data, which covers data such as investments, insurance, pensions, loans and savings.
  • Data from supervisory reports: for this type of data, the supervisory strategy, which covers data from different sectors (banks, insurance, pension funds...), has to be taken into account in order to promote digital transformation in the financial sector.

In conclusion, access to financial and banking data is evolving significantly thanks to various initiatives that have enabled greater transparency and that will encourage the development of new services, while ensuring the security and privacy of shared data. The future of the financial sector will be shaped by the ability of institutions and regulators to foster data ecosystems that drive innovation and trust in the market.

Blog

A Smart Tourism Destination (ITD) follows a management model based on innovation and the use of advanced technology to optimise the visitor experience and improve the sustainability of the destination, while strengthening the quality of life of residents. The DTI model (from the Spanish "Destino Turístico Inteligente") relies on a series of indicators that allow the state of the tourism destination to be analysed, areas for improvement to be diagnosed and strategic action plans to be developed. This approach, promoted by SEGITTUR (Sociedad Estatal para la Gestión de la Innovación y las Tecnologías Turísticas) and other regional public entities (e.g. the DTI-CV model of the Comunitat Valenciana, defined by INVATTUR, the Instituto Valenciano de Tecnologías Turísticas), has been consolidated as a key pillar in the digital transformation of tourism. This intensive use of technology has turned ITDs into true data-generating hubs which, combined with external sources, can be used to optimise decision-making and improve destination management.

Data provenance in an ITD and its use

In an ITD, data are basically generated from two main areas:

  1. Data generated by visitors or tourists: they create a digital footprint as they interact with different technologies. This footprint includes comments, ratings, images, spending records, locations and preferences, which are reflected in mobile apps, social media or booking platforms. In addition, data is generated passively through electronic payment systems or urban mobility systems, as well as traffic measurement devices, among others.
  2. Data generated by the tourist destination: thanks to sensorisation and the deployment of IoT (Internet of Things) networks, destinations collect real-time information on traffic management, energy consumption, environmental quality and the use of services (public or private). In addition, the destination generates essential data on its tourism offer, such as updated lists of accommodation and hospitality establishments, places or events of tourist interest, and complementary services.

The combination of these data sources in an Intelligent Destination Platform (IDP), such as the one proposed by SEGITTUR, allows ITDs to use them for more innovative, experience-oriented management.

Figure: Areas of data generation in an ITD. Source: own elaboration.

There are numerous examples and good practices in the use of these tourism data, implemented by various European destinations, whose description is documented in the report Study on mastering data for tourism by EU destinations. This report provides a detailed insight into the opportunities that the use of data offers to improve the competitiveness of the tourism ecosystem. Furthermore, this report does not ignore the importance of tourist destinations as data generators, formulating a series of recommendations for public administrations, including the development of a cross-cutting data management plan - i.e. involving not only the area of tourism, but also other areas such as urban planning and the environment-, guaranteeing an integrated approach. This plan should promote the availability of open data, with a special focus on data related to sustainability, accessibility and specialised tourism offer.

Smart destination models and open data

SEGITTUR's DTI model (recently described in its practical guide) requires tourist destinations to create an open data portal, so that data is published where tourism activity takes place and can be accessed in reusable formats, enabling the development of different products and services. No specific themes are mandated, but information of interest such as public transport, shops, job offers, the cultural agenda or environmental sensors is highlighted. Of particular interest is the definition of indicators to assess the quality of the portal, such as compliance with open data standards, the existence of systems to automate the publication of data, or the number of datasets available per 100,000 inhabitants. It is also indicated that new datasets should be added progressively as their usefulness is identified.

It should be noted that in other DTI models, such as INVATTUR's DTI-CV model mentioned above, it is also proposed that destinations should have a tourism open data portal in order to promote tourism innovation.

High-value tourism data

The European Union, through Directive (EU) 2019/1024 on open data and the re-use of public sector information and Implementing Regulation (EU) 2023/138, has defined high-value datasets in various areas, including tourism within the category of statistical data. These are data on tourism flows in Europe:

  • Overnight stays in tourist accommodation establishments, at national level, at NUTS 2 level (autonomous communities and cities), NUTS 3 level (provinces) and for some specific cities.
  • Tourist arrivals and departures, tourist expenditure, hotel occupancy, demand for tourist services, at national level.

It is interesting to note that these and other data have been collected in Dataestur, the data platform for tourism in Spain. Dataestur organises its data in different categories:

  • Travel and leisure: statistics on tourist arrivals, attraction ratings, museum visits and leisure spending.
  • Economy: data on employment in the tourism sector, active businesses and spending by international visitors.
  • Transport: data on mobility, including air traffic, bus and rail transport, roads and ports.
  • Accommodation: information on hotel occupancy, rural tourism, campsites and tourist accommodation, as well as prices, profitability and hotel satisfaction.
  • Sustainability: indicators on air quality, water and nature conservation in tourist destinations.
  • Knowledge: analysis of visitor perception, security, digital connectivity, tourism studies and reports.

Most of these data are collected at provincial level (NUTS 3) and are therefore not published at destination level. In this sense, the Spanish Federation of Municipalities and Provinces (FEMP) proposes 80 datasets to be published openly by the local administration which, in addition, take into account high-value data, bringing them down to the local level. Among all these data sets, the following are explicitly defined as data within the tourism category: cultural agenda, tourist accommodation, census of commercial and leisure premises, tourist flows, tourist attractions and monuments.

Barriers and opportunities in the publication of open data by ITDs

After analysing the current state of data management in the field of tourism, a series of opportunities for tourism destinations as generators of open data are proposed:

  • Provision of data for internal consumption: tourism data covers multiple themes and is generated in different departments within the local administration, such as tourism, urban planning, mobility, environment or economy. Given this scenario of diversity of sources and decision-makers, working on the publication of data in reusable formats not only facilitates its reuse by external agents, but also optimises its use within the local administration itself, allowing for a more efficient and data-based management.

  • Fostering innovation in tourism: open data from tourism destinations is an excellent raw material on which to develop intelligent products and services with added value for the sector. This facilitates public-private collaboration, promoting the creation of a technology industry around tourism destinations and the open data they generate.

  • Facilitating the participation of tourism destinations in data spaces: the publication of open data allows the managing bodies of tourism destinations to join data spaces in a more robust way. On the one hand, having open data facilitates interoperability between actors in the sector. On the other hand, tourism open data initiatives increase the data culture in tourism destinations, boosting the perception of the usefulness of data-driven tourism management.

Despite these clear opportunities, there are a number of barriers that make it difficult for tourism destinations to publish data in open format effectively:

  • Need for sufficient budget and technical resources: the publication of open data requires investment in technological platforms and in the training of specialised teams. This is even more important in the field of tourism, where data are heterogeneous, thematically diverse and fragmented, requiring additional efforts in their collection, standardisation and coordinated publication.
  • Small business dominance in tourism destinations: tourism businesses need to be supported to incorporate the use of open destination data, as well as the development of data-driven solutions tailored to the needs of the destination.
  • Awareness of the usefulness of open data: there is a risk that open data will be seen as a trend rather than a strategic resource that enables tangible benefits. In this sense, data is perceived as an internal resource rather than an asset that can be shared to multiply its value. There is a lack of clear references and examples of the impact of the reuse of open data in tourist destinations that would allow for a deeper incorporation of a data culture at the tourist destination level.
  • Difficulty in operationalising data strategies: Tourism destinations have incorporated the publication of open data in their strategic plans, but it is necessary to push for its effective implementation. One of the key issues in this regard is the fear of loss of competitive advantage, as the publication of open data related to a destination's tourism activity could reduce its differentiation from other destinations. Another concern relates to legal and personal data protection aspects, especially in areas such as mobility and tourism consumption.

Conclusions: the future of open data in ITD models

In relation to data management, it is necessary to address aspects that are still not sufficiently developed in the DTI models, such as data exchange at the destination, rather than the mere purchase of information; the transversal integration of data on a local scale, allowing the cross-referencing of information from different areas (urban planning, environment, tourism, etc.); obtaining a greater level of detail in the data, both in terms of time (specific events) and space (areas or points of interest within destinations), safeguarding privacy; and the development of an effective open data strategy.

Focusing on this last point, ITD strategies should include the publication of open data. To this end, it is a priority to define a data management plan that allows each destination to determine what data is produced, how it can be shared and under what conditions, ensuring that the opening of data does not negatively affect the competitiveness of the destination or conflict with current data protection and privacy legislation.

A key tool in this process is the definition of a catalogue, which makes it possible to organise, prioritise and classify the available data (and their metadata) according to their value and usefulness for the different actors in the tourism ecosystem. This catalogue should enable ITD data to comply with the FAIR principles (Findable, Accessible, Interoperable, Reusable), facilitating their open publication or allowing their integration in data spaces (such as the European tourism data space developed in the DeployTour project). In this context, each identified and catalogued dataset should have two versions:

  1. An open version, accessible to any user and in a reusable format, with an unrestricted licence (i.e. an open dataset).
  2. A version that allows specific agreements for use in data spaces, where the sovereignty and control of the destination is maintained, establishing access restrictions and conditions of use.
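The two-version approach described above can be sketched as a catalogue entry, modelled loosely on the DCAT vocabulary (a dataset with two distributions). This is a minimal illustration only: the dataset name, licence URL and field names are assumptions, not part of any ITD specification.

```python
# Sketch of a catalogue entry for one ITD dataset, loosely following the
# DCAT pattern of a dataset with several distributions.
# All identifiers, titles and licence URLs below are illustrative only.

def make_catalogue_entry(dataset_id, title, themes):
    """Return a catalogue entry with an open and a restricted distribution."""
    return {
        "id": dataset_id,
        "title": title,
        "themes": themes,  # thematic tags aid findability (the F in FAIR)
        "distributions": [
            {   # 1. Open version: reusable format, unrestricted licence
                "access": "open",
                "format": "CSV",
                "license": "https://creativecommons.org/licenses/by/4.0/",
            },
            {   # 2. Data-space version: sovereignty kept by the destination
                "access": "restricted",
                "format": "CSV",
                "usage_policy": "bilateral agreement within the data space",
            },
        ],
    }

entry = make_catalogue_entry(
    "visitor-flows-2024",  # hypothetical dataset
    "Daily visitor flows by point of interest",
    ["tourism", "mobility"],
)
open_dists = [d for d in entry["distributions"] if d["access"] == "open"]
print(len(entry["distributions"]), len(open_dists))  # 2 1
```

The point of the sketch is that both distributions share one set of descriptive metadata, so the destination maintains a single catalogued asset while offering it under two access regimes.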

Regardless of the approach taken, all published data should comply with the FAIR principles, ensuring that it is findable, accessible, interoperable and reusable, promoting its use in both the public and private sectors and facilitating the development of innovative data-driven solutions in the field of tourism.


Jose Norberto Mazón, Professor of Computer Languages and Systems at the University of Alicante. The contents and views reflected in this publication are the sole responsibility of the author.

Blog

The European Green Deal is the European Union's (EU) sustainable growth strategy, designed to drive a green transition that transforms Europe into a fair and prosperous society with a modern and competitive economy. Within this strategy, initiatives such as Fit for 55, which aims to reduce EU emissions by at least 55% by 2030, and the Nature Restoration Regulation, which sets binding targets to restore ecosystems, habitats and species, stand out.

The European Data Strategy positions the EU as a leader in data-driven economies, promoting fundamental values such as privacy and sustainability. This strategy envisages the creation of sectoral data spaces to encourage the availability and sharing of data, promoting its re-use for the benefit of society and various sectors, including the environment.

This article looks at how environmental data spaces, driven by the European Data Strategy, play a key role in achieving the goals of the European Green Deal by fostering the innovative and collaborative use of data.

The Green Deal data space in the European Data Strategy

In this context, the EU is promoting the Green Deal Data Space, designed to support the objectives of the Green Deal through the use of data. This data space will make it possible to share data and harness its full potential to address key environmental challenges in several areas: preservation of biodiversity, sustainable water management, the fight against climate change and the efficient use of natural resources, among others.

In this regard, the European Data Strategy highlights two initiatives:

  • On the one hand, the GreenData4all initiative, which updates the INSPIRE Directive to enable greater exchange of environmental geospatial data between the public and private sectors and its effective re-use, including open access for the general public.
  • On the other hand, the Destination Earth project, which proposes the creation of a digital twin of the Earth, using satellite data among other sources, which will allow the simulation of scenarios related to climate change, the management of natural resources and the prevention of natural disasters.

Preparatory actions for the development of the Green Deal data space

As part of its strategy for funding preparatory actions for the development of data spaces, the EU is funding the GREAT project (The Green Deal Data Space Foundation and its Community of Practice). This project focuses on laying the foundations for the Green Deal data space through three strategic use cases: climate change mitigation and adaptation, zero pollution and biodiversity. A key aspect of GREAT is the identification and definition of a prioritised set of high-value environmental data (a minimum but scalable set). This approach directly connects the project to the concept of high-value data defined in the European Open Data Directive, i.e. data whose re-use generates not only a positive economic impact but also social and environmental benefits. The high-value data defined in the Implementing Regulation include data related to Earth observation and the environment, including data obtained from satellites, ground sensors and in situ measurements. These packages cover issues such as air quality, climate, emissions, biodiversity, noise, waste and water, all of which are related to the European Green Deal.

Differentiating aspects of the Green Deal data space

At this point, three differentiating aspects of the Green Deal data space can be highlighted.

  • Firstly, its clearly multi-sectoral nature requires consideration of data from a wide variety of domains, each with its own specific regulatory frameworks and models.
  • Secondly, its development is deeply linked to the territory, which implies the need to adopt a bottom-up approach starting from concrete, local scenarios.
  • Finally, it includes high-value data, which highlights the importance of the active involvement of public administrations, as well as the collaboration of the private and third sectors, to ensure its success and sustainability.

Therefore, the potential of environmental data will be significantly increased through European data spaces that are multi-sectoral, territorialised and with strong public sector involvement.

Development of environmental data spaces under the Horizon Europe programme

To develop environmental data spaces that take the above considerations into account, the EU is funding four projects under the Horizon Europe (HORIZON) programme:

  • Urban Data Spaces for Green dEal (USAGE). This project develops solutions to ensure that environmental data at the local level is useful for mitigating the effects of climate change. This includes mechanisms that enable cities to generate data meeting the FAIR principles (Findable, Accessible, Interoperable, Reusable), enabling its use for environmentally informed decision-making.
  • All Data for Green Deal (AD4GD). This project aims to propose a set of mechanisms to ensure that biodiversity, water quality and air quality data comply with the FAIR principles. It considers data from a variety of sources (satellite remote sensing, in situ observation networks, IoT-connected sensors, citizen science and socio-economic data).
  • F.A.I.R. information cube (FAIRiCUBE). The purpose of this project is to create a platform that enables the reuse of biodiversity and climate data through machine learning techniques. The aim is to enable public institutions that currently lack easy access to these resources to improve their environmental policies and evidence-based decision-making (e.g. for adapting cities to climate change).
  • Biodiversity Building Blocks for Policy (B-Cubed). This project aims to transform biodiversity monitoring into an agile process that generates more interoperable data. It considers biodiversity data from different sources, such as citizen science, museums, herbaria and research, as well as their consumption through business intelligence models, such as OLAP cubes, to support informed decision-making and the design of public policies that counteract the global biodiversity crisis.
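The OLAP-cube idea mentioned for B-Cubed can be illustrated with a minimal roll-up: occurrence records are aggregated at one granularity (region and year), and a dimension is then aggregated away to answer a coarser question. The species names and figures below are invented for the example; this is a sketch of the general technique, not of B-Cubed's actual data model.

```python
# Minimal illustration of an OLAP-style roll-up over biodiversity
# occurrence records (all figures below are made up for the example).
from collections import defaultdict

# Fact table: (region, year, species, number of recorded occurrences)
records = [
    ("Alicante", 2023, "Larus audouinii", 12),
    ("Alicante", 2024, "Larus audouinii", 9),
    ("Valencia", 2023, "Larus audouinii", 7),
    ("Valencia", 2024, "Testudo graeca", 4),
]

# Cube cell at (region, year) granularity: sum occurrences per cell
cube = defaultdict(int)
for region, year, _species, n in records:
    cube[(region, year)] += n

# Roll-up: aggregate the year dimension away to get totals per region
by_region = defaultdict(int)
for (region, _year), n in cube.items():
    by_region[region] += n

print(dict(by_region))  # {'Alicante': 21, 'Valencia': 11}
```

A real deployment would use a dedicated OLAP engine or a columnar store rather than in-memory dictionaries, but the operations (aggregation to a chosen granularity, then roll-up along a dimension) are the same.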

Environmental data spaces and research data

Finally, one source of data that can play a crucial role in achieving the objectives of the European Green Deal is the scientific data emanating from research results. In this context, the European Union's European Open Science Cloud (EOSC) initiative is an essential tool. EOSC is an open, federated digital infrastructure designed to provide the European scientific community with access to high-quality scientific data and services, i.e. a true research data space. This initiative aims to facilitate interoperability and data exchange across all fields of research by promoting the adoption of the FAIR principles; its federation with the Green Deal data space is therefore essential.

Conclusions

Environmental data is key to meeting the objectives of the European Green Deal. To encourage the availability and sharing of this data and promote its re-use, the EU is developing a series of environmental data space projects. Once in place, these data spaces will facilitate more efficient and sustainable management of natural resources through active collaboration between all stakeholders, both public and private, driving Europe's ecological transition.


Jose Norberto Mazón, Professor of Computer Languages and Systems at the University of Alicante. The contents and views reflected in this publication are the sole responsibility of the author.

Noticia

The Ministry for the Digital Transformation and Civil Service, on 17 December, announced the publication of the call for proposals for products and services for data spaces, an initiative that seeks to promote innovation and development in various sectors through financial aid. These grants are designed to support companies and organisations in the implementation of advanced technological solutions, thus promoting competitiveness and digital transformation in Spain.

In addition, on 30 December, the Ministry also launched the second call for proposals for demonstrators and use cases. This call aims to encourage the creation and development of sectoral data spaces, promoting collaboration and the exchange of information between the different actors in the sector.

The Ministry has been promoting these calls through online workshops that inform and prepare stakeholders on the opportunities and benefits of data spaces. These events are expected to continue throughout January, providing further opportunities for stakeholders to learn more and participate.

The following material may be of interest:

Call for demonstrators and use cases

2nd Call for Demonstrators and Use Cases: €65 million, call for proposals published in December 2024.

Schedule of demonstrators and use cases:

  1. First call. Call for applications: first and second quarter 2024. Resolution: third and fourth quarter 2024. Project execution: third and fourth quarter 2024, 2025, and first and second quarter 2026.
  2. Second call. Call: third and fourth quarter 2024. Application: first quarter 2025. Resolution: second quarter 2025. Project execution: third and fourth quarter 2025 and first and second quarter 2026.

Products and services

Products and services: €44 million, call for proposals published in December 2024.

Schedule: publication of the regulatory bases (BBRR): third quarter 2024. Call for applications: fourth quarter 2024 and first quarter 2025. Resolution: first and second quarter 2025. Project execution: third and fourth quarter 2025 and first and second quarter 2026.
