The Andalusian Institute of Statistics and Cartography (Instituto de Estadística y Cartografía de Andalucía, IECA), in collaboration with the Andalusian Agency for International Development Cooperation (AACID), has incorporated new municipal-level indicators into its Sustainable Development Indicators System for Andalusia for the 2030 Agenda. This effort aims to integrate statistical and geographical information while enhancing the efficiency of the Andalusian public administration and the information services provided to society.
Thanks to these efforts, Andalusia has been selected as one of the participating regions in the European project "REGIONS 2030: Monitoring the SDGs in EU regions," along with nine other regions in the European Union. All of these regions share a strong commitment to the analysis and fulfillment of the Sustainable Development Goals (SDGs), recognizing the importance of this work in decision-making and sustainable regional development.
The "REGIONS 2030" project, funded by the European Parliament and developed by the Joint Research Centre (JRC) of the European Commission in collaboration with the Directorate-General for Regional and Urban Policy (DG REGIO) and EUROSTAT, aims to fill data gaps in monitoring the SDGs in EU regions.

Image 1: "REGIONS 2030" Project: Monitoring the SDGs in EU regions.
Source: Andalusian Institute of Statistics and Cartography (IECA)
The new indicators incorporated are essential for measuring the progress of the SDGs
The Andalusian Institute of Statistics and Cartography, in collaboration with AACID, has created a set of indicators, available on its website, that allow the advancement of the Sustainable Development Goals to be evaluated at the regional level. All the new municipal-level indicators are identified with the label "Joint Research Centre (municipal)" for Andalusia, and they address 9 of the 17 Sustainable Development Goals.
The methodology used for most of the indicators is based on georeferenced information from the Andalusian Institute of Statistics and Cartography, using publications on the Spatial Distribution of the Population in Andalusia and the Characterization and Distribution of Built Space in Andalusia as reference points.
One of the indicators provides information on Goal 1: No Poverty and measures the risks of poverty by assessing the percentage of people residing at an address where none of their members are affiliated with Social Security. This indicator reveals more unfavorable conditions in urban municipalities compared to rural ones, consistent with previous studies that identify cities as having more acute poverty situations than rural areas.
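The logic behind this Goal 1 indicator can be sketched in a few lines of code. The records below are invented for illustration (the real calculation uses IECA's georeferenced population registers): people are grouped by address, and the indicator is the share of the population living at an address where no resident is affiliated with Social Security.

```python
# Illustrative micro-example of the Goal 1 indicator: share of people living
# at an address where no resident is affiliated with Social Security.
# (Invented records; the real calculation uses IECA's georeferenced registers.)
people = [
    {"address": "A", "affiliated": True},
    {"address": "A", "affiliated": False},
    {"address": "B", "affiliated": False},
    {"address": "B", "affiliated": False},
    {"address": "C", "affiliated": True},
]

# Addresses where at least one resident is affiliated with Social Security.
covered = {p["address"] for p in people if p["affiliated"]}

# People living at addresses with no affiliated resident.
at_risk = [p for p in people if p["address"] not in covered]
indicator = 100 * len(at_risk) / len(people)
print(f"{indicator:.1f}%")  # 2 of 5 people live at fully unaffiliated addresses
```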
Similarly, the per capita Built-up Area indicator for Goal 11: Sustainable Cities and Communities has been calculated using cadastral data and geospatial processes in geographic information systems.
Visualization and query of the new municipal indicators
These visualisation tools allow users to obtain information at the municipal level on the value of each indicator and its variation with respect to the previous year, both for Andalusia as a whole and for the different degrees of urbanisation.

Image 2: Data visualization of the indicator.
Source: Andalusian Institute of Statistics and Cartography (IECA)
Moreover, the applied filter enables an analysis of the temporal and geographical evolution of the indicators in each of the considered areas, providing a temporal and territorial perspective.

Image 3: Visualization of the indicator's evolution by area.
Source: Andalusian Institute of Statistics and Cartography (IECA)
These results are presented through an interactive map at the municipal level, displaying the distribution of the indicator in the territory.

Image 4: Interactive map of the indicator.
Source: Andalusian Institute of Statistics and Cartography (IECA)
The data for the indicators are also available in downloadable structured formats (XLS, CSV, and JSON). Methodological information regarding the calculations for each indicator is provided as well.
The inclusion of Andalusia in the "REGIONS 2030" project
This inclusion has integrated all of this work with the existing Sustainable Development Indicators System for Andalusia for the 2030 Agenda, calculated and published by the IECA to date. This collective effort among the participating regions will serve to establish a methodology and select the most relevant regional indicators in Europe (NUTS 2 level), so that the methodology can later be applied to all European regions.
The "REGIONS 2030" project, after completing its initial work in Andalusia, has disseminated its results in the article "Monitoring the SDGs in Andalusia region, Spain," published by the European Commission in July 2023, and in an event held at the Three Cultures Foundation of the Mediterranean on September 27, under the title 'SDG Localisation and Monitoring Framework for 2030 Agenda Governance: Milestones & Challenges in Andalusia.' In this event, each selected region presented their results and discussed the needs, deficiencies, or lessons learned in generating their reports.
The "REGIONS 2030" project will conclude in December 2023 with the presentation and publication of a final report. This report will consolidate the ten regional reports generated during the monitoring of the Sustainable Development Goals at the regional level in Europe, contributing to their effective monitoring as part of the proper implementation of the Agenda 2030.
This application is a tool that shows, in real time, the cost of electricity in Spain under the regulated PVPC tariff (Voluntary Price for Small Consumers). The objective is that any user can check the hours with the lowest electricity prices in order to save on their electricity bill.
It offers various hour-by-hour graphs of the price of electricity, as well as useful data for users obtained from the open API of ESIOS (Red Eléctrica de España). All of these graphs and data show how the price of electricity fluctuates in Spain.
Users can easily find out at what time electricity is cheapest at any given moment and the exact price, as well as an estimate of the next day's prices, available from 8:30 p.m. the previous day.
Open data sources are:
- Data from Red Eléctrica: https://api.esios.ree.es/
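As a minimal sketch of the kind of query the application performs, the snippet below finds the cheapest hours of the day from a table of hourly prices. The prices are invented illustrative values; in the real application they would come from the open ESIOS API of Red Eléctrica.

```python
# Illustrative hourly PVPC prices in EUR/MWh (invented values, not real data);
# in the application these would come from the open ESIOS API.
hourly_prices = {
    0: 95.1, 1: 90.3, 2: 88.7, 3: 87.2, 4: 86.9, 5: 89.4,
    6: 98.0, 7: 110.5, 8: 121.3, 9: 118.2, 10: 105.6, 11: 101.9,
    12: 99.8, 13: 97.4, 14: 92.1, 15: 91.0, 16: 94.3, 17: 103.7,
    18: 115.2, 19: 126.8, 20: 131.4, 21: 128.9, 22: 112.0, 23: 99.5,
}

def cheapest_hours(prices, n=3):
    """Return the n hours with the lowest price, sorted from cheapest up."""
    return sorted(prices, key=prices.get)[:n]

print(cheapest_hours(hourly_prices))  # the three cheapest hours of the day
```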
Data activism is an increasingly significant citizen practice in the platform era for its growing contribution to democracy, social justice and rights. It is an activism that uses data and data analysis to generate evidence and visualisations with the aim of revealing injustices, improving people's lives and promoting social change.
In the face of the massive use of surveillance data by certain corporations, data activism is exercised by citizens and non-governmental organisations. For example, the organisation Forensic Architecture (FA), a research centre at Goldsmiths, University of London, investigates human rights violations, including state violence, using public, citizen and satellite data, and methodologies such as open source intelligence (known as OSINT). The analysis of data and metadata, the synchronisation of video footage taken by witnesses or journalists, as well as official recordings and documents, allows for the reconstruction of facts and the generation of an alternative narrative about events and crises.
Data activism has attracted the interest of research centres and non-governmental organisations, generating a line of work within the discipline of critical studies. This has allowed us to reflect on the effect of data, platforms and their algorithms on our lives, as well as on the empowerment that is generated when citizens exercise their right to data and use it for the common good.

Image 1: Ecocide in Indonesia (2015)
Source: Forensic Architecture (https://forensic-architecture.org/investigation/ecocide-in-indonesia)
Research centres such as Datactive or the Data + Feminism Lab have created theory and debates on the practice of data activism. Likewise, organisations such as Algorights (a collaborative network that encourages civil society participation in the field of AI technologies) and AlgorithmWatch (a human rights organisation) generate knowledge, networks and arguments to fight for a world in which algorithms and artificial intelligence (AI) contribute to justice, democracy and sustainability, rather than undermine them.
This article reviews how data activism emerged, what interest it has sparked in social science, and its relevance in the age of platforms.
History of a practice
The production of maps using citizen data may have been one of the first manifestations of data activism as it is known today. A seminal map in the history of data activism was generated by victims and activists with data from the 2010 Haiti earthquake on the Kenyan platform Ushahidi ("testimony" in Swahili). A community of digital humanitarians created the map from other countries and called on victims and their families and acquaintances to share data on what was happening in real time. Within hours, the data was verified and visualised on an interactive map that continued to be updated with more data and was instrumental in assisting the victims on the ground. Today, such maps are generated whenever a crisis arises, and are enriched with citizen, satellite and camera-equipped drone data to clarify events and generate evidence.
Emerging from movements known as cypherpunk and technopositivism or technoptimism (based on the belief that technology is the answer to humanity's challenges), data activism has evolved as a practice to adopt more critical stances towards technology and the power asymmetries that arise between those who originate and hand over their data, and those who capture and analyse it.
The Ushahidi community map production platform, for example, has since been used to create data on gender-based violence in Egypt and Syria, and on trusted gynaecologists in India. Today, the invisibilisation and silencing of women is the reason why some organisations are fighting for recognition and a policy of visibility, something that became evident with the #MeToo movement. Feminist data practices seek visibility and critical interpretations of datification (the transformation of all human and non-human action into measurable data that can be turned into value). For example, Datos Contra el Feminicidio and Feminicidio.net offer maps and data analysis on femicide in various parts of the world.
The potential for algorithmic empowerment offered by these projects removes barriers to equality by improving the conditions that enable women to solve problems, determine how data is collected and used, and exercise power.
Birth and evolution of a concept
In 2015, Citizen Media Meets Big Data: The Rise of Data Activism was published; in it, data activism was coined and defined for the first time as a concept based on practices observed in activists who engage politically with data infrastructure. Data infrastructure includes the data, software, hardware and processes needed to turn data into value. Later, Data Activism and Social Change (London: Palgrave) and Data Activism and Social Change: Alliances, Maps, Platforms and Action for a Better World (Madrid: Dykinson) developed analytical frameworks based on real cases that offer ways to analyse other cases.
Accompanying the varied practices that exist within data activism, its study is creating spaces for feminist and post-colonialist research on the consequences of datification. Whereas the chroniclers of history (mainly male sources) defined technology in relation to the value of its products, feminist data studies consider women as users and designers of algorithmic systems and seek to use data for equality, moving away from capitalist exploitation and its structures of domination.
Data activism is now an established concept in social science. For example, Google Scholar offers more than 2,000 results on "data activism". Several researchers use it as a perspective to analyse various issues. For example, Rajão and Jarke explore environmental activism in Brazil; Gezgin studies critical citizenship and its use of data infrastructure; Lehtiniemi and Haapoja explore data agency and citizen participation; and Scott examines the need for platform users to develop digital surveillance and care for their personal data.
At the heart of these concerns is the concept of data agency, which refers to people not only being aware of the value of their data, but also exercising control over it, determining how it is used and shared. It could be defined as actions and practices related to data infrastructure based on individual and collective reflection and interest. That is, while liking a post would not be considered an action with a high degree of data agency, participating in a hackathon (a collective event in which a computer programme is improved or created) would be. Data agency is based on data literacy: the degree of knowledge, access to data and data tools, and opportunities for data literacy that people have. Data activism is not possible without data agency.
In the rapidly evolving landscape of the platform economy, the convergence of data activism, digital rights and data agency has become crucial. Data activism, driven by a growing awareness of the potential misuse of personal data, encourages individuals and collectives to use digital technology for social change, as well as to advocate for greater transparency and accountability on the part of tech giants. As more and more data generation and the use of algorithms shape our lives in areas such as education, employment, social services and health, data activism emerges as a necessity and a right, rather than an option.
____________________________________________________________________
Content prepared by Miren Gutiérrez, PhD and researcher at the University of Deusto, expert in data activism, data justice, data literacy and gender disinformation.
The contents and views reflected in this publication are the sole responsibility of its author.
In the era of data, we face the challenge of a scarcity of valuable data for building new digital products and services. Although we live in a time when data is everywhere, we often struggle to access quality data that allows us to understand processes or systems from a data-driven perspective. The lack of availability, fragmentation, security, and privacy are just some of the reasons that hinder access to real data.
However, synthetic data has emerged as a promising solution to this problem. Synthetic data is artificially created information that mimics the characteristics and distributions of real data, without containing personal or sensitive information. This data is generated using algorithms and techniques that preserve the structure and statistical properties of the original data.
Synthetic data is useful in various situations where the availability of real data is limited or privacy needs to be protected. It has applications in scientific research, software and system testing, and training artificial intelligence models. It enables researchers to explore new approaches without accessing sensitive data, developers to test applications without exposing real data, and AI experts to train models without the need to collect all the real-world data, which is sometimes simply impossible to capture within reasonable time and cost.
There are different methods for generating synthetic data, such as resampling, probabilistic and generative modeling, and perturbation and masking methods. Each method has its advantages and challenges, but overall, synthetic data offers a secure and reliable alternative for analysis, experimentation, and AI model training.
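As a minimal illustration of the probabilistic modelling approach mentioned above (a sketch under simplifying assumptions, not a production-grade generator), one can fit the mean and standard deviation of a real numeric column and then sample new synthetic values from that fitted distribution. The input data here is invented for the example.

```python
import random
import statistics

# Illustrative "real" data: customer ages (invented values).
real_ages = [23, 35, 31, 40, 28, 52, 47, 33, 29, 38, 44, 26]

def synthesize_normal(values, n, seed=42):
    """Sample n synthetic values from a normal distribution fitted to `values`.
    This preserves the mean and spread of the original column, but not
    higher-order structure or correlations with other columns."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

synthetic_ages = synthesize_normal(real_ages, n=1000)

# The synthetic sample should roughly reproduce the real mean.
print(round(statistics.mean(real_ages), 1))
print(round(statistics.mean(synthetic_ages), 1))
```

More sophisticated tools model joint distributions across many columns, but the principle is the same: learn statistical properties from real data, then sample fresh records that contain no real individual's information.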
It is important to highlight that synthetic data provides a viable way to overcome limitations in accessing real data and to address privacy and security concerns, allowing for testing, algorithm training, and application development without exposing confidential information. However, it is crucial to ensure the quality and fidelity of synthetic data through rigorous evaluations and comparisons with real data.
In this report, we provide an introductory overview of the discipline of synthetic data, illustrating some valuable use cases for different types of synthetic data that can be generated. Autonomous vehicles, DNA sequencing, and quality controls in production chains are just a few of the cases detailed in this report. Furthermore, we highlight the use of the open-source software SDV (Synthetic Data Vault), developed in the academic environment of MIT, which utilizes machine learning algorithms to create tabular synthetic data that imitates the properties and distributions of real data. We present a practical example in a Google Colab environment to generate synthetic data about fictional customers hosted in a fictional hotel. We follow a workflow that involves preparing real data and metadata, training the synthesizer, and generating synthetic data based on the learned patterns. Additionally, we apply anonymization techniques to protect sensitive data and evaluate the quality of the generated synthetic data.
In summary, synthetic data is a powerful tool in the data era, as it allows us to overcome the scarcity and lack of availability of valuable data. With its ability to mimic real data without compromising privacy, synthetic data has the potential to transform the way we develop AI projects and conduct analysis. As we progress in this new era, synthetic data is likely to play an increasingly important role in generating new digital products and services.
If you want to know more about the content of this report, you can watch the interview with its author.

Below, you can download the full report, the executive summary and a presentation-summary.
Aspects as relevant to our society as environmental sustainability, climate change mitigation or energy security have led to the energy transition taking on a very important role in the daily lives of nations, private and public organisations, and even in our daily lives as citizens of the world. The energy transition refers to the transformation of our energy production and consumption patterns towards less dependence on fossil fuels through low or zero carbon sources, such as renewable sources.
The measures needed to achieve a real transition are far-reaching and therefore complex. In this process, open data initiatives can contribute enormously by facilitating public awareness, improving the standardisation of metrics and mechanisms to measure the impact of measures taken to mitigate climate change globally, promoting the transparency of governments and companies in terms of CO2 emission reductions, or increasing citizen and scientific participation in the process for the creation of new digital solutions, as well as the advancement of knowledge and innovation.
What initiatives are providing guidance?
The best way to understand how open data helps us to observe both the effects of high CO2 emissions and the impact of the different measures taken by all kinds of actors in favour of the energy transition is by looking at real examples.
The Energy Institute (EI), an organisation dedicated to accelerating the energy transition, publishes its annual Statistical Review of World Energy, which in its latest version includes up to 80 datasets, some dating back as far as 1965, describing the behaviour of different energy sources as well as the use of key minerals in the transition to sustainability. Using its own online reporting tool to plot the variables we want to analyse, we can see how, despite the exponential growth of renewable energy generation in recent years (figure 1), there is still an increasing trend in CO2 emissions (figure 2), although not as drastic as in the first decade of the 2000s.

Figure 1: Evolution of global renewable generation in TWh.
Source: Energy Institute Statistical Review 2023

Figure 2: Evolution of global CO2 emissions in Mt CO2.
Source: Energy Institute Statistical Review 2023
Another international energy transition driver that offers an interesting catalogue of data is the International Energy Agency (IEA). Here we can find more than 70 datasets, not all of them open without subscription, which include both historical energy data and future projections aimed at reaching the Net Zero 2050 targets. The following example is taken from their library of graphical displays, in particular the expected evolution of energy generation needed to reach the Net Zero targets in 2050. In figure 3 we can examine how, in order to achieve these targets, two main processes must occur simultaneously: reducing total annual energy demand and progressively moving to generation sources with lower CO2 emissions.

Figure 3: Energy generation 2020-2050 to achieve Net Zero emissions targets, in exajoules (EJ).
Source: IEA, Total energy supply by source in the Net Zero Scenario, 2022-2050, IEA, Paris, IEA. Licence: CC BY 4.0
To analyse in more detail how these two processes must happen in order to achieve the Net Zero objectives, the IEA offers another very relevant visualisation (figure 4). In it, we can see how, in order to reduce total annual energy demand, accelerated progress is needed in the decade 2025-2035, thanks to measures such as electrification, technical improvements in the efficiency of energy systems and demand reduction. In this way, a reduction of close to 100 EJ per year should be achieved by 2035 and then maintained throughout the rest of the period analysed. To grasp the significance of these measures, taking as a reference the average electricity consumption of Spanish households (some 3,500 kWh/year), the desired annual reduction would be equivalent to the yearly consumption of some 7,937,000,000 households or, in other words, to the electricity that all Spanish households together would consume in 418 years.
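The household equivalence above can be checked with a few lines of arithmetic. The figure of 3,500 kWh per household per year comes from the text; the figure of roughly 19 million Spanish households used for the 418-year comparison is an assumed round number for illustration.

```python
EJ = 1e18    # joules in one exajoule
KWH = 3.6e6  # joules in one kilowatt-hour

annual_reduction_j = 100 * EJ  # target demand reduction: ~100 EJ per year
household_kwh = 3_500          # average Spanish household consumption (kWh/year)
household_j = household_kwh * KWH

# How many average households consume 100 EJ in a year?
equivalent_households = annual_reduction_j / household_j
print(f"{equivalent_households:,.0f} households")  # ≈ 7.94 billion

# Spread over Spain's housing stock (assumed ~19 million households):
spanish_households = 19e6
years = equivalent_households / spanish_households
print(f"{years:.0f} years")  # ≈ 418
```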
With respect to the transition to lower-emission sources, we can see in this figure how solar energy is expected to lead growth, ahead of wind energy, while unabated coal (energy from burning coal without CO2 capture systems) is the source whose use is expected to fall the most.

Figure 4: Changes in energy generation 2020-2050 to achieve Net Zero emissions targets, in exajoules (EJ).
Source: IEA, Changes in total energy supply by source in the Net Zero Scenario, 2022-2050, IEA, Paris, IEA. Licence: CC BY 4.0
Other interesting open data initiatives from an energy transition perspective are the catalogues of the European Commission (more than 1.5 million datasets) and of the Spanish Government through datos.gob.es (more than 70 thousand datasets). Both provide open datasets on topics such as environment, energy or transport.
In both portals, we can find a wide variety of information, such as energy consumption of cities and companies, authorised projects for the construction of renewable generation facilities or the evolution of hydrocarbon prices.
Finally, the REData initiative of Red Eléctrica Española (REE) offers a data area with a wide range of information related to the Spanish electricity system, including information on electricity generation, markets and the daily behaviour of the system.

Figure 5: Sections of information provided from REData
Source: El sistema eléctrico: Guía de uso de REData, November 2022. Red Eléctrica Española.
The website also offers an interactive viewer for consulting and downloading data, as shown below for electricity generation, as well as a programmatic interface (API - Application Programming Interface) for consulting the data repository provided by this entity.
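As a sketch of how such a programmatic query might look, the snippet below builds a request URL following the endpoint pattern and parameters of REE's public REData API; the exact category and widget names used here are illustrative and should be checked against the official API documentation before use.

```python
from urllib.parse import urlencode

# Base pattern of the public REData API: /{lang}/datos/{category}/{widget}
BASE = "https://apidatos.ree.es/es/datos"

def redata_url(category, widget, start, end, time_trunc="day"):
    """Build a REData query URL for a date range and time granularity."""
    params = urlencode({
        "start_date": start,
        "end_date": end,
        "time_trunc": time_trunc,
    })
    return f"{BASE}/{category}/{widget}?{params}"

# Example: daily electricity demand for January 2023 (illustrative endpoint).
url = redata_url("demanda", "evolucion", "2023-01-01T00:00", "2023-01-31T23:59")
print(url)
```

The resulting URL can then be fetched with any HTTP client; the API returns the requested series as JSON.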

Figure 6: REE REData Platform
Source: https://www.ree.es/es/datos/aldia
What conclusions can we draw from this movement?
As we have been able to observe, the enormous concern about the energy transition has motivated multiple organisations of different types to make data openly available for analysis and use by other organisations and the general public. Entities as varied as the Energy Institute, the International Energy Agency, the European Commission, the Spanish Government and Red Eléctrica Española publish valuable information through their data portals in search of greater transparency and awareness.
In this short article we have been able to examine how these data are of great help to better understand the historical evolution of CO2 emissions, the installed wind power capacity and the expectations of energy demand on the path to the Net Zero targets. Open data is a very good tool for improving the understanding of the need for and depth of the energy transition, as well as the progress of the measures progressively being taken by multiple entities around the world, and we expect to see an increasing number of initiatives along these lines.
Content prepared by Juan Benavente, senior industrial engineer and expert in technologies linked to the data economy. The contents and points of view reflected in this publication are the sole responsibility of the author.
The Canary Islands Statistics Institute (ISTAC) has added more than 500 semantic assets and more than 2100 statistical cubes to its catalogue.
This vast amount of information represents decades of work by the ISTAC in standardisation and adaptation to leading international standards, enabling better sharing of data and metadata between national and international information producers and consumers.
The increase in datasets not only quantitatively improves the directory at datos.canarias.es and datos.gob.es, but also broadens the uses it offers due to the type of information added.
New semantic assets
Semantic resources, unlike statistical resources, do not present measurable numerical data such as unemployment figures or GDP; instead, they provide homogeneity and reproducibility.
These assets represent a step forward in interoperability, as provided for both at national level with the National Interoperability Scheme (Article 10, semantic assets) and at European level with the European Interoperability Framework (Article 3.4, semantic interoperability). Both documents outline the need for and value of using common resources for information exchange, a maxim that the Canary Islands Government is pursuing and implementing transversally. These semantic assets are already being used in the forms of the electronic headquarters, and it is expected that in the future they will be the semantic assets used by the entire Canary Islands Government.
Specifically in this data load there are 4 types of semantic assets:
- Classifications (408 loaded): Lists of codes that are used to represent the concepts associated with variables or categories that are part of standardised datasets, such as the National Classification of Economic Activities (CNAE), country classifications such as M49, or gender and age classifications.
- Concept outlines (115 uploaded): Concepts are the definitions of the variables into which the data are disaggregated and which are finally represented by one or more classifications. They can be cross-sectional such as "Age", "Place of birth" and "Business activity" or specific to each statistical operation such as "Type of household chores" or "Consumer confidence index".
- Topic outlines (2 uploaded): They incorporate lists of topics that may correspond to the thematic classification of statistical operations or to the INSPIRE topic register.
- Schemes of organisations (6 uploaded): This includes outlines of entities such as organisational units, universities, maintaining agencies or data providers.
All these types of resources are part of the international SDMX (Statistical Data and Metadata Exchange) standard, which is used for the exchange of statistical data and metadata. SDMX provides a common format and structure to facilitate interoperability between the different organisations that produce, publish and use statistical data.
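To make the idea of a classification concrete, a code list of the kind SDMX standardises can be sketched as a simple mapping from stable codes to human-readable labels. The codes below are illustrative, not taken from an actual ISTAC code list.

```python
# Illustrative code list in the spirit of an SDMX classification:
# stable codes on the left, human-readable labels on the right.
sex_codelist = {
    "_T": "Total",
    "M": "Male",
    "F": "Female",
}

def decode(codelist, code):
    """Resolve a code to its label, failing loudly on unknown codes."""
    if code not in codelist:
        raise KeyError(f"Code {code!r} not in code list")
    return codelist[code]

print(decode(sex_codelist, "F"))  # Female
```

Sharing such code lists across producers is what makes datasets disaggregated by the same variable directly comparable.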

On September 14th, the II National Open Data Meeting took place under the theme "Urgent Call to Action for the Environment" at the Pignatelli building, the headquarters of the Government of Zaragoza. The event, held in person in the Crown Room, allowed attendees to participate and exchange ideas in real-time.
The event continued the tradition started in 2022 in Barcelona, establishing itself as one of the main gatherings in Spain in the field of public sector data reuse. María Ángeles Rincón, Director-General of Electronic Administration and Corporate Applications of the Government of Aragon, inaugurated the event, emphasizing the importance of open data in terms of transparency, reuse, and economic and social development. She highlighted that high-quality, neutral data available on open data portals are crucial for driving artificial intelligence and understanding our environment.
The day continued with a presentation by María Jesús Fernández Ruiz, Head of the Technical Office of Open Government of the City of Zaragoza, titled "Why Implement Data Governance in Our Institutions?" In her presentation, she stressed the need to manage data as a strategic asset and a public good, integrating them into governance and management policies. She also emphasized the importance of interoperability and the reuse of large volumes of data to turn them into knowledge, as well as the formation of interdisciplinary teams for data management and analysis.
The event included three panel discussions with the participation of professionals, experts, and scientists related to the management, publication, and use of open data, focusing on environmental data.
The first panel discussion highlighted the value of open data for understanding the environment we live in. In this video, you can revisit the panel discussion moderated by Borja Carvajal of the Diputación de Castellón: II National Open Data Meeting, Zaragoza, September 14, 2023 (morning session).
Secondly, Magda Lorente from the Diputación de Barcelona moderated the discussion "Open Data, Algorithms, and Artificial Intelligence: How to Combat Environmental Disinformation?" This second panel featured professionals from data journalism, science, and the public sector who discussed the opportunities and challenges of disseminating environmental information through open data.

Conclusions from Challenges 1 and 2 on Open Data: Interadministrative Collaboration and Professional Competencies
After the second panel discussion, the conclusions of Challenges 1 and 2 on open data were presented, two lines of work defined at the I National Open Data Meeting held in 2022.
At last year's meeting, several challenges were identified in the field of open data. The first of them (Challenge 1) involved promoting collaboration between administrations to facilitate the opening of data sets and generate valuable exchanges for both parties. To address this challenge, work was carried out over the year to establish the appropriate lines of action.
You can download the document summarizing the conclusions of Challenge 1 here: https://opendata.aragon.es/documents/90029301/115623550/Reto_1_encuentro_datos_Reto_1.pptx
On the other hand, Challenge 2 aimed to identify the need to define professional roles, as well as essential knowledge and competencies that public employees who take on tasks related to data opening should have.
To address this second challenge, a working group of professionals with expertise in the sector was also established, all pursuing the same goal: to promote the dissemination of open data and thus improve public policies by involving citizens and businesses throughout the opening process.
To resolve the key issues raised, the group addressed two related lines of work:
- Defining competencies and basic knowledge in the field of open data for different public professional profiles involved in data opening and use.
- Identifying and compiling existing training materials and pathways to provide workers with a starting point.
Key Professional Competencies for Data Opening
To specify the set of actions and attitudes that a worker needs in order to carry out their work with open data, it was considered necessary to identify the main profiles needed in the administration, as well as the specific needs of each position. In this regard, the working group based its analysis on the following roles:
- Open Data Manager role: responsible for technical leadership in promoting open data policies, data policy definition, and data model activities.
- Technical role in data opening (IT profile): carries out execution activities related to system management, data extraction processes, data cleaning, and similar tasks.
- Functional role in data opening (service technician): carries out execution activities related to selecting the data to be published, data quality, promotion of open data, visualization, and data analytics.
- Use of data by public workers: performs activities involving data use for decision-making, basic data analytics, among others.
Analyzing the functions of each of these roles, the team has established the competencies and knowledge necessary for performing the functions defined in each of them.
You can download the document with conclusions about professional capabilities for data opening here: https://opendata.aragon.es/documents/90029301/115623550/reto+2_+trabajadores+p%C3%BAblicos+capacitados+para+el+uso+y+la+apertura+de+datos.docx
Training Materials and Pathways on Open Data
In line with the second line of work, the team of professionals has developed an inventory of online training resources in the field of open data, which can be accessed for free. This list includes courses and materials in Spanish, co-official languages, and English, covering topics such as open data, their processing, analysis, and application.
You can download the document listing training materials, the result of the work of Challenge 2's group, here: https://opendata.aragon.es/datos/catalogo/dataset/listado-de-materiales-formativos-sobre-datos-abiertos-fruto-del-trabajo-del-grupo-del-reto-2
In conclusion, the working group considered that the progress made during this first year marks a solid start, which will serve as a basis for administrations to design training and development plans aimed at the different roles involved in data opening. This, in turn, will contribute to strengthening and improving data policies in these entities.
Furthermore, it was noted that the effort invested in these months to identify training resources will be key in facilitating the acquisition of essential knowledge by public workers. On the other hand, it has been highlighted that there is a large number of free and open training resources with a basic level of specialization. However, the need to develop more advanced materials to train the professionals that the administration needs today has been identified.
The third panel discussion, moderated by Vicente Rubio from the Diputación de Castellón, focused on public policies based on data to improve the living environment of its inhabitants.
At the end of the meeting, it was emphasized how important it is to continue working on and shaping different challenges related to the functions and services of open data portals and data opening processes. In the III National Open Data Meeting to be held next year in the Province of Castellón, progress in this area will be presented.
From September 25th to 27th, Madrid will host the fourth edition of the Open Science Fair, an international event on open science that will bring together experts from all over the world with the aim of identifying common practices, bringing positions closer together and, in short, improving synergies between the different communities and services working in this field.
This event is an initiative of OpenAIRE, an organisation that aims to make academic communication more open and transparent. This edition of the Open Science Fair is co-organised by the Spanish Foundation for Science and Technology (FECYT), which reports to the Ministry of Science and Innovation, and is one of the events sponsored by the Spanish Presidency of the Council of the European Union.
The current state of open science
Science is no longer the preserve of scientists. Researchers, institutions, funding agencies and scientific publishers are part of an ecosystem that carries out work with a growing resonance with the public and a greater impact on society. In addition, it is becoming increasingly common for research groups to open up to collaborations with institutions around the world. Key to making this collaboration possible is the availability of data that is open and available for reuse in research.
However, to enable international and interdisciplinary research to move forward, it is necessary to ensure interoperability between communities and services, while maintaining the capacity to support different workflows and knowledge systems.
The objectives and programme of the Open Science Fair
In this context, the Open Science Fair 2023 is being held with the aim of bringing together and empowering open science communities and services, identifying common practices related to open science to analyse the most suitable synergies and, ultimately, sharing experiences developed in different parts of the world.
The event has an interesting programme that includes keynote speeches from relevant speakers, round tables, workshops, and training sessions, as well as a demonstration session. Attendees will be able to share experiences and exchange views, which will help define the most efficient ways for communities to work together and draw up tailor-made roadmaps for the implementation of open science.
This fourth edition of the Open Science Fair will focus on 'Open Science for Future Generations'. The main themes it will address, as highlighted on the event's website, are:
- Progress and reform of research evaluation and open science. Connections, barriers and the way forward.
- Impact of artificial intelligence on open science and impact of open science on artificial intelligence.
- Innovation and disruption in academic publishing.
- FAIR data, software and hardware.
- Openness in research and education.
- Public engagement and citizen science.
Open science and artificial intelligence
Artificial intelligence is gaining momentum in academia through data analysis. By analysing large amounts of data, researchers can identify patterns and correlations that would be difficult to detect through other methods. The use of open data in open science opens up an exciting and promising future, but it is important to ensure that the benefits of artificial intelligence are available to all in a fair and equitable way.
Given its high relevance, the Open Science Fair will host two keynote lectures and a panel discussion on 'AI with and for open science'. The combination of open data and artificial intelligence is one of the areas with the greatest potential for significant scientific breakthroughs and, as such, will have its place at the event. The session will look from three perspectives (ethics, infrastructure and algorithms) at how artificial intelligence supports researchers and what the key ingredients are for open infrastructures to make this happen.
The programme of the Open Science Fair 2023 also includes the presentation of a demo of a tool for mapping the research activities of the European University of Technology EUt+ by leveraging open data and natural language processing. This project includes the development of a set of data-driven tools. Demo attendees will be able to see the developed platform that integrates data from public repositories, such as European research and innovation projects from CORDIS, patents from the European Patent Office database and scientific publications from OpenAIRE. National and regional project data have also been collected from different repositories, processed and made publicly available.
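The core of a platform like the EUt+ demo is linking records about the same research activity across heterogeneous sources. The sketch below is purely illustrative, not the project's actual code: the project name, field names and figures are invented, and it assumes records can be joined on a shared project identifier.

```python
# Hypothetical records, as a platform might collect them from
# CORDIS (projects), the EPO database (patents) and OpenAIRE (publications).
cordis = [{"project": "TECH4EU", "budget_eur": 2_000_000}]
epo = [{"project": "TECH4EU", "patents": 3}]
openaire = [{"project": "TECH4EU", "publications": 12}]

def integrate(*sources):
    """Merge records from several open data sources into one profile per project."""
    merged = {}
    for source in sources:
        for record in source:
            # Records sharing a project id are folded into a single profile.
            merged.setdefault(record["project"], {}).update(record)
    return merged

print(integrate(cordis, epo, openaire))
```

In practice, record linkage across real repositories is harder than a key join (identifiers differ between sources, which is where the natural language processing mentioned above comes in), but the integration principle is the same.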
These are just some of the events that will take place within the Open Science Fair, but the full programme includes a wide range of events to explore multidisciplinary knowledge and research evaluation.
Although registration for the event is now closed, you can keep up to date with all the latest news through the hashtag #OSFAIR2023 on Twitter, LinkedIn and Facebook, as well as on the event's website.
In addition, on the website of datos.gob.es and on our social networks you can keep up to date on the most important events in the field of open data, such as those that will take place during this autumn.
Mark them on your calendar, make a note in your agenda, or set reminders on your mobile to not forget about this list of events on data and open government taking place this autumn. This time of year brings plenty of opportunities to learn about technological innovation and discuss the transformative power of open data in society.
From practical workshops to congresses and keynote speeches, in this post, we present some of the standout events happening in October and November. Sign up before the slots fill up!
Data spaces in the EU: synergies between data protection and data spaces
At the beginning of October, the Spanish Data Protection Agency (AEPD) and the European Union Agency for Cybersecurity (ENISA) will hold an event in English to address the challenges and opportunities of implementing the provisions of the General Data Protection Regulation (GDPR) in EU data spaces.
During the conference, best practices of existing EU data spaces will be reviewed, the interaction between EU legislation and policies on data exchange will be analysed, and data protection engineering will be presented as an integral element of the architecture of data spaces, along with its legal implications.
- Who is it aimed at? This event promises to be a platform for knowledge and collaboration of interest to anyone interested in the future of data in the region.
- When and where is it? On October 2nd in Madrid from 9:30 AM to 6:00 PM and available for streaming with prior registration until 2:45 PM.
- Registration: link no longer available
SEMIC Conference 'Interoperable Europe in the age of AI'
Also in October, the annual SEMIC conference organised by the European Commission in collaboration with the Spanish Presidency of the Council of the European Union returns. This year's event takes place in Madrid and will explore how interoperability in the public sector and artificial intelligence can benefit each other through concrete use cases and successful projects.
Sessions will address the latest trends in data spaces, digital governance, data quality assurance and generative artificial intelligence, among others. In addition, a proposal for an Interoperable Europe Act will be presented.
- Who is it aimed at? Public or private sector professionals working with data, governance and/or technology. Last year's edition attracted more than 1,000 professionals from 60 countries.
- When and where is it? The conference will be held on October 18th at the Hotel Riu Plaza in Madrid and can also be followed online. Pre-conference workshops will take place on October 17th at the National Institute of Public Administration.
- Registration: https://semic2023.eu/registration/

Data and AI in action: sustainable impact and future realities
From October 25th to 27th, an event on the value of data in artificial intelligence is taking place in Valencia, with the collaboration of the European Commission and the Spanish Presidency of the Council of the European Union, among others.
Over the course of the three days, approximately one-hour presentations will be given on a variety of topics such as sectoral data spaces, the data economy and cybersecurity.
- Who is it aimed at? Members of the European Big Data Value Forum will receive a discounted entrance fee and associate members receive three tickets per organisation. The ticket price varies from 120 to 370 euros.
- When and where is it? It will take place on October 25th, 26th and 27th in Valencia.
- Registration: bipeek.
European Webinars: open data for research, regional growth with open data and data spaces
The European Open Data Portal regularly organises webinars on open data projects and technologies. At datos.gob.es, we cover these sessions in summary publications and on social media. In addition, once each event is over, the materials used in the session are published. The October events calendar is now available on the portal's website. Sign up to receive a reminder of the webinar and, subsequently, the materials used.
Data spaces: Discovering block architecture
- When? On October 6th from 10:00 AM to 11:30 AM
- Registration: data.europa academy 'Data spaces: Discovering the building blocks' (clickmeeting.com)
How to use open data in your research?
- When? On October 19th from 10:00 AM to 11:30 AM
- Registration: How to use open data for your research (clickmeeting.com)
Open Data Maturity Report: The in-depth impact dimension
- When? On October 27th from 10:00 AM to 11:30 AM
- Registration: data.europa academy 'Open Data Maturity 2022: Diving deeper into the impact dimension' (clickmeeting.com)
ODI SUMMIT 2023: Changes in data
November starts with an Open Data Institute (ODI) event that poses the following question by way of introduction: how does data impact the development of technology to address global challenges? For society to benefit from innovative technologies such as artificial intelligence, data is needed.
This year's ODI SUMMIT features speakers of the calibre of World Wide Web founder Tim Berners-Lee, Women Income Network co-founder Alicia Mbalire and ODI CEO Louise Burke. It is a free event with prior registration.
- Who is it aimed at? Teachers, students, industry professionals and researchers are welcome to attend the event.
- When and where is it? It is on November 7th, online.
- Entry: Form (hsforms.com)
These are some of the events scheduled for this autumn. In any case, don't forget to follow us on social media so you don't miss any news about innovation and open data. We are on Twitter and LinkedIn; you can also write to us at dinamizacion@datos.gob.es if you want us to add another event to the list or if you need more information.
The UNA Women application offers a personalized dashboard with training options for young women according to their socioeconomic circumstances.
The main objective of the project is to contribute to reducing the gender employment gap. For this purpose, the company ITER IDEA has used more than 6 million lines of data processed from different sources, such as data.europa.eu, Eurostat, Censis, Istat (Italian National Institute of Statistics) or NUMBEO.
In terms of user experience, the application first asks the user to fill in a form to find out key data about the person seeking information: age, education or professional sector, training budget, etc. Once the data has been collected, the app offers an interactive map with all the training options in Europe. Each city has a panel that shows interesting data about studies, cost of living in the city, etc.
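The form-then-dashboard flow described above amounts to filtering a catalogue of training options by the user's answers. The sketch below is a minimal, hypothetical illustration of that matching step; the catalogue entries, field names and figures are all invented, not taken from the UNA Women application itself.

```python
from dataclasses import dataclass

@dataclass
class TrainingOption:
    city: str
    sector: str
    cost_eur: int

# Hypothetical catalogue; the real app aggregates millions of rows from
# sources such as data.europa.eu, Eurostat, Censis, Istat and NUMBEO.
CATALOGUE = [
    TrainingOption("Madrid", "IT", 1200),
    TrainingOption("Milan", "Design", 900),
    TrainingOption("Lisbon", "IT", 700),
]

def match_options(sector, budget_eur):
    """Return the catalogue entries compatible with the user's form answers."""
    return [o for o in CATALOGUE
            if o.sector == sector and o.cost_eur <= budget_eur]

print(match_options("IT", 1000))  # only the Lisbon option fits the budget
```

Each matched option would then be placed on the interactive map, with its city panel populated from the cost-of-living and study data the application has collected.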