There is no doubt that digital skills training is necessary today. Basic digital skills are essential for interacting in a society where technology already plays a cross-cutting role. In particular, it is important to understand the basic technologies for working with data.
In this context, public sector workers must also keep themselves constantly updated. Training in this area is key to optimising processes, ensuring information security and strengthening trust in institutions.
In this post, we identify digital skills related to open data, aimed at both publishing and using it. We not only identify the professional competencies that public employees working with open data must acquire and maintain, but also compile a series of training resources available to them.
Professional competencies for working with data
A working group was set up at the 2024 National Open Data Gathering with one objective: to identify the digital competencies required of public administration professionals working with open data. Beyond the conclusions of this event of national relevance, the working group defined the profiles and roles needed for opening data, gathering information on their functions and on the skills and knowledge required. The main roles identified were:
- Responsible role: holds technical responsibility for promoting open data policies and organises activities to define policies and data models. Some of the skills required are:
- Leadership in promoting strategies to drive data openness.
- Driving the data strategy so that data is opened with purpose.
- Understand the regulatory framework related to data in order to act within the law throughout the data lifecycle.
- Encourage the use of tools and processes for data management.
- Ability to generate synergies in order to reach a consensus on cross-cutting instructions for the entire organisation.
- Technical role of data entry technician (ICT profile): carries out implementation activities more closely linked to the management of systems, extraction processes, data cleansing, etc. This profile must have knowledge of, for example:
- How to structure the dataset, the metadata vocabulary, data quality, strategy to follow...
- Ability to analyse a dataset and identify cleansing and refinement processes quickly and intuitively.
- Generate data visualisations, connecting databases of different formats and origins to obtain dynamic and interactive graphs, indicators and maps.
- Master the functionalities of the platform, i.e. know how to apply technological solutions for open data management or know techniques and strategies to access, extract and integrate data from different platforms.
- Open data functional role (technician of a service): executes activities more related to the selection of data to be published, quality, promotion of open data, visualisation, data analytics, etc. For example:
- Handling visualisation and dynamisation tools.
- Knowing the data economy and the full landscape of data-related information (generation by public administrations, open data, infomediaries, reuse of public information, Big Data, data-driven approaches, roles involved, etc.).
- To know and apply the ethical and personal data protection aspects that apply to the opening of data.
- Data use by public workers: this profile carries out activities involving the use of data for decision-making and basic data analytics, among others. To do so, it must have these competences:
- Navigation, search and filtering of data.
- Data assessment.
- Data storage and export.
- Data analysis and exploitation.
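The competences listed for data-using public workers (navigation and filtering, assessment, analysis, storage and export) can be sketched in a few lines of standard Python. This is a minimal illustration using made-up sample data, not an excerpt from any real open dataset:

```python
import csv
import io

# Hypothetical sample of an open dataset, inlined for illustration
raw = """municipality,year,recycling_rate
Alpha,2023,41.2
Beta,2023,
Gamma,2023,38.7
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Navigation, search and filtering: keep only the year of interest
rows_2023 = [r for r in rows if r["year"] == "2023"]

# Data assessment: flag records with missing values
missing = [r["municipality"] for r in rows_2023 if not r["recycling_rate"]]

# Data analysis: average the values that are present
values = [float(r["recycling_rate"]) for r in rows_2023 if r["recycling_rate"]]
average = sum(values) / len(values)

# Storage and export: write the cleaned subset back out as CSV
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["municipality", "year", "recycling_rate"])
writer.writeheader()
writer.writerows(r for r in rows_2023 if r["recycling_rate"])
```

Real work would of course involve larger files and dedicated tools, but the same four steps (filter, assess, analyse, export) apply regardless of the tooling.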
In addition, as part of this challenge to increase capacities for open data, a list of free training courses and guides on open data and data analytics was developed. Below we compile some of those available online and in open format.
| Institution | Resources | Link | Level |
|---|---|---|---|
| Knight Center for Journalism in the Americas | Data journalism and visualisation with free tools | https://journalismcourses.org/es/course/dataviz/ | Beginner |
| Data Europa Academy | Introduction to open data | https://data.europa.eu/en/academy/introducing-open-data | Beginner |
| Data Europa Academy | Understanding the legal side of open data | https://data.europa.eu/en/academy/understanding-legal-side-open-data | Beginner |
| Data Europa Academy | Improve the quality of open data and metadata | https://data.europa.eu/en/academy/improving-open-data-and-metadata-quality | Advanced |
| Data Europa Academy | Measuring success in open data initiatives | https://data.europa.eu/en/training/elearning/measuring-success-open-data-initiatives | Advanced |
| Escuela de Datos | Data Pipeline Course | https://escueladedatos.online/curso/curso-tuberia-de-datos-data-pipeline/ | Intermediate |
| FEMP | Strategic guidance for its implementation - Minimum data sets to be published | https://redtransparenciayparticipacion.es/download/guia-estrategica-para-su-puesta-en-marcha-conjuntos-de-datos-minimos-a-publicar/ | Intermediate |
| Datos.gob.es | Methodological guidelines for data opening | /es/conocimiento/pautas-metodologicas-para-la-apertura-de-datos | Beginner |
| Datos.gob.es | Practical guide to publishing open data using APIs | /es/conocimiento/guia-practica-para-la-publicacion-de-datos-abiertos-usando-apis | Intermediate |
| Datos.gob.es | Practical guide to publishing spatial data | /es/conocimiento/guia-practica-para-la-publicacion-de-datos-espaciales | Intermediate |
| Junta de Andalucía | Processing datasets with Open Refine | https://www.juntadeandalucia.es/datosabiertos/portal/tutoriales/usar-openrefine.html | Beginner |
Figure 1. Table of training resources (own elaboration). Source: https://encuentrosdatosabiertos.es/wp-content/uploads/2024/05/Reto-2.pdf
INAP's continuing professional development training offer
The Instituto Nacional de Administración Pública (INAP) has a Training Activities Programme for 2025, framed within the INAP Learning Strategy 2025-2028. This training catalogue includes more than 180 activities organised into different learning programmes, which will take place throughout the year with the aim of strengthening the competences of public staff in key areas such as open data management and the use of related technologies.
INAP's 2025 training programme offers a wide range of courses aimed at improving digital skills and open data literacy. Some of the highlighted trainings include:
- Fundamentals and tools of data analysis.
- Introduction to Oracle SQL.
- Open data and re-use of information.
- Data analysis and visualisation with Power BI.
- Blockchain: technical aspects.
- Advanced Python programming.
These courses, aimed at different profiles of public employees, from open data managers to information management technicians, make it possible to acquire knowledge on data extraction, processing and visualisation, as well as on strategies for the opening and reuse of open data in the Public Administration. You can consult the full catalogue here.
Other training references
Some public administrations and entities offer training courses related to open data. For more information on their training offer, please consult each organisation's catalogue of scheduled courses:
- FEMP's Network of Local Entities for Transparency and Citizen Participation: https://redtransparenciayparticipacion.es/.
- Government of Aragon: Aragon Open Data: https://opendata.aragon.es/informacion/eventos-de-datos-abiertos
- School of Public Administration of Catalonia (EAPC): https://eapc.gencat.cat/ca/inici/index.html#googtrans(ca|es
- Diputació de Barcelona: http://aplicacions.diba.cat/gestforma/public/cercador_baf_ens_locals
- Instituto Geográfico Nacional (IGN): https://cursos.cnig.es/
In short, training in digital skills in general, and in open data in particular, is a practice that we recommend at datos.gob.es. Do you need a specific training resource? Write to us in the comments, we'll read you!
As we do every year, the datos.gob.es team wishes you happy holidays. If this Christmas you feel like giving or giving yourself a gift of knowledge, we bring you our traditional Christmas letter with ideas to ask Father Christmas or the Three Wise Men.
We have a selection of books on a variety of topics such as data protection, new developments in AI or the great scientific discoveries of the 20th century. All these recommendations, ranging from essays to novels, will be a sure hit to put under the tree.
Maniac by Benjamin Labatut.
- What is it about? Guided by the figure of John von Neumann, one of the great geniuses of the 20th century, the book covers topics such as the creation of atomic bombs, the Cold War, the birth of the digital universe and the rise of artificial intelligence. The story begins with the tragic suicide of Paul Ehrenfest and progresses through the life of von Neumann, who foreshadowed the arrival of a technological singularity. The book culminates in a confrontation between man and machine in an epic showdown in the game of Go, which serves as a warning about the future of humanity and its creations.
- Who is it aimed at? This science fiction novel is aimed at anyone interested in the history of science, technology and their philosophical and social implications. It is ideal for those who enjoy narratives that combine the thriller with deep reflections on the future of humanity and technological progress. It is also suitable for those looking for a literary work that delves into the limits of thought, reason and artificial intelligence.
Take control of your data, by Alicia Asin.
- What is it about? This book compiles resources to better understand the digital environment in which we live, using practical examples and clear definitions that make it easier for anyone to understand how technologies affect our personal and social lives. It also invites us to be more aware of the consequences of the indiscriminate use of our data, from the digital trail we leave behind or the management of our privacy on social networks, to trading on the dark web. It also warns about the legitimate but sometimes invasive use of our online behaviour by many companies.
- Who is it aimed at? The author of this book is the CEO of the data reuse company Libelium, participated in one of our Encuentros Aporta and is a leading expert on privacy, the appropriate use of data and data spaces, among other topics. In this book, she offers a business perspective through a work aimed at the general public.
Governance, management and quality of artificial intelligence by Mario Geraldo Piattini.
- What is it about? Artificial intelligence is increasingly present in our daily lives and in the digital transformation of companies and public bodies, offering both benefits and potential risks. In order to benefit properly from the advantages of AI and avoid problems it is very important to have ethical, legal and responsible systems in place. This book provides an overview of the main standards and tools for managing and assuring the quality of intelligent systems. To this end, it provides clear examples of best available practices.
- Who is it aimed at? Although anyone can read it, the book provides tools to help companies meet the challenges of AI by creating systems that respect ethical principles and align with engineering best practices.
Nexus, by Yuval Noah Harari.
- What is it about? In this new installment, one of the most popular writers of the moment analyses how information networks have shaped human history, from the Stone Age to the present era. This essay explores the relationship between information, truth, bureaucracy, mythology, wisdom and power, and how different societies have used information to impose order, with both positive and negative consequences. In this context, the author discusses the urgent decisions we must make in the face of current threats, such as the impact of non-human intelligence on our existence.
- Who is it aimed at? It is a mainstream work, i.e. anyone can read it and will most likely enjoy reading it. It is a particularly attractive option for readers seeking to reflect on the role of information in modern society and its implications for the future of humanity, in a context where emerging technologies such as artificial intelligence are challenging our way of life.
Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play by David Foster (second edition 2024)
- What is it about? This practical book dives into the fascinating world of generative deep learning, exploring how machines can create art, music and text. Throughout, Foster guides us through the most innovative architectures, such as VAEs, GANs and diffusion models, explaining how these technologies can transform photographs, generate music and even write text. The book starts with the basics of deep learning and progresses to cutting-edge applications, including image creation with Stable Diffusion, text generation with GPT and music composition with MuseGAN. It is a work that combines technical rigour with artistic creativity.
- Who is it aimed at? This technical manual is intended for machine learning engineers, data scientists and developers who want to enter the field of generative deep learning. It is ideal for those who already have a background in programming and machine learning, and wish to explore how machines can create original content. It will also be valuable for creative professionals interested in understanding how AI can amplify their artistic capabilities. The book strikes the perfect balance between mathematical theory and practical implementation, making complex concepts accessible through concrete examples and working code.
Information is beautiful, by David McCandless.
- What is it about? This visual guide, in English, helps us understand how the world works through striking infographics and data visualisations. This new edition has been completely revised, with more than 20 updates and 20 new visualisations. It presents information in a way that is easy to skim, but also invites further exploration.
- Who is it aimed at? This book is aimed at anyone interested in seeing and understanding information in a different way. It is perfect for those looking for an innovative and visually appealing way to understand the world around us. It is also ideal for those who enjoy exploring data, facts and their interrelationships in an entertaining and accessible way.
Collecting Field Data with QGIS and Mergin Maps, by Kurt Menke and Alexandra Bucha Rasova.
- What is it about? This book teaches you how to master the Mergin Maps platform for collecting, sharing and managing field data using QGIS. The book covers everything from the basics, such as setting up projects in QGIS and conducting field surveys, to advanced workflows for customising projects and managing collaborations. In addition, details on how to create maps, set up survey layers and work with smart forms for data collection are included.
- Who is it aimed at? Although it is a somewhat more technical option than the previous proposals, the book is aimed at new users of Mergin Maps and QGIS. It is also useful for those who are already familiar with these tools and are looking for more advanced workflows.
A terrible greenery by Benjamin Labatut.
- What is it about? This book is a fascinating blend of science and literature, narrating scientific discoveries and their implications, both positive and negative. Through powerful stories, such as the creation of Prussian blue and its connection to chemical warfare, the mathematical explorations of Grothendieck and the struggle between scientists like Schrödinger and Heisenberg, the author, Benjamin Labatut, leads us to explore the limits of science, the follies of knowledge and the unintended consequences of scientific breakthroughs. The work turns science into literature, presenting scientists as complex and human characters.
- Who is it aimed at? The book is aimed at a general audience interested in science, the history of discoveries and the human stories behind them, with a focus on those seeking a literary and in-depth approach to scientific topics. It is ideal for those who enjoy works that explore the complexity of knowledge and its effects on the world.
Designing Better Maps: A Guide for GIS Users, by Cynthia A. Brewer.
- What is it about? It is a guide in English, written by expert cartographer Cynthia A. Brewer, that teaches how to create successful maps using any GIS or illustration tool. Through its 400 full-colour illustrations, the book covers the best cartographic design practices applied to both reference and statistical maps. Topics include map planning, using base maps, managing scale and time, explaining maps, publishing and sharing, using typography and labels, understanding and using colour, and customising symbols.
- Who is it aimed at? This book is intended for all geographic information systems (GIS) users, from beginners to advanced cartographers, who wish to improve their map design skills.
Throughout the post we have included many purchase links, but if you are interested in any of these options, we encourage you to ask for them at your local bookshop, to support small businesses during the festive season. Do you know of any other interesting titles? Write it in the comments or send it to dinamizacion@datos.gob.es. We read you!
Citizen science is consolidating itself as one of the most relevant sources of reference in contemporary research. This is recognised by the Consejo Superior de Investigaciones Científicas (CSIC), which defines citizen science as a methodology and a means for the promotion of scientific culture in which science and citizen participation strategies converge.
We talked some time ago about the importance of citizen science in society. Today, citizen science projects have not only increased in number, diversity and complexity, but have also driven a significant process of reflection on how citizens can actively contribute to the generation of data and knowledge.
To reach this point, programmes such as Horizon 2020, which explicitly recognised citizen participation in science, have played a key role. More specifically, the chapter "Science with and for society" gave an important boost to this type of initiative in Europe, and also in Spain. In fact, as a result of Spanish participation in this programme, as well as in parallel initiatives, Spanish projects have been growing in size and in connections with international initiatives.
This growing interest in citizen science also translates into concrete policies. An example is the current Spanish Strategy for Science, Technology and Innovation (EECTI) for the period 2021-2027, which includes "the social and economic responsibility of R&D&I through the incorporation of citizen science".
In short, citizen science initiatives seek to encourage a more democratic science that responds to the interests of all citizens and generates information that can be reused for the benefit of society. Here are some examples of citizen science projects that help collect data whose reuse can have a positive impact on society:
AtmOOs Academic Project: Education and citizen science on air pollution and mobility.
In this programme, Thigis developed a citizen science pilot on mobility and the environment with pupils from a school in Barcelona's Eixample district. This project, which is already replicable in other schools, consists of collecting data on student mobility patterns in order to analyse issues related to sustainability.
On the AtmOOs Academic website you can visualise the results of all the editions carried out annually since the 2017-2018 academic year, which show information on the vehicles students use to get to class and the emissions generated by school stage.
WildINTEL: Research project on life monitoring in Huelva
The University of Huelva and the Spanish National Research Council (CSIC) are collaborating to build a wildlife monitoring system to obtain essential biodiversity variables. To do this, remote-capture photo-trapping cameras and artificial intelligence are used.
The wildINTEL project focuses on the development of a monitoring system that is scalable and replicable, thus facilitating the efficient collection and management of biodiversity data. This system will incorporate innovative technologies to provide accurate and objective demographic estimates of populations and communities.
The project, which started in December 2023 and will run until December 2026, is expected to provide tools and products to improve the management of biodiversity, not only in the province of Huelva but throughout Europe.
IncluScience-Me: Citizen science in the classroom to promote scientific culture and biodiversity conservation.
This citizen science project combining education and biodiversity arises from the need to address scientific research in schools. To do this, students take on the role of a researcher to tackle a real challenge: to track and identify the mammals that live in their immediate environment to help update a distribution map and, therefore, their conservation.
IncluScience-Me was born at the University of Cordoba and, specifically, in the Research Group on Education and Biodiversity Management (Gesbio), and has been made possible thanks to the participation of the University of Castilla-La Mancha and the Research Institute for Hunting Resources of Ciudad Real (IREC), with the collaboration of the Spanish Foundation for Science and Technology - Ministry of Science, Innovation and Universities.
The Memory of the Herd: Documentary corpus of pastoral life.
This citizen science project, which has been active since July 2023, aims to gather knowledge and experiences from shepherds and retired shepherds about herd management and livestock farming.
The entity responsible for the programme is the Institut Català de Paleoecologia Humana i Evolució Social, although the Museu Etnogràfic de Ripoll, Institució Milà i Fontanals-CSIC, Universitat Autònoma de Barcelona and Universitat Rovira i Virgili also collaborate.
The programme helps to interpret the archaeological record and contributes to preserving knowledge of pastoral practice. It also values the experience and knowledge of older people, helping to end the negative connotation of "old age" in a society that prioritises "youth", so that older people are no longer considered passive subjects but active social subjects.
Plastic Pirates Spain: Study of plastic pollution in European rivers.
This citizen science project, carried out over the last year with young people between 12 and 18 years of age in the communities of Castilla y León and Catalonia, aims to contribute to generating scientific evidence and environmental awareness about plastic waste in rivers.
To this end, groups of young people from different educational centres, associations and youth groups have taken part in sampling campaigns to collect data on the presence of waste and rubbish, mainly plastics and microplastics, on riverbanks and in the water.
In Spain, this project has been coordinated by the BETA Technology Centre of the University of Vic - Central University of Catalonia together with the University of Burgos and the Oxygen Foundation. You can access more information on their website.
These are just some examples of citizen science projects. You can find out more at the Observatory of Citizen Science in Spain, an initiative that brings together a wide range of educational resources, reports and other interesting information on citizen science and its impact in Spain. Do you know of any other projects? Send it to us at dinamizacion@datos.gob.es and we can publicise it through our dissemination channels.
Data literacy has become a crucial issue in the digital age. This concept refers to people's ability to understand how data is accessed, created, analysed, used or reused, and communicated.
We live in a world where data and algorithms influence everyday decisions and the opportunities people have to live well. Their effects can be felt in areas ranging from advertising and access to employment to criminal justice and social welfare. It is therefore essential to understand how data is generated and used.
Data literacy can involve many areas, but we will focus on its relationship with digital rights on the one hand and Artificial Intelligence (AI) on the other. This article proposes to explore the importance of data literacy for citizenship, addressing its implications for the protection of individual and collective rights and the promotion of a more informed and critical society in a technological context where artificial intelligence is becoming increasingly important.
The context of digital rights
More and more studies indicate that effective participation in today's data-driven, algorithm-driven society requires data literacy. Civil rights are increasingly translating into digital rights as our society becomes more dependent on digital technologies and environments. This transformation manifests itself in various ways:
- On the one hand, rights recognised in constitutions and human rights declarations are being explicitly adapted to the digital context. For example, freedom of expression now includes freedom of expression online, and the right to privacy extends to the protection of personal data in digital environments. Moreover, some traditional civil rights are being reinterpreted in the digital context. One example of this is the right to equality and non-discrimination, which now includes protection against algorithmic discrimination and against bias in artificial intelligence systems. Another example is the right to education, which now also extends to the right to digital education. The importance of digital skills in society is recognised in several legal frameworks and documents, both at national and international level, such as the Organic Law 3/2018 on Personal Data Protection and Guarantee of Digital Rights (LOPDGDD) in Spain. Finally, the right of access to the internet is increasingly seen as a fundamental right, similar to access to other basic services.
- On the other hand, rights are emerging that address challenges unique to the digital world, such as the right to be forgotten (in force in the European Union and some other countries that have adopted similar legislation1), which allows individuals to request the removal of personal information available online, under certain conditions. Another example is the right to digital disconnection (in force in several countries, mainly in Europe2), which ensures that workers can disconnect from work devices and communications outside working hours. Similarly, there is a right to net neutrality to ensure equal access to online content without discrimination by service providers, a right that is also established in several countries and regions, although its implementation and scope may vary. The EU has regulations that protect net neutrality, including Regulation 2015/2120, which establishes rules to safeguard open internet access. The Spanish Data Protection Act provides for the obligation of Internet providers to provide a transparent offer of services without discrimination on technical or economic grounds. Furthermore, the right of access to the internet - related to net neutrality - is recognised as a human right by the United Nations (UN).
This transformation of rights reflects the growing importance of digital technologies in all aspects of our lives.
The context of artificial intelligence
The relationship between AI development and data is fundamental and symbiotic, as data serves as the basis for AI development in a number of ways:
- Data is used to train AI algorithms, enabling them to learn, detect patterns, make predictions and improve their performance over time.
- The quality and quantity of data directly affect the accuracy and reliability of AI systems. In general, more diverse and complete datasets lead to better performing AI models.
- The availability of data in various domains can enable the development of AI systems for different use cases.
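The point about data quantity can be illustrated with a toy least-squares fit in plain Python (all numbers below are invented for illustration): the underlying relationship is y = 2x, but one observation is noisy. With only two data points the noisy one dominates the estimate; with more points its influence averages out.

```python
def fit_slope(xs, ys):
    """Least-squares slope for a line through the origin (y = a*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

TRUE_SLOPE = 2.0

# Small dataset: one noisy point (x=2, y=5 instead of 4) dominates the fit
slope_small = fit_slope([1, 2], [2, 5])  # 12/5 = 2.4

# Larger dataset: the same noisy point, diluted by clean observations
slope_large = fit_slope([1, 2, 3, 4, 5], [2, 5, 6, 8, 10])  # 112/55 ≈ 2.04

error_small = abs(slope_small - TRUE_SLOPE)
error_large = abs(slope_large - TRUE_SLOPE)
```

The same dynamic, at vastly larger scale, is why AI models trained on bigger and cleaner datasets tend to make more reliable predictions.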
Data literacy has therefore become increasingly crucial in the AI era, as it forms the basis for effectively harnessing and understanding AI technologies.
In addition, the rise of big data and algorithms has transformed the mechanisms of participation, presenting both challenges and opportunities. Algorithms, while they may be designed to be fair, often reflect the biases of their creators or the data they are trained on. This can lead to decisions that negatively affect vulnerable groups.
In this regard, legislative and academic efforts are being made to prevent this from happening. For example, the European Artificial Intelligence Act (AI Act) includes safeguards to avoid harmful biases in algorithmic decision-making. Among other measures, it classifies AI systems according to their level of potential risk and imposes stricter requirements on high-risk systems. In addition, it requires the use of high-quality data to train the algorithms, minimising bias, and provides for detailed documentation of the development and operation of the systems, allowing for audits and evaluations with human oversight. It also strengthens the rights of persons affected by AI decisions, including the right to challenge decisions and to have them explained, allowing affected persons to understand how a decision was reached.
The importance of data literacy in both contexts
Data literacy helps citizens make informed decisions and understand the full implications of their digital rights, which, as mentioned above, are in many respects universal civil rights. In this context, data literacy serves as a critical filter for full civic participation. That is, those who have access to data and to the skills and tools needed to navigate the data infrastructure effectively can intervene in and influence political and social processes in a meaningful way, something which the Open Government Partnership promotes.
On the other hand, data literacy enables citizens to question and understand these processes, fostering a culture of accountability and transparency in the use of AI. There are also barriers to participation in data-driven environments. One of these barriers is the digital divide (i.e. lack of access to infrastructure, connectivity and training, among others); another is, indeed, the lack of data literacy. The latter is therefore a crucial concept for overcoming the challenges posed by the datafication of human relations and the platformisation of content and services.
Recommendations for fostering data literacy
Part of the solution to addressing the challenges posed by the development of digital technology is to include data literacy in educational curricula from an early age.
This should cover:
- Data basics: understanding what data is, how it is collected and used.
- Critical analysis: acquisition of the skills to evaluate the quality and source of data and to identify biases in the information presented. It seeks to recognise the potential biases that data may contain and that may occur in the processing of such data, and to build capacity to act in favour of open data and its use for the common good.
- Rights and regulations: information on data protection rights and how European laws affect the use of AI. This area would cover all current and future regulation affecting the use of data and its implication for technology such as AI.
- Practical applications: the possibility of creating, using and reusing open data available on portals provided by governments and public administrations, thus generating projects and opportunities that allow people to work with real data, promoting active, contextualised and continuous learning.
Educating people in the use and interpretation of data fosters a more critical society, one that is able to demand accountability in the use of AI. New data protection laws in Europe provide a framework that, together with education, can help mitigate the risks associated with algorithmic abuse and promote the ethical use of technology. In a society where data plays a central role, data literacy needs to be fostered in citizens from an early age.
1 The right to be forgotten was first established in May 2014 following a ruling by the Court of Justice of the European Union. Subsequently, in 2018, it was reinforced by the General Data Protection Regulation (GDPR), which explicitly includes it in Article 17 as a "right to erasure". In July 2015, Russia passed a law allowing citizens to request the removal of links on Russian search engines if the information "violates Russian law or is false or outdated". Turkey has established its own version of the right to be forgotten, following a model similar to that of the EU. Serbia has also implemented a version of the right to be forgotten in its legislation. In Spain, the Ley Orgánica de Protección de Datos Personales (LOPD) regulates the right to be forgotten, especially with regard to debt collection files. In the United States, the right to be forgotten is considered incompatible with the Constitution, mainly because of the strong protection of freedom of expression. However, there are some related regulations, such as the Fair Credit Reporting Act of 1970, which in certain situations allows the deletion of old or outdated information in credit reports.
2 Some countries where this right has been established include Spain, where it is regulated by Article 88 of Organic Law 3/2018 on Personal Data Protection; France, which in 2017 became the first country to pass a law on the right to digital disconnection; Germany, where it is included in the Working Hours and Rest Time Act (Arbeitszeitgesetz); Italy, under Law 81/2017; and Belgium. Outside Europe, it has been established, for example, in Chile.
Content prepared by Miren Gutiérrez, PhD and researcher at the University of Deusto, expert in data activism, data justice, data literacy and gender disinformation. The contents and views reflected in this publication are the sole responsibility of the author.
Open data should be inherently accessible, meaning it must be available for free and without barriers that could restrict access and reuse. Accessibility is a fundamental and complex issue because it means that these data sets should not only be available in reusable formats but also that anyone should be able to access and interpret them.
To ensure that access to open data is democratic, it must meet fundamental accessibility criteria that affect both the platform (web) and the way its content is displayed (e.g., through visualizations). In this context, this post delves into the essential principles to ensure that open data is inclusive and useful for a diverse audience. Discover recommendations aimed at improving the accessibility of open data portals and platforms, as well as best practices for data visualization, with a focus on the importance of inclusive design that considers the needs of all users.
Levels of Web Accessibility
When focusing on the platform, open data portals can refer to the web accessibility specifications identified by the World Wide Web Consortium (W3C), the leading international organization for web standardization. Its accessibility guidelines set out four principles that a website should meet:
- Perceivable: Information and user interface components must be presented to users in ways they can perceive, regardless of any physical or cognitive disabilities they might have.
- Operable: User interface components and navigation must be operable. Users who rely on the keyboard instead of the mouse must be able to interact correctly with a webpage; no time limit should be imposed on users to complete interactions; and there should be ways to navigate and find content easily.
- Understandable: Text must be clear and easy to understand, the user interface and navigation must be consistent and predictable, and webpages must help users when they make mistakes, for example when filling out a form.
- Robust: Content must be robust enough to be reliably interpreted by a variety of web browsers and other software, such as screen readers.
Each guideline has compliance criteria that can be tested. These criteria are classified into three levels of conformance, from least to most demanding:
- A (Minimum): All non-text content like images and videos must have textual alternatives; videos and audios must have subtitles; navigation should be possible using only the keyboard; the page must have a clear title and assigned language.
- AA (Acceptable): In addition to all level A requirements, other functionalities are added, such as live videos also having subtitles; the contrast ratio between text and background must be at least 4.5:1; text must be resizable up to 200% without losing content or functionality; text images should not be used.
- AAA (Optimal): This level requires all the features of levels A and AA, along with other requirements such as sign language interpretation for videos or a contrast ratio between text and background of at least 7:1.
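The contrast thresholds required by levels AA (4.5:1) and AAA (7:1) can be checked programmatically. As a minimal sketch, the following Python snippet implements the relative-luminance and contrast-ratio formulas defined in WCAG 2.x (the function names are our own, for illustration):

```python
def relative_luminance(rgb):
    """Relative luminance per WCAG 2.x, for an (R, G, B) tuple of 0-255 ints."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio((0, 0, 0), (255, 255, 255))  # black text on white background
print(round(ratio, 1))   # 21.0, the maximum possible ratio
print(ratio >= 4.5)      # meets level AA for normal text
print(ratio >= 7.0)      # meets level AAA
```

A check like this can be wired into an automated audit of a portal's stylesheet to flag text/background pairs that fall below the AA threshold.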
Accessible Open Data Websites and Visualizations
Considering the conditions and recommendations set by W3C, the European Open Data Portal offers a Data Visualization Guide that includes best practices for accessibility in data visualization. Following the guidelines of this Guide, to respect inclusivity from the design stage, a good data visualization must meet three conditions: it must be perceivable, understandable, and adaptable.
- Perceivable: Colors must be adapted for people with vision problems, and the font size and contrast must be adequate.
- Understandable: The interface must be user-friendly and intuitive. Whenever possible, the graphic should be understandable regardless of the user's background.
- Adaptable: The visualization must be responsive, meaning it adapts to the dimensions of each electronic device, and flexible: editable, or with viewing options for people with cognitive disabilities.
Once these three conditions are identified, we can analyze if our graphic meets them by paying attention to issues such as the use of an appropriate color palette for people with vision problems, good contrast, and understandable titles and text. It is also advisable to include alternative text (adapted for people with intellectual disabilities) and, when necessary, a visualization guide to understand the graphic.
Tools to Improve Accessibility
To apply accessibility principles in data visualization, we can use three resources:
- Accessibility audit tools: Conducting accessibility audits is a good practice, for example using Chartability, which analyzes websites considering all aspects related to inclusion.
- HTML: The fundamental web markup language was developed with accessibility in mind, so using its elements in a semantically correct way is a simple means of ensuring a basic level of accessibility. This applies to the context of a visualization (which should use elements like headers and paragraphs correctly), to interactive elements (like links, buttons, and inputs), and to the elements of a visualization itself. It is better to offer a visualization in HTML than in image format (jpg or png) whenever possible. When this is not possible, it is necessary to provide an accessible alternative (an alternative text, as mentioned earlier).
- SVG: Scalable Vector Graphics (SVG) is a format for two-dimensional vector graphics, both static and animated, based on Extensible Markup Language (XML). This means it is composed of code, and its specification is an open standard developed by W3C to generate accessible graphics.
- Datawrapper: Among many data visualization tools, Datawrapper offers the possibility to test accessible color palettes and to write alternative descriptions, among other accessibility-related functions.
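To make the advantage of SVG over raster images concrete, here is a hypothetical sketch, using only the Python standard library, of how a generated chart can embed `title` and `desc` elements that screen readers can announce. The chart, data and function name are illustrative, not part of any of the tools mentioned above:

```python
import xml.etree.ElementTree as ET

def accessible_bar_chart(values, labels, width=300, height=120):
    """Build a minimal SVG bar chart whose <title> and <desc> elements
    give assistive technology a textual description of the graphic."""
    svg = ET.Element("svg", {
        "xmlns": "http://www.w3.org/2000/svg",
        "width": str(width), "height": str(height),
        "role": "img", "aria-labelledby": "title desc",
    })
    title = ET.SubElement(svg, "title", {"id": "title"})
    title.text = "Bar chart of sample values"
    # The description lists every data point, so the content is not
    # locked inside the pixels as it would be in a jpg or png.
    desc = ET.SubElement(svg, "desc", {"id": "desc"})
    desc.text = "; ".join(f"{l}: {v}" for l, v in zip(labels, values))
    bar_width = width // len(values)
    peak = max(values)
    for i, v in enumerate(values):
        bar_height = int(height * v / peak)
        ET.SubElement(svg, "rect", {
            "x": str(i * bar_width), "y": str(height - bar_height),
            "width": str(bar_width - 4), "height": str(bar_height),
            "fill": "#1f77b4",
        })
    return ET.tostring(svg, encoding="unicode")

print(accessible_bar_chart([3, 7, 5], ["A", "B", "C"]))
```

Because the result is markup rather than pixels, the same textual description travels with the graphic wherever it is embedded.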
In summary, taking these accessibility tips into account, and incorporating them by default into the design when presenting a data set visually, will enrich the result and allow it to reach a wider audience.
Content developed based on the Data Visualization Guide from the European Open Data Portal: https://data.europa.eu/apps/data-visualisation-guide/accessibility-of-data-visualisation
The National Centre for Geographic Information publishes open geospatial data from the National Cartographic System, the National Geographic Institute and other organisations through web applications and mobile applications to facilitate access to and consultation of geographic data by citizens.
Geospatial data is published via web services and APIs for reuse. This includes high-value datasets such as geographic names, hydrography or addresses, which, as required by the EU, had to be made available to the public by June 2024, as they are associated with major benefits for society, the environment and the economy.
In the applications listed below, geographic data are visualised and consulted through web services; the data can also be downloaded directly using those web services and APIs. The result is a platform accessible to any user, with a wide range of geographic information ranging from topographic maps to satellite images.
Not only the data can be reused: the application software is also reusable. For example, the Solar Energy Potential of Buildings visualiser is based on a visualiser API, named API-CNIG, which allows the same tool to be used for different thematic areas.
Some examples of applications are:

Solar Energy Potential of Buildings
Provides the photovoltaic capacity of a building according to its location and characteristics. It also provides the average over the year and a point grid to identify the best location for solar panels.
National Geographic Gazetteer
It is a toponym search engine that collects names, official or standardised by the corresponding competent bodies, together with their geographical references.
Unified postal address calculator
It is a converter that provides the geographical coordinates (latitude and longitude in WGS84) of the postal addresses of a place, and vice versa. In both cases, the input is a CSV file, supporting both coordinates and postal addresses.
Basic Maps of Spain
It facilitates connection to IGN services and to the CNIG download centre to obtain maps and routes. With this mobile application you can follow the routes of the National Parks or the stages of the Camino de Santiago. It allows you to plan excursions using maps, navigate and take guided tours, without the need for an internet connection after downloading data.
Map a la carte
It allows you to create a customised map using the printed series of the National Topographic Map at scales 1:25,000 and 1:50,000. It offers the possibility of defining the map's area, incorporating content, personalising the cover, obtaining a PDF file and even acquiring paper copies by post.
IGN Earthquakes
It allows the reception and visualisation of all seismic events in Spain and its surroundings. It provides the distance to the epicentre of the seismic event and epicentral parameters, as well as the geolocation of the user's position and the epicentre.
Maps of Spain
It is a free mobile viewer ideal for hiking, cycling, running, skiing, etc., which uses as background cartography the services of the National Geographic Institute and another set of services from other Ministries, such as the Cadastral information of the plots provided by the General Directorate of Cadastre.
Camino de Santiago
It includes information of a cultural and practical nature on each of the stages (hostels, monuments, etc.), as well as a complete Pilgrim's Guide detailing what you should know before starting out on any of the routes. This application is based on ESRI software.
National Parks
Displays information on the history, fauna, flora and excursions in Spain's National Parks. It includes hundreds of points of interest such as information centres, accommodation, viewpoints and refuges, and even routes through the parks, indicating their duration and difficulty. The app is available for download on Android and iOS. This application is based on ESRI software.
GeoSapiens IGN
It presents interactive maps, free to use and free of charge, to study the physical and political geography of Spain and the world. It consists of different games relating to the whole of Spain or by autonomous communities, the whole world and by continent.
In addition to the applications developed by the CNIG, which are also presented in this video, there are many other digital solutions developed by third parties that reuse open geospatial data to offer a service to society. For example, in the list of data-reusing applications on datos.gob.es you can find everything from a map that shows the fires that are active in Spain in real time to an app that shows where the parking spaces for people with reduced mobility are in each town.
In short, anyone can make use of the open geographic data of the National Cartographic System, the National Geographic Institute and other bodies published by the CNIG, thus extending the advantages offered by the availability of open geographic data. Do you know of any other application resulting from the reuse of open data? You can send it to us at dinamizacion@datos.gob.es.
Today, 23 April, is World Book Day, an occasion to highlight the importance of reading, writing and the dissemination of knowledge. Active reading promotes the acquisition of skills and critical thinking by bringing us closer to specialised and detailed information on any subject that interests us, including the world of data.
Therefore, we would like to take this opportunity to showcase some examples of books and manuals regarding data and related technologies that can be found on the web for free.
1. Fundamentals of Data Science with R, edited by Gema Fernandez-Avilés and José María Montero (2024)
Access the book here.
- What is it about? The book guides the reader from the problem statement to the completion of the report containing the solution to the problem. It explains some thirty data science techniques in the fields of modelling, qualitative data analysis, discrimination, supervised and unsupervised machine learning, etc. It includes more than a dozen use cases in sectors as diverse as medicine, journalism, fashion and climate change, among others. All this, with a strong emphasis on ethics and the promotion of reproducibility of analyses.
- Who is it aimed at? It is aimed at users who want to get started in data science. It starts with basic questions, such as what is data science, and includes short sections with simple explanations of probability, statistical inference or sampling, for those readers unfamiliar with these issues. It also includes replicable examples for practice.
- Language: Spanish.
2. Telling stories with data, Rohan Alexander (2023).
Access the book here.
- What is it about? The book explains a wide range of topics related to statistical communication and data modelling and analysis. It covers the various operations from data collection, cleaning and preparation to the use of statistical models to analyse the data, with particular emphasis on the need to draw conclusions and write about the results obtained. Like the previous book, it also focuses on ethics and reproducibility of results.
- Who is it aimed at? It is ideal for students and entry-level users, equipping them with the skills to effectively conduct and communicate a data science exercise. It includes extensive code examples for replication and activities to be carried out as evaluation.
- Language: English.
3. The Big Book of Small Python Projects, Al Sweigart (2021)
Access the book here.
- What is it about? It is a collection of simple Python projects to learn how to create digital art, games, animations, numerical tools, etc. through a hands-on approach. Each of its 81 chapters independently explains a simple step-by-step project - limited to a maximum of 256 lines of code. It includes a sample run of the output of each programme, source code and customisation suggestions.
- Who is it aimed at? The book is written for two groups of people. On the one hand, those who have already learned the basics of Python, but are still not sure how to write programs on their own. On the other hand, those who are new to programming, but are adventurous, enthusiastic and want to learn as they go along. However, the same author has other resources for beginners to learn basic concepts.
- Language: English.
4. Mathematics for Machine Learning, Marc Peter Deisenroth, A. Aldo Faisal and Cheng Soon Ong (2024)
Access the book here.
- What is it about? Most books on machine learning focus on machine learning algorithms and methodologies and assume that the reader is proficient in mathematics and statistics. This book foregrounds the mathematical foundations of the basic concepts behind machine learning.
- Who is it aimed at? The author assumes that the reader has mathematical knowledge commonly learned in high school mathematics and physics subjects, such as derivatives and integrals or geometric vectors. Thereafter, the remaining concepts are explained in detail, but in an academic style, in order to be precise.
- Language: English.
5. Dive into Deep Learning, Aston Zhang, Zachary C. Lipton, Mu Li and Alex J. Smola (2021, continually updated)
Access the book here.
- What is it about? The authors are Amazon employees who use the MXNet library to teach deep learning. It aims to make deep learning accessible, teaching basic concepts, context and code in a practical way through examples and exercises. The book is divided into three parts: introductory concepts, deep learning techniques, and advanced topics focusing on real systems and applications.
- Who is it aimed at? This book is aimed at students (undergraduate and postgraduate), engineers and researchers, who are looking for a solid grasp of the practical techniques of deep learning. Each concept is explained from scratch, so no prior knowledge of deep or machine learning is required. However, knowledge of basic mathematics and programming is necessary, including linear algebra, calculus, probability and Python programming.
- Language: English.
6. Artificial intelligence and the public sector: challenges, limits and means, Eduardo Gamero and Francisco L. Lopez (2024)
Access the book here.
- What is it about? This book focuses on analysing the challenges and opportunities presented by the use of artificial intelligence in the public sector, especially when used to support decision-making. It begins by explaining what artificial intelligence is and what its applications in the public sector are, and then moves on to its legal framework, the means available for its implementation and aspects linked to organisation and governance.
- Who is it aimed at? It is a useful book for all those interested in the subject, but especially for policy makers, public workers and legal practitioners involved in the application of AI in the public sector.
- Language: Spanish
7. A Business Analyst’s Introduction to Business Analytics, Adam Fleischhacker (2024)
Access the book here.
- What is it about? The book covers a complete business analytics workflow, including data manipulation, data visualisation, modelling business problems, translating graphical models into code and presenting results to stakeholders. The aim is to learn how to drive change within an organisation through data-driven knowledge, interpretable models and persuasive visualisations.
- Who is it aimed at? According to the author, the content is accessible to everyone, including beginners in analytical work. The book does not assume any knowledge of the programming language, but provides an introduction to R, RStudio and the "tidyverse", a series of open source packages for data science.
- Language: English.
We invite you to browse through this selection of books. We would also like to remind you that this is only a sample of the materials that you can find on the web. Do you know of any other books you would like to recommend? Let us know in the comments or email us at dinamizacion@datos.gob.es!
The Provincial Council of Bizkaia, the University of the Basque Country (UPV/EHU) and the Bilbao City Council collaborate in the Bilbao Bizkaia Open Data Classroom, an initiative that aims to develop the use of open data from the two Biscayan institutions (Provincial Council and City Council) in university projects. The ultimate goal is that, thanks to this re-use, public services can be improved and new knowledge can be generated to contribute to the resolution of social problems.
The initiative, aimed at university students as well as teaching and research staff, was born from a collaboration agreement between the three administrations (Provincial Council of Bizkaia, Bilbao City Council and UPV/EHU). Other agreements made with the Bilbao School of Engineering for the creation of Business Classrooms were taken as a reference, but in this case it is an open data classroom, which promotes the opening of the data generated and the reuse of public information.
The Bilbao Bizkaia Open Data Classroom has been in operation since 2022 and its operation is similar to that of the twelve Business Classrooms that were already in operation at the Bilbao School of Engineering. These company classrooms are laboratory-classrooms within the school, created and financed by companies and institutions to promote their innovation activities. In this sense, as the organisers of the Aula state, "they are an effective instrument of collaboration between the Departments of the Bilbao School of Engineering and the business world, both in activities related to research, technological development and innovation and in everything related to training".
Open data for innovation in the classroom
In addition to developing projects based on the reuse of open data that improve the services provided by the regional and municipal authorities, the Aula also creates data visualisations based on open information processing initiatives proposed by the university community with the aim of improving the welfare of citizens. Another of its areas of work is the implementation of training activities that contribute to the improvement of the digital skills of the university community.
During the first edition of the Bilbao-Bizkaia Open Data Classroom, in the 2022-2023 academic year, the students developed projects on the reuse of data on recycling or outdoor activities, among others. All of them were created using regional data. You can consult the projects here: https://sites.google.com/view/opendatabilbaobizkaia/home?authuser=0.
How can I join Aula Open Data Bilbao-Bizkaia?
The Aula Open data Bilbao Bizkaia has its own space in the headquarters of the Bilbao School of Engineering, in San Mamés. This space has been fitted out thanks to a grant awarded by the Provincial Council of Bizkaia and the City Council of Bilbao, which also collaborate by financing the management costs of the classroom.
The programme is aimed at engineering bachelor's and master's degree students carrying out their bachelor's and master's degree final projects, respectively. However, it is not necessary to be in the final year of a Bachelor's or Master's degree to participate in the Aula. The initiative is open to anyone with an interest in data.
Training in Power BI and data analysis tools is provided at the beginning of the course.
The programme is free of charge, and students working under the agreement are paid. The selection process is by CV.
In the following link you can find all the information about the Classroom.
EXTENDED: You can submit your project until September 20th!
The submission period for the II edition of the Datathon UniversiData is now open. This competition recognises the value of projects that reuse open university data published on the UniversiDATA portal, a public-private initiative born at the end of 2020. Its aim was, and is, to promote open data in the Spanish higher education sector in a harmonised way.
UniversiDATA is currently made up of the Universidad Rey Juan Carlos, the Universidad Complutense de Madrid, the Universidad Autónoma de Madrid, the Universidad Carlos III de Madrid, the Universidad de Valladolid and the Universidad de Huelva, in collaboration with the company DIMETRICAL, The Analytics Lab, S.L.
What is the UniversiDATA Datathon about?
As previously indicated, participants must submit an open data processing project using one or more of the datasets published in UniversiDATA. These data may be combined with other data sources, but always bearing in mind that their use should not be secondary or ancillary.
There are no limitations on the nature of the project, the technologies involved or the formats for presenting the results. You can compete with a mobile app, a web application, a data analysis in Jupyter or R Markdown, etc. Works already submitted to other competitions, as well as internships, master's or bachelor's degree theses or research articles, are also valid.
For inspiration, you can visit the "UniversiDATA-Lab" where examples of applications and data analysis are shown. You can also check out the winning projects of the first edition.
How does the competition unfold?
The competition is divided into two phases:
- Knockout stage
Those interested in participating can submit their candidature from 6 March until September 20, using this form. In addition to the personal data, the following information must be provided in the application:
- Members of the project
- Project title
- Problem to be solved
- Proposed solution
- Identification of addressees
- Usefulness of the project
- Data sets to be used
All the projects submitted will be evaluated by a jury. The jury will select 10 finalists, who will go on to the final phase. The list of selected projects will be made public on September 27, 2024.
- Final Phase
Once selected, the finalists will start preparing their projects for the presentation to the jury, which will take place during an online event on December 16. The projects will be presented by videoconference.
The winners will be announced on December 23.
Who can participate?
The competition is open to any natural person with tax residence in the European Union, whether they are students, working professionals or amateurs.
You can participate as a group or as an individual.
What are the prizes?
This year, the financial endowment has been increased to a total of €9,000, divided as follows:
- First prize: € 4,000
- Second prize: € 3,000
- Third prize: € 1,500
In addition to these general prizes, the aim is also to recognise the best university student project that has been a finalist but has not won a prize. A special prize of €500 has been created for this purpose.
In case of group participation, the prize will be divided among all members of the group.
Do you have any queries?
Before participating, it is necessary to download and read the specific rules of the competition. If you have any questions, you can contact the organisers through this form. You will also be informed of any new developments on the UniversiDATA Twitter profile.
In addition, throughout the competition, a direct communication channel will be established with the participants for any questions that may arise, including those concerning the datasets to be used.
The II Datathon UniversiDATA arises as a result of the success of its first edition. It is a very positive experience that once again offers participants the opportunity not only to win financial recognition, but also to gain visibility by showing their talent in processing data that can provide answers to various questions of social and economic interest.