Application

LocalizaTodo is a real-time geoportal for viewing maritime (AIS) and air (ADS-B) traffic, as well as other tracked objects, on a web map. Free to access and requiring no installation, it allows you to view ships and flights in the same viewer, find out their current position, course and speed, and check their route over the last 24 hours.

It includes search by name/identifier, filters by type of element and additional layers such as OpenStreetMap cartography, the OpenSeaMap nautical layer, antenna coverage and a weather layer with wind, waves, temperature and pressure.

Its tools include a distance and time meter, CPA (closest point of approach) calculation between two elements or in relation to a point, map centring on coordinates and historical data consultation by area/identifier. Each element has a file with the latest data received and a gallery of photos contributed by users.
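The CPA tool mentioned above rests on a standard piece of geometry: for two objects moving in straight lines at constant speed, the time of closest approach minimizes their relative separation. A minimal 2D sketch of that calculation (an illustration only, not LocalizaTodo's actual implementation):

```python
import math

def cpa(p1, v1, p2, v2):
    """Closest point of approach between two objects moving in straight
    lines at constant velocity (2D, any consistent units).
    p1, p2: current positions (x, y); v1, v2: velocities (vx, vy).
    Returns (time_to_cpa, distance_at_cpa)."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    vmag2 = vx * vx + vy * vy
    if vmag2 == 0:                          # identical velocities: gap is constant
        return 0.0, math.hypot(rx, ry)
    t = -(rx * vx + ry * vy) / vmag2        # time that minimizes separation
    t = max(t, 0.0)                         # CPA already passed -> use "now"
    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)

# Two ships on a head-on course, 10 units apart, closing at 2 units/h:
print(cpa((0, 0), (1, 0), (10, 0), (-1, 0)))   # -> (5.0, 0.0)
```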

If you log in to Google, Facebook, Twitter, or Microsoft, you can access extra features, such as managing your own devices connected to the platform or requesting data downloads (KML, CSV, Excel, GeoJSON, or SHP). For advanced uses, there is a Pro version with areas, alarms, filters, and more analysis options.

Application

SUBSIDIA ONERIS (Latin for “burden of subsidies”) is a library of applets that enable mass access to data from the National Subsidies Database portal directly from Excel, using Power Query ETL processes against the official SNPSAP API. It is shared under a CC BY-NC-SA 4.0 license.

Its use requires only very basic knowledge of Excel. The available applets are as follows:

De minimis Inspector 2.0: By entering one or multiple tax identification numbers (NIF) of beneficiaries, it returns all de minimis grants existing on the Portal on the date the query is performed, granted over the previous three years, providing transactional information on these grants as well as various aggregates and reports. Useful for checking compliance with the three-year de minimis accumulation requirements per beneficiary established in the applicable European regulations.

State Aid (AdE) Inspector 2.1: By entering one or multiple beneficiary NIFs, it returns all State Aid grants existing on the Portal on the date the query is performed, providing transactional information on these grants as well as various aggregates and reports.

All Grants Inspector 2.0: By entering one or multiple beneficiary NIFs, it returns all grants existing in the “All” tab of the Portal on the date the query is performed, providing transactional information on these grants as well as various aggregates and reports.

Grants by Date Query Tool (more than 10,000) 2.1: By entering a start and end date, it returns all grants existing in the “Grants – All” tab of the Portal for the reference period on the date the query is performed, providing transactional information as well as various aggregates and reports. Time intervals should preferably not exceed one month, as the large number of grants may cause the download to fail, depending on the memory resources of the computer running Excel and the transactional load on the servers.
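The one-month limit above can be respected by splitting the query period into smaller intervals before downloading. A minimal sketch of that chunking logic (a hypothetical helper for illustration, not part of the library itself):

```python
from datetime import date, timedelta

def date_chunks(start, end, max_days=30):
    """Split the inclusive range [start, end] into consecutive intervals
    of at most max_days days, so each download request stays small."""
    chunks = []
    lo = start
    while lo <= end:
        hi = min(lo + timedelta(days=max_days - 1), end)
        chunks.append((lo, hi))
        lo = hi + timedelta(days=1)
    return chunks

# Example: cover a whole quarter in roughly monthly pieces
for lo, hi in date_chunks(date(2024, 1, 1), date(2024, 3, 31)):
    print(lo, "->", hi)
```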

Grants by Call for Proposals Query Tool (more than 10,000) 2.1: For calls for proposals that have more than 10,000 grants, this tool allows downloading all of them, up to 1,000,000 grants. Only one call code can be specified.

State Aid by Date Query Tool (more than 10,000) 2.1: By entering a start and end date, it returns all grants existing in the “State Aid” tab of the Portal for the reference period on the date the query is performed, providing transactional information as well as various aggregates and reports. Time intervals should preferably not exceed one year, as the large number of grants may cause the download to fail, depending on the memory resources of the computer running Excel and the transactional load on the servers.

De minimis by Date Query Tool (more than 10,000) 2.1: By entering a start and end date, it returns all grants existing in the de minimis tab of the Portal for the reference period on the date the query is performed, providing transactional information as well as various aggregates and reports. Time intervals should preferably not exceed one year, as the large number of grants may cause the download to fail, depending on the memory resources of the computer running Excel and the transactional load on the servers.

Multi-SA State Aid Query Tool (less than 10,000) 2.0: By entering one or multiple references to State Aid measures (SA.number), it returns all State Aid grants existing on the Portal on the date the query is performed for those measures, providing transactional information as well as various aggregates and reports. If any measure (SA.number) has more than 10,000 grants, only the first 10,000 are downloaded, so this tool should not be used in that case; instead, the Multi-code Grants by Call Query Tool can be used. To do so, you must first identify the calls for proposals that have been formalized for that SA measure.

Multi-code Grants by Call Query Tool (less than 10,000) 2.1: By entering one or multiple BDNS call codes, it returns all grants for those calls existing on the Portal on the date the query is performed, providing transactional information as well as various aggregates and reports.

If any call has more than 10,000 grants, it only downloads the first 10,000, so it should not be … (text truncated in the original).

Event

The Cabildo Insular de Tenerife has announced the II Open Data Contest: Development of APPs, an initiative that rewards the creation of web and mobile applications that take advantage of the datasets available on its datos.tenerife.es portal. This call represents a new opportunity for developers, entrepreneurs and innovative entities that want to transform public information into digital solutions of value for society. In this post, we tell you the details about the competition.

A growing ecosystem: from ideas to applications

This initiative is part of the Cabildo de Tenerife's Open Data project, which promotes transparency, citizen participation and the generation of economic and social value through the reuse of public information.

The Cabildo has designed a strategy in two phases:

  • The I Open Data Contest: Reuse Ideas (already held) focused on identifying creative proposals.

  • The II Contest: Development of APPs (current call), which gives continuity to the process and seeks to turn ideas into functional applications.

This progressive approach makes it possible to build an innovation ecosystem that accompanies participants from conceptualization to the complete development of digital solutions.

The objective is to promote the creation of digital products and services that generate social and economic impact, while identifying new opportunities for innovation and entrepreneurship in the field of open data.

Awards and financial endowment

This contest has a total endowment of 6,000 euros distributed in three prizes:

  • First prize: 3,000 euros

  • Second prize: 2,000 euros

  • Third prize: 1,000 euros

Who can participate?

The call is open to:

  • Natural persons: individual developers, designers, students, or anyone interested in the reuse of open data.

  • Legal entities: startups, technology companies, cooperatives, associations or other entities.

The only requirement is that they submit the development of an application based on open data from the Cabildo de Tenerife. The same person or entity may submit as many applications as they wish, whether individually or jointly.

What kind of applications can be submitted?

Proposals must be web or mobile applications that use at least one dataset from the datos.tenerife.es portal. Some ideas that can serve as inspiration are:

  • Applications to optimize transport and mobility on the island.

  • Tools for visualising tourism or environmental data.

  • Real-time citizen information services.

  • Solutions to improve accessibility and social participation.

  • Economic or demographic data analysis platforms.

Evaluation criteria: what does the jury assess?

The jury will evaluate the proposals considering the following criteria:

  • Use of open data: degree of exploitation and integration of the datasets available in the portal.

  • Impact and usefulness: value that the application brings to society, ability to solve real problems or improve existing services.

  • Innovation and creativity: originality of the proposal and innovative nature of the proposed solution.

  • Technical quality: code robustness, good programming practices, scalability and maintainability of the application.

  • Design and usability: user experience (UX), attractive and intuitive visual design, guarantee of digital accessibility on Android and iOS devices.

How to participate: deadlines and form of submission

Applications can be submitted until March 10, 2026, three months from the publication of the call in the Official Gazette of the Province.

Regarding the required documentation, proposals must be submitted in digital format and include:

  • Detailed technical description of the application.

  • Report justifying the use of open data.

  • Specification of technological environments used.

  • Video demonstration of how the application works.

  • Complete source code.

  • Technical summary sheet.

The organising institution recommends electronic submission through the Electronic Office of the Cabildo de Tenerife, although proposals can also be submitted in person at the designated official registries. The complete rules and the official application form are available at the Cabildo's Electronic Office.

With this second call, the Cabildo de Tenerife consolidates its commitment to transparency, the reuse of public information and the creation of a digital innovation ecosystem. Initiatives like this demonstrate how open data can become a catalyst for entrepreneurship, citizen participation, and local economic development.

Application

Open SDG Index is a platform for managing and visualising results and progress in the fulfilment of the Sustainable Development Goals and the 2030 Agenda. It allows reporting and systematizing the progress and effort made by public and private entities to meet the SDGs. On the one hand, it facilitates the self-evaluation of this progress and, on the other, it makes the results known to the general public. It is part of the SDG system and allows any entity to incorporate and update its profile. 

The portal includes advanced searches, sectoral and territorial filters, and organisation profiles open for consultation. The information is verified by independent agents and presented in geolocated form to facilitate comparison between organisations and territories.

For the deployment of the Open SDG Index, a proprietary methodology has been developed in collaboration with the UNDP (United Nations Development Programme). The project won the First Prize for Social Entrepreneurship "La Noria" from the Diputación de Málaga in 2020.

Application

The Smart Agro – Irrigation Recommendations website is a free platform developed by the Cabildo de La Palma as part of the La Palma Smart Island project. It aims to improve the efficiency of water use in local agriculture, especially for banana and avocado crops.

To get a personalized recommendation, the user must select a crop and an area of the island through a drop-down menu or on the map. In return, the app:

  • It provides detailed graphs showing the evolution of precipitation and reference evapotranspiration (ETo) over the past 7 days in the selected area.
  • It generates irrigation recommendations adjusted to the municipality and local climatic conditions.
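The kind of calculation behind such recommendations can be sketched very roughly in the FAO-56 style: crop demand is the reference evapotranspiration scaled by a crop coefficient, minus the rain that effectively reaches the soil. This is a simplified illustration with hypothetical coefficients, not the platform's actual engine:

```python
def weekly_irrigation_mm(eto_mm, rain_mm, kc=1.0, efficiency=0.9):
    """Estimate the irrigation water to apply over a week, in mm.
    eto_mm: daily reference evapotranspiration (ETo) for the last 7 days.
    rain_mm: daily precipitation for the same days.
    kc: crop coefficient (hypothetical; values near 1.1 are often cited for banana).
    efficiency: irrigation-system efficiency (drip systems are around 0.9)."""
    etc = kc * sum(eto_mm)                  # crop evapotranspiration ETc = Kc * ETo
    effective_rain = 0.8 * sum(rain_mm)     # crude effective-rainfall factor
    need = max(etc - effective_rain, 0.0)   # demand not covered by rain
    return need / efficiency                # gross amount to apply

# A dry week with ETo of 4 mm/day requires roughly:
print(round(weekly_irrigation_mm([4] * 7, [0] * 7, kc=1.1), 1))
```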

To carry out the calculations, data from the island's network of weather and air quality stations are used, together with a calculation engine that processes the information to generate weekly recommendations. The data generated by this engine is also integrated into the open data portal, creating a feedback loop that fosters an open innovation ecosystem.

Application

CLIMA TERRA is a progressive web application (PWA) that provides real-time environmental information in a clear and accessible way. It allows users to view key parameters such as temperature, relative humidity, wind speed and UV index, based on open meteorological and geospatial data.
The app has been designed with a minimalist and bilingual (Spanish/English) approach, with the aim of bringing open data closer to the public and promoting more informed and sustainable everyday decisions.

Application

MOVACTIVA is a digital platform developed by the Department of Geography of the Universitat Autònoma de Barcelona, which works as an interactive atlas focused on active mobility, i.e. the transport of people using non-motorised means, such as walking or cycling. The atlas collects information from five Spanish cities: Barcelona, Granada, Madrid, Palma de Mallorca and Valencia.

The project maps five urban indicators that are decisive for active mobility, built from 57 georeferenced variables.

The combination of these five elements makes it possible to create an objective and standardised indicator: the Global Active Mobility Indicator. 

In addition, the website also offers information on:

  • Micromobility, which includes electric, small and light modes of transport (Personal Mobility Vehicles or PMVs), such as electric bicycles and scooters, hoverboards, segways and monowheels.
  • Intermodality, which involves the use of two or more modes of transport within the framework of a single trip.

To bring information closer to users, it has an interactive viewer that allows geographic data to be explored visually, facilitating comparison between cities and promoting a healthier and more sustainable urban approach. The indicators are superimposed on open access base maps such as the PNOA orthophotography (from the IGN) and OpenStreetMap.

Application

Edalitics is a cloud-based analytics service that allows you to connect data, model it, and create reports and dashboards without deploying your own infrastructure and without advanced technical knowledge. It is based on EDA (Enterprise Data Analytics), the open-source platform from the company Jortilles, and is offered as SaaS (Software as a Service), which reduces technical complexity: the user accesses it through a browser, selects their sources and builds visualizations by simply dragging and dropping, or through SQL.

Edalitics works as a corporate and public data platform: it can connect to databases and web services, and it also supports CSV files that the user uploads to enrich their model. From there, dashboards, KPIs and email alerts are created, and private or public reports are published for different decision profiles, with access control and traceability. It supports unlimited users, which makes it attractive for large organizations.

It is important to clarify that Edalitics does not incorporate datasets by default, but integrates with APIs or open portals. Organisations such as the Baix Empordà County Council have used Edalitics to deploy their open data catalogues. 

Edalitics offers two modes of use: 

  • Cloud version. The platform can be used directly in the cloud, with a tiered pricing model. This version is free for organizations with limited usage. Organizations with higher data usage or volume demands can access a paid version for a monthly fee.
  • Installation on own servers (On-Premise). For those organizations that prefer to host Edalitics on their own infrastructure, Jortilles offers:
    • Assistance in installation and configuration, adapting to the customer's environment.
    • Possibility of contracting annual maintenance that includes: direct technical support from the developer team and access to updates and improvements proactively, ensuring the proper functioning and evolution of the platform. 
Blog

How many times have you had a dataset in your hands that you needed to analyze, but you've run into errors, inconsistencies, or formatting issues that have caused you to lose hours of work? The reality is that, although we have more data available every day, we do not always have the necessary tools or knowledge to work with it efficiently.

There are several options to address this process. One of them is Open Data Editor, a free and open-source tool that the Open Knowledge Foundation (OKFN) has designed with the aim of democratizing access to and exploitation of data.

Key features and functionalities

As indicated by OKFN, this application is designed for people who work with tabular data (Excel, Google Sheets, CSV) and who do not know how to program or do not have access to specialized technical tools. Its no-code approach makes it an accessible alternative that focuses specifically on tabular data cleansing and validation.

The tool implements a process known as "data validation", which involves finding errors in datasets and correcting them efficiently. It also verifies that spreadsheets or datasets contain all the information others need in order to use them, thereby supporting interoperability, a highly relevant value when it comes to dataset reuse.

Beyond guaranteeing reuse, Open Data Editor also ensures privacy and security thanks to its local architecture, i.e. the data remains on the user's device.

Pilot projects: global impact and tangible results

Although it is a very intuitive tool, the organization offers users a free online course to learn how to get the most out of it. The course is currently in English, but a Spanish translation will be available soon.

In addition to the main course, the Open Knowledge Foundation has implemented a "train the trainer" program that prepares people to teach the course locally in different regions of the world. Within the framework of this training programme, pilot projects are being implemented in different sectors and communities. These pilots have focused especially on widening access to basic training in quality data analysis tools, something that OKFN believes should not be limited by economic or technological barriers.

The documented use cases show diverse applications ranging from human rights organizations to local government institutions, all leveraging the data validation and cleansing capabilities offered by the tool. The educational approach of Open Data Editor goes beyond the simple use of the tool: it is about training in open data and promoting open and accessible knowledge.

Next steps: Integrating artificial intelligence

The results of this first phase have been so promising that the Open Knowledge Foundation has decided to move towards a second stage, this time incorporating artificial intelligence technologies to further expand the capabilities of the tool. The new version, which offers validation-focused AI support and trust-building features, has just been announced and released.

The philosophy behind this AI integration is to maintain the educational character of the tool. Rather than creating a "black box" that simply provides results, the new functionality will explain every step that artificial intelligence takes, allowing users to understand not only what is being done with their data, but also why certain decisions are being made.

This transparent approach to AI is especially important in the context of open and government data, as we explain in this episode of the datos.gob.es podcast. Open Data Editor users will be able to see how AI identifies potential problems, suggests corrections, and validates data quality, becoming a learning tool as well as a practical utility.

Impact on the open data ecosystem

This new functionality adds to the goal of offering a sustainable and open tool. It is precisely this commitment to open source that makes Open Data Editor adaptable and improvable by the global developer community. The tool uses the Frictionless Framework as its technological basis, which ensures that the standards used are open and widely adopted in the open data ecosystem.

There is no doubt that the tool is especially aligned with government open data principles, providing public administrations with a way to improve the quality of their data publications without requiring significant investments in technical infrastructure or specialized training. For data journalists and civil society organizations, Open Data Editor offers the ability to work with complex datasets more efficiently, allowing them to focus on analysis and interpretation rather than technical data cleansing.

In short, more than a technical tool, Open Data Editor symbolizes a paradigm shift towards the democratization of data analysis, and its impact extends beyond its immediate functionalities, contributing to a broader ecosystem of open and accessible data.

Blog

In the usual search for tricks to make our prompts more effective, one of the most popular is activating the chain of thought. It consists of posing a multilevel problem and asking the AI system to solve it, not by giving us the solution all at once, but by making visible, step by step, the logical line needed to solve it. This feature is available in both paid and free AI systems; the trick is knowing how to activate it.

Originally, the chain of reasoning was one of many semantic-logic tests that developers put language models through. However, in 2022, Google Brain researchers demonstrated for the first time that providing examples of chained reasoning in the prompt could unlock greater problem-solving capabilities in models.

From that moment on, little by little, it has established itself as a useful technique for obtaining better results, while at the same time being heavily questioned from a technical point of view. What is really striking about this process is that language models do not think in a chain: they only simulate human reasoning before our eyes.

How to activate the reasoning chain

There are two possible ways to activate this process in the models: from a button provided by the tool itself, as in the case of DeepSeek with the "DeepThink" button that activates the R1 model:

 


Figure 1. DeepSeek with the "DeepThink" button that activates the R1 model.

Or, and this is the simplest and most common option, from the prompt itself. If we opt for this option, we can do it in two ways: only with the instruction (zero-shot prompting) or by providing solved examples (few-shot prompting).

  • Zero-shot prompting: as simple as adding an instruction at the end of the prompt, such as "Reason step by step" or "Think before answering". This ensures that the chain of reasoning is activated and the logical process behind the solution becomes visible.


Figure 2. Example of Zero-shot prompting.

  • Few-shot prompting: if we want a very precise response pattern, it may be worth providing some solved question-answer examples. The model takes this demonstration as a pattern and imitates it when answering a new question.


Figure 3. Example of Few-shot prompting.
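The two prompting styles above can be sketched as simple prompt builders, independent of any particular model API (the wording of the instructions is illustrative):

```python
def zero_shot_cot(question):
    """Zero-shot: just append an instruction that triggers step-by-step reasoning."""
    return f"{question}\n\nReason step by step before giving the final answer."

def few_shot_cot(examples, question):
    """Few-shot: prepend solved Q/A pairs whose answers spell out the
    reasoning, so the model imitates the pattern on the new question."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

examples = [("If I have 3 apples and eat 1, how many are left?",
             "Start with 3 apples. Eating 1 leaves 3 - 1 = 2. The answer is 2.")]
print(few_shot_cot(examples, "If I have 5 apples and eat 2, how many are left?"))
```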

Benefits and three practical examples

When we activate the chain of reasoning, we are asking the system to "show" its work visibly before our eyes, as if it were solving the problem on a blackboard. Forcing the language model to spell out the logical steps reduces the possibility of errors, although it does not eliminate them completely, because the model focuses its attention on one step at a time. In addition, in the event of an error, it is much easier for the user to detect it with the naked eye.

When is the chain of reasoning useful? Especially in mathematical calculations, logical problems, puzzles, ethical dilemmas or questions with several stages and jumps (so-called multi-hop questions). It is especially practical in the latter, where the model has to handle world knowledge that is not directly included in the question.

Let's see some examples in which we apply this technique to a chronological problem, a spatial problem and a probabilistic problem.

  • Chronological reasoning

Let's think about the following prompt:

If Juan was born in October and is 15 years old, how old was he in June of last year?


Figure 5. Example of chronological reasoning.

For this example we have used the OpenAI o3 model, available in the Plus version of ChatGPT and specialized in reasoning, so the chain of thought is activated by default and does not need to be requested in the prompt. This model reports the time it took to solve the problem, in this case 6 seconds. Both the answer and the explanation are correct; to arrive at them, the model had to incorporate external information such as the order of the months of the year, knowledge of the current date to establish the temporal anchoring, and the fact that age changes in the month of the birthday, not at the start of the year.
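The same chronological check can be reproduced in a few lines of Python. The exact birth day below is a hypothetical assumption, and the answer depends on whether the October birthday has already passed on the current date:

```python
from datetime import date

def age_on(birth, on):
    """Completed years of age on a given date."""
    return on.year - birth.year - ((on.month, on.day) < (birth.month, birth.day))

# Assume "today" is after Juan's October birthday, e.g. December 2025:
# being 15 now, he was born in October 2010 (the day of the month is hypothetical).
birth = date(2010, 10, 15)
print(age_on(birth, date(2024, 6, 1)))   # June of last year -> 13
```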

  • Spatial reasoning

  • A person is facing north. They turn 90 degrees to the right, then 180 degrees to the left. In what direction are they looking now?


    Figure 6. Example of spatial reasoning.

    This time we have used the free version of ChatGPT, which uses the GPT-4o model by default (although with limitations), so it is safer to activate the chain of reasoning with an instruction at the end of the prompt: "Reason step by step". To solve this problem, the model needs general knowledge of the world learned in training, such as the spatial orientation of the cardinal points, degrees of rotation, laterality and the basic logic of movement.
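This kind of spatial reasoning reduces to modular arithmetic on compass headings, which a short sketch makes explicit:

```python
def heading_after(start_deg, turns_deg):
    """Compass heading after a series of signed turns.
    0 = north, 90 = east; positive turns go right (clockwise),
    negative turns go left (counter-clockwise)."""
    heading = start_deg
    for turn in turns_deg:
        heading = (heading + turn) % 360
    return heading

NAMES = {0: "north", 90: "east", 180: "south", 270: "west"}
# Facing north, turn 90 degrees right, then 180 degrees left:
print(NAMES[heading_after(0, [90, -180])])   # -> west
```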

  • Probabilistic reasoning

  • In a bag there are 3 red balls, 2 green balls and 1 blue ball. If you draw a ball at random without looking, what's the probability that it's neither red nor blue?


    Figure 7. Example of probabilistic reasoning.

    To launch this prompt we have used Gemini 2.5 Flash, in Google's Gemini Pro version. The training of this model certainly included the fundamentals of both basic arithmetic and probability, but what is most effective for the model in learning to solve this type of exercise are the millions of solved examples it has seen. Probability problems and their step-by-step solutions are the pattern it imitates when reconstructing this reasoning.
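The arithmetic behind the ball problem is easy to verify directly: "neither red nor blue" leaves only the green balls.

```python
from fractions import Fraction

balls = {"red": 3, "green": 2, "blue": 1}
total = sum(balls.values())              # 6 balls in the bag
# P(neither red nor blue) = P(green) = 2/6
p = Fraction(balls["green"], total)
print(p)                                 # -> 1/3
```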

The Great Simulation

And now, let's move on to the questioning. In recent months, the debate about whether or not we can trust these mock explanations has grown, especially since, ideally, the chain of thought should faithfully reflect the internal process by which the model arrives at its answer, and there is no practical guarantee that this is the case.

The Anthropic team (creators of Claude, another large language model) carried out a trap experiment with Claude Sonnet in 2025, in which they slipped a key clue to the solution into the prompt before activating the reasoned response.

Think of it like passing a student a note that says "the answer is [A]" before an exam. If they write on their exam that they chose [A] at least in part because of the note, that's good news: they are being honest and faithful. But if they write down what claims to be their reasoning process without mentioning the note, we might have a problem.

The percentage of times Claude Sonnet included the clue among its deductions was only 25%. This shows that models sometimes generate explanations that sound convincing but do not correspond to the true internal logic by which they arrived at the solution; they are a posteriori rationalizations: first they find the solution, then they invent a process that looks coherent to the user. This reveals the risk that the model may be hiding steps or information relevant to solving the problem.

Closing

Despite the limitations shown in the study mentioned above, we cannot forget that the original Google Brain research documented that, when applying the chain of reasoning, the PaLM model improved its performance on mathematical problems from 17.9% to 58.1% accuracy. If, in addition, we combine this technique with searches in open data to obtain information external to the model, the reasoning becomes more verifiable, up to date and robust.

However, by making language models "think out loud", what we are really improving in 100% of cases is the user experience in complex tasks. If we do not fall into excessive delegation of thinking to AI, our own cognitive process can benefit. It is also a technique that greatly facilitates our new role as supervisors of automatic processes.


Content prepared by Carmen Torrijos, expert in AI applied to language and communication. The contents and points of view reflected in this publication are the sole responsibility of the author.
