A draft Regulation on Artificial Intelligence has recently been made public as part of the European Commission's initiative in this area. It is directly linked to the proposal on data governance, the Directive on the re-use of public sector information and open data, as well as other initiatives in the framework of the European Data Strategy.
This measure is an important step forward: it provides the European Union with a uniform regulatory framework that goes beyond the individual initiatives adopted by each Member State. Some of them, as in the case of Spain, have approved their own strategies under a Coordinated Plan, recently updated with the aim of promoting the European Union's global leadership in a trustworthy Artificial Intelligence model.
Why a Regulation?
Unlike a Directive, an EU Regulation is directly applicable in all Member States and therefore does not need to be transposed into each Member State's own legislation. Although the national strategies served to identify the most relevant sectors and to promote debate and reflection on the priorities and objectives to be considered, there was a risk of regulatory fragmentation, given that each State could establish different requirements and guarantees. Ultimately, this potential diversity could undermine the legal certainty that Artificial Intelligence systems require and, above all, impede the objective of a balanced approach: a trustworthy regulatory framework based on the fundamental values and rights of the European Union in a global social and technological scenario.
The importance of data
The White Paper on Artificial Intelligence graphically highlighted the importance of data in relation to the viability of this technology by stating categorically that "without data, there is no Artificial Intelligence". This is precisely one of the reasons why a draft Regulation on data governance was promoted at the end of 2020, which, among other measures, attempts to address the main legal challenges that hinder access to and reuse of data.
In this regard, as emphasised in the aforementioned Coordinated Plan, an essential precondition for the proper functioning of Artificial Intelligence systems is the availability of high-quality data, especially in terms of their diversity and respect for fundamental rights. Specifically, based on this elementary premise, it is necessary to ensure that:
- Artificial Intelligence systems are trained on sufficiently large datasets, both in terms of quantity and diversity.
- The datasets to be processed do not generate discriminatory or unlawful situations that may affect rights and freedoms.
- The requirements and conditions of the regulations on personal data protection are considered, not only from the perspective of their strict compliance, but also from the perspective of the principle of proactive responsibility, which requires the ability to demonstrate compliance with the regulations in this area.
The importance of access to and use of high-quality datasets has been especially emphasised in the draft Regulation, in particular with regard to the so-called Common European Data Spaces established by the Commission. The European regulation aims to ensure reliable, responsible and non-discriminatory access to enable, above all, the development of high-risk Artificial Intelligence systems with appropriate safeguards. This premise is particularly important in certain areas such as health, so that the training of AI algorithms can be carried out on the basis of high ethical and legal standards. Ultimately, the aim is to establish optimal conditions in terms of guarantees of privacy, security and transparency and, above all, to ensure adequate institutional governance as a basis for trust in their correct design and operation.
Risk classification at the heart of regulatory obligations
The Regulation is based on classifying Artificial Intelligence systems by their level of risk, distinguishing between those that pose an unacceptable risk, those classified as high risk, and those that entail only minimal risk. Apart from the exceptional prohibition of the first group, the draft establishes that systems classified as high risk must comply with certain specific guarantees, which remain voluntary for providers of systems outside that category. What are these guarantees?
- Firstly, it establishes the obligation to implement a data quality management model, documented in a systematic and orderly manner, one of whose main aspects concerns data management systems and procedures, including data collection, analysis, filtering, aggregation and labelling.
- Where techniques involving the training of models with data are used, system development is required to take place on the basis of training, validation and test datasets that meet certain quality standards. Specifically, they must be relevant, representative, error-free and complete, taking into account, to the extent required for the intended purpose, the characteristics or elements of the specific geographical, behavioural or functional environment in which the Artificial Intelligence system is intended to be used.
- These include the need for a prior assessment of the availability, quantity and adequacy of the required datasets, as well as an analysis of possible biases and data gaps, in which case it will be necessary to establish how such gaps can be addressed.
In short, in the event that the Regulation continues to be processed and is finally approved, we will have a regulatory framework at European level which, based on the requirements of respect for rights and freedoms, could contribute to the consolidation and future development of Artificial Intelligence not only from the perspective of industrial competitiveness but also in accordance with legal standards in line with the values and principles on which the European Union is based.
Content prepared by Julián Valero, Professor at the University of Murcia and Coordinator of the Research Group "Innovation, Law and Technology" (iDerTec).
The contents and views expressed in this publication are the sole responsibility of the author.
Artificial intelligence is transforming companies, with supply chain processes being one of the areas obtaining the greatest benefit. Supply chain management involves all resource management activities, including the acquisition of materials, manufacturing, storage and transportation from origin to final destination.
In recent years, business systems have been modernized and are now supported by increasingly ubiquitous computer networks. Within these networks, sensors, machines, systems, vehicles, smart devices and people are interconnected and continuously generating information. To this must be added the increase in computational capacity, which allows these large amounts of generated data to be processed quickly and efficiently. All these advances have contributed to stimulating the application of Artificial Intelligence technologies, which offer a sea of possibilities.
In this article we are going to review some Artificial Intelligence applications at different points in the supply chain.
Technological implementations in the different phases of the supply chain
Planning
According to Gartner, volatility in demand is one of the aspects that most concerns business leaders. The COVID-19 crisis has highlighted weaknesses in planning capacity within the supply chain. To organize production properly, it is necessary to know customers' needs. This can be done through predictive analytics techniques that allow demand to be forecast, that is, to estimate the probable future requests for a product or service. This process also serves as the starting point for many other activities, such as warehousing, shipping, product pricing, purchasing raw materials, production planning, and other processes that aim to meet demand.
Access to real-time data allows the development of Artificial Intelligence models that take advantage of all the contextual information to obtain more precise results, significantly reducing error compared to more traditional forecasting methods such as ARIMA or exponential smoothing.
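To make the comparison concrete, here is a minimal sketch (in Python, with invented weekly demand figures) of the simple exponential smoothing baseline mentioned above; each forecast blends the latest observation with the previous forecast:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each forecast is a weighted
    average of the last observation and the previous forecast."""
    forecast = series[0]          # initialise with the first observation
    for demand in series[1:]:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

# Hypothetical weekly demand for a product (units)
weekly_demand = [120, 132, 101, 134, 190, 130, 132, 148]

# alpha close to 1 reacts quickly to recent demand; close to 0 smooths more
next_week = exponential_smoothing(weekly_demand, alpha=0.3)
print(round(next_week, 1))  # → 140.1
```

AI-based forecasters aim to beat this kind of baseline by also ingesting contextual signals (promotions, weather, traffic) that a single smoothed series cannot capture.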
Production planning is also a recurring problem in which variables of many kinds play a role. Artificial intelligence systems can combine information on material resources; on the availability of human resources (taking into account shifts, vacations, leave or assignments to other projects) and their skills; on the available machines and their maintenance; and on the manufacturing process and its dependencies, in order to optimize production planning and satisfactorily meet the objectives.
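As a toy illustration of the optimization at stake, and assuming the simplest possible setting (identical machines, invented job durations), the classic longest-processing-time-first heuristic sketches how jobs might be spread across machines so production finishes as early as possible:

```python
import heapq

def schedule_lpt(durations, machines):
    """Longest-Processing-Time-first heuristic: sort jobs by duration
    and always give the next job to the least-loaded machine."""
    loads = [0] * machines              # current total load per machine
    heap = [(0, m) for m in range(machines)]
    assignment = {}
    for job, dur in sorted(enumerate(durations), key=lambda x: -x[1]):
        load, m = heapq.heappop(heap)   # least-loaded machine so far
        assignment[job] = m
        heapq.heappush(heap, (load + dur, m))
        loads[m] = load + dur
    return assignment, max(loads)

# Hypothetical job durations in hours, two identical machines
jobs = [7, 5, 4, 3, 2]
plan, makespan = schedule_lpt(jobs, machines=2)
print(makespan)  # time until the last machine finishes
```

Real planning systems extend this idea with the constraints listed above (shifts, skills, machine maintenance, process dependencies), typically via constraint solvers or metaheuristics rather than a single greedy pass.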
Production
Among the stages of the production process, one of the most transformed by the application of artificial intelligence is quality control and, more specifically, defect detection. According to the European Commission, 50% of production can end up as scrap due to defects, while in complex manufacturing lines the percentage can rise to 90%. On the other hand, non-automated quality control is an expensive process, as people need to be trained to perform inspections properly; furthermore, manual inspections can cause bottlenecks in the production line, delaying delivery times. On top of this, the number of inspectors does not grow as production increases.
In this scenario, the application of computer vision algorithms can solve these problems. These systems learn from examples of defects and extract common patterns that allow them to classify future production defects. Their advantage is that they can match or even exceed human precision, since they can process thousands of images in a very short time and are scalable.
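Leaving the image-processing step aside, the core idea of learning common patterns from defect examples can be sketched with a nearest-centroid classifier over numeric image features; the feature values below (mean brightness, edge density) are invented for the example:

```python
def centroid(samples):
    """Average feature vector of a set of samples."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def classify(sample, centroids):
    """Assign the label of the closest class centroid (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Toy training data: [mean_brightness, edge_density] per inspected part
ok_parts     = [[0.80, 0.10], [0.78, 0.12], [0.82, 0.09]]
defect_parts = [[0.40, 0.55], [0.45, 0.60], [0.38, 0.52]]

centroids = {"ok": centroid(ok_parts), "defect": centroid(defect_parts)}
print(classify([0.42, 0.58], centroids))  # → defect
```

Production systems replace these hand-picked features with representations learned by convolutional networks, but the training principle is the same: summarise each class from labelled examples and assign new parts to the closest pattern.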
On the other hand, it is very important to ensure the reliability of the machinery and reduce the chances of production stoppage due to breakdowns. In this sense, many companies are betting on predictive maintenance systems that are capable of analyzing monitoring data to assess the condition of the machinery and schedule maintenance if necessary.
Open data can help when training these algorithms. As an example, NASA offers a collection of datasets donated by various universities, agencies and companies that are useful for developing prediction algorithms. These are mostly time series running from a normal operating state to a failure state. This article shows how one of these datasets (the Turbofan Engine Degradation Simulation Data Set, which includes sensor data from 100 engines of the same model) can be used to perform an exploratory analysis and build a baseline linear regression model.
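As a hedged sketch of such a baseline (not the actual NASA data, which must be downloaded separately; the readings below are invented), an ordinary least-squares line can be fitted to a steadily drifting sensor signal and extrapolated to a failure threshold:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical vibration readings drifting upward over operating cycles
cycles   = [0, 10, 20, 30, 40, 50]
readings = [1.0, 1.2, 1.35, 1.6, 1.8, 2.0]

a, b = fit_line(cycles, readings)
FAILURE_THRESHOLD = 3.0  # assumed level at which the part is considered failed
cycles_to_failure = (FAILURE_THRESHOLD - b) / a
print(round(cycles_to_failure))  # → 100
```

Predicting the remaining useful life this way is deliberately naive; it is the kind of reference model against which more sophisticated approaches on the real run-to-failure series are compared.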
Transport
Route optimization is one of the most critical elements in transportation planning and business logistics in general. Optimal planning ensures that the load arrives on time while reducing cost and energy consumption to a minimum. Many variables intervene in the process, such as work peaks, traffic incidents, weather conditions, etc., and that is where artificial intelligence comes into play. An artificial intelligence-based route optimizer is able to combine all this information to offer the best possible route, or to modify it in real time depending on incidents that occur during the journey.
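Underneath such an optimizer there is always a shortest-path search over a weighted graph of road segments; Dijkstra's algorithm is the textbook starting point, on top of which real systems layer live traffic and weather data. The network and travel times below are invented:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: returns (total_cost, path) on a graph
    given as {node: [(neighbour, travel_minutes), ...]}."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical road network: travel times in minutes between hubs
roads = {
    "depot":       [("ring_road", 15), ("city_centre", 30)],
    "ring_road":   [("city_centre", 10), ("warehouse", 25)],
    "city_centre": [("warehouse", 12)],
}
cost, path = shortest_route(roads, "depot", "warehouse")
print(cost, path)
```

Real-time re-routing amounts to updating the edge weights as incidents come in and re-running the search from the vehicle's current position.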
Logistics organizations use transport data and official maps to optimize routes across all modes of transport, avoiding areas with high congestion and improving efficiency and safety. According to the study "Open Data Impact Map", the open data most demanded by these companies are those directly related to the means of transport (routes, public transport schedules, number of accidents, etc.), but also geospatial data, which allow them to plan their trips better.
In addition, there are companies that share their data under B2B models. As stated in the Cotec Foundation report "Guide for opening and sharing data in the business environment", the Spanish company Primafrio shares data with its customers as an element of value in their operations, both for locating and positioning the fleet and products (real-time data that can be useful to the customer, such as the truck license plate, position, driver, etc.) and for billing or accounting tasks. As a result, its customers have optimized their order tracking and their ability to bring billing forward.
To close the transport section: one of the objectives of companies in the logistics sector is to ensure that goods reach their destination in optimal condition. This is especially critical when working with companies in the food industry. It is therefore necessary to monitor the state of the cargo during transport. Controlling variables such as temperature and location, or detecting impacts, is crucial to know how and when the load deteriorated and thus take the corrective actions needed to avoid future problems. Technologies such as IoT, blockchain and Artificial Intelligence are already being applied in these types of solutions, sometimes including the use of open data.
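As a minimal sketch of this kind of monitoring (with invented readings and thresholds), cold-chain telemetry can be scanned for spans where the temperature leaves the allowed band long enough to matter:

```python
def find_breaches(readings, low, high, min_consecutive=2):
    """Flag index spans where the value leaves the [low, high] band for
    at least `min_consecutive` consecutive readings."""
    breaches, run_start = [], None
    for i, temp in enumerate(readings):
        out = temp < low or temp > high
        if out and run_start is None:
            run_start = i                      # a breach run begins
        elif not out and run_start is not None:
            if i - run_start >= min_consecutive:
                breaches.append((run_start, i - 1))
            run_start = None                   # run ended
    if run_start is not None and len(readings) - run_start >= min_consecutive:
        breaches.append((run_start, len(readings) - 1))
    return breaches

# Hypothetical refrigerated-truck readings in °C, one per 15 minutes
temps = [3.8, 4.0, 4.1, 7.2, 7.9, 8.1, 4.2, 4.0]
print(find_breaches(temps, low=2.0, high=6.0))  # → [(3, 5)]
```

Knowing *when* the band was breached (readings 3 to 5 here, i.e. a 45-minute window) is what lets the operator trace which leg of the journey damaged the load.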
Customer service
Offering good customer service is essential for any company, and the implementation of conversational assistants enriches the customer experience. These assistants allow users to interact with computer applications conversationally, through text, graphics or voice. By means of speech recognition and natural language processing techniques, these systems are capable of interpreting users' intent and taking the actions needed to respond to their requests. In this way, users can interact with the assistant to track a shipment or to modify or place an order. Training these conversational assistants requires quality data to achieve an optimal result.
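In an extremely simplified form, and with invented intents and keywords, the intent-interpretation step can be sketched as scoring the user's words against keyword sets; production assistants replace this lookup with trained natural language models:

```python
import re

# Hypothetical intents with trigger keywords; real assistants replace
# this table with a trained natural-language-understanding model.
INTENTS = {
    "track_shipment": {"where", "track", "shipment", "package", "status"},
    "modify_order":   {"change", "modify", "update", "order"},
    "place_order":    {"buy", "place", "new", "order"},
}

def detect_intent(utterance):
    """Return the intent whose keyword set overlaps most with the message,
    or 'fallback' when nothing matches at all."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))
    return best if INTENTS[best] & words else "fallback"

print(detect_intent("Where is my package?"))  # → track_shipment
```

The quality-data point in the paragraph above shows up even here: the classifier is only as good as the example phrases and labels it is built from.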
In this article we have seen only some of the applications of artificial intelligence in the different phases of the supply chain, but its potential is not limited to these. Other applications, such as the automated warehousing Amazon uses at its facilities, demand-based dynamic pricing or the application of artificial intelligence in marketing, give an idea of how artificial intelligence is revolutionizing consumption and society.
Content prepared by Jose Antonio Sanchez, expert in Data Science and Artificial Intelligence enthusiast.
Contents and points of view expressed in this publication are the exclusive responsibility of its author.
On April 27, the agreement was signed creating the new Càtedra Dades Obertes, also called CATEDRADES, whose objective is to offer data, information and open knowledge to civil society. Driven by MESURA and ACICOM, this new chair becomes part of the Polytechnic University of Valencia (UPV) through the Higher Technical School of Computer Engineering, the Faculty of Business Administration and Management, and the Center for Quality and Change Management.
It is the first University Chair promoted by civil society associations. This is an important differentiating feature, since the other chairs are supported mainly by companies or Public Administrations and, in some cases, by foundations.
What will be the work of CATEDRADES?
The purpose of the CATEDRADES Chair is the promotion and development of activities focused on obtaining and using data from public administrations and companies, as a basis for their transparency and accountability, at the service of social transformation initiatives and social problems of interest to citizens. It seeks to build a space for exchange and technical collaboration aimed at improving the environment through:
- The support of data platforms
- The development of basic competencies for obtaining, managing and analyzing data in general and data generated by citizens (DGC) in particular, with actions aimed at all citizens and with a special focus on young people.
- The promotion of data journalism
- The design of indicators related to the Sustainable Development Goals (SDGs) of the 2030 Agenda from the perspective of citizens.
The activities of the new chair will focus on environmental, health, sustainability, communication and citizen participation issues, always promoting the management of open knowledge and collective intelligence through the transfer of data, information and knowledge between civil society, public and private entities, the UPV and students. For this, the Service-Learning (ApS) methodology will be used: while learning, students also offer a service to society, thus promoting critical citizenship. In this post you can find some of the activities that have been, and continue to be, carried out by the UPV following this methodology.
MESURA and ACICOM: civil associations to promote open data
The MESURA and ACICOM entities bring together citizens, professionals and organizations from different sectors and specialties united by a common goal: the defense of citizens' rights, especially those related to information and communication.
MESURA and ACICOM promote various projects related to open data that may also be addressed by the CATEDRADES Chair, such as:
- Data Platform "Albufera Junts". An initiative to improve access to information and promote cooperation between the entities involved in the Albufera.
- Citizen Observatory "Nostre Aire". It brings together professionals and entities of different types with the purpose of building a space for exchange and technical collaboration aimed at observing and generating open data to improve air quality in the Valencian Community.
They are also promoting data platforms on the impact of gambling and substance addiction, and on urban tolls.
In summary, the Chair will promote making data available to the public, in general and OD4D in particular, presented in the friendliest way possible, using different elements ranging from the data itself to infographics, presentations or websites. These data will facilitate awareness-raising, the analysis of a complex reality, decision-making and positioning, accountability, collaboration and governance, among other things.
More and more media outlets publish articles linked to so-called data journalism. This form of journalism uses data-related technologies and tools to provide readers with better documented, easier to understand and more engaging information.
In this article we explain what data journalism consists of and we show you some examples of media that already incorporate this modality within their informational processes.
What is data journalism?
Data journalism is a journalistic discipline that brings together fields such as computer science, programming, engineering, statistics, design and journalism, combining data analysis with press narrative in the same space. According to the Data Journalism Handbook, data can be the tool used to tell a story, the source a story is based on, or both.
Data journalism has its origin in Precision Journalism (an evolution of investigative journalism), in which disciplines such as sociology and statistics are incorporated into traditional journalism, and in Computer-Assisted Reporting (CAR), which emerged in 1969 when journalists began to use computer systems to support them in handling the information they collected.
Data journalism goes one step further and is the result of the digital transformation present today in the daily work of many newsrooms around the world. Using resources and tools related to data analytics, information is extracted from large databases. In this way, more valuable and complete information is offered, adapted to the dynamism that digital reading requires.
What products does data journalism offer?
According to the digital magazine Cuadernos de Periodistas, there are at least four types of production that can derive from this discipline, which are generally complementary to each other:
- Data-driven articles: short articles built from large databases. These articles are typical of investigative journalism, and their common denominator is that they are based on surveys or statistics.
- News applications: services that aggregate information and send users notifications about news of interest from different media are increasingly common; Google Discover and Samsung Daily are examples.
- Open datasets: some media openly offer the data resulting from their research, in order to democratize information by making data accessible and available on the Internet in reusable, free formats. As an example, the New York Times offers its coronavirus data openly.
- Interactive visualizations: infographics, graphics or applications that allow the information obtained from the databases to be viewed more clearly, making complex topics easier for readers to understand. Visualizations can complement articles or stand as products in their own right when accompanied by short explanatory texts.
Data journalism in the media
More and more media outlets base part of their news production on data. Here are some examples:
At the national level
Among others, El País, El Mundo and El Periódico have specific sections within their digital editions dedicated to data journalism. In them, the newspapers address current affairs from the perspective offered by data and generate visualizations such as the following: a map showing inequalities in mortality depending on the area where you live.
For its part, EpData is the platform created by Europa Press to facilitate journalists' use of public data, both to enrich the news with graphs and contextual analysis and to check the figures offered by various sources. EpData also offers tools for creating and editing charts. An example of its activity is this article, where you can check the status of dependency-care waiting lists in Spain.
Another example of data journalism is Newtral, an audiovisual content startup founded in 2018 in which data is the basis of all its work, especially in the fight against fake news. For instance, in this article you can see different visualizations of data related to the fluctuation of electricity prices over different months.
On an international level
Data journalism is also part of some of the most important international newspapers. This is the case of The Guardian, which has a specific section dedicated to producing journalistic material through articles, graphics and data-based visualizations. For instance, on this interactive map you can check which areas of Australia suffered the greatest natural disasters in 2020.
Another international outlet with a specific section for data-based journalism is the Argentine newspaper La Nación, which through its data section offers numerous informative productions combining facts and news. For instance, in this article you can see an indicator of the mobility of Argentine citizens.
Masters and studies related to data journalism
The digital transformation means data journalism is here to stay. For this reason, more and more universities and education centers offer studies related to it: for example, the Master's Degree in Data and Visualization Journalism from the University of Alcalá, the Master in Digital and Data Journalism taught by Atresmedia and Nebrija University, or the Data Journalism Course from the UAM-El País School of Journalism.
In short, this is a modality with a future, which needs media that continue to bet on the discipline and professionals capable of handling data analysis and processing tools, but also of telling stories and conveying what is happening in our environment with the support of data, in a truthful and approachable way.
It would therefore not be surprising if this form of journalism continues to grow in the coming years and if, in addition to the examples included in this article, the media outlets and studies related to data journalism keep increasing. If you know of any more and want to share them, do not hesitate to write to us at contacto@datos.gob.es or leave us a message in the comments.
Content prepared by the datos.gob.es team.
Group of professionals specialized in data management and use at different levels.
A reference team in the development, extraction and processing of information, turning it into strategy and value for their clients.
Based on the idea that "the future reusers are, today, in schools", the Barcelona City Council is once again organizing the Barcelona Dades Obertes Challenge, a contest with a high social impact whose main objective is to bring the benefits of open data closer to students and thus increase, from a very early age, the number of people with open data knowledge and skills.
What does it consist of?
The third edition of the Barcelona Dades Obertes Challenge (2020) is a contest in which students develop real analysis and/or interpretation projects using datasets from the Open Data BCN portal. The aim is for students to apply their critical vision to suggest improvements that affect the city and the lives of its inhabitants, while discovering the potential of open data.
In addition to the Barcelona City Council, other bodies participating in the organization of the Barcelona Dades Obertes Challenge are the Consorci d'Educació de Barcelona, the Center for Specific Pedagogical Resources for Support to Innovation and Educational Research (CESIRE) and Barcelona Activa S.A.
Who can participate?
The contest is aimed mainly at students in the 3rd and 4th years of ESO and at vocational training students, prioritizing publicly funded centres. Participation takes place through their teachers, regardless of the subject they teach.
Teachers who wish to can take part in a voluntary training program. There are currently 12 centers enrolled in this course, which covers general open data concepts, the Open Data BCN portal and data processing tools.
What is the deadline for submitting projects?
The submission period opens on February 17 and closes on April 17, 2020.
Each centre will participate with a single project, which will be evaluated by a jury of experts in the field. This jury will select a maximum of 10 projects from all the proposals received.
The students of the selected centers will have to defend their project before the jury in a final act that will be held on May 7, 2020, where the 3 finalists will be chosen.
Are you looking for inspiration? Discover the finalist projects of the 2 previous editions
The previous editions of the Barcelona Dades Obertes Challenge were a success: 14 educational centers, more than 40 teachers and about 500 students took part, demonstrating the knowledge they had acquired and the educational opportunities offered by open data, with all the projects presented being of high quality and interest.
In the first year, the winner was the Institut Ferran Tallada, with a project entitled "Social cohesion goes by neighborhoods", which analyzed social cohesion indexes (ICS) to measure and compare the city's inequalities by district. You can watch the video summary here.
In the second year, the Institut Vila de Gràcia won the main award thanks to the project "Gentrification in the neighborhoods of Barcelona", which showed the process of urban transformation caused by gentrification, using datasets such as the number of inhabitants leaving their neighborhoods or the variation in rent in €/m2. The projects of the EAT Institut Lluïsa Cura and the Institut Joan Brossa won well-deserved second and third prizes. You can watch the video summary here.
How can I participate?
The registration period is not yet open; the rules will be published shortly, followed by the call for entries. Barcelona City Council and datos.gob.es will keep you informed of any news.
You can follow all the information about the Barcelona Dades Obertes Challenge on Twitter under the #OpenDataBCN hashtag.
The Junta de Castilla y León has launched the third edition of its open data contest. Its objective is "to recognize the development of projects that use datasets from the Open Data portal of the Junta de Castilla y León". In short, it seeks to boost open data and the re-user community in Castilla y León.
On this occasion, 3 categories have been established:
- Ideas Category: ideas for creating a study, service, website or mobile application using datasets from the Open Data portal of the Junta de Castilla y León. It gives an opportunity to people or entities that, while lacking the technical capacity, resources or time to implement an open data project, do have the idea.
- Products and Services Category: projects accessible to all citizens on the internet through a URL. These projects must consist of studies, services, websites or mobile applications using datasets from the Open Data portal of the Junta de Castilla y León.
- Educational Resource Category: Creation of new and innovative open educational resources (published with Creative Commons licenses) using data sets from the Open Data portal of the Junta de Castilla y León to help teaching in the classroom.
A jury made up of representatives of the Autonomous Administration and the sponsoring company, GMV, will choose the eight winners based on criteria such as usefulness, originality and public value. Three prizes will be awarded in the Ideas Category, three in the Products and Services Category, and two in the Educational Resources Category for entries by educational centers. In the first two categories (Ideas, and Products and Services), one award is reserved for students enrolled in the 2017-2018 or 2018-2019 academic years.
The prizes will consist of a financial endowment (12,000 euros in total to be distributed) and individualized business development advice provided by the Business Development Area of the Institute for Business Competitiveness of Castilla y León (ICE):
- Ideas Category: First prize: € 2,000 / Second prize: € 1,000 / Prize for students: € 1,000.
- Products and Services Category: First prize: € 3,000 / Second prize: € 1,500 / Prize for students: € 2,000.
- Educational Resource Category: First prize: 1,000 euros in technological material / Second prize: 500 euros in technological material.
Candidatures can be submitted at the electronic office of the Regional Government of Castilla y León: Open Data Contest of the Community of Castilla y León (2018). The contest is open to both natural and legal persons, except public administrations and those individuals or legal entities that have participated directly or indirectly in the call. The submission period runs from December 21, 2018 to February 10, 2019.
The international open data community has an appointment in Buenos Aires on September 27-28, 2018, at the new edition of the International Open Data Conference (IODC). Under the title "The future is open", a participatory event has been organized to address open data challenges and opportunities. The ultimate goal is to promote collaboration among professionals in order to define a strategy that promotes the use of open data both globally and locally.
People interested in attending only have to fill out this online form. Registration is free and the process will be open until the day of the event. Also, journalists wishing to cover the conference can contact contact@opendatacon.org.
An inclusive and innovative agenda
The collaborative spirit of the event is reflected in its agenda. Through a global call for proposals, citizens and researchers were able to contribute their vision of open data, highlighting their interests and concerns. The result is an agenda aligned with the needs of the attendees, which includes presentations, discussion panels, discussion groups and dynamic workshops.
The event will begin on the 27th at 9:00 a.m. local time with an official welcome, followed by an opening plenary session, where speakers will share an overview of the current status of open data in the world. Then, there will be a series of parallel sessions where different topics will be addressed:
- General sessions, focused for example on how to implement an open data policy, as well as more specific sessions on the influence of open data in particular fields such as agriculture, journalism, smart cities or the environment.
- Some sessions will address how open data can help solve some of humanity's current challenges (such as migration and refugee crises, gender issues or climate change).
- Regional sessions will also be held to provide information on the status of open data initiatives in specific territories such as LATAM, Asia, Western Europe or sub-Saharan Africa, among others.
All sessions taking place in the main plenary room will be livestreamed. In addition, simultaneous translation between English and Spanish will be available in all rooms.
A week full of activities
In addition to these sessions, a series of pre-events will be held in the days leading up to the conference. These events complement the main program and offer more opportunities to engage and learn about different topics. Some examples are:
- September 24th. Attendees can visit the Open Cities Summit, with the support of Open Data for Development (OD4D). The objective of this event is to create a road map with concrete actions to develop open cities that improve the lives of citizens. Through presentations, panels and working groups, solutions will be sought to overcome previously identified challenges.
- September 25th. The Open Data Research Symposium will be held, with the participation of The Governance Lab (The GovLab), Open Data for Development (OD4D), the Open Data Research Network (ODRN) and the International Development Research Centre (IDRC). At this event, researchers will present 8-12 papers that provide a critical perspective and support the development of empirically tested theories on the publication and use of open data. These papers will address issues such as the role of open data in decision-making or its value for developing economies. In addition, a workshop will be held during the event to share tools and processes relevant to the research community.
- September 26th. The day before the IODC starts, attendees can enjoy ABRELATAM, an event its organizers describe as an "unconference", since it moves away from the traditional format of speakers presenting a topic to a passive audience. Instead, there will be multiple simultaneous sessions, each moderated by a facilitator who will encourage dialogue among small groups of attendees, based on topics gathered from the common agenda (entrepreneurship, security and privacy, algorithms and technology, etc.).
All these events will serve as a prelude to the intense, activity-packed IODC days. As in the previous edition, held with great success in the city of Madrid, the event is expected to consolidate international relations and encourage concrete actions that take open data strategies around the world a step further.
The Basque Government is organizing two open data competitions with the help of the Provincial Councils of Álava, Bizkaia and Gipuzkoa and the city councils of Bilbao, Donostia-San Sebastián and Vitoria-Gasteiz. The objective is to promote the culture of reusing information generated by local public administrations. For this reason, projects must use at least one of the datasets belonging to the following catalogs: Open Data Euskadi, Gipuzkoa Irekia, Bilbao Open Data, Open Data Vitoria-Gasteiz, Open data of the City of Donostia-San Sebastián and Open Data Bizkaia.
The registration period begins on May 22nd. You can register on the website of each contest, free of charge, until June 30th.
Below, we describe the main characteristics of each contest.
Ideas contest
The ideas contest seeks innovative ideas for creating a service, study, web application or mobile application using some of the aforementioned open datasets. The two best ideas will be awarded 4,000 and 2,000 euros respectively.
This contest is aimed at students, professionals, companies and organizations from all sectors. No technical knowledge is needed to participate, only an idea of what could be achieved using open data.
Application contest
In contrast, the application contest focuses on real projects that provide any type of service, study, web application or mobile application using the open datasets indicated above. In this case, the prize money is 8,000 euros, divided between the first and second place entries (5,000 and 3,000 euros respectively). Since these are real projects, programming knowledge will be necessary.
In both the ideas contest and the applications contest, projects will be selected based on a series of criteria, such as their usefulness (taking into account the magnitude of the problem they solve and the number of potential users), their potential to generate business and become profitable, their social value and their innovative character, among other factors.
The commitment of regional and local administrations to the reuse of open data is driving a growing number of meetings of this type, where technology and open data combine to create solutions that add value to society.
Join in and participate!
Interest in open data keeps growing, and proof of this is the large number of events on the subject to be held in our country over the coming months. Here we summarize the most important ones.
A must-attend event is Open Gov Week, which will take place from May 7th to 11th. This international event is promoted by the Open Government Partnership, a multilateral initiative of 76 countries, including Spain, that aims to "promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance". The program includes courses, seminars, public debates, presentations, open days, contests and hackathons, among other activities (you can see them all here).
Opening up public information to promote its reuse and generate valuable services for citizens is one of the topics to be addressed. The opening session, entitled The Open State: Main Challenges and Opportunities for Public Authorities and Civil Society, includes a panel discussion in which representatives of public authorities, experts and civil society will share their views on the value of open data and the need to protect information. This session will take place on Monday, May 7th, starting at 9:00 a.m.
In addition, during Open Gov Week, various activities have been organized to promote some of the Spanish open data portals. This is the case of the Madrid City Council Open Data Portal. Over two sessions (Thursday, May 10th at 3:30 p.m. and Friday, May 11th at 12:00 p.m.), those in charge of the service will explain how they manage access to public information. This activity is aimed at secondary school and university teachers.
The Transparency and Data Protection Council of Andalusia will also promote its Open Data Portal in a session that includes, among other things, simple examples of public information reuse. The event can be followed via streaming or attended in person on Friday, May 11th at 10:30 a.m.
Public administrations are not the only ones promoting events around open data; we increasingly see private events that address this topic, among other issues. On June 6th and 7th, OpenExpo Europe 2018 will be held in Madrid, where experts will share the latest trends in Open Source, Free Software and the Open World Economy (in which open data plays a prominent role). It is a professional event where companies linked to technological innovation in different fields, such as Business Intelligence, Cloud Computing, cybersecurity or IoT, will showcase their innovations and technological solutions.
Finally, it is also worth highlighting activities aimed at promoting the use of open data among the youngest citizens. On May 3rd, the final presentation of a pilot project of the Barcelona City Council will be held. Through a contest, 3rd and 4th year ESO students have learned to use analysis tools and develop proposals based on data from Open Data BCN. Another example is the Open Summer of Code, an international program to be held in July in Spain and Belgium with the aim of "providing students with the training, support and network necessary to transform open innovation projects into powerful real-world services".
These are just some of the events taking place in the coming months, but every day there are more activities designed to help citizens understand the world of open data, spreading its value and promoting its reuse.