05/26/2019 / José Quintás Alonso

Pablo Manuel and Unidas Podemos

Pablo Manuel surprises me; both he and Casado were the big losers of the last elections and, as heads of their parties, I thought they might engage in some self-criticism.

Yet Pablo Manuel aspires to be a Minister and even Deputy Prime Minister.

I think he does not understand the picture he is standing in, just as he does not understand that tractors and harvesters have banished sickles from farm work.

Personally, I see one portion of the picture like this:

  1. Part of his voters were “on loan” and are turning back to a more useful vote; that “part” amounts to many thousands of citizens who, moreover, view some of his proposals and actions with suspicion. Pedro Sánchez need not be worried by this electoral movement; on the contrary, I believe he is pleased, since it has noticeably improved the PSOE's results.
  2. The “bear hug” we saw applied to various Izquierda Unida groups (heh) that ended up on PSOE lists, or the PP's “nullification” of Unión Valenciana, should make him cautious and therefore lead him to keep his political identity distinct (which would mean marking distance from the Government while in the Government)… and that is undesirable. He will bring up the Valencian way, but I doubt it will much convince whoever is pulling the cart (it was Ximo Puig who joined the 28 April date, not the other way round… what result, in number of votes, will the PSPV obtain in today's municipal elections?)
  3. Against this stands the option of negotiating law by law with the various forces of the parliamentary arc and, if things go badly, repeating the play: new elections and joyful Fridays. Of course, other forces will move… the economy will evolve… the picture is larger and more complex.
  4. The people who voted for the PSOE in May had the option of voting for Unidas Podemos and did not do so (quite the contrary)… so the ball will be in Podemos's court, not the PSOE's.

I think that if today (26 May 2019) his party gets worse results than in the previous (comparable) elections, Pablo Manuel should consider not joining the Government but rather leaving the leadership of Unidas Podemos; anyway, this is not my affair… let them do whatever they royally (or republicanly) please… in fact, I don't even know why I have written these lines… all the more so given the joy on display among “xotos” and “granotas” (even if some of the latter show a touch of, shall we say, indifference?)

 

05/24/2019 / José Quintás Alonso

Inertia is not just a useful concept in Physics

The world is approaching a major inflection point and the intense amount of global angst we are experiencing now stems from deep, structural forces that have been building over decades

Reva Goujon


(Wikipedia: Every body continues in its state of rest or of uniform motion in a straight line, unless it is compelled by impressed forces to change that state.)

 

05/21/2019 / José Quintás Alonso

Artificial Intelligence in the EU: Don't miss the train

Helping to build the train rather than missing it: strengthening our research centres, establishing a policy (when will Spain's White Paper on AI arrive?)… that could well be our attitude.

Link to the document on AI in the EU

Link to GTI-IA

 

EUROPEAN COMMISSION

Brussels, 8.4.2019

COM(2019) 168 final

 

COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS

Building Trust in Human-Centric Artificial Intelligence


1. INTRODUCTION — THE EUROPEAN AI STRATEGY

Artificial intelligence (AI) has the potential to transform our world for the better: it can improve healthcare, reduce energy consumption, make cars safer, and enable farmers to use water and natural resources more efficiently. AI can be used to predict environmental and climate change, improve financial risk management and provide the tools to manufacture, with less waste, products tailored to our needs. AI can also help to detect fraud and cybersecurity threats, and enable law enforcement agencies to fight crime more efficiently.

AI can benefit the whole of society and the economy. It is a strategic technology that is now being developed and used at a rapid pace across the world. Nevertheless, AI also brings with it new challenges for the future of work, and raises legal and ethical questions.

To address these challenges and make the most of the opportunities which AI offers, the Commission published a European strategy[1] in April 2018. The strategy places people at the centre of the development of AI — human-centric AI. It is a three-pronged approach to boost the EU’s technological and industrial capacity and AI uptake across the economy, prepare for socio-economic changes, and ensure an appropriate ethical and legal framework.

To deliver on the AI strategy, the Commission developed together with Member States a coordinated plan on AI (COM(2018) 795), which it presented in December 2018, to create synergies, pool data — the raw material for many AI applications — and increase joint investments. The aim is to foster cross-border cooperation and mobilise all players to increase public and private investments to at least EUR 20 billion annually over the next decade[2]. The Commission doubled its investments in AI in Horizon 2020 and plans to invest EUR 1 billion annually from Horizon Europe and the Digital Europe Programme, in support notably of common data spaces in health, transport and manufacturing, and large experimentation facilities such as smart hospitals and infrastructures for automated vehicles, as well as a strategic research agenda.

To implement such a common strategic research, innovation and deployment agenda the Commission has intensified its dialogue with all relevant stakeholders from industry, research institutes and public authorities. The new Digital Europe programme will also be crucial in helping to make AI available to small and medium-size enterprises across all Member States through digital innovation hubs, strengthened testing and experimentation facilities, data spaces and training programmes.

Building on its reputation for safe and high-quality products, Europe’s ethical approach to AI strengthens citizens’ trust in the digital development and aims at building a competitive advantage for European AI companies. The purpose of this Communication is to launch a comprehensive piloting phase involving stakeholders on the widest scale in order to test the practical implementation of ethical guidance for AI development and use.

2. BUILDING TRUST IN HUMAN-CENTRIC AI

The European AI strategy and the coordinated plan make clear that trust is a prerequisite to ensure a human-centric approach to AI: AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. To achieve this, the trustworthiness of AI should be ensured. The values on which our societies are based need to be fully integrated in the way AI develops.

The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities[3]. These values are common to the societies of all Member States in which pluralism, non-discrimination, tolerance, justice, solidarity and equality prevail. In addition, the EU Charter of Fundamental Rights brings together – in a single text – the personal, civic, political, economic and social rights enjoyed by people within the EU.

The EU has a strong regulatory framework that will set the global standard for human-centric AI. The General Data Protection Regulation ensures a high standard of protection of personal data, and requires the implementation of measures to ensure data protection by design and by default[4]. The Free Flow of Non-Personal Data Regulation removes barriers to the free movement of non-personal data and ensures the processing of all categories of data anywhere in Europe. The recently adopted Cybersecurity Act will help to strengthen trust in the online world, and the proposed ePrivacy Regulation[5] also aims at this goal.

Nevertheless, AI brings new challenges because it enables machines to “learn” and to take and implement decisions without human intervention. Before long, this kind of functionality will become standard in many types of goods and services, from smart phones to automated cars, robots and online applications. Yet, decisions taken by algorithms could result from data that is incomplete and therefore not reliable, they may be tampered with by cyber-attackers, or they may be biased or simply mistaken. Unreflectively applying the technology as it develops would therefore lead to problematic outcomes as well as reluctance by citizens to accept or use it.

Instead, AI technology should be developed in a way that puts people at its centre and is thus worthy of the public’s trust. This implies that AI applications should not only be consistent with the law, but also adhere to ethical principles and ensure that their implementations avoid unintended harm. Diversity in terms of gender, racial or ethnic origin, religion or belief, disability and age should be ensured at every stage of AI development. AI applications should empower citizens and respect their fundamental rights. They should aim to enhance people’s abilities, not replace them, and also enable access by people with disabilities.

Therefore, there is a need for ethics guidelines that build on the existing regulatory framework and that should be applied by developers, suppliers and users of AI in the internal market, establishing an ethical level playing field across all Member States. This is why the Commission has set up a high-level expert group on AI[6] representing a wide range of stakeholders and has tasked it with drafting AI ethics guidelines as well as preparing a set of recommendations for broader AI policy. At the same time, the European AI Alliance[7], an open multi-stakeholder platform with over 2700 members, was set up to provide broader input for the work of the AI high-level expert group.

The AI high-level expert group published a first draft of the ethics guidelines in December 2018. Following a stakeholder consultation[8] and meetings with representatives from Member States[9], the AI expert group has delivered a revised document to the Commission in March 2019. In their feedback so far, stakeholders overall have welcomed the practical nature of the guidelines and the concrete guidance they offer to developers, suppliers and users of AI on how to ensure trustworthiness.

2.1. Guidelines for trustworthy AI drafted by the AI high-level expert group

The guidelines drafted by the AI high-level expert group, to which this Communication refers[10], build in particular on the work done by the European Group on Ethics in Science and New Technologies and the Fundamental Rights Agency.

The guidelines postulate that in order to achieve ‘trustworthy AI’, three components are necessary: (1) it should comply with the law, (2) it should fulfil ethical principles and (3) it should be robust.

Based on these three components and the European values set out in section 2, the guidelines identify seven key requirements that AI applications should respect to be considered trustworthy. The guidelines also include an assessment list to help check whether these requirements are fulfilled.

The seven key requirements are:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability

While these requirements are intended to apply to all AI systems in different settings and industries, the specific context in which they are applied should be taken into account for their concrete and proportionate implementation, taking an impact-based approach. For illustration, an AI application suggesting an unsuitable book to read is much less perilous than one misdiagnosing a cancer, and could therefore be subject to less stringent supervision.
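As a minimal sketch of how these ideas could be operationalised, the following Python fragment pairs each of the seven requirements with an example check question and maps a risk level to a proportionate review depth. The questions and risk tiers are invented for illustration; the authoritative assessment list is the one in the expert group's guidelines.

```python
# Illustrative only: a self-assessment sketch pairing the seven key requirements
# with an impact-based (risk-proportionate) review depth. The check questions and
# risk tiers are invented examples, not the expert group's official assessment list.

REQUIREMENTS = {
    "Human agency and oversight": "Can a human override or contest the system's decisions?",
    "Technical robustness and safety": "Is there a tested fall-back plan for erroneous outcomes?",
    "Privacy and data governance": "Is access to personal data governed and documented?",
    "Transparency": "Are decisions logged and explainable to the persons affected?",
    "Diversity, non-discrimination and fairness": "Have data sets been checked for historic bias?",
    "Societal and environmental well-being": "Has the wider social and ecological impact been assessed?",
    "Accountability": "Can internal and external auditors evaluate the system?",
}

def review_depth(risk: str) -> str:
    """Map an application's risk level to a proportionate level of supervision."""
    # e.g. a book recommender would be "low"; a cancer-diagnosis aid would be "high".
    return {"low": "self-assessment",
            "medium": "documented internal audit",
            "high": "external audit and strict governance"}[risk]

for requirement, question in REQUIREMENTS.items():
    print(f"{requirement}: {question}")
print("Review depth for a diagnostic aid:", review_depth("high"))
```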

The guidelines drafted by the AI high-level expert group are non-binding and as such do not create any new legal obligations. However, many existing (and often use- or domain-specific) provisions of Union law of course already reflect one or several of these key requirements, for example safety, personal data protection, privacy or environmental protection rules.

The Commission welcomes the work of the AI high-level expert group and considers it valuable input for its policy-making.

2.2. Key requirements for trustworthy AI

The Commission supports the following key requirements for trustworthy AI, which are based on European values. It encourages stakeholders to apply the requirements and to test the assessment list that operationalises them in order to create the right environment of trust for the successful development and use of AI. The Commission welcomes feedback from stakeholders to evaluate whether this assessment list provided in the guidelines requires further adjustment.

  1. Human agency and oversight

AI systems should support individuals in making better, more informed choices in accordance with their goals. They should act as enablers to a flourishing and equitable society by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy. The overall wellbeing of the user should be central to the system’s functionality.

Human oversight helps to ensure that an AI system does not undermine human autonomy or cause other adverse effects. Depending on the specific AI-based system and its application area, the appropriate degrees of control measures, including the adaptability, accuracy and explainability of AI-based systems, should be ensured[11]. Oversight may be achieved through governance mechanisms such as ensuring a human-in-the-loop, human-on-the-loop, or human-in-command approach.[12] It must be ensured that public authorities have the ability to exercise their oversight powers in line with their mandates. All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance are required.
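A minimal sketch of the three oversight mechanisms just named, assuming invented type and function names (footnote 12 defines the terms; nothing in this fragment is prescribed by the guidelines):

```python
# Hypothetical sketch of the three oversight approaches. The HITL/HOTL/HIC
# semantics follow footnote 12; all names and logic here are invented examples.
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human intervenes in every decision cycle"
    HUMAN_ON_THE_LOOP = "human monitors operation and can intervene"
    HUMAN_IN_COMMAND = "human decides whether and how to use the system at all"

def decide(model_output: str, mode: Oversight, approve) -> str:
    """Gate an AI decision according to the configured oversight mode."""
    if mode is Oversight.HUMAN_IN_THE_LOOP:
        # Every single decision requires explicit human approval.
        return model_output if approve(model_output) else "escalated to a human reviewer"
    # HOTL/HIC: the decision passes, but stays monitored and overridable elsewhere.
    return model_output

# Example: a human reviewer rejects the proposed output.
print(decide("deny loan", Oversight.HUMAN_IN_THE_LOOP, approve=lambda d: False))
```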

  2. Technical robustness and safety

Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of the AI system, and to adequately cope with erroneous outcomes. AI systems need to be reliable, secure enough to be resilient against both overt attacks and more subtle attempts to manipulate data or algorithms themselves, and they must ensure a fall-back plan in case of problems. Their decisions must be accurate, or at least correctly reflect their level of accuracy, and their outcomes should be reproducible.

In addition, AI systems should integrate safety and security-by-design mechanisms to ensure that they are verifiably safe at every step, taking to heart the physical and mental safety of all concerned. This includes the minimisation and where possible the reversibility of unintended consequences or errors in the system's operation. Processes to clarify and assess potential risks associated with the use of AI systems, across various application areas, should be put in place.

  3. Privacy and data governance

Privacy and data protection must be guaranteed at all stages of the AI system’s life cycle. Digital records of human behaviour may allow AI systems to infer not only individuals’ preferences, age and gender but also their sexual orientation, religious or political views. To allow individuals to trust the data processing, it must be ensured that they have full control over their own data, and that data concerning them will not be used to harm or discriminate against them.

In addition to safeguarding privacy and personal data, requirements must be fulfilled to ensure high-quality AI systems. The quality of the data sets used is paramount to the performance of AI systems. When data is gathered, it may reflect socially constructed biases, or contain inaccuracies, errors and mistakes. This needs to be addressed prior to training an AI system with any given data set. In addition, the integrity of the data must be ensured. Processes and data sets used must be tested and documented at each step such as planning, training, testing and deployment. This should also apply to AI systems that were not developed in-house but acquired elsewhere. Finally, the access to data must be adequately governed and controlled.

  4. Transparency

The traceability of AI systems should be ensured; it is important to log and document both the decisions made by the systems, as well as the entire process (including a description of data gathering and labelling, and a description of the algorithm used) that yielded the decisions. Linked to this, explainability of the algorithmic decision-making process, adapted to the persons involved, should be provided to the extent possible. Ongoing research to develop explainability mechanisms should be pursued. In addition, explanations of the degree to which an AI system influences and shapes the organisational decision-making process, design choices of the system, as well as the rationale for deploying it, should be available (hence ensuring not just data and system transparency, but also business model transparency).
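As a minimal sketch of what such a decision trace could look like, the record below bundles the elements the paragraph names: the data gathering and labelling descriptions, the algorithm used, and the decision with its explanation. All field names and example values are invented for illustration, not prescribed by the guidelines.

```python
# Minimal sketch of a decision-trace record for logging and documentation.
# Field names are illustrative assumptions, not prescribed by the guidelines.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    system_id: str
    algorithm_description: str       # e.g. which model family and version was used
    data_gathering_description: str  # how and where the data was collected
    labelling_description: str       # who labelled it and under what protocol
    decision: str
    explanation: str                 # explanation adapted to the person affected
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

trace = DecisionTrace(
    system_id="credit-scoring-demo",
    algorithm_description="logistic regression on 12 applicant features",
    data_gathering_description="loan applications 2015-2018, one member state",
    labelling_description="repayment outcome recorded by the lender",
    decision="application declined",
    explanation="income-to-debt ratio below the approval threshold",
)
print(trace)  # in a real system this record would be appended to an audit log
```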

Finally, it is important to adequately communicate the AI system’s capabilities and limitations to the different stakeholders involved in a manner appropriate to the use case at hand. Moreover, AI systems should be identifiable as such, ensuring that users know they are interacting with an AI system and which persons are responsible for it.

  5. Diversity, non-discrimination and fairness

Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to (in)direct discrimination. Harm can also result from the intentional exploitation of (consumer) biases or from engaging in unfair competition. Moreover, the way in which AI systems are developed (e.g. the way in which the programming code of an algorithm is written) may also suffer from bias. Such concerns should be tackled from the beginning of the system's development.

Establishing diverse design teams and setting up mechanisms ensuring participation, in particular of citizens, in AI development can also help to address these concerns. It is advisable to consult stakeholders who may directly or indirectly be affected by the system throughout its life cycle. AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility through a universal design approach to strive to achieve equal access for persons with disabilities.

  6. Societal and environmental well-being

For AI to be trustworthy, its impact on the environment and other sentient beings should be taken into account. Ideally, all humans, including future generations, should benefit from biodiversity and a habitable environment. Sustainability and ecological responsibility of AI systems should hence be encouraged. The same applies to AI solutions addressing areas of global concern, such as for instance the UN Sustainable Development Goals.

Furthermore, the impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole. The use of AI systems should be given careful consideration particularly in situations relating to the democratic process, including opinion-formation, political decision-making or electoral contexts. Moreover, AI’s social impact should be considered. While AI systems can be used to enhance social skills, they can equally contribute to their deterioration.

  7. Accountability

Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their implementation. Auditability of AI systems is key in this regard, as the assessment of AI systems by internal and external auditors, and the availability of such evaluation reports, strongly contributes to the trustworthiness of the technology. External auditability should especially be ensured in applications affecting fundamental rights, including safety-critical applications.

Potential negative impacts of AI systems should be identified, assessed, documented and minimised. The use of impact assessments facilitates this process. These assessments should be proportionate to the extent of the risks that the AI systems pose. Trade-offs between the requirements – which are often unavoidable – should be addressed in a rational and methodological manner, and should be accounted for. Finally, when unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress.

2.3. Next steps: a piloting phase involving stakeholders on the widest scale

Reaching consensus on these key requirements for AI systems is a first important milestone towards guidelines for ethical AI. As a next step, the Commission will ensure that this guidance can be tested and implemented in practice.

To this end, the Commission will now launch a targeted piloting phase designed to obtain structured feedback from stakeholders. This exercise will focus in particular on the assessment list which the high-level expert group has drawn up for each of the key requirements.

This work will have two strands: (i) a piloting phase for the guidelines involving stakeholders who develop or use AI, including public administrations, and (ii) a continued stakeholder consultation and awareness-raising process across Member States and different groups of stakeholders, including industry and service sectors:

  • Starting in June 2019, all stakeholders and individuals will be invited to test the assessment list and provide feedback on how to improve it. In addition, the AI high-level expert group will set up an in-depth review with stakeholders from the private and the public sector to gather more detailed feedback on how the guidelines can be implemented in a wide range of application domains. All feedback on the guidelines' workability and feasibility will be evaluated by the end of 2019.
  • In parallel, the Commission will organise further outreach activities, giving representatives of the AI high-level expert group the opportunity to present the guidelines to relevant stakeholders in the Member States, including industry and service sectors, and providing these stakeholders with an additional opportunity to comment on and contribute to the AI guidelines.

The Commission will take into account the work of the group of experts on ethics for connected and automated driving[13] and work with EU-funded research projects on AI and with relevant public-private partnerships on implementing the key requirements[14]. For example, the Commission will support, in coordination with Member States, the development of a common database of health images initially dedicated to the most common forms of cancer, so that algorithms can be trained to diagnose symptoms with very high accuracy. Similarly, the cooperation of the Commission and Member States enables an increasing number of cross-border corridors for testing connected and automated vehicles. The guidelines should be applied in these projects and tested, and the results will feed into the evaluation process.

The piloting phase and the stakeholder consultation will benefit from the contribution of the European AI Alliance and AI4EU, the AI on-demand platform. The AI4EU project[15], launched in January 2019, brings together algorithms, tools, datasets and services to help organisations, in particular small and medium-size enterprises, to implement AI solutions. The European AI Alliance, together with AI4EU, will continue to mobilise the AI ecosystem across Europe, also in view of piloting the AI ethics guidelines and promoting the respect for human-centric AI.

At the beginning of 2020, building on the evaluation of feedback received during the piloting phase, the AI high-level expert group will review and update the guidelines. Based on the review and on the experience acquired, the Commission will evaluate the outcome and propose any next steps.

Ethical AI is a win-win proposition. Guaranteeing the respect for fundamental values and rights is not only essential in itself, it also facilitates acceptance by the public and increases the competitive advantage of European AI companies by establishing a brand of human-centric, trustworthy AI known for ethical and secure products. This builds more generally on the strong reputation of European companies for providing safe and secure products of high quality. The pilot phase will help to ensure that AI products fulfil this promise.

2.4. Towards international AI ethics guidelines

International discussions on AI ethics have intensified after Japan’s G7 Presidency put the topic high on the agenda in 2016. Given the international interlinkages of AI development in terms of data circulation, algorithmic development and research investments, the Commission will continue its efforts to bring the Union’s approach to the global stage and build a consensus on a human-centric AI[16].

The work done by the AI high-level expert group, and more specifically the list of requirements and the engagement process with stakeholders, provides the Commission with additional valuable input for contributing to the international discussions. The European Union can have a leadership role in developing international AI guidelines and, if possible, a related assessment mechanism.

Therefore, the Commission will:

Strengthen cooperation with like-minded partners:

  • exploring the extent to which convergence can be achieved with third countries' draft ethics guidelines (e.g. Japan, Canada, Singapore) and, building on this group of like-minded countries, to prepare for a broader discussion, supported by actions implementing the Partnership Instrument for cooperation with third countries[17]; and
  • exploring how companies from non-EU countries and international organisations can contribute to the ‘pilot phase’ of the guidelines through testing and validation.

Continue to play an active role in international discussions and initiatives:
  • contributing to multilateral fora such as the G7 and G20;
  • engaging in dialogues with non-EU countries and organising bilateral and multilateral meetings to build a consensus on human-centric AI;
  • contributing to relevant standardisation activities in international standards development organisations to promote this vision; and
  • strengthening the collection and diffusion of insights on public policies, working jointly with relevant international organisations.

 

3. CONCLUSIONS

The EU is founded on a set of fundamental values and has constructed a strong and balanced regulatory framework on these foundations. Building on this existing regulatory framework, there is a need for ethics guidelines for the development and use of AI due to its novelty and the specific challenges this technology brings. Only if AI is developed and used in a way that respects widely-shared ethical values can it be considered trustworthy.

With a view to this objective, the Commission welcomes the input prepared by the AI high-level expert group. Based on the key requirements for AI to be considered trustworthy, the Commission will now launch a targeted piloting phase to ensure that the resulting ethical guidelines for AI development and use can be implemented in practice. The Commission will also work to forge a broad societal consensus on human-centric AI, including with all involved stakeholders and our international partners.

The ethical dimension of AI is not a luxury feature or an add-on: it needs to be an integral part of AI development. By striving towards human-centric AI based on trust, we safeguard the respect for our core societal values and carve out a distinctive trademark for Europe and its industry as a leader in cutting-edge AI that can be trusted throughout the world.

To ensure the ethical development of AI in Europe in its wider context, the Commission is pursuing a comprehensive approach including in particular the following lines of action to be implemented by the third quarter of 2019:

  • It will start launching a set of networks of AI research excellence centres through Horizon 2020. It will select up to four networks, focusing on major scientific or technological challenges such as explainability and advanced human-machine interaction, which are key ingredients for trustworthy AI.
  • It will begin setting up networks of digital innovation hubs[18] focussing on AI in manufacturing and on big data.
  • Together with Member States and stakeholders, the Commission will start preparatory discussions to develop and implement a model for data sharing and making best use of common data spaces, with a focus notably on transport, healthcare and industrial manufacturing.[19]

In addition, the Commission is working on a report on the challenges posed by AI to the safety and liability frameworks and a guidance document on the implementation of the Product Liability Directive[20]. At the same time, the European High-Performance Computing Joint Undertaking (EuroHPC)[21] will develop the next generation of supercomputers, because computing capacity is essential for processing data and training AI, and Europe needs to master the full digital value chain. The ongoing partnership with Member States and industry on microelectronic components and systems (ECSEL)[22] as well as the European Processor Initiative[23] will contribute to the development of low-power processor technology for trustworthy and secure high-performance and edge computing.

Just like the work on ethical guidelines for AI, all these initiatives build on close cooperation of all concerned stakeholders, Member States, industry, societal actors and citizens. Overall, Europe’s approach to Artificial Intelligence shows how economic competitiveness and societal trust must start from the same fundamental values and mutually reinforce each other.

[1] COM(2018) 237.

[2] To help reach this goal, the Commission proposed, under the next programming period 2021-2027, that the Union allocates at least EUR 1 billion per year in funding from the Horizon Europe and Digital Europe programmes to invest in AI.

[3] In addition, the EU is a party to the UN Convention on the Rights of persons with disabilities.

[4] Regulation (EU) 2016/679. The General Data Protection Regulation (GDPR) guarantees the free flow of personal data within the Union. It contains provisions on decision-making based solely on automated processing, including profiling. The individuals concerned have the right to be informed about the existence of automated-decision making and to receive meaningful information about the logic involved in the automated decision-making and about the significance and envisaged consequences of the processing for them. They also have the right in such cases to obtain human intervention, to express their point of view and to contest the decision.

[5]    COM(2017) 10.

[6] https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

[7] https://ec.europa.eu/digital-single-market/en/european-ai-alliance

[8] This consultation resulted in comments from 511 organisations, associations, companies, research institutes, individuals and others. A summary of the feedback received is available at: https://ec.europa.eu/futurium/en/system/files/ged/consultation_feedback_on_draft_ai_ethics_guidelines_4.pdf

[9] The work of the expert group was positively received by Member States, with the Council conclusions adopted on 18 February 2019 inter alia taking note of the forthcoming publication of the ethics guidelines and supporting the Commission's effort to bring an EU ethical approach to the global stage: https://data.consilium.europa.eu/doc/document/ST-6177-2019-INIT/en/pdf

[10] https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top

[11] The General Data Protection Regulation gives individuals the right not to be subject to a decision based solely on automated processing when this produces legal effects on users or similarly significantly affects them (Article 22 GDPR).

[12] Human-in-the-loop (HITL) refers to the human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable. Human-on-the-loop (HOTL) refers to the capability for human intervention during the design cycle of the system and monitoring the system's operation. Human-in-command (HIC) refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation. This can include the decision not to use an AI system in a particular situation, to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by the system.

[13] See the Commission’s Communication on connected and automated mobility, COM(2018) 283.

[14] In the Framework of the European Defence Fund, the Commission will also develop specific ethical guidance for the evaluation of project proposals in the area of AI for defence.

[15] https://ec.europa.eu/digital-single-market/en/news/artificial-intelligence-ai4eu-project-launches-1-january-2019

[16] The High Representative of the Union for Foreign Affairs and Security Policy will, with the support of the Commission, build on consultations in the United Nations, the Global Tech Panel, and other multilateral fora, and in particular coordinate proposals for addressing the complex security challenges involved.

[17] Regulation (EU) No 234/2014 of the European Parliament and of the Council of 11 March 2014 establishing a Partnership Instrument for cooperation with third countries (OJ L 77, 15.3.2014, p. 77). For instance, the planned project on ‘An international alliance for a human-centric approach to artificial intelligence’ will facilitate joint initiatives with like-minded partners, in order to promote ethical guidelines and to adopt common principles and operational conclusions. It will enable the EU and like-minded countries to discuss operational conclusions resulting from the ethical guidelines on AI proposed by the expert group in order to reach a common approach. Moreover, it will provide for monitoring the uptake of AI technology globally. Finally, the project plans to organise public diplomacy activities accompanying international events, e.g. by the G7, G20 and the Organisation for Economic Co-operation and Development.

[18] http://s3platform.jrc.ec.europa.eu/digital-innovation-hubs

[19] The necessary resources will be mobilised from Horizon 2020 (under which close to 1.5 billion EUR are dedicated to AI for the period 2018-2020) and its planned successor Horizon Europe, the Digital part of the Connecting Europe Facility and especially the future Digital Europe Programme. Projects will also draw on resources from the private sector and Member State programmes.

[20] See the Commission’s Communication Artificial Intelligence for Europe, COM (2018) 237.

[21] https://eurohpc-ju.europa.eu

[22]     http://www.ecsel.eu

[23] http://www.european-processor-initiative.eu

05/17/2019 / José Quintás Alonso

Artificial Intelligence

Gone are the years when, in the first AI class, the customary joke was made: What is AI? “Something” with a lot of the artificial and little of the intelligence.

I leaf with some nostalgia through this 1987 text.

Today the field seems much more mature: it is being used successfully in diverse areas and is being researched (which implies that funding exists).

It is time to start worrying a little about the implications of its development.

A few days ago I watched Transcendence, a film I recommend because it introduces the subject (albeit “jumping ahead” in time… today the human brain is still an unknown, a little less so than yesterday… we still cannot put on a “cap” and transmit our thoughts to another person equipped the same way… but we can move the mouse…).

At this link there is an article on the subject by the recently deceased Hawking, together with others.

I attach the translation:

Independent

“With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it is tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.

Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as Erik Brynjolfsson and Andrew McAfee emphasised in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from the film: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their own design, triggering what Vernor Vinge called a ‘singularity’ and Johnny Depp's film character calls ‘transcendence’.

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying ‘We'll arrive in a few decades’, would we just reply ‘OK, call us when you get here, we'll leave the lights on’? Probably not, but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute and the Machine Intelligence Research Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”

And if you enjoy fiction, a classic that is apt here:

05/15/2019 / José Quintás Alonso

Hiking in Anaga

Suppose you go to Tenerife with many objectives or with none: the days will pass in a similar way, though with variety, because there are many things to see, for example:

  1. Isn't it interesting to see one or more Canary banana plantations?
  2. Isn't it interesting to see one or more vineyards and their wineries?
  3. Isn't it interesting to see one or more specimens of the drago tree?
  4. Isn't it interesting to see local architecture, for example in La Laguna?
  5. Isn't it interesting to see how VAT affects different goods?
  6. Isn't it interesting to see narrow, vertiginous ravines?
  7. Isn't it interesting to see a sunset over the Teide caldera?
  8. etc., etc., etc.…

However, I am already past that stage.

If I go to Tenerife again, it will be to do something like this:

Description                    Wikiloc ID   Km      Elevation gain (m)   Elevation loss (m)
Mirador Cruz Carmen-Aguaide    36226347     12.79   555                  555

 

Bear in mind that some places require a permit and allow a limited number of visitors per day (sometimes fewer than 50 people). In other words, the trip has to be planned MONTHS in advance; I think you first have to obtain the necessary permits and then fit the trip around them (flights, hotel or flat…).

In the information on the Anaga rural park I count a total of 20 trails (some can be combined with others to avoid going out and back the same way); most are of medium difficulty, followed by the easy ones, and only three are classed as high difficulty. That list does not include the “Enchanted Forest”, El Pijaral (permit required). The Albergue (hostel) is interesting.

Finally, don't forget the Masca area and Las Portelas… nor the Teide.

Trails on La Gomera, which is close by?

On Wikiloc, Nacho1951 has plenty of tracks.

Well, where was I?… Right: “If I go to Tenerife again, it will be to do something like this”… that is, what I wrote above.

05/13/2019 / José Quintás Alonso

FEAR…

Article by Andrés Betancor, Professor of Administrative Law at Pompeu Fabra University. Published in El Mundo, Thursday, 25 April 2019.

“Bob Woodward's book on President Trump is titled Fear. It opens with a sentence that Woodward puts in Trump's mouth: ‘Real power is, I don't even want to use the word, fear.’

Fear is power; ‘real power’. Even the very young environmental activist Greta Thunberg knows it: before the European Parliament's Environment Committee she declared, ‘I want you to panic, because the house is on fire.’ That is how she hopes to mobilise consciences and wills to reduce the emissions that cause climate change.

Whoever controls fear wins. This is political fear: fear as an instrument of power and at its service. It is an innate emotion; very useful and even necessary. It keeps us away from dangers; the great workhorse of our survival as a species. Faced with danger, a pre-programmed reaction, from flight to defence. That reaction is what matters to power. It directs wills; it manipulates them, conditions them, steers them wherever suits the powerful.

Political fear has always been present in reflection on the State. Hobbes spoke of fear as a source legitimising and justifying domination, the absolute State. The sovereign frees people from the fear of the state of nature in exchange for the sacrifice of handing over their freedom. Machiavelli advised the Prince to use fear to keep power: ‘It is safer to be feared than loved’; ‘love is a bond of gratitude which men, wicked by nature, break whenever it profits them; but fear is dread of punishment, which is never lost.’

Fear enslaves; it is the weapon of despotism against freedom. Create fear and, as Machiavelli said, you will be obeyed. Against that dominating fear, liberalism, with Montesquieu at its head, demands institutional mechanisms (separation of powers and the rule of law) that free people from fear so that they can be free. Because, as J. Locke argued, the move from the state of nature to civil society is made to protect the person, but without renouncing freedom and property. The State must be shaped not to enslave, as Hobbes wished, but to provide security, that is, to free man from fear. That is the democratic State under the rule of law: the State subject to the citizens' law and protector of freedoms, in particular those of minorities, those of the dissident.

Franklin D. Roosevelt was the first to place fear on the political agenda of the 20th century, as H. Bude has recalled, when in his presidential address of March 1933, after the terrible years of the Great Depression, he stated that ‘the only thing we have to fear is fear itself’. The Government's first task, therefore, is to take fear away from the citizens. That is the aim of the welfare State: that poverty, illness, unemployment, old age and so many other misfortunes should not be the terrifying sources that enslave.

Neither the rule of law, nor the democratic State, nor the welfare State has been able to end the politics of fear. The history of the 20th century up to the present day has continued to offer examples, even terrible ones, of its use. Among us it is still routine. The PP used it against the rise of Podemos, and now the PSOE uses it against Vox. Whoever holds power uses it to stay in power. Politicians define a threat which they manage and brandish to provoke the reaction that benefits them. For the threat to produce that result it must be credible. It may be real or imaginary, but credible it must be. The history of Spain undoubtedly helps.

All the violence that has blackened the history of Spain since the 19th century, through the Carlist Wars and the Civil War, the Franco dictatorship and, lastly, terrorism, has created the context in which the seed of fear can grow and prosper. The confrontation among Spaniards, under different ideological and political flags, over so many years has fed the civil-war mindset, which lends credibility to the use of the political threat. Keeping it alive is the indispensable condition for the politics of fear to endure. Taking Franco out for a walk and keeping Historical Memory alive are necessary to sustain fear's ideological frame of reference.

During the election campaign it was even said, by the Socialist spokeswoman A. Lastra, that ‘we have fascism at the gates of Congress’. The words chosen are not the product of chance or of a fever. It is feeding fear, credible in the context of the civil-war mindset. Nobody worries about, let alone takes an interest in, the consequences of feeding fear: it is the Trojan horse of populism.

A fearful, cowed citizenry is not free. As Bude recalled, paraphrasing Roosevelt, ‘free men must have no fear of fear, because it can cost them their self-determination’. It is easy prey to anguish. In a context of so many insecurities, in which the middle classes, as the OECD reported (Under Pressure: The Squeezed Middle Class), consider themselves unfairly treated and are terrified of losing what they have and of falling into poverty, fear is giving wings to anguish just as anguish gives wings to fear. The vicious circle of fear and anguish is pushing towards populism: the security it offers; the simple solutions; the dissolution of the self in the amorphous mass behind the leader who has the solution to their problems behind the walls of sovereignty, of the nation, of the State, of power.

Fear not only cows the citizenry; it feeds populism. Fear of Vox is, paradoxically, Vox's best nourishment. And is Vox not, in turn, making use of fear itself? The fear of some against the fear of others; fear against fear; the spiral towards hell into which some seek to drag us.

We need only fear the augurs of terror, the fascists of fascism, the terrorists of terror. The institutions of the democratic State under the rule of law are the only ones, as J. Shklar and her ‘liberalism of fear’ held, that offer protection against fear. A democratic State under the rule of law that has withstood two coups d'état with integrity and determination: whom can it fear? The reaction to the October coup in Catalonia is the most conclusive proof of Spain's institutional strength. Our democracy manages illegality through the rule of law and will administer the punishment that falls to those responsible. It even withstands falsehood and mendacity in international forums. We are not frightened by Torra or Puigdemont going around saying that Spain is not a democracy. The Catalan civic movement against secession and for democracy is the quintessential proof that fear can and must be defeated.

Whom can we fear? Only fear itself, because once it goes out for a walk, to the complacency and approval of some, it will be projected, at the choosing of the politicians of fear, onto one group after another. Who will the threats be: immigrants, the poor, transsexuals, adversaries…? There is no limit to the politics of fear; the threat, manipulation and lies are enough to create the ‘enemy’.

Fear is a political emotion so narrow (and so wretched) that only despots, or aspiring despots, still see its usefulness in an advanced democracy like ours. They do not notice that it feeds the anguish gripping the hearts of millions of people, ready victims to be captivated by the answer populism offers them. Fear feeds it. And some, carried along by their short-termism, remain absorbed in electoral strategies that think only of tomorrow; what comes afterwards neither matters to them nor interests them.”

 

05/03/2019 / José Quintás Alonso

Elections of 28 April 2019

Results with more than 99.95% of the vote counted:

Party                     Seats 28-4-2019   Seats 2016
PSOE                      123               85
PP                        66                137
Ciudadanos                57                32
Podemos                   35                45
Nationalists and others   45                50
VOX                       24                0
Total                     350               349 (1 deputy unaccounted for?)

Winners and losers, roughly speaking:

Winners             Losers
PSOE +38            PP -71
Ciudadanos +25      Podemos -10
Vox +24             Nationalists and others -5
Total: 87 seats     Total: 86 seats

Obviously:

  1. The PSOE will seek to govern alone. I do not see it as feasible that EVERYONE lines up against it (which would lead to new elections in November… but of 2020 or beyond).
  2. The PP's losses are very large; one may ask whether it will follow the path of the UCD.
  3. To the idea of Casado and Iglesias resigning, two objections can be raised:
    1. Not until next month's municipal elections have passed.
    2. They show no sign of personal self-criticism.

Party                  Deputies   Votes       Votes per deputy
VOX                    24         2,677,173   111,549
PODEMOS-IU-EQUO        35         3,118,191   89,091
ECP-GUANYEM EL CANVI   7          614,738     87,820
Cs                     57         4,136,600   72,572
JxCAT-JUNTS            7          497,638     71,091
CCa-PNC                2          137,196     68,598
ERC-SOBIRANISTES       15         1,015,355   67,690
PP                     66         4,356,023   66,000
EAJ-PNV                6          394,627     65,771
EH Bildu               4          258,840     64,710
PSOE                   123        7,480,755   60,819
NA+                    2          107,124     53,562
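The last column is simply each list's total votes divided by its seats. A few lines of Python (purely illustrative; the data is copied from the table above) reproduce the figures:

```python
# Reproduce the "votes per deputy" column: total votes divided by seats won.
results = {"VOX": (24, 2_677_173), "PODEMOS-IU-EQUO": (35, 3_118_191),
           "Cs": (57, 4_136_600), "PP": (66, 4_356_023), "PSOE": (123, 7_480_755)}
for party, (seats, votes) in results.items():
    print(f"{party}: {round(votes / seats):,} votes per deputy")
# VOX: 111,549 ... PSOE: 60,819; the cheapest seats went to the PSOE.
```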

 
