Europe should ban AI for mass surveillance and social credit scoring, says advisory group – TechCrunch

An independent expert group tasked with advising the European Commission on its regulatory response to artificial intelligence — to underpin EU lawmakers’ stated aim of ensuring AI developments are “human centric” — has published its policy and investment recommendations.



The HLEG also calls for support for developing mechanisms for the protection of personal data, and for individuals to “control and be empowered by their data” — which they argue would address “some aspects of the requirements of trustworthy AI”. “Tools should be developed to provide a technological implementation of the GDPR and develop privacy preserving/privacy by design technical methods to explain criteria, causality in personal data processing of AI systems (such as federated machine learning),” they write. “Support technological development of anonymisation and encryption techniques and develop standards for secure data exchange based on personal data control.”
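The report names federated machine learning as one privacy-by-design method. The sketch below illustrates the core idea in minimal form — each client trains on its own data locally and shares only model weights, never the raw personal data, with a server that averages them (the standard federated-averaging scheme). The clients, data, and linear model here are invented for illustration and are not drawn from the HLEG report.

```python
# Minimal federated averaging (FedAvg) sketch: raw data never
# leaves the clients; only model weights are exchanged.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's local gradient-descent pass on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients with private data drawn from the same rule, y = 2x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.uniform(0.0, 1.0, size=(50, 1))
    clients.append((X, 2 * X[:, 0]))

w_global = np.zeros(1)
for _ in range(10):  # communication rounds
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = federated_average(updates, [len(y) for _, y in clients])

print(w_global)  # converges toward the true coefficient, 2.0
```

The privacy gain is structural: the server only ever sees aggregated weight vectors, so the GDPR-relevant personal records stay on each client's device.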

Concern and queasiness about rampant datafication of children, including via commercial tracking of their use of online services, has been raised in multiple EU member states. “The integrity and agency of future generations should be ensured by providing Europe’s children with a childhood where they can grow and learn untouched by unsolicited monitoring, profiling and interest invested habitualisation and manipulation,” the group writes. “Children should be ensured a free and unmonitored space of development and upon moving into adulthood should be provided with a ‘clean slate’ of any public or private storage of data related to them.”

It also urges governments to commit to not engage in blanket surveillance of populations for national security purposes. (So perhaps it’s just as well the UK has voted to leave the EU, given the swingeing state surveillance powers it passed into law at the end of 2016.) “While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the HLEG writes. “Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust.”

The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including (the group specifies) when it concerns ‘free’ services (albeit with a slight caveat on the need to consider how business models are impacted).

There are also calls to encourage public sector uptake of AI, such as by fostering digitalisation by transforming public data into a digital format; providing data literacy education to government agencies; creating European large annotated public non-personal databases for “high quality AI”; and funding and facilitating the development of AI tools that can assist in detecting biases and undue prejudice in governmental decision-making.
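The report's call for "AI tools that can assist in detecting biases and undue prejudice in governmental decision-making" leaves the mechanics unspecified. One simple check such tools commonly apply is the demographic-parity gap: the difference in favourable-outcome rates between groups in a set of decisions. The decision records below are hypothetical, invented purely to illustrate the metric.

```python
# Demographic-parity gap: a basic audit metric for decision data.
# A large gap flags the process for human review; it is evidence
# of possible bias, not proof of it.

def favourable_rate(decisions, group):
    """Share of favourable outcomes for one group."""
    outcomes = [d["approved"] for d in decisions if d["group"] == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(favourable_rate(decisions, group_a)
               - favourable_rate(decisions, group_b))

# Hypothetical benefit decisions: 80% approval for group A,
# 50% for group B.
records = (
    [{"group": "A", "approved": True}] * 80
    + [{"group": "A", "approved": False}] * 20
    + [{"group": "B", "approved": True}] * 50
    + [{"group": "B", "approved": False}] * 50
)

gap = demographic_parity_gap(records, "A", "B")
print(f"approval-rate gap: {gap:.2f}")  # prints "approval-rate gap: 0.30"
```

Real audit tooling would go further — conditioning on legitimate factors, testing error-rate balance, and so on — but this is the kind of automated check the recommendation envisages governments funding.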

Member states and the Commission should also devise ways to continuously “analyse, measure and score the societal impact of AI”, suggests the HLEG — to keep tabs on positive and negative impacts so that policies can be adapted to take account of shifting effects. “A variety of indices can be considered to measure and score AI’s societal impact such as the UN Sustainable Development Goals and the Social Scoreboard Indicators of the European Social Pillar.”

Other ideas in the HLEG’s report include developing and implementing a European curriculum for AI; and monitoring and restricting the development of automated lethal weapons — including technologies such as cyber attack tools which are not “actual weapons” but which the group points out “can have lethal consequences if deployed.”
