A Journey to the Legal Alley of AI: Short Report on Istanbul Annual Privacy Symposium 2020


While Timnit Gebru claimed[1] last week that she was fired by Google management over a paper highlighting AI bias[2], the worldwide discussion about the implementation of 'AI' in our daily lives shows no sign of fading. As countries set up national or regional research hubs and announce national strategic road maps, it may not even be accurate to call these technological applications 'AI', given the complexity and breadth of the topic. Yet one question persists within the storm of AI implementation frenzy around the world: the legal status of algorithms, and the possible remedies when they must be explained in incidents and harm cases. From that angle, it is worth glancing at some legal answers to these crucial issues, which involve a very wide range of topics, including privacy, ethics and fairness. So let me take you on a short journey through what is currently the talk of the town among legal scholars.

Last Friday, on December 4, 2020, under pandemic restrictions, Istanbul Bilgi University held the third Annual Istanbul Privacy Symposium: Law, Ethics and Technology, warmly moderated by Leyla Keser, Director of the IT Law Institute, and M. Bedii Kaya, Associate Dean of the Faculty of Law at Bilgi University. With a broad range of speakers, the conference's main track focused on AI, under the title "AI: What the Future Holds? Opportunities and Pitfalls". The event ran for three hours on Zoom and treated the audience to very distinct points of view on AI-related issues and lively scholarly discussion.

Alexa Hasse, of the Berkman Klein Center for Internet & Society, drew our attention to the gender gap in technology jobs, specifically computer science and AI, and asked how we can bring underrepresented groups into the AI workplace and AI education. She gave examples from around the world; one worth highlighting concerns Turkey, where the share of women in technology rose, surprisingly, from 29% to 33% over a 12-year span, while worldwide the share of technology jobs held by women dropped roughly 10 percentage points, from 35% to 25%, over twenty years.[3][4] The speaker concluded with an assessment of digital citizenship as a way to enhance equality in the AI workforce and in education.

Emre Bayamlıoğlu, of KU Leuven/CiTiP, opened with the crucial, and sad, term 'Anthropocene', then set the entropy laws of physics against ecological collapse and climate issues, highlighting the limits of law when it comes to finding a solution to climate change. His talk then moved to perspectives on ephemeralization and datafication. He called our current situation the 'data Anthropocene', pointing to data-driven modalities of governance and the transition of everything into a service. He presented data as the new environment and concluded with philosophical meditations that, happily, led us to think about the theory of science, law and politics. Perhaps his point can be read this way: unless we give up our 'humanist selfishness', the law can offer no concrete solution to the problems of environmental sustainability. Backed by constitutional systems that prioritize the right to property, datafication perfects financial profit around the globe and accelerates consumption day by day, driving our one and only Earth, cruising through a vast universe, into a blind alley.

Michael Veale, Lecturer in Digital Rights & Regulation at UCL, walked us through several cryptographic methods, including secure multi-party computation and homomorphic encryption, and mentioned applications of these cryptosystems in research on agriculture, business and socio-economic disparity. He went on to discuss the thin line between privacy as confidentiality and informational power, and finished with proposals for making data processing legitimate and for making code available to end users.
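For readers unfamiliar with secure multi-party computation, the talk did not include code, but the core idea can be illustrated with a minimal additive secret-sharing sketch (my own example, not from the presentation): several parties learn an aggregate statistic without any of them seeing another's raw value.

```python
import random

PRIME = 2**31 - 1  # public modulus; all arithmetic is done mod this prime

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the shared value."""
    return sum(shares) % PRIME

# Each party holds a private input (say, a farm's yield figure).
inputs = [42, 17, 99]
all_shares = [share(x, 3) for x in inputs]

# Party i locally sums the i-th share of every input; no single party
# ever sees another party's raw value, only random-looking shares.
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# Combining the partial sums reveals only the aggregate, not the inputs.
total = reconstruct(partial_sums)
print(total)  # 158
```

Note that `random` is used here purely for illustration; a real deployment would use a cryptographically secure source of randomness and an authenticated protocol between the parties.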

Burkhard Schafer, Director of the SCRIPT Centre at Edinburgh University, took our minds back to three historical figures of the fifteenth century, Sigismund, Archduke of Austria, his first wife Eleanor of Scotland, and Heinrich Steinhöwel, and to how they created a scholarly approach to translation across Europe. He pointed to Thomas Craig, the Scottish institutional writer credited with bringing continental law from France to Scotland and thereby, happily, paving the way to a uniform European legal practice and mentality. With the emergence of national laws, he argued, this uniformity of the continental legal market and practice became stuck, and so did access to justice. Scrutinizing the opportunities and the pitfalls, he proposed a machine translation (MT) engine for the legal domain. Admittedly, the hardest part is processing input data from minority languages that are not widely used: the training data is too scarce to teach the machine the nuances of those languages and guide it to correct output. He concluded with reasonable concerns about the potential discrimination issues a legal MT engine could raise, as all AI-based systems potentially do.

Carlos Affonso Souza, Director at ITS Rio, focused on national AI strategies around the world and offered predictions on several of them. He opened the floor with the action plans of leading countries such as the U.K., the U.S. and Japan on AI (and robotics), and questioned the potential for research monopolization arising from global competition in the field. Broadening into insights on investments worldwide (e.g., India's AI garage, SMEs in Mexico), the presentation ended by emphasizing AI principles guidelines in which fairness, ethics and diversity are the key components of a human-centric, minimum-risk formula.

From the European Centre of Excellence on the Regulation of Robotics & AI (EURA), Andrea Bertolini took the floor to extend the discussion into liability and risk management in AI and robotics. After a brief review of what an approximate definition of AI and robots might be, and an assessment of their autonomy and unforeseeability, he dived into liability issues, under which a human being is held liable regardless of the circumstances. He reasonably argued that the whole puzzle can be solved if we first properly ascertain the facts. Should we follow a functional analysis, which he called CoA (classes of applications), to split liability wisely and place it on the right person, he asked, linking this to the bottom-up approach needed to build such a technology classification. At that point he invoked the Product Liability Directive (Directive 85/374/EEC), under which producers are held liable on a no-fault (strict liability) basis, and he closed with a case study on autonomous vehicles. Curious readers seeking further perspectives can check his report commissioned by the European Parliament's Committee on Legal Affairs (the JURI Committee).

The closing keynote speaker, Paul de Hert, of the Research Group on Law, Science, Technology & Society (LSTS) at Vrije Universiteit Brussel and the Department of Law, Technology, Markets and Society (LTMS) at Tilburg University, offered perspectives on EU regulation of AI and legal personality, and analyzed the ecosystem of trust. He steered the discussion towards an efficient system for classifying AI technologies, which would help regulate the increasingly problematic series of AI-related issues and cases. On the classification question, he pointed to the European regulators' risk-based approach, which has drawn several criticisms. In his own words, to pin down the focal point of AI regulatory issues and avoid an abstract discussion that goes nowhere, he presented proximity, flexibility and efficiency as the key elements justifying the idea of granting legal personality to AI.

The event as a whole was a distinct opportunity to open new angles on AI-related issues, offering reflections on the future and beyond. The Q&A session at the end was vigorous, with a fast-flowing series of questions. It is great to see Bilgi University keeping its doors open to international discussion and diverse interaction, bringing voices from the Middle East into this critically important topic that concerns each and every human being. May curiosity come and stay for more discoveries on what the future holds for AI and robotics.

[1] https://www.siliconrepublic.com/companies/timnit-gebru-google-ai-scientist-fired-for-highlighting-bias

[2] https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/

[3] Huyer, 2015

[4] https://www.ncwit.org/resources/women-tech-facts-2016-update (Online)

Image credit: Background vector created by pikisuperstar (www.freepik.com)

