
GDPR & IoT

Challenges of GDPR in Telematics Insurance

IoT devices in telematics insurance record a great deal of personal data, but where does it all end up? The GDPR imposes strict rules on capturing, using and storing this data. It increases the need for transparency towards customers and introduces the 'Privacy by Design' principle. The GDPR ensures that organizations can only use data for the purpose for which it was collected. The result should be a more proportionate collection and use of data, as well as clear communication towards the customer about the policy followed.

By Sahil Tharia

August 30, 2020

Intersection of Ethical Training of AI with GDPR in Telematics Insurance



The relationship between AI technology and the GDPR is multidimensional. On the one hand, AI can effectively help detect GDPR violations. On the other hand, several elements of the GDPR themselves challenge the effective use of AI. Four aspects of the regulation create legal issues when it comes to the use of artificial intelligence: the principle of data minimisation, the principle of transparency, the right of access related to automated decision-making algorithms, and the admissibility of automated decision-making as such.



The extent to which pricing executives consider consumer perceptions of deception, fairness, and social justice sits within an emerging area of research that triangulates the dynamic between legal constraints, ethical considerations, and the algorithmic models used to make decisions about pricing, premium value and claims.





Principle of Data Minimization



According to the data minimisation principle, only as much data as is necessary for the purposes of the processing may be processed. Similarly, the principle of storage limitation is intended to ensure that controllers do not keep data longer than necessary for the initial purpose of the processing. Data may thus only be processed for the purpose for which it was originally collected, and as soon as that purpose is fulfilled, the personal data has to be deleted. If we consider the working model of telematics insurance, or its requirements for training ethical AI, it is arguably impossible to be specific about the purposes of big data analysis, because training AI requires extensive profiling and large volumes of data for labelling and testing the model and its tools, such as chatbots. Since the data would have to be deleted as soon as the initial purpose is fulfilled, data reuse would generally be impossible under these principles. The principles of data minimisation and storage limitation may therefore have a negative impact on the accuracy of the data analysis carried out to determine individual risk profiles and willingness to pay, which in turn calls into question the accuracy of ethical or unbiased automated decision-making for the individualisation of insurance contracts.
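
To make the tension concrete: storage limitation is typically operationalised by tying every record to a declared purpose and deleting it once that purpose expires. The following Python sketch illustrates the idea; the purpose tags, retention periods and field names are assumptions made for this example, not taken from any real insurer's system.

    # Minimal sketch of purpose-bound storage limitation for telematics records.
    # Purpose tags, retention periods and field names are illustrative assumptions.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class TelematicsRecord:
        driver_id: str
        collected_at: datetime
        purpose: str          # e.g. "premium_calculation"
        payload: dict         # speed, braking, location, etc.

    # Assumed retention periods per declared purpose (Art. 5(1)(c) and (e) GDPR).
    RETENTION = {
        "premium_calculation": timedelta(days=90),
        "claims_handling": timedelta(days=365),
    }

    def purge_expired(records: list[TelematicsRecord], now: datetime) -> list[TelematicsRecord]:
        """Keep only records whose declared purpose still justifies storage."""
        kept = []
        for record in records:
            limit = RETENTION.get(record.purpose)
            if limit is not None and now - record.collected_at <= limit:
                kept.append(record)
            # Records with an unknown purpose or an expired retention period are dropped,
            # so they are also no longer available for reuse, e.g. for AI training.
        return kept

Once the assumed retention period for a purpose has passed, the same records can no longer be reused to train or retrain a model, which is exactly the constraint on data reuse described above.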

 

Principle of Transparency & Purpose Limitation



The principle of transparency obliges controllers to be transparent with regard to their processing operations. This principle is closely connected to the principle of purpose limitation, as it requires the controller to provide information on the purpose of its processing. In telematics insurance, the algorithm behind a chatbot's personalised premium offer is effectively a black box, and the same lack of transparency surrounds how a chatbot arrives at a decision to accept or refuse a claim. As many past studies have shown, such decisions are often biased and unfair, which is why the transparency of data processing is arguably not only the single most important principle of data protection law, but also the reason for the broad information duties of data controllers and the right of access.
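
One way to reduce this black-box character is to price with an inherently interpretable model whose output can be decomposed feature by feature and then communicated to the customer. The sketch below is a minimal illustration; the base premium, feature names and weights are assumptions made for the example, not an actual tariff.

    # Minimal sketch: decomposing a linear surcharge model so each premium offer
    # can be explained to the customer feature by feature. All figures are
    # illustrative assumptions, not a real pricing model.
    BASE_PREMIUM = 400.0  # assumed base premium

    WEIGHTS = {
        "hard_braking_per_100km": 12.0,
        "night_driving_share": 80.0,
        "avg_speeding_ratio": 150.0,
    }

    def explain_premium(driver_features: dict) -> tuple[float, dict]:
        """Return the premium and the contribution of each telematics feature."""
        contributions = {
            name: WEIGHTS[name] * driver_features.get(name, 0.0)
            for name in WEIGHTS
        }
        premium = BASE_PREMIUM + sum(contributions.values())
        return premium, contributions

    premium, reasons = explain_premium({
        "hard_braking_per_100km": 3.0,
        "night_driving_share": 0.25,
        "avg_speeding_ratio": 0.1,
    })
    # 'reasons' can be shown to the customer: which driving behaviour
    # increased the premium and by how much.

A breakdown like this is one possible way to meet the information duties in practice, because the same numbers that drive the price can be handed to the policyholder in plain language.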

 

Right of Access Related to the Automated Decision-Making Algorithm



Among the rights of the data subject, Article 15(1)(h) of the GDPR, which refers to Article 22, grants the data subject the right to be informed about the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4), and to receive meaningful information about the logic involved. Telematics insurance, with its well-documented data bias issues, relies widely on automated decision-making by chatbots and AI algorithms. In such a setting, the right of access to the information behind a decision, i.e. how the algorithm reached a particular decision to offer an individualised insurance contract to the customer, is really important. In practice, however, this is difficult with technologies such as deep learning, where it is complicated to reveal how a decision is actually being made. We also have to interpret the relation between Article 9(2)(a) and (g) of the GDPR and Article 22(1) and (4), because Article 22(4) governs automated decisions based on special categories of data, which is precisely the situation in telematics insurance.
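
Where an opaque model such as a deep network is already in place, a common (though imperfect) way to approximate "meaningful information about the logic involved" is a post-hoc technique such as permutation importance: shuffle one input feature at a time and measure how strongly the model's output changes. The sketch below assumes a generic model_predict function and uses only NumPy; it illustrates the technique and does not claim that this alone satisfies Article 15(1)(h).

    # Minimal sketch of permutation importance for a black-box claims model.
    # 'model_predict' is an assumed function returning acceptance probabilities.
    import numpy as np

    def permutation_importance(model_predict, X: np.ndarray, n_repeats: int = 10,
                               seed: int = 0) -> np.ndarray:
        """Mean absolute change in predictions when each feature is shuffled."""
        rng = np.random.default_rng(seed)
        baseline = model_predict(X)
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            changes = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])  # break the link between feature j and the output
                changes.append(np.mean(np.abs(model_predict(X_perm) - baseline)))
            importances[j] = np.mean(changes)
        return importances

    # Higher values indicate features the black-box model relies on more heavily,
    # which can feed a plain-language explanation given to the data subject.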





Admissibility of Automated Decision-Making



Article 5 of the GDPR, 'Principles relating to processing of personal data', and in particular the relation between Article 5(1)(a) and Article 5(2), requires lawfulness, fairness and transparency of processing and establishes the accountability of the controller. In telematics insurance, insurance companies act in that role and process personal and sensitive data via chatbots. Accountability is one of the main underlying principles of the GDPR, and that poses a very big problem for machine learning algorithms, especially newer tools such as deep learning and automated feature extraction, because we do not know how the evaluation is being done or which features and data points are being used. Automated decisions made by algorithms determine claims outcomes and individualised insurance premiums. Such automated decisions raise liability issues: if a claim is wrongly refused or a biased personalised offer is made, it is unclear who will be held responsible. We therefore have to consider the admissibility and accountability issues that arise when assigning responsibility for such consequences.
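
On the practical side of accountability, one building block is a tamper-evident decision log that ties every automated decision to the model version and input data that produced it, so that responsibility can later be traced. The sketch below is an assumed design for illustration, not a description of any existing insurer's system.

    # Minimal sketch of an audit trail for automated insurance decisions
    # (accountability, Art. 5(2) GDPR). Field names and the hash-chaining
    # scheme are illustrative assumptions.
    import hashlib
    import json
    from datetime import datetime, timezone

    def log_decision(model_version: str, inputs: dict, decision: str, audit_log: list) -> dict:
        """Append a tamper-evident entry linking inputs, model version and outcome."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "previous_hash": audit_log[-1]["entry_hash"] if audit_log else None,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        audit_log.append(entry)
        return entry

    audit_log: list = []
    log_decision("risk-model-v1.3",
                 {"claim_id": "C-123", "telematics_score": 0.82},
                 "claim_accepted", audit_log)

Because each entry references the hash of the previous one, later tampering with a recorded decision becomes detectable, which supports the controller's duty to demonstrate compliance.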

Sahil Tharia is an IT & IP law attorney and consultant. He has wide international experience in ICT and IP law and specializes in copyright law, IP licensing, IP monetization, technology transfers between universities and government, FRAND and SEP licensing in the ICT sector, and data privacy/protection in emerging technologies. He has worked as a consultant to various international, Chinese, Hong Kong and Singapore based law firms and currently lives in Oslo, Norway. He was also associated with the Peking University School of Transnational Law as a guest lecturer in copyright law. Mr. Tharia holds an LL.B. from ILS Law College, Pune, a Bachelor's in International Business from the University of Pune, a Cyber Law Diploma from the Asian School of Cyber Laws, and a PGDM in IPR Laws from the Indian Law Institute, New Delhi. He completed his LL.M. in IT & IP Law at Leibniz University Hannover and is currently pursuing a second LL.M. in ICT Law at the University of Oslo, while working as a legal tech researcher with the NRCCL (Norwegian Research Center for Computers and Law, Oslo).
