GDPR & IoT
Challenges of GDPR in Telematics Insurance
By Sahil Tharia
August 30, 2020
Intersection of Ethical Training of AI with GDPR in Telematics Insurance
The relationship between AI technology and the GDPR is multidimensional. On the one hand, AI can effectively help detect GDPR violations. On the other hand, several elements of the GDPR challenge the effective use of AI. Four aspects of the regulation create legal issues for the use of artificial intelligence: the principle of data minimization, the principle of transparency, the right of access in relation to automated decision-making algorithms, and the admissibility of automated decision-making as such.
The extent to which pricing executives consider consumer perceptions of deception, fairness, and social justice sits within an emerging area of research that triangulates the dynamic between legal constraints, ethical considerations, and the algorithmic models used to make decisions about pricing, premium value, and claims.
Principle of Data Minimization
Under the data minimisation principle, only as much data as is necessary for the purposes of the processing may be processed. Similarly, the storage limitation principle is meant to ensure that controllers do not keep data longer than necessary for the initial purpose of the processing: personal data may be processed only for the purpose for which it was collected, and must be deleted as soon as that purpose is fulfilled. Given the working model of telematics insurance and its requirements for training ethical AI, it is arguably impossible to be specific about the purposes of big data analysis, because training an AI system requires extensive profiling and large volumes of data for labelling and for testing the model and its tools, such as chatbots. Since the data would have to be deleted as soon as the initial purpose is fulfilled, data reuse would generally be impossible under these principles. Data minimisation and storage limitation may therefore reduce the accuracy of the analysis carried out to determine individual risk profiles and willingness to pay, which calls into question the accuracy of ethical or unbiased automated decision-making for the individualisation of insurance contracts.
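To make the tension concrete, here is a minimal sketch of what enforcing purpose and storage limitation could look like in a telematics pipeline. The record type, field names, and retention windows are hypothetical, chosen only to illustrate the principle; they are not drawn from any real insurer's system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows per processing purpose (storage limitation).
PURPOSE_RETENTION = {
    "premium_pricing": timedelta(days=90),
    "claims_handling": timedelta(days=365),
}

@dataclass
class TelematicsRecord:
    driver_id: str          # pseudonymous identifier (data minimisation)
    speed_kmh: float        # only the fields needed for the stated purpose
    collected_at: datetime  # must be timezone-aware
    purpose: str            # purpose fixed at collection time (purpose limitation)

def purge_expired(records: list[TelematicsRecord],
                  now: datetime | None = None) -> list[TelematicsRecord]:
    """Drop records whose retention window for their original purpose has passed."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if now - r.collected_at <= PURPOSE_RETENTION[r.purpose]
    ]
```

The sketch shows exactly where the conflict with AI training arises: once `purge_expired` runs, the deleted records are no longer available for labelling, retraining, or bias testing of the model.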
Principle of Transparency & Purpose Limitation
The principle of transparency obliges controllers to be transparent about their processing operations. It is closely connected to the principle of purpose limitation, as it requires the controller to provide information on the purpose of its processing. In telematics insurance, the algorithm behind a chatbot that sets the premium in a personalised insurance contract is effectively a black box, and transparency is lacking. How the chatbot reaches a decision to refuse or accept a claim is another large black-box area. Since many past studies have shown that such decisions are often biased and unfair, the transparency of data processing is arguably not only the single most important principle of data protection law, but also the reason for the broad information duties of data controllers and for the right of access.
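One way to mitigate the black-box problem described above is to use an inherently interpretable model whose pricing logic can actually be disclosed. Below is a minimal sketch, assuming scikit-learn; the feature names and the synthetic data are purely illustrative assumptions, not a real risk model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-driver telematics features; names are illustrative only.
feature_names = ["harsh_brakes_per_100km", "night_driving_share", "avg_speed_over_limit"]
X = rng.normal(size=(500, 3))
# Synthetic "high risk" label generated from the features plus noise.
y = (X @ np.array([0.8, 0.5, 1.2]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The coefficients give a globally transparent statement of what drives the
# risk score, which a controller can disclose under its information duties.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A deep network might fit the data marginally better, but it cannot produce a disclosure this direct, which is precisely the trade-off the transparency principle forces on telematics insurers.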
Right for Access related to the Automated Decision Making Algorithm
Article 15(1)(h) of the GDPR, which refers to Article 22, gives the data subject the right to be informed of the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4), and to receive meaningful information about the logic involved. In telematics insurance, which faces data-bias issues throughout, automated decision-making by chatbots or AI algorithms is widely practised. In such a setting, the right of access to the information behind a decision, that is, how the algorithm reached a particular decision to offer an individualised insurance contract to the customer, is critically important. In practice this is genuinely difficult for technologies such as deep learning, where it is complicated to reveal how a decision is actually made. We must also consider and interpret the relation between Article 9(2)(a) and (g) of the GDPR and Article 22(1) and (4), because these provisions govern the processing of special categories of data, which is precisely the case in telematics insurance.
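For a linear model like the one sketched earlier, the "information behind the decision" that Article 15(1)(h) points to can be produced per applicant as feature contributions. Here is a minimal, hypothetical sketch of such a per-decision explanation; for deep learning models one would instead need post-hoc tools such as SHAP or LIME, which only approximate the model and so only partially answer an access request.

```python
import numpy as np

def explain_decision(x: np.ndarray, coefs: np.ndarray, intercept: float,
                     feature_names: list[str]) -> dict[str, float]:
    """Per-feature contribution to the linear score for one applicant.

    The contributions, together with the intercept, sum to the model's raw
    score, so they can be disclosed as the reasoning behind this decision.
    """
    contributions = dict(zip(feature_names, x * coefs))
    contributions["(intercept)"] = float(intercept)
    return contributions

# Hypothetical usage, reusing the model and data from the previous sketch:
# explain_decision(X[0], model.coef_[0], model.intercept_[0], feature_names)
```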
Admissibility of Automated Decision Making
Article 5 of the GDPR, 'Principles relating to processing of personal data', and in particular the relation between Article 5(1)(a) and Article 5(2), sets out lawfulness, fairness, transparency, and accountability in processing. In telematics insurance, insurers act as controllers and process personal and sensitive data via chatbots. Accountability is one of the main underlying principles of the GDPR, and it poses a very big problem for machine learning algorithms, especially newer tools such as deep learning and automated feature extraction, because we do not know how the evaluation is done or which features and data points are used. Automated decisions made by algorithms determine outcomes on claims and individualised insurance premiums. Such automated decisions carry liability issues: if a claim is wrongly decided or a biased personalised offer is made, it is unclear who is responsible. Thus, we have to consider the admissibility and accountability issues that determine responsibility for such consequences.
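Accountability presupposes that each automated decision can be reconstructed after the fact: the inputs, the model version, and the outcome have to be recorded at decision time. A minimal sketch of such a decision log follows; the function, field names, and file format are hypothetical illustrations, not a prescribed compliance mechanism.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_decision(driver_id: str, inputs: dict, model_version: str,
                 decision: str, path: str = "decision_log.jsonl") -> None:
    """Append an auditable record of one automated decision (Article 5(2))."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "driver_id": driver_id,
        "model_version": model_version,
        # Hash the raw inputs so the record is tamper-evident without
        # retaining more personal data than necessary.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Pinning the model version in each record is the key design choice: when a claim decision is later disputed, the controller can identify exactly which algorithm produced it, which is the minimum needed to assign responsibility.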