Data Protection in AI: Privacy vs Misuse of Data
How secure is your private information after the pandemic? Following a massive migration to digital platforms, many people have accepted giving away part of their privacy in return for vaccinations or online services. With so many social and economic activities now taking place online, data protection regulations are more necessary than ever.
According to the United Nations Conference on Trade and Development (UNCTAD), "128 out of 194 countries have put legislation in place to secure the protection of data and privacy." Structured regulations allow companies and customers to protect privacy in varying contexts. However, AI introduces significant challenges to those established guidelines and principles.
The Paradox of AI Without Data
One of AI's most important capabilities is identifying patterns invisible to the human eye and, based on data, making predictions about specific individuals or large groups. The paradox is that AI companies insist they need large amounts of data for their models to learn, make accurate predictions, and adapt, while privacy principles push in the opposite direction, limiting how much data can be collected.
On the other hand, weak data protection and a lack of regulation can lead to the inappropriate use of customers' information. Governments and companies that work with data can make decisions or design campaigns without considering users' privacy.
A telling example is the harvesting of 50 million Facebook profiles for Cambridge Analytica. The company used personal information taken without authorization in early 2014 to build a system that could target individual US voters, making it easy to send them personalized political advertisements based on their preferences.
How Can AI Work with Data Protection?
It is possible to envision a future where AI helps enable privacy. Data use can be limited to the specific information users feel comfortable sharing, and AI can then deliver individualized services based on privacy preferences it has learned over time.
An MIT start-up called Secure AI Labs (SAIL) has shown in the healthcare industry that AI can be used without breaking users' privacy. It has developed technology in which AI algorithms work on encrypted datasets that never leave the data owner's system. Half of the top 50 academic medical centers in the United States are already using it.
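The general idea behind keeping data at the owner's site can be illustrated with a federated-learning toy example. This is a hypothetical sketch, not SAIL's actual technology (which additionally encrypts the data): each "hospital" trains on its own records locally and shares only model updates with a central server, never the raw data.

```python
import random

random.seed(0)

# Each hypothetical hospital holds private data following y = 2*x + 1
# plus a little noise. The raw records never leave the site.
def make_local_data(n=50):
    return [(x, 2 * x + 1 + random.gauss(0, 0.1))
            for x in (random.uniform(0, 1) for _ in range(n))]

sites = [make_local_data() for _ in range(3)]

w, b = 0.0, 0.0   # global model parameters (a simple linear model)
lr = 0.5          # learning rate

for _ in range(200):
    grads = []
    for data in sites:
        # Computed locally at each site; only these two numbers
        # (the gradients) are sent to the server.
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        grads.append((gw, gb))
    # The central server averages the updates and improves the model.
    gw = sum(g[0] for g in grads) / len(grads)
    gb = sum(g[1] for g in grads) / len(grads)
    w, b = w - lr * gw, b - lr * gb

print(w, b)  # close to the true slope 2 and intercept 1
```

The shared model ends up nearly as accurate as one trained on pooled data, yet no site ever reveals a patient record; that trade-off is the core of privacy-preserving machine learning.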
Keeping data protection on the radar would build an ethical foundation for using AI in the long term. Companies and governments need to strike a balance between privacy and technological development so that AI can contribute to society sustainably.
The question is: given privacy restrictions, what do you think is the best way to manage data in your company, and how should we proceed?
Let us know in the comments.