We go about life using passwords, PINs and biometrics to protect our images, our bank details and the other personal data we feel is worth protecting. Yet information like our browsing patterns, or images of us in public spaces, often doesn't cross our minds as a privacy concern. We rely on laws such as the GDPR and the CCPA to protect our privacy by giving us more control over the data collected about us. However, not only do these laws fail to offer comprehensive protection against the privacy threats we face now, but as technology continues to develop, we find ourselves completely underprepared for future threats.

What is changing?

Two main changes are reshaping the threats to our privacy: the way we use technology, and advances in the technology itself.

We now use technology in far more personal settings than we used to, and the distinction between the 'private sphere' and the 'public sphere' has all but disappeared. Some of us even struggle to put down our phones to do something as private as shower or use the toilet! By changing the way we use technology, we also change the type of data we share with these devices, such as our biometric data. The more intimate the data, the greater the threat becomes, which is why our laws must reflect its changing nature.

The increasing use of different types of sensors in public spaces is creating new privacy threats. As one website put it:

“Physical public areas –town squares, pedestrian zones, shopping centres and bus stops– are increasingly subject to unfettered digitalisation, with commercial sensors tracking eye movements for feedback on digital advertisements, or cameras recognising faces in shopping centres.”

The problem is therefore twofold: first, we are not accustomed to the types of sensors being used; second, we don't know which entities are collecting this information. The result is that we are unaware of what we need to protect ourselves against, where the data collected about us goes, and how it is used. One suggestion for combating these concerns is to require such sensors to carry visual markers, but how these markers should be presented raises all sorts of ethical issues. In the Tuscan region of Italy, a robot with visual sensors is being tested to collect rubbish on the streets. It has two eyes painted on it, but its visual sensors actually lie elsewhere. Is this indication enough? Or is it just misleading?

Advances in analytical tools, such as the growing use of artificial intelligence, have also created new privacy concerns. With their remarkable capabilities comes the potential for great harm, through both the increasing quantities of data collected and the ability to analyse that data. In 2021 the European Union ('EU') released a proposal for harmonised rules on AI which included warnings regarding its use. One warning concerned AI's potential to make ever more accurate predictions about our future actions and to use subliminal techniques to substantially influence a person's behaviour. Another is the growing concern that no laws cover AI systems communicating with one another and transferring data between themselves.

How does the law reflect these changes?

If the GDPR, one of the most developed and widespread privacy protection mechanisms, is insufficient and outdated, then one can only fear how underprotected the rest of the world's population is. The comparison below highlights the international disparities in the quality of protection offered by the GDPR (an EU regulation) and the CCPA (a California state law).

GDPR (EU regulation)
Application: Applies to all companies that monitor or process the personal data of individuals in the EU.
Enforcement: Violating companies face heavy fines imposed by EU data protection authorities.
Requirements: Companies operating in the EU must appoint a data protection officer to ensure compliance.

CCPA (California state law)
Application: Applies only to organisations that conduct business in California.
Enforcement: Gives California residents enforcement powers through litigation against violating companies.
Requirements: Makes no provision for a data protection officer.

The EU has recently made progress in developing privacy-protecting legislation. The EU AI Act takes a risk-based approach, classifying AI systems by the level of risk they pose. Chatbots, for example, sit at the lowest end of the risk scale, whereas technologies such as 'real-time' remote biometric identification systems fall under the 'unacceptable risk' category and are prohibited outright (with very few exceptions). In deciding which technologies present too great a risk, the EU faced the difficulty of balancing legitimate interests, such as combating crime, against individuals' right to privacy, which is enshrined in the Universal Declaration of Human Rights (Article 12), the European Convention on Human Rights (Article 8) and the EU Charter of Fundamental Rights (Article 7).

The US has also begun to implement targeted laws that seek to protect vulnerable individuals and groups. California has introduced the Age-Appropriate Design Code, which requires companies to implement specific data protection measures when their services are likely to be accessed by children under 18. While this is an important step forward, the law faces many of the same compliance problems as the CCPA.

Legislators are increasingly looking into these privacy issues and issuing recommendations for how to improve privacy protection. The European Parliament's Committee on Legal Affairs has called on the Commission to ensure that data protection principles such as privacy by design, privacy by default, data minimisation, purpose limitation and transparent control mechanisms for data subjects are upheld. While it is important that these principles are being acknowledged, acknowledgement alone is simply not enough. Proactive legislation is needed, and it must be developed at the same pace as the technology itself. It benefits governments, the public and tech companies alike to develop these products in active communication with one another and to work together to identify future areas of risk.
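To make two of those principles concrete, here is a minimal, hypothetical Python sketch of what 'privacy by default' and 'data minimisation' can look like in practice: an account-creation routine that stores only the fields needed for its stated purpose and leaves every optional setting switched off until the user opts in. The field names and function are invented for illustration and are not drawn from any real system.

    # Hypothetical illustration of "privacy by default" and "data minimisation".
    # All field names and settings are invented for this example.

    REQUIRED_FIELDS = {"email"}                       # the minimum needed for an account
    OPTIONAL_FIELDS = {"phone", "date_of_birth"}      # stored only if freely provided

    DEFAULT_SETTINGS = {
        "analytics_tracking": False,                  # off unless the user opts in
        "marketing_emails": False,
        "profile_visibility": "private",
    }

    def create_account(submitted: dict) -> dict:
        """Store only required fields plus optional ones the user explicitly gave."""
        account = {k: submitted[k] for k in REQUIRED_FIELDS if k in submitted}
        if REQUIRED_FIELDS - account.keys():
            raise ValueError("missing required fields")
        # Data minimisation: anything we did not ask for is silently dropped.
        account.update({k: submitted[k] for k in OPTIONAL_FIELDS if k in submitted})
        # Privacy by default: the most protective settings apply until changed.
        account["settings"] = dict(DEFAULT_SETTINGS)
        return account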

How can we combat the current privacy concerns affecting individuals?

One way is by improving data collection consent mechanisms. The suggestion that every member of the population should be able to read legal jargon would strike most people as ridiculous; yet that is exactly how cookie policies are worded. By making consent mechanisms simpler to understand, and by taking inspiration from developments in pictorial legal contracts, we can improve people's understanding of exactly what they are agreeing to and obtain not just consent but informed consent. In fact, making privacy policies easy for children to understand is one of the requirements imposed by the Age-Appropriate Design Code mentioned above. Making privacy policies understandable should be something all companies are not merely encouraged to do but required to do by law.
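As an illustration only, informed consent could be recorded by pairing each processing purpose with the plain-language wording the user actually saw, so there is never ambiguity about what was agreed to and when. The Python sketch below is a hypothetical structure, not a reference to any real consent framework.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical plain-language consent record; all names are illustrative.
    @dataclass
    class ConsentRecord:
        user_id: str
        purpose: str           # machine-readable purpose, e.g. "ad_personalisation"
        plain_language: str    # the exact wording the user was shown
        granted: bool
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    record = ConsentRecord(
        user_id="u-123",
        purpose="ad_personalisation",
        plain_language="We will use which ads you click on to choose the ads you see later.",
        granted=False,         # nothing is assumed; consent must be given explicitly
    )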

Companies should also be encouraged to follow best practices for data collection and management. These can include data mapping exercises to determine how information and data flow through the organisation, classification policies that separate high-risk from low-risk data, and regulatory maps setting out local, state, federal and international requirements, as sketched below.
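For instance, a first pass at such a classification policy could be as simple as tagging each stored field with a risk level and the regulatory regimes that govern it. The following hypothetical Python example sketches a tiny data map; the fields, categories and rules are invented for illustration.

    # Hypothetical data map: each field gets a risk level and the regimes governing it.
    DATA_MAP = {
        "email":            {"risk": "high", "regimes": ["GDPR", "CCPA"]},
        "browsing_history": {"risk": "high", "regimes": ["GDPR", "CCPA"]},
        "page_load_time":   {"risk": "low",  "regimes": []},
    }

    def fields_requiring_review(data_map: dict, regime: str) -> list[str]:
        """List the fields whose handling must be checked against a given regulation."""
        return [name for name, meta in data_map.items() if regime in meta["regimes"]]

    print(fields_requiring_review(DATA_MAP, "GDPR"))   # ['email', 'browsing_history']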

At the individual level, people must be educated about what data is being accessed, what they are consenting to and how that data is used. Some suggest that data protection could also be provided as a service, but this is a dangerous step that would shift the burden of protection from companies and government onto the individual.

Every individual has a fundamental right to privacy. If companies are to benefit from the data we provide, they must do so only on the basis of informed consent. It is important for companies and individuals alike to demand better guidance and more proactive laws on the protection of privacy. Only then can technology continue to develop in a way that benefits everyone and takes advantage of no one.

smartR AI's top priority is creating secure and private AI solutions. Not only are smartR AI products GDPR, HIPAA and ITAR compliant, but adherence to privacy principles such as privacy by design is assured. By keeping products like SCOTi® AI self-hosted, smartR AI aims to give companies maximum control over their data.

 

Written by Celene Sandiford, smartR AI