Computer Vision Privacy: We take it seriously.

See how computer vision can enhance AI while prioritizing privacy. We believe privacy is a fundamental human right, protecting sensitive data for all.

April 8, 2024
4 mins

We understand how important privacy protection is to our customers. That's why we have built the Protex platform using privacy design principles and tools to ensure organizational and individual privacy rights are preserved. 

This research-focused article discusses computer vision and some of the privacy design frameworks we are using to build the Protex platform.

What is Computer Vision?

Computer vision (CV) is the term given to the technology that allows computers to gain a high level of understanding from digital images or videos. 
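
To make this concrete, here is a minimal sketch of what that "understanding" can look like in practice, using OpenCV's bundled Haar-cascade face detector. OpenCV is not referenced in this article and the image path is a placeholder; this is an illustration, not a description of any particular platform.

```python
import cv2  # assumes the opencv-python package is installed

# OpenCV ships with pre-trained Haar cascades; this one detects frontal faces,
# a classical CV technique that predates modern deep learning detectors.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("frame.jpg")  # placeholder path to a captured frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) bounding box per detected face:
# structured information extracted from raw pixels.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```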

Evolution of Computer Vision Technology

Classical computer vision techniques have existed since the early 1960s, but recent advancements in deep learning algorithms, AI cameras, networks, and computing capabilities have allowed CV to flourish. 

Computer vision holds tremendous potential to solve problems in the healthcare, automotive, and manufacturing industries, to name but a few. The widespread adoption of this technology is driven by these advancements and the availability of cheaper smart camera infrastructure. 

Privacy Concerns in Computer Vision Systems

Although these computer vision-based systems can add substantial value to organizations and wider society, they also pose many ethical concerns, the majority of which center around privacy. 

Many organizations now face increasing pressure to comply with privacy regulations, such as the General Data Protection Regulation (GDPR), while still benefiting from the technology.

Balancing Privacy and CV Technology

In many cases, privacy concerns act as a barrier to deploying this technology and, as a result, to the substantial benefits it can offer. 

Privacy in computer vision has recently become a burgeoning research area as more organizations begin to understand the privacy risks that computer vision can pose. 

Worker unions often express concern about how these technologies impact employee privacy, particularly in surveillance-heavy environments.

High-Profile Privacy Breaches and Their Impact

Awareness of the implications of a privacy breach, and the growing emphasis on data privacy, has been driven by recent high-profile incidents that attracted significant negative media attention. 

Companies are therefore putting significant emphasis on ensuring that their systems are developed with, and underpinned by, strong privacy and security research, particularly systems that use facial recognition. 

There is a particular interplay between security and privacy mechanisms in modern technology systems. 

While the security controls embedded in these systems focus on safeguarding the data that is collected, the privacy principles focus on safeguarding user identity. There is always a risk of a data breach in any system, one that even tech giants have been unable to prevent.

Strengthening Data Security

It is therefore of the utmost importance that any leaked data is encrypted, encoded, or structured in a manner that preserves the privacy and personal data of data stakeholders.
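
As an illustration only (not a description of Protex's implementation), the sketch below pseudonymizes direct identifiers with a keyed hash and encrypts each record before it is stored, so a leaked copy exposes no readable personal data. It assumes the `cryptography` package, and all field and key names are hypothetical.

```python
import hashlib
import hmac
import json

from cryptography.fernet import Fernet  # assumes the `cryptography` package is installed

# In practice both keys would live in a key-management service, never next to the data.
PSEUDONYM_KEY = b"replace-with-a-secret-hmac-key"
fernet = Fernet(Fernet.generate_key())

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so leaked records cannot be linked back to a person."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def protect_event(event: dict) -> bytes:
    """Pseudonymize identifiers, then encrypt the whole record before it is written to storage."""
    record = {**event, "subject_id": pseudonymize(event["subject_id"])}
    return fernet.encrypt(json.dumps(record).encode())

# Hypothetical CV event: even if this ciphertext leaks, it reveals no plaintext personal data.
ciphertext = protect_event({"subject_id": "worker-1234", "zone": "loading-bay", "event": "ppe_check"})
```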

Global Perspectives on Privacy Definitions

The concept of privacy is difficult to measure and, in some cases, to define, and ethical considerations play a significant role in shaping these definitions. In the Western world, privacy tends to be defined in strictly individualistic terms; in the East, definitions are looser and based more upon the collective community. 

This is an example of how the very fundamental principles upon which the definition is based can differ drastically. 

As such, there are a multitude of gray areas when privacy is discussed in relation to video data, and therefore, it can be difficult to outline a set of concrete rules or guidelines that are compatible with every definition and application. 

The frameworks designed to inform how this technology should be used are therefore based on guidelines and policies rather than hard rules.

Privacy by Design - A Framework

Privacy by Design (PbD) was initially developed and proposed by Ann Cavoukian and formalized by a joint team of data commissioners in 1995. 

The Privacy by Design framework was developed in response to the growing volume of information being created by large-scale networked information systems and the growing need to manage it responsibly. 

Furthermore, Cavoukian referenced increasing system complexity as a factor that presents profound challenges for informational privacy. The framework is based on the active embedding of privacy and centers around 7 principles, described in Fig 1.

Fig 1. Privacy by Design (PbD) Overview

What are the 7 Key Principles of Privacy by Design?

Each principle informs system design by presenting a high-level objective for embedding privacy. They are briefly outlined below:

  1. Proactive not Reactive, Preventative not Remedial:

Privacy risks should be anticipated, and action taken before a privacy infraction occurs. Therefore, PbD is a before-the-fact design measure instead of one which offers remedies to privacy shortcomings.

  2. Privacy as the Default Setting:

Personally sensitive data should be detected and protected automatically; the user should not have to enable privacy. In systems such as surveillance and security cameras that rely heavily on computer vision and AI-powered monitoring, privacy should be enabled by default (see the sketch after this list).

  3. Privacy Embedded into Design: 

Privacy should not be treated as an add-on - it should be integral to the system design, particularly in AI models that rely on object detection and biometric data. It should also be an essential component of the core functionality delivered by the system, ensuring the protection of personal information.

  4. Full Functionality – Positive-Sum, not Zero-Sum: 

The implementation of privacy should not result in any unnecessary trade-offs with any other component of the system. It should not hamper the functionality of the system, rendering it useless.

  5. End-to-End Security – Full Lifecycle Protection: 

Strong security systems are essential to ensure all data is securely retained during its life cycle and securely destroyed thereafter.

  6. Visibility and Transparency – Keep it Open: 

All stakeholders of the system should be made aware of the privacy practices, or lack thereof, that are in place.

  7. Respect for User Privacy – Keep it User-Centric: 

The users of the system should be central to any decision made regarding the privacy of its data.
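
As a rough illustration of principles 2 and 3 (not a description of the Protex platform), the sketch below keeps redaction switched on by default and embeds it in the pipeline itself: detected faces are blurred before a frame can be stored or transmitted. It assumes OpenCV is available, and the configuration and function names are hypothetical.

```python
from dataclasses import dataclass

import cv2  # assumes the opencv-python package is installed

@dataclass
class PipelineConfig:
    blur_faces: bool = True   # privacy as the default setting: redaction is on unless explicitly disabled
    blur_kernel: int = 51     # Gaussian kernel size; must be odd

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def redact(frame, config: PipelineConfig | None = None):
    """Blur every detected face in-place before the frame is stored or transmitted."""
    config = config or PipelineConfig()
    if not config.blur_faces:
        return frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (config.blur_kernel, config.blur_kernel), 0)
    return frame
```

Because blurring happens inside the pipeline rather than as an optional post-processing step, privacy is part of the system's core functionality instead of an add-on.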

This framework came about in an era when the internet and cloud-based systems were beginning to become ubiquitous and central to the way in which large organizations managed their data. 

Challenges in Implementing Privacy by Design in AI Systems

A parallel can be drawn between the early 2000s and now: recent advances in artificial intelligence and computer vision technologies have presented a variety of new challenges with respect to privacy and sensitive information. 

These issues particularly pertain to the complex, convoluted AI-based systems that produce large quantities of data, which can, in some cases, be personally sensitive by nature. 

It is for this reason that, although the foundational principles of PbD remain relevant and important, they struggle to define explicitly how these systems should be built and instead exist to inform high-level privacy considerations.

Misconceptions About Privacy Frameworks

This article addresses a common misconception about privacy frameworks, particularly PbD: that you can simply take a few Privacy-Enhancing Technologies (PETs), add a good dose of security, and thereby create a fault-proof systems landscape for the future. 

Several challenges arise when implementing privacy by design principles, and these vary based on an organization’s size, maturity, and culture. Difficulties are often driven by the following factors:

Absence of a Privacy-First Culture:

Adopting privacy by design requires a cultural shift in many organizations, prioritizing privacy from the outset rather than treating it as an afterthought. This cultural gap is often compounded by:

  • Lack of Key Roles: Without dedicated privacy officers, accountability for privacy controls is weak.
  • Short-Term Focus: Prioritizing quick profits over long-term privacy can create conflicting priorities.

Limited Collaboration: 

Implementing privacy-preserving techniques requires input from different teams and senior leaders. Without that cooperation, implementation efforts can fail.

Poor Data Management:

Data sprawl and orphaned visual data make it harder to identify privacy risks and to implement effective privacy processes.

Complex Regulations: 

A growing body of privacy laws makes compliance tricky, as organizations must follow different rules in each region.

Fast-Changing Technology: 

New technology, such as image recognition and computer vision algorithms, brings both solutions and risks, requiring organizations to stay updated while protecting against emerging privacy violations and threats.

A Privacy-First Computer Vision Solution

At Protex AI, we not only understand how important privacy is to our customers, we also understand the mechanisms and technologies needed to implement a fully privacy-preserving platform!