After successful completion of the PDP4E research project, project participants believe the next logical step is to create an Eclipse Foundation working group that focuses on open privacy protection models.
The three-year PDP4E research project was funded by the European Union’s Horizon 2020 research and innovation program and wrapped up in April 2021. The eight-member consortium involved in the project integrated privacy and data protection engineering capabilities into mainstream software tools to make it easier for developers to comply with the European Union’s General Data Protection Regulation (GDPR).
Now, the goal is to build a broad community that can expand on the efforts of the PDP4E research project and take privacy-by-model concepts and standardization initiatives beyond GDPR requirements.
To understand more about the need for open privacy protection models and the goals of the potential working group, we asked Antonio Kung and Samuel Martín to answer a few questions. Here’s an edited version of our conversation.
Q. How would you summarize the goals of the potential working group on privacy models?
A. The objective is to develop a series of open and reusable engineering models for privacy protection that can be applied across consumer applications, IoT applications, and data processing. These models will give developers the most effective and advanced privacy protection and compliance capabilities available so they can proactively mitigate privacy and security risks in their software.
Because developers will be able to choose among models that have already been applied and proven, it will be easier to determine which is the optimal model for their application.
Q. Why do developers need engineering models for privacy protection?
A. It’s very difficult for people to understand how their privacy is protected — especially because there are so many domains that collect our data, from healthcare, social networks, and banks to connected vehicles and smart energy systems. And many industries analyze our data. There’s a saying that your bank manager has enough data about you to predict whether you’ll get a divorce.
It’s not sufficient to simply tell people, “yes, your privacy is protected” because they have no way of knowing whether that’s true, or to what extent their privacy is protected.
A. A model provides a high-level specification that describes how the software protects privacy. Developers can now say they protect people’s privacy using a particular, proven model, and they can share the model so people can verify that the protection level matches their requirements. People also have the opportunity to negotiate some aspects of how privacy protection is applied.
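As a rough sketch of the idea, a privacy model could be published as a small machine-readable specification that people, or their tools, check against their own requirements. The field names and values below are purely illustrative and not part of any PDP4E deliverable:

```python
from dataclasses import dataclass

@dataclass
class PrivacyModel:
    """Hypothetical high-level specification of how a piece of software protects privacy."""
    name: str
    data_kept_on_device: bool   # is personal data processed locally?
    retention_days: int         # how long collected data is kept
    anonymization: str          # e.g. "rotating-identifiers", "k-anonymity", "none"

@dataclass
class UserRequirements:
    """What a user (or a regulator) is willing to accept."""
    require_local_processing: bool
    max_retention_days: int

def meets_requirements(model: PrivacyModel, req: UserRequirements) -> bool:
    """Check whether a published model satisfies a given set of requirements."""
    if req.require_local_processing and not model.data_kept_on_device:
        return False
    return model.retention_days <= req.max_retention_days

# Example: verify a published model against one user's expectations.
model = PrivacyModel("contact-tracing-v1", data_kept_on_device=True,
                     retention_days=14, anonymization="rotating-identifiers")
print(meets_requirements(model, UserRequirements(require_local_processing=True,
                                                 max_retention_days=30)))  # True
```

Real models from the working group would be far richer than this, but the principle is the same: the claim “your privacy is protected” becomes something that can actually be checked.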
Q. Can you provide an example of how different privacy protection models might work and how some aspects might be negotiated?
A. The different approaches taken in various COVID-19 contact tracing apps are a good example of how there can be different privacy protection models for policy makers and developers to choose from.
A. One privacy protection model, which was adopted in countries such as Germany and Canada, keeps the data collected about contacts on the smartphone. That data is used, with appropriate protections, only if the smartphone user becomes infected. Other models, such as the one used in Italy, are semi-decentralized, while the model used in France is fully centralized. A fully centralized model requires much stronger organizational controls.
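One way to picture the difference is to treat the storage architecture as an explicit property of the model, with the required organizational controls following from it. This is a simplified, hypothetical encoding, not how any of the national apps are actually specified:

```python
from enum import Enum

class Architecture(Enum):
    DECENTRALIZED = "contact data stays on the smartphone"
    SEMI_DECENTRALIZED = "part of the data is uploaded to a server"
    CENTRALIZED = "contacts are matched on a central server"

def required_controls(arch: Architecture) -> str:
    """The more data leaves the device, the stronger the organizational controls."""
    if arch is Architecture.DECENTRALIZED:
        return "device-level protections; the server only relays anonymous keys"
    if arch is Architecture.SEMI_DECENTRALIZED:
        return "access controls and audits on the partially centralized data"
    return "strong legal, organizational, and technical controls on the central database"

for arch in Architecture:
    print(f"{arch.name}: {required_controls(arch)}")
```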
Q. What characterizes a good privacy model?
A. First, it must reflect the state of the art. It must be the most effective way to achieve a high level of privacy protection. Second, the model must be available, meaning people can afford the cost of implementing it. These two characteristics give developers access to what we call the “best available model.”
There will never be a model that works forever. The best available model today may not be good enough tomorrow. And a model that’s too costly to implement today may become affordable two years from now.
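In other words, “best available” combines two separate checks: whether a model is state of the art and whether it is affordable to implement. A minimal sketch of that selection, with purely illustrative fields and thresholds:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateModel:
    name: str
    effectiveness: float        # 0.0-1.0: how well it protects privacy today
    implementation_cost: float  # rough cost estimate, arbitrary units

def best_available(candidates: list[CandidateModel],
                   state_of_the_art: float,
                   budget: float) -> Optional[CandidateModel]:
    """Return the most effective model that is both state of the art and affordable.

    As the state of the art advances or implementation costs fall, the answer can
    change, so the selection has to be revisited over time.
    """
    eligible = [m for m in candidates
                if m.effectiveness >= state_of_the_art
                and m.implementation_cost <= budget]
    return max(eligible, key=lambda m: m.effectiveness, default=None)
```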
It will be the working group steering committee’s responsibility to review best available model submissions and to accept or reject them. The committee must also constantly monitor the state of the art and determine when the best available model is no longer good enough.
Q. What aspects of development do privacy models cover?
A. Following the tenets of the privacy-by-design paradigm, privacy and data protection should be considered from the moment a project starts and should never be an afterthought. With this approach, you can apply a privacy protection model at every stage of the development life cycle, from requirements and risk management to process assurance, system analysis, and iterative design. There’s a lot of flexibility. For example, developers may decide they need to use a best available privacy protection model in one area, but not in another.
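For example, a project might record, per lifecycle stage, whether it applies a best available model or a lighter baseline. The stage names echo the answer above; the model identifiers below are invented for illustration:

```python
# Hypothetical per-stage selection: a best available model where the risk warrants
# it, a lighter baseline model elsewhere. The identifiers are made up.
lifecycle_models = {
    "requirements":      "best-available/privacy-requirements-v3",
    "risk-management":   "best-available/privacy-risk-v2",
    "process-assurance": "baseline/assurance-checklist-v1",
    "system-analysis":   "best-available/dataflow-analysis-v4",
    "iterative-design":  "baseline/design-review-v1",
}

for stage, model in lifecycle_models.items():
    print(f"{stage}: {model}")
```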
We’ve developed an illustration that summarizes the different types of privacy protection models that may be needed and the privacy considerations associated with each area.
Q. What types of organizations do you expect will be interested in joining an open source community focused on privacy models?
A. We’re discussing membership in the working group with people from a wide variety of organizations, including those that focus on best practices in global privacy and security, government privacy commissioners, standards bodies, technology companies, digital transformation specialists, and universities.
It’s very important to ensure the community includes technical experts and thought leaders from many different fields. So far, there’s quite a bit of interest in the community, and people are very positive when we speak to them about membership.
Q. If people are interested in learning more or getting involved in the potential new working group, what steps should they take?
A. Anyone who would like more information can email Antonio Kung or Samuel Martín.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 787034.