In May 2016, the General Data Protection Regulation (GDPR) entered into force, enshrining the protection of personal data as a fundamental right in the European Union. In safeguarding this right, the European Commission assesses and determines whether countries outside the EU meet its data protection standards.

Privacy Shield was, in a sense, the aegis and seal of approval that allowed companies in the United States to process the personal data of EU citizens and permitted that data to be transferred and used between the EU and the US. However, on 16 July 2020, the EU Court of Justice invalidated Privacy Shield over concerns about US law on private data and about US intelligence community and law enforcement surveillance programmes. As a result, companies now need to lean more than ever on Standard Contractual Clauses (SCCs) to demonstrate compliance, and they must have real controls to back those clauses up; they cannot be hollow contract terms. With organisations everywhere increasingly dependent on Software-as-a-Service (SaaS) and data centres, this becomes particularly difficult, with massive implications for service availability and corporate liability. Recognising this, organisations should endeavour to achieve “data autonomy”.

Simply put, data autonomy is about having full control and authority over your data. You have the choice and power to determine how the data is used, where it is stored, who is permitted to see it and how policies are implemented and revised, including the ability to monitor and audit access. This idea is frequently confused with the related term data sovereignty. Data sovereignty is the notion that data should be confined within a region or nation’s physical borders and remain subject to that jurisdiction’s laws; for EU citizens’ data, that makes the EU’s laws and requirements the ultimate authority. The two ideas overlap, but one is really a prerequisite for the other: data autonomy is fundamental to enforcing data sovereignty, especially as sovereignty laws become stricter and more specific over time. So, how does one attain such autonomy and identify a trustworthy vendor?
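To make the idea concrete, the sketch below (in Python, with invented names and fields) shows one way data autonomy can be expressed as policy-as-code: the organisation itself decides which regions may hold the data and which roles may see it, and every access decision is recorded so it can be monitored and audited. It is an illustration of the principle, not a prescription.

```python
# A deliberately simplified, hypothetical illustration of data autonomy as
# policy-as-code: where data may be stored, who may see it, and an audit
# trail of every access decision. All names and fields are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataPolicy:
    allowed_regions: set                      # e.g. {"eu-west-1", "eu-central-1"}
    allowed_roles: set                        # e.g. {"dpo", "support-tier-2"}
    audit_log: list = field(default_factory=list)

    def permits(self, role: str, region: str) -> bool:
        decision = role in self.allowed_roles and region in self.allowed_regions
        # Every decision is recorded so access can be monitored and audited later.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "role": role,
            "region": region,
            "allowed": decision,
        })
        return decision

policy = DataPolicy(allowed_regions={"eu-west-1"}, allowed_roles={"dpo"})
assert policy.permits("dpo", "eu-west-1")
assert not policy.permits("marketing", "us-east-1")
```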

The first step in achieving data autonomy is the implementation of strong business practices, including statements of purpose, data-handling claims and rules for the onward use of data. This is itself a large undertaking, because those practices must reflect what actually happens and must lead to contractual terms with teeth. Organisations need to create the policies and employ people with the expertise to set the processes in motion. It is an issue of human resources that could, in theory, be resolved in a matter of weeks. It is also important to game out scenarios and get the right executive reflexes in place: how does a company’s CEO or CIO respond to a subpoena issued for humanitarian reasons, such as a kidnapping or a medical emergency, let alone to the intelligence community turning up?

The second step is significantly more arduous, as it requires the reconstruction of an organisation’s architecture, an endeavour that could take years. This step is about ensuring data autonomy from a technical standpoint: protecting data regardless of human failure or insider access at the business level, and, to some degree, building in the resilience and controls that limit damage from malicious outsiders. That is, making data structures, applications, storage, use cases and other interactions with data private-by-design.
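As an illustration of what private-by-design can mean in practice, the hypothetical sketch below encrypts sensitive fields inside the application before a record ever reaches a SaaS provider or data centre, so the party storing the data never holds it in the clear. It assumes the open-source Python `cryptography` package and invented field names; a real deployment would keep the key in infrastructure the organisation itself controls.

```python
# Minimal sketch of one private-by-design pattern: encrypt sensitive fields
# inside the application, so the storage or SaaS provider only ever holds
# ciphertext. Requires the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service or HSM under the
# organisation's control; generating it inline is for illustration only.
key = Fernet.generate_key()
fernet = Fernet(key)

SENSITIVE_FIELDS = ("email", "national_id")  # hypothetical field names

def store_customer_record(record: dict) -> dict:
    """Return a copy with sensitive fields encrypted before the record
    leaves the organisation's trust boundary."""
    encrypted = dict(record)
    for name in SENSITIVE_FIELDS:
        if name in encrypted:
            encrypted[name] = fernet.encrypt(encrypted[name].encode()).decode()
    return encrypted

def read_customer_record(stored: dict) -> dict:
    """Decrypt the sensitive fields after the record is retrieved."""
    plain = dict(stored)
    for name in SENSITIVE_FIELDS:
        if name in plain:
            plain[name] = fernet.decrypt(plain[name].encode()).decode()
    return plain
```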

The concept of secure-by-design has been drilled into the minds of software engineers for decades, with mixed success. In this approach, security is included early and throughout the software development life cycle, with forethought as a hard requirement and as a design and implementation constraint. Software is written with robust cryptography from the start, employing best practices for functions such as authentication, availability and authorisation. Procedures are then enforced to maintain that security, not least through regular audits and the prompt patching of vulnerabilities. Security should be built in, not bolted on.
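By way of example, and using only the Python standard library, the following sketch shows security treated as a design constraint rather than an afterthought for one small function, credential handling: passwords are never stored, only salted derived hashes, and verification uses a constant-time comparison. The parameters shown are illustrative, not a recommendation.

```python
# Illustrative sketch of "built in, not bolted on": credential handling
# designed around standard cryptographic primitives from the outset.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; store the salt and hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison avoids leaking information through timing."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```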

In the same way, the notion of private-by-design is about incorporating privacy from the get-go, and accounting for data autonomy and sovereignty is fundamental to that.

Privacy should also be built in proactively, not bolted on after the fact.

The idea is to be proactive: anticipating poor set-ups and improving continuously to meet, even exceed, the standards put forth by the SCCs as a baseline. This may take years to fully realise, but delaying the inevitable will only complicate matters further down the line. Choosing not to embed privacy, or not to lean into an ethos of respect for the human right to privacy, could lead to deteriorating performance, availability issues and hard-to-verify claims, and thus, potentially, hefty fines.

Once data handling is assessed in-house, organisations should begin to question and inspect how their employee, partner and customer data are being managed by third parties. What separates a trustworthy vendor from an unreliable one is transparency and visibility. Vendors should be documenting where they currently stand on privacy-related policies and how they will adapt their practices to comply with the SCCs. That means a step-by-step plan with a clear designation of responsibilities. Some vendors may contrive excuses. For instance, they may claim that pooling data is necessary for Big Data and machine learning, which is not true. They may even attempt to redefine data altogether. In either case, organisations need to treat these as early warning signs. Furthermore, perfection is the enemy of the good: a roadmap for privacy and a regular, transparent view of achievements and progress should be rewarded, since most vendors are starting from a similarly weak position.

There is no doubt that, even if a new and improved Privacy Shield were to be drawn up, the amount of data currently being collected is colossal and will only continue to grow. We collectively need to seek a more sustainable means of protecting this data, to make sure that it serves the best interests of the individuals concerned and not the hidden agenda of businesses, foreign governments and shadow courts. It is a privilege, not a right, to access and interact with such data; the least we can do is build a privacy-by-design approach into every aspect of data handling.

By Sam Curry, Chief Security Officer at Cybereason