Engineering privacy into systems

May 21, 2014

When you’re a steward of personally identifiable information, you have a responsibility to keep that information safe. Data about humans is valuable, and government agencies hold vast amounts of it. Government organizations, just as other enterprises, need to build privacy into their systems.

The best way to do this, in my opinion, is to think of privacy as an engineering requirement. You need to take the term “privacy” out of the conceptual realm and put it in the physical realm of nuts and bolts (or bits and bytes).

When most organizations need a privacy policy, they call a lawyer. I’m not saying you should never call a lawyer, but what if first you called an engineer? You would instruct that engineer, whether a government employee or contractor, to set requirements for privacy with the same primacy as they set requirements for, say, the speed and size of the system. Privacy would become a functional rather than an aspirational requirement. The system would then be built to meet those requirements, quality standards and measures would be set accordingly, and privacy would get built into it in a very real way.

This is privacy engineering, which differs from privacy by design. The latter asks organizations to adopt practices allowing privacy to be considered at the beginning of data-intensive processes. Privacy engineering, by contrast, hands tools and methodologies to developers, systems architects, financial strategists, organizational designers and audit teams to begin to build, dream and innovate so they can meet privacy-by-design aspirations.

To build privacy into a system you have to pose and answer many questions about the outcomes you want, as you would for any sound quality engineering process. You start by understanding the system’s users — for example, determining whether the expected users are employees, IT managers or consumers under a single government agency, multiple agencies, a particular level of government employees, citizens, veterans, members of a public/private partnership, or any combination of these or other classifications. Naming all the permitted — and nonpermitted — users before the system is designed rather than afterward helps you architect the appropriate permission levels and plan for the activities likely to be encountered at each permission classification level.
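As a sketch of what naming permitted and nonpermitted user classes up front can look like in practice (the class names and permission sets below are hypothetical, not from any particular agency system):

```python
from enum import Enum

class UserClass(Enum):
    """Hypothetical user classifications named before the system is designed."""
    AGENCY_EMPLOYEE = "agency_employee"
    IT_MANAGER = "it_manager"
    CITIZEN = "citizen"
    VETERAN = "veteran"
    PARTNER = "public_private_partner"

# Permission levels planned per classification as a functional requirement,
# rather than bolted on after the build.
PERMISSIONS = {
    UserClass.IT_MANAGER: {"read", "write", "admin"},
    UserClass.AGENCY_EMPLOYEE: {"read", "write"},
    UserClass.CITIZEN: {"read"},
    UserClass.VETERAN: {"read"},
    # PARTNER intentionally absent: a nonpermitted class is denied by default.
}

def allowed_actions(user_class):
    """Return the action set for a class; nonpermitted classes get none."""
    return PERMISSIONS.get(user_class, set())
```

Because the nonpermitted class is an explicit design decision rather than an afterthought, denial is the default and can be tested like any other requirement.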

System designs and builds are shaped by which particular users should have system access, whether that access should be full or partial, and whether it should be time-restricted so that, for example, a government worker can be on the system at 2 p.m. but not 2 a.m. Other considerations include how a system would interact with other internal or external systems, what an effective outcome would look like in a perfect world, and what, where, how and by whom a Plan B would be deployed if a vulnerability or an actual intrusion were detected.
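The time-restriction idea can be expressed as a requirement directly. A minimal sketch, assuming a hypothetical per-role access-window policy (the role name and hours are illustrative only):

```python
from datetime import time

# Hypothetical per-role access windows, set as requirements before the build.
ACCESS_WINDOWS = {
    "government_worker": (time(6, 0), time(22, 0)),  # 6 a.m. to 10 p.m.
}

def access_allowed(role, now):
    """Roles with no defined window are denied; others are time-restricted."""
    window = ACCESS_WINDOWS.get(role)
    if window is None:
        return False
    start, end = window
    return start <= now <= end
```

Under this sketch, the worker's 2 p.m. request falls inside the window and the 2 a.m. request does not — the policy is executable and auditable rather than aspirational.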

With privacy engineering, data can be responsibly, ethically and transparently connected, collated and queried in ways previously unimagined and certainly unmanaged. System owners must decide if the data can be shared across agencies and whether that means all agencies or just some in particular. Privacy is not secrecy but rather a continuum of authorized sharing, ranging all the way from “share everything” to “don’t share anything.” Information sharing among government agencies is great, but you need to know why you’re sharing the data, what kind of data you want to share, how much of it, and for what specific purpose. And turning the answers to those questions into system requirements rather than bolting them on at the end is much more effective in terms of managing your privacy requirements.
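The sharing continuum — from "share everything" to "don’t share anything" — and the requirement to state a specific purpose can both be encoded as system requirements. A hedged sketch (the level names and the `may_share` check are hypothetical, not a standard):

```python
from enum import IntEnum

class SharingLevel(IntEnum):
    """Privacy as a continuum of authorized sharing, not secrecy."""
    SHARE_NOTHING = 0
    SHARE_WITH_OWN_AGENCY = 1
    SHARE_WITH_NAMED_AGENCIES = 2
    SHARE_EVERYTHING = 3

def may_share(data_level, requested_level, purpose):
    """Sharing requires both an authorized level and a stated purpose --
    the 'why, what kind, how much, and for what specific purpose' questions."""
    return bool(purpose) and requested_level <= data_level
```

A query without a stated purpose fails outright, and a request to share beyond a data element's authorized level fails the level check — the answers to the sharing questions become enforceable rules rather than bolted-on policy.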

The dark cloud in all this influx and volume of data, of course, is our inability to predict what stories our data will tell and the potential harm it may bring to real people when it is uncurated and left to pure chance rather than planning. Privacy-engineering techniques allow for scenario testing and information-based risk taking. They are not a perfect protection against unknown uses or unscrupulous bad guys. They do, however, make system owners, data fiduciaries and even individual users more aware and accountable, improving the situational awareness needed to mitigate unplanned risk or deviously planned crime.

I’ve outlined a few of many relevant considerations for engineering privacy into federal systems. It’s really not an overwhelming task. One individual in a federal agency could make real progress by understanding what fair information practices are and how they apply when translated into actual requirements. Rather than calling it “privacy,” call it “quality,” and break it down into functional elements.

We in the private and public sectors can do this. And it’s worth it.

Michelle Finneran Dennedy is chief privacy officer at McAfee. She co-authored “The Privacy Engineer's Manifesto: Getting from Policy to Code to QA to Value” with Jonathan Fox and Tom Finneran.