As agencies continue to embrace a hybrid-remote work model, they are also tasked with adjusting their cybersecurity policies and guidelines. For nearly two years, we have witnessed a rapid deterioration of the norms that governed both physical and digital workspaces. Without such norms, interpreting behavior and understanding risk have become increasingly complex. Organizations have struggled to maintain visibility of their assets, and both internal and external attackers have capitalized on weaknesses revealed by ambiguity and uncertainty.

While the transition to remote work has brought benefits for some, such as reduced commute time and the ability to recruit talented employees from a wider range of locations, many employees are feeling the negative impact of long-term stress. For instance, the blurring of boundaries between personal and professional lives contributes to employee burnout, which in turn erodes well-being and resilience. These social and emotional factors shape behavior, including behavior that may put an organization at risk. Employees who aren’t feeling well rarely perform well, and frustration with security friction, such as slow VPNs or multiple account logins, may lead people to take shortcuts such as using personal storage accounts or personal email. It is not possible to have a resilient organization without fostering a resilient workforce.

When it comes to security, agencies can no longer afford to prioritize technology over people. Security strategies must be recalibrated to meet employee needs while addressing the challenges of securing environments that no longer have traditional boundaries. Here’s how to get started:

Question Your Assumptions About Behavior

On one hand, rules and boundaries are eroding because of the transition to hybrid-remote work. On the other, many of the rules that remain are based on unquestioned assumptions about what employees are actually doing. Both realities spell trouble for security. Frameworks such as humanistic systems theory highlight critical disparities between how we imagine employees use technology and how they actually use it, and between how we tell people to use technology and how they say they use it. Too often, agencies create rules based on a fantasy that does not reflect reality, and the result is rules that are ineffective and frustrating. Employees consistently turn to creative workarounds during hybrid-remote work, from using personal cloud and email applications to taking photos of their screens and texting the images to colleagues who lack access.

Typically, employee creativity is a great thing. But when it comes to security, it can present a serious problem. In a recent survey of 3,000 workers, 47% of respondents reported using shadow IT. This kind of exposure is invisible to agencies that have not invested in understanding how people interact with technology. Going forward, agencies must accept that their existing assumptions about security may be insufficient, if not inaccurate. To correctly interpret this new world with fewer boundaries, agencies must include the human element in their analysis.

Gain Insight from Analytics

Analytics provide a compelling avenue for bridging the gap between the imagined vision of how employees use technology to access and interact with critical agency assets, and the messy reality in which they are likely breaking rules along the way.

To start, you need data. The type and amount of data an agency collects determine the insights that can be drawn from it. Advanced analytics, from rule-based approaches driven by subject matter expertise to machine learning, can help agencies identify patterns in workforce behavior. Establishing a data-driven understanding of what is normal ultimately allows organizations to identify and respond to risks or abnormal activity more quickly. Open-ended exploration of data can also help organizations gauge the severity of risks they may have previously accepted. For example, if an agency had no policy against using USB drives to store or transfer data, but analytics revealed just how much data was moving over USB, it might choose to change that policy. Analytics can also help agencies prioritize which risks to address first, which is especially useful when resources or expertise is in short supply.
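
To make the USB example concrete, here is a minimal sketch in Python of a rule-based baseline check. Everything in it, the telemetry fields, the volumes, and the 3.5 threshold, is a hypothetical illustration rather than a reference implementation; it uses a robust median-based baseline so that a single extreme day cannot inflate the baseline and hide itself.

```python
from statistics import median

# Hypothetical telemetry: daily USB transfer volumes in MB, keyed by user.
# These field names and numbers are illustrative, not from any real product.
usb_volumes = {
    "user_a": [12, 8, 15, 10, 9, 11, 14],
    "user_b": [5, 7, 6, 4, 900, 6, 5],  # one day far outside this user's norm
}

def flag_outliers(volumes, threshold=3.5):
    """Flag days that deviate sharply from the user's own baseline,
    using the median absolute deviation (MAD) for robustness."""
    med = median(volumes)
    mad = median(abs(v - med) for v in volumes)
    if mad == 0:  # all-constant history: any different day is notable
        return [(d, v) for d, v in enumerate(volumes) if v != med]
    return [(d, v) for d, v in enumerate(volumes)
            if abs(v - med) / (1.4826 * mad) > threshold]

for user, volumes in usb_volumes.items():
    for day, vol in flag_outliers(volumes):
        print(f"{user}: day {day} moved {vol} MB vs. a baseline of {median(volumes)} MB")
```

The same pattern generalizes beyond USB volumes to logins, downloads, or after-hours activity: the point is comparing each user against their own established baseline rather than a one-size-fits-all rule.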

As an agency’s analytic capabilities mature, it becomes increasingly possible to enable dynamic threat response and risk-adaptive policies that considerably reduce response time and risk exposure.
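
To make "risk-adaptive" concrete, the sketch below assumes a hypothetical upstream analytic that assigns each session a risk score from 0 to 100. The thresholds and action names are illustrative only; the idea is that the response escalates with observed risk instead of treating every event identically.

```python
def policy_action(risk_score: float) -> str:
    """Map a session risk score (hypothetical 0-100 scale) to a
    graduated response rather than a one-size-fits-all block."""
    if risk_score < 40:
        return "allow"                    # normal behavior: add no friction
    if risk_score < 70:
        return "step_up_authentication"   # elevated risk: verify, don't block
    if risk_score < 90:
        return "restrict_sensitive_data"  # high risk: limit potential damage
    return "block_and_alert"              # critical risk: contain and escalate

for score in (12, 55, 83, 96):
    print(f"risk {score}: {policy_action(score)}")
```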

Prioritize Privacy

Using data and analytics to understand employee behavior does not have to come at the expense of privacy. Instead, agencies should strive to anonymize data and limit visibility of raw data or identifying information as much as possible. Analyzing behavior at the group level, rather than the individual level, can also be a powerful way to understand organizational health and identify risks.
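
Here is a minimal sketch of what this can look like in practice, assuming hypothetical event fields and department labels: user IDs are replaced with keyed hashes before analysis, and reporting happens at the group level. A real deployment would manage the key outside the analytics pipeline.

```python
import hashlib
import hmac
import os
from collections import defaultdict

SECRET_KEY = os.urandom(32)  # held by a key custodian, not by analysts

def pseudonymize(user_id: str) -> str:
    """Replace a user ID with a stable keyed hash: analysts see a
    consistent token, while re-identification requires the key."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:12]

# Hypothetical events; field names and values are illustrative.
events = [
    {"user": "jane.doe", "dept": "finance", "after_hours_logins": 3},
    {"user": "john.roe", "dept": "finance", "after_hours_logins": 1},
    {"user": "ada.poe", "dept": "it", "after_hours_logins": 7},
]

after_hours_by_dept = defaultdict(int)
for event in events:
    event["user"] = pseudonymize(event["user"])  # raw identity never reaches analysts
    after_hours_by_dept[event["dept"]] += event["after_hours_logins"]

print(dict(after_hours_by_dept))  # group-level view, e.g. {'finance': 4, 'it': 7}
```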

Behavioral analytics often generate risk scores tied to an individual. Shielding identifying information in the user interface protects privacy, and it also serves as a valuable de-biasing strategy for members of the security team tasked with investigating risky users. Recent research notes that bias undermines the effectiveness of insider threat missions and of security programs more broadly. For instance, investigators may be hesitant to investigate a high-ranking official or a friend, and may be more or less likely to investigate someone based on demographic factors such as gender or even a person’s name.

It is possible, and necessary, to advocate for privacy while also advocating for understanding how the workforce uses data. In fact, if agencies don’t monitor user behavior and something goes wrong, they will be criticized for missing the threat. By anonymizing data, using role-based controls to reduce access to identifiable information, and auditing internal investigative behavior, agencies can identify and respond to cybersecurity threats without creating additional stress for employees.
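
The sketch below ties these controls together in one place, assuming hypothetical role names, record fields, and an in-memory audit log: identifying information stays masked unless the caller’s role permits unmasking, and every lookup leaves an audit record that can itself be reviewed.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for tamper-evident audit storage

def view_risk_record(record: dict, analyst: str, role: str) -> dict:
    """Return a risk record with identity masked unless the role allows
    unmasking; log every access so investigator behavior is auditable."""
    visible = dict(record)
    unmasked = role == "privacy_officer"  # hypothetical privileged role
    if not unmasked:
        visible["user"] = "REDACTED"
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "analyst": analyst,
        "role": role,
        "record_id": record["id"],
        "unmasked": unmasked,
    })
    return visible

record = {"id": "r-102", "user": "jane.doe", "risk_score": 88}
print(view_risk_record(record, analyst="a.smith", role="tier1_analyst"))
print(AUDIT_LOG[-1])  # the access itself is now reviewable
```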

The bottom line is that controlling employee behavior is not the goal in this new hybrid-remote work environment. The goal is the ability to proactively understand and respond to dynamic changes in the workplace.

Dr. Margaret Cunningham is Principal Research Scientist for Human Behavior within Forcepoint’s Global Government and Critical Infrastructure (G2CI) group, focused on establishing a human-centric model for improving cybersecurity. Previously, Cunningham supported technology acquisition, research and development, operational testing and evaluation, and integration for the U.S. Department of Homeland Security and U.S. Coast Guard.
