there was a problem with the attached résumé. This sort of phishing training exercise can improve some user behaviors, but it is certainly far from making users a foolproof part of the system.
Anticipating how people will behave helps you design better systems to capitalize on predictable behaviors, leading to better security. Even though people make mistakes, good systems should anticipate that and not break when they do.
“Users” Refers to Anyone in Any Function
When we use the term users throughout the book, it might seem that we are implying end users or low-level workers. In reality, we mean anyone in any function: managers, software developers, system administrators, accountants, security team members, and so on.
Anyone who has a job function or access that can result in damage is technically a user. Administrators can accidentally delete data and cause system outages. Security guards can leave doors open. Managers can leave sensitive documents in public places. Auditors can make accounting errors. Everyone is a user at some level.
Our use of the term users can also include outside contractors, customers, suppliers, cloud service providers, or anyone else who interacts with your organization. If they can take an action that can potentially cause harm to your organization, they must be considered in your risk model.
Cloud services and remote workers create additional concerns, because you potentially lose control over your information and users. For example, a remote worker who connects to your network over the free WiFi at a Starbucks represents a whole new class of user, increasing your risk profile. Cloud services likewise change the profile of your users, given that access control methods change to allow someone to theoretically log in from anywhere in the world. The risk can be mitigated, but you have to plan for it.
Perhaps the most overlooked group of users is the people responsible for mitigating risk. They tend to look at the errors caused by others and believe that they themselves would never have caused those errors. This causes two types of problems.
The first is that if they don't conceive that an error can occur, they cannot proactively mitigate it. We have been on software test teams and found problems with potential uses of the software and told the developers. The developers have often responded that “nobody would ever do that” and fought us on implementing the fixes.
The second issue is that the risk mitigation teams, such as information security, IT, physical security, and operations, don't perceive themselves as a source of errors. They do not believe they will make mistakes. Yet they can have tremendous privileges and access, which means their errors can create more damage than any normal user's would.
Malice Is an Option
Although the natural assumption is that user-initiated loss happens through ignorance or carelessness, a great deal of damage is caused by malicious users. The 2018 Verizon Data Breach Investigations Report (DBIR) found that 28 percent of incidents resulted from malicious insiders with clear intent to either steal something of value or create other forms of damage. That is a staggering number.
More critical is that malicious insiders typically know the best ways to access whatever it is they are trying to steal or destroy. Additionally, if they are intelligent in their planning and execution, they might be able to identify and bypass your protection, detection, and reaction capabilities.
When malice is involved, awareness efforts can sometimes even work against an organization. Awareness efforts typically educate people about how malicious actors accomplish their goals. This provides your malicious employees with information about how they too can commit those types of crimes. It also gives them ideas about how and where you allocate defensive resources and what countermeasures they need to bypass. Clever malicious insiders use this information to improve their own attacks.
As a percentage of overall users, the number who will launch malicious attacks, let alone succeed at them, is fortunately small. Even so, malicious users exist, so you must account for them. Various studies have shown that a small percentage of users create most of the damage. This is intuitively obvious. Such users will always exist. The best you can do is acknowledge this reality and prepare for them.
What You Should Expect from Users
Users need to perform their jobs properly in a fundamentally safe and secure manner. You need to ensure that security is embedded in job functions and that people know how to perform those functions properly. This should be well defined, and just like any other job function, you should set the expectation for users to follow those definitions. We would love to say that you should also expect users to be fundamentally aware of security concerns beyond what is specifically defined, but that is unlikely to happen on a consistent basis.
Therefore, businesses should factor users' limited awareness into their risk management calculations and plans. You should provide awareness training and opportunities to further reduce risk. Although organizations should not rely too heavily on awareness, it remains a critical risk-reducing component of any security program.
Although user ignorance can be partially improved with training, carelessness is another matter. Assuming you have properly instructed users in how they should perform their functions, if some users still consistently violate policies and cause damage, you may need to take disciplinary action against them.
Beyond ignorance and carelessness, you also must account for malicious actions. We discussed this in the previous section, and we will explore options to address it as we discuss security measures throughout the book.
It is important to follow our recommended strategies to ensure that your systems reduce the opportunities for users to make errors or cause malicious damage and then mitigate any remaining potential harm. Then, regardless of whether the harmful actions are due to malice, ignorance, or carelessness, your environment should be far better positioned to minimize or even stop the resulting damage.
3 What Is User-Initiated Loss?
Users are expected to, and do, make mistakes, and some attempt to maliciously cause damage. However, those actions do not have to result in damage. There is a tendency to place all of the blame for mistakes on users. Instead, a better approach is to recognize the relationship between users and loss and work to improve the system in which they exist.
For this reason, we will use the term user-initiated loss (UIL), which we define as loss, in some form, that results from user action or inaction. As Chapter 2, “Users Are Part of the System,” discussed, users are not just employees but anyone who interacts with and can have an effect on your system. These actions can be a mistake, or they can be a deliberate, malicious act. Obviously, sometimes the system is attacked by an external entity, so the attack itself is not user-initiated. But when the user initiates an action that enables the attack to succeed, the user's action has initiated the actual loss.
It is important to also note that not all mistakes or malicious acts result in loss, and not all loss happens when the action takes place.
First, we must consider that some actions might not be sufficient to result in loss, or the loss may be prevented. For example, if a person opens a ransomware program attached to a phishing message but does not have admin privileges on their system, the ransomware should not be able to encrypt the entire system.
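The least-privilege reasoning above can be sketched in a few lines of code. This is a minimal, hypothetical model (the privilege names and function are illustrative, not from any real operating system API): an action succeeds only if the account it runs under holds every privilege the action requires, so malware inheriting a standard user's account is blocked from system-level actions.

```python
def can_perform(account_privileges: set, required: set) -> bool:
    """An action succeeds only if the account holds every required privilege."""
    return required <= account_privileges  # subset test

# Hypothetical privilege sets for illustration.
standard_user = {"read_own_files", "write_own_files"}
admin_user = standard_user | {"write_system_files"}

# Ransomware launched from a phishing message runs under the
# victim's account and inherits only that account's privileges.
ransomware_needs = {"write_own_files", "write_system_files"}

print(can_perform(standard_user, ransomware_needs))  # False: blocked
print(can_perform(admin_user, ransomware_needs))     # True: damage possible
```

The design point is that the system, not the user's judgment, is what stops the loss: the same click causes no system-wide damage under a least-privilege account.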
Then we must consider that should there be loss, it may or may not happen immediately. A data entry error may take years to create a problem, if it ever does. Consider the iconic error with the Hubble Space Telescope referred to in Chapter 2: the error wasn't realized until the telescope was already in orbit, and it ultimately required $150,000,000 in repairs. This error was years in the making.