Now let’s take a look at why leveling is important when comparing with external organizations.
As mentioned in our previous article, benchmarking is rarely easy to implement, because many variables affect the validity of the results. Many of these variables are easy to define, such as target audience demographics or phishing scenario variables.
However, phishing difficulty itself is harder to define, since it depends on the simulation tools used and the credibility of the scenarios.
Many institutions use phishing simulations as an awareness tool; we can easily count organizations in verticals such as banking/finance and similar sectors, and narrow the list further by size or even employee (target audience) maturity. Yet the characteristics of real simulation data typically remain difficult to compare. The SANS leveling model ensures that benchmark data is as applicable as possible, allowing you to run better simulations.
For example, if an organization wants its overall security posture evaluated by a jury of its peers, past simulation results can be reviewed and assessed at the selected level. When each organization in an industry shares metrics at a specific level, those metrics can be compared far more effectively than metrics pooled across all levels of difficulty.
The ideal comparison comes from choosing a level that represents simulations with the same features, in similar sectors, and under conditions where similar cyber security programs are implemented. If target audience demographics can also be matched, the Undesired Action Rate (UAR), together with the proportion of employees reporting suspected phishing, provides a reliable benchmark.
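To make the idea concrete, here is a minimal sketch of a level-aware benchmark: each simulation records its difficulty level, its UAR, and its reporting rate, and comparisons are made only among simulations at the same level. All organization names, level numbers, and figures below are hypothetical, and the structure is an illustration rather than any specific tool's data model.

```python
from dataclasses import dataclass

@dataclass
class SimulationResult:
    organization: str       # hypothetical peer organization
    level: int              # difficulty level of the simulation scenario
    targets: int            # employees who received the phishing email
    undesired_actions: int  # clicks, credential submissions, etc.
    reports: int            # suspected-phishing reports raised

    @property
    def uar(self) -> float:
        """Undesired Action Rate: share of targets who took the undesired action."""
        return self.undesired_actions / self.targets

    @property
    def report_rate(self) -> float:
        """Share of targets who reported the suspected phishing."""
        return self.reports / self.targets

def benchmark(results: list[SimulationResult], level: int) -> tuple[float, float]:
    """Average UAR and report rate, computed only among same-level simulations."""
    peers = [r for r in results if r.level == level]
    avg_uar = sum(r.uar for r in peers) / len(peers)
    avg_report = sum(r.report_rate for r in peers) / len(peers)
    return avg_uar, avg_report

# Hypothetical shared metrics from peer organizations:
results = [
    SimulationResult("BankA", level=2, targets=200, undesired_actions=30, reports=50),
    SimulationResult("BankB", level=2, targets=100, undesired_actions=12, reports=40),
    SimulationResult("BankC", level=4, targets=150, undesired_actions=60, reports=15),
]

# BankC is excluded: its level-4 scenario is not comparable to level-2 results.
avg_uar, avg_report = benchmark(results, level=2)
```

Filtering by level before averaging is the whole point: pooling BankC's harder scenario into the comparison would inflate the apparent UAR and make the peer benchmark misleading.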
Communication and collaboration between external organizations is crucial for such comparisons, and can be difficult given the typical workloads of the person responsible for the information security awareness program and of the operational teams. When strategic priorities, simulation execution time, and resources are determined jointly, level-aware analytical evaluations and comparisons of past results become reliable for the teams involved. Teams may then choose to prepare simulations aimed at a similar benchmark level, reducing the resources and planning required.