Why Businesses Struggle with Email Security
Every day, in businesses around the world, the following scenario occurs: An executive assistant receives a fraudulent email, supposedly from the CEO, requesting assistance with a new project and asking when she would be able to help. This scenario can end in one of two ways.
In some cases, the executive assistant is observant enough to notice that the “CEO” included a signoff at the end of the note, something the real CEO never did. This prompts her to examine the email more closely and recognize that while the CEO’s name in the heading was correct, the actual email address was not. Luckily, she identified the email as a security threat before responding.
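The tell in this scenario, a familiar display name paired with the wrong underlying address, is mechanical enough to check automatically. The sketch below illustrates the idea; the executive name, address, and `KNOWN_EXECUTIVES` directory are hypothetical examples, not part of any product described here.

```python
from email.utils import parseaddr

# Hypothetical directory of known executive addresses; in practice this
# would come from a company directory or mail-gateway configuration.
KNOWN_EXECUTIVES = {
    "Jane Smith": "jane.smith@example.com",
}

def flags_impersonation(from_header: str) -> bool:
    """Return True when the display name matches a known executive
    but the actual sending address does not."""
    display_name, address = parseaddr(from_header)
    expected = KNOWN_EXECUTIVES.get(display_name)
    return expected is not None and address.lower() != expected

# The legitimate address passes; the lookalike header is flagged.
print(flags_impersonation("Jane Smith <jane.smith@example.com>"))      # False
print(flags_impersonation("Jane Smith <ceo.urgent@freemail.example>")) # True
```

A real gateway would apply many more signals, but even this single check catches the name/address mismatch the assistant spotted by eye.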
Far too often, however, companies fall victim to the scheme and open themselves up to costly security violations. Businesses and their employees are more susceptible than they realize to an array of phishing attacks and impersonations because of their blind trust in business communications—particularly email.
The Trust Gap by the Numbers
Unlike attacks that can be slowed by a firewall or endpoint security solutions, email-based threats do not target software infrastructure or loopholes in security protocols. They rely on social engineering tactics to exploit the gaps created by human behavior.
A GreatHorn survey examined the realities and industry perceptions of business communication security, revealing a stark contrast between security and non-security staff members when it comes to recognizing email threats.
When asked whether they had observed potential security threats other than spam in their inboxes, one-third of non-security employees responded that they do not see threats such as executive impersonation, false wire transfer requests, fake credential login sites or suspicious attachments. Email security professionals reported a very different picture: more than 85% said they had observed these same security threats in business email communications.
The perception gap between these two audiences is a reality check for organizations that are relying on security awareness training and user vigilance to keep their businesses safe.
This is by no means the fault of non-security staff; hackers now deploy a range of email attack methods that look remarkably authentic. Common email security attacks come in the form of phishing, impersonations, wire transfer requests, W-2 requests, payload attacks (malware delivered as attachments or links), business services spoofing (spear-phishing that imitates services such as ADP or Docusign), and credential theft through log-ins to phony Azure, AWS, Office 365 or Google Docs sites.
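Business-services spoofing often hinges on lookalike domains (for example, a digit substituted for a letter in a trusted brand's domain). As a rough illustration of how such domains can be flagged, the sketch below compares a sender's domain against a small allow-list using simple string similarity; the `LEGIT_DOMAINS` list and the 0.8 threshold are illustrative assumptions, not values from the survey.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of legitimate service domains; a real mail
# gateway would maintain a much larger, curated list.
LEGIT_DOMAINS = {"adp.com", "docusign.com", "office365.com", "google.com"}

def is_lookalike(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not exactly match,
    a known legitimate service domain (e.g. 'docus1gn.com')."""
    domain = domain.lower()
    if domain in LEGIT_DOMAINS:
        return False  # exact matches are the real service
    return any(
        SequenceMatcher(None, domain, legit).ratio() >= threshold
        for legit in LEGIT_DOMAINS
    )

print(is_lookalike("docus1gn.com"))  # True: one character off docusign.com
print(is_lookalike("docusign.com"))  # False: the genuine domain
```

Production systems typically use more robust techniques (homoglyph tables, edit distance tuned per brand), but the principle is the same: near-misses against trusted names are suspicious.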
The Threat of Email Impersonation
The most widely reported form of attack was impersonation, which is difficult for employees to recognize due to the often-sophisticated social engineering tactics used by hackers. More than 46% of survey respondents reported incidents of email impersonation. An even larger share of email security professionals, 63.5%, said that they continually see impersonation attacks in their own and their users’ emails.
As in the scenario above, most impersonating business communications come in the form of emails that appear to come from executives, internal employees, partners, customers or vendors. Unsuspecting employees can easily and unwittingly reveal personally identifiable information or confidential and proprietary information, or, potentially worse, comply with fraudulent requests to transfer money.
Why do so many security professionals see impersonations bypass their existing security defenses? One reason may be that the current email security policies and technologies are not adequate for the various forms of phishing that attackers now deploy.
Refocusing Email Security
One of the more telling results of the email security survey had to do with which types of attacks posed the greatest threat in the eyes of security professionals. Among those who set the overall security strategy for their organization, a disproportionate number were most concerned about payload attacks through malware links or email attachments—33.9% versus an average of about 22% for other methods.
This is why email security solutions, and not unsuspecting employees, are to blame for the increasing success of these types of phishing attacks. In reality, perimeter-based approaches to blocking email threats are outdated. Relying on information from known threats to perform a binary good/bad analysis of incoming email, legacy email security solutions are myopically focused on identifying, quarantining and stopping payload attacks, making it easy for impersonations and other social engineering tactics to bypass the email security infrastructure.
Almost half—45.8%—of all the email security survey respondents reported that they continually observe impersonators—executive, internal or external—break through the perimeter of their current email security solutions.
So how should security teams approach their business communications in the current climate? As an industry, we need to move beyond the static good/bad evaluation model. Newer technologies leverage metadata analysis, automation and machine learning to understand the unique communication patterns of organizations and individuals. In doing so, such security tools can effectively learn what “good” email looks like and identify the often-subtle indicators—for example, email volume or anomalies in authentication or behavior—that indicate a phishing attack.
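The behavioral indicators mentioned above, such as a sudden authentication anomaly from a normally well-authenticated sender, can be sketched as a simple per-sender baseline. This is a toy illustration of the general approach, not the method of any specific vendor; the 10-message minimum and 90% pass-rate threshold are assumptions chosen for the example.

```python
from collections import defaultdict

class SenderBaseline:
    """Toy sketch of behavioral profiling: learn how often each sender's
    mail passes authentication (e.g. SPF/DKIM), then flag messages that
    deviate from that learned pattern. Real systems combine many more
    signals, such as volume, timing and content metadata."""

    def __init__(self):
        self.seen = defaultdict(int)       # messages observed per sender
        self.auth_pass = defaultdict(int)  # of those, how many authenticated

    def observe(self, sender: str, authenticated: bool) -> None:
        """Record one historical message for this sender."""
        self.seen[sender] += 1
        if authenticated:
            self.auth_pass[sender] += 1

    def is_anomalous(self, sender: str, authenticated: bool) -> bool:
        """A sender who normally authenticates suddenly failing to do so
        is a classic impersonation indicator."""
        total = self.seen[sender]
        if total < 10:
            return False  # not enough history to judge
        pass_rate = self.auth_pass[sender] / total
        return pass_rate > 0.9 and not authenticated

baseline = SenderBaseline()
for _ in range(10):
    baseline.observe("ceo@example.com", authenticated=True)

# An unauthenticated message from a normally authenticated sender is flagged.
print(baseline.is_anomalous("ceo@example.com", authenticated=False))  # True
```

The value of learning "good" behavior first is that the system needs no prior signature of the attack: any deviation from the sender's own history is enough to raise suspicion.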
By incorporating these newer email security tools, organizations can be better protected against targeted email attacks than they would be through traditional email security technologies alone. And, more importantly, they will not need to rely on the one time an executive assistant notices that an impersonator pretending to be the CEO wrote “Best” when closing a potentially harmful request.