New research released today by Carbon Black highlights the concerns of cybersecurity researchers. Nearly two thirds (64%) of security researchers surveyed said they have seen an increase in non-malware attacks since the beginning of 2016.
Carbon Black, a leader in next-generation endpoint security, has launched its latest research report, Beyond the Hype, examining the cybersecurity landscape. The company interviewed 410 leading security researchers to gauge how non-malware attacks, artificial intelligence (AI) and machine learning (ML) are currently perceived.
Key trends to emerge from the interviews include that non-malware attacks are considered more threatening than malware-based attacks, and that they increasingly leverage native system tools, such as Windows Management Instrumentation (WMI) and PowerShell, to conduct nefarious actions.
In addition, the findings showed that confidence in legacy antivirus (AV) products' ability to prevent non-malware attacks is low.
AI is considered by most security researchers to be in its nascent stages and not yet able to replace human decision making in cybersecurity.
Furthermore, cybersecurity talent, resourcing and trust in executives continue to be top challenges plaguing many businesses.
Mike Viscuso, CTO at Carbon Black, said:
“Cybersecurity researchers are deeply familiar with the ins and outs of how attackers operate. Researchers have reported seeing an increase in the number, and sophistication, of non-malware attacks. These attacks are specifically designed to evade file-based prevention mechanisms and leverage native operating system tools to keep attackers under the radar. Legacy antivirus, a key component to cybersecurity for some organizations, simply cannot prevent non-malware attacks. That’s a major concern for most researchers, who are seeing a rapid evolution in attackers’ toolsets and bringing their concerns to light in Carbon Black’s latest research report.”
“87 percent of cybersecurity researchers indicated it will be at least three years before they trust artificial intelligence to lead cybersecurity decisions. That’s because AI must rely heavily on human experiences and training to arrive at ‘decisions.’ Researchers noted a number of reasons they don’t yet trust AI in cybersecurity, including: high false positive rates, ‘easy for attackers to bypass,’ and that it slows down security operations. Most security researchers know that cybersecurity is still very much a human vs. human battle. These researchers are able to see through the marketing hype of current AI solutions and understand that AI can be a component of modern information security programs and should be used primarily to assist and augment human decision making – not replace it.”