US research firm International Data Corporation predicts that by 2020, businesses will spend over $100 billion a year to protect themselves from hacking, up from an estimated $74 billion last year. New technologies such as artificial intelligence and quantum computing, however, could reportedly help prevent cyberattacks.
Artificial intelligence, for instance, could enhance threat detection, shorten defense response times, and improve ways of distinguishing genuine threats from alerts that can safely be ignored, the Financial Times noted.
"Before artificial intelligence, we'd have to assume that a lot of the data - say 90 percent - is fine. We would only have the bandwidth to analyze the other 10 percent," said Daniel Driver of Chemring Technology Solutions, part of UK defence group Chemring.
IBM is also developing its own AI security platform based on Watson. The 'cognitive computing' platform has been trained on a huge body of security research, most of which is written in human-readable form rather than as machine data. It can also work 60 times faster than a human investigator, cutting the time spent analyzing complex incidents from about an hour to less than a minute.
But an emerging technology may soon outperform IBM's Watson. Where machine learning and AI speed up the sorting of data, quantum computing could work on every data permutation simultaneously. Traditional computers store data as either a one or a zero, but quantum computers can hold information in more than just a one or a zero at a time. However, Driver said it may still take three to five years before quantum computing for specific tasks becomes widely available.
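The contrast above can be made concrete with a toy calculation: a classical n-bit register holds exactly one of its 2^n possible values at any moment, while an n-qubit register in an equal superposition carries an amplitude for all 2^n basis states at once. A minimal sketch in plain Python (this is an illustration of the counting argument, not a real quantum simulator; the function name is ours):

```python
import math

def uniform_superposition(n_qubits):
    """Return the state vector after putting n qubits, each starting in |0>,
    into an equal superposition: every one of the 2**n basis states gets the
    same amplitude, so one state vector 'covers' all permutations at once."""
    dim = 2 ** n_qubits
    amplitude = 1.0 / math.sqrt(dim)  # equal weight on every basis state
    return [amplitude] * dim

state = uniform_superposition(3)
print(len(state))                            # 8 basis states from just 3 qubits
print(round(sum(a * a for a in state), 6))   # probabilities still sum to 1.0
```

Each extra qubit doubles the number of basis states tracked, which is the intuition behind the claim that a quantum machine could examine every data permutation simultaneously rather than one at a time.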
With all their theoretical computing power, quantum computers could also prove useful in several military applications, such as near-instantaneous hacking of encrypted military servers and taking control of an enemy's infrastructure systems. They could also enhance the performance of unmanned and autonomous military vehicles; support the different phases of developing new weapons and even new war tactics; and predict how satellite software would behave after a solar burst or a nuclear pulse explosion.
However, like any other machine, AI technology in security systems is also vulnerable to being tricked. Hackers could exploit it by gradually teaching the system that unusual behavior is normal, a process known as behavioral drift. Fake human voices and video images could also help criminals gain access to a network.
Amid all these technological advances, Driver noted: "It's always a cat-and-mouse thing. As soon as you put the gate up higher, then the people will jump higher to get over it."