When is a cybersecurity hole not a hole? Never



In cybersecurity, one of the tougher questions is deciding when a vulnerability is a big deal that demands an immediate fix or workaround, and when it's trivial enough to ignore, or at least deprioritize. The tricky part is that a lot of this comes down to the dreaded security through obscurity, where a vulnerability is left in place and those who know about it hope no one else ever finds it. (Classic example: leaving a sensitive website unprotected, but hoping its very long and non-intuitive URL is never stumbled upon.)

And then there's the real problem: in the hands of a creative and well-resourced attacker, almost any hole can be used in unexpected ways. But (there is always a but in cybersecurity) IT and security professionals cannot pragmatically fix every single hole in their environment.

As I said, it’s difficult.

What brings this to mind is an intriguing M1 CPU hole found by developer Hector Martin, who named it M1racles and posted detailed thoughts on it.

Martin describes it as "a flaw in the design of the Apple Silicon M1 chip [that] allows any two applications running under the same operating system to covertly exchange data between them, without using memory, sockets, files, or any other normal operating system features. This works between processes running as different users and under different privilege levels, creating a covert channel for surreptitious data exchange. The vulnerability is baked into Apple Silicon chips and cannot be fixed without a new silicon revision."
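To make the mechanics concrete, here is a minimal sketch of the primitive involved, assuming clang on an affected M1 machine. The register name s3_5_c15_c10_1 comes from Martin's M1racles write-up rather than from this article, so treat the snippet as an illustration of the covert-channel building block, not a working exploit; on unaffected hardware the register access will simply fault.

```c
/*
 * Minimal sketch of the M1racles primitive: a cluster-wide Arm system
 * register (s3_5_c15_c10_1, per Martin's write-up) that is readable and
 * writable from ordinary user space, so two cooperating processes can
 * use it as a tiny shared mailbox without memory, sockets, or files.
 */
#include <stdint.h>
#include <stdio.h>

static inline void covert_write(uint64_t v) {
    __asm__ volatile("msr s3_5_c15_c10_1, %0" : : "r"(v));
}

static inline uint64_t covert_read(void) {
    uint64_t v;
    __asm__ volatile("mrs %0, s3_5_c15_c10_1" : "=r"(v));
    return v;
}

int main(void) {
    covert_write(0x2);  /* a second, unrelated process could observe this value */
    printf("shared register now reads: 0x%llx\n",
           (unsigned long long)covert_read());
    return 0;
}
```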

Martin added, "The only mitigation available to users is to run your entire operating system as a VM. Yes, running your entire operating system as a VM has a performance impact," and then advised users not to do so, given that performance hit.

This is where things get interesting. Martin argues that, in practice, this is a non-problem.

"Really, nobody is going to find a nefarious use for this flaw in practice. Besides, there are already a million side channels you can use for cooperative cross-process communication (cache stuff, for example) on any system. Covert channels cannot leak data from uncooperative apps or systems. In fact, that is worth repeating: covert channels are completely useless unless your system is already compromised."

Martin originally said the flaw could be easily mitigated, but he has since changed his mind. "I thought the register was per-core at first. If it were, you could simply wipe it on context switches. But since it is per-cluster, unfortunately, we are kind of screwed, since you can do cross-core communication without going into the kernel. Other than running in EL1/0 with TGE=0 (i.e., inside a VM guest), there is no known way to block it."

Before you relax, consider Martin's thoughts on iOS: "iOS is affected, like all other operating systems. The vulnerability has unique privacy implications on iOS, as it could be used to bypass some of its stricter privacy protections. For example, keyboard apps are not allowed to access the internet, for privacy reasons. A malicious keyboard app could use this vulnerability to send text the user types to another malicious app, which could then send it to the internet. However, since iOS apps distributed through the App Store are not allowed to build code at runtime (JIT), Apple can automatically scan them at submission time and reliably detect any attempt to exploit this vulnerability using the static analysis it already performs. We do not know whether Apple is carrying out these checks or has already done so, but it is aware of the potential problem and it is reasonable to expect it will. It is even possible the existing automated analysis already rejects any attempt to use system registers directly."
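To illustrate the kind of automated check Martin is alluding to, here is a toy scanner, emphatically not Apple's actual tooling, that walks a flat buffer of AArch64 instructions and flags direct MRS/MSR accesses to that system register. The opcode mask and value were derived by hand from the generic Arm MRS/MSR encoding and the register's op0/op1/CRn/CRm/op2 fields, so treat them as an assumption to double-check.

```c
/*
 * Toy static check (illustrative only): flag AArch64 instruction words that
 * directly read or write s3_5_c15_c10_1.
 *
 * ASSUMPTION: the pattern below was hand-derived from the Arm MRS/MSR
 * encoding with op0=3, op1=5, CRn=15, CRm=10, op2=1; verify before use.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define M1RACLES_MASK  0xFFDFFFE0u  /* ignore the read/write bit and the Rt field */
#define M1RACLES_VALUE 0xD51DFA20u

static void scan(const uint32_t *insns, size_t count) {
    for (size_t i = 0; i < count; i++) {
        if ((insns[i] & M1RACLES_MASK) == M1RACLES_VALUE)
            printf("suspicious system-register access at instruction %zu: 0x%08x\n",
                   i, (unsigned)insns[i]);
    }
}

int main(void) {
    /* Hand-assembled sample: a NOP followed by "msr s3_5_c15_c10_1, x3". */
    uint32_t code[] = { 0xD503201F, 0xD51DFA23 };
    scan(code, sizeof code / sizeof code[0]);
    return 0;
}
```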

This is where I get worried. The security mechanism here is to rely on Apple's App Store reviewers to catch an app trying to exploit the flaw. Really? Neither Apple nor Google (with Android) has the resources to properly review every submitted app. If an app looks good at first glance, an area where professional bad guys excel, both mobile giants are likely to wave it through.

In an otherwise excellent piece, Ars Technica said, "The covert channel could circumvent this protection by passing the key presses to another malicious app, which in turn would send them over the internet. Even then, the chances that two apps would pass Apple's review process and then get installed on a target's device are far-fetched."

Far-fetched? Really? IT should trust that this vulnerability won't cause any damage because there is supposedly no chance an attacker will successfully exploit it, a conclusion that in turn rests on Apple's reviewers catching a problematic app? That's pretty scary logic.

This brings us back to my original point. What's the best way to deal with holes that would take a lot of work and luck to become a real problem? Given that no company has the resources to properly address every single gap in its systems, what should an overworked, understaffed CISO team do?

Still, it's refreshing when a developer finds a hole and then plays it down as no big deal. But now that the hole has been documented in impressive detail, my money is on a cyberthief or ransomware extortionist figuring out how to use it. I'd give them less than a month.

Apple needs to be pressured to fix this as soon as possible.

Copyright © 2021 IDG Communications, Inc.


