The Electron framework plays an important role in many widely used desktop applications - Skype, Slack, WhatsApp, and GitHub Desktop, to name just a few. As a cross-platform development tool, it offers developers the flexibility to build a variety of desktop applications from a single codebase.
Electron is an open-source framework with a relatively simple architecture: applications pair a Chromium-based front end with a Node.js back end, all driven by JavaScript. However, this architecture also leaves certain files exposed, allowing would-be attackers to inject a backdoor. Let’s take a closer look.
Electron apps are becoming a de facto standard for desktop development because they allow a good chunk of a web application’s code to be reused. As mentioned earlier, popular desktop applications such as Slack and VS Code are Electron apps. The major flaw with Electron apps, however, is that they are greatly exposed due to a lack of integrity protection: any attacker with access to the local file system can tamper with these applications and change their behavior. It is relatively simple to inject malicious code into a legitimate application without triggering any warnings, since the digital signature is not altered.
This inherent weakness was recently demonstrated by consultant Pavel Tsakalidis. To carry out the attack, it’s necessary to unpack Electron ASAR archive files, which yields numerous JavaScript files that are not obfuscated or protected in any way. As such, it’s very easy to inject malicious code into these JavaScript files (and into built-in Chrome browser extensions).
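To give a concrete sense of how little stands in the way, the sketch below unpacks and repacks an application bundle with the publicly available asar tooling. The @electron/asar package name and the file paths are illustrative assumptions, not a reproduction of Tsakalidis’ exact steps.

```javascript
// Minimal sketch: unpack an Electron app's ASAR archive, then repack it.
// Paths are illustrative; on macOS the archive typically lives under
// <App>.app/Contents/Resources/app.asar.
const asar = require('@electron/asar');

// Extract every file in the archive to a local folder as plain,
// readable JavaScript, HTML, and assets.
asar.extractAll('app.asar', './unpacked');

// ...edit any of the unpacked JavaScript files here...

// Repack the (possibly modified) files into a new archive that can
// simply replace the original on disk without breaking the signature check.
asar.createPackage('./unpacked', 'app.asar').then(() => {
  console.log('Archive rebuilt');
});
```

Because the application’s code-signing typically does not cover the contents of the ASAR archive at runtime, the repacked file loads just like the original.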
The vulnerability lies in the underlying Electron framework and allows malicious activity to be hidden within processes that appear harmless. During his demonstration, Tsakalidis showed a backdoored version of Microsoft Visual Studio Code that sent the contents of every code tab opened to a remote website.
Whilst remote attacks on Electron apps do not appear to be a current threat, there is certainly a backdoor threat: malicious code could go unnoticed inside an application and enable attackers to perform a myriad of attacks - taking screenshots of the app, activating the webcam, and exfiltrating data such as credentials and personally identifiable information.
So how do you prevent all of this? One way would be for Electron to roll out a secure code signing process, but no such mechanism exists today. In the meantime, application owners can minimize the impact of this kind of backdoor, for example by putting in place a Content Security Policy (CSP) that prevents attackers from directly sending exfiltrated data to a command and control (C2) server.
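As a rough sketch of this mitigation, an application owner could attach a restrictive CSP to every response from Electron’s main process, following the general pattern in Electron’s security guidance; the allowed origin below is a placeholder for the application’s own endpoints.

```javascript
// Minimal sketch: attach a restrictive Content-Security-Policy to every
// response loaded by the app, so injected code cannot freely send data
// to an arbitrary command-and-control server.
const { app, session } = require('electron');

app.whenReady().then(() => {
  session.defaultSession.webRequest.onHeadersReceived((details, callback) => {
    callback({
      responseHeaders: {
        ...details.responseHeaders,
        'Content-Security-Policy': [
          // connect-src limits where scripts may send network requests.
          "default-src 'self'; connect-src 'self' https://api.example.com"
        ]
      }
    });
  });
});
```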
However, as Tsakalidis’ research showed, a CSP only blocks part of this exploit’s capabilities - it helps limit data exfiltration but doesn’t prevent injections that enable keyloggers, screenshot capture, or webcam access.
A more universal alternative, and one that depends only on the application owner, is to make the application code itself tamper-resistant. This can be achieved with enterprise JavaScript protection, an approach that conceals the source code logic and adds further protective layers such as code locks and self-defending code. By making the JavaScript source code extremely hard to read and making the application react automatically to tampering attempts, JavaScript protection renders these attacks uneconomical.
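To illustrate the self-defending idea in its simplest possible form - this is a conceptual sketch, not how a commercial protection layer is implemented - an application could verify a hash of its own bundle at startup and react if it has been modified. The bundle filename and expected hash below are placeholders.

```javascript
// Rough sketch of a startup integrity check: hash the application's own
// bundle and compare it against a value baked in at build time.
// renderer.bundle.js and EXPECTED_HASH are placeholders; a real protection
// layer would go well beyond this single check.
const crypto = require('crypto');
const fs = require('fs');
const path = require('path');
const { app, dialog } = require('electron');

const EXPECTED_HASH = '<sha256 computed at build time>';

function bundleHash(filePath) {
  const contents = fs.readFileSync(filePath);
  return crypto.createHash('sha256').update(contents).digest('hex');
}

app.whenReady().then(() => {
  const bundlePath = path.join(__dirname, 'renderer.bundle.js');
  if (bundleHash(bundlePath) !== EXPECTED_HASH) {
    // React to tampering: warn the user and refuse to run.
    dialog.showErrorBox(
      'Integrity check failed',
      'This copy of the application has been modified.'
    );
    app.quit();
    return;
  }
  // ...continue normal startup...
});
```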
More advanced JavaScript protection technologies also give application owners real-time visibility of any attempt to debug or tamper with the application’s source code, providing an extra degree of protection and the readiness to contain attacks early.
As more and more companies adopt Electron, it becomes increasingly important for organizations to ensure that their applications cannot be tampered with. Developers of frameworks like Electron must act quickly to fix these vulnerabilities, but the stakes are too high for application owners to rely on that alone.