2017-07-30:

The mystery of two file descriptors

debugging:security:iface
At my last livestream, around 1:02:15, I tried to show an old (as in: 2006) GDB detection trick relying on the fact that GDB "leaked" two file descriptors into the child process, i.e. the child process was spawned with 5 descriptors already allocated instead of the default 3 (stdin/stdout/stderr, or 0/1/2). So I created a small program that opened a file (i.e. allocated the next available file descriptor) and printed the descriptor, compiled it and executed it (without GDB), assuming that the number 3 would be printed. Instead, 5 showed up and left me staring in amazement, wondering what had just happened. Since investigating this wasn't really the topic of my livestream I ended there, but today I found a few minutes to look into the mysterious file descriptors. As expected, in the end it turned out to be a mix of my own mistake and the unexpected behaviour of other programs. Furthermore, the descriptors could be used to escalate privileges under some very specific and weird conditions. To sum up - it turned out to be a fun bug.
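
For reference, here's a minimal sketch of the check described above (my reconstruction in Python - the original was a small compiled C program, but the idea is identical):

 import os

 # Allocate the lowest free descriptor; with only stdin/stdout/stderr open,
 # this should print 3. Anything higher means extra descriptors were
 # inherited from the parent process (e.g. an old GDB).
 fd = os.open("/dev/null", os.O_RDONLY)
 print(fd)
 os.close(fd)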

My investigation began with determining the nature of the file descriptors - so I started with the standard ls -l /proc/self/fd, followed by lsof:

00:27:30 gynvael:haven> ls -l /proc/self/fd
total 0
lrwx------ 1 gynvael gynvael 64 Jul 30 00:27 0 -> /dev/pts/1
lrwx------ 1 gynvael gynvael 64 Jul 30 00:27 1 -> /dev/pts/1
lrwx------ 1 gynvael gynvael 64 Jul 30 00:27 2 -> /dev/pts/1
lrwx------ 1 gynvael gynvael 64 Jul 30 00:27 3 -> socket:[36875]
lrwx------ 1 gynvael gynvael 64 Jul 30 00:27 4 -> socket:[36612]
lr-x------ 1 gynvael gynvael 64 Jul 30 00:27 5 -> /proc/3800/fd

00:28:31 gynvael:haven> cat wth.c
#include <stdio.h>
int main(void) {
 getchar();
 return 0;
}

00:28:42 gynvael:haven> gcc wth.c && ./a.out &
[1] 3831

00:29:21 gynvael:haven> lsof | grep `pgrep a.out`
...
a.out  ...  3u  IPv4  ...  TCP localhost:33321 (LISTEN)
a.out  ...  4u  IPv4  ...  TCP haven5:38666->192.168.56.1:33321 (ESTABLISHED)

Sockets related to port 33321 - this actually rang a bell! These are sockets from my Windows↔Linux RPC interface.

The weird thing is that I could have sworn I had never noticed them before, and I'm sure I listed file descriptors more than once during the years I've been using this Windows↔Linux setup. There was, however, one thing that changed a few months ago - due to a bug in newer versions of GNOME Terminal on Ubuntu Server (i.e. if you don't have a full graphical environment installed, it runs only as root for some reason) I recently switched to xterm. Maybe one terminal emulator made sure child processes got only stdin/stdout/stderr, while the other just passed on the descriptors it inherited?



It turned out that that was (almost) exactly the case. I've done a quick test on three emulators (KDE Konsole, xterm and GNOME Terminal) and indeed the first two passed on all inherited handles, while GNOME Terminal didn't exhibit this behaviour. However, after further investigation it turned out that the latter is actually a side effect of GNOME Terminal being launched by dbus-daemon after an RPC call via dbus-launch - so the clean descriptor table comes from dbus-daemon being the parent, not from GNOME Terminal itself.

Regardless of the above, the terminal emulators are of course not to blame for "leaking" the handles - my RPC interface is.

The solution for this is rather obvious - just close the handles in the forked process when launching the terminal emulator. Since I'm using Python's subprocess.call (with shell=True), I could achieve this in several ways:

1. Add closing the sockets (i.e. 3<&- 4<&-) to the issued commands:

 command = "(cd '%s'; /usr/local/bin/xterm -fa 'Monospace' -fs 14 -bg 'rgb:00/00/00' -xrm '*metaSendEscape:true' -xrm '*eightBitInput:false' 3<&- 4<&- & )" % cwd

 # Spawn.
 return subprocess.call(command, shell=True)

2. Set the FD_CLOEXEC flag on the descriptors in question (e.g. as shown here) - this makes them close automatically when execve is invoked.

3. When creating the socket use the SOCK_CLOEXEC option.

I initially went for the first approach (good enough for testing), but for the sake of the patch pushed to GitHub I wanted a less "hacky" method, so I settled on SOCK_CLOEXEC. Sadly, it turned out that Python doesn't support this option until the Python 3 family, so I had to fall back to FD_CLOEXEC. The fix has been pushed to GitHub.
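
For completeness, a minimal sketch of the FD_CLOEXEC fallback (the socket below is just a stand-in for the RPC sockets in my script):

 import fcntl
 import socket

 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

 # Set FD_CLOEXEC on the underlying descriptor; from now on a successful
 # execve() in any child process closes it automatically.
 flags = fcntl.fcntl(sock.fileno(), fcntl.F_GETFD)
 fcntl.fcntl(sock.fileno(), fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)

 # The Python 3 family can instead create the socket with SOCK_CLOEXEC set:
 # sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_CLOEXEC)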

Since the problem was fixed, I started thinking about what the actual severity of this mistake was. I came to the conclusion that this might be a funny (almost horizontal) local privilege escalation vulnerability if the sockets were ever passed to a child process running under a different (less trusted / less privileged) user.

The above would be possible due to a somewhat embarrassing bug in the CMD_l_cmd RPC call. By design this call should only allow the terminal to be executed in the specified location; however, it seems I messed up escaping the shell characters:

 # Spawn the terminal.
 cwd = cwd.replace("'", "\\'")
 command = "(cd '%s'; /usr/local/bin/xterm -fa 'Monospace' -fs 14 -bg 'rgb:00/00/00' -xrm '*metaSendEscape:true' -xrm '*eightBitInput:false' 3<&- 4<&- & )" % cwd

That's not good at all - \' doesn't escape ' in Bash, because inside single quotes a backslash is just a literal character; to embed a single quote you have to use '"'"' (seriously). In the current form one could do a standard '; evil code; ' injection and get the command executed with the privileges of the user running the RPC script.
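
To make the failure mode concrete, here's a small repro with a simplified command template (the payload is my own construction - note the trailing \', needed to keep the final command's quoting balanced - and "touch /tmp/pwned" just stands in for the evil code):

 # Attacker-controlled "directory name" arriving via the RPC call.
 cwd = "'; touch /tmp/pwned; \\'"

 cwd = cwd.replace("'", "\\'")          # the buggy "escaping" from above
 command = "(cd '%s'; xterm & )" % cwd  # simplified version of the template
 print(command)
 # Prints: (cd '\'; touch /tmp/pwned; \\''; xterm & )
 # Inside single quotes the backslash is a literal character, so the first
 # \' actually *closes* the quoted string - and "touch /tmp/pwned" then runs
 # as a separate shell command.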


[drawing: my Windows↔Linux RPC setup - the Windows endpoint, the Linux VM endpoint, and the local RPC sockets, including the listening socket "C"]

One thing left to do is to actually call the proper CMD_l_cmd implementation, though this isn't really straightforward (or maybe it is?) given that neither of the two sockets is connected to the "Linux VM" endpoint (see drawing above) - to be more specific, only the Linux VM endpoint implements the CMD_l_cmd call in a way that executes the command. This would be the place where I'd say "so the bug is not exploitable after all", but that turns out not to be the case, for two reasons:

1. While my RPC's description begins with the words "Rather bad", it still requires knowing a secret key to be able to call the RPC. However, since the child process inherited the local RPC listening socket (i.e. socket "C" on the drawing above), it can "race" the actual RPC daemon to accept() an incoming local connection (e.g. one that happens if I click on a link in a Linux process and want it loaded in my Windows Chrome browser), and then happily receive the secret key from the connected RPC client (so it seems "Rather bad" holds true after all, as this would not happen if there were a proper private/public-key mutual authentication scheme in play). The key can then be used to connect to the local RPC interface, authenticate, and issue the CMD_l_cmd call (a rough sketch of this race follows the list below).

2. That said, the above is not even needed: while the Windows endpoint doesn't actually execute the CMD_l_cmd call, it does by default forward it to the Linux VM. Well then.
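
A rough sketch of the race from point 1 (the descriptor number, and the assumption that the client sends the key right after connecting, are taken from my setup as described above):

 import socket

 # The child inherited the listening RPC socket - fd 3 in the lsof output
 # above. fromfd() duplicates it into a usable socket object.
 inherited = socket.fromfd(3, socket.AF_INET, socket.SOCK_STREAM)

 # Both this process and the real RPC daemon can now block in accept() on
 # the same listening socket - whichever wins gets the next client...
 client, addr = inherited.accept()

 # ...along with whatever the client sends first, i.e. the secret key.
 key = client.recv(1024)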

Thankfully, all of the above can be fixed by just doing a simple cwd = cwd.replace("'", "'\"'\"'") escape (unless... someone knows a bypass for this? If so, please let me know in the comments down below).
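
To see why this works: each embedded ' becomes '"'"', which closes the single-quoted string, emits a double-quoted ', and reopens the single quotes. A quick sanity check with the simplified template again:

 cwd = "'; touch /tmp/pwned; \\'"   # the same hostile input as before

 cwd = cwd.replace("'", "'\"'\"'")  # the fixed escape
 command = "(cd '%s'; xterm & )" % cwd
 print(command)
 # Prints: (cd ''"'"'; touch /tmp/pwned; \'"'"''; xterm & )
 # Every attacker-supplied quote is now itself safely quoted, so the whole
 # payload stays a single (harmless) argument to cd.

(Python's pipes.quote - shlex.quote in the Python 3 family - performs essentially the same substitution.)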

In the end, the mystery of the two additional sockets was indeed fun and led to the discovery of another interesting bug. And while the severity was there, the risk was minimal (well, at least in my use case), as the exploitation scenario is really unlikely.

Comments:

2017-07-30 10:30:19 = redeemer
{
Nice!
}
2017-07-30 20:01:43 = Typo
{
Grammar typo

"I've did a" >> "I did a" or "I've done a"
}
2017-07-30 20:09:16 = shdown
{
Just cwd = cwd.replace("'", "'\\''") also works.
}
2017-07-30 20:42:20 = Gynvael Coldwind
{
@redeemer
Ty;)

@Typo
Thx, fixed@

@shdown
Makes sense :)
}
2017-07-31 00:12:29 = Rich
{
You want shlex.quote() I believe. :-)
}
