Thursday, August 30, 2018

Download All Corporate Git Repos

Depending on the client you are testing, they may have an internal development team that checks code into a git repo. The vast majority of the clients I've seen implement the Atlassian suite of tools, typically containing an internally hosted Bitbucket.

The Bitbucket web interface has a search feature for looking up code snippets. It's absolutely awful. It's like an off-brand Tonka toy reject of a search function. You know what's way better? grep. That means I'd have to download every repo to search it locally. I did that with this script:
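A minimal sketch of such a script, assuming a Bitbucket Server instance and its standard REST API (the server URL, credentials, and clone handling below are placeholders):

import subprocess
import requests

BASE = "https://bitbucket.example.com"  # hypothetical internal server
AUTH = ("username", "password")         # your Bitbucket credentials

# Page through every repo the account can see and git-clone each one
url = BASE + "/rest/api/1.0/repos?limit=100"
while url:
    page = requests.get(url, auth=AUTH).json()
    for repo in page["values"]:
        # Pick the HTTP clone link for this repo
        clone = next(link["href"] for link in repo["links"]["clone"]
                     if link["name"] == "http")
        subprocess.run(["git", "clone", clone])
    if page.get("isLastPage", True):
        url = None
    else:
        url = BASE + "/rest/api/1.0/repos?limit=100&start=%d" % page["nextPageStart"]

Once everything is cloned, a single recursive grep across the parent directory searches every repo at once.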

It's handy to note that grepping isn't the only benefit of cloning repos locally. It also lets you run the myriad of vuln-checker tools, load the code into an IDE and run source/sink analysis on it, and much more.

Wednesday, August 29, 2018

Brute Force LDAP Names (or how I kinda downloaded LDAP)

Running queries over a network with the ldapsearch tool can be a bit annoying, especially when large result sets constantly hit the "size limit exceeded" error.

I decided to write a little tool to recursively and conditionally search LDAP for CN entries (basically AD account names) and download them locally. If it detects the size limit error, it automatically appends another character to the search prefix and drills down further.

It works fantastically well. After you run this tool you should have many .out files containing LDAP query responses. Grep to your heart's content:
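A minimal sketch of the recursive approach, shelling out to ldapsearch (the server, base DN, and the exact size-limit string are placeholder assumptions):

import string
import subprocess

SERVER = "ldap://ldap.example.com"  # hypothetical target
BASE_DN = "dc=example,dc=com"       # hypothetical base DN
CHARSET = string.ascii_lowercase + string.digits

def search(prefix):
    # Query for all CNs starting with this prefix
    result = subprocess.run(
        ["ldapsearch", "-x", "-H", SERVER, "-b", BASE_DN,
         "(cn=%s*)" % prefix, "cn"],
        capture_output=True, text=True).stdout
    if "Size limit exceeded" in result:
        # Too many hits: drill down one character deeper
        for c in CHARSET:
            search(prefix + c)
    else:
        with open(prefix + ".out", "w") as f:
            f.write(result)

for c in CHARSET:
    search(c)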

Thursday, August 23, 2018

Apache Struts 2 Vulnerability & Exploit (CVE-2018-11776)

Yesterday a new vulnerability in certain versions of Apache Struts (2.3 through 2.3.34, 2.5 through 2.5.16) was discovered that leads to RCE. Exploitation requires both a vulnerable version and a vulnerable configuration.

The gist of the issue is that if a configuration doesn't supply a namespace for an action, Struts will take the user-specified namespace from the URL instead. Fortunately for us, it then evaluates that namespace as an OGNL expression, allowing you to fairly easily get remote code execution.

Working PoC (I tested it myself and it works):
https://github.com/jas502n/St2-057

Technical deep dive on finding the vulnerability:
https://lgtm.com/blog/apache_struts_CVE-2018-11776

Vuln writeup by Semmle (including conditions for vulnerable configurations):
https://semmle.com/news/apache-struts-CVE-2018-11776

Apache's security bulletin for the vuln:
https://cwiki.apache.org/confluence/display/WW/S2-057

Mitre CVE link:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11776

A couple caveats I found while testing:
  • It definitely requires the namespace attribute to be missing from the action's XML configuration
  • All that is required for successful exploitation is a single proper GET request
  • Doesn't work on all struts-showcase installs (2.3.15 wasn't working for some reason), making me think it may be a bit finicky
I modified the PoC listed above into a simple Python function:
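Here's a minimal sketch of such a function, using the widely shared ${233*233} arithmetic probe to confirm the namespace is being evaluated as OGNL (the full command-execution payload lives in the PoC repo above; the action name and target below are assumptions):

import requests

def check_s2_057(base_url, action="actionChain1.action"):
    # Inject ${233*233} as the namespace; on a vulnerable install the
    # evaluated result (54289) shows up in the redirect Location header
    url = "%s/${233*233}/%s" % (base_url, action)
    resp = requests.get(url, allow_redirects=False, timeout=10)
    return "54289" in resp.headers.get("Location", "") + resp.text

print(check_s2_057("http://192.168.1.50:8080/struts2-showcase"))  # hypothetical VM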

Below you can see it running against a vulnerable VM I set up:

Wednesday, August 15, 2018

Twitter Controlled Anything - Micropython on ESP32

I recently purchased an ESP32 from Amazon for testing purposes, and a colleague mentioned you could install a minimalist Python environment on them for control. To say the least, I was intrigued.

I wanted to be able to control a light (or anything, really) using tweets. Below are the instructions/scripts I wrote to get it working. First, the prerequisites:
  • ESP32 (duh)
  • A VPS, Pi, or really any computer acting as a flask server (it just needs internet access)
  • A wifi network for the ESP32 to connect to; I just used my phone's hotspot as a PoC
  • Twitter API credentials (really easy to get, just fill out the forms)
TLDR:
Your ESP32 will query your Flask server for whether to enable the LED. The Flask server will query Twitter for your latest tweet; if it contains the trigger word, the server relays that to the ESP32 client. Boom, a tweet turns on the LED.

The first step is to get your ESP32 set up running the MicroPython environment. I followed this excellent guide.

Once you get your ESP32 configured to run python code, go ahead and transfer the following script to act as the client. You just need to change the wifi details and target flask server:

import machine
import urequests
import time

pin = machine.Pin(2, machine.Pin.OUT)  # LED connected to pin 2

def connect():
    import network
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        print('connecting to network...')
        sta_if.active(True)
        sta_if.connect('WIFINETWORKSSIDHERE', 'WIFIPASSWORDHERE')
        while not sta_if.isconnected():
            pass
    print('network config:', sta_if.ifconfig())

def no_debug():
    import esp
    # this can be run from the REPL as well
    esp.osdebug(None)

no_debug()
connect()

while True:
    time.sleep(2)
    # Poll the flask server; 'yes' means the trigger word was tweeted
    resp = urequests.get('http://MYFLASKDOMAINHERE.com:5000')
    if 'yes' in resp.text:
        pin.value(1)
    resp.close()  # close the response to free sockets/memory on the ESP32

Connect the LED to Pin 2 on the ESP32 and it's all set to go. Now onto the flask server...

On your VPS/Pi/whatever, install flask and tweepy and create a directory to hold your script files. Grab the Access Token, Access Secret, Consumer Secret, and Consumer Key from the Twitter dev console you set up earlier and place them in a "twitter_creds.py" file like the following:

ACCESS_TOKEN = '18077065-lakjsdflkajshdlfkajshdflkajsdhqqSYOtHSXtK1'
ACCESS_SECRET = 'hPqlkwjehrlkfjnlqwejhqrwklejrqhlwkejrJr1'
CONSUMER_KEY = 'QZlk9qlkwejrhqlkwjerhlqwlLh'
CONSUMER_SECRET = 'uEnkzjxcnvluqblwjbefkqwlekjflkqjwehflqlkjhuOE'

Then paste the following into "tweepy_top.py" (the underscore matters, since the Flask app imports it as a module):

from twitter_creds import *

import tweepy

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)

api = tweepy.API(auth)

def get_top_tweet():
    top_tweet = api.user_timeline(count=1)
    return top_tweet[0].text

Now create your main flask app by pasting the following into "flaskhello.py":

from flask import Flask
from tweepy_top import get_top_tweet

app = Flask(__name__)

@app.route("/")
def hello():
    if 'light' in get_top_tweet():
        return 'yes'
    else:
        return 'no'

if __name__ == "__main__":
    app.run(host="0.0.0.0", threaded=True)

There you can see 'light' is used as the trigger word. With this setup, every 2 seconds the ESP32 makes a request to your Flask server, which queries Twitter for your latest tweet. If that tweet contains the word "light", the server returns the string "yes"; the ESP32 sees the "yes" and turns on pin 2.

This is a very simple PoC, but it gets the job done. You can take this and expand it in a thousand directions; some ideas:

  • A desktop counter that keeps track of your followers, retweets, likes, etc.
  • An LED scroller that outputs your latest mentions
  • Or simply use Twitter as the control for some device
The options are endless...enjoy :D

Thursday, August 2, 2018

Top 100 Ingredients From HomeChef Recipes

I love cooking; I consider it my primary hobby outside of infosec/coding. I had HomeChef for several months and absolutely loved it. I looked forward to each selection every week and always got to try new techniques/flavors/combinations I probably would never have tried on my own.

Every meal they sent us had a double-sided recipe page to guide you through the process. I noticed something at the bottom of the recipe page:

They have a handy link for each recipe posted on their website, probably so it's easy to share what you made with family/friends. The fact that I saw a number, along with a list of ingredients, got me thinking...

If I were to stock my pantry/fridge with "basic" ingredients, what would it look like? How about I count up the occurrences of certain ingredients on each recipe page, that should give me a good idea.

Well, after gathering the data over a couple of days (I kept everything slow so as not to cause any problems), I present to you the top 100 ingredients according to recipes on HomeChef.com:

Count Ingredient
609 Garlic Cloves
433 Butter
345 Boneless Skinless Chicken Breasts
320 Green Onions
315 Shallot
219 Lemon
210 Sour Cream
208 Lime
202 Red Onion
186 Grape Tomatoes
178 Red Bell Pepper
170 Parsley Sprigs
158 Yellow Onion
154 Mayonnaise
153 Red Pepper Flakes
152 Honey
149 Liquid Egg
148 Cremini Mushrooms
142 Russet Potatoes
142 Green Beans
138 Jasmine Rice
135 Cilantro Sprigs
134 Sugar
131 Chopped Ginger
125 Baby Arugula
122 Grated Parmesan Cheese
119 Carrot
117 Roma Tomato
114 White Cooking Wine
104 Baby Spinach
102 Grated Parmesan
101 Panko Breadcrumbs
100 Light Cream
100 Shredded Mozzarella
99 Slaw Mix
98 Jalapeño Pepper
95 Cilantro
95 Shrimp
94 Thyme Sprigs
93 Zucchini
93 Garlic Clove
89 Parsley
88 Sriracha
87 Dijon Mustard
84 Sirloin Steaks
82 Cornstarch
76 Heavy Whipping Cream
76 Light Brown Sugar
75 Seasoned Rice Vinegar
74 Romaine Heart
73 Pork Tenderloin
72 Kale
72 Shredded Cheddar Cheese
72 Asparagus
71 Sweet Potato
71 Flour
69 Roasted Red Peppers
69 Spinach
69 Ground Beef
68 Salmon Fillets
68 Matchstick Carrots
68 Toasted Sesame Oil
62 Brussels Sprouts
62 Soy Sauce - Gluten-Free
61 Carrots
60 Mini Baguette
57 Small Flour Tortillas
57 Persian Cucumber
57 Basil Pesto
56 Green Onion
56 Ground Turkey
54 Teriyaki Glaze
54 Radishes
54 Red Fresno Chile
53 Beef Demi-Glace
52 Ear of Corn
51 Basil Sprigs
51 Roasted Chicken Breast
50 Roma Tomatoes
50 Blue Cheese
50 Canned Evaporated Whole Milk
49 Marinara Sauce
49 Extra Firm Tofu
48 Smoked Paprika
47 Balsamic Vinegar
47 Naan Flatbreads
47 Bacon Strips
47 Chicken Demi-Glace
46 Taco Seasoning
45 Avocado
45 Broccoli Florets
45 Frozen Peas
44 Chives
44 Corn Kernels
44 Plain Greek Yogurt
44 Tilapia Fillets
43 Navel Orange
43 Feta Cheese
43 Bone-in Pork Chops

What would a post be without some code? Below is the embarrassing Python script (hey, it worked...) that parses the HTML:
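A minimal sketch of that scraper (the recipe URL scheme and ID range are assumptions; the XPATH it uses is explained in the notes below):

import time
from collections import Counter

import requests
from lxml import html

counts = Counter()
for recipe_id in range(1, 5000):  # hypothetical ID range
    resp = requests.get("https://www.homechef.com/meals/%d" % recipe_id)
    if resp.status_code != 200:
        continue
    tree = html.fromstring(resp.content)
    # Grab every tag marked up as a recipe ingredient
    for node in tree.xpath('//*[@itemprop="recipeIngredient"]'):
        counts[node.text_content().strip()] += 1
    time.sleep(2)  # keep it slow so as not to cause any problems

for ingredient, count in counts.most_common(100):
    print(count, ingredient)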

XPATH Notes (how to grep xpath)

XPATH is a query language for XML document trees. Lots of web scrapers use it, since HTML can be parsed into an XML-style tree.

Your basic "grep"-like XPATH query is something like the following:

  • //*[@itemprop="recipeIngredient"]
Breakdown:

  • // = search from the root of the tree, matching nodes at any depth
  • * = match any tag, anywhere in the document; otherwise replace it with a specific tag name
  • [blah] = evaluate the condition blah inside the brackets
  • @itemprop = this is how you reference attributes instead of tags
  • [@itemprop] = the condition is: the itemprop attribute exists on the tag
  • [@itemprop="recipeIngredient"] = the condition is: the itemprop attribute's value is "recipeIngredient" (note that XPATH is case-sensitive)
Another example: if I wanted to find anything that links to example.com in an XML document, I'd search for any tag whose href attribute is 'example.com', like so:
  • //*[@href='example.com']
Or limit it just to direct hyperlinks ("a" tags):
  • //a[@href='example.com']
(For a substring match instead of an exact match, you'd use the contains() function; see the PS below.)
XPATH has a lot more functionality than this but this is mostly what I need it for.

PS.
The expression in the condition brackets "[blah]" can be used with certain functions: https://www.w3schools.com/xml/xpath_syntax.asp
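For instance, with lxml you can use contains() to do a true substring match on an attribute (hypothetical snippet):

from lxml import html

doc = html.fromstring('<div><a href="http://example.com/x">hi</a></div>')
# Matches any "a" tag whose href merely contains example.com
print(doc.xpath("//a[contains(@href, 'example.com')]"))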

Wednesday, August 1, 2018

Finding Interesting Files Using Statistical Analysis

I noticed a pattern when scrounging for target data on pentests. Most of the time, when I find valuable data (test creds/log data/unencrypted logs/etc.), it's in files that are in some way different from those around them. Sometimes it's the filename, like when you have 400 files named "NightlyLogDATE" and you spot a "NightlyLogDATE.bak". It also tends to happen with file sizes: in a directory where almost every file is around 400-600KB, a couple will be megabytes big or only a couple KB.

These files are "interesting" to me because they differ in some way; they are the outliers. Sometimes they're temporary backup files where a tech needed to test credit card processing with encryption turned off, or maybe some error pumped traceback/debug output into an otherwise normal file.

I decided to scrounge around online and stitch together a script that reports these outlier files.

The following script looks in the target directory, calculates the median absolute deviation of the file sizes, compares each file against a threshold, and returns the filenames for you to prioritize pillaging.
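A minimal sketch of that idea (the 3.5 cutoff is a common rule-of-thumb assumption, and the 0.6745 MAD-to-standard-deviation scaling factor comes with it):

import os
import sys
from statistics import median

def find_outliers(directory, threshold=3.5):
    paths = [os.path.join(directory, f) for f in os.listdir(directory)]
    paths = [p for p in paths if os.path.isfile(p)]
    sizes = [os.path.getsize(p) for p in paths]
    med = median(sizes)
    mad = median(abs(s - med) for s in sizes)
    if mad == 0:
        return []  # every file is (nearly) the same size: nothing to report
    outliers = []
    for path, size in zip(paths, sizes):
        # Modified z-score: 0.6745 scales MAD to approximate a standard deviation
        score = 0.6745 * (size - med) / mad
        if abs(score) > threshold:
            outliers.append(path)
    return outliers

if __name__ == "__main__":
    for path in find_outliers(sys.argv[1]):
        print(path)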

It's fairly basic so I'm happy to accept any code donations :D