this poor, neglected blog…

i had this wonderful goal of staying up-to-date on here with things i’ve been doing, but sadly i’ve been failing. i can’t say this is a promise to do better, but i can say that this is an admission that i’ll have half a moment to breathe once this month is up.

the last entry in here was right after Shmoocon…aligning pretty well with a major career-related move for me. i’m still at Accuvant LABS, but as of March 1, i’m no longer a Consulting Service Engineer, but rather an Associate Security Consultant. basically, this means that instead of spending my days scoping security gigs for other people to do, i’m actually conducting vulnerability assessments and pen tests. and…let me tell you. as much as i’ve goofed around with security in my spare time, as much as i’ve picked up along the way from tinkering and attending con talks and picking the brains of far smarter security folk than i, there’s really nothing like doing my first this-is-for-an-actual-job pen test to smack me in the face and remind me just how much i still need to learn and improve. i feel like with each gig i do, i’m internalizing a bit more and getting a bit better. though, i have a lot to learn as far as…just being more effective: knowing how to leverage the information i get, and isolating exactly which rabbit holes i should be going down.

aside from that, there’s quite a bit of conference craziness coming up. Notacon, Thotcon, and Security B-Sides Chicago are all happening within the next two weeks or so, making it a crazy month for me. at Notacon, i’m hosting Whose Slide Is It Anyway again, so i’ve been trying to come up with plenty of crazy slides for people to present. at Thotcon, i can catch a breath…i’m not speaking, i’m not con staff, i’m just there to say hi to everyone and enjoy the talks. at BSidesChicago, i’m giving an expanded version of my CTF talk — thirty minutes instead of fifteen. i’m really looking forward to that; even though i still don’t feel like it’s an hourlong talk, there were a few things i wanted to talk about that i didn’t quite have the time to hit at FireTalks, and my thirty-minute slot at BSidesChicago should be perfect for that. in addition to speaking, i’m also volunteering for the day as a speaker wrangler. so, i’ve got a busy day ahead of me there.

and, if that wasn’t enough on my plate, the call for papers for Security B-Sides Detroit closes on the 27th, so i need to figure out what i’m proposing for that. i’m hoping i can think of something interesting and new for that conference, though i’ve been so focused on getting up to speed for work that other projects haven’t really been coming to me as i wish they were. we’ll see. hopefully soon; hopefully once i feel like i have my feet under me at least a little bit.

if you can open the terminal, you can capture the flag.

this past weekend, i attended Shmoocon.

on Friday night, i presented at Shmoocon Firetalks, a series of fifteen-minute talks put on by NoVA Infosec. my talk was entitled “If You Can Open The Terminal, You Can Capture The Flag: CTF for Everyone”, and discussed what to expect in CTF competitions, pitfalls to avoid when getting started, and how to get started doing CTFs and CTF-style problems. because Irongeek is amazing, all of the Shmoocon Firetalks are online, and available to watch.

this was my talk, for anyone interested:

thank you so much to NoVA Infosec for hosting Firetalks and giving me the opportunity to speak, to Irongeek for recording and posting the talk, and to J Wolfgang Goerlich for giving me the idea for this talk.

if you are interested in the slides, they are available for download here. as always, please comment here, email me at rogueclown *at* rogueclown *dot* net, or ping me on twitter if you have any questions, comments, or suggestions!

use your head(ers)

this is another problem i solved in the Nullcon HackIM CTF this past weekend. this time around, it’s Web Security 300.

there was no description of the problem: just a link to a login.php page. no matter what you typed in as a login and password, it just sent you back to the login screen, with a message “Our database server is down at this moment. sorry for the inconvenience!!”. i hunted through the page source to no avail. i also tried some SQL injection tricks just for good measure, but none of them did anything. i can’t say that i actually expected them to, since the “database server is down” suggested there wasn’t any kind of database server at all, but it was all i had. eventually, i wandered off to whack my head against other problems.

the real break in the case came Sunday morning, when j3remy messaged me to say that the source code for the PHP application was at login.phps. i took a look at it, and it had an interesting chunk of PHP right at the top:

if( isset( $_POST[ 'Login' ] ) ) {
    if ( "Username & Password is correct" ) {
        // Authenticated
        header("Location: onepage.php");
    } else {
        // Not authenticated
        header("Location: login.php");
    }
}

i hadn’t seen this “onepage.php”, or any link to that, before. since it was associated with being authenticated, i assumed that’s where i wanted to be. i tried to just load onepage.php in my browser, but it just gave me that login page yet again.

so, i did a bit of looking into the “Location” header. turns out, it’s a header used in HTTP redirect responses to tell the browser where to redirect. since the PHP script wasn’t being a doll and setting that header for me, i decided that i would do it myself by forging that response in Burp. i set it to intercept both requests and responses, and sent another set of junk credentials to the form. as expected, the server sent back a 200 OK response with that same login page and the same message about the broken database. however, in Burp i changed the 200 OK response code to a 302 Found, and added a Location: header. sure enough, my browser responded with a GET request for onepage.php. the server responded with this:

HTTP/1.1 302 Moved Temporarily
Date: Sun, 03 Feb 2013 18:56:09 GMT
Server: Apache
Location: login.php
Vary: Accept-Encoding
Content-Length: 89
Content-Type: text/html

<h1>The flag is JagguisaNAUGHTYbunder</h1>

it’s a redirect back to the login page — but that response contained the flag as well! achievement unlocked.
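for reference, the forged response i injected in Burp looked roughly like this. it is a reconstruction rather than an exact capture; the rewritten status line and the added Location header are the parts that matter:

```
HTTP/1.1 302 Found
Content-Type: text/html
Location: onepage.php
```

the browser sees the 302 status and the Location header, and obediently issues a GET for onepage.php.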

so, what did i learn on this one? first off, it was a reiteration of the lesson that a little research about the protocol at hand (in this case, HTTP headers and redirects) can go a very long way. secondly, i learned about the .phps extension. i had never heard of it before this weekend, but it’s now in my bag of tricks for possible ways to get servers to disclose information about PHP applications.


this past weekend was the Nullcon HackIM CTF; again, i competed with the #misec team. it was a Jeopardy-style CTF, with a wide range of challenges to solve.

Programming 200 was a challenge called “lazy baba”. the challenge description just said “gives answer once in every 4 hours”, and then linked to a page called answer.php. when i clicked on the link to the page, it was blank.

j3remy, one of my teammates, asked if someone could write up a script to grab that page every minute. i wrote the following bit of python, and set it running on a shell server to which i have access:


import time
import urllib2

while True:
    filename = str(time.time())
    response = urllib2.urlopen('')  # the challenge's answer.php URL went here
    html = response.read()
    f = open(filename, 'w')
    f.write(html)
    f.close()
    time.sleep(30)
the script worked very simply: it grabbed the contents of that answer.php page, saved it as a text file with the timestamp as the filename, slept for thirty seconds, and then repeated. even though j3remy suggested every minute, i decided to do it every half a minute just to be on the safe side, since i didn’t figure that it would be a huge deal from either a network resources or disk space perspective.

a few hours later, i went into the directory and ran an ls -l. there were a bunch of files titled with timestamps: the vast majority were one-byte files, but two consecutive ones were 32 bytes long. i catted them, and each of the 32-byte files was the same:


sure enough, that was the flag.

if i had been more on top of things at the time, i could have added some logic to kill the script once it pulled something that looked different from the previous files, or to skip saving the one-byte files entirely. you live, you learn, and you program more efficiently next time, i guess.

however, this quick and dirty little guy did the trick.
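for what it’s worth, the smarter version i had in mind might look something like this. it’s just a sketch in modern python: the challenge URL is omitted as before, and the one-byte heuristic comes straight from the behavior i observed:

```python
import time
import urllib.request

ANSWER_URL = ''  # the real challenge URL went here

def looks_like_answer(body):
    # the empty page came back as a single byte; anything longer
    # (the flag turned out to be 32 bytes) is worth stopping for
    return len(body) > 1

def poll_until_answer(url, interval=30):
    # fetch the page every `interval` seconds until it says something
    while True:
        body = urllib.request.urlopen(url).read()
        if looks_like_answer(body):
            return body
        time.sleep(interval)
```

this way, the script exits as soon as it grabs the answer, instead of littering the directory with one-byte files.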

just one byte…

this is a continuation of my Mini CTF writeups. this time, it’s a reverse engineering challenge, entitled Crack Me!. the file is another zip file, and unzipping it gives you two files: an .exe, and a readme. opening README.txt advises:

Please ensure you have original CD-ROM in drive before using this application.

sure enough, running the .exe file (which runs beautifully under WINE, by the way; i did this entire problem on my Backtrack 5 VM) just throws a text box:

it then quits once you click OK.

this suggests that there’s some kind of code checking for a CD, and that you should be able to get the key if you subvert the CD checking code. i fired up the binary in IDA Pro, found the “Please insert the IRISSCON 2012 CD-ROM.” text in the “strings” window, and hunted through the binary for where that string was referenced in the code. sure enough, there was a subroutine that used that string as an argument in a message box, tossed the message box up, and exited once the message box was cleared. this looks like exactly the code that we want to avoid, and i renamed the subroutine to cderror_exit in IDA Pro just to make sure it was easy to find. i skipped to where it was called in the binary, and sure enough, there’s a jz instruction right before the call to the subroutine:

the subroutine right before it looks for the CD drive and tries to find the IRISSCON CD, and the jz instruction skips over the cderror_exit function if the check for the IRISSCON CD succeeds. so, it looked like it was probably time to bust out the oldest trick in the book: change the jz to a jnz, to reverse the logic on the check. i copied the binary, opened my copy in hexedit, and changed the jz opcode (74) to a jnz opcode (75) — that way, it should throw the error and exit if the CD is attached, and proceed to code that would show the key if the CD is not attached. that’s the only byte i changed; i touched none of the rest of the code. i ran the patched binary, and sure enough:
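as an aside, the same one-byte patch can be scripted rather than done by hand in hexedit. this is only a sketch (the offset below is made up), but the sanity check is the important part: refuse to patch unless the byte at that offset is the opcode you expect:

```python
def patch_byte(data, offset, old, new):
    # refuse to patch unless the expected opcode is actually there
    if data[offset] != old:
        raise ValueError("unexpected byte at offset %#x" % offset)
    return data[:offset] + bytes([new]) + data[offset + 1:]

# flipping jz (0x74) to jnz (0x75) at a hypothetical offset:
# binary = open('crackme.exe', 'rb').read()
# patched = patch_byte(binary, 0x1234, 0x74, 0x75)
# open('crackme_patched.exe', 'wb').write(patched)
```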

it popped up a message box with the key.

this is one of those problems that underscores the necessity of not getting bogged down in ancillary details when wading through a binary. there was a specific objective: crack the software by circumventing the CD check. i could have gotten lost for hours in the CD check or any of a bunch of other features of the software, but the solution was simple and straightforward: find the subroutine that was blocking access to the key, and change the logic of the program to avoid it. in this case, that required changing just that one byte.

the answer is right in front of you!

in furtherance of my goal to get better at CTFs, i’m taking a stab at another one. some of my co-workers told me about Mini CTF, and i’ve finally started attacking them after two weeks of being sick, out-of-town meetings while sick, and generally being too exhausted to think.

i’ve tackled one of them so far, entitled “Crack the Code”. it’s a zip file, and when you unzip it, you get a python script. i’m always happy to get a CTF problem that starts with a python script; since i’m a fairly decent python programmer, it means that i have a snowball’s chance of figuring out what the code is doing.

reading the script, it takes a seven-digit code as an argument, and checks for certain two-digit sequences at the beginning of its SHA-1 hex digest, its SHA-224 hex digest, its SHA-256 hex digest, and its SHA-384 hex digest. if all four of those match the digits prescribed in the validatecode() function, then it uses the code to decrypt a ciphertext, which ostensibly gives the key when decrypted.

fortunately, it’s obvious from the code that the code is the string representation of a seven-digit number. seven-digit numbers sounded like a relatively small keyspace, so i decided that i’d write a little script to brute force the answer: looping through each possible seven-digit code, using the validatecode() function from the code in the problem to test the hashes on each possible code, and then printing out a code that worked. this was the script i wrote:


import hashlib

def validatecode(code):
	sha1 = hashlib.sha1(code)
	sha224 = hashlib.sha224(code)
	sha256 = hashlib.sha256(code)
	sha384 = hashlib.sha384(code)
	if sha1.hexdigest()[0:2] == 'a6' and sha224.hexdigest()[0:2] == '7b' and sha256.hexdigest()[0:2] == '57' and sha384.hexdigest()[0:2] == 'db':
		return True
	return False

if __name__=="__main__":
	digits = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
	for numone in digits:
		for numtwo in digits:
			for numthree in digits:
				for numfour in digits:
					for numfive in digits:
						for numsix in digits:
							for numseven in digits:
								code = numone + numtwo + numthree + numfour + numfive + numsix + numseven
								if validatecode(code):
									print code + " works!"

i wasn’t completely sure how long this was going to take, given that crypto functions in python aren’t the fastest things in the world, but it was worth a shot given the relatively small keyspace. fortunately, after about a minute, a key popped up in my terminal:

root@bt:~/minictf/crackthecode# python 
3495745 works!

and then, putting 3495745 into the original script from the zip file yields:

root@bt:~/minictf/crackthecode# python -p 3495745
IRISSCON 2012 - Crack the code!

Code correct!. Decrypting message...
Hi There!
The KEY is: 44859c3554ee157264297d62d8aeef64685c57549051836001e14b8821ab6a0f !

success! the answer really was right in front of me — given that the majority of my solver script was a function lifted right out of the original problem. most CTF problems aren’t that easy, but if they are, it’s nice to quickly identify that you don’t have to reinvent the wheel.

what are those track names, again?

today on Twitter, @kylemaxwell mentioned that he would not be proposing a talk to the Hack Miami conference because of the titles of the tracks. generally i’m not one to get up in arms about hacker conferences being informal about titles of tracks or talks: there’s a reason i prefer to go to conferences with a more “hacker” bent, as opposed to ones with a more “industry” or “vendor” bent. i’m not that formal a person. however, it turns out even i have my line — and titling the tracks of a conference “Oldf#gs” and “Newf#gs” crosses it. even though i realise it’s a reference to a longstanding 4chan thing, not all references to internet culture are appropriate as track names at a hacker conference.

i fully admit that my reaction is coloured by the fact that i am queer, but this seems unprofessional no matter what one’s sexuality. could you imagine applying for a job, mentioning your Hack Miami speaking gig (or just having the employer find it in a google search), and having it come up that you spoke in the “Newf#gs” track? i’d sure be embarrassed.

in addition to discussing it a bit on Twitter, i drafted an email and sent it out to all of the sponsors of the Hack Miami conference. in addition, i sent a quite similar letter to the conference organizers themselves. the letter i sent to the conference sponsors does, i feel, a good job of explaining my feelings about the track titles as well as the conference itself. the full text of that letter is below:

i noticed that OWASP South Florida is sponsoring the Hack Miami conference this May. i am not sure if you know what they are titling the tracks, but i’m seriously concerned about the names, and wanted to bring them to your attention. The advanced track is entitled “Oldf#gs”, and the novice track is entitled “Newf#gs”.

i’m from the Internet, and i know that this is a reference to a longstanding 4chan meme. However, Hack Miami isn’t a free-for-all, anonymous message board: it’s a technical conference sponsored by respected security companies and organizations. Even though hacker conferences tend to be somewhat informal in certain respects, there is a line that these track names cross. Titling the tracks with references to “fags” will alienate many LGBTQ and allied hackers. Not only will this lead to LGBTQ and allied hackers feeling less comfortable at the conference, but the terminology is likely to make speakers, organizers, and possibly even sponsors of the conference look less professional in the eyes of the community.

i am not demanding that you immediately remove your sponsorship of this conference, and i am not trying to argue that Hack Miami is doing nothing good for the community. Specifically, i applaud the fact that Hack Miami is including both a novice/teaching track and a more advanced track. i have been involved with several conferences that have taken that kind of approach: i am a volunteer for Security B-Sides Chicago, served as a mentor on the “Proving Ground” track at Security B-Sides Las Vegas, and spoke as part of the “Teach Me” track at Derbycon 2.0. i have also advocated to organizers of multiple conferences to consider adding a novice/teaching track in order to include and involve people new to information security. However, this goal of inclusivity in content is severely shadowed by these alienating track titles, and i hope that questions or concerns from conference sponsors such as yourselves may help them think twice about these names.

Thank you for reading this, and for your thought about this issue.

nicolle neulist

update: @hackmiami has mentioned on their twitter feed that the track names are going to be changed, and that it will be up on their website shortly. thank you, Hack Miami, for doing the right thing.

update 2: that was either wishful thinking on my part, or anyone who was against the talk tracks was clearly trolled. The track names are not changing.

Ghost in the Shellcode Teaser: Poetry

today, i spent my afternoon playing the teaser round for the Ghost in the Shellcode CTF with the #misec team. as is appropriate for a “teaser” round, it wasn’t a full weekend-long CTF: it was three problems, with a time limit of eight hours or one team solving all three, whichever came first. i had played the GitS teaser round last year, but was unable to solve any of the problems.

this year, i had a bit more luck. i wasn’t able to solve them all (congratulations, Plaid Parliament of Pwning, the first team who did!), but i did manage to solve one of the problems, entitled “Poetry”, during the time allotted for the competition.

the text of the problem was as follows:


The answer is the
md5 of a famous
cryptographic key

In case you wondered,
it's not DECSS.
Please, continue on.

both the problem statement and the hint are well-formed haiku poems; the hint that DECSS is not the answer was rather helpful, since DECSS was rather famously shared in all sorts of forms, including haiku. strangely enough, i was on the right track right off the bat with this one — so close that when i finally did solve it a few hours later, i was kicking myself. i was googling things like “haiku cryptographic key” and “cryptographic key haiku”, but not getting anything concrete. i couldn’t think of any famous keys that had to do with haiku, or any keys all over the news other than DECSS, so i set the problem aside.

i detoured to working on one of the other problems (“Kerming”, which i have yet to solve) for a while, until a teammate suggested trying the Playstation 3 decryption keys. i didn’t see what that had to do with haiku, but lacking better ideas (and realising that this was a “famous key” i had been trying, and failing, to recall earlier in the morning!), i started md5-ing various permutations of the PS3 key; none of them were accepted by the system as an answer.

i was running out of ideas, but then one of the moderators in the IRC channel posted this to the chat room:

15:51 <@psifertex> Let me just summarize the hints already given so far
first: 1) Nothing to do with DeCSS. 2) Has to do with a particular type of 
poem. 3) That poem is used as a decryption key. Stir in google, shake.

i knew it wasn’t DECSS; this hint about a “particular type of poem” convinced me that the PS3 was not right. however, i then went on a detour about “poem ciphers”, and spent a good half an hour racking my brain trying to find a famous use of “The Life That I Have” as a five-word poem-code key, or otherwise going down the rabbit hole of World War II poem crypto.

another hint in the IRC channel:

16:21 <@psifertex> ok, for those who have been asking me the exact 
formatting of the key that they think is right, here’s how to format it 
(again, this is how the key is actually used): no spaces, no new-lines, 
mostly lower-case, with the last few bits have upper case first 

this led me to believe that “The Life That I Have” would be no help, as it didn’t have a chunk of capital letters near the end. getting anxious and flipping through my open windows, i saw the Ghost in the Shellcode Twitter feed, which i had opened in a window but not looked at in hours. there was a tweet:

There are hints on IRC, if you haven't joined yet. Example: Poetry is 
an infamous decryption key in a particular style of poem. Not DeCSS.

for some reason (even though the IRC clue really should have done that, too…), “infamous decryption key in a particular style of poem” plus the formatting hint gave me a very clear idea of what i was looking for: the key itself was going to be a haiku (hence, going back to the first idea i had in the morning!), and that haiku was going to have a few words at the end that started with capital letters. i googled “haiku decryption key -decss”, and the second result was an order denying a motion to seal in a recent copyright infringement case involving Apple. sure enough, on page 3, there’s this sentence:

Apple seeks to seal information relating to its decryption key haiku, 
“our hard work / by these words guarded / please don’t steal (c) Apple 
Computer Inc” (see, e.g., Apple Motion Tab 2 at 5).

this isn’t actually a well-formed haiku: whoever wrote this decryption key missed the five-seven-five syllabic pattern, if they did in fact intend it to be a haiku. however, it’s referred to as a haiku in the court case, and it has a few capitalized words at the end.

i went back to my python shell, reformatted the string, and went to town:

>>> import hashlib
>>> apple = "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"
>>> m = hashlib.md5()
>>> m.update(apple)
>>> m.hexdigest()
'74b8fa920f8c4eaacf65e46afbe840de'

sure enough, 74b8fa920f8c4eaacf65e46afbe840de was the flag.

reading material?

maybe blogs have been more on my mind lately because i’ve decided to blog again. maybe it’s a dearth of reading material, or my recent dearth of new ideas for study or research. maybe it’s all of the above.

however, whatever the reason, i realised today that i was completely out of touch with other security blogs out there, and really needed to find a list of blogs to read regularly. i do frequently read things that i see linked off of twitter, but there isn’t a specific blog that i read on a regular basis.

i have put calls out on twitter as well as facebook asking about blogs people read, just to get some starting points for building a list of good blogs to read. i’m especially interested in blogs that cover specifically technical material: tool usage, code writing, exploit development, and the like. however, that’s not a hard line — i’m interested in hearing about any kinds of blogs that provide interesting information or viewpoints about security.

if you write a blog, or read a blog that you think i’d like, please leave a comment and let me know! i’ll definitely take a look. in about a week (once i’ve had a chance to go through them, and gauge a reasonable set of blogs that i can actually follow!), i’m going to post a list of ones that particularly tickle my fancy.

head, meet desk. twice.

it’s amazing how you can space out on the most basic things while trying to solve a CTF problem.

last weekend, i was trying to help the #misec team with a few of the problems from the 29c3 Capture the Flag. the two i spent the most time on were a reverse engineering problem and an exploitation problem.

the one i’m going to talk about here is the exploitation problem, the Minesweeper game. there were two really idiotic mistakes that i made on it, and i would have had quite a good chance of solving it had i not made those two blunders. i won’t go through a full write-up of the problem; there are multiple writeups on CTFTime from teams who actually managed to crack it during the time allotted. however, i shall touch on the lessons i learned.

we had a lot of information available to us for the exploitation problem: specifically, the Python source code for the Minesweeper game, as well as host and port information for the running instance on the CTF server. given my familiarity with the Python language, i grabbed the source code, read through it to get an idea of how it worked, and played around with it. i fired up telnet, connected to the game, and started interacting with it. the problem was, i kept getting syntax errors no matter what i sent. i know i’m rubbish with regular expressions, but i was sure i had the commands for saving, loading, flagging, and opening spaces absolutely correct — not only did i spend a bunch of quality time with the python regex documentation, but i also compiled the regexes independently in my python shell and verified that the commands i was sending matched the regexes in the source. still, no dice. i was flabbergasted — here was this program that i was supposed to exploit, this program i thought i understood, and i couldn’t even get it to work the way it was supposed to!

turns out, the problem wasn’t in my understanding of the program. the problem was the way i was talking to it. once the CTF closed and i started reading write-ups of the problem, every single team talked about using netcat. one of them even showed a screenshot of their interactions with the game, and their commands looked exactly like the ones i sent. sure enough, i tried it with netcat, and it worked. i fired up the old wireshark, sniffed my traffic — and found out that netcat sends a \x0a as a line terminator, whereas telnet sends \x0d\x0a as the line terminator. since the script was only expecting \n as a line terminator and not \r\n, this was my fatal (and extremely silly) error.
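the failure mode is easy to reproduce. the sketch below uses a made-up command pattern rather than the game’s actual regexes, but it shows how a leftover \r defeats an anchored match:

```python
import re

# hypothetical command pattern, in the spirit of the game's parser
pattern = re.compile(r'^open (\d+) (\d+)$')

netcat_line = 'open 3 4\n'    # netcat terminates lines with \n
telnet_line = 'open 3 4\r\n'  # telnet terminates lines with \r\n

# a server that strips only the trailing \n leaves telnet's \r behind
print(pattern.match(netcat_line.rstrip('\n')))  # match object
print(pattern.match(telnet_line.rstrip('\n')))  # None: the \r breaks it
```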

even while the competition was going on, even while i could use the minesweeper game to actually play a game of minesweeper, i assumed the vulnerability was going to be related to the load function. it allowed you to interact with files on the machine, so i assumed that it could be somehow exploited to load arbitrary files, including flag.txt. however, i was too hung up on that idea to go much further, and i envisioned the vulnerability as something where i could get the code to read the flag in as a saved file.

that wasn’t quite right. it’s not so terrible, of course, that my idea wasn’t correct right off the bat. what’s shameful about all this is that i didn’t think particularly deeply about the module the program was using to save and load games: the pickle module. i hadn’t thought twice about pickle in years, not since i was first learning python. i knew it was a module used for serializing and recovering python data, but it never really came up in code i was writing, so i never dug through the documentation. however…the pickle module documentation is emblazoned with a warning that it is not intended to be secure against maliciously constructed data, and a quick google search for “python pickle vulnerable” turns up several nice descriptions of how to make pickle execute arbitrary code.
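for anyone curious, the trick those descriptions boil down to is pickle’s __reduce__ hook: an object can tell pickle to call an arbitrary callable at load time. the sketch below swaps in a harmless function where a real payload would use something like os.system, just to show the mechanism:

```python
import pickle

def run(cmd):
    # harmless stand-in; a real payload would return (os.system, (cmd,))
    return 'ran ' + cmd

class Payload(object):
    def __reduce__(self):
        # pickle invokes this (callable, args) pair when the data is *loaded*
        return (run, ('id',))

blob = pickle.dumps(Payload())
print(pickle.loads(blob))  # the callable runs during unpickling
```

so any program that unpickles attacker-supplied data, like a save-game file, is handing over code execution.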

i spent some time this afternoon playing around with that — refreshing my memory on how pickle serializes and reconstructs data, and getting pickle to execute arbitrary commands on my linux box. it’s good knowledge to have now, but it was really not particularly difficult to understand, or to apply to a situation where a program processed arbitrary pickles. if i had thought to google the module that the load function was possibly using, i would have had some concrete advice on how to own that Minesweeper game.

lessons learned: one, try multiple ways to connect to a service. if code is not behaving the way i expect it to, and i’m convinced that i understand the source code for the service with which i’m interacting, maybe the problem lies in the client i am using. two, when given source code for a problem, do some research on the libraries and functions it uses, because there may be related vulnerabilities that provide the trick to the problem.