Pen-testing using BackTrack and De-ICE under VMware Fusion

Haven’t had an entry in quite a while, but tonight’s work on setting up a little pen-test lab prompted me to document the process before I forget what I did 😉

So – objective: set up a virtual lab for pen-testing, using the BackTrack LiveCD as the repository of penetration tools and the De-ICE LiveCDs as targets.

Problem: VMware Fusion uses a different network for the NAT option (vmnet8) than what the De-ICE LiveCDs require with their pre-configured IPs (and NO, you cannot change the IPs on the De-ICE LiveCDs at this stage!)

Solution: under the host terminal:

$ ifconfig

vmnet8: … inet 192.168.109.1 …

$ cd /Library/Application\ Support/VMware\ Fusion

$ grep -R 192.168.109

locations:…

vmnet8/dhcpd.conf:…

vmnet8/nat.conf:…

Replace all 192.168.109… entries in those files (locations, dhcpd.conf and nat.conf) with the network/IPs the De-ICE LiveCDs require (192.168.1.0/24), set up the BackTrack and De-ICE LiveCD VMs with NAT networking and … good luck with the real work of pen-testing!
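The subnet swap itself can be scripted with sed; here is a minimal sketch, assuming the file locations the grep above turned up, demonstrated on a scratch copy (the sample dhcpd.conf contents below are made up):

```shell
# Demonstrate the 192.168.109.x -> 192.168.1.x rewrite on a scratch copy.
# On the real system the files live under
# "/Library/Application Support/VMware Fusion/vmnet8/" and need sudo.
tmp=$(mktemp -d)
cat > "$tmp/dhcpd.conf" <<'EOF'
subnet 192.168.109.0 netmask 255.255.255.0 {
    range 192.168.109.128 192.168.109.254;
    option routers 192.168.109.2;
}
EOF

# Rewrite every 192.168.109.x address to the network De-ICE expects.
sed 's/192\.168\.109\./192.168.1./g' "$tmp/dhcpd.conf" > "$tmp/dhcpd.conf.new" \
  && mv "$tmp/dhcpd.conf.new" "$tmp/dhcpd.conf"

# The file should now reference only 192.168.1.x addresses.
grep '192.168.1' "$tmp/dhcpd.conf"
```

Run the same sed over locations, dhcpd.conf and nat.conf; Fusion will likely need a restart to pick the change up.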

A brief update

Trying to build a little site as a repository of the consulting services offered by Network Fortius LLC. Still undecided on which hosting company to use, so things may change. In any case – feel free to visit and, even more so, to contact us if anything looks relevant to your [company’s] present infrastructure needs:

– Cost-effective solutions for SMBs, or high-end architectures for enterprise data centers and global networks and security

– Applications performance audit and optimization, as correlated to the underlying systems on which they run: network (local, global or Internet-based), servers, storage, etc.

securely saving KMail critical info to a USB key

This is what I have done on my Mac OS X system for this:

1. created a directory for what I deem to be critical files (emails and configuration) on my system:

$ mkdir -p /<path-to-mail-backup-dir>/mail-backup

2. created a script that updates the backup directory from the main ~/.kde tree, tars and encrypts the backup (using openssl and a password file passed to the encryption process), then moves the encrypted file to the mounted USB volume (generically named “NO NAME” in the example). The script file is:


#!/bin/sh

cd ~/.kde/share/apps/
rsync -avz --delete ./kmail /<path-to-mail-backup-dir>/mail-backup/
rsync -avz --delete ./kabc /<path-to-mail-backup-dir>/mail-backup/
cd ../config/
rsync -avz --delete ./kmailrc* /<path-to-mail-backup-dir>/mail-backup/
rsync -avz --delete ./emailidenti* /<path-to-mail-backup-dir>/mail-backup/
cd /<path-to-mail-backup-dir>
tar -cf mail-backup.tar mail-backup/
openssl des3 -salt -in mail-backup.tar -out mail-backup.tar.des3 -pass file:/<path-to-password-file>/password.txt
mv -f mail-backup.tar.des3 /Volumes/NO\ NAME/
rm -f mail-backup.tar*
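Restoring on another machine is the same openssl step with -d; a round-trip sketch on stand-in files (filenames as in the script above; the passphrase and data are made up):

```shell
# Round-trip check: encrypt the way the backup script does, then decrypt.
echo "some-strong-passphrase" > password.txt
echo "pretend this is the mail tarball" > mail-backup.tar   # stand-in data

openssl des3 -salt -in mail-backup.tar -out mail-backup.tar.des3 -pass file:password.txt
openssl des3 -d -in mail-backup.tar.des3 -out restored.tar -pass file:password.txt

# The decrypted copy must match the original byte for byte.
cmp mail-backup.tar restored.tar && echo "restore OK"
```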

cryptolinguistics – well said!

From Matt Blaze’s blog:

We often say that researchers break poor security systems and that feats of cryptanalysis involve cracking codes. As natural and dramatic as this shorthand may be, it propagates a subtle and insidious fallacy that confuses discovery with causation. Unsound security systems are “broken” from the start, whether we happen to know about it yet or not. But we talk (and write) as if the people who investigate and warn us of flaws are responsible for having put them there in the first place.

Words matter, and I think this sloppy language has had a small, but very real, corrosive effect on progress in the field. It implicitly taints even the most mainstream security research with a vaguely disreputable, suspect tinge. How to best disclose newly found vulnerabilities raises enough difficult questions by itself; let’s try to avoid phrasing that inadvertently blames the messenger before we even learn the message.

useful analysis tools – usage reminder

How to obtain multiple files during a capture:

$ tethereal -i <interface> -a filesize:3000 -b 14 -s 96 -w <capture_file>
(a ring buffer of 14 files of ~3 MB each, captured with a 96-byte snaplen)

NOTE: tcpdump also captures a truncated snaplen by default, and it does support ring buffers, via -C <file_size> and -W <file_count>.

******

If multiple files matching the shell glob FOOBAR are to be merged:

$ mergecap -w bigfile.cap `ls FOOBAR`

******

Determine the type and length of capture:

$ file <capture_file>
capture_file: tcpdump capture file (big-endian) - version 2.4 (Ethernet, capture length 96)

******

Analysis of capture file by reading the file in ntop, for statistics

# ntop -f <dump_file> -m <subnet_considered_local>

# ntop -c -m 10.10.10.0/24 -n -q -O <ntop-suspicious> -r 30 -u root

where: subnet considered local was 10.10.10.0

then connect to http://localhost:3000

******

Analysis of capture file through snort (create config file, first!), with the following command:

$ sudo snort -r <capture_file> -c <config_file> -X -d -A full

******

Determine the number of connections in a capture file:

./tcptrace -t -n <capture_file>

******

Stats w/tethereal:

# tethereal -i eth2 -z "io,stat,60,tcp&&tcp.port==21&&tcp.flags==0x02,\
COUNT(tcp.flags)tcp.flags&&tcp.port==21&&tcp.flags==0x02,\
AVG(tcp.flags)tcp.flags&&tcp.port==21&&tcp.flags==0x02,\
MIN(tcp.flags)tcp.flags&&tcp.port==21&&tcp.flags==0x02,\
MAX(tcp.flags)tcp.flags&&tcp.port==21&&tcp.flags==0x02"

******

Time:

# tcpdump -r <file> ==> time LOCAL to where the trace is being read
# tcpdump -r <file> -tt ==> UNIX epoch time
# date -r <result_of_above>
# tcpdump -r <file> -tttt ==> UTC time

So – we could potentially identify the location of the systems!

******

Example of usage – tcpflow accepts “expression” BPFs

# tcpflow -r <file_name> -c "(src port 21 and host 172.16.4.4) or (src port 20 and host 172.16.4.4)" > flowfile.txt

NOTE: -c above makes tcpflow print everything to the console, so the redirect captures the traffic in one file (if omitted, tcpflow creates separate files per flow direction: one from source, one from destination)

******

Determine OS:

# p0f -s <capture_file> -x 'expression' (usually 'host <IP_address>')

NOTE: -x dumps the whole packet contents

******
Consolidate src-dst – see Honeynet challenge 23 – very, very useful!

$ tethereal -nr <capture_file> | ./sumsrcdst > file_with_conversations

******

TTL by IP conversation

$ tcpdump -vvvr <capture_file> |awk '{print $2, $5, $6, $15, $17}' |sed 's/,//;s/://;s/\./ /4' |sed 's/\./ /7' |grep IP |awk '{print $3, $4, $6}' |sort |uniq > ttl-by-ip-conv.txt

******

ngrep (-q) -> string searches inside network captures:

$ ngrep -I <capture_file> -q -x 'passwd' 'tcp port 21' -> reveals attempts to retrieve or process the passwd file via ftp

also: $ ngrep -q -I <capture_file> passwd port 21

$ ngrep -I 2003.12.15.cap -q -x 'shadow' 'tcp port 21' -> same, for shadow file access attempts

******

Time-related splitting of files:

# tethereal -r <capture_file> -w <new_file> -R '(frame.time >= "Jan 8, 2004 22:00:00.00") && (frame.time <= "Jan 8, 2004 23:00:00.00")'

******

Validate distance between networks as previously obtained w/ntop:

$ sudo p0f -l -s <capture_file> |sed 's/>//g' |awk -F "-" '{print $1,$3}' |grep distance |sed 's/:/ /g' |awk '{print $1"<-->"$3"=="$6}' |sed 's/,//' |sort |uniq > distances.txt

******

TCP conversations, sorted and counted:

$ tcptrace -n -t <capture_file> |sed s/":"/" "/g |awk '{print $2 $4 $5}' |sort |uniq -c |sort --

******

Finding all hosts having been contacted by 10.10.10.195 on the SSH port:

$ tcpdump -r <capture_file> -X -s 1514 'host 10.10.10.195 and tcp port 22' |grep ssh |awk '{print $3;}' |awk -F. '{print $1"."$2"."$3"."$4;}' |sort |uniq

******

MAC address connections:

$ tcpdump -neqr <capture_file> |awk '{print $2" "$3" "$4;}' |sed 's/,//g'

******

MAC and IP connections in one line:

$ tcpdump -neqr <capture_file> |awk '{if ($5=="IPv4,") print $2" "$3" "$4" "$9" "$10" "$11;}'

******

Script to determine the MAC addresses associated with IPs:

$ tcpdump -nner <capture_file> | awk '{print $2 " " $11}' | awk -F. '{if($1!~/x+|w+|r+/) print $1 "." $2 "." $3 "." $4}' |sort -u > mac-and-ip-address-pairs.txt

******

Creating individual capture files by applying a read filter before writing the output file:

$ tethereal -r <capture_file> -V -R <read-filter> -w <output_file>
$ tethereal -r <capture_file> -V -R 'tcp.port==20 or tcp.port==21' -w <ftp-sessions>

******

Another way to reveal conversations:

$ ipsumdump -psSdD -r <capture_file> |sort +1n -n |uniq

******

MAC-to-vendor from oui.txt

$ awk '$1 ~ /^[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]/ {print $3}' oui.txt
$ awk '$1 ~ /^[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]/ {print $1, $3}' oui.txt |sed 's/-/:/g' > mac-to-vendor.txt
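A quick sanity check of that regex on a fabricated oui.txt line (real IEEE oui.txt entries use uppercase hex, e.g. 00-0C-29, so a real file may need lowercasing first; the sample below is already lowercase):

```shell
# Fabricated, lowercased oui.txt-style line for the regex to match.
printf '00-0c-29   (hex)\t\tVMware, Inc.\n' > oui.txt

awk '$1 ~ /^[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]/ {print $1, $3}' oui.txt \
  | sed 's/-/:/g'
# prints: 00:0c:29 VMware,
```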

******

Replace first 3 bytes of MAC address from source-mac.txt with vendor name from above:

$ awk 'FNR==NR{a[$1]=$2;next} {b=$0;k=substr($1,1,8);if(k in a)b=a[k]substr($0,9);print b}' mac-to-vendor.txt source-mac.txt > mac-to-vendor-to-ip.txt
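The prefix join can be verified on toy inputs (MAC and IP values made up): substr($1,1,8) takes the first three MAC bytes, and a matching prefix gets swapped for the vendor name.

```shell
# Toy vendor map and source-MAC list for the join.
printf '00:0c:29 VMware\n' > mac-to-vendor.txt
printf '00:0c:29:ab:cd:ef 10.10.10.5\n' > source-mac.txt

awk 'FNR==NR{a[$1]=$2;next} {b=$0;k=substr($1,1,8);if(k in a)b=a[k]substr($0,9);print b}' \
  mac-to-vendor.txt source-mac.txt
# prints: VMware:ab:cd:ef 10.10.10.5
```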

******

Analysis of flows

$ tcpick -r <capture_file> -n -C -yP "host 10.10.10.195" |egrep -v 'FIN|SYN|TIME|CLOSED'

******

========= congraph shell script ==========

# tethereal creates a list of packets
# cut pulls off the two addresses
# sed removes the arrow to protect it from later munges
# sort puts duplicates next to each other
# uniq removes adjacent duplicates

tethereal -r $ipf -N mnt | awk '$4=="->"{print $3,"###",$5;}' | sort |uniq > raw

# Create the connections list:
# sed munges the names of the nodes
# sed prefixes node names that start with a digit

sed 's/[-\.:()]/_/g' < raw | sed 's/\(^[0-9_][0-9_]*\)/IP\1/g;s/ \([0-9_][0-9_]*\)/ IP\1/g' > cons

# Create the nodes list:
# sed puts all node names on separate lines
# sort | uniq removes duplicates
# sed duplicates the names on the same line, with one inside a label attribute
# sed munges the names in an identical manner to the connection list munge above, but only in the first name
# including prefixing names that start with a digit

sed 's/ ### /\n/' < raw | sort | uniq | sed 's/\(.\+\)/\1 [label=\"\1\"]/' | sed ':loop;s/[-\.:()]\(.* \[lab\)/_\1/;t loop;s/\(^[0-9]\)/IP\1/g' > labels

******

Format tcpdump output, via tcptrace and using xplot:

$ sudo tcpdump -s 100 -w <output_file.cap> host <hostname_or_IP>
$ tcptrace -Sl <output_file.cap>
$ xplot a2b_tsg.xpl

******
Other tools: netwox/netwag; pcapmerge; argus/ra/racount/rasort; sguil (w/snort); ACID or BASE (w/snort) …

tunneling over SSH

Generic:

$ ssh -N -f -L <local_port>:<end_server>:<end_port> user@ssh_intermediary_server

NOTE: if the remote username matches the local one, the user@… part can be omitted (and with key-based auth. there is no password prompt)

Example:

$ ssh -f -N -L 8025:smtp.comcast.net:25 -L 8110:mail.comcast.net:110 my_home_machine

allows me to use the email client on a laptop, pointing to localhost:8025 for SMTP services, and localhost:8110 for POP3 services associated with my Comcast account, w/out traversing “foreign” networks with clear text credentials.

If moving between places, I would need to stop and restart the process. This could be as simple as:

$ ps aux |grep ssh |grep -v grep |awk '{print $2}' |xargs kill -9
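Beware that this kills every ssh process, not just the tunnels. The field extraction can be checked against a canned ps line (the PID and command below are made up):

```shell
# Feed a fake "ps aux" line through the same grep/awk stages
# to confirm column 2 (the PID) is what gets handed to kill.
printf 'me  4242  0.0  0.1 ssh -f -N -L 8025:smtp.comcast.net:25 home\n' \
  | grep ssh | grep -v grep | awk '{print $2}'
# prints: 4242
```

A narrower alternative would be to match only the tunnel command line, e.g. pkill -f 'ssh -f -N'.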

SSH with keys

Execute on local host, under user’s pwd:

$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ cd ~/.ssh
$ ssh-keygen -t dsa

Copy the public key to the remote host

$ scp -p id_dsa.pub remoteuser@remotehost:
Password: *********

Log into remote host and install public key

$ ssh remoteuser@remotehost
Password: ********

remotehost$ mkdir -p ~/.ssh
remotehost$ chmod 700 ~/.ssh
remotehost$ cat id_dsa.pub >> ~/.ssh/authorized_keys
remotehost$ chmod 600 ~/.ssh/authorized_keys
remotehost$ mv id_dsa.pub ~/.ssh
remotehost$ logout