useful analysis tools – usage reminder

How to obtain multiple files during a capture:

$ tethereal -i <interface> -a filesize:3000 -b 14 -s 96 -w <capture_file>
(a ring buffer of 14 files, roughly 3 MB each, with a 96-byte snaplen)

NOTE: tcpdump also uses a small default snaplen (classically 68 bytes; 96 on some platforms), and recent versions do support file rotation via -C (max file size) and -W (file count).
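For reference, a hedged sketch of the equivalent rotation with a newer tcpdump (interface and file names are illustrative only), plus the arithmetic for the tethereal ring's worst-case disk use:

```shell
# Sketch only: modern tcpdump rotates captures with -C (max file size,
# in millions of bytes) and -W (number of files in the ring).
#
#   tcpdump -i eth0 -s 96 -C 3 -W 14 -w capture.cap
#
# Worst-case disk usage of the tethereal ring above: 14 files x 3000 kB
echo "$((14 * 3000)) kB"
```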


If multiple files matching the shell glob FOOBAR are to be merged:

$ mergecap -w bigfile.cap `ls FOOBAR`
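The backticked `ls FOOBAR` is just shell expansion; letting the shell expand a glob directly does the same. A small demo of the expansion on fabricated file names (mergecap itself is only sketched in a comment, since the names are made up):

```shell
# Fabricated capture fragments in a scratch directory
dir=$(mktemp -d)
cd "$dir"
touch part1.cap part2.cap part3.cap
# mergecap -w bigfile.cap part*.cap   # the real merge step
# The glob expands to the same list `ls part*.cap` would print:
printf '%s\n' part*.cap | wc -l
```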


Determine the type and length of capture:

$ file <capture_file>
capture_file: tcpdump capture file (big-endian) - version 2.4 (Ethernet, capture length 96)


Analysis of a capture file by reading it into ntop, for statistics:

# ntop -f <dump_file> -m <subnet_considered_local>

# ntop -c -m -n -q -O <ntop-suspicious> -r 30 -u root

where <subnet_considered_local> is the network treated as local (e.g. in CIDR notation)

then connect to http://localhost:3000


Analysis of capture file through snort (create config file, first!), with the following command:

$ sudo snort -r <capture_file> -c <config_file> -X -d -A full


Determine the number of connections in a capture file:

./tcptrace -t -n <capture_file>


Stats w/tethereal:

# tethereal -i eth2 -z "io,stat,60,tcp&&tcp.port==21&&tcp.flags==0x02,\



# tcpdump -r <file> ==> timestamps in the LOCAL timezone of the machine reading the trace
# tcpdump -r <file> -tt ==> UNIX epoch time
# date -r <result_of_above>
# tcpdump -r <file> -tttt ==> UTC time

So – we could potentially identify the location of the systems!
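Note that `date -r <seconds>` is BSD syntax; on a GNU system `date -r` reads a file's mtime instead, and the epoch-to-UTC step is done with `-d @<epoch>`. A quick illustration (the epoch value is chosen arbitrarily; it happens to be Jan 8, 2004 22:00:00 UTC):

```shell
# GNU date: convert a tcpdump -tt epoch timestamp to UTC
date -u -d @1073599200 +'%Y-%m-%d %H:%M:%S UTC'
```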


Example of usage – tcpflow accepts "expression" BPFs

# tcpflow -r <file_name> -c "(src port 21 and host <IP_address>) or (src port 20 and host <IP_address>)" > flowfile.txt

NOTE: -c above makes tcpflow print everything to stdout, so the redirect collects the traffic in one file; if omitted, tcpflow writes each flow to files of its own, one per direction (one from source, one from destination)


Determine OS:

# p0f -s <capture_file> -x "expression" (usually "host <IP_address>")

NOTE: -x dumps the whole packet contents

Consolidate src-dst – see Honeynet challenge 23 – very, very useful!

$ tethereal -nr <capture_file> | ./sumsrcdst > file_with_conversations
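sumsrcdst is a small helper script from the Honeynet challenge; assuming its job is to count packets per source-destination pair, an awk one-liner can stand in for it on tethereal's one-line-per-packet output. The sample lines below are fabricated:

```shell
# Fabricated tethereal-style summary lines (frame no., time, src, ->, dst, proto ...)
cat <<'EOF' > /tmp/pkts.txt
1 0.000000 10.0.0.1 -> 10.0.0.2 TCP 1025 > 21 [SYN]
2 0.000400 10.0.0.1 -> 10.0.0.2 TCP 1025 > 21 [ACK]
3 0.000900 10.0.0.2 -> 10.0.0.1 TCP 21 > 1025 [SYN, ACK]
EOF
# Count packets per src -> dst pair, busiest pair first
awk '$4 == "->" { n[$3 " -> " $5]++ }
     END { for (k in n) print n[k], k }' /tmp/pkts.txt | sort -rn
```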


TTL by IP conversation

$ tcpdump -vvvr <capture_file> |awk '{print $2, $5, $6, $15, $17}' |sed 's/,//;s/://;s/\./ /4' |sed 's/\./ /7' |grep IP |awk '{print $3, $4, $6}' |sort |uniq > ttl-by-ip-conv.txt


ngrep (-q) -> string searches inside network captures:

$ ngrep -I <capture_file> -q -x 'passwd' 'tcp port 21' -> reveals the attempts for passwd file retrieval or processing via ftp

also: $ ngrep -q -I <capture_file> passwd port 21

$ ngrep -I 2003.12.15.cap -q -x 'shadow' 'tcp port 21' -> same with shadow file access attempts


Time-related splitting of files:

# tethereal -r <capture_file> -w <new_file> -R '(frame.time >= "Jan 8, 2004 22:00:00.00") && (frame.time <= "Jan 8, 2004 23:00:00.00")'


Validate distance between networks as previously obtained w/ntop:

$ sudo p0f -l -s <capture_file> |sed 's/>//g' |awk -F "-" '{print $1,$3}' |grep distance |sed 's/:/ /g' |awk '{print $1"<->"$3"=="$6}' |sed 's/,//' |sort |uniq > distances.txt


TCP conversations, sorted and counted:

$ tcptrace -n -t <capture_file> |sed s/":"/" "/g |awk '{print $2 $4 $5}' |sort |uniq -c |sort --


Finding all hosts having been contacted (by a given host) on the SSH port:

$ tcpdump -r <capture_file> -X -s 1514 'host and tcp port 22' |grep ssh |awk '{print $3;}' |awk -F. '{print $1"."$2"."$3"."$4;}' |sort |uniq
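The second awk is the port-stripping idiom: tcpdump prints addresses as "IP.port", and rejoining the first four dot-separated fields drops the trailing port. A one-line demo on a made-up address:

```shell
# "192.168.1.10.22" is address 192.168.1.10, port 22 (sample value)
echo '192.168.1.10.22' | awk -F. '{print $1"."$2"."$3"."$4;}'
```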


MAC address connections:

$ tcpdump -neqr <capture_file> |awk '{print $2" "$3" "$4;}' |sed 's/,//g'
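Here fields 2-4 of a tcpdump -neq line are "src-MAC > dst-MAC," and the sed strips the trailing comma. A demo on a fabricated, approximate tcpdump line:

```shell
# Approximate tcpdump -neq output line (fabricated)
echo '22:01:07.120000 00:11:22:33:44:55 > 66:77:88:99:aa:bb, IPv4, length 60' \
  | awk '{print $2" "$3" "$4;}' | sed 's/,//g'
```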


MAC and IP connections in one line:

$ tcpdump -neqr <capture_file> |awk '{if ($5=="IPv4,") print $2" "$3" "$4" "$9" "$10" "$11;}'


Script to determine the MAC addresses associated with IPs:

$ tcpdump -nner <capture_file> | awk '{print $2 " " $11}' | awk -F. '{if($1!~/x+|w+|r+/) print $1 "." $2 "." $3 "." $4}' |sort -u > mac-and-ip-address-pairs.txt


Creating individual capture files by applying a read filter before writing the output file:

$ tethereal -r <capture-file> -V -R <read-filter> -w <output-file>
$ tethereal -r <capture_file> -V -R 'tcp.port==20 or tcp.port==21' -w <ftp-sessions>


Another way to reveal conversations:

$ ipsumdump -psSdD -r <capture_file> |sort +1n -n |uniq

NOTE: +1n is obsolete sort syntax; with a modern sort use -k2n instead


MAC-to-vendor from oui.txt

$ awk '$1 ~ /^[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]/ {print $3}' oui.txt
$ awk '$1 ~ /^[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]\-[0-9a-f][0-9a-f]/ {print $1, $3}' oui.txt |sed 's/-/:/g' > mac-to-vendor.txt
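A demo of the second one-liner on two fabricated oui.txt-style lines (note the real IEEE oui.txt uses uppercase hex, which would need the character class extended to [0-9A-Fa-f]; the backslashes before the dashes are also unnecessary and dropped here):

```shell
# Fabricated oui.txt-style entries, lowercased to match the regex
cat <<'EOF' > /tmp/oui.txt
00-00-0c   (hex)   CISCO SYSTEMS, INC.
00-50-56   (hex)   VMWARE, INC.
EOF
# Keep prefix and first vendor word, then turn dashes into colons
awk '$1 ~ /^[0-9a-f][0-9a-f]-[0-9a-f][0-9a-f]-[0-9a-f][0-9a-f]/ {print $1, $3}' /tmp/oui.txt \
  | sed 's/-/:/g'
```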


Replace first 3 bytes of MAC address from source-mac.txt with vendor name from above:

$ awk 'FNR==NR{a[$1]=$2;next} {b=$0;k=substr($1,1,8);if(k in a)b=a[k]substr($0,9);print b}' mac-to-vendor.txt source-mac.txt > mac-to-vendor-to-ip.txt
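How it works: the first file fills array a[prefix]=vendor; for each line of the second file, if the first 8 characters of the MAC (xx:xx:xx) are a known prefix, the vendor name is substituted in. A demo on fabricated inputs:

```shell
# Fabricated vendor map and "MAC IP" pairs file
cat <<'EOF' > /tmp/mac-to-vendor.txt
00:00:0c Cisco
EOF
cat <<'EOF' > /tmp/source-mac.txt
00:00:0c:aa:bb:cc 10.0.0.1
00:11:22:aa:bb:cc 10.0.0.2
EOF
# Known prefixes get replaced by the vendor name; unknown MACs pass through
awk 'FNR==NR{a[$1]=$2;next} {b=$0;k=substr($1,1,8);if(k in a)b=a[k]substr($0,9);print b}' \
  /tmp/mac-to-vendor.txt /tmp/source-mac.txt
```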


Analysis of flows

$ tcpick -r <capture_file> -n -C -yP "host <IP_address>" |egrep -v 'FIN|SYN|TIME|CLOSED'


========= congraph shell script ==========

# tethereal creates a list of packets
# cut pulls off the two addresses
# sed removes the arrow to protect it from later munges
# sort puts duplicates next to each other
# uniq removes adjacent duplicates

tethereal -r $ipf -N mnt | awk '$4=="->"{print $3,"###",$5;}' | sort |uniq > raw

# Create the connections list:
# sed munges the names of the nodes
# sed prefixes node names that start with a digit

sed 's/[-\.:()]/_/g' < raw | sed 's/\(^[0-9_][0-9_]*\)/IP\1/g;s/ \([0-9_][0-9_]*\)/IP\1/g' > cons

# Create the nodes list:
# sed puts all node names on separate lines
# sort | uniq removes duplicates
# sed duplicates the names on the same line, with one inside a label attribute
# sed munges the names in an identical manner to the connection list munge above, but only in the first name
# including prefixing names that start with a digit

sed 's/ ### /\n/' < raw | sort | uniq | sed 's/\(.\+\)/\1 [label=\"\1\"]/' | sed ':loop;s/[-\.:()]\(.* \[lab\)/_\1/;t loop;s/\(^[0-9]\)/IP\1/g' > labels
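The node-munging idea can be sketched in isolation on one fabricated "raw" line: special characters become underscores, and names that would start with a digit get an IP prefix so graphviz accepts them as identifiers. The sed calls below are a simplified equivalent, not the script's exact commands:

```shell
# Fabricated raw line: src ### dst
echo '10.0.0.1 ### mail.example.com' \
  | sed 's/[-.:()]/_/g' \
  | sed 's/^\([0-9_][0-9_]*\)/IP\1/'
```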


Format tcpdump output, via tcptrace and using xplot:

$ sudo tcpdump -s 100 -w <output_file.cap> host <hostname_or_IP>
$ tcptrace -Sl <output_file.cap>
$ xplot a2b_tsg.xpl

Other tools: netwox/netwag; pcapmerge; argus/ra/racount/rasort; sguil (w/snort); ACID or BASE (w/snort) …

