Recon -> Identification -> Reporting / exploitation
Examples provided later in the presentation
To allow easy extension and iterative build-up of your automation, use either Python or shell scripting (for the shell of your choice).
JSON is made to be read by computers, and it's your best ally. Most tools support it in one way or another. In general you'll be dealing with either newline-separated text files or JSON.
Many stages in the automation pipeline can branch into multiple subtasks, so it's often good to store results from previous stages in a temporary file. This often helps you debug possible issues as well.
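As a minimal sketch of the idea (the file names here are illustrative), a discovery stage can write its JSON results to a temporary file, from which jq extracts the URLs for the next stages:

#!/bin/bash
# Stage 1: content discovery, results stored as JSON for later stages
ffuf -w wordlist.txt -u "https://target.tld/FUZZ" -o stage1.json -of json
# Extract the discovered URLs for the next stages (and for debugging)
jq -r '.results[].url' stage1.json > stage1_urls.txt
# Each subtask can now read stage1_urls.txt instead of re-running stage 1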
Try to avoid creating unnecessary traffic, and especially straying out of scope, which can happen quite easily. Make sure to filter out unwanted entries at an early stage of your automation workflow. This will make your automation faster, and will cause considerably fewer legal issues down the road ;)
An example of a simple Python script that reads lines from stdin (intended to be used within a pipe) and outputs only http(s) URLs within scope can be found here.
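A minimal sketch of such a script (the scope list and the subdomain matching rule are assumptions for illustration):

#!/usr/bin/env python3
import sys
from urllib.parse import urlparse

# Hypothetical scope: hostnames (and their subdomains) you are allowed to touch
SCOPE = {"target.tld"}

def in_scope(url):
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname or ""
    # Accept exact matches and subdomains of scoped hosts
    return any(host == s or host.endswith("." + s) for s in SCOPE)

for line in sys.stdin:
    url = line.strip()
    if url and in_scope(url):
        print(url)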
Everything starts with a good, potentially context specific wordlist.
A simple example of a context specific addition to a default wordlist is using output from recon to feed into wordlists for the next stages.
#!/bin/bash
# Optimally you already have the links from a previous step
echo 'https://target.tld' | hakrawler -subs -u | tee target.tld.txt
cat target.tld.txt | unfurl keys | anew custom_words.txt
cat target.tld.txt | unfurl values | anew custom_words.txt
# Add common entries from a public SecLists wordlist
cat burp-parameter-names.txt | anew custom_words.txt
A web fuzzing tool that aims to be accurate, reliable and fast.
Swiss army knife: giving the user as much control as possible.
Feature-rich. If it doesn't exist, you don't need it (joking obv.)
GET /path/resourcename?id=12345 HTTP/1.1
Host: ffuf.io.fi
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; ...
Accept: text/html
Connection: keep-alive
Cookie: cookiename=cookievalue;session_id=1234567890
Pragma: no-cache
Cache-Control: no-cache
GET /path/FUZZRES?id=12345 HTTP/1.1
Host: ffuf.io.fi
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; ...
Accept: text/html
Connection: keep-alive
Cookie: cookiename=FUZZCOOK;session_id=1234567890
Pragma: no-cache
Cache-Control: no-cache
$ ffuf -w custom_words.txt:FUZZRES -w cookies.txt:FUZZCOOK ...
[meme image by @aufzayed]
The examples below show how things are done manually. Usually the only real modification you need for automation is autocalibration: typically -ac together with -ach (per-host autocalibration) is enough.
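For example, with autocalibration enabled, the basic discovery command below would become:

ffuf -w "/path/to/wordlist" -u "https://ffuf.io.fi/FUZZ" -t 100 -c -ac -ach

# Directory and file discovery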
ffuf -w "/path/to/wordlist" -u "https://ffuf.io.fi/FUZZ" -t 100 -c
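# Password bruteforce via a login POST request, filtering out responses containing "error"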
ffuf -c -X POST -H "Content-Type: application/x-www-form-urlencoded" \
-d "username=joohoi&password=FUZZ" -w passwords.txt \
-u "https://ffuf.io.fi/login.php" -fr "error"
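# HTTP Basic Auth bruteforce with two wordlists (clusterbomb mode), filtering out 401 responses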
ffuf -c -w "users.txt:USER" -w "passwords.txt:PASS" \
-u "https://USER:PASS@ffuf.io.fi/secure/" -fc 401
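# Virtual host discovery by fuzzing the Host header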
ffuf -c -w SecLists/Discovery/DNS/fierce-hostlist.txt \
-H "Host: FUZZ.ffuf.io.fi" -t 1000 -u "http://ffuf.io.fi/"
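# GET parameter name discovery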
ffuf -c -w "~/SecLists/Discovery/Web-Content/burp-parameter-names.txt" \
-u "https://ffuf.io.fi/content.php?FUZZ=true"
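# Fuzzing numeric parameter values with a generated sequence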
seq 1 10000 > numbers.txt && \
ffuf -c -w "numbers.txt" -u "https://ffuf.io.fi/content.php?id=FUZZ"
# chars.txt with a list of special characters and/or strings
ffuf -w chars.txt -u https://ffuf.io.fi/reflection.php?data=abcdFUZZefg \
-c -v -fr "abcdFUZZefg"
# ti.txt with different template injection payloads with a common outcome
ffuf -w ti.txt -u https://ffuf.io.fi/reflection.php?ti=FUZZ -mr 'abc42abc' -v -c
...where the wordlist contains test cases like abc{{ 6*7 }}abc
ffuf -w sqli.txt -u https://ffuf.io.fi/search.php?q=FUZZ -c -v -ft '<5000'
...where the wordlist contains test cases like: ') or sleep(5)='
ffuf -w lfi.txt -u https://ffuf.io.fi/show.php?file=FUZZ -mr 'root:x:' -v -c
...where the wordlist includes test cases like ../../../etc/passwd
ffuf -w hosts.txt:HOST -u https://HOST/.git/config -c -v -mr '\[core\]'
...to find exposed git repositories to extract
...and be effective at the same time.
Especially when scanning live production targets, it's best to scale horizontally. This way you won't be stressing a single server too much, and you might avoid getting blocked by rate limits / WAFs as well ;)
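A minimal sketch of the idea, assuming four worker hosts reachable over SSH (the hostnames, paths and rate limit are illustrative):

#!/bin/bash
# Split the wordlist into 4 roughly equal chunks: chunk_aa .. chunk_ad
split -n l/4 wordlist.txt chunk_
# Run one ffuf instance per worker, each with its own chunk and
# a per-instance request rate cap (requests per second)
i=1
for chunk in chunk_*; do
  scp "$chunk" "worker${i}:/tmp/chunk.txt"
  ssh "worker${i}" "ffuf -w /tmp/chunk.txt -u 'https://target.tld/FUZZ' -rate 50 -o /tmp/results.json -of json" &
  i=$((i+1))
done
wait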