Archive.org lets you check the history of sites and pages, but a service most people aren't aware of is one that returns a list of every page Archive.org has for a given domain. This is great for enumerating web applications; you'll often find parts of a web app that have long been forgotten (and are usually vulnerable).
This module doesn't make any requests to the targeted domain; it simply outputs a list, to the screen or to a file, of all the pages it has found on Archive.org.
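If you're curious what the module is actually asking Archive.org for, you can pull roughly the same list by hand. The sketch below queries Archive.org's CDX search API from Python; the endpoint and parameters are my best guess at reproducing the module's output rather than something lifted from its source, so treat it as illustrative:

#!/usr/bin/env python
# Rough, by-hand equivalent of the URL list enum_wayback produces.
# NOTE: the CDX endpoint and parameters here are my assumption about
# how to reproduce the module's output; they are not taken from the
# module's source.
import urllib

domain = "portswigger.net"
params = urllib.urlencode({
    'url': domain,
    'matchType': 'domain',   # match the domain and its subdomains
    'fl': 'original',        # only return the original URL field
    'collapse': 'urlkey',    # collapse duplicate captures of the same URL
    'output': 'text',
})
resp = urllib.urlopen("http://web.archive.org/cdx/search/cdx?" + params)
for line in resp:
    print line.strip()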
msf auxiliary(enum_wayback) > info

       Name: Pull Archive.org stored URLs for a domain
    Version: 10394
    License: Metasploit Framework License (BSD)
       Rank: Normal

Provided by:
  Rob Fuller

Basic options:
  Name     Current Setting  Required  Description
  ----     ---------------  --------  -----------
  DOMAIN   portswigger.net  yes       Domain to request URLS for
  OUTFILE                   no        Where to output the list for use

Description:
  This module pulls and parses the URLs stored by Archive.org for the
  purpose of replaying during a web assessment. Finding unlinked and
  old pages.

msf auxiliary(enum_wayback) > run
[*] Pulling urls from Archive.org
[*] Located 289 addresses for portswigger.net
http://portswigger.net/
http://portswigger.net/books/
http://portswigger.net/burp/
http://portswigger.net/burp/bullet.gif
http://portswigger.net/burp/buy.html
http://portswigger.net/burp/help.html
http://portswigger.net/burp/ps.css
http://portswigger.net/burp/screenshots.html
http://portswigger.net/burp/tc.html
http://portswigger.net/corner.gif
**SNIPPED**
You can set OUTFILE so that you can parse the list a bit and import it into Burp, or use a quick script to make the requests yourself. Here is one I wrote in Python:
# cat requestor.py
#!/usr/bin/env python
import urllib

# Replay every URL from the Wayback list through a local proxy
# (Burp listening on 127.0.0.1:8080) so the responses land in Burp.
proxies = {'http': 'http://127.0.0.1:8080'}
filename = "/tmp/waybacklist.txt"

fl = open(filename, 'r')
for line in fl:
    url = line.strip()
    if len(url) < 4:
        print "Skipping blank line"
    else:
        print "Requesting " + url
        temp = urllib.urlopen(url, proxies=proxies).read()
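To use it, start Burp with its proxy listener on 127.0.0.1:8080, save the module's output to /tmp/waybacklist.txt (the path the script hardcodes), and run the script; each archived URL gets requested through the proxy, so everything shows up in Burp's proxy history and site map for you to review.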
Enjoy!