find HIDDEN urls!! (subdomain enumeration hacking) // ft. HakLuke
Based on NetworkChuck's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Hidden URLs aren’t just a curiosity—they’re often the difference between a secure website and one with exposed endpoints. Subdomain enumeration and URL discovery techniques help map those “extra” locations under a domain, such as learn.networkchuck.com, which may point to entirely different servers and applications. That matters because unused or misconfigured subdomains can become security liabilities, while bug bounty hunters use the same process to find attack surfaces worth reporting.
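To make the "different servers" point concrete, here is a minimal Python sketch (not from the transcript) that resolves a domain and one of its subdomains; differing addresses suggest separate infrastructure, each with its own security posture.

```python
# Minimal sketch: compare where a domain and one of its subdomains resolve.
# Different A records often mean different servers and applications.
import socket

for host in ("networkchuck.com", "learn.networkchuck.com"):
    try:
        print(f"{host} -> {socket.gethostbyname(host)}")
    except socket.gaierror:
        print(f"{host} -> no DNS record")
```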
The transcript frames this as a legitimate recon workflow when done with permission. In bug bounty programs, companies define a scope of domains and rules for what testers can probe; platforms like HackerOne then pay for validated vulnerabilities. Within that allowed scope, automated discovery can surface candidate targets, such as the hypothetical vulnerable subdomains used as examples, faster than manual browsing. The practical goal is to identify endpoints that might host vulnerabilities, then report findings through the program's process.
Two tools anchor the workflow. hakrawler is presented as a web crawler that performs active enumeration: it takes one or more target websites, follows links recursively to a configurable depth, and discovers additional resources such as JavaScript files. Because it navigates the target's web surface directly, it can generate enough real traffic to stress systems or violate program rules if the scope doesn't permit active crawling. The transcript emphasizes caution: not every scope allows this kind of direct probing.
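hakrawler itself is written in Go; the toy Python sketch below only illustrates the general idea of depth-limited, same-host crawling that extracts links and script references. It is not hakrawler's implementation, and the parsing is deliberately simplistic.

```python
# Illustration only: a toy depth-limited crawler in the spirit of active
# enumeration tools like hakrawler. Run it only against hosts you are
# explicitly permitted to test.
import re
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

LINK_RE = re.compile(r'(?:href|src)=["\']([^"\']+)["\']', re.IGNORECASE)

def crawl(start_url, max_depth=2):
    seen, frontier = set(), [(start_url, 0)]
    host = urlparse(start_url).netloc
    while frontier:
        url, depth = frontier.pop()
        if url in seen or depth > max_depth:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # unreachable or non-text resource; skip it
        for ref in LINK_RE.findall(html):
            absolute = urljoin(url, ref)
            if urlparse(absolute).netloc == host:  # stay on the target host
                frontier.append((absolute, depth + 1))
    return seen

if __name__ == "__main__":
    for u in sorted(crawl("https://example.com")):
        print(u)  # includes discovered resources such as .js files
```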
Installation is demonstrated with Docker on Linux: pull or build the tool's container image, then run it with parameters such as the target domain and optional flags (including a subdomain-focused mode). The output is described as fast and broad, including links discovered through external references like social profiles.
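As a hedged sketch of that Docker workflow driven from Python: the image name and tool flags below are assumptions from memory, not confirmed by the transcript, so check the hakrawler README before relying on them, and substitute an in-scope target.

```python
# Sketch of the Docker-based workflow described above.
# Image name and flags are ASSUMPTIONS; verify against the project docs.
import subprocess

target = "https://example.com"  # hypothetical, in-scope target
result = subprocess.run(
    ["docker", "run", "--rm", "-i",  # -i: forward our stdin to the container
     "hakluke/hakrawler",            # assumed Docker Hub image name
     "-d", "3",                      # assumed flag: crawl depth
     "-subs"],                       # assumed flag: include subdomains
    input=target.encode(),
    capture_output=True,
)
print(result.stdout.decode())
```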
The second tool, GAU (Get All URLs), is positioned as more passive. Instead of crawling a live site, GAU aggregates known URLs from existing datasets and archives. The transcript lists sources such as AlienVault's Open Threat Exchange (OTX), the Wayback Machine, and Common Crawl (hosted on AWS infrastructure). That design reduces direct interaction with the target server while still producing large URL lists that can reveal hidden paths and resources.
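GAU aggregates several sources at once; as a minimal sketch of the passive approach, here is a Python query against just one of those sources, the Wayback Machine's public CDX API. This is not GAU's code, only an illustration of retrieving historical URLs without touching the target's own servers.

```python
# Passive discovery sketch: ask the Wayback Machine's CDX API for
# historical URLs under a domain, without contacting the target itself.
from urllib.parse import urlencode
from urllib.request import urlopen

def wayback_urls(domain, limit=50):
    params = urlencode({
        "url": f"{domain}/*",   # match everything under the domain
        "fl": "original",       # return only the original URL field
        "collapse": "urlkey",   # de-duplicate equivalent URLs
        "limit": limit,
    })
    endpoint = f"http://web.archive.org/cdx/search/cdx?{params}"
    with urlopen(endpoint, timeout=30) as resp:
        return resp.read().decode().splitlines()

for url in wayback_urls("networkchuck.com"):
    print(url)
```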
Beyond tool usage, the transcript highlights why these projects exist: developers build automation when existing tools don't meet a specific need. HakLuke's choice of Go is attributed to speed and native concurrency, while the broader community lesson is to tinker, fork, and extend open-source tools rather than waiting for a perfect solution. The closing guidance ties the workflow back to defensive and offensive value: discover what a domain exposes, fix what shouldn't be public, or report what is vulnerable, while repeatedly stressing permission and ethical boundaries. A sponsor segment promotes using a VPN (Private Internet Access) for privacy during online activity, paired with claims of encryption and a no-log policy.
Cornell Notes
Subdomain enumeration and hidden-URL discovery help map the "extra" parts of a domain, often separate servers or applications, so security teams and bug bounty hunters can find exposed endpoints faster. hakrawler performs active enumeration by crawling a site recursively and discovering links and resources like JavaScript files; it can generate enough real traffic to require strict scope permission. GAU (Get All URLs) performs more passive enumeration by pulling known URLs from archives and threat-intel datasets such as the Wayback Machine, Common Crawl, and AlienVault's Open Threat Exchange (OTX), without directly crawling the live site. Both tools are demonstrated via Docker, and the transcript emphasizes ethical use: only test targets with explicit permission. The broader takeaway is to learn from tool authors and build or extend your own automation when gaps exist.
- What problem does subdomain enumeration solve, and why can it reveal security risk?
- How does hakrawler's approach differ from GAU's, and what does that mean for scope and risk?
- What sources does GAU rely on to produce URL lists without crawling the target directly?
- Why is Docker recommended for installing and running these tools?
- What practical workflow do these tools support for bug bounty or defensive testing?
- What's the community lesson about tool-building mentioned in the transcript?
Review Questions
- When would active enumeration (like crawling) be inappropriate even if a tool can do it?
- Compare how hakrawler and GAU obtain URLs. Which one interacts with the live target more directly, and why does that matter?
- What kinds of security outcomes can come from discovering JavaScript files and historical URLs for a domain?
Key Points
1. Subdomain enumeration helps uncover hostnames that may point to entirely different servers and applications, expanding the attack surface under a domain.
2. Hidden URL discovery can support both defensive audits and bug bounty hunting, but only within explicitly permitted scope.
3. hakrawler performs active enumeration by crawling pages recursively and extracting links and resources such as JavaScript files, which can generate real traffic to targets.
4. GAU (Get All URLs) performs more passive enumeration by aggregating known URLs from archives and datasets like the Wayback Machine, Common Crawl, and AlienVault's Open Threat Exchange (OTX).
5. Docker is used to install and run both tools in a consistent environment, reducing local setup friction.
6. Tool authors choose implementation details (like Go for concurrency) to improve performance and usability for large-scale discovery.
7. The transcript emphasizes learning-by-building: fork existing tools, add features, and share improvements rather than waiting for a perfect solution.