
πŸš€ Project Name : grep-backURLs


grep-backURLs: An automated way to find juicy information from websites

πŸ“Œ Overview

grep-backURLs is a web-security automation tool that extracts potentially sensitive information during bug hunting. It uses subfinder to enumerate subdomains, feeds those subdomains as input to waybackurls to collect archived URLs, and then greps the results against keywords.txt to surface entries that may contain credentials or other juicy data.
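Conceptually, the pipeline is equivalent to chaining the underlying tools by hand. A minimal sketch, assuming subfinder and waybackurls are on your PATH and using their standard flags:

```sh
# Enumerate subdomains, collect archived URLs for each,
# then filter against the keyword list -- roughly what
# grep-backURLs automates in a single pass.
subfinder -d example.com -silent \
  | waybackurls \
  | grep -i -f keywords.txt > juicy_urls.txt
```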

πŸ€” Why This Name?

Simply because it uses the grep command to filter the links produced by waybackurls.

⌚ Total time taken to develop, test & build the binary

Approx. 3 hr 48 min 58 sec

πŸ™ƒ Why I Created This

Because I didn't want to waste time finding subdomains and then trying each keyword from keywords.txt by hand to check whether any credentials were exposed, I decided to automate it.

πŸ“š Requirements & Dependencies

  • subfinder
  • waybackurls
  • Go (to build or run from source)

πŸ“₯ Installation Guide & Usage:

⚑ Quick Install:

  1. Git clone this repository.
  2. Go to the grep-backURLs directory and give main.go execute permission.
  3. Run ./main.go (see the sketch below).
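The same steps as a shell sketch, assuming a Go toolchain is installed (the repository URL follows from the project name above):

```sh
git clone https://github.com/gigachad80/grep-backURLs
cd grep-backURLs
chmod +x main.go
./main.go        # or, if direct execution fails: go run main.go
```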

OR

  • You can directly download the binary from the releases section here.

πŸƒ Usage :

A tool to find sensitive information by enumerating subdomains, collecting Wayback Machine URLs,
analyzing them, and matching against custom patterns.
Options:
  -config
        Run interactive configuration setup and exit
  -domain string
        Specify the target domain (e.g., example.com)
  -html
        Generate a comprehensive HTML report summarizing all findings in the current directory
  -json
        Generate results in JSON format for each pattern
  -keywords-file string
        Path to a file containing grep-like keywords (one per line) (default "grep_keywords.txt")
  -markdown
        Generate results in Markdown format for each pattern
  -output-dir string
        Base directory to store all scan output files (default "output")
  -v    Display the tool version and exit (shorthand)
  -version
        Display the tool version and exit
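For example, a typical scan of a single domain with an HTML report (the binary name here is assumed; substitute however you built or downloaded it):

```sh
./grep-backURLs -domain example.com -html -output-dir example-scan
# JSON and Markdown results are written automatically;
# the HTML report is generated because -html is set.
```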

Note:

You don't need to pass the -json or -markdown flags; results in both formats are generated automatically whether or not you specify them. For the HTML report, however, you do need to pass the -html flag.

For customisation: edit config.json in your editor (pluma / notepad / nano / vim πŸ˜‰)
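A hypothetical config.json sketch; the key names below are invented for illustration and the real schema may differ, but the tunables themselves (concurrency, output directory, subdomain-enumeration timeout, keywords file, logging) match the v2 features listed below:

```jsonc
{
  // All keys here are illustrative guesses, not the tool's real schema.
  "concurrency": 10,
  "output_dir": "output",
  "subdomain_enum_timeout_seconds": 120,
  "keywords_file": "grep_keywords.txt",
  "logging": true
}
```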

πŸ’« What's new in grep-backURLs v2:

  • Customisation and control over concurrency, output directory name, timeout for subdomain enumeration, custom keywords, and logging.

  • HTML report, JSON, and Markdown support.

πŸ“ Roadmap / To-do

  • Release Cross Platform Executables
  • Add More Keywords
  • Output in JSON & Markdown format
  • HTML Report
  • Attach Demo Screenshot
  • Update Readme

πŸ’“ Credits:

πŸ“ž Contact

πŸ“§ Email: pookielinuxuser@tutamail.com

πŸ“„ License

Licensed under MIT

πŸ•’ Last Updated: May 24, 2025

πŸ•’ First Published: January 2025
