# internet-archiving

Here are 26 public repositories matching this topic...

hoardy-web

Passively capture, archive, and hoard your web browsing history, including the contents of the pages you visit, for later offline viewing, mirroring, and/or indexing. Your own personal private Wayback Machine that can also archive HTTP POST requests and responses, as well as most other HTTP-level data.

  • Updated Nov 20, 2024
  • Python
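
The hoardy-web entry above is about archiving HTTP-level traffic (including POST requests and responses) as you browse. As a purely illustrative sketch of that general idea, and not hoardy-web's own capture pipeline, the snippet below records request/response pairs passing through a local proxy using mitmproxy's addon API; the output path and JSON-lines format are assumptions chosen for the example.

```python
# Illustrative only, not hoardy-web's implementation: passively record HTTP
# request/response pairs (including POST bodies) with a mitmproxy addon.
# Run with:  mitmdump -s archive_addon.py
import base64
import json
import time

from mitmproxy import http

ARCHIVE_PATH = "capture.jsonl"  # hypothetical output file for this sketch


class PassiveArchiver:
    def response(self, flow: http.HTTPFlow) -> None:
        # Called once for every completed request/response pair seen by the proxy.
        record = {
            "ts": time.time(),
            "method": flow.request.method,  # GET, POST, ...
            "url": flow.request.pretty_url,
            "request_body": base64.b64encode(flow.request.raw_content or b"").decode(),
            "status": flow.response.status_code,
            "response_headers": dict(flow.response.headers),
            "response_body": base64.b64encode(flow.response.raw_content or b"").decode(),
        }
        with open(ARCHIVE_PATH, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(record) + "\n")


addons = [PassiveArchiver()]
```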

⬇️ A CLI tool to download all discovered content from a URL (like youtube-dl/yt-dlp, forum-dl, gallery-dl). 🎭 Uses headless Chrome to get HTML, JS, CSS, images/video/audio/subtitles, PDFs, screenshots, article text, git srcs, and more...

  • Updated Oct 21, 2024
  • Python
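
The description above mentions using headless Chrome to fetch rendered HTML and screenshots. As a minimal sketch of that basic technique, and not this tool's actual implementation, the snippet below drives headless Chromium via Playwright (an assumed choice of driver) to save a page's rendered DOM and a full-page screenshot. Setup: `pip install playwright && playwright install chromium`.

```python
# Illustrative sketch: save a page's rendered HTML and a full-page screenshot
# using headless Chromium driven by Playwright (not this tool's own code).
import pathlib

from playwright.sync_api import sync_playwright


def snapshot(url: str, out_dir: str = "snapshot") -> None:
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")  # wait for the page to settle
        (out / "page.html").write_text(page.content(), encoding="utf-8")  # rendered DOM
        page.screenshot(path=str(out / "page.png"), full_page=True)       # full-page PNG
        browser.close()


if __name__ == "__main__":
    snapshot("https://example.com")
```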
