A self-hosted solution to build and browse your own doujinshi library.
This repository contains three independent pieces:
- `scraper/` – an asynchronous Python 3 scraper that downloads doujinshi from nhentai.net into a local `manga/` folder.
- `backend/` – a small Node.js API serving the downloaded files and metadata.
- `frontend/` – a single-page web interface built with Vite and Vue.
The scraper is completely standalone. Use it to gather the content you want first, then run the backend and frontend to host the library.
- Download doujinshi from nhentai as individual folders with JSON metadata.
- Browse the collection in a responsive web UI.
- Download any entry as a PDF or zipped archive.
- Docker support for easy deployment.
Ensure Python 3.11+ is installed. Inside `scraper/`, adjust `main.py` to choose what to download, then run:

```bash
cd scraper
pip install -r requirements.txt
python3 main.py
```

Downloaded files appear under `manga/` (created automatically at the repository root).
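Each entry is saved in its own folder alongside its JSON metadata. As a rough sketch, the layout looks something like the following; the exact file names are assumptions and may differ:

```
manga/
└── 123456/            # one folder per downloaded entry
    ├── metadata.json  # illustrative name for the JSON metadata file
    ├── 1.jpg          # page images
    └── ...
```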
To set up and run the backend and frontend in development:

```bash
npm run install:all
cp backend/.env.example backend/.env
cp frontend/.env.example frontend/.env
```

Edit the `.env` files to set your API key and optional password, then start both dev servers:

```bash
npm run dev
```

The frontend runs on http://localhost:5173 and the API on http://localhost:8787. Update the ports inside the `.env` files if needed.
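As a sketch, a minimal `backend/.env` might look like the block below; the variable names are assumptions, so treat `backend/.env.example` as the source of truth:

```bash
# Hypothetical backend/.env — variable names are illustrative, not confirmed by this repo.
API_KEY=change-me   # key clients must send in the Authorization header
PASSWORD=secret     # optional password for the web UI
PORT=8787           # API port
```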
All API requests must include the key defined in `backend/.env` using the `Authorization` header.
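For example, with curl (whether the backend expects a `Bearer` prefix or the raw key is an assumption; adjust to match your setup):

```bash
# Hypothetical request — the Bearer scheme and port are assumptions.
curl -H "Authorization: Bearer $API_KEY" http://localhost:8787/api/manga
```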
For a production build:

```bash
npm run build
```

Serve the contents of `frontend/dist` on any static host and run the backend (using `npm run prod` inside `backend/`, or the Docker setup below).
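Any static file server will do; as one illustrative option (not part of this repository):

```bash
# `serve` is a third-party npm package, used here purely as an example.
npx serve frontend/dist
```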
Both components have ready-to-use `docker-compose.yml` files. From the repository root run:

```bash
# API
cd backend && docker compose up -d

# Frontend
cd ../frontend && docker compose up -d
```

The containers read the same `.env` files and mount `../manga` to make your collection available.
- `GET /api/manga` – list all entries
- `POST /api/rescan` – rebuild the cache after adding files
- `GET /api/stats` – number of pages and library size
- `GET /api/manga/:id/archive` – download as ZIP
- `GET /api/manga/:id/pdf` – download as PDF

Static images are served from `/manga`.
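For instance, triggering a rescan and downloading an entry from the command line might look like this; the entry id, port, and `Bearer` scheme are placeholders and assumptions:

```bash
# Hypothetical usage — adjust the id, port, and auth scheme to your setup.
curl -X POST -H "Authorization: Bearer $API_KEY" http://localhost:8787/api/rescan
curl -H "Authorization: Bearer $API_KEY" -o 12345.zip http://localhost:8787/api/manga/12345/archive
```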
The scraper targets Python 3.13 but also works on Python 3.11 and 3.12. Earlier versions are not supported. The Node backend requires Node.js 20 or later.
The front-end and back-end components were generated entirely by AI, with no human-written code; only the scraper was hand-crafted. Once the scraper was complete, I wanted to quickly put together a front-end, but thanks to the power of OpenAI Codex the project soon evolved into a full-fledged, API-driven website with both a front-end and a back-end.
Released under the terms of the GNU General Public License v3.