
# twscrape

x to nostr scraper

## installation

  1. git clone the repo to /usr/share/twscrape/;
     if you use a different path, edit the path in twitter.js

  2. create a file named .sec and put your hex private key in it

  3. create a file named queue and paste x.com profile urls into it

  4. create cookies.txt by running:
    ./extract_cookies.sh
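Steps 2 and 3 can be sketched as shell commands. The key and profile url below are placeholders, and "one url per line" is an assumption about how start-queue.sh reads the queue file:

```shell
# run inside /usr/share/twscrape/
# write the hex private key (placeholder shown) into .sec
printf '%s\n' 'replace-with-hex-private-key' > .sec
chmod 600 .sec   # keep the nostr key readable only by you

# queue: x.com profile urls, assumed one per line
printf '%s\n' 'https://x.com/example' > queue
```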

optionally, install the systemd service:
sudo cp twitter.service /etc/systemd/system/twitter.service
sudo systemctl daemon-reload
sudo systemctl start twitter
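The twitter.service file itself is not shown here; a minimal sketch of what such a unit might look like, assuming the default install path and that it runs the queue once per start (Type=oneshot matches the note below about the service not being continuous):

```ini
[Unit]
Description=twscrape queue runner
After=network-online.target

[Service]
Type=oneshot
WorkingDirectory=/usr/share/twscrape
ExecStart=/usr/share/twscrape/start-queue.sh

[Install]
WantedBy=multi-user.target
```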

## running
scrape a single profile (where $p is the profile url):
node /usr/share/twscrape/twitter.js "$p"

or scrape all profiles in the queue:
./start-queue.sh

if using the systemd service:
sudo systemctl enable twitter
sudo systemctl start twitter

the systemd service simply runs the queue once, so it is not a continuous process. add a while loop to start-queue.sh if you need one.
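The loop described above could look like this: a sketch only, where the 300-second pause is an arbitrary choice and the loop stops as soon as start-queue.sh exits non-zero or is missing:

```shell
#!/bin/sh
# rerun the one-shot queue script forever, pausing between passes;
# the loop ends when start-queue.sh fails or cannot be found
while ./start-queue.sh; do
  sleep 300
done
```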

## config
you can change some settings in the config file.
to see what the scraper is doing, set headless mode to false in twitter.js.

## repository details

name: twscrape

description: test

nostr clone url:
nostr://npub13e5k43htjmj44z2k6dex0h33rckru5l7fk63ntyn3l4yr8ftvxms53kexm/twscrape
(install ngit, then git clone the nostr url above)

git server:
https://relay.ngit.dev/npub13e5k43htjmj44z2k6dex0h33rckru5l7fk63ntyn3l4yr8ftvxms53kexm/twscrape.git
