# Degen Scraper

Pipeline for generating AI character files and training datasets by scraping public figures' online presence across Twitter and blogs.

> ⚠️ **IMPORTANT**: Create a new Twitter account for this tool. DO NOT use your main account, as it may trigger Twitter's automation detection and result in account restrictions.

## Setup

1. Install dependencies:

   ```bash
   npm install
   ```

2. Copy `.env.example` into a `.env` file and fill in your values (a filled-in example follows this list):

   ```
   # (Required) Twitter Authentication
   TWITTER_USERNAME=     # your twitter username
   TWITTER_PASSWORD=     # your twitter password

   # (Optional) Blog Configuration
   BLOG_URLS_FILE=      # path to file containing blog URLs

   # (Optional) Scraping Configuration
   MAX_TWEETS=          # max tweets to scrape
   MAX_RETRIES=         # max retries for scraping
   RETRY_DELAY=         # delay between retries
   MIN_DELAY=           # minimum delay between requests
   MAX_DELAY=           # maximum delay between requests
   ```
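For reference, a filled-in `.env` might look like the sketch below. All values are illustrative; only the Twitter credentials are required.

```
TWITTER_USERNAME=my_scraper_account   # a throwaway account, not your main one
TWITTER_PASSWORD=change-me
BLOG_URLS_FILE=./blogs.txt            # only needed for blog collection
MAX_TWEETS=1000                       # example cap, tune as needed
```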

## Usage

### Twitter Collection

```bash
npm run twitter -- <username>
```

Example: `npm run twitter -- pmarca`

### Blog Collection

```bash
npm run blog
```
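Blog collection reads the file referenced by `BLOG_URLS_FILE`. A minimal sketch of that file, assuming one URL per line (the file name and format here are assumptions, not documented behavior):

```
https://example.com/blog
https://example.org/essays
```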

### Generate Character

```bash
npm run character -- <username>
```

Example: `npm run character -- pmarca`

### Finetune

```bash
npm run finetune
```

### Finetune (with test)

```bash
npm run finetune:test
```

### Generate Virtuals Character Card

Character card and goal samples are described in the Virtuals whitepaper: https://whitepaper.virtuals.io/developer-documents/agent-contribution/contribute-to-cognitive-core#character-card-and-goal-samples

Run this after the Twitter Collection step:

```bash
npm run generate-virtuals -- <username> [date]
```

Example: `npm run generate-virtuals -- pmarca 2024-11-29`

Example without date: `npm run generate-virtuals -- pmarca`

The generated character file is written to `pipeline/[username]/[date]/character/character.json`, and the generated tweet dataset to `pipeline/[username]/[date]/raw/tweets.json`.
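To sanity-check a run, you can load those outputs with a few lines of Node. This is a minimal sketch, not part of the pipeline: the script name and the exact JSON shapes are assumptions, only the paths come from above.

```js
// inspect-output.js (illustrative helper, not shipped with the repo)
const fs = require("fs");
const path = require("path");

// Usage: node inspect-output.js <username> <date>, e.g. node inspect-output.js pmarca 2024-11-29
const [username, date] = process.argv.slice(2);
const base = path.join("pipeline", username, date);

// Load the generated character file and tweet dataset.
const character = JSON.parse(fs.readFileSync(path.join(base, "character", "character.json"), "utf8"));
const tweets = JSON.parse(fs.readFileSync(path.join(base, "raw", "tweets.json"), "utf8"));

console.log("character keys:", Object.keys(character));
console.log("tweet count:", Array.isArray(tweets) ? tweets.length : "unknown shape");
```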