⚡ A distributed crawler for Weibo, built with Celery and Requests.
#Web crawler# Supercrawler is a web crawler that automatically crawls websites. Define custom handlers to parse content. It obeys robots.txt, rate limits, and concurrency limits.
A Django-based application for creating, deploying, and running Scrapy spiders in a distributed manner.