Crawley is a Pythonic scraping/crawling framework designed to make it easy to extract data from web pages and store it in structured storage such as databases.
## Features
- High-speed web crawler built on Eventlet.
- Supports database engines such as PostgreSQL, MySQL, Oracle, and SQLite.
- Command line tools.
- Extract data using your favourite tool: XPath or PyQuery (a jQuery-like library for Python); see the sketch after this list.
- Cookie handlers.
- Very easy to use.
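
Both extraction styles plug into a crawler through its `extractor` attribute. Below is a minimal sketch of switching between them; the name `PyQueryExtractor` is an assumption based on the feature list above, so check `crawley.extractors` in your installed version.

```python
# select the extraction tool for a crawler; XPathExtractor appears in the
# walkthrough below, while PyQueryExtractor's exact name is an assumption
from crawley.crawlers import BaseCrawler
from crawley.extractors import XPathExtractor

class MyCrawler(BaseCrawler):
    start_urls = ["http://example.com"]
    # swap in the PyQuery-based extractor here to use jQuery-style selectors
    extractor = XPathExtractor
```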
Documentation: http://packages.python.org/crawley/
Project page: http://project.crawley-cloud.com/
## Installation

To install Crawley from source, run:

```bash
~$ python setup.py install
```

or install it from pip:

```bash
~$ pip install crawley
```
## Quick start

To start a new project, run:

```bash
~$ crawley startproject [project_name]
~$ cd [project_name]
```
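
The command generates the three files walked through below. A sketch of the assumed layout, inferred from the `PROJECT_ROOT` setting in settings.py (the exact template may vary between versions):

```
[project_name]/
    settings.py
    [project_name]/
        models.py
        crawlers.py
```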
""" models.py """
from crawley.persistance import Entity, UrlEntity, Field, Unicode
class Package(Entity):
#add your table fields here
updated = Field(Unicode(255))
package = Field(Unicode(255))
description = Field(Unicode(255))
""" crawlers.py """
from crawley.crawlers import BaseCrawler
from crawley.scrapers import BaseScraper
from crawley.extractors import XPathExtractor
from models import *
class pypiScraper(BaseScraper):
#specify the urls that can be scraped by this class
matching_urls = ["%"]
def scrape(self, response):
#getting the current document's url.
current_url = response.url
#getting the html table.
table = response.html.xpath("/html/body/div[5]/div/div/div[3]/table")[0]
#for rows 1 to n-1
for tr in table[1:-1]:
#obtaining the searched html inside the rows
td_updated = tr[0]
td_package = tr[1]
package_link = td_package[0]
td_description = tr[2]
#storing data in Packages table
Package(updated=td_updated.text, package=package_link.text, description=td_description.text)
class pypiCrawler(BaseCrawler):
#add your starting urls here
start_urls = ["http://pypi.python.org/pypi"]
#add your scraper classes here
scrapers = [pypiScraper]
#specify you maximum crawling depth level
max_depth = 0
#select your favourite HTML parsing tool
extractor = XPathExtractor
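
Absolute XPath expressions like the one above break whenever the page layout changes, so it helps to test them outside the crawler first. A minimal sketch using lxml and urllib2 directly (this is not part of Crawley; both libraries are assumed available, and the code follows the Python 2 era of the examples above):

```python
# standalone check of the scraper's XPath expression (not part of Crawley)
import urllib2
from lxml import html

doc = html.fromstring(urllib2.urlopen("http://pypi.python.org/pypi").read())
tables = doc.xpath("/html/body/div[5]/div/div/div[3]/table")

if tables:
    # walk the same rows the scraper iterates over and show the first cell
    for tr in tables[0][1:-1]:
        print(tr[0].text)
else:
    print("xpath matched nothing - the page layout may have changed")
```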
""" settings.py """
import os
PATH = os.path.dirname(os.path.abspath(__file__))
#Don't change this if you don't have renamed the project
PROJECT_NAME = "pypi"
PROJECT_ROOT = os.path.join(PATH, PROJECT_NAME)
DATABASE_ENGINE = 'sqlite'
DATABASE_NAME = 'pypi'
DATABASE_USER = ''
DATABASE_PASSWORD = ''
DATABASE_HOST = ''
DATABASE_PORT = ''
SHOW_DEBUG_INFO = True
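
The same keys drive the other supported engines. Below is a sketch of what a MySQL configuration might look like; the engine string, credentials, and port are assumptions for illustration, so check the Crawley documentation for the exact values your version expects:

```python
# hypothetical MySQL configuration; engine string and credentials are assumptions
DATABASE_ENGINE = 'mysql'
DATABASE_NAME = 'pypi'
DATABASE_USER = 'crawley_user'
DATABASE_PASSWORD = 'secret'
DATABASE_HOST = 'localhost'
DATABASE_PORT = '3306'  # MySQL's default port
```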
Finally, run your crawler:

```bash
~$ crawley run
```
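
Once the run finishes, the scraped rows end up in the SQLite database configured above. A minimal sketch for inspecting them with Python's standard sqlite3 module; the database file name comes from `DATABASE_NAME`, while the table name `package` is an assumption about how Crawley maps the `Package` entity:

```python
# inspect the scraped data (not part of Crawley); 'package' as the table
# name is an assumption - adjust it to whatever Crawley actually created
import sqlite3

conn = sqlite3.connect('pypi')
cursor = conn.cursor()
for updated, package, description in cursor.execute(
        "SELECT updated, package, description FROM package LIMIT 10"):
    print(package, updated, description)
conn.close()
```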