Scrapy extension to save items to a SQL database

scrapy-sqlitem

scrapy-sqlitem allows you to define Scrapy items using SQLAlchemy models or tables. It also provides an easy way to save items to the database in chunks.

This project is in beta. Pull requests and feedback are welcome. If you use this project, please let me know. The regular caveats of using a SQL database backend for a write-heavy application still apply.

Inspired by scrapy-redis and scrapy-djangoitem.

Quickstart

Define items using the SQLAlchemy ORM

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

from scrapy_sqlitem import SqlItem

Base = declarative_base()

class MyModel(Base):
    __tablename__ = 'mytable'

    id = Column(Integer, primary_key=True)
    name = Column(String)

class MyItem(SqlItem):
    sqlmodel = MyModel

Or define items using SQLAlchemy Core

from sqlalchemy import Table, Column, Integer, String, MetaData

from scrapy_sqlitem import SqlItem

metadata = MetaData()

class MyItem(SqlItem):
    sqlmodel = Table('mytable', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String, nullable=False))

If the tables have not been created yet, make sure to create them. See the SQLAlchemy docs and the example spider.
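
For example, with the Core table above, the schema can be created directly from the metadata (a minimal sketch, assuming the same sqlite URI used in the settings below; for the ORM version use Base.metadata instead):

from sqlalchemy import create_engine

engine = create_engine("sqlite:///")
metadata.create_all(engine)  # creates 'mytable' if it does not already exist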

Use SqlSpider to automatically save items to the database

settings.py

DATABASE_URI = "sqlite:///"

Define your spider

from scrapy.selector import Selector

from scrapy_sqlitem import SqlSpider

# MyItem is the SqlItem defined above; import it from your items module

class MySpider(SqlSpider):
    name = 'myspider'

    start_urls = ('http://dmoz.org',)

    def parse(self, response):
        selector = Selector(response)
        item = MyItem()
        item['name'] = selector.xpath('//title[1]/text()').extract_first()
        yield item

Run the spider

scrapy crawl myspider

Query the database

SELECT * FROM mytable;

 id |               name                |
----+-----------------------------------+
  1 | DMOZ - the Open Directory Project |

Other Information

Do not want to use SqlSpider? Write a pipeline instead.

from sqlalchemy import create_engine


class CommitSqlPipeline(object):

    def __init__(self):
        self.engine = create_engine("sqlite:///")

    def process_item(self, item, spider):
        item.commit_item(engine=self.engine)
        return item
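
Remember to enable the pipeline via ITEM_PIPELINES in settings.py (the module path below is an assumption about your project layout):

ITEM_PIPELINES = {
    'myproject.pipelines.CommitSqlPipeline': 300,
}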

Drop items that are missing required field data before saving to the db

from scrapy.exceptions import DropItem


class DropMissingDataPipeline(object):

    def process_item(self, item, spider):
        if item.null_required_fields:
            raise DropItem("Item is missing required field data")
        else:
            return item

# Watch out for Serial primary keys that are considered null.

Save to the database in chunks rather than item by item

Inherit from SqlSpider and configure the chunk sizes in settings.py:

DEFAULT_CHUNKSIZE = 500

CHUNKSIZE_BY_TABLE = {'mytable': 1000, 'othertable': 250}

If an error occurs while saving a chunk to the database, it will fall back to saving each item in the chunk one at a time.

Gotchas

If you override either item_scraped or spider_closed, make sure to call super!

class MySpider(SqlSpider):

    def parse(self, response):
        pass

    def spider_closed(self, spider, reason):
        super(MySpider, self).spider_closed(spider, reason)
        self.log("Log some really important custom stats")

Be careful with other mixins. The inheritance structure can get a little messy. If a class earlier in the MRO overrides item_scraped and does not call super, the item_scraped method of SqlSpider will never get called.
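
For example, an item_scraped override should call super as well (a sketch; it assumes item_scraped takes the standard Scrapy item_scraped signal arguments item, response, spider, and SomeMixin is a placeholder for your own mixin):

class MyMixedSpider(SomeMixin, SqlSpider):

    def item_scraped(self, item, response, spider):
        # call super so SqlSpider's item_scraped still runs and the item is saved
        super(MyMixedSpider, self).item_scraped(item, response, spider)
        self.log("custom per-item bookkeeping")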

Other Methods of sqlitem

sqlitem.null_required_fields

  • returns a set of the database key names that are marked not nullable and whose corresponding data in the item is null.

sqlitem.null_primary_key_fields

  • returns a set of the primary key names whose corresponding data in the item is null.

sqlitem.primary_keys

sqlitem.required_keys
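
A quick illustration with the Core MyItem defined above (a sketch; the exact sets depend on your table definition):

item = MyItem()
item.null_primary_key_fields   # e.g. {'id'} -- no primary key data set yet
item.null_required_fields      # e.g. {'id', 'name'} -- both columns are not nullable and still empty

item['name'] = 'DMOZ - the Open Directory Project'
item.null_required_fields      # e.g. {'id'} -- see the note about serial primary keys above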

ToDo

  • Continuous integration tests