
Simple web scraper built on top of requests and BeautifulSoup

Project Description

A basic web scraper I use in many projects.

webscraper

Module to ease web scraping efforts

Supports

  • Cached web requests (wrapper around requests)
  • Built-in parsing/scraping (wrapper around BeautifulSoup)

Constructor parameters

  • url: Default URL, used if nothing else is specified
  • scheme: Default scheme for scraping
  • timeout
  • cache_directory: Where to save cache files
  • cache_time: How long a cached resource stays valid, in seconds (default: 7 minutes)
  • cache_use_advanced
  • auth_method: Authentication method (default: HTTPBasicAuth)
  • auth_username: Authentication username. If set, enables authentication
  • auth_password: Authentication password
  • handle_redirect: Allow redirects (default: True)
  • user_agent: User agent to use
  • default_user_agents_browser: Browser to set in user agent (from default_user_agents dict)
  • default_user_agents_os: Operating system to set in user agent (from default_user_agents dict)
  • user_agents_browser: Browser to set in user agent (overrides default_user_agents_browser)
  • user_agents_os: Operating system to set in user agent (overrides default_user_agents_os)
  • html2text: HTML2text settings
  • html_parser: Which HTML parser to use (default: html.parser - built in)
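
For illustration, several of these options can be combined in one settings dict. The following is a minimal sketch only: the import path is assumed from the package/module names above, and the username, password and browser values are placeholders, not values from the project documentation.

from floscraper.webscraper import WebScraper  # assumed import path

web = WebScraper({
    'url': "https://example.com/",              # default url, used if nothing else is specified
    'timeout': 10,
    'auth_username': "user",                    # setting a username enables authentication
    'auth_password': "secret",
    'handle_redirect': True,
    'default_user_agents_browser': "firefox",   # placeholder; taken from the default_user_agents dict
})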

Example

from floscraper.webscraper import WebScraper  # import path assumed from the package/module names

# Setup WebScraper with caching
web = WebScraper({
    'cache_directory': "cache",
    'cache_time': 5*60
})

# First call to github.com -> hits the internet
web.get("https://github.com/")

# Second call (within 5 minutes of the first) -> hits the cache
web.get("https://github.com/")

Which results in the following output:

2016-01-07 19:22:00 DEBUG   [WebScraper._getCached] From inet https://github.com
2016-01-07 19:22:00 INFO    [requests.packages.urllib3.connectionpool] Starting new HTTPS connection (1): github.com
2016-01-07 19:22:01 DEBUG   [requests.packages.urllib3.connectionpool] "GET / HTTP/1.1" 200 None
2016-01-07 19:22:01 DEBUG   [WebScraper._getCached] From cache https://github.com
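
The first request goes out to the network ("From inet"), the second is answered from disk ("From cache"). As a rough illustration of that time-based caching idea (a sketch only, not floscraper's actual implementation):

import os
import time

import requests

def get_cached(url, cache_file, cache_time=7 * 60):
    # Serve from the cache file while it is younger than cache_time seconds
    if os.path.exists(cache_file):
        age = time.time() - os.path.getmtime(cache_file)
        if age < cache_time:
            with open(cache_file) as f:      # -> "From cache"
                return f.read()
    # Otherwise fetch from the internet and refresh the cache
    response = requests.get(url)             # -> "From inet"
    with open(cache_file, "w") as f:
        f.write(response.text)
    return response.text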

History

0.1.15a0 (2016-03-08)

  • First release on PyPI.
Download Files


File Name                                  Size     Version  File Type  Upload Date
floscraper-0.1.15a0-py2.7.egg              19.8 kB  2.7      Egg        Mar 8, 2016
floscraper-0.1.15a0-py2.py3-none-any.whl   11.8 kB  py2.py3  Wheel      Mar 8, 2016
floscraper-0.1.15a0.win32.zip              21.2 kB           Source     Mar 8, 2016
floscraper-0.1.15a0.zip                    14.6 kB           Source     Mar 8, 2016
