webmagic

Readme in Chinese | User Manual (Chinese)


A scalable crawler framework. It covers the whole lifecycle of a crawler: downloading, URL management, content extraction, and persistence. It simplifies the development of a specific crawler.

Features:

  • Simple core with high flexibility.
  • Simple API for HTML extraction.
  • Annotation with POJO to customize a crawler; no configuration files needed.
  • Multi-threading and distributed crawling support (see the sketch after this list).
  • Easy to integrate.
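
To illustrate the multi-threading and distribution features, here is a hedged sketch that runs a spider with five worker threads and swaps the default in-memory scheduler for the Redis-backed RedisScheduler from webmagic-extension, so several crawler instances can share one URL queue. It reuses the OschinaBlogPageProcesser defined in the Get Started section below; the Redis host is a placeholder.

    import us.codecraft.webmagic.Spider;
    import us.codecraft.webmagic.scheduler.RedisScheduler;

    public class DistributedCrawlerDemo {
        public static void main(String[] args) {
            Spider.create(new OschinaBlogPageProcesser())          // PageProcessor from the Get Started section
                    .addUrl("http://my.oschina.net/flashsword/blog")
                    .setScheduler(new RedisScheduler("127.0.0.1")) // shared URL queue; assumes a local Redis
                    .thread(5)                                     // five download/processing threads
                    .run();
        }
    }

With a single crawler instance, dropping the setScheduler call falls back to the in-memory queue and still gives you multi-threaded crawling.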

Install:

Add dependencies to your pom.xml:

    <dependency>
        <groupId>us.codecraft</groupId>
        <artifactId>webmagic-core</artifactId>
        <version>0.4.3</version>
    </dependency>
    <dependency>
        <groupId>us.codecraft</groupId>
        <artifactId>webmagic-extension</artifactId>
        <version>0.4.3</version>
    </dependency>

Get Started:

First crawler:

Write a class that implements PageProcessor:

    import java.util.List;

    import us.codecraft.webmagic.Page;
    import us.codecraft.webmagic.Site;
    import us.codecraft.webmagic.Spider;
    import us.codecraft.webmagic.pipeline.ConsolePipeline;
    import us.codecraft.webmagic.processor.PageProcessor;

    public class OschinaBlogPageProcesser implements PageProcessor {

        private Site site = Site.me().setDomain("my.oschina.net");

        @Override
        public void process(Page page) {
            // Discover further blog entries and queue them for crawling.
            List<String> links = page.getHtml().links().regex("http://my\\.oschina\\.net/flashsword/blog/\\d+").all();
            page.addTargetRequests(links);
            // Extract fields with XPath and CSS selectors.
            page.putField("title", page.getHtml().xpath("//div[@class='BlogEntity']/div[@class='BlogTitle']/h1").toString());
            page.putField("content", page.getHtml().$("div.content").toString());
            page.putField("tags", page.getHtml().xpath("//div[@class='BlogTags']/a/text()").all());
        }

        @Override
        public Site getSite() {
            return site;
        }

        public static void main(String[] args) {
            Spider.create(new OschinaBlogPageProcesser())
                    .addUrl("http://my.oschina.net/flashsword/blog")
                    .addPipeline(new ConsolePipeline())
                    .run();
        }
    }
  • page.addTargetRequests(links)

    Adds the discovered URLs to the crawl queue.
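
The fields stored with page.putField are handed to pipelines, which is where the persistence step of the lifecycle happens; ConsolePipeline simply prints them. Below is a minimal sketch of a custom pipeline, assuming the Pipeline interface from webmagic-core (a single process(ResultItems, Task) callback); the class name StdOutPipeline is hypothetical, and the println would be replaced by a database or file writer for real persistence.

    import java.util.Map;

    import us.codecraft.webmagic.ResultItems;
    import us.codecraft.webmagic.Task;
    import us.codecraft.webmagic.pipeline.Pipeline;

    // Hypothetical pipeline: receives the fields that process() put into the page.
    public class StdOutPipeline implements Pipeline {
        @Override
        public void process(ResultItems resultItems, Task task) {
            for (Map.Entry<String, Object> entry : resultItems.getAll().entrySet()) {
                System.out.println(entry.getKey() + ":\t" + entry.getValue());
            }
        }
    }

Register it with .addPipeline(new StdOutPipeline()) in place of (or alongside) ConsolePipeline.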

You can also use the annotation style:

    import java.util.List;

    import us.codecraft.webmagic.Site;
    import us.codecraft.webmagic.model.OOSpider;
    import us.codecraft.webmagic.model.annotation.ExtractBy;
    import us.codecraft.webmagic.model.annotation.TargetUrl;
    import us.codecraft.webmagic.pipeline.ConsolePageModelPipeline;

    @TargetUrl("http://my.oschina.net/flashsword/blog/\\d+")
    public class OschinaBlog {

        @ExtractBy("//title")
        private String title;

        @ExtractBy(value = "div.BlogContent", type = ExtractBy.Type.Css)
        private String content;

        @ExtractBy(value = "//div[@class='BlogTags']/a/text()", multi = true)
        private List<String> tags;

        public static void main(String[] args) {
            OOSpider.create(Site.me(), new ConsolePageModelPipeline(), OschinaBlog.class)
                    .addUrl("http://my.oschina.net/flashsword/blog")
                    .run();
        }
    }
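
To do something other than printing the mapped objects, supply your own page-model pipeline in place of ConsolePageModelPipeline. Below is a minimal sketch, assuming the PageModelPipeline interface from webmagic-extension (one process(T, Task) callback per extracted object); the class name is hypothetical, and OschinaBlog would need getters or public fields before its values could be read here.

    import us.codecraft.webmagic.Task;
    import us.codecraft.webmagic.pipeline.PageModelPipeline;

    // Hypothetical pipeline: receives one populated OschinaBlog per matched page.
    public class OschinaBlogPipeline implements PageModelPipeline<OschinaBlog> {
        @Override
        public void process(OschinaBlog blog, Task task) {
            // Persist the object here (database, file, message queue, ...).
            System.out.println("crawled one blog entry for task " + task.getUUID());
        }
    }

Wire it in with OOSpider.create(Site.me(), new OschinaBlogPipeline(), OschinaBlog.class).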

Docs and samples:

The architecture of webmagic (inspired by Scrapy):

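The webmagic architecture, like Scrapy's, centers on a Spider that coordinates four swappable components: a Downloader fetches pages, a Scheduler manages the URL queue, the PageProcessor extracts content, and Pipelines persist results. As a hedged sketch of that wiring, assuming the component setters on Spider (setDownloader, setScheduler, addPipeline), the example below names each component explicitly; HttpClientDownloader and QueueScheduler are the defaults, so spelling them out only makes the structure visible.

    import us.codecraft.webmagic.Spider;
    import us.codecraft.webmagic.downloader.HttpClientDownloader;
    import us.codecraft.webmagic.pipeline.ConsolePipeline;
    import us.codecraft.webmagic.scheduler.QueueScheduler;

    public class ComponentsDemo {
        public static void main(String[] args) {
            Spider.create(new OschinaBlogPageProcesser())      // PageProcessor: extraction logic
                    .setDownloader(new HttpClientDownloader()) // Downloader: fetches pages over HTTP
                    .setScheduler(new QueueScheduler())        // Scheduler: in-memory URL queue
                    .addPipeline(new ConsolePipeline())        // Pipeline: persists (here, prints) results
                    .addUrl("http://my.oschina.net/flashsword/blog")
                    .run();
        }
    }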

Javadocs: http://code4craft.github.io/webmagic/docs/en/

There are some samples in the webmagic-samples module.

License:

Licensed under the Apache License 2.0.

Contributors:

Thanks to these people for contributing source code, reporting bugs, or suggesting new features.

Thanks:

To write webmagic, I referred to the projects below:

Mailing list:

https://groups.google.com/forum/#!forum/webmagic-java

