- Arachnid is a Java-based web spider framework. It includes a simple HTML parser object that parses an input stream containing HTML content. Simple web spiders can be created by subclassing Arachnid and adding a few lines of code that are called after each page of a website is parsed. Two example spider applications are included to illustrate how to use the framework.
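A hypothetical sketch of that subclassing pattern might look like the following; the callback and method names here (handleDocument, traverse) are illustrative assumptions, not Arachnid's documented API:

```java
// Hypothetical sketch only: Arachnid's real callback names may differ;
// handleDocument and traverse are illustrative guesses.
import java.net.MalformedURLException;
import java.net.URL;

public class PrintingSpider extends Arachnid {
    public PrintingSpider(String base) throws MalformedURLException {
        super(base); // assumed constructor taking the start URL
    }

    // Assumed hook invoked after each page of the site is parsed
    protected void handleDocument(URL url, String title) {
        System.out.println(url + " : " + title);
    }

    public static void main(String[] args) throws Exception {
        new PrintingSpider("http://example.com/").traverse();
    }
}
```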
- While many of the bots around are focused on page indexing, Arale is primarily designed for personal use. It fits the needs of advanced web surfers and web developers. Some real-life cases are:
downloading only images, videos, MP3 or ZIP files from a site (see the sketch after this list);
downloading manuals, articles, or ebooks that have been fragmented into many files to discourage download;
coping with user-unfriendly sites, where popups, banners, and tricky scripts get in your way before you can download a resource.
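The first case boils down to filtering URLs by file extension before fetching. A generic sketch of that idea (this is not Arale's actual code or API):

```java
import java.util.Set;

// Generic illustration of download filtering by file extension.
// Not taken from Arale's source; the class and method are hypothetical.
public class ExtensionFilter {
    private static final Set<String> WANTED = Set.of("jpg", "png", "mp3", "zip");

    public static boolean shouldDownload(String url) {
        int dot = url.lastIndexOf('.');
        if (dot < 0 || dot == url.length() - 1) return false;
        String ext = url.substring(dot + 1).toLowerCase();
        return WANTED.contains(ext);
    }
}
```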
- Grunk (for GRammar UNderstanding Kernel) is a library for parsing and extracting structured metadata from semi-structured text formats. It is based on a very flexible parsing engine capable of detecting a wide variety of patterns in text formats and extracting information from them. Formats are described in a simple and powerful XML configuration from which Grunk builds a parser at runtime, so adapting Grunk to a new format does not require a coding or compilation step (a hypothetical usage sketch follows the feature list below).
- Pure Java implementation
- Powerful two-step parser with pattern-matching based on Perl5 regular expressions
- Inline transformations making it possible to parse otherwise tricky formats
- XML-based configuration
- Support for XML output
- Flexible API
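To give a feel for the configuration-driven workflow, here is a purely hypothetical usage sketch; the class and method names (GrunkParser, loadConfig, parse) are illustrative assumptions, not Grunk's real API:

```java
import java.io.File;
import java.io.FileReader;

// Hypothetical sketch of a configuration-driven parse; none of these
// class or method names are taken from Grunk's documented API.
public class LogExtractor {
    public static void main(String[] args) throws Exception {
        // The XML file describes the text format; the parser is built from it
        // at runtime, so no coding or compilation step is needed.
        GrunkParser parser = GrunkParser.loadConfig(new File("apache-log.xml"));
        parser.parse(new FileReader("access.log"),
                record -> System.out.println(record.get("timestamp")));
    }
}
```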
- Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project.
Heritrix (sometimes spelled heretrix, or misspelled or mis-said as heratrix/heritix/heretix/heratix) is an archaic word for heiress (a woman who inherits). Since our crawler seeks to collect and preserve the digital artifacts of our culture for the benefit of future researchers and generations, this name seemed apt.
It is designed to respect robots.txt exclusion directives and META robots tags.
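As a rough illustration of what honoring a META robots tag involves, here is a generic check (this is illustrative logic, not Heritrix's implementation, and it assumes the simplified case where the name attribute precedes content):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Generic check for <meta name="robots" content="..."> directives.
// Simplified: assumes the name attribute appears before content.
public class RobotsMeta {
    private static final Pattern META = Pattern.compile(
        "<meta\\s+name=[\"']robots[\"']\\s+content=[\"']([^\"']*)[\"']",
        Pattern.CASE_INSENSITIVE);

    public static boolean allowsIndexing(String html) {
        Matcher m = META.matcher(html);
        if (m.find()) {
            String content = m.group(1).toLowerCase();
            return !content.contains("noindex") && !content.contains("none");
        }
        return true; // no directive means indexing is allowed
    }
}
```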
- HyperSpider (a Java application) collects the link structure of a website by following its hyperlinks. It supports data import/export from/to databases and CSV files, and export to Graphviz DOT, Resource Description Framework (RDF/DC), XML Topic Maps (XTM), Prolog, and HTML, with visualization as a hierarchy and as a map.
The variety of export formats makes this project stand out, especially the RDF and XTM support, which allows the collected data to be imported into forthcoming visualization/analysis tools.
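The Graphviz DOT export, for instance, amounts to emitting one edge per hyperlink. A generic sketch of that idea (not HyperSpider's actual code):

```java
import java.util.List;
import java.util.Map;

// Generic sketch: serialize a link structure as a Graphviz DOT digraph.
// Not taken from HyperSpider's source.
public class DotExport {
    public static String toDot(Map<String, List<String>> links) {
        StringBuilder sb = new StringBuilder("digraph site {\n");
        links.forEach((from, targets) -> {
            for (String to : targets) {
                sb.append("  \"").append(from).append("\" -> \"")
                  .append(to).append("\";\n");
            }
        });
        return sb.append("}\n").toString();
    }
}
```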
- JSpider is a Java implementation of a flexible and extensible web spider engine. Optional modules allow functionality to be added (searching for dead links, testing the performance and scalability of a site, creating a sitemap, etc.).
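Dead-link searching, for example, reduces to issuing a request per link and flagging error responses. A generic sketch of that check (not tied to this engine's module API):

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Generic dead-link check: HEAD each URL and flag 4xx/5xx responses.
// Illustrative only; not the spider engine's actual module code.
public class DeadLinkCheck {
    public static boolean isDead(String url) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD");
            conn.setConnectTimeout(5000);
            return conn.getResponseCode() >= 400;
        } catch (Exception e) {
            return true; // unreachable counts as dead
        }
    }
}
```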
- LARM is a 100% Java search solution for end users of the Jakarta Lucene search-engine framework. It contains methods for indexing files and database tables, and a crawler for indexing web sites.
Well, it will be. At the moment we only have some specifications. It's up to you to turn this into a working program.
Its predecessor was an experimental crawler called larm-webcrawler, available from the Jakarta project. Some people joined forces to take LARM to a higher level and wrote down some ideas, which resulted in a new project currently hosted on SourceForge.
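Since LARM targets Lucene, indexing a fetched page would look roughly like the following minimal sketch (written against a recent Lucene API, not LARM's own code):

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

// Minimal Lucene indexing sketch: store one crawled page in an index.
public class IndexPage {
    public static void main(String[] args) throws Exception {
        try (IndexWriter writer = new IndexWriter(
                FSDirectory.open(Paths.get("index")),
                new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StringField("url", "http://example.com/", Field.Store.YES));
            doc.add(new TextField("contents", "page text here", Field.Store.NO));
            writer.addDocument(doc);
        }
    }
}
```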
- Metis is a tool to collect information from the content of web sites. It was written for the Ideahamster Group for finding the competitive intelligence weight of a web server, and it assists in satisfying the CI Scouting portion of the Open Source Security Testing Methodology Manual (OSSTMM). The tool is distributed under the GNU General Public License.
The tool is written in Java and is composed of two packages:
The web spider engine: the faust.sacha.web Java package.
This package handles the web spidering process and collects and stores the information in memory.
The data analysis part: the org.ideahamster.metis Java package.
This package reads the data collected by the spider and generates a report.
- Nutch is open-source web-search software. It builds on Lucene Java, adding web-specific functionality such as a crawler, a link-graph database, and parsers for HTML and other document formats.
- Spider is a complete standalone Java application designed to easily integrate varied data sources.
XML-driven framework for data retrieval from network-accessible sources
Provides hooks for custom post-processing and configuration
Implemented as an Avalon/Keel framework datafeed service
Included Core Connectors:
Files and Zip Archives via HTTP/FTP/HTTPS/FileSystem
Supports access via links described as literals or regular expressions
Supports sessions/cookies/form parameters
Included Optional Connectors:
Axis (SOAP webservices)
- Spindle is a web indexing/search tool built on top of the Lucene toolkit. It includes an HTTP spider that is used to build the index, and a search class that is used to search the index. In addition, support is provided for the Bitmechanic listlib JSP TagLib, so that a search can be added to a JSP-based site without writing any Java classes.
This library is released free of charge with source code included under the terms of the GPL. See the LICENSE file for details.
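The search side of such a tool is a thin wrapper around Lucene's query API. A minimal sketch against a recent Lucene version (not Spindle's actual search class):

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.FSDirectory;

// Minimal Lucene search sketch over an index built by a spider.
public class SearchIndex {
    public static void main(String[] args) throws Exception {
        try (DirectoryReader reader =
                 DirectoryReader.open(FSDirectory.open(Paths.get("index")))) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("contents", new StandardAnalyzer())
                              .parse("web crawler");
            TopDocs hits = searcher.search(query, 10);
            for (ScoreDoc sd : hits.scoreDocs) {
                System.out.println(searcher.doc(sd.doc).get("url"));
            }
        }
    }
}
```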
- WebLech is a fully featured website download/mirror tool in Java which supports many features required to download websites and emulate standard web-browser behaviour as closely as possible. WebLech is multithreaded and will feature a GUI console.
- WebSPHINX (Website-Specific Processors for HTML INformation eXtraction) is a Java class library and interactive development environment for web crawlers. A web crawler (also called a robot or spider) is a program that browses and processes Web pages automatically.
WebSPHINX consists of two parts: the Crawler Workbench and the WebSPHINX class library.
The Crawler Workbench is a graphical user interface that lets you configure and control a customizable web crawler.
The WebSPHINX class library provides support for writing web crawlers in Java.
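A crawler written with the class library is a subclass of the library's Crawler class. The sketch below follows the documented visit/shouldVisit callback pattern, but treat the details (accessor names, setup calls) as approximate:

```java
import websphinx.Crawler;
import websphinx.Link;
import websphinx.Page;

// Sketch of a WebSPHINX crawler subclass; visit/shouldVisit follow the
// library's callback pattern, but details here are approximate.
public class TitlePrinter extends Crawler {
    @Override
    public void visit(Page page) {
        System.out.println(page.getURL() + " : " + page.getTitle());
    }

    @Override
    public boolean shouldVisit(Link link) {
        // Stay inside one host (illustrative crawl policy)
        return link.getHost().equals("example.com");
    }

    public static void main(String[] args) throws Exception {
        TitlePrinter crawler = new TitlePrinter();
        crawler.setRoot(new Link("http://example.com/"));
        crawler.run();
    }
}
```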