Package edu.stanford.nlp.ie

This package and its subpackages provide components for information extraction.


Interface Summary
  FieldExtractor    Interface to all information extraction components.
  RankedExtractor   Interface for information extraction components that support merging of two Instances, using field-level confidence ratings.

Class Summary
  AbstractFieldExtractor   Abstract superclass for implementations of the FieldExtractor interface.
  Confidence               Meant to serve as a parallel structure to the KAON Instance.
  ExtractDemo              A simple GUI front end that demonstrates the basic functionality of the information extraction packages.
  ExtractorUtilities       Utility class for deserializing FieldExtractors from files.
  SingleFieldExtractor     Superclass of FieldExtractors that return only a single (literal) field.

Package edu.stanford.nlp.ie Description

This package and its subpackages provide components for information extraction. Some examples of use appear later in this description. At present, three types of information extraction are supported (some of which have internal variants); a sketch of how extractors are driven in code follows the list:

  1. Regular expression based matching: These extractors are hand-written and match whatever the regular expression matches.
  2. Hidden Markov model based extractors: These can be either single-field extractors or two-level HMMs, in which the individual component models, and the way they are glued together, are trained separately. These models are trained automatically, but they require tagged training data.
  3. Description extractor: This does higher-level NLP analysis of sentences (using a POS tagger and chunker) to find sentences that describe an object, such as a biography of a person or a description of an animal. This module is fixed: there is nothing to write or train (unless one wants to change its internal behavior).
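
Because all three types of extractor are driven through the package's common FieldExtractor interface, client code does not need to know which kind it is talking to. The following minimal sketch illustrates the idea; the method name extractFields() and its Map return type are assumptions made for illustration, not confirmed signatures, so consult the FieldExtractor Javadoc for the real API.

  import edu.stanford.nlp.ie.FieldExtractor;

  import java.util.Iterator;
  import java.util.Map;

  public class FieldExtractorSketch {

    // Applies any FieldExtractor (regex-based, HMM-based, description-based)
    // to a piece of text and prints each extracted field.
    // NOTE: extractFields() is an assumed method name, not confirmed API.
    public static void printFields(FieldExtractor extractor, String text) {
      Map fields = extractor.extractFields(text);
      for (Iterator it = fields.keySet().iterator(); it.hasNext(); ) {
        Object field = it.next();
        System.out.println(field + ": " + fields.get(field));
      }
    }
  }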

There are some demonstrations here which you can run (and several other classes have main() methods that exhibit their functionality):

  1. ExtractDemo is a simple GUI front end to the information extraction components. It isn't intended for serious use, but it illustrates loading and using various extractors, and it includes an interface to web search. Searching can be done via Google (using the Google API) or via AltaVista or other engines (using screen scraping). The web search functionality is provided by the edu.stanford.nlp.web package. You should try this out first (see the examples below).
  2. edu.stanford.nlp.ie.test.ExtractorTest is a simple command-line test of the information extraction components; it extracts from a file only, with no web search (an illustrative invocation is sketched below).
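
The exact command-line arguments ExtractorTest takes are not documented here; purely as an illustrative guess (a serialized extractor file followed by a text file to extract from), an invocation might look like:

java edu.stanford.nlp.ie.test.ExtractorTest company.extractor sample.txt

Run the class with no arguments, or read its main() method, to see the real usage.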

Usage examples

0. Setup: For all of these examples except 3., you need to be connected to the Internet, so that the application's web search module can reach the search engines. The web search functionality is provided by the supplied edu.stanford.nlp.web package. How web search works is controlled by a websearch.init file in your current directory (if none is present, you will get search results from AltaVista). If you are registered to use the Google API, you should probably edit this file so that web queries are done to Google using their SOAP interface. Even if not, you can specify additional or different search engines to access in websearch.init. A copy of this file is supplied in the distribution. The DescExtractor in 4. also requires another init file so that it can use the included part-of-speech tagger.
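
The supplied copy of websearch.init is the authoritative reference for its syntax; purely as a hypothetical illustration of the kind of settings such a file holds (engine selection and a Google API key), it might look like:

  # hypothetical illustration only; see the supplied websearch.init for the real syntax
  engine = google
  google.key = YOUR_GOOGLE_API_KEY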

1. Corporate Contact Information. This illustrates simple information extraction from a web page. Run the included ExtractDemo.bat, or by hand:

java edu.stanford.nlp.ie.ExtractDemo

2. Corporate Contact Information, merged. This illustrates the addition of information merging across web pages. Run the included MergeExtractDemo.bat, or similarly by hand:

java edu.stanford.nlp.ie.ExtractDemo -m
The ExtractDemo screen is similar, but adds a button to Select a Merger.

3. Company names via direct use of an HMM information extractor. One can also train, load, and use HMM information extractors directly, without using any of the RDF-based KAON framework (http://kaon.semanticweb.org/) that ExtractDemo uses. A hedged sketch of such direct use follows.
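
In the sketch below, the HMM extractor is assumed to have been previously serialized to disk. The method name loadExtractor() on ExtractorUtilities (the package's utility class for deserializing FieldExtractors from files), the file name, and the field name "companyname" are all illustrative guesses, not confirmed API.

  import edu.stanford.nlp.ie.ExtractorUtilities;
  import edu.stanford.nlp.ie.FieldExtractor;

  import java.io.File;
  import java.util.Map;

  public class HmmDirectUseSketch {
    public static void main(String[] args) throws Exception {
      // Deserialize a previously trained HMM extractor.
      // NOTE: loadExtractor() is a guessed method name.
      FieldExtractor extractor =
          ExtractorUtilities.loadExtractor(new File("companyname.extractor"));
      // Apply it to raw text; extractFields() is assumed as in the earlier sketch.
      Map fields = extractor.extractFields(
          "Shares of Acme Widget Corp. rose sharply today.");
      System.out.println(fields.get("companyname"));  // hypothetical field name
    }
  }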

4. Extraction of descriptions (such as biographical information about a person or a description of an animal). This extracts such descriptions from a web page. This component uses a POS tagger and reads the path to it from the file descextractor.init in the current directory, so you should be in the root directory of the archive, which contains such a file. Double-click the included MergeExtractDemo.bat in that directory, or equivalently run by hand:

java edu.stanford.nlp.ie.ExtractDemo -m
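
The format of descextractor.init is likewise defined by the copy shipped in the archive; hypothetically, the setting it carries (a path to the part-of-speech tagger data) might look like:

  # hypothetical illustration only; see the supplied descextractor.init for the real syntax
  taggerPath = lib/pos.tagger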


