See http://nlp.stanford.edu/software/corenlp.shtml.
## NLPProcessor

### Java client

The NLPProcessor runs the CoreNLP Java pipeline from a local directory of CoreNLP jars.

~~~nitish
var proc = new NLPProcessor("path/to/StanfordCoreNLP/jars")
~~~
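Processing a string returns an NLPDocument, which gives access to its sentences and their tokens (word, lemma, POS tag). A minimal sketch, assuming the `proc` built above:

~~~nitish
var doc = proc.process("String to analyze")
for sentence in doc.sentences do
	for token in sentence.tokens do
		print "{token.lemma}: {token.pos}"
	end
end
~~~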
### NLPServer

The NLPServer provides a wrapper around the StanfordCoreNLPServer.

See `https://stanfordnlp.github.io/CoreNLP/corenlp-server.html`.

~~~nitish
var cp = "/path/to/StanfordCoreNLP/jars"
var srv = new NLPServer(cp, 9000)
srv.start
~~~

### NLPClient

The NLPClient is used as an NLPProcessor with an NLPServer backend.

~~~nitish
var cli = new NLPClient("http://localhost:9000")
var doc = cli.process("String to analyze")
~~~
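Processed documents can also be compared in the vector space model; a minimal sketch, assuming this wrapper's `NLPDocument::vector` and `NLPVector::cosine_similarity` services:

~~~nitish
var d1 = cli.process("Sorting array data")
var d2 = cli.process("Sorting a list of integers")
print d1.vector.cosine_similarity(d2.vector)
~~~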

## NLPIndex

NLPIndex extends the StringIndex, using a NLPProcessor to tokenize, lemmatize and
tag the terms of a document.

~~~nitish
var index = new NLPIndex(proc)

var d1 = index.index_string("Doc 1", "/uri/1", "this is a sample")
var d2 = index.index_string("Doc 2", "/uri/2", "this and this is another example")
assert index.documents.length == 2

var matches = index.match_string("this sample")
assert matches.first.document == d1
~~~
## TODO