Metadata-Version: 2.1
Name: spacy-token-parser
Version: 0.1.15
Summary: Use spaCy to Parse Input Tokens
Home-page: https://github.com/craigtrim/spacy-token-parser
License: None
Keywords: nlp,nlu,ai,parser,spacy
Author: Craig Trim
Author-email: craigtrim@gmail.com
Maintainer: Craig Trim
Maintainer-email: craigtrim@gmail.com
Requires-Python: >=3.8.5,<4.0.0
Classifier: Development Status :: 4 - Beta
Classifier: License :: Other/Proprietary License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.9
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Dist: baseblock
Requires-Dist: nltk (==3.8.1)
Requires-Dist: spacy (==3.5.0)
Requires-Dist: wordnet-lookup
Project-URL: Bug Tracker, https://github.com/craigtrim/spacy-token-parser/issues
Project-URL: Repository, https://github.com/craigtrim/spacy-token-parser
Description-Content-Type: text/markdown

# spaCy Token Parser (spacy-token-parser)
Use spaCy to Parse Input Tokens

## Usage

Call the service like this:
```python
from spacy_token_parser import parse_tokens

# the sample input that produces the output shown below
input_text = "american silent feature films"

tokens, doc = parse_tokens(input_text.split())
```

The output is a tuple.

The first element is a list of token dictionaries (shown below).

The second element is a wrapped instance of `spacy.tokens.doc.Doc`.

### List Output
```json
[
   {
      "dep":"compound",
      "ent":"NORP",
      "head":"5665575797947403677",
      "id":"6042939320535660714",
      "is_alpha":true,
      "is_punct":false,
      "is_stop":false,
      "is_wordnet":true,
      "lemma":"american",
      "noun_number":"singular",
      "other":{
         "head_i":3,
         "head_idx":24,
         "head_orth":5665575797947403677,
         "head_text":"films",
         "i":0,
         "idx":0,
         "orth":6042939320535660714
      },
      "pos":"PROPN",
      "sentiment":0.0,
      "shape":"xxxx",
      "tag":"NNP",
      "tense":"",
      "text":"american",
      "verb_form":"",
      "x":0,
      "y":8
   },
   {
      "dep":"compound",
      "ent":"",
      "head":"5665575797947403677",
      "id":"16602643206033239142",
      "is_alpha":true,
      "is_punct":false,
      "is_stop":false,
      "is_wordnet":true,
      "lemma":"silent",
      "noun_number":"singular",
      "other":{
         "head_i":3,
         "head_idx":24,
         "head_orth":5665575797947403677,
         "head_text":"films",
         "i":1,
         "idx":9,
         "orth":16602643206033239142
      },
      "pos":"PROPN",
      "sentiment":0.0,
      "shape":"xxxx",
      "tag":"NNP",
      "tense":"",
      "text":"silent",
      "verb_form":"",
      "x":8,
      "y":14
   },
   {
      "dep":"compound",
      "ent":"",
      "head":"5665575797947403677",
      "id":"16417888112635110788",
      "is_alpha":true,
      "is_punct":false,
      "is_stop":false,
      "is_wordnet":true,
      "lemma":"feature",
      "noun_number":"singular",
      "other":{
         "head_i":3,
         "head_idx":24,
         "head_orth":5665575797947403677,
         "head_text":"films",
         "i":2,
         "idx":16,
         "orth":16417888112635110788
      },
      "pos":"NOUN",
      "sentiment":0.0,
      "shape":"xxxx",
      "tag":"NN",
      "tense":"",
      "text":"feature",
      "verb_form":"",
      "x":14,
      "y":21
   },
   {
      "dep":"ROOT",
      "ent":"",
      "head":"5665575797947403677",
      "id":"5665575797947403677",
      "is_alpha":true,
      "is_punct":false,
      "is_stop":false,
      "is_wordnet":true,
      "lemma":"film",
      "noun_number":"plural",
      "other":{
         "head_i":3,
         "head_idx":24,
         "head_orth":5665575797947403677,
         "head_text":"films",
         "i":3,
         "idx":24,
         "orth":5665575797947403677
      },
      "pos":"NOUN",
      "sentiment":0.0,
      "shape":"xxxx",
      "tag":"NNS",
      "tense":"",
      "text":"films",
      "verb_form":"",
      "x":21,
      "y":26
   }
]
```
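Each entry in the list output is a plain Python dict, so it can be consumed with ordinary list and dict operations. A minimal sketch (the token values are abridged from the example output above; the field names `id`, `head`, `dep`, `lemma`, and `text` are taken from that output):

```python
# Abridged token dicts from the "american silent feature films" example above.
tokens = [
    {"text": "american", "lemma": "american", "pos": "PROPN", "dep": "compound",
     "head": "5665575797947403677", "id": "6042939320535660714"},
    {"text": "silent", "lemma": "silent", "pos": "PROPN", "dep": "compound",
     "head": "5665575797947403677", "id": "16602643206033239142"},
    {"text": "feature", "lemma": "feature", "pos": "NOUN", "dep": "compound",
     "head": "5665575797947403677", "id": "16417888112635110788"},
    {"text": "films", "lemma": "film", "pos": "NOUN", "dep": "ROOT",
     "head": "5665575797947403677", "id": "5665575797947403677"},
]

# The syntactic root is the token whose id equals its own head.
root = next(t for t in tokens if t["id"] == t["head"])

# Collect the compound modifiers attached to the root.
compounds = [t["text"] for t in tokens if t["dep"] == "compound"]

print(root["lemma"])  # film
print(compounds)      # ['american', 'silent', 'feature']
```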

