Tokenization
`add_tokens(example, *, use_spacy_token_ends=False, preprocessed_outputs)`
Add tokens to each Example.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| example | Example | Input Example | required |
| use_spacy_token_ends | bool | Whether to use spaCy token end positions (keyword-only) | False |
| preprocessed_outputs | Dict[str, Any] | Outputs of preprocessors | required |
Returns:
| Name | Type | Description | 
|---|---|---|
| Example | Optional[Example] | The Example with tokens set, or None |
Source code in recon/tokenization.py, lines 7–49.
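Below is a minimal usage sketch based on the signature above. It is illustrative, not authoritative: the preprocessor key `"recon.spacy.v1"`, the blank spaCy pipeline, and the `Example` constructor arguments are assumptions for this example; check which key the registered spaCy preprocessor actually uses in your version of recon.

```python
# Minimal sketch of calling add_tokens directly on a single Example.
import spacy

from recon.tokenization import add_tokens
from recon.types import Example

nlp = spacy.blank("en")  # tokenizer-only pipeline, assumed sufficient here
example = Example(text="Recon helps you fix annotation errors.", spans=[])

# preprocessed_outputs maps preprocessor names to their outputs. The key
# "recon.spacy.v1" is hypothetical; substitute the key your registered
# spaCy preprocessor uses.
preprocessed_outputs = {"recon.spacy.v1": nlp(example.text)}

result = add_tokens(example, preprocessed_outputs=preprocessed_outputs)
if result is not None:  # returns Optional[Example]
    print(result.tokens)
```

In a full pipeline, preprocessors normally populate `preprocessed_outputs` for you; building the dict by hand, as here, is only needed when invoking the function standalone.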