Tokenization: splits template source into `Token`s for `crate::parser::parse`.
Recognized regions:
- `{# … #}` - comments (omitted from the output).
- `{% … %}` / `{%- … -%}` - statement tags as `Token::Tag` (inner body is whitespace-trimmed).
- `{{ … }}` / `{{- … -}}` - expressions as `Token::Expression` (inner spaces preserved unless trim markers strip them).
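The region recognition above can be sketched with a minimal self-contained scanner. `Tok` and `scan` here are simplified stand-ins for this crate's `Token` and lexer (no trim markers, no string-literal handling), not its actual implementation:

```rust
// Sketch only: recognize the three delimited regions in a template.
#[derive(Debug, PartialEq)]
enum Tok {
    Text(String),
    Tag(String),        // {% … %}: body is whitespace-trimmed
    Expression(String), // {{ … }}: inner spaces preserved
}

fn scan(src: &str) -> Vec<Tok> {
    let openers: [(&str, &str); 3] = [("{#", "#}"), ("{%", "%}"), ("{{", "}}")];
    let mut out = Vec::new();
    let mut pos = 0;
    while pos < src.len() {
        // Find the nearest opener at or after `pos`.
        let next = openers
            .iter()
            .filter_map(|&(o, c)| src[pos..].find(o).map(|i| (pos + i, o, c)))
            .min_by_key(|&(i, _, _)| i);
        match next {
            None => {
                out.push(Tok::Text(src[pos..].to_string()));
                break;
            }
            Some((start, open, close)) => {
                if start > pos {
                    out.push(Tok::Text(src[pos..start].to_string()));
                }
                let body_start = start + open.len();
                let end = body_start + src[body_start..].find(close).expect("unterminated region");
                let body = &src[body_start..end];
                match open {
                    "{#" => {} // comment: dropped from the output
                    "{%" => out.push(Tok::Tag(body.trim().to_string())),
                    _ => out.push(Tok::Expression(body.to_string())),
                }
                pos = end + close.len();
            }
        }
    }
    out
}
```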
Whitespace control (Nunjucks-style): `{%-` / `{{-` strip trailing whitespace from the preceding `Token::Text`; `-%}` / `-}}` strip leading whitespace from the following `Text`. Tag and variable bodies still trim inner whitespace when those markers are present (see variable handling below).
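A minimal sketch of the trimming rule, assuming trimming is applied as post-processing on each `Token::Text` chunk (the function name is hypothetical, not part of this crate's API):

```rust
/// Sketch of Nunjucks-style trimming around one text chunk:
/// `strip_start` is true when the previous tag closed with `-%}` / `-}}`,
/// `strip_end` when the next tag opens with `{%-` / `{{-`.
fn trim_text(text: &str, strip_start: bool, strip_end: bool) -> String {
    let mut s = text;
    if strip_start {
        s = s.trim_start();
    }
    if strip_end {
        s = s.trim_end();
    }
    s.to_string()
}
```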
Closing delimiters `%}` / `}}` are detected outside of double-quoted string literals (with `\` escapes), so delimiter-like sequences inside strings do not end the region early.
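The string-literal rule can be sketched as a byte scan that tracks double-quote state and `\` escapes (a hypothetical helper, not this crate's API):

```rust
/// Find the byte offset of `close` (e.g. "%}" or "}}") in `s`, ignoring any
/// occurrence inside a double-quoted string literal with backslash escapes.
fn find_close(s: &str, close: &str) -> Option<usize> {
    let bytes = s.as_bytes();
    let cb = close.as_bytes();
    let mut in_str = false;
    let mut i = 0;
    while i < bytes.len() {
        if in_str {
            match bytes[i] {
                b'\\' => i += 1, // skip the escaped byte
                b'"' => in_str = false,
                _ => {}
            }
        } else if bytes[i] == b'"' {
            in_str = true;
        } else if bytes[i..].starts_with(cb) {
            return Some(i);
        }
        i += 1;
    }
    None
}
```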
`{% raw %}…{% endraw %}` and `{% verbatim %}…{% endverbatim %}` treat the enclosed content as literal `Token::Text`.
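Raw-region handling can be sketched as a plain search for the matching end tag, emitting everything before it verbatim (again a hypothetical helper; the accepted end-tag spellings are an assumption of the sketch):

```rust
/// Sketch: given the input positioned just after a `{% raw %}` opener, return
/// the literal body and the offset just past the matching end tag. Only the
/// two spellings "{% endraw %}" and "{%endraw%}" are checked here; a real
/// lexer would match the tag more flexibly.
fn read_raw_body<'a>(rest: &'a str, end_tag: &str) -> Option<(&'a str, usize)> {
    let spellings = [format!("{{% {end_tag} %}}"), format!("{{%{end_tag}%}}")];
    spellings
        .iter()
        .filter_map(|end| rest.find(end.as_str()).map(|i| (i, end.len())))
        .min_by_key(|&(i, _)| i) // take the earliest match
        .map(|(i, len)| (&rest[..i], i + len))
}
```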
Structs§
- `Lexer` - Incremental lexer over a template string.
- `LexerOptions` - Options controlling whitespace behavior and delimiters during lexing.
- `Tags` - Nunjucks-style delimiter customization (the `tags` key in `configure`).
Enums§
- `Token` - One lexical unit from a template.
Functions§
- `tokenize` - Tokenizes the full `input` into a `Vec` of `Token`s.
- `tokenize_with_options` - Like `tokenize` but with explicit `LexerOptions`.