Key Components
- Token - Individual tokens with position information
- TokenReference - Tokens with leading/trailing trivia
- Position - Exact position tracking in source code
- Lexer - The tokenization engine
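The relationship between these components can be sketched as below. This is a minimal, self-contained illustration; the field names (`bytes`, `line`, `character`, `leading_trivia`, `trailing_trivia`) are assumptions for the sketch, not the crate's exact definitions.

```rust
// Sketch of the component relationships: a position, a token carrying
// start/end positions, and a token wrapped with its surrounding trivia.
// Field names are illustrative assumptions.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Position {
    bytes: usize,     // byte offset into the source
    line: usize,      // 1-indexed line number
    character: usize, // 1-indexed column number
}

#[derive(Debug, Clone, PartialEq)]
struct Token {
    start: Position,
    end: Position,
    text: String,
}

#[derive(Debug, Clone, PartialEq)]
struct TokenReference {
    leading_trivia: Vec<Token>,  // whitespace/comments before the token
    token: Token,
    trailing_trivia: Vec<Token>, // whitespace/comments after it
}

fn main() {
    let start = Position { bytes: 0, line: 1, character: 1 };
    let end = Position { bytes: 5, line: 1, character: 6 };
    let token = Token { start, end, text: "local".to_string() };
    let reference = TokenReference {
        leading_trivia: Vec::new(),
        token,
        trailing_trivia: Vec::new(),
    };
    println!("{:?}", reference);
}
```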
Basic Usage
The tokenizer can be used independently from the parser.
Lexer
The main entry point for tokenization.
Methods
new
Creates a new Lexer from the given source string and Lua version(s).
current
Returns the current token.
peek
Returns the next token without consuming the current one.
consume
Consumes the current token and returns it.
collect
Returns a vector of all tokens left in the source string.
process_next
Processes and returns the next token in the source string, ignoring trivia.
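The streaming interface described by these methods can be sketched as follows. The token type and scanning rules here are simplified stand-ins for illustration, not the library's real implementation; only the method shapes (`new`, `current`, `peek`, `consume`, `collect`, `process_next`) mirror the list above.

```rust
// A minimal lexer demonstrating the current/peek/consume pattern.
// Whitespace is skipped as trivia; tokens are identifiers, numbers,
// or single symbols.

#[derive(Debug, Clone, PartialEq)]
enum Token {
    Identifier(String),
    Number(String),
    Symbol(char),
}

struct Lexer {
    chars: Vec<char>,
    pos: usize,
    current: Option<Token>,
    next: Option<Token>,
}

impl Lexer {
    // Scans the first two tokens up front so current() and peek() are cheap.
    fn new(source: &str) -> Self {
        let mut lexer = Lexer {
            chars: source.chars().collect(),
            pos: 0,
            current: None,
            next: None,
        };
        lexer.current = lexer.process_next();
        lexer.next = lexer.process_next();
        lexer
    }

    fn current(&self) -> Option<&Token> {
        self.current.as_ref()
    }

    fn peek(&self) -> Option<&Token> {
        self.next.as_ref()
    }

    // Advances the stream: the peeked token becomes current.
    fn consume(&mut self) -> Option<Token> {
        let consumed = self.current.take();
        self.current = self.next.take();
        self.next = self.process_next();
        consumed
    }

    // Drains the remaining tokens into a vector.
    fn collect(mut self) -> Vec<Token> {
        let mut tokens = Vec::new();
        while let Some(token) = self.consume() {
            tokens.push(token);
        }
        tokens
    }

    // Scans one token from the source, skipping whitespace (trivia).
    fn process_next(&mut self) -> Option<Token> {
        while self.pos < self.chars.len() && self.chars[self.pos].is_whitespace() {
            self.pos += 1;
        }
        let c = *self.chars.get(self.pos)?;
        if c.is_alphabetic() {
            let start = self.pos;
            while self.pos < self.chars.len() && self.chars[self.pos].is_alphanumeric() {
                self.pos += 1;
            }
            Some(Token::Identifier(self.chars[start..self.pos].iter().collect()))
        } else if c.is_ascii_digit() {
            let start = self.pos;
            while self.pos < self.chars.len() && self.chars[self.pos].is_ascii_digit() {
                self.pos += 1;
            }
            Some(Token::Number(self.chars[start..self.pos].iter().collect()))
        } else {
            self.pos += 1;
            Some(Token::Symbol(c))
        }
    }
}

fn main() {
    let mut lexer = Lexer::new("local x = 10");
    assert_eq!(lexer.current(), Some(&Token::Identifier("local".into())));
    assert_eq!(lexer.peek(), Some(&Token::Identifier("x".into())));
    assert_eq!(lexer.consume(), Some(Token::Identifier("local".into())));
    println!("{:?}", lexer.collect());
}
```

Note the design choice this interface implies: the lexer always stays one token ahead, so `peek` never has to re-scan.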
LexerResult
The result of a lexer operation.
Methods
unwrap
Unwraps the result, panicking if it is not LexerResult::Ok.
unwrap_errors
Unwraps the errors, panicking if it is LexerResult::Ok.
errors
Returns the errors, if there were any.
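The semantics of these three methods can be sketched with a small three-state result type: a lex can succeed, recover with errors, or fail outright. The variant names (`Ok`, `Recovered`, `Fatal`) and the `String` error payload are assumptions for this sketch, not the library's exact definitions.

```rust
// Sketch of a lexer result: success, recovery with errors, or fatal failure.
// Variant names and error type are illustrative assumptions.

#[derive(Debug)]
enum LexerResult<T> {
    Ok(T),
    Recovered(T, Vec<String>), // a value plus the errors hit along the way
    Fatal(Vec<String>),        // no usable value at all
}

impl<T> LexerResult<T> {
    // Panics unless the result is Ok -- mirrors `unwrap` above.
    fn unwrap(self) -> T {
        match self {
            LexerResult::Ok(value) => value,
            _ => panic!("called unwrap on a result that was not Ok"),
        }
    }

    // Panics if the result is Ok -- mirrors `unwrap_errors` above.
    fn unwrap_errors(self) -> Vec<String> {
        match self {
            LexerResult::Recovered(_, errors) | LexerResult::Fatal(errors) => errors,
            LexerResult::Ok(_) => panic!("called unwrap_errors on an Ok result"),
        }
    }

    // Returns the errors, if there were any -- mirrors `errors` above.
    fn errors(self) -> Option<Vec<String>> {
        match self {
            LexerResult::Ok(_) => None,
            LexerResult::Recovered(_, errors) | LexerResult::Fatal(errors) => Some(errors),
        }
    }
}

fn main() {
    let ok: LexerResult<i32> = LexerResult::Ok(5);
    assert_eq!(ok.unwrap(), 5);

    let fatal: LexerResult<i32> = LexerResult::Fatal(vec!["unexpected symbol".to_string()]);
    assert_eq!(fatal.unwrap_errors().len(), 1);

    println!("ok");
}
```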
Error Handling
TokenizerErrorType
The possible errors that can happen while tokenizing.
TokenizerError
Information about an error that occurs while tokenizing.
Methods
Independent Tokenization
The tokenizer can be used independently from parsing, which is useful for:
- Syntax highlighting
- Code formatting tools
- Comment extraction
- Whitespace analysis
- Creating custom parsers
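As an example of one listed use case, comment extraction needs only a lexical pass, never a full parse. The sketch below is deliberately naive: it scans for Lua's `--` line-comment marker and would be fooled by `--` inside a string literal, which is exactly the kind of case a real token stream handles correctly.

```rust
// Naive line-comment extraction for Lua source. Assumes `--` never
// appears inside a string literal; a tokenizer-backed pass would not
// have that limitation.

fn extract_line_comments(source: &str) -> Vec<String> {
    source
        .lines()
        .filter_map(|line| {
            line.find("--")
                .map(|i| line[i + 2..].trim().to_string())
        })
        .collect()
}

fn main() {
    let source = "local count = 0 -- running total\ncount = count + 1\n-- done";
    let comments = extract_line_comments(source);
    assert_eq!(comments, vec!["running total", "done"]);
    println!("{:?}", comments);
}
```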