This adds a Lookahead iterator so that, while parsing, it is easy
to peek ahead as far as the parser needs. Basic parsers may not
need any lookahead, but many parsers use two tokens of lookahead.
I've even seen some that use three.
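Roughly, the idea is something like the sketch below: a buffer-backed
wrapper over any iterator with a `peek(n)` method. The names and exact
shape here are illustrative assumptions, not necessarily the crate's API.

```rust
use std::collections::VecDeque;

/// A minimal lookahead wrapper: buffers items so `peek(n)` can look
/// `n` positions ahead without consuming anything.
pub struct Lookahead<I: Iterator> {
    iter: I,
    buffer: VecDeque<I::Item>,
}

impl<I: Iterator> Lookahead<I> {
    pub fn new(iter: I) -> Self {
        Self { iter, buffer: VecDeque::new() }
    }

    /// Peek `n` items ahead (0 = the next item) without consuming them.
    pub fn peek(&mut self, n: usize) -> Option<&I::Item> {
        while self.buffer.len() <= n {
            self.buffer.push_back(self.iter.next()?);
        }
        self.buffer.get(n)
    }
}

impl<I: Iterator> Iterator for Lookahead<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<Self::Item> {
        // Drain buffered items first, then fall back to the inner iterator.
        self.buffer.pop_front().or_else(|| self.iter.next())
    }
}
```

With something like this, a two-token-lookahead parser would call
`peek(0)` and `peek(1)` before deciding how to consume.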
This is the initial design of the AST, built in a data-oriented
style. It still needs iterators over the AST and the optimized AST,
as well as some more transformation functions.
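As a rough illustration of what data-oriented means here, the sketch
below stores nodes in one flat Vec and links them by index instead of
by boxed pointers. The node kinds and field names are placeholders,
not the actual design.

```rust
/// Index of a node inside `Ast::nodes`.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct NodeId(pub u32);

#[derive(Debug)]
pub enum NodeKind {
    Literal,
    BinaryOp,
    // ... more kinds as the language grows
}

#[derive(Debug)]
pub struct Node {
    pub kind: NodeKind,
    /// Children are indices into the same flat array.
    pub children: Vec<NodeId>,
}

#[derive(Debug, Default)]
pub struct Ast {
    nodes: Vec<Node>,
}

impl Ast {
    /// Append a node and return its index.
    pub fn push(&mut self, node: Node) -> NodeId {
        let id = NodeId(self.nodes.len() as u32);
        self.nodes.push(node);
        id
    }

    pub fn get(&self, id: NodeId) -> &Node {
        &self.nodes[id.0 as usize]
    }

    /// Iterate over every node in insertion order; a depth-first
    /// iterator over a subtree would follow the same pattern using
    /// each node's `children` indices.
    pub fn iter(&self) -> impl Iterator<Item = &Node> {
        self.nodes.iter()
    }
}
```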
Apparently Gitea was having an issue with the Unicode encoding used for the apostrophe, so I edited the text in Gitea to fix it.
Signed-off-by: Myrddin Dundragon <myrddin@cybermages.tech>
I enhanced LexerError so that it now wraps a Rust source error.
This lets us wrap up the possible I/O errors we get when trying
to open a file and read its contents.
I also added documentation for all the implemented functions.
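A minimal sketch of the error wrapping described above, assuming an
I/O variant and field names that are illustrative rather than the
crate's actual definitions:

```rust
use std::{error::Error, fmt, io};

/// A lexer error that wraps an underlying source error.
#[derive(Debug)]
pub enum LexerError {
    /// An I/O failure while opening or reading the source file.
    Io(io::Error),
    /// Input that could not be tokenized.
    InvalidToken { line: usize, column: usize },
}

impl fmt::Display for LexerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            LexerError::Io(err) => write!(f, "failed to read source: {err}"),
            LexerError::InvalidToken { line, column } => {
                write!(f, "invalid token at {line}:{column}")
            }
        }
    }
}

impl Error for LexerError {
    /// Expose the wrapped I/O error as this error's source.
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            LexerError::Io(err) => Some(err),
            _ => None,
        }
    }
}

impl From<io::Error> for LexerError {
    fn from(err: io::Error) -> Self {
        LexerError::Io(err)
    }
}
```

The `From<io::Error>` impl lets file-reading code use `?` while still
surfacing the original failure through `Error::source()`.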
This makes the TokenStream and all of its associated Token types
generic over the Token's variant. Span was also given the ability
to merge with another span, which will make it easier to track spans
as users group TokenTypes together to make their domain-specific
types.
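Merging two spans typically just takes the union of their ranges. The
sketch below assumes byte-offset spans and illustrative field names,
not the crate's exact types.

```rust
/// A byte-offset range into the source text.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Span {
    pub start: usize,
    pub end: usize,
}

impl Span {
    /// Merge two spans into one covering both, e.g. when grouping
    /// several TokenTypes into a single domain-specific token.
    pub fn merge(self, other: Span) -> Span {
        Span {
            start: self.start.min(other.start),
            end: self.end.max(other.end),
        }
    }
}

/// The variant type `V` is supplied by the user, so domain-specific
/// token kinds can flow through the same stream machinery.
#[derive(Debug)]
pub struct Token<V> {
    pub variant: V,
    pub span: Span,
}

#[derive(Debug)]
pub struct TokenStream<V> {
    pub tokens: Vec<Token<V>>,
}
```

Grouping two adjacent tokens into one domain-specific token would then
carry `first.span.merge(second.span)` as its span.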
All tests and examples were updated for this change.
The version was incremented to 0.2.0.
I took the Token module from the Arcanum project and brought it over
here; it was a nice data-oriented way of handling Tokens.
I then created a Lexer that can scan a file or text and lets the
user transform the scanned tokens before the final token array is
returned. This should allow more complex and specific tokens to be
created for whatever domain is being targeted.
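The scan-then-transform flow looks roughly like the self-contained
sketch below. Every name in it (RawKind, ScannedToken, DomainToken,
scan_text) is a hypothetical stand-in, not the library's actual API.

```rust
/// Hypothetical raw tokens produced by scanning.
#[derive(Debug)]
enum RawKind {
    Word(String),
    Number(String),
}

#[derive(Debug)]
struct ScannedToken {
    kind: RawKind,
}

/// The domain-specific tokens the caller actually wants.
#[derive(Debug)]
enum DomainToken {
    Keyword(String),
    Identifier(String),
    Integer(i64),
}

/// Stand-in for the scanner: split text into raw tokens.
fn scan_text(source: &str) -> Vec<ScannedToken> {
    source
        .split_whitespace()
        .map(|word| ScannedToken {
            kind: if word.chars().all(|c| c.is_ascii_digit()) {
                RawKind::Number(word.to_string())
            } else {
                RawKind::Word(word.to_string())
            },
        })
        .collect()
}

fn main() {
    // The transform step: reshape raw tokens into domain-specific ones
    // before the final token array is returned.
    let tokens: Vec<DomainToken> = scan_text("let answer 42")
        .into_iter()
        .filter_map(|tok| match tok.kind {
            RawKind::Word(w) if w == "let" => Some(DomainToken::Keyword(w)),
            RawKind::Word(w) => Some(DomainToken::Identifier(w)),
            RawKind::Number(n) => n.parse().ok().map(DomainToken::Integer),
        })
        .collect();

    println!("{tokens:?}");
}
```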
I also added basic library examples and testing.
Finally, I made sure the documentation generated nicely.
This is now marked as version 0.1.0.