Allow tags in any part of a regexp, not only in top-level concatenation.
The fixed-length tag optimization applies only to top-level
concatenation: since we cannot predict which path the lexer will
choose, we cannot be sure that any tags outside of the top-level
concatenation will ever be initialized. If a tag may be uninitialized,
we cannot fix other tags on it (we cannot even fix same-level tags
relative to each other, because fixed tags would not preserve the
default value).
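A hypothetical sketch of the difference (assuming the '@x' tag
syntax; rule names are illustrative):

    /*!re2c
        // top-level concatenation: @a is always two code units
        // before the end of the match, so it can be fixed on the
        // cursor instead of being tracked at run time
        r1 = [0-9] @a [0-9] [0-9];

        // inside an alternative: @b stays uninitialized whenever
        // the lexer takes the "cd" branch, so no other tag can be
        // fixed on @b without losing the "uninitialized" default
        r2 = ("a" @b "b" | "cd");
    */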
Bind contexts (a.k.a. tags) to DFA transitions, not states.
This is a very important change: it makes tracing tag conflicts as
simple as comparing tags on transitions during DFA construction.
If during determinization we get two transitions that are identical
except for their tags, then we have a tag conflict. Tags that cause
conflicts are called non-deterministic (since they don't allow a
deterministic match).
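For example, here is a sketch of a rule with a non-deterministic tag
(again assuming the hypothetical '@x' syntax):

    /*!re2c
        // the position of @p depends on how the two repetitions
        // split the input, so determinization produces transitions
        // that are identical except for the value of @p -- that is,
        // a tag conflict
        bad = "a"* @p "a"*;
    */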
This approach is very similar to Ville Laurikari's TDFA: like him,
we first build a TNFA and then apply determinization; however,
Laurikari's TDFA uses complex bookkeeping to track all possible tag
values, while re2c simply forbids tags that cannot be matched
efficiently.
Binding tags to transitions allows more fine-grained liveness
analysis, dead tag elimination and tag deduplication.
Keep rule number in each NFA state (not only in final states).
This is currently not necessary, but we'll need it to distinguish
between different rules when comparing tags: if the rules are
different, we don't care about differences in tags; otherwise the
tags must be equal.
Cleanup in codegen (minor changes in '-D, --emit-dot' output).
- more elegant handling of the default case when generating code
  for 'switch' cases
- dumbed down the complex logic behind the generation of sequential
  'if' statements: explicitly list all the different cases and the
  corresponding conditions (see the sketch below)
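For illustration, the sequential 'if' statements have roughly this
shape (an assumed sketch, not verbatim re2c output):

    if (yych <= '9') {
        if (yych <= '/') goto yy2;
        goto yy4;                 /* case ['0'-'9'] */
    }
    if (yych == 'a') goto yy6;    /* case ['a'] */
    goto yy2;                     /* default case */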
Moved conditions and contexts from global scope to block scope.
Directives like '/*!re2c:types*/' and '/*!re2c:contexts*/' have
global scope: they are not bound to any particular block and
therefore accumulate items (conditions/contexts/etc.) from the whole
program. There is currently no way to bind these directives to a
particular block (re2c will probably add scoping rules in the
future).
However, things like the default context declaration or the condition
dispatch have block scope (it is only natural that they are bound to
the block that generates them).
Unrelated change: the context marker should be set for each
condition, and re2c must generate it after the condition label: this
way code that skips the condition dispatch and jumps directly to a
condition label will still have the context marker adjusted properly.
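A rough sketch of the intended layout (condition names are
illustrative):

    switch (YYGETCONDITION()) {
    case yycnum: goto yyc_num;
    case yycstr: goto yyc_str;
    }
    /* ... */
yyc_num:
    YYCTXMARKER = YYCURSOR; /* set after the label, so code that
                               jumps here directly also gets it */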
Lexer: unified handling of various re2c directives.
Now, when the lexer encounters the beginning of a new directive, it
dumps all the intermediate code to the output.
Before this commit the re2c lexer dumped intermediate code on each
newline that occurred in the input.
So this commit affects two aspects:
- Intermediate code is dumped much less often: this is good for
  performance, but it becomes more probable that the intermediate
  code will occupy too much buffer space and incur a buffer resize.
- Some re2c directives used to ignore characters immediately before
  the directive (on the same line). These unfortunate characters are
  no longer ignored.
Lexer: don't care if end of comment is followed by a newline.
re2c used to swallow the newline that immediately follows the end of
a directive: for most directives re2c generates a code block that
ends with a newline, so the generated code looks better if the
newline is not doubled. However, this unnecessarily complicates the
lexer.
A directive is a new type of re2c block that consists of zero or
more of the following configurations (the example values are the
defaults):
line = "long @@;";
sep = "";
Ulya Trofimovich [Tue, 29 Mar 2016 21:33:32 +0000 (22:33 +0100)]
Parser grammar cleanup.
- Rearranged some rules to avoid code duplication.
- Added production and error message about contexts in named definitions.
- Removed productions and error messages about missing expressions
  (a simple 'syntax error' is enough, given that only some of the
  errors were caught by the removed productions).
Ulya Trofimovich [Tue, 29 Mar 2016 14:16:01 +0000 (15:16 +0100)]
Optimized pointer arithmetic generated with '-C, --contexts'.
If the default input API is used and a fixed-length context is based
on the rightmost context (that is, YYCURSOR), then the following
expression (which calculates the pointer corresponding to the given
context):
(YYCTXMARKER + ((YYCURSOR - YYCTXMARKER) - yyctx))
can be optimized to:
(YYCURSOR - yyctx)
Note: unfortunately, in GCC versions prior to 7.0 expressions like:
(YYCURSOR - 3) - (YYCURSOR - 5)
trigger a warning:
warning: integer overflow in expression [-Woverflow]
See this GCC bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61240
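The equivalence is plain pointer algebra (YYCTXMARKER cancels out:
M + ((C - M) - d) == C - d); a self-contained check:

    #include <assert.h>

    void check(const char *YYCURSOR, const char *YYCTXMARKER,
               long yyctx)
    {
        assert(YYCTXMARKER + ((YYCURSOR - YYCTXMARKER) - yyctx)
               == YYCURSOR - yyctx);
    }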
Ulya Trofimovich [Tue, 29 Mar 2016 11:54:02 +0000 (12:54 +0100)]
Added some configurations related to '-C, --contexts' option.
Configurations:
"define:YYCTX" (defaults to 'YYCTX')
"define:YYDIST" (defaults to 'YYDIST')
"define:YYDISTTYPE" (defaults to 'long')
"ctxprefix" (defaults to 'yyctx')
Ulya Trofimovich [Mon, 28 Mar 2016 13:39:11 +0000 (14:39 +0100)]
Moved nontrivial context handling from parser to NFA construction phase.
The parser should simply construct the AST; all the complex reasoning
about contexts (fixed vs. variable) should be delayed until the NFA
is constructed. This way the AST can be immutable, which makes it
very easy to share parts of the AST between different conditions,
etc.
Removed rule ranks and the rank counter: rules are now stored in an
NFA-local array and addressed by index.
Ulya Trofimovich [Wed, 16 Mar 2016 09:41:02 +0000 (09:41 +0000)]
Address skeleton nodes by indices rather than by pointers.
This way it is more convenient to add various graph algorithms: each
algorithm can keep its own data in an array indexed by skeleton node
numbers (instead of pushing all the relevant data into the node
itself).
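A minimal sketch of the pattern (hypothetical names):

    #include <stdint.h>
    #include <stdlib.h>

    /* with nodes addressed by index, an algorithm keeps its side
       data in a plain array instead of adding fields to the node */
    void algorithm(size_t nnodes)
    {
        uint32_t *depth = calloc(nnodes, sizeof *depth);
        /* ... compute, indexing depth[i] by node number ... */
        free(depth);
    }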
Ulya Trofimovich [Mon, 14 Mar 2016 22:23:03 +0000 (22:23 +0000)]
Skeleton: simplified path structure.
Just store pointers to skeleton nodes (instead of bookkeeping arcs,
contexts and rules in each path). All the necessary information can
easily be retrieved from the nodes when the path is being dumped to
file.
Three tests in '--skeleton' mode have been broken by this commit.
Actually, these are not breakages: these cases reveal incorrect
re2c-generated code. The change is due to the fact that the skeleton
no longer simulates contexts that go *after* the matched rule:
------o------o------> ... (fallback to rule)
      rule   context
Ulya Trofimovich [Mon, 22 Feb 2016 10:22:43 +0000 (10:22 +0000)]
Code cleanup: factored RuleInfo out of RuleOp.
Rule information (line, attached code block, rank, shadowing set, etc.)
is used throughout the program. Before this patch, rule information was
inlined in RuleOp. That was inconvenient because RuleOp belongs to
the early stages of compilation (AST, prior to NFA), but it had to
live throughout the whole program.
Ulya Trofimovich [Wed, 17 Feb 2016 16:12:59 +0000 (16:12 +0000)]
Simplified tracking of fixed-length trailing contexts.
Static (that is, fixed-length) trailing contexts don't require
recording the context position with YYCTXMARKER and restoring it on
successful match. They can be tracked simply by decreasing the input
position by the context length.
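An illustrative sketch of the two strategies in generated code
(fragments, not verbatim re2c output):

    /* variable-length trailing context: record and restore */
    YYCTXMARKER = YYCURSOR;   /* when entering the context */
    /* ... */
    YYCURSOR = YYCTXMARKER;   /* on successful match */

    /* fixed-length trailing context of 3 characters: no marker,
       just step the cursor back by the context length */
    YYCURSOR -= 3;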
Ulya Trofimovich [Wed, 17 Feb 2016 14:46:43 +0000 (14:46 +0000)]
Simplified [-Wmatch-empty-rule] analysis.
Before this patch [-Wmatch-empty-rule] was based on:
- DFA structural analysis (skeleton phase)
- rule reachability analysis (skeleton phase)
Now it is based on:
- NFA structural analysis (NFA phase)
- rule reachability analysis (skeleton phase)
It's much easier to find nullable rules in an NFA than in a DFA.
The problem with the DFA lies in rules with trailing context, both
dynamic and especially static (as the latter leaves no trace in DFA
states). re2c currently treats static contexts as dynamic, but this
will change soon.
On the other hand, the NFA may give some false positives because of
unreachable rules:
[^] {}
"" {}
infinite rules:
[^]* {}
or self-shadowing rules:
[^]? {}
Reachability analysis in the skeleton helps to filter out unreachable
and infinite rules, but not self-shadowing ones.
Ulya Trofimovich [Sat, 16 Jan 2016 23:07:17 +0000 (23:07 +0000)]
Stabilized the list of shadowing rules reported by [-Wunreachable-rules].
Before this commit, the list of rules depended on the order of NFA
states in each DFA state under construction (which is simply a matter
of the ordering of pointers into the heap: the order may vary).
Now all the rules for each DFA state are collected and the final
choice of rule is delayed until the DFA is constructed, so the order
of NFA states no longer matters.
Ulya Trofimovich [Mon, 11 Jan 2016 15:01:05 +0000 (15:01 +0000)]
Moved YYFILL points calculation to an earlier stage of DFA construction.
No serious changes intended (mostly cleanup and comments).
The underlying algorithm for finding strongly connected components
(SCC) remains the same: it's a slightly modified Tarjan's algorithm.
We now mark non-YYFILL states by setting the YYFILL argument to zero,
which is only logical: why would anyone call YYFILL to provide zero
characters? In fact, re2c never generated the 'YYFILL(0)' call
itself, but some remnants of YYFILL did remain (which caused changes
in tests).
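For reference, the check generated at a YYFILL point has this shape
with the default input API (the argument is the number of characters
needed):

    if ((YYLIMIT - YYCURSOR) < 3) YYFILL(3);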
Serialize '--skeleton' generated data in little-endian.
This commit fixes bug #132 "test failure on big endian archs with 0.15.3".
Tests failed because re2c with '--skeleton' option used host endianness
when serializing binary data to file. Expected test result was generated
on little-endian arch, while actual test was run on big-endian arch.
Only three tests failed (out of ~40 tests that are always run with
'--skeleton'), because in most cases data unit is 1 byte and endianness
doesn't matter.
The fix: re2c now converts binary data from host-endian to little-endian
before dumping it to file. Skeleton programs convert data back from
little-endian to host-endian when reading it from file (iff data unit
size is greater than 1 byte).
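A minimal sketch of the serialization idea (hypothetical helper, not
re2c's actual code):

    #include <stdint.h>
    #include <stdio.h>

    /* write a 32-bit unit in little-endian byte order regardless
       of host endianness */
    static void write_le32(FILE *f, uint32_t v)
    {
        uint8_t b[4] = {
            (uint8_t)(v),       (uint8_t)(v >> 8),
            (uint8_t)(v >> 16), (uint8_t)(v >> 24)
        };
        fwrite(b, 1, sizeof(b), f);
    }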
Ulya Trofimovich [Thu, 31 Dec 2015 21:17:32 +0000 (21:17 +0000)]
Removed obsolete code deduplication mechanism.
This mechanism was tricky and fragile; it cost us a most unfortunate
bug in PHP lexer: https://bugs.gentoo.org/show_bug.cgi?id=518904
(and a couple of other bugs).
Now that re2c does DFA minimization this is no longer needed. Hoooray!
The updated test changed because the skeleton is constructed prior
to DFA minimization.
Ulya Trofimovich [Thu, 31 Dec 2015 15:35:30 +0000 (15:35 +0000)]
Added DFA minimization and option '--dfa-minimization <table | moore>'.
Test results changed a lot; it is next to impossible to verify them
by hand. I therefore implemented two different minimization algorithms:
- "table filling" algorithm (simple and inefficient)
- Moore's algorithm (not so simple and efficient enough)
They produce identical minimized DFAs (up to state relabelling),
which gives some confidence that the resulting DFA is correct.
I also checked the results with '--skeleton': re2c constructs the
skeleton prior to reordering and minimization, therefore the
skeleton-generated data is free of (potential) minimization errors.
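A minimal sketch of the "table filling" algorithm (hypothetical
arrays: delta[s * nsyms + c] is the state reached from s on symbol
c, fin[s] marks final states):

    #include <stdbool.h>
    #include <stddef.h>

    void table_filling(size_t nstates, size_t nsyms,
                       const size_t *delta, const bool *fin,
                       bool *distinct /* nstates * nstates, zeroed */)
    {
        /* base: a final and a non-final state are distinguishable */
        for (size_t p = 0; p < nstates; ++p)
            for (size_t q = 0; q < nstates; ++q)
                if (fin[p] != fin[q]) distinct[p * nstates + q] = true;

        /* fixpoint: p and q are distinguishable if some symbol
           leads them to an already distinguishable pair */
        for (bool changed = true; changed;) {
            changed = false;
            for (size_t p = 0; p < nstates; ++p)
            for (size_t q = 0; q < nstates; ++q) {
                if (distinct[p * nstates + q]) continue;
                for (size_t c = 0; c < nsyms; ++c) {
                    size_t pc = delta[p * nsyms + c];
                    size_t qc = delta[q * nsyms + c];
                    if (distinct[pc * nstates + qc]) {
                        distinct[p * nstates + q] = true;
                        changed = true;
                        break;
                    }
                }
            }
        }
        /* states p, q with !distinct[p * nstates + q] can be merged */
    }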
Ulya Trofimovich [Wed, 30 Dec 2015 20:52:33 +0000 (20:52 +0000)]
Split DFA intermediate representation in two parts: DFA and ADFA.
ADFA stands for 'action DFA', that is, a DFA with actions.
During DFA construction (a.k.a. NFA determinization) it is convenient
to represent DFA states as indices into an array of states.
Later on, while binding actions, it is more convenient to store the
states in a linked list.
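A rough sketch of the two representations (hypothetical types):

    #include <stddef.h>

    /* determinization: states addressed by index into an array */
    struct dfa_state  { size_t arcs[256]; /* target indices */ };

    /* action binding: states chained in a linked list (ADFA) */
    struct adfa_state {
        struct adfa_state *next;
        /* ... bound actions (rule code, YYFILL, save, etc.) ... */
    };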