lex - fast lexical analyzer generator


lex [-78BbcdFfhIiLlnpsTtVvw] [-C[aefFmr]] [-Pprefix]
	[-Sskeleton] [filename ...]


The lex(1) utility is a tool for generating scanners, which are programs that recognize lexical patterns in text. The lex(1) utility reads the given input files, or its standard input if no file names are given, for a description of a scanner to generate. The description is in the form of pairs of regular expressions and C code, called rules. The lex(1) utility generates as output a C source file, lex.yy.c, which defines the routine yylex(). This file is compiled and linked with the -ll library to produce an executable. When the executable is run, it analyzes its input for occurrences of the regular expressions. Whenever it finds one, it executes the corresponding C code.

For full documentation, see lexdoc(1). This manual entry is intended for use as a quick reference.
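The following sketch shows what such a description looks like; the rule set (counting lines and words) is illustrative, not taken from this manual:

```lex
%{
/* Definitions section: C code between %{ and %} is copied into lex.yy.c. */
#include <stdio.h>
int num_lines = 0, num_words = 0;
%}

%%
\n          num_lines++;        /* each rule pairs a pattern with C code */
[^ \t\n]+   num_words++;
[ \t]+      ;                   /* discard remaining whitespace */
%%

int main(void)
{
    yylex();
    printf("lines: %d  words: %d\n", num_lines, num_words);
    return 0;
}
```

Running lex on this file produces lex.yy.c; compiling that file and linking it with -ll (which supplies the default yywrap()) yields the scanner.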


-7
Generate a seven-bit scanner, which can save considerable table space, especially when using -Cf or -CF. (At most sites, -7 is on by default for these options. To determine whether this is the case, use the -v verbose flag and check the flag summary it reports.)
-8
Generate an eight-bit scanner. This is the default except for the -Cf and -CF compression options, for which the default is site-dependent and can be checked by inspecting the flag summary generated by the -v option.
-B
Generate a batch scanner instead of an interactive scanner (see -I later in this topic). See lexdoc(1) for details. Scanners using the -Cf or -CF compression options also automatically specify this option.
-b
Generate backing-up information to lex.backup. This is a list of scanner states that require backing up, as well as the input characters on which they do so. By adding rules, one can remove backing-up states. If all backing-up states are eliminated and -Cf or -CF is used, the generated scanner will run faster.
-C[aefFmr]
Specify the degree of table compression and scanner optimization.
-Ca
Trade off larger tables in the generated scanner for faster performance, because the elements of the tables are better aligned for memory access and computation. This option can double the size of the tables used by your scanner.
-Ce
Construct equivalence classes; that is, sets of characters that have identical lexical properties. Equivalence classes usually give dramatic reductions in the final table/object file sizes (typically a factor of 2-5) and have little impact on performance (one array look-up per character scanned).
-Cf
Generate full scanner tables. The lex(1) utility should not compress the tables by taking advantage of similar transition functions for different states.
-CF
Use the alternate fast scanner representation (described in lexdoc(1)).
-Cm
Construct meta-equivalence classes, which are sets of equivalence classes (or characters, if equivalence classes are not being used) that are commonly used together. Meta-equivalence classes are often beneficial when using compressed tables, but they have a moderate performance impact (one or two "if" tests and one array look-up per character scanned).
-Cr
Bypass using stdio for input in the generated scanner. In general, this option results in a minor performance gain that is only worthwhile when used in conjunction with -Cf or -CF. It can cause surprising behavior if you use stdio yourself to read from yyin prior to calling the scanner.
-C
Alone, compress scanner tables but use neither equivalence classes nor meta-equivalence classes.

The options -Cf or -CF and -Cm do not make sense together. There is no opportunity for meta-equivalence classes if the table is not being compressed. Otherwise, the options can be freely mixed.

The default setting is -Cem, which specifies that lex(1) should generate equivalence classes and meta-equivalence classes. This setting provides the highest degree of table compression. You can increase the scanner’s execution speed by increasing table size, with the following (a continuum from slowest and smallest at the top to fastest and largest at the bottom) being generally true:

Slowest and smallest:
	-Cem
	-Cm
	-Ce
	-C
	-C{f,F}e
	-C{f,F}
	-C{f,F}a
Fastest and largest

-C options are cumulative.

-c
Does nothing; a deprecated option included for POSIX compliance.

NOTE: In previous releases of lex(1), -c specified table-compression options. This functionality is now given by the -C flag. To ease the impact of this change, when lex(1) encounters -c, it currently issues a warning message and assumes that you wanted -C instead. In the future this "promotion" of -c to -C will be eliminated in the name of full POSIX compliance (unless the POSIX meaning is removed first).

-d
Run the generated scanner in debug mode. Whenever a pattern is recognized and the global yy_flex_debug is non-zero (which is the default), the scanner will write to stderr a line of the form:
--accepting rule at line 53 ("the matched text")
The line number refers to the location of the rule in the file defining the scanner (that is, the file that was fed to lex). Messages are also generated when the scanner backs up, accepts the default rule, reaches the end of its input buffer (or encounters a NUL; the two look the same to the scanner), or reaches an end-of-file.
-F
Use the fast scanner table representation (and bypass stdio). This representation is about as fast as the full-table representation (-f), and for some sets of patterns will be considerably smaller (and for others, larger). See lexdoc(1) for more details.

This option is equivalent to -CFr.

-f
Use the fast scanner table representation. No table compression is done, and stdio is bypassed. The result is large but fast. This option is equivalent to -Cfr.
-h
Generate a "help" summary of the lex(1) options to stderr, and then exit.
-I
Generate an interactive scanner; that is, a scanner that stops immediately rather than looking ahead if it knows that the currently scanned text cannot be part of a longer rule's match. This is the opposite of a batch scanner (see -B). See lexdoc(1) for details.

-I cannot be used in conjunction with full or fast tables; that is, the -f, -F, -Cf, or -CF flags. For other table compression options, -I is the default.

-i
Generate a case-insensitive scanner. The case of letters given in the lex(1) input patterns will be ignored, and tokens in the input will be matched regardless of case. The matched text given in yytext will have the preserved case (that is, it will not be folded).
-L
Do not generate #line directives in lex.yy.c. The default is to generate such directives so error messages in the actions will be correctly located with respect to the original lex(1) input file, and not to the meaningless line numbers of lex.yy.c.
-l
Turn on maximum compatibility with the original AT&T lex implementation, at a considerable performance cost. This option is incompatible with -f, -F, -Cf, or -CF. See lexdoc(1) for details.
-n
Does nothing. Another deprecated option included only for POSIX compliance.

-Pprefix
Change the default yy prefix used by lex(1) to prefix instead. See lexdoc(1) for a description of all the global variables and file names that this affects.
-p
Generate a performance report to stderr. The report consists of comments regarding features of the lex(1) input file that will cause a loss of performance in the resulting scanner. If you give the flag twice, you will also get comments regarding features that lead to minor performance losses.

-Sskeleton_file
Use skeleton_file to construct the scanner instead of the default file. You will never need this option unless you are doing lex(1) maintenance or development.
-s
Suppress the default rule (that unmatched scanner input is echoed to stdout). If the scanner encounters input that does not match any of its rules, it aborts with an error.
-T
Run in trace mode. It generates many messages to stderr concerning the form of the input and the resultant non-deterministic and deterministic finite automata. This option is used mostly for maintaining lex(1).
-t
Write the scanner it generates to standard output instead of lex.yy.c.
-V
Print the version number to stderr and exit.
-v
Write to stderr a summary of statistics regarding the scanner it generates.
-w
Suppress warning messages.


The patterns in the input are written using the extended set of regular expressions provided in the following table.
Pattern Matches
x Match the character 'x'.
. Any character except newline.
[xyz] A "character class"; in this case, the pattern matches either an 'x', a 'y', or a 'z'.
[abj-oZ] A "character class" with a range in it; matches an 'a', a 'b', any letter from 'j' through 'o', or a 'Z'.
[^A-Z] A "negated character class"; that is, any character but those in the class. In this case, any character except an uppercase letter.
[^A-Z\n] Any character except an uppercase letter or a newline.
r* Zero or more instances of r, where r is any regular expression.
r+ One or more instances of r.
r? Zero or one instance of r (that is, "an optional r").
r{2,5} From two to five instances of r.
r{2,} Two or more instances of r.
r{4} Exactly four instances of r.
{name} The expansion of the name definition.
"[xyz]\"star" The literal string: [xyz]"star
\X If X is an 'a', 'b', 'f', 'n', 'r', 't', or 'v', the ANSI-C interpretation of \x. Otherwise, a literal 'X' (used to escape operators such as '*').
\123 The character with octal value 123.
\x2a The character with hexadecimal value 2a.
(r) Match an r; parentheses are used to override precedence.
rs Concatenation: the regular expression r, followed by the regular expression s.
r|s Either an r or an s.
r/s An r, but only if it is followed by an s. The s is not part of the matched text. This type of pattern is called a "trailing context".
^r An r, but only at the beginning of a line.
r$ An r, but only at the end of a line. Equivalent to "r/\n".
<s>r An r, but only in start condition s (see the discussion of start conditions later in this topic).
<s1,s2,s3>r Same, but in any of start conditions s1, s2, or s3.
<*>r An r in any start condition, even an exclusive one.
<<EOF>> An end-of-file.
<s1,s2><<EOF>> An end-of-file when in start condition s1 or s2.

The regular expressions listed above are grouped according to precedence, from highest precedence at the top to lowest at the bottom. Those grouped together have equal precedence.
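As a sketch of the start-condition and <<EOF>> patterns above, a scanner might strip C comments like this (the condition name COMMENT is hypothetical):

```lex
%x COMMENT

%%
"/*"              BEGIN(COMMENT);    /* enter the exclusive start condition */
<COMMENT>"*/"     BEGIN(INITIAL);    /* back to the normal start condition */
<COMMENT>.|\n     ;                  /* discard the comment body */
<COMMENT><<EOF>>  { fprintf(stderr, "unterminated comment\n"); return 1; }
```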



In addition to arbitrary C code, the following can appear in actions:
ECHO
Copies yytext to the scanner's output.
BEGIN
Followed by the name of a start condition, places the scanner in the corresponding start condition.
REJECT
Directs the scanner to proceed on to the "second best" rule that matched the input (or a prefix of the input). yytext and yyleng are set up appropriately. Note that REJECT is a particularly expensive feature in terms of scanner performance; if it is used in any of the scanner's actions, it will slow down all of the scanner's matching. Furthermore, REJECT cannot be used with the -f or -F options. Also, unlike the other special actions, REJECT is a branch; code that immediately follows it in the action will not be executed.
yymore()
Tells the scanner that the next time it matches a rule, the corresponding token should be appended onto the current value of yytext rather than replacing it.
yyless( n )
Returns all but the first n characters of the current token back to the input stream, where they will be rescanned when the scanner looks for the next match. Both yytext and yyleng are adjusted appropriately (for example, yyleng will now be equal to n).
unput( c )
Puts the character c back onto the input stream. It will be the next character scanned.
input()
Reads the next character from the input stream (this routine is called yyinput() if the scanner is compiled using C++).
yyterminate()
Can be used instead of a return statement in an action. It terminates the scanner and returns 0 to the scanner's caller, indicating "all done." By default, yyterminate() is also called when an end-of-file is encountered. It is a macro and can be redefined.
YY_NEW_FILE
An action available only in <<EOF>> rules. It means "I have set up a new input file, continue scanning." It is no longer required; you can simply point yyin at a new file in the <<EOF>> action.
yy_create_buffer( file, size )
Takes a file pointer and an integer size. It returns a YY_BUFFER_STATE handle to a new input buffer large enough to accommodate size characters and associated with the given file. When in doubt, use YY_BUF_SIZE for the size.
yy_switch_to_buffer( new_buffer )
Switches the scanner's processing to scan for tokens from the given buffer, which must be a YY_BUFFER_STATE.
yy_delete_buffer( buffer )
Deletes the given buffer.
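A sketch of how these routines might be combined in a rule's action to divert scanning to a second file and later resume (the file name other.input is hypothetical):

```c
/* Inside a rule's action of a generated scanner: */
YY_BUFFER_STATE prev = YY_CURRENT_BUFFER;
FILE *extra = fopen("other.input", "r");
yy_switch_to_buffer(yy_create_buffer(extra, YY_BUF_SIZE));
/* ... subsequent matches now come from other.input ... */

/* Later, when other.input is exhausted: */
yy_delete_buffer(YY_CURRENT_BUFFER);
yy_switch_to_buffer(prev);   /* resume the original buffer where it left off */
```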


The following values are available to the user:
char *yytext
Holds the text of the current token. It can be modified but not lengthened (you cannot append characters to the end). Modifying the last character may affect the activity of rules anchored using '^' during the next scan; see lexdoc(1) for details. If the special directive %array appears in the first section of the scanner description, yytext is instead declared
char yytext[YYLMAX]
where YYLMAX is a macro definition that you can redefine in the first section if you do not like the default value (usually 8 KB). Using %array results in somewhat slower scanners, but the value of yytext becomes immune to calls to input() and unput(), which potentially destroy its value when yytext is a character pointer. The opposite of %array is %pointer, which is the default.
int yyleng
Holds the length of the current token.
FILE *yyin
The file from which lex(1) reads by default. It can be redefined, but doing so only makes sense before scanning begins or after an end-of-file (EOF) has been encountered. Changing it in the midst of scanning will have unexpected results because lex(1) buffers its input; use yyrestart() instead. Once scanning terminates because an end-of-file has been seen, you can call void yyrestart(FILE *new_file) to point yyin at a new input file and then call the scanner again to continue scanning. The switch-over to the new file is immediate (any previously buffered-up input is lost). Note that calling yyrestart() with yyin as an argument throws away the current input buffer and continues scanning the same input file.
FILE *yyout
The file to which ECHO actions are done. It can be reassigned by the user.
YY_CURRENT_BUFFER
Returns a YY_BUFFER_STATE handle to the current buffer.
YY_START
Returns an integer value corresponding to the current start condition. You can subsequently use this value with BEGIN to return to that start condition.
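The yyin and yyrestart() mechanics described above can be sketched in a driver that scans several files in sequence (scan_files and its file list are hypothetical; yylex() comes from the generated scanner, so this fragment does not run on its own):

```c
#include <stdio.h>

extern FILE *yyin;
int yylex(void);
void yyrestart(FILE *new_file);

int scan_files(char **paths, int n)
{
    for (int i = 0; i < n; i++) {
        FILE *f = fopen(paths[i], "r");
        if (f == NULL)
            return -1;
        yyrestart(f);   /* discard buffered input; point yyin at f */
        yylex();        /* returns when yywrap() reports no more input */
        fclose(f);
    }
    return 0;
}
```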


The lex(1) utility allows you to redefine the following macros and functions:
YY_DECL
Controls how the scanning routine is declared. By default, it is "int yylex()", or, if prototypes are being used, "int yylex(void)". This definition can be changed by redefining the "YY_DECL" macro. If you give arguments to the scanning routine using a K&R-style/non-prototyped function declaration, you must terminate the definition with a semicolon (;).
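For example, the definitions section might redefine YY_DECL so that the scanning routine takes an argument (the parameter name start_token is illustrative):

```lex
%{
/* Every caller of the scanning routine must now supply the argument. */
#define YY_DECL int yylex(int start_token)
%}
```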
YY_INPUT
You can control how the scanner gets its input by redefining the YY_INPUT macro. Its calling sequence is:
YY_INPUT(buf,result,max_size)
Its action is to place up to max_size characters in the character array buf and return in the integer variable result either the number of characters read or the constant YY_NULL (traditionally 0) to indicate EOF. The default YY_INPUT reads from the global file pointer yyin. A sample redefinition of YY_INPUT (in the definitions section of the input file):
#undef YY_INPUT
#define YY_INPUT(buf,result,max_size) \
	{ \
	int c = getchar(); \
	result = (c == EOF) ? YY_NULL : (buf[0] = c, 1); \
	}
yywrap()
When the scanner receives an end-of-file indication from YY_INPUT, it checks the yywrap() function. If yywrap() returns false (zero), then it is assumed that the function has set up yyin to point to another input file, and scanning continues. If it returns true (non-zero), the scanner terminates, returning 0 to its caller. The default yywrap() always returns 1.
YY_USER_ACTION
Can be redefined to provide an action that is always executed prior to the matched rule's action.
YY_USER_INIT
Can be redefined to provide an action that is always executed before the first scan.
YY_BREAK
In the generated scanner, the actions are all gathered in one large switch statement and separated using YY_BREAK, which may be redefined. By default, it is simply a "break", to separate each rule's action from the action of the following rule.
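As an illustrative sketch, YY_USER_INIT and YY_USER_ACTION might be redefined in the definitions section to track token offsets (the variables cur_offset and token_start are hypothetical):

```lex
%{
long cur_offset, token_start;
#define YY_USER_INIT    { cur_offset = 0; }
#define YY_USER_ACTION  { token_start = cur_offset; cur_offset += yyleng; }
%}
```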


-ll
Library with which scanners can be linked to obtain default versions of yywrap() and main().
lex.yy.c
Generated scanner (called lexyy.c on some systems).
lex.backup
Backing-up information for the -b flag (called lex.bck on some systems).


The lex(1) utility can generate the following diagnostic messages:
reject_used_but_not_detected undefined
The scanner uses REJECT but lex(1) failed to find it in the first two sections. This can happen if you use an #include file to insert it. Make an explicit reference to the action in your lex(1) input file. (Previously, lex(1) supported a %used mechanism for dealing with this problem. Although this feature is still supported, it is now deprecated.)
yymore_used_but_not_detected undefined
The scanner uses yymore() but lex(1) failed to find it in the first two sections. This can happen if you use a #include file to insert it. Make an explicit reference to the action in your lex(1) input file. (Note that previously, lex(1) supported a %used mechanism for dealing with this problem; this feature is still supported but now deprecated, and will go away soon unless the author hears from people who can argue compellingly that they need it.)
lex scanner jammed
A scanner compiled with -s has encountered an input string that was not matched by any of its rules.
warning, rule cannot be matched
The given rule cannot be matched because it follows other rules that will always match the same text as it. See lexdoc(1) for an example.
warning, -s option given but default rule can be matched
It is possible (perhaps only in a particular start condition) that the default rule (match any single character) is the only one that will match a particular input. Since -s was given, presumably this is not intended.
scanner input buffer overflowed
A scanner rule matched more text than the available dynamic memory.
token too large, exceeds YYLMAX
Your scanner uses %array, and one of its rules matched a string longer than the YYLMAX constant (8 KB by default). You can increase the value by defining YYLMAX in the definitions section of your lex(1) input.
scanner requires -8 flag to use the character 'x'
Your scanner specification includes recognizing the eight-bit character 'x'. You did not specify the -8 flag; your scanner defaulted to seven-bit because you used the -Cf or -CF table-compression options.
lex scanner push-back overflow
You used unput() to push back so much text that the scanner's buffer could not hold both the pushed-back text and the current token in yytext. Ideally, the scanner should dynamically resize the buffer in this case, but at present, it does not.
input buffer overflow, can't enlarge buffer because scanner uses REJECT
The scanner was working on matching an extremely large token and needed to expand the input buffer. This does not work with scanners that use REJECT.
fatal lex scanner internal error--end of buffer missed
This can occur in a scanner which is reentered after a longjmp() has jumped out of (or over) the scanner's activation frame. Before reentering the scanner, use:
yyrestart( yyin );


Vern Paxson, with the help of many ideas and much inspiration from Van Jacobson. Original version by Jef Poskanzer.

See lexdoc(1) for additional credits and the address to send comments to.


Some trailing context patterns cannot be properly matched and generate warning messages ("dangerous trailing context"). These are patterns where the ending of the first part of the rule matches the beginning of the second part, such as "zx*/xy*", where the 'x*' matches the 'x' at the beginning of the trailing context. (Note that the POSIX draft states that the text matched by such patterns is undefined.)

For some trailing context rules, parts that are actually fixed-length are not recognized as such, leading to the performance loss already mentioned. In particular, parts using '|' or {n} are always considered variable-length.

Combining trailing context with the special '|' action can result in fixed trailing context being turned into the more expensive variable trailing context. For example, in the following:

	abc	|
	xyz/def

Use of unput() or input() invalidates yytext and yyleng, unless the %array directive or the -l option has been used.

Use of unput() to push back more text than was matched can result in the pushed-back text matching a beginning-of-line ('^') rule, even though it did not come at the beginning of the line (this happens very infrequently).

Pattern-matching of NUL characters is substantially slower than matching other characters.

Dynamic resizing of the input buffer is slow, as it entails rescanning all the text matched so far by the current (generally huge) token.

The lex(1) utility does not generate correct #line directives for code internal to the scanner. Thus, bugs in its skeleton file yield bogus line numbers.

Due to both buffering of input and read-ahead, you cannot intermix calls to <stdio.h> routines, such as getchar(3), with lex(1) rules and expect it to work. Call input() instead.

The total number of table entries listed by the -v flag excludes the entries needed to determine which rule has been matched. The number of such entries is equal to the number of deterministic finite automaton (DFA) states if the scanner does not use REJECT, and somewhat greater than the number of states if it does.

REJECT cannot be used with the -f or -F options.

The lex(1) internal algorithms need documentation.






M. E. Lesk and E. Schmidt, LEX - Lexical Analyzer Generator