From: Berker Peksag
Date: Sun, 5 Feb 2017 01:32:39 +0000 (+0300)
Subject: Issue #28489: Fix comment in tokenizer.c
X-Git-Tag: v3.6.1rc1~105
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=6f805628625e0cf4ee2d420735f7b15a93715aca;p=python

Issue #28489: Fix comment in tokenizer.c

Patch by Ryan Gonzalez.
---

diff --git a/Parser/tokenizer.c b/Parser/tokenizer.c
index 0fa3aebc0f..ff65f2a735 100644
--- a/Parser/tokenizer.c
+++ b/Parser/tokenizer.c
@@ -1508,7 +1508,7 @@ tok_get(struct tok_state *tok, char **p_start, char **p_end)
     /* Identifier (most frequent token!) */
     nonascii = 0;
     if (is_potential_identifier_start(c)) {
-        /* Process b"", r"", u"", br"" and rb"" */
+        /* Process the various legal combinations of b"", r"", u"", and f"". */
         int saw_b = 0, saw_r = 0, saw_u = 0, saw_f = 0;
         while (1) {
             if (!(saw_b || saw_u || saw_f) && (c == 'b' || c == 'B'))
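
For context on the updated comment: the loop that follows it tracks four flags (saw_b, saw_r, saw_u, saw_f) and accepts only the prefix combinations that are legal in Python 3.6 (b, r, u, f on their own, plus br/rb and fr/rf in either letter order and any case). Below is a minimal, standalone C sketch, written here for illustration only and not taken from tokenizer.c; the helper name is_legal_string_prefix and the sample prefixes are assumptions of this note, and the sketch merely applies the same flag logic to a whole prefix string instead of consuming characters from the token stream.

#include <stdio.h>

/* Illustrative helper (not part of CPython): returns 1 if the given
 * string-literal prefix is a legal combination of b/r/u/f under the
 * same flag rules as the tokenizer loop above: 'u' cannot combine with
 * anything, 'b' and 'f' exclude each other, and 'r' may pair with
 * either 'b' or 'f' in any order. */
static int
is_legal_string_prefix(const char *prefix)
{
    int saw_b = 0, saw_r = 0, saw_u = 0, saw_f = 0;
    for (const char *p = prefix; *p != '\0'; p++) {
        char c = *p;
        if (!(saw_b || saw_u || saw_f) && (c == 'b' || c == 'B'))
            saw_b = 1;
        /* 'u' exists only for backwards compatibility, so it does not
           combine with b, r, or f. */
        else if (!(saw_b || saw_u || saw_r || saw_f) && (c == 'u' || c == 'U'))
            saw_u = 1;
        /* ur"" and ru"" are not legal in Python 3. */
        else if (!(saw_r || saw_u) && (c == 'r' || c == 'R'))
            saw_r = 1;
        else if (!(saw_f || saw_b || saw_u) && (c == 'f' || c == 'F'))
            saw_f = 1;
        else
            return 0;  /* repeated flag, illegal pairing, or non-prefix char */
    }
    return 1;
}

int
main(void)
{
    /* Sample prefixes chosen for illustration. */
    const char *samples[] = {"b", "rb", "br", "f", "rf", "u", "ub", "ur", "bf"};
    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%-2s -> %s\n", samples[i],
               is_legal_string_prefix(samples[i]) ? "legal" : "illegal");
    return 0;
}

Compiled and run, the sketch reports "b", "rb", "br", "f", "rf", and "u" as legal and rejects "ub", "ur", and "bf", which is the behaviour the new comment describes.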