From da747c3d977d2c90e9877be7264adf7516bbf599 Mon Sep 17 00:00:00 2001
From: Meador Inge
Date: Thu, 19 Jan 2012 00:17:44 -0600
Subject: [PATCH] Issue #2134: Clarify token.OP handling rationale in tokenize
 documentation.

---
 Doc/library/tokenize.rst | 6 ++++++
 Misc/NEWS                | 3 +++
 2 files changed, 9 insertions(+)

diff --git a/Doc/library/tokenize.rst b/Doc/library/tokenize.rst
index 30677eaadc..7075035281 100644
--- a/Doc/library/tokenize.rst
+++ b/Doc/library/tokenize.rst
@@ -15,6 +15,12 @@ implemented in Python.  The scanner in this module returns comments as tokens
 as well, making it useful for implementing "pretty-printers," including
 colorizers for on-screen displays.
 
+To simplify token stream handling, all :ref:`operator <operators>` and
+:ref:`delimiter <delimiters>` tokens are returned using the generic
+:data:`token.OP` token type.  The exact type can be determined by checking
+the ``string`` field of the :term:`named tuple` returned from
+:func:`tokenize.tokenize` for the character sequence identifying the operator.
+
 The primary entry point is a :term:`generator`:
 
 .. function:: generate_tokens(readline)
diff --git a/Misc/NEWS b/Misc/NEWS
index 0233823bfb..2193af0c46 100644
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -495,6 +495,9 @@ Tests
 Documentation
 -------------
 
+- Issue #2134: The tokenize documentation has been clarified to explain why
+  all operator and delimiter tokens are treated as token.OP tokens.
+
 - Issue #13513: Fix io.IOBase documentation to correctly link to the
   io.IOBase.readline method instead of the readline module.
 
-- 
2.50.1
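
The behaviour the patched paragraph describes can be sketched with a short snippet (illustrative only, not part of the patch): every operator and delimiter comes back with the generic ``token.OP`` type, and the specific operator is recovered from the token's ``string`` field.

```python
import io
import token
import tokenize

# Tokenize a small expression.  Operators and delimiters such as
# '=', '(', '+', ')' and '*' are all reported with the generic
# token.OP type rather than one distinct type per operator.
source = b"x = (1 + 2) * 3"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# The exact operator is identified by the token's string field.
ops = [tok.string for tok in tokens if tok.type == token.OP]
print(ops)  # ['=', '(', '+', ')', '*']
```

Dispatching on ``tok.string`` inside an ``if tok.type == token.OP`` branch is the pattern the documentation change is explaining: one coarse token type, disambiguated by the character sequence.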