# Source: /build/wlanpi-profiler-MIf3Xw/wlanpi-profiler-1.0.8/debian/wlanpi-profiler/opt/wlanpi-profiler/lib/python3.7/site-packages/coverage/phystokens.py

"""Better tokenizing for coverage.py."""

import codecs
import keyword
import re
import sys
import token
import tokenize

from coverage import env
from coverage.backward import iternext, unicode_class
from coverage.misc import contract
def phys_tokens(toks):
    """Return all physical tokens, even line continuations.

    tokenize.generate_tokens() doesn't return a token for the backslash that
    continues lines.  This wrapper provides those tokens so that we can
    re-create a faithful representation of the original source.

    Returns the same values as generate_tokens()

    """
    last_line = None
    last_lineno = -1
    last_ttext = None
    for ttype, ttext, (slineno, scol), (elineno, ecol), ltext in toks:
        if last_lineno != elineno:
            if last_line and last_line.endswith("\\\n"):
                # We are at the beginning of a new line, and the last line
                # ended with a backslash.  We probably have to inject a
                # backslash token into the stream, unless the backslash is
                # already part of a multi-line string token.
                inject_backslash = True
                if last_ttext.endswith("\\"):
                    inject_backslash = False
                elif ttype == token.STRING:
                    if "\n" in ttext and ttext.split('\n', 1)[0][-1] == '\\':
                        # It's a multi-line string and the first line ends
                        # with a backslash, so we don't need to inject another.
                        inject_backslash = False
                if inject_backslash:
                    # Figure out what column the backslash is in.
                    ccol = len(last_line.split("\n")[-2]) - 1
                    # Yield the token, with a fake token type.
                    yield (
                        99999, "\\\n",
                        (slineno, ccol), (slineno, ccol + 2),
                        last_line
                    )
            last_line = ltext
        if ttype not in (tokenize.NEWLINE, tokenize.NL):
            last_ttext = ttext
        yield ttype, ttext, (slineno, scol), (elineno, ecol), ltext
        last_lineno = elineno
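A minimal sketch of what the wrapper adds, assuming coverage.py is installed so that coverage.phystokens is importable (the sample source string is arbitrary):

import io
import tokenize

from coverage.phystokens import phys_tokens

source = "a = 1 + \\\n    2\n"
toks = tokenize.generate_tokens(io.StringIO(source).readline)
for ttype, ttext, start, end, _ in phys_tokens(toks):
    print(ttype, repr(ttext), start, end)

Among the ordinary tokens, one tuple comes out with the fake type 99999 and text '\\\n': the line-continuation backslash that tokenize.generate_tokens() itself never reports.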
@contract(source='unicode')
def source_token_lines(source):
    """Generate a series of lines, one for each line in `source`.

    Each line is a list of pairs, each pair is a token::

        [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]

    Each pair has a token class, and the token text.

    If you concatenate all the token texts, and then join them with newlines,
    you should have your original `source` back, with two differences:
    trailing whitespace is not preserved, and a final line with no newline
    is indistinguishable from a final line with a newline.

    """
    ws_tokens = {token.INDENT, token.DEDENT, token.NEWLINE, tokenize.NL}
    line = []
    col = 0

    source = source.expandtabs(8).replace('\r\n', '\n')
    tokgen = generate_tokens(source)

    for ttype, ttext, (_, scol), (_, ecol), _ in phys_tokens(tokgen):
        mark_start = True
        for part in re.split('(\n)', ttext):
            if part == '\n':
                yield line
                line = []
                col = 0
                mark_end = False
            elif part == '':
                mark_end = False
            elif ttype in ws_tokens:
                mark_end = False
            else:
                if mark_start and scol > col:
                    line.append(('ws', u' ' * (scol - col)))
                    mark_start = False
                tok_class = tokenize.tok_name.get(ttype, 'xx').lower()[:3]
                if ttype == token.NAME and keyword.iskeyword(ttext):
                    tok_class = 'key'
                line.append((tok_class, part))
                mark_end = True
            scol = 0
        if mark_end:
            col = ecol

    if line:
        yield line
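A similar sketch for the generator above, again assuming coverage.py is importable (the two-line sample program is arbitrary):

from coverage.phystokens import source_token_lines

source = "def hello():\n    return 'hi'\n"
for line in source_token_lines(source):
    print(line)

This prints one list per source line, in exactly the shape the docstring describes:

[('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ('op', ')'), ('op', ':')]
[('ws', '    '), ('key', 'return'), ('ws', ' '), ('str', "'hi'")]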
class CachedTokenizer(object):
    """A one-element cache around tokenize.generate_tokens.

    When reporting, coverage.py tokenizes files twice, once to find the
    structure of the file, and once to syntax-color it.  Tokenizing is
    expensive, and easily cached.

    This is a one-element cache so that our twice-in-a-row tokenizing doesn't
    actually tokenize twice.

    """
    def __init__(self):
        self.last_text = None
        self.last_tokens = None

    @contract(text='unicode')
    def generate_tokens(self, text):
        """A stand-in for `tokenize.generate_tokens`."""
        if text != self.last_text:
            self.last_text = text
            readline = iternext(text.splitlines(True))
            self.last_tokens = list(tokenize.generate_tokens(readline))
        return self.last_tokens

# Create our generate_tokens cache as a callable replacement function.
generate_tokens = CachedTokenizer().generate_tokens
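The cache is easy to observe through the module-level generate_tokens wrapper defined above (a sketch, assuming coverage.py is importable):

from coverage.phystokens import generate_tokens

text = "x = 1\n"
first = generate_tokens(text)
second = generate_tokens(text)
print(first is second)                      # True: served from the one-element cache
print(generate_tokens("y = 2\n") is first)  # False: a new text replaces the cached entry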
COOKIE_RE = re.compile(r"^[ \t]*#.*coding[:=][ \t]*([-\w.]+)", flags=re.MULTILINE)

@contract(source='bytes')
def _source_encoding_py2(source):
    """Determine the encoding for `source`, according to PEP 263.

    `source` is a byte string, the text of the program.

    Returns a string, the name of the encoding.

    """
    assert isinstance(source, bytes)

    # Do this so the detect-encoding code below will work.
    readline = iternext(source.splitlines(True))

    def _get_normal_name(orig_enc):
        """Imitates get_normal_name in tokenizer.c."""
        # Only care about the first 12 characters.
        enc = orig_enc[:12].lower().replace("_", "-")
        if re.match(r"^utf-8($|-)", enc):
            return "utf-8"
        if re.match(r"^(latin-1|iso-8859-1|iso-latin-1)($|-)", enc):
            return "iso-8859-1"
        return orig_enc

    # If no encoding is specified, then the default is returned.
    default = 'ascii'
    bom_found = False
    encoding = None

    def read_or_stop():
        """Get the next source line, or ''."""
        try:
            return readline()
        except StopIteration:
            return ''

    def find_cookie(line):
        """Find an encoding cookie in `line`."""
        try:
            line_string = line.decode('ascii')
        except UnicodeDecodeError:
            return None

        matches = COOKIE_RE.findall(line_string)
        if not matches:
            return None
        encoding = _get_normal_name(matches[0])
        try:
            codec = codecs.lookup(encoding)
        except LookupError:
            # This behavior mimics the Python interpreter.
            raise SyntaxError("unknown encoding: " + encoding)

        if bom_found:
            codec_name = getattr(codec, 'name', encoding)
            if codec_name != 'utf-8':
                # This behavior mimics the Python interpreter.
                raise SyntaxError('encoding problem: utf-8')
            encoding += '-sig'
        return encoding

    first = read_or_stop()
    if first.startswith(codecs.BOM_UTF8):
        bom_found = True
        first = first[3:]
        default = 'utf-8-sig'
    if not first:
        return default

    encoding = find_cookie(first)
    if encoding:
        return encoding

    second = read_or_stop()
    if not second:
        return default

    encoding = find_cookie(second)
    if encoding:
        return encoding

    return default
@contract(source='bytes')
def _source_encoding_py3(source):
    """Determine the encoding for `source`, according to PEP 263.

    `source` is a byte string: the text of the program.

    Returns a string, the name of the encoding.

    """
    readline = iternext(source.splitlines(True))
    return tokenize.detect_encoding(readline)[0]


if env.PY3:
    source_encoding = _source_encoding_py3
else:
    source_encoding = _source_encoding_py2
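For example (a sketch; on Python 3 source_encoding resolves to _source_encoding_py3 above):

from coverage.phystokens import source_encoding

print(source_encoding(b"# -*- coding: iso-8859-1 -*-\nx = 1\n"))  # iso-8859-1
print(source_encoding(b"x = 1\n"))  # utf-8, the Python 3 default when no cookie or BOM is found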
@contract(source='unicode')
def compile_unicode(source, filename, mode):
    """Just like the `compile` builtin, but works on any Unicode string.

    Python 2's compile() builtin has a stupid restriction: if the source
    string is Unicode, then it may not have an encoding declaration in it.
    Why not?  Who knows!  It also decodes to utf8, and then tries to interpret
    those utf8 bytes according to the encoding declaration.  Why? Who knows!

    This function neuters the coding declaration, and compiles it.

    """
    source = neuter_encoding_declaration(source)
    if env.PY2 and isinstance(source, unicode_class):
        source = source.encode(sys.getfilesystemencoding(), "replace")
    code = compile(source, filename, mode)
    return code
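A sketch of it in use (on Python 3 the encode step is skipped and this behaves like plain compile()):

from coverage.phystokens import compile_unicode

source = u"# -*- coding: utf-8 -*-\nmsg = u'h\u00e9llo'\n"
code = compile_unicode(source, "<example>", "exec")
globs = {}
exec(code, globs)
print(globs["msg"])  # prints 'héllo'; the coding cookie did not trip up compile()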
@contract(source='unicode', returns='unicode')
def neuter_encoding_declaration(source):
    """Return `source`, with any encoding declaration neutered."""
    if COOKIE_RE.search(source):
        source_lines = source.splitlines(True)
        for lineno in range(min(2, len(source_lines))):
            source_lines[lineno] = COOKIE_RE.sub("# (deleted declaration)", source_lines[lineno])
        source = "".join(source_lines)
    return source
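And the neutering step in isolation (a sketch, assuming coverage.py is importable):

from coverage.phystokens import neuter_encoding_declaration

print(neuter_encoding_declaration(u"# coding: utf-8\nx = 1\n"))

which prints the source with the cookie line replaced:

# (deleted declaration)
x = 1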