How Checks are Run
In Flake8 2.x, Flake8 delegated check running to pep8. In 3.0, Flake8 takes on that responsibility. This has allowed for simpler handling of the --jobs parameter (using multiprocessing) and has simplified our fallback if something goes awry with concurrency.
At the lowest level we have a FileChecker. Instances of FileChecker are created for each file to be analyzed by Flake8. Each instance has a copy of all of the plugins registered with setuptools.
The FileChecker instances are managed by an instance of Manager. The Manager instance handles creating sub-processes with the multiprocessing module and falling back to running checks in serial if an operating system level error arises. When creating FileChecker instances, the Manager is responsible for determining if a particular file has been excluded.
Unfortunately, since Flake8 took over check running from pep8/pycodestyle, it also had to take over parsing and processing files for the checkers to use. Since it could not reuse pycodestyle's functionality (pycodestyle does not cleanly separate processing from check running), that functionality was isolated in the FileProcessor class. We also moved several helper functions into the flake8.processor module (see also Processor Utility Functions).
FileChecker(filename, checks, options)
Manage running checks for a file and aggregate the results.
Run physical checks if and only if it is at the end of the line.
Handle the logic when encountering a newline token.
Process tokens and trigger checks.
This can raise a flake8.exceptions.InvalidSyntax exception. Instead of using this directly, you should use run_checks().
report(error_code, line_number, column, text)
Report an error by storing it in the results list.
Run all checks expecting an abstract syntax tree.
Run the check in a single plugin.
Run checks against the file.
Run all checks expecting a logical line.
Run all checks for a given physical line.
A single physical check may return multiple errors.
Manager(style_guide, arguments, checker_plugins)
Manage the parallelism and checker instances for each plugin and file.
This class will be responsible for the following:
- Determining the parallelism of Flake8, e.g.:
  - Do we use multiprocessing or is it unavailable?
  - Do we automatically decide on the number of jobs to use or did the user provide that?
- Falling back to a serial way of processing files if we run into an OSError related to multiprocessing
- Organizing the results of each checker so we can group the output together and make our output deterministic.
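The job-count decision described above can be sketched roughly as follows. Note that `decide_job_count` and the `"auto"` convention are illustrative assumptions for this sketch, not Flake8's actual internals:

```python
import multiprocessing


def decide_job_count(jobs_option):
    """Sketch: pick a job count from a --jobs-style option.

    ``jobs_option`` is assumed to be either the string "auto" or an
    integer-like string; a return value of 0 means "run in serial".
    """
    try:
        cpu_count = multiprocessing.cpu_count()
    except NotImplementedError:
        # multiprocessing may be unavailable on some platforms
        return 0
    if jobs_option == "auto":
        return cpu_count
    return int(jobs_option)
```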
Check if a path is excluded.
Parameters: path (str) – Path to check against the exclude patterns.
Returns: True if there are exclude patterns and the path matches, otherwise False.
Return type: bool
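A minimal sketch of exclude-pattern matching using fnmatch. The real option handling and pattern normalization in Flake8 are more involved; this only illustrates the documented return behaviour:

```python
import fnmatch
import os.path


def is_path_excluded(path, exclude_patterns):
    """Return True if there are exclude patterns and the path matches."""
    if not exclude_patterns:
        return False
    basename = os.path.basename(path)
    absolute = os.path.abspath(path)
    return any(
        fnmatch.fnmatch(basename, pattern) or fnmatch.fnmatch(absolute, pattern)
        for pattern in exclude_patterns
    )
```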
Create checkers for each file.
Report all of the errors found in the managed file checkers.
This iterates over each of the checkers and reports the errors sorted by line number.
Returns: A tuple of the total results found and the results reported.
Return type: tuple(int, int)
Run all the checkers.
This will intelligently decide whether to run the checks in parallel or whether to run them in serial.
If running the checks in parallel causes a problem (e.g., https://gitlab.com/pycqa/flake8/issues/74) this also implements fallback to serial processing.
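The parallel-with-serial-fallback behaviour described here can be sketched as follows (`run_one` and `checkers` are stand-ins, not Flake8's real objects):

```python
from multiprocessing import Pool


def run_checks(run_one, checkers):
    """Sketch: try a process pool, fall back to serial on OSError."""
    try:
        with Pool() as pool:
            return pool.map(run_one, checkers)
    except OSError:
        # e.g. semaphores or sub-processes are unavailable on this platform
        return [run_one(checker) for checker in checkers]
```

Either path returns the results in the same order as the input, which helps keep the output deterministic.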
Run the checkers in parallel.
Run the checkers in serial.
Start checking files.
Parameters: paths (list) – Path names to check. This is passed directly to make_checkers().
Stop checking files.
FileProcessor(filename, options, lines=None)
Processes a file and holds state.
This processes a file by generating tokens, logical and physical lines, and AST trees. This also provides a way of passing state about the file to checks expecting that state. Any public attribute on this object can be requested by a plugin. The known public attributes are:
Number of preceding blank lines
Number of blank lines
Build an abstract syntax tree from the list of lines.
Build a logical line from the current tokens list.
Build the mapping, comments, and logical line lists.
Current checker state
Delete the first token in the list of tokens.
Return the complete set of tokens for a file.
Accessing this attribute may raise an InvalidSyntax exception.
Tokenize the file and yield the tokens.
Raises: flake8.exceptions.InvalidSyntax – If a tokenize.TokenError is raised while generating tokens.
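A sketch of that translation using the stdlib tokenize module. `InvalidSyntax` here is a local stand-in for flake8.exceptions.InvalidSyntax:

```python
import io
import tokenize


class InvalidSyntax(Exception):
    """Stand-in for flake8.exceptions.InvalidSyntax."""


def generate_tokens(source):
    """Yield tokens for ``source``, translating TokenError."""
    try:
        yield from tokenize.generate_tokens(io.StringIO(source).readline)
    except tokenize.TokenError as exc:
        raise InvalidSyntax(str(exc))
```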
User provided option for hang closing
Character used for indentation
Current level of indentation
Context-manager to toggle the multiline attribute.
Generate the keyword arguments for a list of parameters.
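Generating keyword arguments for a parameter list can be sketched with inspect: each parameter name of a check is looked up as an attribute on the processor. `FakeProcessor`, `arguments_for`, and `my_check` are illustrations of the idea, not Flake8's exact code:

```python
import inspect


class FakeProcessor:
    # stand-in for FileProcessor's public attributes
    logical_line = "x = 1"
    indent_level = 0


def arguments_for(check, processor):
    """Map each parameter name of ``check`` to a processor attribute."""
    params = inspect.signature(check).parameters
    return {name: getattr(processor, name) for name in params}


def my_check(logical_line, indent_level):
    ...
```

Here `arguments_for(my_check, FakeProcessor())` yields `{'logical_line': 'x = 1', 'indent_level': 0}`, which is how a plugin can request any public attribute simply by naming it as a parameter.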
Line number in the file
Current logical line
Maximum docstring / comment line length as configured by the user
Maximum line length as configured by the user
Whether the current physical line is multiline
Get the next line from the list.
Record the previous logical line.
This also resets the tokens list and the blank_lines count.
False, included for compatibility
Retrieve the line which will be used to determine noqa.
Previous level of indentation
Previous logical line
Previous unindented (i.e. top-level) logical line
Read the lines for this file checker.
Read the lines for a file.
Read the lines from standard in.
Reset the blank_before attribute to zero.
Check if flake8: noqa is in the file to be ignored.
Returns: True if a line matches
defaults.NOQA_FILE, otherwise False
Return type: bool
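As a sketch, a file-level check like this scans each line against a `# flake8: noqa` pattern. The regex below only approximates defaults.NOQA_FILE, which may differ:

```python
import re

# approximation of defaults.NOQA_FILE; the real pattern may differ
NOQA_FILE = re.compile(r"\s*#\s*flake8[:=]\s*noqa", re.IGNORECASE)


def should_ignore_file(lines):
    """Return True if any line matches the file-level noqa pattern."""
    return any(NOQA_FILE.search(line) for line in lines)
```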
Split a physical line based on new-lines.
This also auto-increments the line number for the caller.
Strip the UTF bom from the lines of the file.
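A minimal sketch of BOM stripping; only the first line of a file can carry a BOM:

```python
def strip_utf_bom(lines):
    """Remove a UTF-8 BOM from the first line, if present."""
    if lines and lines[0].startswith("\ufeff"):
        lines[0] = lines[0][len("\ufeff"):]
    return lines
```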
Current set of tokens
Total number of lines in the file
Update the checker_state attribute for the plugin.
Update the indent level based on the logical line mapping.
Verbosity level of Flake8
Note that we visited a new blank line.
Count the number of parentheses.
Return the amount of indentation.
Tabs are expanded to the next multiple of 8.
>>> expand_indent('    ')
4
>>> expand_indent('\t')
8
>>> expand_indent('   \t')
8
>>> expand_indent('        \t')
16
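A short sketch implementing the described expansion, where a tab advances the count to the next multiple of 8:

```python
def expand_indent(line):
    """Return the indentation width, expanding tabs to multiples of 8."""
    indent = 0
    for char in line:
        if char == "\t":
            indent = indent // 8 * 8 + 8
        elif char == " ":
            indent += 1
        else:
            break
    return indent
```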
Check if the token is an end-of-line token.
Check if this is a multiline string.
Log a token to a provided logging object.
Replace contents with ‘xxx’ to prevent syntax matching.
>>> mutate_string('"abc"')
'"xxx"'
>>> mutate_string("'''abc'''")
"'''xxx'''"
>>> mutate_string("r'abc'")
"r'xxx'"
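A sketch of the mutation, modeled on pycodestyle's mute_string helper (the exact flake8 implementation may differ). It locates the opening quote, widens the span for triple-quoted strings, and replaces the body with 'x' characters:

```python
def mutate_string(text):
    """Replace the contents of a string token with 'x' characters."""
    # index of the first character after the opening quote
    start = text.index(text[-1]) + 1
    end = len(text) - 1
    # widen the quote span for triple-quoted strings
    if text[-3:] in ('"""', "'''"):
        start += 2
        end -= 2
    return text[:start] + "x" * (end - start) + text[end:]
```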
Check if the token type is a newline token type.