How can I make pylint print the filename (and maybe a counter) of the file it is currently checking?
The documentation shows how to format messages when a problem is found, but not how to report progress: https://pylint.readthedocs.io/en/latest/user_guide/output.html
This currently isn't possible. It's easy to add to pylint's code:
index 258bfdc3..30bac9f1 100644
--- a/pylint/lint.py
+++ b/pylint/lint.py
@@ -839,6 +839,8 @@ class PyLinter(config.OptionsManagerMixIn,
if not self.should_analyze_file(modname, filepath, is_argument=is_arg):
continue
+ print(modname)
+
self.set_current_module(modname, filepath)
# get the module representation
ast_node = self.get_ast(filepath, modname)
but I never got around to making this a proper pull request with an option and everything.
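Until something like that lands in pylint itself, one workaround is to drive pylint from a small wrapper and print the progress yourself. A rough sketch (the script name progress_lint.py is made up; note that linting files one at a time makes pylint miss some cross-module checks such as duplicate-code):
# progress_lint.py -- hypothetical helper, not part of pylint
import subprocess
import sys

files = sys.argv[1:]
for i, path in enumerate(files, start=1):
    # Report which file is being checked and how far along we are.
    print(f"[{i}/{len(files)}] {path}", flush=True)
    subprocess.run([sys.executable, "-m", "pylint", path])
You would invoke it with the files to check, e.g. python progress_lint.py $(git ls-files '*.py').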
I was trying to print a backtrace using dladdr(). info.dli_fname in the following code snippet displays the file name of the ELF file. Could you please tell me if it is possible to resolve and print the name of the source file and the line number programmatically, without the help of addr2line or gdb?
Code:
#define _GNU_SOURCE   /* needed for dladdr() and Dl_info */
#include <dlfcn.h>
#include <stdio.h>

void print_caller(void)
{
    Dl_info info = {0};
    /* Return address of this call, i.e. an address inside the caller. */
    void *pc = __builtin_return_address(0);
    int rc = dladdr(pc, &info);
    if (rc != 0)
        printf(" ==> %p: %s (in %s)\n", pc, info.dli_sname, info.dli_fname);
}
Output:
$ ./a.out
==> 0x55a6b04a1589: foo2 (in ./a.out)
tell me if it is possible to resolve and print the name of the source file and line number programmatically
It is definitely possible -- addr2line and gdb do that.
But it is very non-trivial -- it requires understanding and decoding (possibly multiple) complicated debugging info formats.
If you only care about a single platform (looks like Linux), things are a bit easier -- you only need to decode DWARF.
But that format is still pretty complicated. You should start with a helper library, such as libdwarf.
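If you want to see what that decoding involves before committing to libdwarf, the same address-to-line lookup can be prototyped with Python's pyelftools, which ships a pure-Python DWARF parser. A rough sketch (assumes the binary was compiled with -g; the function name addr_to_line is only illustrative):
# addr_to_line.py -- illustrative sketch using pyelftools (pip install pyelftools)
from elftools.elf.elffile import ELFFile

def addr_to_line(path, address):
    """Map a file-relative code address to (source file, line), or None."""
    with open(path, 'rb') as f:
        elf = ELFFile(f)
        if not elf.has_dwarf_info():
            return None
        dwarf = elf.get_dwarf_info()
        # Walk each compilation unit's line program, looking for the pair of
        # consecutive row states whose address range covers `address`.
        for cu in dwarf.iter_CUs():
            lineprog = dwarf.line_program_for_CU(cu)
            if lineprog is None:
                continue
            prev = None
            for entry in lineprog.get_entries():
                if entry.state is None:
                    continue
                if prev and prev.address <= address < entry.state.address:
                    fname = lineprog['file_entry'][prev.file - 1].name
                    return fname, prev.line
                prev = None if entry.state.end_sequence else entry.state
    return None

# For a PIE like the one in the question, subtract the load base from the
# runtime pc first (e.g. 0x55a6b04a1589 - base) to get a file-relative address.
print(addr_to_line('./a.out', 0x1589))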
I have an Android XML (string) file that is edited by a Ruby script. I would like to list and output the changes that were made afterwards. I have tried it with Nokogiri and nokogiri/diff, but it does not give the desired result.
I also have the feeling that it has problems when a new line is added in the middle of the file. All in all, I think it would be easiest if I could use git diff.
I've also found the ruby-git gem, but I still could not get it to work, especially because I only need the diff of a specific file.
require 'git'
Git.configure do |config|
#not sure if I actually need something?
end
g = Git.open(path_to_my_dir, :log => Logger.new(STDOUT))
g.diff(path_to_file)
#or
g.diff().path(path_to_file)
Can someone please help me out? :-(
You were thinking in the right direction.
For this purpose, use:
require 'git'
g = Git.open('path/to/dir')
g.diff('your.file').patch #=> changes in your.file
For example, say we had the empty files git.rb and smth in our git repo.
Then we changed them and checked the difference:
$ git diff
diff --git a/git.rb b/git.rb
index e69de29..3ff224d 100644
--- a/git.rb
+++ b/git.rb
@@ -0,0 +1,3 @@
+require 'git'
+g = Git.open(__dir__)
+puts g.diff('smth').patch
diff --git a/smth b/smth
index e69de29..7c5bd35 100644
--- a/smth
+++ b/smth
@@ -0,0 +1 @@
+we want to know changes
As you can already guess from the modified git.rb, we will now see only the changes in smth:
$ ruby git.rb
diff --git a/smth b/smth
index e69de29..7c5bd35 100644
--- a/smth
+++ b/smth
@@ -0,0 +1 @@
+we want to know changes
If there are no changes, you will get an empty string "".
You might want to use the McIlroy-Hunt longest common subsequence (LCS) algorithm directly instead of using derivatives/wrappers of it.
See https://github.com/halostatue/diff-lcs
The diff will differ depending on whether you compare the changes of a vs. b or of b vs. a, and you can of course run it against an array or against a whole file.
The gem also offers the classic diff tool formatting (as used by diff or git) if you prefer that over its direct output.
Using Python's standard logging module, the line number of the originating log call can be added using %(lineno)s.
How can this be accomplished using structlog?
EDIT:
Structlog version 21.5.0 introduced the CallsiteParameterAdder processor, so this should be a much more straightforward process now, as @vitvlkv's answer shows.
I had a similar need and ended up creating a custom processor.
I took a look at what structlog does to output the module and line number when it is told to "pretend" to format in a mode compatible with the logging library (meaning: when it's using a regular stdlib.LoggerFactory), and I found inspiration in that. The key was the following passage...
By using structlog’s structlog.stdlib.LoggerFactory, it is also ensured that variables like function names and line numbers are expanded correctly in your log format.
... from this documentation page
The code seems to keep looking for execution frames until it finds one that is in a non logging-related module.
I have all the setup for structlog inside a module called my_libs.util.logger, so I want to get the first frame that is NOT inside that module. To do that, I add my logging-related module my_libs.util.logger to those exclusions. That's what the additional_ignores in the code below does.
In the example I hardcoded the module's name ('my_libs.util.logger') in the exclusion list for clarity, but if you have a similar setup you'll probably be better off using __name__ instead. What this does is ignore execution frames that exist only because of the logging machinery in place. You can look at it as a way of skipping calls that occurred as part of the process of actually logging the message; or, put differently, calls that happened after the logging.info("Foo") call in the actual module/line that you do want to report.
Once it finds the right frame, extracting any kind of information (module name, function name, line number... ) is very easy, particularly using the inspect module. I chose to output the module name and the line number, but more fields could be added.
# file my_libs/util/logger.py
import inspect
import logging

import structlog
from structlog._frames import _find_first_app_frame_and_name


def show_module_info_processor(logger, _, event_dict):
    # If by any chance the record already contains a `modline` key,
    # (very rare) move that into a 'modline_original' key
    if 'modline' in event_dict:
        event_dict['modline_original'] = event_dict['modline']
    f, name = _find_first_app_frame_and_name(additional_ignores=[
        "logging",
        'my_libs.util.logger',  # could just be __name__
    ])
    if not f:
        return event_dict
    frameinfo = inspect.getframeinfo(f)
    if not frameinfo:
        return event_dict
    module = inspect.getmodule(f)
    if not module:
        return event_dict
    if frameinfo and module:
        # The `if` above is probably redundant, since we already
        # checked for frameinfo and module but... eh... paranoia.
        event_dict['modline'] = '{}:{}'.format(
            module.__name__,
            frameinfo.lineno,
        )
    return event_dict


def setup_structlog(env=None):
    # . . .
    ch.setFormatter(logging.Formatter('%(message)s'))
    logging.getLogger().handlers = [ch]
    processors = [
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        # . . . more . . .
        show_module_info_processor,  # THIS!!!
        structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M:%S"),
        structlog.processors.format_exc_info,
        structlog.processors.StackInfoRenderer(),
        # . . . more . . .
    ]
    # . . . more . . .
    structlog.configure_once(
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        context_class=structlog.threadlocal.wrap_dict(dict),
        processors=processors,
    )
This produces an output like:
server_1  | INFO [my_libs.hdfs] 2019-07-01 01:01:01 [info ] Initialized HDFS [my_libs.hdfs] modline=my_libs.hdfs:31
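For reference, a caller producing a line like the one above could be as simple as this (the module my_libs/hdfs.py and its contents are hypothetical):
# file my_libs/hdfs.py -- hypothetical caller
import structlog

log = structlog.get_logger()

def init_hdfs():
    log.info("Initialized HDFS")  # modline will point at this module and line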
According to the official docs, you may add:
import structlog
from structlog.processors import CallsiteParameter

structlog.configure(
    processors=[
        # ...
        # Add callsite parameters.
        structlog.processors.CallsiteParameterAdder(
            [
                CallsiteParameter.FILENAME,
                CallsiteParameter.FUNC_NAME,
                CallsiteParameter.LINENO,
            ],
        ),
        # ...
    ],
)
So, I guess there is no need to write a custom processor for this. It was hard to find in the official docs though.
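A quick usage sketch with that configuration (assuming the elided # ... parts include a renderer such as structlog.dev.ConsoleRenderer; the rendered line below is approximate):
import structlog

log = structlog.get_logger()

def main():
    log.info("something happened")
    # rendered roughly as:
    # ... something happened    filename=app.py func_name=main lineno=7

main()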
Have a look at this answer to the more general question of how to get a line number.
https://stackoverflow.com/a/3056270/5909155
This cannot be bound to the logger with log.bind(...) because it has to be evaluated each time you log. Thus, you should add a key-value pair like this
logger.log(..., lineno=inspect.getframeinfo(inspect.currentframe()).lineno)
each time. Maybe wrap this in a function, though, like this: https://stackoverflow.com/a/20372465/5909155
Don't forget to
import inspect
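A minimal sketch of such a wrapper (the helper name info_with_lineno is made up); it reads the caller's frame, so the reported line is the call site rather than the helper itself:
import inspect

import structlog

logger = structlog.get_logger()

def info_with_lineno(event, **kwargs):
    # One frame up (f_back) is the caller; report its current line number.
    caller = inspect.currentframe().f_back
    logger.info(event, lineno=caller.f_lineno, **kwargs)

info_with_lineno("something happened")  # logs the lineno of this call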
When running pylint on a python file it shows me warnings regarding TODO comments by default. E.g.:
************* Module foo
W:200, 0: TODO(SE): fix this! (fixme)
W:294, 0: TODO(SE): backlog item (fixme)
W:412, 0: TODO(SE): Delete bucket? (fixme)
While I do find this behavior useful, I would like to know of a way to temporarily and/or permanently turn these specific warnings on or off.
I am able to generate a pylint config file:
pylint --generate-rcfile > ~/.pylintrc
I'm just not sure what to put in this file to disable warnings for TODO comments.
In the generated config file, you should see a section:
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO
Simply drop TODO from the "notes" list.
The config file is found at
~/.pylintrc
If you have not generated the config file, this can be done with
pylint --generate-rcfile > ~/.pylintrc
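For example, after removing TODO, the section would look like this (FIXME and XXX comments will still be flagged):
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX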
Along with the solution posted by @sthenault, where you could disable all such warnings, Pylint also allows you to ignore a single line (helpful if you want to deal with it in the future), like so:
A_CONSTANT = 'ugh.' # TODO: update value # pylint: disable=fixme
or by stating the Rule ID:
A_CONSTANT = 'ugh.' # TODO: update value # pylint: disable=W0511
IMHO your code should not contain # TODO comments, but during development you may need a TODO for a short period of time, and in that case pylint will bother you. To avoid this, the best approach is to globally disable the check in the pylintrc by adding fixme to the disable list, like this:
[MESSAGES CONTROL]
# globally disable pylint checks (comma separated)
disable=fixme,...
This gives you time to fix all your TODOs, and once that is done you can remove fixme from the pylintrc. Note that if you use an older pylint version you will need to use W0511 instead of fixme. For more detail see https://pylint.pycqa.org/en/stable/technical_reference/features.html#messages-control-options
Changing the pylintrc notes as proposed in the first answer is bad practice in my opinion. notes is designed to configure which comment tags trigger the fixme warning, not to disable the warning.
In our projects we have a pylint.cfg file. We use the --rcfile pylint option to point to that file.
In pylint.cfg, I can disable checker W0511, which is the checker that complains about "TODO" and similar terms in comments. Just add W0511 to the comma-separated list for the disable parameter.
But remember that, as Uncle Bob Martin says, a TODO is not an excuse to leave bad code in the system. The code should be scanned regularly to remove TODOs, and pylint and/or SonarQube issues can work as good reminders and motivation for doing so.
I am trying to read a gist containing a dput from GitHub:
library(RCurl)
data <- getURL("https://gist.githubusercontent.com/aronlindberg/848b8efef154d0e7fdb4/raw/5bf4bb864cc4c1db0f66da1be85515b4fa19bf6b/pull_lists")
pull_lists <- dget(textConnection(data))
This generates:
Error: '\U' used without hex digits in character string starting ""@@ -1,7 +1,9 @@
module ActionDispatch
module Http
module URL
- # Returns the complete \U"
Which I think is a Ruby error message rather than an R error. Now consider this:
data <- getURL("https://gist.githubusercontent.com/aronlindberg/b6b934b39e3c3378c3b2/raw/9b1efe9340c5b1c8acfdc90741260d1d554b2af0/data")
pull_lists2 <- dget(textConnection(data))
This seems to work fine. The former gist is rather large, 1.7 MB. Could this be why I can't read it from GitHub? If not, why?
The gist that you created does not have a .R file in it, since pull_lists does not have an extension. I forked your gist to this one and added the extension. Now it is possible to source the gist and save it to a value.
library("devtools")
pull_lists <- source_gist("a7b157cec3b9259fc5d1")