To run "pylint" tool on multiple Python files on Windows Machine - windows

I want to run the "pylint" tool on multiple Python files located under one directory, and I want one consolidated report for all of them.
I am able to run it on a single Python file, but I want to run it on a bunch of files at once.
Please help with the command for this.

I'm not a Windows user, but isn't "pylint directory/*.py" enough?
If the directory is a package (on the PYTHONPATH), you may also run "pylint directory".
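If your shell does not expand the wildcard for you (cmd.exe does not), a small helper script can collect the files and hand them all to pylint in one invocation, which gives a single consolidated report. A minimal sketch, assuming pylint is installed and on the PATH:
# run_pylint.py - lint every .py file under a directory in one pylint run
import pathlib
import subprocess
import sys

directory = sys.argv[1] if len(sys.argv) > 1 else "."
files = [str(p) for p in pathlib.Path(directory).rglob("*.py")]

if files:
    # One pylint invocation over all files produces one consolidated report
    subprocess.run(["pylint", *files])
else:
    print("No .py files found under", directory)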

Someone wrote a wrapper in Python 2 to handle this.
The code:
#! /usr/bin/env python
'''
Module that runs pylint on all python scripts found in a directory tree.
'''
import os
import re
import sys

total = 0.0
count = 0

def check(module):
    '''
    apply pylint to the file specified if it is a *.py file
    '''
    global total, count
    if module[-3:] == ".py":
        print "CHECKING ", module
        pout = os.popen('pylint %s' % module, 'r')
        for line in pout:
            if re.match("E....:.", line):
                print line
            if "Your code has been rated at" in line:
                print line
                score = re.findall("\d+.\d\d", line)[0]
                total += float(score)
                count += 1

if __name__ == "__main__":
    try:
        print sys.argv
        BASE_DIRECTORY = sys.argv[1]
    except IndexError:
        print "no directory specified, defaulting to current working directory"
        BASE_DIRECTORY = os.getcwd()
    print "looking for *.py scripts in subdirectories of ", BASE_DIRECTORY
    for root, dirs, files in os.walk(BASE_DIRECTORY):
        for name in files:
            filepath = os.path.join(root, name)
            check(filepath)
    print "==" * 50
    print "%d modules found" % count
    print "AVERAGE SCORE = %.02f" % (total / count)

Related

Conda: list all environments that use a certain package

How can one get a list of all environments that use a certain package in conda?
conda search
Since Conda v4.5.0, the conda search command has had an --envs flag for searching local environments for installed packages. See conda search -h:
--envs   Search all of the current user's environments. If run
         as Administrator (on Windows) or UID 0 (on unix),
         search all known environments on the system.
A use case given in the original Issue was finding environments with outdated versions of openssl packages:
conda search --envs 'openssl<1.1.1l'
Conda Python API
Here's an example of how to do it with the Conda Python package (run this in base environment):
import conda.gateways.logging
from conda.core.envs_manager import list_all_known_prefixes
from conda.cli.main_list import list_packages
from conda.common.compat import text_type

# package to search for; this can be a regex
PKG_REGEX = "pymc3"

for prefix in list_all_known_prefixes():
    exitcode, output = list_packages(prefix, PKG_REGEX)
    # only print envs with results
    if exitcode == 0 and len(output) > 3:
        print('\n'.join(map(text_type, output)))
This works as of Conda v4.10.0, but since it relies on internal methods, there's no guarantee it will keep working going forward. Perhaps this should be a feature request, say for a CLI command like conda list --any.
Script Version
Here is a version that uses arguments for package names:
conda-list-any.py
#!/usr/bin/env conda run -n base --no-capture-output python
## Usage: conda-list-any.py [PACKAGE ...]
## Example: conda-list-any.py numpy pandas
import conda.gateways.logging
from conda.core.envs_manager import list_all_known_prefixes
from conda.cli.main_list import list_packages
from conda.common.compat import text_type
import sys

for pkg in sys.argv[1:]:
    print("#" * 80)
    print("# Checking for package '%s'..." % pkg)
    n = 0
    for prefix in list_all_known_prefixes():
        exitcode, output = list_packages(prefix, pkg)
        if exitcode == 0 and len(output) > 3:
            n += 1
            print("\n" + "\n".join(map(text_type, output)))
    print("\n# Found %d environment%s with '%s'." % (n, "" if n == 1 else "s", pkg))
    print("#" * 80 + "\n")
The shebang at the top should ensure that it will run in base, at least on Unix/Linux systems.
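On Windows, where the shebang is not used, you can invoke the same interpreter explicitly with the flags from the shebang line, for example (assuming conda is on the PATH):
conda run -n base --no-capture-output python conda-list-any.py numpy pandas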
I've run into the same issue and constructed a PowerShell script that:
takes a regex query as an argument
goes through all available conda environments (and shows a progress bar while doing so)
lists all packages in each environment
returns the environments that have a package that match the query
per returned environment shows the package name that matched the query
The script is available at the bottom of this answer. I have saved it in a folder that is available in the PATH under the name search-pkg-env.ps1.
The script can be run as follows (here I search for environments with the pip package pymupdf):
PS > search-pkg-env.ps1 pymupdf
Searching 19 conda environments...
py39
- pymupdf 1.18.13
Script
if ($args.Length -eq 0) {
    Write-Host "Please specify a search term that matches the package name (regex)." -BackgroundColor DarkYellow -ForegroundColor White
    exit
}
$pkg_regex = $args[0]

$conda_envs = ConvertFrom-JSON -InputObject $([string]$(conda env list --json))
$total = $conda_envs.envs.Count
Write-Host "Searching $total conda environments..."

$counter = 1
ForEach ($conda_env in $conda_envs.envs) {
    if ($total -gt 0) {
        $progress = $counter / $total * 100
    }
    else {
        $progress = 0
    }
    # Split the full path into its elements
    $parts = $conda_env -Split '\\'
    # The last element has the env name
    $env_name = $parts[-1]
    Write-Progress -Activity "Searching conda environment $env_name for $pkg_regex" -PercentComplete $progress

    # Now search the provided package name in this environment
    $search_results = ConvertFrom-JSON -InputObject $([string]$(conda list $pkg_regex -n $env_name --json))
    If ($search_results.Count -gt 0) {
        Write-Host $env_name
        foreach ($result in $search_results) {
            $pkg_name = $result.name
            $pkg_version = $result.version
            Write-Host " - $pkg_name $pkg_version"
        }
    }
    $counter++
}
Python + Terminal solution
To jump directly to the solution:
Use the following GitHub script: Listing Environments Containing a Set of Packages
You don't need to install any library.
Simply copy the script, run it, and provide the set of packages needed (one or more).
From-scratch solution:
To find all environments that contain a set of packages (one or more), follow these steps (see the sketch after this list):
Get all the conda environments using Popen and conda env list; this lists all conda envs on your machine.
Loop over the packages and check whether each package exists in a conda env using Popen and conda list -n <environment>; this lists all packages available in that environment.
Check the output for the package using grep, findstr, or your platform's alternative on the terminal output, or do the search in Python.
For each package, save the matching environments in a list.
Intersect the lists to find the environments common to all packages, and voilà!
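A rough sketch of those steps in Python (an illustration only; it uses subprocess.run and the environment prefix path rather than Popen and the environment name, and assumes conda is on the PATH):
# Find conda environments that contain every package in PACKAGES (sketch).
import json
import subprocess

PACKAGES = ["numpy", "pandas"]  # example packages; replace with your own

# Step 1: list all conda environments
env_info = json.loads(subprocess.run(["conda", "env", "list", "--json"],
                                     capture_output=True, text=True).stdout)

# Steps 2-4: for each package, collect the environments that contain it
envs_per_pkg = []
for pkg in PACKAGES:
    found = set()
    for env in env_info["envs"]:
        listing = json.loads(subprocess.run(["conda", "list", "-p", env, pkg, "--json"],
                                            capture_output=True, text=True).stdout)
        if listing:  # a non-empty JSON list means at least one match
            found.add(env)
    envs_per_pkg.append(found)

# Step 5: environments common to all packages
print(set.intersection(*envs_per_pkg) if envs_per_pkg else set())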

Awk/sed script to convert a file from camelCase to underscores

I want to convert several files in a project from camelCase to underscore_case.
I would like to have a one-liner that only needs the filename to work.
You could use sed also.
$ echo 'fooBar' | sed -r 's/([a-z0-9])([A-Z])/\1_\L\2/g'
foo_bar
$ echo 'fooBar' | sed 's/\([a-z0-9]\)\([A-Z]\)/\1_\L\2/g'
foo_bar
The proposed sed answer has some issues:
$ echo 'FooBarFB' | sed -r 's/([a-z0-9])([A-Z])/\1_\L\2/g'
Foo_bar_fB
I suggest the following:
$ echo 'FooBarFB' | sed -r 's/([A-Z])/_\L\1/g' | sed 's/^_//'
foo_bar_f_b
After a few unsuccessful tries, I got this (I wrote it on several lines for readability, but we can remove the newlines to have a one-liner):
awk -i inplace '{
    while ( match($0, /(.*)([a-z0-9])([A-Z])(.*)/, cap))
        $0 = cap[1] cap[2] "_" tolower(cap[3]) cap[4];
    print
}' FILE
For the sake of completeness, we can adapt it to do the reverse (underscore to CamelCase):
awk -i inplace '{
    while ( match($0, /(.*)([a-z0-9])_([a-z])(.*)/, cap))
        $0 = cap[1] cap[2] toupper(cap[3]) cap[4];
    print
}' FILE
If you're wondering, -i inplace is a flag only available with gawk >= 4.1.0, and it modifies the file in place (as with sed -i). If your awk version is older, you have to do something like:
awk '{...}' FILE > FILE.tmp && mv FILE.tmp FILE
Hope it helps someone!
This might be what you want:
$ cat tst.awk
{
    head = ""
    tail = $0
    while ( match(tail,/[[:upper:]]/) ) {
        tgt = substr(tail,RSTART,1)
        if ( substr(tail,RSTART-1,1) ~ /[[:lower:]]/ ) {
            tgt = "_" tolower(tgt)
        }
        head = head substr(tail,1,RSTART-1) tgt
        tail = substr(tail,RSTART+1)
    }
    print head tail
}
$ cat file
nowIs theWinterOfOur disContent
From ThePlay About RichardIII
$ awk -f tst.awk file
now_is the_winter_of_our dis_content
From The_play About Richard_iII
but without your sample input and expected output it's just a guess.
Here is a Python script that converts a file with CamelCase functions to snake_case, then fixes up callers as well. Optionally it creates a commit with the changes.
Usage:
style.py -c -s tools/patman/terminal.py
#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0+
#
# Copyright 2021 Google LLC
# Written by Simon Glass <sjg#chromium.org>
#
"""Changes the functions and class methods in a file to use snake case, updating
other tools which use them"""

from argparse import ArgumentParser
import glob
import os
import re
import subprocess

import camel_case

# Exclude functions with these names
EXCLUDE_NAMES = set(['setUp', 'tearDown', 'setUpClass', 'tearDownClass'])

# Find function definitions in a file
RE_FUNC = re.compile(r' *def (\w+)\(')

# Where to find files that might call the file being converted
FILES_GLOB = 'tools/**/*.py'


def collect_funcs(fname):
    """Collect a list of functions in a file

    Args:
        fname (str): Filename to read

    Returns:
        tuple:
            str: contents of file
            list of str: List of function names
    """
    with open(fname, encoding='utf-8') as inf:
        data = inf.read()
    funcs = RE_FUNC.findall(data)
    return data, funcs


def get_module_name(fname):
    """Convert a filename to a module name

    Args:
        fname (str): Filename to convert, e.g. 'tools/patman/command.py'

    Returns:
        tuple:
            str: Full module name, e.g. 'patman.command'
            str: Leaf module name, e.g. 'command'
            str: Program name, e.g. 'patman'
    """
    parts = os.path.splitext(fname)[0].split('/')[1:]
    module_name = '.'.join(parts)
    return module_name, parts[-1], parts[0]
def process_caller(data, conv, module_name, leaf):
    """Process a file that might call another module

    This converts all the camel-case references in the provided file contents
    with the corresponding snake-case references.

    Args:
        data (str): Contents of file to convert
        conv (dict): Identifiers to convert
            key: Current name in camel case, e.g. 'DoIt'
            value: New name in snake case, e.g. 'do_it'
        module_name: Name of module as referenced by the file, e.g.
            'patman.command'
        leaf: Leaf module name, e.g. 'command'

    Returns:
        str: New file contents, or None if it was not modified
    """
    total = 0

    # Update any simple function calls into the module
    for name, new_name in conv.items():
        newdata, count = re.subn(fr'{leaf}.{name}\(',
                                 f'{leaf}.{new_name}(', data)
        total += count
        data = newdata

    # Deal with files that import symbols individually
    imports = re.findall(fr'from {module_name} import (.*)\n', data)
    for item in imports:
        #print('item', item)
        names = [n.strip() for n in item.split(',')]
        new_names = [conv.get(n) or n for n in names]
        new_line = f"from {module_name} import {', '.join(new_names)}\n"
        data = re.sub(fr'from {module_name} import (.*)\n', new_line, data)
        for name in names:
            new_name = conv.get(name)
            if new_name:
                newdata = re.sub(fr'\b{name}\(', f'{new_name}(', data)
                data = newdata

    # Deal with mocks like:
    # unittest.mock.patch.object(module, 'Function', ...
    for name, new_name in conv.items():
        newdata, count = re.subn(fr"{leaf}, '{name}'",
                                 f"{leaf}, '{new_name}'", data)
        total += count
        data = newdata

    if total or imports:
        return data
    return None
def process_file(srcfile, do_write, commit):
    """Process a file to rename its camel-case functions

    This renames the class methods and functions in a file so that they use
    snake case. Then it updates other modules that call those functions.

    Args:
        srcfile (str): Filename to process
        do_write (bool): True to write back to files, False to do a dry run
        commit (bool): True to create a commit with the changes
    """
    data, funcs = collect_funcs(srcfile)
    module_name, leaf, prog = get_module_name(srcfile)
    #print('module_name', module_name)
    #print(len(funcs))
    #print(funcs[0])
    conv = {}
    for name in funcs:
        if name not in EXCLUDE_NAMES:
            conv[name] = camel_case.to_snake(name)

    # Convert name to new_name in the file
    for name, new_name in conv.items():
        #print(name, new_name)
        # Don't match if it is preceded by a '.', since that indicates that
        # it is calling this same function name but in a different module
        newdata = re.sub(fr'(?<!\.){name}\(', f'{new_name}(', data)
        data = newdata

        # But do allow self.xxx
        newdata = re.sub(fr'self.{name}\(', f'self.{new_name}(', data)
        data = newdata
    if do_write:
        with open(srcfile, 'w', encoding='utf-8') as out:
            out.write(data)

    # Now find all files which use these functions and update them
    for fname in glob.glob(FILES_GLOB, recursive=True):
        with open(fname, encoding='utf-8') as inf:
            data = inf.read()
        newdata = process_caller(data, conv, module_name, leaf)
        if do_write and newdata:
            with open(fname, 'w', encoding='utf-8') as out:
                out.write(newdata)

    if commit:
        subprocess.call(['git', 'add', '-u'])
        subprocess.call([
            'git', 'commit', '-s', '-m',
            f'''{prog}: Convert camel case in {os.path.basename(srcfile)}

Convert this file to snake case and update all files which use it.
'''])
def main():
    """Main program"""
    epilog = 'Convert camel case function names to snake in a file and callers'
    parser = ArgumentParser(epilog=epilog)
    parser.add_argument('-c', '--commit', action='store_true',
                        help='Add a commit with the changes')
    parser.add_argument('-n', '--dry_run', action='store_true',
                        help='Dry run, do not write back to files')
    parser.add_argument('-s', '--srcfile', type=str, required=True,
                        help='Filename to convert')
    args = parser.parse_args()
    process_file(args.srcfile, not args.dry_run, args.commit)


if __name__ == '__main__':
    main()
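Note that the script imports a helper module called camel_case, which is not shown in the answer. A minimal stand-in for its to_snake() function might look like this (an assumption about its behaviour, not the original module):
# camel_case.py - hypothetical stand-in for the helper module used above
import re

def to_snake(name):
    """Convert a CamelCase or mixedCase identifier to snake_case."""
    # First split an upper-case letter that starts a capitalised word,
    # then split a lower-case letter or digit followed by an upper-case letter.
    s = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)
    s = re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s)
    return s.lower()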

Figuring out pipes in python

I am currently writing a program in Python and I am stuck. So my question is:
I have a program that reads a file and prints some lines to stdout, like this:
#imports
import sys

#number of args
numArgs = len(sys.argv)

#ERROR if not enough args were committed
if numArgs <= 1:
    sys.exit("Not enough arguments!")

#naming input file from args
Input = sys.argv[1]

#opening files
try:
    fastQ = open(Input , 'r')
except IOError, e:
    sys.exit(e)

#parsing through file
while 1:
    #saving the lines
    firstL = fastQ.readline()
    secondL = fastQ.readline()
    #you could maybe skip these lines to save ram
    fastQ.readline()
    fastQ.readline()
    #make sure that there are no blank lines in the file
    if firstL == "" or secondL == "":
        break
    #edit the Header to begin with '>'
    firstL = '>' + firstL.replace('#' , '')
    sys.stdout.write(firstL)
    sys.stdout.write(secondL)

#close both files
fastQ.close()
Now I want to rewrite this program so that I can run a command line like: zcat "textfile" | python "myprogram" > "otherfile". So I looked around and found subprocess, but I can't seem to figure out how to do it. Thanks for your help.
EDIT:
Now, if what you are doing is trying to write a Python script to orchestrate the execution of both zcat and myprogram, THEN you may need subprocess. – rchang
The intent is to have the "textfile" and the program on a cluster, so I don't need to copy any files from the cluster. I just want to log in on the cluster and use the command zcat "textfile" | python "myprogram" > "otherfile", so that zcat and the program do their thing and I end up with "otherfile" on the cluster. I hope you understand what I want to do.
Edit #2:
my solution
#imports
import sys
import fileinput

# Counter, maybe there is a better way
count = 0

# Iteration over the input
for line in fileinput.input():
    # Selection of the header
    if count == 0:
        # Format the header
        newL = '>' + line.replace('#' , '')
        # Print the header without adding a newline
        sys.stdout.write(newL)
    # Selection of the sequence
    elif count == 1:
        # Print the sequence
        sys.stdout.write(line)
    # Increment the counter
    count += 1
    count = count % 4
THX
You could use fastQ = sys.stdin to read the input from stdin instead of the file or (more generally) fastQ = fileinput.input() to read from stdin and/or files specified on the command-line.
There is also fileinput.hook_compressed, so that you don't need zcat and can read the compressed file directly instead:
$ myprogram textfile >otherfile
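For example, here is a minimal sketch of the same filtering loop that reads plain or gzip/bzip2-compressed files directly (assuming the four-line record layout from the question):
#!/usr/bin/env python
# Read records from stdin and/or files given on the command line;
# hook_compressed lets fileinput open .gz and .bz2 files itself, so zcat is not needed.
import fileinput
import sys

count = 0
for line in fileinput.input(openhook=fileinput.hook_compressed):
    if isinstance(line, bytes):      # compressed files may be opened in binary mode
        line = line.decode()
    if count == 0:                   # header line
        sys.stdout.write('>' + line.replace('#', ''))
    elif count == 1:                 # sequence line
        sys.stdout.write(line)
    count = (count + 1) % 4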

How to display output of command in python

I want to run the command df -h to display disk information, but when I run my code nothing is displayed in the terminal. I even tried "df -h > out.txt" and then cat out.txt, but that didn't work either.
import sys
import os
import subprocess

def main():
    os.system('clear')
    print("Options: \n")
    print("1 - Show disk info")
    print("2 - Quit \n")
    selection = input('> ')
    if selection == 1:
        subprocess.call(['df -h'], shell=True)
    elif selection == 2:
        sys.exit(0)

if __name__ == "__main__":
    main()
Reading user input with input() returns a string. However, your if/elif statements are comparing the contents of selection with integers, so the comparison will always be False. As a workaround, use this:
selection = int(input('> '))
if selection == 1:
    ...
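If you also want to capture the command's output in Python (for example to write it to a file yourself) rather than letting df print straight to the terminal, a small sketch using subprocess.run:
import subprocess

selection = input('> ')
if selection == '1':                 # input() returns a string, so compare with '1'
    result = subprocess.run(['df', '-h'], capture_output=True, text=True)
    print(result.stdout)             # display it, or write it to a file instead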

How to provide flags to a Ruby script, like -v and -o, and place code accordingly?

I have created a Ruby script that I want to run with some flags on the console: say, the -v flag prints output on the console, and -o stores output in a new file, with the file name taken from the console using gets().
My code has the following structure:
puts "Enter filename to analyze:\n\n"
filename = gets().chomp
puts "Provide filename to store result in new text file:\n\n"
output = gets().chomp
filesize = File.size(filename)
puts "File size in Bytes:\n#{filesize.to_i}\n"
pagecontent = filesize - 20
puts "\n\nData:\n#{pagecontent}\n\n"
File.open(filename,'r') do |file|
#whole process with few do..end in between that I want to do in 2 different #ways.
#If I provide -v flag on console result of this code should be displayed on console
#and with -o flag it should be stored in file with filename provided on console #stored in output variable declared above
end
end
Use the stdlib OptionParser (require 'optparse') to declare the -v and -o switches, then branch on the parsed options inside your processing block.
