How can I execute a Python script with a command-line flag? - python-2.6

I have this bit of code:
of = open("oldfile")
nf = open("newfile", 'w')
for line in of:
    if len(line) > 17:
        nf.write(line)
of.close()
nf.close()
and instead of hard-coding 17, I want to be able to pass it in as a variable. I also want to put the script in my scripts directory and execute it directly. If there is no flag, it could print something like a usage message for 'scriptname'. If there is a flag, as there is below, it would execute the code.
$ myscriptname -l 17 oldfile newfile

See the optparse module for checking the flag and setting the value, or the newer (and better) argparse if you're willing to use 2.7+. As for putting it in your scripts directory, I don't quite understand what you want exactly.

If you just want quick and dirty access to the command line parameters:
import sys
print sys.argv # <-- this is a list containing all the command line parameters
If you want some more control, you can use the optparse module.
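To make that concrete, here is a minimal optparse sketch of the script from the question (the -l flag and the oldfile/newfile arguments mirror the question; `filter_file` and `main` are illustrative names, not anything from the original):

```python
import optparse

def filter_file(min_len, src, dst):
    # copy only the lines longer than min_len characters
    of = open(src)
    nf = open(dst, "w")
    for line in of:
        if len(line) > min_len:
            nf.write(line)
    of.close()
    nf.close()

def main(argv):
    parser = optparse.OptionParser(usage="%prog -l LENGTH oldfile newfile")
    parser.add_option("-l", "--length", dest="length", type="int", default=17,
                      help="minimum line length to keep (default: %default)")
    opts, args = parser.parse_args(argv)
    if len(args) != 2:
        parser.error("expected exactly two arguments: oldfile newfile")
    filter_file(opts.length, args[0], args[1])
```

Drop it in your scripts directory with a `#!/usr/bin/env python` line and a trailing `main(sys.argv[1:])` call, make it executable, and `myscriptname -l 17 oldfile newfile` works; when the arguments are wrong, `parser.error` prints the usage string for you.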

Related

How to see syntax errors reported with actual line numbers in the parent script when Perl is embedded within shell script?

For no justifiable reason at all, I have a pretty substantial Perl script embedded within a Bash function that is being invoked within an autoenv .env file.
It looks something like this:
perl='
$inverse = "\e[7m";
$invoff = "\e[27m";
$bold = "\e[1m";
⋮
'
perl -e "$perl" "$inputfile"
I understand that standalone Perl scripts and the PATH variable are a thing, and I understand that Term::ANSIColor is a thing. This is not about that.
My question is, if there's a syntax error in the embedded Perl code, how can I get Perl to report the actual line number within the parent shell script?
For example, say the perl= assignment occurs on line 120 within that file, but there's a syntax error on the 65th line of actual Perl code. I get this:
syntax error at -e line 65, near "s/(#.*)$/$comment\1$endcomment/"
Execution of -e aborted due to compilation errors.
…but I want to see this (the actual line number in the parent script) instead:
syntax error at -e line 185, near "s/(#.*)$/$comment\1$endcomment/"
Things I've tried (that didn't work):
assigning to __LINE__
I don't even know why I thought that would work; it's not a variable, it's a constant, and you get an error stating as much
assigning to $. ($INPUT_LINE_NUMBER with use English)
I was pretty sure this wasn't going to work anyway, because this is like NR in Awk, and that's clearly not what it's for
As described in perlsyn, you can use the following directive to set the line number and (optionally) the file name of the subsequent line:
#line 42 "file.pl"
This means that you could use
#!/bin/sh
perl="#line 4 \"$0\""'
warn("test");
'
perl -e "$perl"
Output:
$ ./a.sh
test at ./a.sh line 4.
There's no clean way to avoid hardcoding the line number when using sh, but it is possible.
#!/bin/sh
script_start=$( perl -ne'if (/^perl=/) { print $.+1; last }' -- "$0" )
perl="#line $script_start \"$0\""'
warn("test");
'
perl -e "$perl"
On the other hand, bash provides the current line number.
#!/bin/bash
script_start=$(( LINENO + 2 ))
perl="#line $script_start \"$0\""'
warn("test");
'
perl -e "$perl"
There is this useful tidbit in the perlrun man page, under the section for -x, which "tells Perl that the program is embedded in a larger chunk of unrelated text, such as in a mail message."
All references to line numbers by the program (warnings, errors, ...) will treat the #! line as the first line. Thus a warning on the 2nd line of the program, which is on the 100th line in the file will be reported as line 2, not as line 100. This can be overridden by using the #line directive. (See Plain Old Comments (Not!) in perlsyn)
Based on the bolded statement, adding #line NNN (where NNN is the actual line number of the parent script where that directive appears) achieves the desired effect:
perl='#line 120
$inverse = "\e[7m";
$invoff = "\e[27m";
$bold = "\e[1m";
⋮
'
⋮

Trying to create a Ruby script. I want to append Strings containing commands to a file

mysqldump = "mysqldump"
`#{mysqldump} > backup_file.sql`
I'm supposed to append several of those mysqldump Strings (I simplified it for this example; normally line 2 would have the username and password as options) into the SQL file.
The problem is line 2, when I try to call the Bash operator '>' to append the String. Instead of appending, the script ends up calling the mysqldump command itself.
What can I do to store the String "mysqldump" into the file backup_file.sql? I want to do it the same way as line 2: automatically appending through the Bash.
If you are trying to append, like you said, and not overwrite the target file, use >> instead of >. Here is a working version of your script:
za$ emacs ./foo.rb
#!/usr/bin/env ruby
target_dir = "/Users/za/ruby-practice/backup_file.sql"
mysqldump = "mysqldump"
`echo #{mysqldump} >> "#{target_dir}"`
You can also do something like: system %Q{echo "#{mysqldump}" >> "#{target_dir}"}. Personally, I would say use IO#puts instead of making system calls inside your script, if you want a pure Ruby, system-independent solution.
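For the record, the IO-based version is a short sketch (same `backup_file.sql` target as the question; append mode keeps whatever is already in the file):

```ruby
# "a" opens the file in append mode, so repeated runs keep adding lines
File.open("backup_file.sql", "a") do |f|
  f.puts "mysqldump"
end
```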
Why don't you use pure ruby to do it? Like:
File.open("backup_file.sql", "a") do |f|  # "a" appends instead of overwriting
  dump_lines.each do |line|
    f.puts line
  end
end
assuming that you have the dump in an array..

How to bring system grep results into ruby

I'm currently grep-ing the system and returning the results into ruby to manipulate.
def grep_system(search_str, dir, filename)
  cmd_str = "grep -R '#{search_str}' #{dir} > #{filename}"
  system(cmd_str)
  lines_array = File.open(filename, "r").read.split("\n")
end
As you can see, I'm just writing the results from the grep into a temp file, and then re-opening that file with "File.open".
Is there a better way to do this?
Never ever do anything like this:
cmd_str ="grep -R '#{search_str}' #{dir}"
Don't even think about it. Sooner or later search_str or dir will contain something that the shell will interpret in unexpected ways. There's no need to invoke a shell at all; you can use Open3.capture3 like this:
require 'open3'
lines, _stderr, _status = Open3.capture3('grep', '-R', search_str, dir)
That will leave you with a newline delimited list in lines and from there it should be easy.
That will invoke grep directly without using a shell at all. capture3 also nicely lets you ignore (or capture) the command's stderr rather than leaving it be printed wherever your stderr goes by default.
If you use this form of capture3, you don't have to worry about shell metacharacters or quoting or unsanitary inputs.
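Put together, a shell-free sketch of the question's function might look like this (the `grep_system` name is from the question; it assumes grep is on the PATH):

```ruby
require "open3"

def grep_system(search_str, dir)
  # grep is invoked directly with separate arguments, so no quoting or
  # escaping of search_str/dir is needed; stderr and status are discarded
  stdout, _stderr, _status = Open3.capture3("grep", "-R", search_str, dir)
  stdout.split("\n")
end
```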
Similarly for system, if you want to use system with arguments you'd use the multi-argument version:
system('ls', some_var)
instead of the potentially dangerous:
system("ls #{some_var}")
You shouldn't need to pass an argument for the temporary filename. After all, writing to and reading from a temporary file is something you should avoid if possible.
require "open3"
def grep_system(search_str, dir)
  Open3.capture2("grep", "-R", search_str, dir).first.each_line.to_a
end
Instead of using system(cmd_str), you could use:
results = `#{cmd_str}`
Yes, there are a few better ways. The easiest is just to assign the result of invoking the command with backticks to a variable:
def grep_system(search_str, dir, filename)
  cmd_str = "grep -R '#{search_str}' #{dir}"
  results = `#{cmd_str}`
  lines_array = results.split("\n")
end

How to grep on gdb print

Is there a way to grep the output of the print command in gdb? In my case, I am debugging a core dump using gdb, and the object I am debugging contains a huge number of elements. I am finding it difficult to look for a matching attribute, i.e.:
(gdb) print *this | grep <attribute>
Thanks.
You can use the pipe command:
>>> pipe maintenance info sections | grep .text
[15] 0x5555555551c0->0x5555555554d5 at 0x000011c0: .text ...
>>> pipe maintenance info sections | grep .text | wc
1 10 100
The "standard" way to achieve this is to use Meta-X gdb in emacs.
An alternative:
(gdb) set logging on
(gdb) print *this
(gdb) set logging off
(gdb) shell grep attribute gdb.txt
The patch mentioned by cnicutar sure looks attractive compared to the above. I am guessing the reason it (or its equivalent) was never submitted is that most GDB maintainers use emacs, and so don't have this problem in the first place.
The simplest way is to exploit gdb python. One-liner:
gdb λ py ["attribute" in line and print(line) for line in gdb.execute("p *this", to_string=True).splitlines()]
Assuming you have enabled history of commands, you can type this just once, and later then press Ctrl+R b.exec to pull it out of history. Next simply change attribute and *this per your requirements.
You can also make this as simple as this:
gdb λ grep_cmd "p *this" attribute
For that just add the following to your .gdbinit file:
py
class GrepCmd(gdb.Command):
    """Execute command, but only show lines matching the pattern
    Usage: grep_cmd <cmd> <pattern>"""

    def __init__(self):
        super().__init__("grep_cmd", gdb.COMMAND_STATUS)

    def invoke(self, args_raw, from_tty):
        args = gdb.string_to_argv(args_raw)
        if len(args) != 2:
            print("Wrong parameters number. Usage: grep_cmd <cmd> <pattern>")
        else:
            for line in gdb.execute(args[0], to_string=True).splitlines():
                if args[1] in line:
                    print(line)

GrepCmd()  # required to get it registered
end
I know this is an old post, but since I found it while looking to do the same thing, I thought I would add to Hi-Angel's answer: you can highlight the search term in the Python output in red by replacing the print line with the one below:
print(line.replace(args[1], "\033[91m"+args[1]+"\033[0m"))
This just uses ANSI escape codes for the colour, so it should work in Linux and Windows terminals, and you can easily change the colour.
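Extracted into plain Python, the substitution is just a string replace (the `highlight` name and colour constants here are illustrative):

```python
RED = "\033[91m"    # ANSI escape: bright red foreground
RESET = "\033[0m"   # ANSI escape: reset all attributes

def highlight(line, term):
    # wrap every occurrence of term in red ANSI escape codes
    return line.replace(term, RED + term + RESET)
```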
Sorry, don't have enough rep to add this as a comment.

In a bash script, use /dev/stdin for first of multiple command line inputs in wrapped script

Let's say I'm writing a bash script myscript.bash, which expects a single argument ($1). One of things it does is call wrapped.py, a python script, which prompts the user for four inputs. I want to submit $1 for the first of these inputs automatically, and then have the user prompted for the rest as normal.
How can I do this? I tried echo $1 | wrapped.py < /dev/stdin, but this submits EOF for the second input requested by wrapped.py, causing a Python EOFError. It does work if I echo -e "$1\na\nb\nc", that is, echo all four inputs...but I want the user to be prompted for the other three. I could write a full-fledged wrapper for the Python script, but that creates maintenance issues, as an update to wrapped.py could e.g. add a fifth question.
Here's what the actual error looks like:
$ echo 'test_app' | django-startproject.py test_app tmp < /dev/stdin
Project name [PROJECT]: Project author [Lincoln Loop]: Traceback (most recent call last):
File "/usr/local/bin/django-startproject.py", line 7, in <module>
execfile(__file__)
File "/home/rich/src/ll-django-startproject/bin/django-startproject.py", line 9, in <module>
main()
File "/home/rich/src/ll-django-startproject/bin/django-startproject.py", line 5, in main
start_project()
File "/home/rich/src/ll-django-startproject/django_startproject/management.py", line 44, in start_project
value = raw_input(prompt) or default
EOFError: EOF when reading a line
The easy way:
(echo "$1"; cat) | rest of the pipe here
The disadvantage of this approach is that the rest of the pipe sees its input as a pipe rather than a terminal, and tends to lose most of the nice "interactive" properties. Then again, it depends on your script.
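A tiny sketch of the technique (the `prepend_arg` wrapper name is made up; `cat` forwards the remaining stdin to the wrapped command untouched):

```shell
#!/bin/sh
# emit "$1" first, then pass everything from our own stdin through unchanged
prepend_arg() {
  (echo "$1"; cat)
}

# example: the wrapped command would see "a" followed by whatever stdin provides
printf 'b\nc\n' | prepend_arg a
```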
For anything more fancy, you should look into expect.
You can set up things like this:
Your bash script
#!/bin/sh
./test.py "$1"
And python script
#!/usr/bin/python
import sys
print("In py script now")
for i in sys.argv:
    print i
print raw_input('What day is it? ')
print raw_input('What date is it? ')
print raw_input('What month is it? ')
print ("Exiting py script")
And run like this
./myscript.bash abc
Output
In py script now
./test.py
abc
What day is it? 65
65
What date is it? 98
98
What month is it? 14
14
Exiting py script
