How to execute a script in zsh and then become interactive? - shell

I want to run a predetermined set of commands after the invocation of zsh (i.e. after .zshrc is executed) that returns the user to an interactive shell when complete.
Something like this comes to mind:
urxvt -e 'zsh -c ". scriptname"'
but instead of exiting zsh and the terminal once the script finishes, I want an interactive shell at the end. The idea is to simply save users from having to type ". scriptname" whenever they log in.
Application: Several users are using the same account (strange but true) and I want to help in adjusting user specific settings. Yes, I know that one could use different accounts for that :-)

Not exactly what you asked for, but it should have the desired result: use an environment variable to pass the name of the user-specific script into .zshrc (or another appropriate startup file) and source it there.
urxvt -e env USERSCRIPT=scriptname zsh
Then in .zshrc for the actual user, include
. $USERSCRIPT
(All of this is not to say that there isn't an option to run a command then remain in interactive mode; I just can't find a way to do it, so I offer this workaround.)
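Putting the two pieces together, a minimal sketch might look like this (the env invocation, the example path, and the guard around the source are assumptions, not part of the original answer):

# launch the terminal with the per-user script exported into the environment
urxvt -e env USERSCRIPT=/path/to/alice-setup.zsh zsh

# in the shared account's ~/.zshrc
if [[ -n $USERSCRIPT && -r $USERSCRIPT ]]; then
  . "$USERSCRIPT"
fi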

Related

How do I map a certain input in bash to a command?

So I was wondering whether it is possible to map a certain input to a command in GNOME Terminal. For example, when I type "foo" in the command shell, it would automatically execute a certain command, such as going to the directory where a program is located and running that program with a specific configuration.
Yes, it is called an alias:
A Bash alias is essentially nothing more than a keyboard shortcut, an
abbreviation, a means of avoiding typing a long command sequence. If,
for example, we include alias lm="ls -l | more" in the ~/.bashrc file,
then each lm typed at the command-line will automatically be
replaced by a ls -l | more. This can save a great deal of typing at
the command-line and avoid having to remember complex combinations of
commands and options. Setting alias rm="rm -i" (interactive mode
delete) may save a good deal of grief, since it can prevent
inadvertently deleting important files.
So basically:
alias foo="cd /path/to/dir; ./myprogram; cd -"
The trailing cd - follows @Cyrus's suggestion: it returns you to the directory you started from, which is safer and closer to what most commands are expected to do; of course, you can use whatever you like.
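To make the alias permanent, add it to ~/.bashrc as the quoted passage suggests (the path and program name here are placeholders):

# append the alias to ~/.bashrc so it is available in new shells
echo 'alias foo="cd /path/to/dir; ./myprogram; cd -"' >> ~/.bashrc
source ~/.bashrc   # reload it in the current session
foo                # now runs the program from its own directory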

Launch interactive Bash shell in Ruby script, with initial command

I'm working on an interactive Ruby script that builds and packages resources. In the middle of the process, I'd like to drop into an interactive shell, but already cd'd into a specific working directory and with an explanatory message ("CTRL-D to continue"). The interactive bash plus a given initial command is the problematic part.
Per the answer for doing something like this in Bash, given at https://stackoverflow.com/a/36152028, I've tried
system '/bin/bash', '--init-file', '<(echo "cd ~/src/devops; pwd")'
However, bash runs interactively but completely ignores the '<(echo "cd ~/src/devops; pwd")' section.
Interestingly, system '/bin/bash', '--init-file' complains if no argument is given, but with literally any argument it runs bash, just without the initial command.
Note that --rcfile (instead of --init-file) has the same effect.
Change the working directory of the Ruby script first, so that bash inherits the correct working directory.
curr_dir = Dir.pwd
Dir.chdir("#{Dir.home}/src/devops")
system "/bin/bash"
Dir.chdir(curr_dir) # Restore the original working directory if desired
Oh, this is probably far better (you can probably guess how little familiarity I have with Ruby):
system("/bin/bash", :chdir=>"#{Dir.home}/src/devops")

Bash script calls vi for manual editing, then script resumes?

I wrote a script that creates a backup of a text file, and a second script that verifies some syntax in the text file using sed.
In the middle, there is a manual process: Users edit the original file adding some strings. This process must remain manual.
I would like to merge my two scripts so the backup is created, vi is open for the user, when the user is done editing the file, the script resumes doing the syntax verification.
I am learning by doing, but really do not know how to code the "open vi, wait for the user to do his editing, take control over and resume with verification" part.
I read there is a function called system (in Perl) that could be used, but my code is in BASH.
Any suggestions on how to get this done in BASH? Thanks!
In bash, each statement is essentially like an implicit call to system (unless it's a builtin shell command) since shell scripts are designed to make it easy to run other programs.
backup some_file.txt
vi some_file.txt # The script blocks until the user exits vi
verify_syntax some_file.txt
The only difference between using vi and a command like ls is that ls will do its thing and exit without user intervention, while vi (or any interactive command) will run until the user explicitly exits.
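A minimal sketch of the merged script, assuming the backup is a simple copy and the verification is the existing sed pass (the file name and the sed expression are placeholders):

#!/bin/bash
file=some_file.txt

cp -- "$file" "$file.bak"        # create the backup
vi "$file"                       # manual step: the script waits until the user quits vi
sed -n '/pattern/p' "$file"      # resume with the syntax verification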

Ruby, Unicorn, and environment variables

While playing with Heroku, I found their approach of using environment variables for server-local configuration brilliant. Now, while setting up an application server of my own, I find myself wondering how hard that would be to replicate.
I'm deploying a sinatra application, riding Unicorn and Nginx. I know nginx doesn't like to play with the environment, so that one's out. I can probably put the vars somewhere in the unicorn config file, but since that's under version control with the rest of the app, it sort of defeats the purpose of having the configuration sit in the server environment. There is no reason not to keep my app-specific configuration files together with the rest of the app, as far as I'm concerned.
The third, and last (to my knowledge), option is setting them in the spawning shell. That's where I got lost. I know that login and non-login shells use different rc files, and I'm not sure whether calling something with sudo -u http spawns a login shell or not. I did some homework and asked google and man, but I'm still not entirely sure how to approach it. Maybe I'm just being dumb... either way, I'd really appreciate it if someone could shed some light on the whole shell environment deal.
I think your third possibility is on the right track. What you're missing is the idea of a wrapper script, whose only function is to set the environment and then call the main program with whatever options are required.
To make a wrapper script that can also function as a control script (if prodEnv then DB=ProdDB, etc.), there is one more piece that simplifies the problem. Bash and ksh both support a feature called sourcing files. This is an operation the shell provides that opens a file and executes what is in it, just as if it were in-lined in the main script, like #include in C and other languages.
ksh and bash will automatically source /etc/profile, /var/etc/profile.local (sometimes), and $HOME/.profile. There are other filenames that will also get picked up, but in this case you'll need to make your own env file and then explicitly load it.
As we're talking about wrapper-scripts, and you want to manage how your environment gets set up, you'll want to do the sourcing inside the wrapper script.
How do you source an environment file?
envFile=/path/to/my/envFile
. $envFile
where envFile will be filled with statements like
dbServer=DevDBServer
webServer=QAWebServer
....
You may discover that you need to export these variables for them to be visible:
export dbServer webServer
An alternate assignment/export is supported
export dbServer=DevDBServer
export webServer=QAWebServer
Depending on how non-identical your different environments are, you can have your wrapper script figure out which environment file to load.
case $( /bin/hostname ) in
  prodServerName )
    envFile=/path/2/prod/envFile ;;
  QAServerName )
    envFile=/path/2/qa/envFile ;;
  devServerName )
    envFile=/path/2/dev/envFile ;;
esac
. ${envFile}
#NOW call your program
myProgram -v -f inFile -o outFile ......
As you develop more and more scripts in your data-processing environment, you can always source your envFile at the top. When you eventually change the physical location of a server (or its name), there is only one place where you need to make the change.
IHTH
There are also a couple of gems dealing with this. Figaro works both with and without Heroku; it uses a YAML file (kept in config/ and git-ignored) to keep track of variables. Another option is dotenv, which reads variables from a .env file. There is also an article covering all of these options.
To spawn a login shell you need to invoke sudo like this:
sudo -i -u <user> <command>
You may also use -E to preserve the environment. This allows some variables to be passed from your current environment to the command invoked with sudo.
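For example (the user name and the command are placeholders):

# run the command in a login shell for the app user
sudo -i -u deploy bundle exec unicorn -c config/unicorn.rb

# or keep the caller's environment instead of a login environment
sudo -E -u deploy bundle exec unicorn -c config/unicorn.rb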
I solved a similar problem by explicitly telling Unicorn to read a variables file as part of startup in its init.d script. First I created a file in a directory above the application root called variables. In this script I call export on all my environment variables, e.g. export VAR=value. Then I defined a variable GET_VARS=source /path/to/variables in the /etc/init.d/unicorn file. Finally, I modified the start option to read su - $USER -c "$GET_VARS && $CMD" where $CMD is the startup command and $USER is the app user. Thus, the variables defined in the file are exported into the shell of Unicorn's app user on startup. Note that I used an init.d script almost identical to the one from this article.
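A minimal sketch of that arrangement (the paths, the user, and the startup command are placeholders based on the description above):

# /path/to/variables -- one export per setting
export VAR=value

# fragment of /etc/init.d/unicorn
USER=app_user
GET_VARS="source /path/to/variables"
CMD="bundle exec unicorn -c config/unicorn.rb -D"

case "$1" in
  start)
    su - $USER -c "$GET_VARS && $CMD"
    ;;
esac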

Can I use what I wrote on the shell (bash, cmd, irb, etc) in a script automatically?

The general idea is pretty simple: I want to make a script for a certain task, so I do the task in the shell (any shell) and then want to copy the commands I used.
If I copy everything in the window, I have a lot of stuff to delete and correct (and it is not easy to copy from the shell).
In short: I want to capture all the things I typed...
Is there an easy way to do this easy task?
Update: Partial solution
In bash, the solution is pretty simple, there is a history command, and there are ports of the idea:
IRB: Tweaking IRB
Cmd: Use PowerShell -> Get-History (or use cygwin)
Another Update:
I found that doskey has a /history parameter for this:
cmd: Doskey /history >> history.cmd
Yes, you can use:
history -w filename.sh
This will save your command history to filename.sh. You may need to edit that to keep just the lines at the end that are part of your command sequence.
NOTE: This is a bash command and will not work with all shells.
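For instance, to trim the saved file down to just the last few commands instead of the whole history (the count here is arbitrary):

history -w full_history.sh
tail -n 5 full_history.sh > filename.sh   # keep only the 5 most recent commands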
script may help here. Typing script will drop you into a new shell and save all input and output to a file called typescript. When you're done with your interaction, exit the shell. The typescript file is then amenable to grepping. For example, you might grep for your prompt and save the output to a file. If you're a clumsy typist like me, you may need to do some cleanup work to remove backspaces. There used to be a program that did this, but I don't seem to find it right now. Here is one I found on the 'net: http://www.cat.pdx.edu/tutors/files/fixts.cpp
This approach is especially useful if you want to track and post on the web an entire interactive session.
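For example, a minimal recording session might look like this (the grep pattern assumes a prompt ending in "$ "):

script                   # start recording; everything goes to ./typescript
ls; make                 # ... whatever commands you want captured ...
exit                     # stop recording and leave the recorded shell
grep '\$ ' typescript    # pull out just the lines containing your prompt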
