I love using aliases on my Ubuntu server for repeated commands; they're a huge timesaver and absolutely irreplaceable for me now.
I've been using cmder a lot recently on Windows, as it is the best console replacement for Windows that I know of. It is a wonderful piece of software, and I have almost all the basic bash commands available, including aliases.
However, I cannot find a way to chain multiple alias commands. I've tried delving into DOSKEY and its macros (see Microsoft's DOSKEY documentation) without any luck.
So, basically, I want to create multiple aliases. For example:
alias loginuser1='ssh -i ~/user1keyfile user1@$1'
alias mynewcloudserver='901.801.701.601'
and want to be able to login by typing:
loginuser1 mynewcloudserver
loginuser5 mytestingcloudserver
I have currently tried this:
loginuser1 mynewcloudserver
which produces this error:
ssh: Could not resolve hostname mynewcloudserver: no address associated with name
I get that this is because it is probably looking in my hosts file for mynewcloudserver and is unable to find an entry. I am able to login by doing this instead:
loginuser1 901.801.701.601
which brings us to my problem: I am unable to call one alias from another alias.
I know the above might not be the best way to create those aliases, but I just want to understand the logic and how to chain aliases together in cmder, which will open up a host of possibilities (pun intended).
If anyone can help me out, that would be great.
The only option I've found is to create a myscript.sh file with the commands, and create an alias to call the file.
It may be helpful to include a wait between commands if some of them run in the background (with &) and need to finish before the next one starts.
The first time you run it, it may ask you which program to use. Choose Git for Windows.
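To make that concrete, here is a minimal sketch of the approach (the login.sh name and its name-to-IP table are made up; only the mynewcloudserver mapping comes from the question):

#!/bin/sh
# login.sh: looks up a friendly server name, then opens the ssh session
user="$1"
case "$2" in
  mynewcloudserver) host=901.801.701.601 ;;
  *)                host="$2" ;;   # fall back to whatever was typed
esac
exec ssh -i ~/"${user}keyfile" "${user}@${host}"

Then, in cmder, something like this should forward the argument to the script:

alias loginuser1=sh login.sh user1 $1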
So I have written a tool that generates an offline replica of another system. It allows you to run commands typically run online, offline, by specifying a series of .mgmt and .sock files in the command.
However, I want users to be able to enter these commands as if they were on the live system. Therefore, I had it generate a script that can be sourced to set the environment variables and aliases necessary to allow the user to enter commands easily.
There are a few issues this created that I want to work around, and I am curious if there is a standard best practice for doing this.
I want the bash prompt to change (or at least be appended to) when the user sources the new variables, so it is clear they are running commands on the offline replica. I can do this by setting $PS1. However, I also want a 'deactivate' script to restore the previous user environment. How do I undo this change and restore the previous prompt?
When they source my script, environment variables that may have been previously set get overwritten. I have the script store each previous value as OLD_<env_variable_name> and create a second deactivate script that restores them, as well as removes any aliases (and eventually resets the bash prompt). Is this the best way to do this, or is there a much simpler method I may be missing?
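The pattern I have in mind looks roughly like this (a sketch; the variable and alias names are illustrative):

# activate.sh: meant to be sourced, not executed
OLD_PS1="$PS1"
PS1="(offline) $PS1"                            # mark the prompt

export OLD_REPLICA_SOCK="${REPLICA_SOCK-}"      # save whatever was there
export REPLICA_SOCK="$HOME/replica/db.sock"

alias mgmt='mgmt-tool --sock "$REPLICA_SOCK"'

deactivate () {
    PS1="$OLD_PS1"; unset OLD_PS1
    export REPLICA_SOCK="$OLD_REPLICA_SOCK"; unset OLD_REPLICA_SOCK
    unalias mgmt 2>/dev/null
    unset -f deactivate                         # remove this function itself
}

This is essentially the same save-and-restore approach that virtualenv's activate script uses.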
I have a bash script that works at the moment. It gets an image and JDK 8 from a link and then runs an installer for the JDK 8 to move on to setting up another piece of software.
As I was debugging the script, I kept finding myself having to delete directories and even the Java installation, because when I introduce a fix and rerun the script, I have to wait for everything to download again and worry about duplicate files messing up my current logic (which can probably be improved, but I'll take that to the Code Review Stack Exchange site later).
At the moment, I would like to know what approaches there are to prevent commands (like downloading the JDK and running the JDK installer script) from running all over again.
What kind of general approaches are out there for cases such as these?
For the JDK download and running the installer, I did think of simply checking for the existence of java on the system; if it is there, then bash would know not to run those commands.
However, there are other commands I do not want run, and I do want to simply check, for example, the existence of certain files to prevent wget-ing them all over again and moving them, which causes duplicates. (Should I maybe suck it up and do that anyway, as that might be best practice?)
I did also think of, at each successful command, writing a 1 to a text file and mapping each line in that text file to the commands run in the script (using an if statement to check whether a command had a 1 or a 0 in the text file); if it was a 0, the script would know to run only that command and never the 1s.
That sounded clunky to me and I am pretty sure that is not a good approach.
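For illustration, the kinds of guards I am imagining look like this (the filenames and URLs are made up):

# 1. Existence guards: skip work whose result is already present.
JDK_TARBALL=jdk-8u202-linux-x64.tar.gz           # hypothetical name
[ -f "$JDK_TARBALL" ] || wget "https://example.com/$JDK_TARBALL"
command -v java >/dev/null 2>&1 || ./install-jdk.sh "$JDK_TARBALL"

# wget can also skip existing files by itself with -nc / --no-clobber.
wget -nc "https://example.com/image.png"

# 2. Marker files: for steps that leave no obvious artifact behind.
if [ ! -f .setup_done ]; then
    ./setup-other-software.sh && touch .setup_done
fi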
So, I'm working on a bash script to manage some of my version control commands that I've been doing manually via the command line. It's a long story why I'm doing this, and not just using one of the SVN tools out there. Short answer: I'm using both Git and SVN, with the same codebase, but for different purposes. These scripts allow me to execute all the commands at once.
Anyway, I've got the script going great. Everything works perfectly. My one concern is the times that SVN prompts the user for input. The big one I'm thinking about is merge conflicts: whenever there's a conflict while it's downloading files from the server, it prompts the user to take action about that conflict.
Ideally, I want to suppress all output from the SVN command during execution, and if there are merge conflicts, just postpone them and then tell the user at the end that such conflicts exist (the user can then go in and resolve them).
So, for the question: is there a way to use bash to handle those user input prompts? To detect them, respond to them, and keep the process going?
For the sake of argument, let's work off of this simple SVN command. Assume that this command is in a Bash script, and when it is executed there is a merge conflict. What, if anything, can I do?
svn update .
Thanks in advance.
I use:
svn update my_checkout_path --accept postpone --config-option 'config:miscellany:preserved-conflict-file-exts=*'
where --accept postpone skips all interactive conflict resolution, and preserved-conflict-file-exts=* disallows automatic merging for all files (the quotes keep the shell from glob-expanding the *).
Read more:
http://svnbook.red-bean.com/en/1.8/svn.tour.cycle.html#svn.tour.cycle.resolve.pending
http://svnbook.red-bean.com/en/1.8/svn.advanced.confarea.html
Update
To detect the conflict situation, you can look for the 'Summary of conflicts' string in the update output (provided you are sure the output is in English).
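A minimal sketch of that detection (prefixing LC_ALL=C forces untranslated English output, so the grep stays reliable):

out=$(LC_ALL=C svn update my_checkout_path --accept postpone 2>&1)
if printf '%s\n' "$out" | grep -q 'Summary of conflicts'; then
    echo "Conflicts were postponed; run 'svn status' to review them." >&2
fi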
I want to execute my bash scripts normally, but without anybody seeing my source code.
How can I encrypt my bash script?
Thanks a lot.
Bash is a purely interpreted language, so the interpreter (bash) can only run a script if it is in clear text.
You can try to obfuscate the code:
How to minify/obfuscate a bash script
On the other hand, you can restrict which users have access to that code using system permissions.
Sorry to wake up a "dead horse", but I just wanted to share what I have done using gpg. As mentioned earlier, bash can only run the script if it is in clear text.
encrypt the shell script with gpg:
gpg -c <your_bash_script.sh>
This will ask you for a passphrase and confirm it.
This will create an encrypted file with a .gpg extension. The default symmetric cipher depends on your gpg version (CAST5 in older GnuPG releases, AES in modern ones); if you want a specific cipher, add --cipher-algo "cipher_name" (check the man pages for details):
<your_bash_script.sh.gpg>
decrypt your shell script with gpg:
gpg -d <your_bash_script.sh.gpg>
This will prompt you for the passphrase assigned to the file and display its contents on the screen.
If you put it all together, you have:
gpg -d <your_bash_script.sh.gpg> | bash
You can even use gpg keys instead of a passphrase.
Every time you edit your script, either edit the unencrypted version of the script or pipe the output of the decryption to a file, and re-encrypt it when done.
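If the script takes arguments, they can be passed through the pipe with bash's -s flag (a small sketch; bash -s reads the script from stdin and treats everything after -- as positional parameters):

gpg -d <your_bash_script.sh.gpg> | bash -s -- arg1 arg2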
You could use shc. Here is an example (link below). Not really sure if this is a great place to ask this question, though; it doesn't seem super programming-related. Super User might be a better place for it. :)
http://www.thegeekstuff.com/2012/05/encrypt-bash-shell-script/
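For reference, typical shc usage looks like this (assuming shc is installed; it compiles the script into a C binary):

shc -f your_bash_script.sh      # produces your_bash_script.sh.x (a binary)
./your_bash_script.sh.x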
Other answers and comments have already said that encryption isn't feasible so I won't go into that. I don't think that path even addresses the problem you are trying to solve.
You haven't given a practical example of what you are trying to achieve, so the suggestions below may not be sufficient.
With almost any problem, it's best to start off with the simplest approach first and add complexity as needed. First of all, you may not need to do anything at all! When you execute a shell script, the process list will only show the name of the shell executing the script (bash) and the name of the script. No one will be able to see the contents of the script this way.
If that doesn't meet your needs, then the next step would be to use standard file permissions to ensure that no one can look at the contents of the file, i.e. remove read/write/execute permissions for group and other:
chmod go-rwx <name of script>
If neither of these are enough, you will have to provide more details about what you are trying to do and what your concerns are.
I recommend you try submitting your script to this site if you wish to protect it from public view.
While many will disagree with the idea of hiding the source code of a script written in an interpreted language, I understand why there's a desire for this to work.
As someone who has had his work stolen many times, I just don't care whether "obfuscation" or "encryption" is taboo. As long as my script is protected and works as it did before encryption, I'm happy. Never again will I allow someone else to take credit for my work. And no, writing my script in a compiled language is not an option; I do not know how to.
Anyway, if you do not want to use the site mentioned above, try the latest version of shc. I believe they've updated it on GitHub to address the many security concerns others have mentioned. Search Google for "shc github" and you'll see a host of available options you can try out.
I'm looking for a good non-interactive, command line FTP client to be run from a Rakefile. Like Weex, but better. Weex has different problems (for me):
It stores its config file in my home dir. I want the FTP config to be part of my project and weex doesn't have a --config-file option or something.
The behavior of ignoring files seems to be completely buggy. It doesn't remove files which it should, it doesn't let me specify relative paths (even though I do it according to the man page's instructions), etc. I've been struggling with it for an hour now and it is just inexplicable.
I tried running rsync over FTPFS/FUSE, but that is dead slow because FTP doesn't store mtimes, which makes rsync diff every file. Plus, there are some refresh problems and other bugs that cause access failure (http://bugs.gentoo.org/208168).
I'm stuck with FTP, unfortunately. Any help is appreciated.
Perhaps something from the ncftp suite (http://www.ncftp.com/ncftp/)? It has the ability to specify a config file of your choice, and its ncftpget/ncftpput tools operate non-interactively.
It doesn't appear to have ignore functionality, but hopefully this was helpful to you.
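For example, a non-interactive recursive upload might look like this (a sketch; ftp.cfg and the paths are made up):

# ftp.cfg lives inside the project and contains host/user/pass lines
ncftpput -f ftp.cfg -R /remote/path local_dir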
I've used lftp in the past with good results. It's installed by default in many distributions and offers pretty sophisticated functionality (including a couple ways to exclude files).
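For instance, a one-shot upload mirror with exclusions could look like this (a sketch; the host, credentials, and paths are placeholders):

lftp -u user,password -e "mirror -R --exclude-glob '*.log' local_dir /remote/dir; quit" ftp.example.com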
try sitecopy: http://www.manyfish.co.uk/sitecopy/
The trouble with lftp is that it is very slow for mirroring, which I suppose you want to do since you have been using weex.
Unfortunately, both weex and sitecopy have very limited proxy handling, so if you need to go through an HTTP proxy, lftp may still be your best bet.