How do I make an envvar-modifying alias in bash?

I run several Heroku apps from the same folder, and I often need to target a specific one for the command I'm typing, either with a flag or through an environment variable.
The options available to me are:
heroku command --app MYAPPID
heroku command -r MYAPPSGITREMOTEID
HEROKU_APP=MYAPPID heroku command
I currently use -r, but it's difficult to build aliases with it, especially if I want to pipe the output of the heroku command to a different command: I can't call myalias -r myappid if the alias expands to heroku command | tail.
I'd much prefer something like
#production heroku command
which would evaluate to HEROKU_APP=MYPRODUCTIONID heroku command.
Bonus points if it works with chained aliases, like #production myalias, which would expand both the target-app envvar alias and the command alias.
Any ideas?

This is where shell functions are the perfect solution:
myheroku () {
    local heroku_app=$1
    shift
    env HEROKU_APP="$heroku_app" heroku "$@"
}
myalias1 () {
    myheroku "$1" specific command here
}
myalias2 () {
    myheroku "$1" some other command
}
# ...
# ...
Then
myalias1 '#production'
will eventually invoke
env HEROKU_APP="#production" heroku specific command here
(The quotes are needed because an unquoted # at the start of a word begins a comment.)
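A minimal sketch of per-app wrapper functions built on this idea (the app IDs and the `production`/`staging` names are made up; substitute your own). Because each wrapper is a function, its output pipes cleanly into other commands:

```shell
# Hypothetical app IDs; substitute your own.
production () { HEROKU_APP=my-production-id heroku "$@"; }
staging    () { HEROKU_APP=my-staging-id heroku "$@"; }

# Piping now works, unlike with an alias that already contains a pipe:
#   production logs | tail
#   staging config | grep DATABASE_URL
```

The per-command assignment (`HEROKU_APP=... heroku`) lasts only for that one invocation, so nothing leaks into the surrounding shell.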

Related

execute aws command in script with sudo

I am running a bash script with sudo and have tried the below, but am getting the error below when using aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't the -E preserve the original location? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried `export`ing the profile name and `source`ing the path to the `config`.
You can run the command as the original user, like:
sudo -u "$SUDO_USER" aws cp ...
You could also run the script using source instead of bash. Using source causes the script to run in the same shell as your open terminal window, which keeps the same environment (such as the user). Though honestly, @Philippe's answer is the better, more correct one.
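Alternatively, a hedged sketch of pointing the AWS CLI at the invoking user's config explicitly (the `.aws` paths below are the CLI's default locations; `SUDO_USER` is the variable sudo sets to the original username):

```shell
# Resolve the original user's home even when running under sudo;
# fall back to the current user when SUDO_USER is unset.
real_user="${SUDO_USER:-$(id -un)}"
real_home=$(eval echo "~$real_user")

# Tell the AWS CLI exactly where to find the profile definitions,
# instead of letting it look under /root.
export AWS_CONFIG_FILE="$real_home/.aws/config"
export AWS_SHARED_CREDENTIALS_FILE="$real_home/.aws/credentials"

# aws s3 cp s3://bucket/key /tmp/key   # would now find the profile
```

This keeps the script runnable under sudo without depending on `-E` carrying the right `HOME`.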

How to print environment variables on Heroku?

I'd like to print all environment variables set on my Heroku server. How can I do that with command line only?
Ok, I found the way:
heroku config
The heroku run command runs a one-off process inside a Heroku dyno. The Unix command that prints environment variables is printenv (manual page). Thus
heroku run -a app-name printenv
is the command you are looking for.
Step 1: list your apps
heroku apps
Copy the name of your app.
Step 2: view the config variables of this app
heroku config -a acme-web
Append --json to get the output as JSON:
heroku config -a acme-web --json
Append -s to get the output in shell format, to paste directly into a .env file, for example:
heroku config -a your-app -s
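Building on the `-s` flag, a small sketch of snapshotting an app's config into a `.env` file and loading it back into the current shell (the app name is a placeholder; the helper names are made up):

```shell
# Dump the app's config vars in shell (KEY=value) format into .env.
snapshot_env () {
    heroku config -a "$1" -s > .env
}

# Source .env with set -a so every assignment is exported
# to child processes as well.
load_env () {
    set -a
    . ./.env
    set +a
}

# Usage: snapshot_env acme-web && load_env
```

Note that `heroku config -s` may quote values containing spaces; plain `KEY=value` lines source cleanly either way.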

Embedded terminal startup script

I usually use bash scripts to set up my environments (mostly aliases that interact with Docker), e.g.:
#!/bin/bash
# ops-setup.sh
PROJECT_NAME="my_awesome_project"
PROJECT_PATH=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
WEB_CONTAINER=${PROJECT_NAME}"_web_1"
DB_CONTAINER=${PROJECT_NAME}"_db_1"
alias chroot_project="cd $PROJECT_PATH"
alias compose="chroot_project;COMPOSE_PROJECT_NAME=$PROJECT_NAME docker-compose"
alias up="compose up -d"
alias down="compose stop;compose rm -f --all nginx web python"
alias web_exec="docker exec -ti $WEB_CONTAINER"
alias db="docker exec -ti $DB_CONTAINER su - postgres -c 'psql $PROJECT_NAME'"
# ...
I'd like them to be run when I open the embedded terminal.
I tried Startup Tasks but they are not run in my terminal contexts.
Since I have a dedicated script for each of my projects, I can't run them from .bashrc or the like.
How can I get my aliases automatically set when the terminal opens?
Today I'm running . ./ops-setup.sh manually each time I open a new embedded terminal.
You can create an alias in your .bashrc file like so:
alias ops-setup='bash --init-file <(echo ". /home/test/ops-setup.sh"; echo ". /home/test/.bashrc")'
If you call ops-setup, it will open up a new bash inside your terminal, and source .bashrc like it normally would, as well as your own script.
The only way I see to completely automate this is to modify the source code of your shell, e.g. bash, and recompile it. The files that are sourced are hardcoded into the source code.

Terraform `local-exec` to set a local alias

I'm trying to set up an alias to quickly ssh into the newly created host when I create an AWS instance in terraform. I do this by running
# Handy alias to quickly ssh into newly created host
provisioner "local-exec" {
command = "alias sshopenldap='ssh -i ${var.key_path} ubuntu@${aws_instance.ldap_instance.public_dns}'"
}
When I see the output of this execution:
aws_instance.ldap_instance (local-exec): Executing: /bin/sh -c "alias sshopenldap='ssh -i ~/.ssh/mykey.pem ubuntu@ec2-IP.compute-1.amazonaws.com'"
It seems to be OK, but the alias is not set. Could it be that the way the command is run wraps it in a new scope rather than the current shell's? If I copy-paste the command as-is into the console, the alias is set fine.
Is there a workaround for this?
I'm running terraform in a Mac OS X Mountain Lion terminal.
You could try something like:
# Handy alias to quickly ssh into newly created host
provisioner "local-exec" {
command = "echo \"alias sshopenldap='ssh -i ${var.key_path} ubuntu@${aws_instance.ldap_instance.public_dns}'\" > script.sh && source script.sh && rm -f script.sh"
}
Not sure how the quote escaping will go...
It is indeed not possible to set an alias for the current shell from a script file, which is what you are trying to do. The only way out of this is to not run the script, but to source it instead. So:
source somefile.sh
instead of executing it should do the trick.
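The difference can be seen in a few lines of plain shell (the alias name, key path, and host are placeholders taken from the question):

```shell
# Write a script that defines an alias.
cat > setalias.sh <<'EOF'
alias sshopenldap='ssh -i ~/.ssh/mykey.pem ubuntu@example.com'
EOF

bash setalias.sh          # runs in a child process: the alias dies with it
source setalias.sh        # runs in the current shell: the alias survives
alias sshopenldap         # prints the definition
```

This is why `local-exec` can never set an alias for you: it always runs its command in a throwaway `/bin/sh`.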

Managing multiple AWS accounts on the same computer

I have multiple AWS accounts, and depending on which project directory I'm in I want to use a different one when I type commands into the AWS CLI.
I'm aware that the AWS credentials can be passed in via environmental variables, so I was thinking that one solution would be to make it set AWS_CONFIG_FILE based on which directory it's in, but I'm not sure how to do that.
I'm using Mac OS X. The AWS CLI version is aws-cli/1.0.0 Python/2.7.1 Darwin/11.4.2, and I'm doing all this for the purpose of utilizing AWS in a Rails 4 app.
I recommend using different profiles in the configuration file, and just specify the profile that you want with:
aws --profile <your-profile> <command> <subcommand> [parameters]
If you don't want to type the profile for each command, just run:
export AWS_DEFAULT_PROFILE=<your-profile>
before a group of commands.
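For reference, profiles live in the shared config file (usually `~/.aws/config`); the profile names and regions below are examples:

```ini
[default]
region = us-east-1

[profile project-a]
region = eu-west-1
```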
If you want to somehow automate the process of setting that environment variable when you change to a directory in your terminal, see Dynamic environment variables in Linux?
I think you can create an alias for the aws command which exports the AWS_CONFIG_FILE variable depending on the directory you are in. Something like the following (bash) may work.
First create the following shell script; let's call it match.sh and put it in /home/user/:
#!/bin/bash
if [[ "$PWD" =~ "MATCH" ]]; then
    export AWS_CONFIG_FILE="ABC"
else
    export AWS_CONFIG_FILE="DEF"
fi
aws "$@"
Now define an alias in your ~/.bashrc:
alias awsdirbased="/home/user/match.sh"
(Arguments typed after the alias are passed through to the script automatically.) Now whenever you want to run the aws command, run awsdirbased instead, and it should work.
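The same directory test can be factored into a function, which makes the mapping easy to check in isolation (the paths and the `project-a` marker are placeholders):

```shell
# Pick a config file path based on a directory name.
config_for_dir () {
    case "$1" in
        */project-a*) echo "$HOME/.aws/config-project-a" ;;
        *)            echo "$HOME/.aws/config" ;;
    esac
}

# Wrapper: export the choice for the current directory for the
# duration of one aws invocation, then run it.
awsdir () {
    AWS_CONFIG_FILE=$(config_for_dir "$PWD") aws "$@"
}
```

Because `awsdir` is a function rather than an alias, it also behaves correctly inside pipes and scripts.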
