LD_PRELOAD not working on Heroku + jemalloc + QuotaGuard

TL;DR: update your bin/qgtunnel.
I've recently noticed an increase in my web dyno's memory usage. After digging a bit, I could see that the LD_PRELOAD variable that should be set by heroku-buildpack-jemalloc was not set correctly. I used a tiny script (bin/show_preload) to debug that and trace which program was overriding LD_PRELOAD.
#!/usr/bin/env bash
echo "buildpack=foo preload='$LD_PRELOAD' at=start-app cmd='$@'"
exec "$@"
I introduced that in our Procfile:
web: bin/show_preload bin/qgtunnel bin/show_preload bin/start-nginx bin/show_preload bin/start-pgbouncer bin/show_preload bundle exec puma -C config/puma.rb
And when launching on Heroku I can see that bin/qgtunnel overrides our LD_PRELOAD configuration.
I created a tiny helper for the time being which makes sure I keep the original value as well as what is added by bin/qgtunnel:
#!/usr/bin/env bash
after_qgtunnel_script=$(mktemp)
cat <<-BASH > "$after_qgtunnel_script"
# Retrieve previous LD_PRELOAD value
export LD_PRELOAD="\$LD_PRELOAD $LD_PRELOAD"
# Clean up after usage
rm "$after_qgtunnel_script"
# Start the following commands
$@
BASH
chmod +x "$after_qgtunnel_script"
bin/qgtunnel "$after_qgtunnel_script" "$@"
If you ever need this script, use it in place of bin/qgtunnel in your Procfile.
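For example, assuming you save the helper as bin/qgtunnel-preserving (a hypothetical name), the Procfile entry above, minus the debugging wrappers, becomes:
web: bin/qgtunnel-preserving bin/start-nginx bin/start-pgbouncer bundle exec puma -C config/puma.rb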

After reaching out to QuotaGuard, they patched the qgtunnel binary, and the issue is gone:
curl https://quotaguard.s3.amazonaws.com/qgtunnel-2.4.1.tar.gz | tar xz
git add bin/qgtunnel vendor/nss_wrapper/libnss_wrapper.so
git commit -m "Update qgtunnel to fix LD_PRELOAD"
NOTE: newer versions may have been released since this one; see the related documentation.

Related

What does this line in a Makefile mean: `DOCKER := $(shell command -v docker)`

$(shell command -v docker) — what does command mean here? It's being used in a Makefile.
I saw this in a GitHub repository that I'm trying to understand.
It looks like it's setting a variable with a command that tests whether docker is installed, and stops the build if it's not. The problem is that I don't have a command command installed. I tried to install command on Ubuntu, but it can't be found, and searching the internet for it is really difficult because of its name: "how to install command in linux/ubuntu" didn't bring up anything useful. I also searched for it being used in Makefiles, trying to get some clue, but nothing so far.
Running the build command seems to work, because it builds the image (and yes, I have docker installed), but I still get this message in the terminal: make: command: Command not found
Any idea?
make build output (truncated):
$ make build
make: command: Command not found
make: command: Command not found
docker build -t codelytv/typescript-ddd-skeleton:dev .
Sending build context to Docker daemon 1.023MB
.....
This is the Makefile:
.PHONY = default deps build test start clean start-database

IMAGE_NAME := codelytv/typescript-ddd-skeleton
SERVICE_NAME := app

# Test if the dependencies we need to run this Makefile are installed
DOCKER := $(shell command -v docker)
DOCKER_COMPOSE := $(shell command -v docker-compose)

deps:
ifndef DOCKER
	@echo "Docker is not available. Please install docker"
	@exit 1
endif
ifndef DOCKER_COMPOSE
	@echo "docker-compose is not available. Please install docker-compose"
	@exit 1
endif

default: build

# Build image
build:
	docker build -t $(IMAGE_NAME):dev .

# Run tests
test: build
	docker-compose run --rm $(SERVICE_NAME) bash -c 'npm run build && npm run test'

# Start the application
start: build
	docker-compose up $(SERVICE_NAME) && docker-compose down

# Clean containers
clean:
	docker-compose down --rmi local --volumes --remove-orphans

# Start mongodb container in background
start_database:
	docker-compose up -d mongo
What it means is that the person who wrote this makefile wasn't careful enough to write it in a portable way.
The command command is part of the shell, which is why you won't find it in the GNU make manual. Although POSIX does specify it, it exists only as a shell builtin: on most systems there is no standalone command executable that another program could run. The bash man page says:
command [-pVv] command [arg ...]
Run command with args suppressing the normal shell function
lookup. Only builtin commands or commands found in the PATH are
executed.
Basically, running command docker ... means that any shell alias or function named docker is ignored, and only the actual docker command is run.
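For instance, in an interactive bash session (a sketch; the path in the last comment is illustrative):
docker() { echo "this is a shell function, not Docker"; }
docker --version            # runs the shadowing function
command docker --version    # bypasses the function and runs the real binary from PATH
unset -f docker             # remove the function again
command -v docker           # prints the path that would run, e.g. /usr/bin/docker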
GNU make runs /bin/sh as its shell, for both recipes and the $(shell ...) function. However, as an optimization, when a command line contains no shell special characters, make skips the shell and executes the program directly, and (at least in the make version used here) command is not on the list of shell builtins that make knows it must hand to a shell. Since there is no command executable to run, make itself reports the failure — which is why the error is prefixed with make: rather than sh: — and the DOCKER and DOCKER_COMPOSE variables end up empty. The build target never reads them, so the image still builds.
In reality, the command prefix is not needed here, because there won't be any shell aliases or functions defined in the non-interactive shell that make invokes anyway. However, if the author wants to use bash shell features in their makefiles and have them work, they also need to tell make to use bash as its shell, by adding this to the makefile:
SHELL := /bin/bash
(of course this assumes that the user has a /bin/bash on their system, but...)

Git hook on Ubuntu broken

I recently got a git hook from someone that aims to add the issue number, which sits at a specific location in the branch name, to the beginning of all commit messages. The goal is to take the #number from feature/#number-issue. Here is some info:
➜ .githooks pwd
/home/luctia/.githooks
➜ .githooks git config --global --list
user.name=luctia
user.email=myemail
core.hookspath=/home/luctia/.githooks
➜ .githooks cat commit-msg
#!/bin/sh
WI=$(git status --branch | grep -iPo "(feature|bug)\/#\d+" | head -1)
WI=$(echo "($WI)" | grep -Po "\d+")
if [[ ! -z "$WI" ]]; then
    WI="#$WI"
    CM=$(cat "$1")
    if [[ ! $CM == *"$WI "* ]]; then
        echo "$WI $CM" > "$1"
    fi
fi
This doesn't seem to work, though. The script is executable for every user, so that's not the issue. I have tried switching from sh to bash, and with that edit I've executed the script manually on a file in a repo, which added the number to the beginning of the file, so I know the script itself works. I'm not sure if git hooks can execute bash scripts, but it doesn't make a difference whether I use sh or bash; I would still like to know whether they can run bash scripts.
I'm using WebStorm as my IDE right now, and the hook doesn't work there, and it also doesn't work with CLI git. I have no idea how to proceed.
Edit: I am pretty sure the script is not executed at all. When I add date > /tmp/hook to the script, no file appears. (I do have to change from sh to bash, though.)
The problem was that I was trying to make this work on a pre-existing project, with an existing .git directory. I thought changing the config with the --global flag would just work, but apparently the config inside the project's .git directory did not change, and its old hookspath was still there. When I changed it, the script started working.
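If you hit the same thing, you can check where the value is coming from and drop the stale repository-local override (a sketch using standard git config flags):
# Show every core.hookspath value and which config file it comes from
git config --show-origin --get-all core.hookspath
# Remove the repo-local override so the --global value applies again
git config --local --unset core.hookspath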

Docker unable to start an interactive shell if the image has an entry script

My custom-made image ends with
ENTRYPOINT [ "/bin/bash", "-c", "/home/tool/entry_script.sh" ]
This is absolutely needed because, at runtime, the first thing the user must do is update an already cloned GitHub project, and users will often forget to do it.
But then, when I try to launch it using
docker run -it --rm my_image /bin/bash
I can see that the ENTRYPOINT script is executed, but then the container exits.
I expect /bin/bash to be executed and the shell to remain in interactive mode, thanks to the -it flags.
What am I doing wrong?
UPDATE: here is my entry script:
#!/bin/bash
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
Actually I get no errors at runtime.
When you set an entrypoint in a Docker container, it is the only thing it will run; it's the one and only process that matters (PID 1). Once your entry_script.sh finishes running and returns an exit code, Docker thinks the container has done what it needed to do and exits, since the only process inside it has exited.
If you want to launch a shell inside the container, you can modify your entry point script like so:
#!/bin/bash
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
/bin/bash "$#"
This starts a shell after the repo update has been done. The container will now exit when the user quits the shell.
The -i and -t flags will make sure the session gives you stdin/stdout and will allocate a pseudo-TTY for you, but they will not automatically run bash for you. Some containers don't even have bash in them.
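A common refinement of this pattern, assuming you control the Dockerfile, is to make the script itself the ENTRYPOINT in exec form, with CMD ["/bin/bash"] as the default argument, and to end the script with exec so the requested command replaces the shell as PID 1 (a sketch):
#!/bin/bash
# entry_script.sh: one-time setup, then hand off to the requested command
echo "UPDATING GIT REPO"
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended"
# Replace this shell with whatever was passed on the docker run command line
# (or the CMD default), so it becomes PID 1 and receives signals directly
exec "$@"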
I think the original question and answer are pretty good (thank you!). However, I had the exact same problem, and the provided solution did not work for me; I ended up wasting a lot of time figuring out what I was doing wrong. So here is a solution that should work all the time, in case it saves time for others. In my Docker entrypoint I source a shell script file from the Intel compiler, and the received parameters $@ are changed by the source command, so when the script ends with /bin/bash "$@" the original parameters are gone. Here is my updated version, which should be safer for all use cases:
#!/bin/bash
# Save original parameters
allparams=("$#")
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
# Forward initial parameters
/bin/bash "${allparams[#]}"

How can I default to a login shell for Jenkins shell execution

I want to use rvm (or rbenv/chruby for that matter) to select different ruby versions from within my Jenkins jobs.
By default, Jenkins will use /bin/sh, which on Ubuntu, is dash.
For this to change, I can add
#!/bin/bash -l
to the top of every single shell build step, everywhere. Seeing as that's a lot of annoying work, I'd like to be able to set it somewhere central.
Using the "Shell executable" configuration setting, I can get it to run bash, adding parameters like '-l' however will fail with
"/bin/bash -l" -xe /tmp/hudson5660076222778817826.sh FATAL:
command execution failed java.io.IOException: Cannot run program
"/bin/bash -l" (in directory
"/home/jenkins/jobs/workspace/rvm-test"): error=2, No such file or
directory
I tried using the RVM plugin for Jenkins, but it doesn't even install on the current release version.
Any ideas? :)
You could work around it by creating a wrapper around bash:
#!/bin/sh
# e.g. /usr/local/bin/login-bash
exec /bin/bash -l "$#"
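Then make the wrapper executable and point Jenkins at it (paths are an example):
chmod +x /usr/local/bin/login-bash
# In Jenkins' global configuration, set "Shell executable" to /usr/local/bin/login-bash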
If you want to use the default ruby, just use rvm-shell, which comes with rvm.
Log in as the jenkins user and type:
$ which rvm-shell
/home/jenkins/.rvm/bin/rvm-shell
to get the path of the rvm-shell.
Use this path for the "Shell executable" option.

Setting path for whenever in cron so it can find ruby

My ruby is in /usr/local/bin. whenever can't find it, and setting PATH at the top of my crontab doesn't work either, I think because whenever runs the command inside a new bash instance.
# this does not work
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin
# Begin Whenever generated tasks for: foo
0 * * * * /bin/bash -l -c 'cd /srv/foo/releases/20110429110637 && script/rails runner -e production '\''ActiveRecord::SessionStore::Session.destroy_recent(15)'\'''
# End Whenever generated tasks for: foo
How can I tell whenever where my ruby binary is? Making a symbolic link from /usr/bin seems messy to me, but I guess that might be the only option.
This question offers env :PATH, "..." in schedule.rb as a solution, but (a) I can't find any documentation of that feature anywhere in the docs, and (b) it doesn't seem to have solved the asker's problem (unfortunately it takes non-trivial turnaround time for me to just try it).
Update: actually it is at the bottom of this page; I'll try it now.
More info:
I can't modify the cron command, because it's generated by whenever.
I verified that if I start a new bash shell with bash -l, /usr/bin/env finds ruby just fine.
I just tried the exact command from cron, starting with /bin/bash, from that user's command line, and it worked.
So this is very mysterious...
The solution is to put this in schedule.rb:
env :PATH, ENV['PATH']
Here's a little guide I put together on the topic.
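To apply it, regenerate the crontab and verify that the generated block now carries your PATH (a quick check, using the same whenever command that appears later in this thread):
bundle exec whenever --update-crontab
crontab -l   # the Whenever-generated block should now start with a PATH=... line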
Rewrite your crontab as:
0 * * * * { PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin ; export PATH ;/bin/bash -l -c 'cd /srv/foo/releases/20110429110637 && script/rails runner -e production '\''ActiveRecord::SessionStore::Session.destroy_recent(15)'\''' ; }
Or you should try to figure out why your bash shell is not picking up the PATH=... that is almost certainly in your .profile or .bash_profile.
I hope this helps.
As John Bachir pointed out, you can do it via env. But let me add more input. I am deploying on AWS OpsWorks, which unfortunately does not have a Ruby version manager (RVM, rbenv, etc.) installed by default.
The first thing I needed to do was SSH into the instance and figure out which Ruby I was using. This was easy enough by executing the which ruby command in a terminal:
$ which ruby
/usr/local/bin/ruby
Cron was using the ruby located at /usr/bin/ruby. This needed to be changed.
In schedule.rb, I have:
set :env_path, ''
env :PATH, @env_path if @env_path.present?
In local development, env_path doesn't need to be set. For most users, the only thing to do is execute whenever like this:
bundle exec whenever --set 'environment=development' --update-crontab
On a staging/production environment, Ruby may be installed elsewhere, so running this may be more appropriate:
bundle exec whenever --set 'environment=staging&env_path=/usr/bin/local' --update-crontab
You will need to replace /usr/bin/local with the output of echo $PATH.
In OpsWorks, however, I needed to create a custom Chef recipe that looks like this:
node[:deploy].each do |application, deploy|
  execute 'whenever' do
    user 'deploy'
    group 'nginx'
    cwd "#{deploy[:deploy_to]}/current"
    command "bundle exec whenever --set 'environment=#{deploy[:environment_variables][:RAILS_ENV]}&env_path=#{ENV['PATH']}' --update-crontab"
  end
end
I hope the information here is clear enough.