Elastic Beanstalk throwing a permission denied error on whenever config - ruby

I am trying to integrate the whenever gem into my Elastic Beanstalk (Amazon Linux 1) app, but the cron job generated from my schedule.rb throws a permission denied error when trying to create its output/error log files.
This is my schedule.rb
require 'dotenv/load'

set :job_template, "TZ=\"Asia/Kuala_Lumpur\" bash -c ':job'"
env :PATH, ENV['PATH']

every 1.minute do
  if (ENV['RAILS_APP_TYPE'] == "single") && (ENV['RACK_ENV'] == 'develop')
    rake 'sidekiq:send_alert', :output => {:error => "log/sidekiq_check_error.log", :standard => "log/sidekiq_check.log"}
  end
end
and this is my .ebextensions/whenever.config
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/update_cron_tab.sh":
    mode: "000755"
    content: |
      #!/bin/sh
      RAILS_APP_TYPE=$(/opt/elasticbeanstalk/bin/get-config environment -k RAILS_APP_TYPE)
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "WHENEVER GEM CRON"
      echo "$RACK_ENV"
      EB_SCRIPT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k script_dir)
      EB_SUPPORT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_dir)
      EB_APP_CURRENT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir)
      EB_CONFIG_APP_LOGS=$(/opt/elasticbeanstalk/bin/get-config container -k app_log_dir)
      echo "$EB_CONFIG_APP_LOGS"
      touch "$EB_APP_CURRENT_DIR/log/sidekiq_check.log"
      touch "$EB_APP_CURRENT_DIR/log/sidekiq_check_error.log"
      sudo chown webapp:webapp "$EB_APP_CURRENT_DIR/log/sidekiq_check.log"
      sudo chown webapp:webapp "$EB_APP_CURRENT_DIR/log/sidekiq_check_error.log"
      . $EB_SUPPORT_DIR/envvars
      . $EB_SCRIPT_DIR/use-app-ruby.sh
      echo '* * * * * TZ="Asia/Kuala_Lumpur" bash -c 'cd $EB_APP_CURRENT_DIR && RAILS_ENV=$RACK_ENV' bundle exec rake sidekiq:send_alert --silent >> $EB_APP_CURRENT_DIR/log/sidekiq_check.log 2>> $EB_APP_CURRENT_DIR/log/sidekiq_check_error.log'
      sudo su - webapp -c "cd $EB_APP_CURRENT_DIR; whenever --update-crontab --set environment='$RACK_ENV'"
      sudo su - webapp -c "crontab -l"
When this gets deployed, I end up with an error in my /var/spool/mail/ec2-user file that says permission denied: log/sidekiq_check.log
Is there a reason why I get this error? And how can I fix it?
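A likely cause (an assumption based on the config shown, not on extra logs): cron jobs inherit whatever working directory they are started from, so the relative :output paths like log/sidekiq_check.log resolve against that directory rather than the deployed app; whenever's default job template cd's into the app directory, and the custom :job_template above drops that cd. The failure mode can be sketched locally without cron at all (all paths here are invented temp paths for illustration):

```shell
#!/bin/sh
# Sketch: a relative redirection target resolves against the job's current
# working directory. If that directory has no log/ subdir (or the crontab's
# user cannot write there), the shell fails before the rake task even runs;
# depending on what exists, the error is "No such file" or "Permission denied".
workdir=$(mktemp -d)
cd "$workdir"                     # stand-in for the cron user's HOME
sh -c 'echo hi >> log/check.log' 2>err.txt || echo "relative redirect failed"
cat err.txt

# Using an absolute path into the app directory removes the ambiguity:
mkdir -p "$workdir/app/log"
sh -c "echo hi >> $workdir/app/log/check.log" && echo "absolute redirect ok"
```

The analogous fix in schedule.rb would be absolute :output paths (e.g. under the deployed app dir), or a :job_template that cd's into the app before redirecting.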

Related

permission denied with yapf in ssh script because of make

I have this strange error: I have a makefile with a format target that runs yapf -ir -vv --style pep8 .
When I ssh into my Debian 11 server as my user and run make format, it works. When I run sudo make format I get the error:
yapf -ir -vv --style pep8 .
make: yapf: No such file or directory
make: *** [makefile:5: format] Error 127
When I run an ssh script from my local machine that logs into the server and runs make format, I get the following error:
yapf -ir -vv --style pep8 .
make: yapf: Permission denied
make: *** [makefile:5: format] Error 127
I also get similar output for linting:
/bin/sh: 1: pylint: Permission denied
make: *** [makefile:7: lint] Error 127
I've tried changing the owner with chown to my user, and I've tried giving read and write permissions to user, group, and other, and it's the same... so I suspect it's a sudo thing, but I'm not sure.
The user belongs to the sudo group, but I'm confused why the script doesn't work when running it manually does...
The script is like this :
ssh -p 4444 -i /mykey user@ip << 'ENDSSH'
echo 'deploying zabbix'
cd /zabbix && docker-compose up -d
echo 'zabbix deployed'
echo 'copying zabbix module to container'
sudo mkdir -p /var/lib/zabbix/modules/dockermodule
docker cp /zabbix/zabbix_module_docker.so zabbixserver:/var/lib/zabbix/modules/dockermodule
echo 'copied zabbix module to container done'
echo 'extracting tar file'
sudo rm -rf /app/* && sudo tar -xf /tmp/project.tar -C /
sudo chown -R user /app
sudo chmod -R u=rwx /app
echo 'tar file extracted'
echo 'going into app folder'
cd /app
echo 'getting rid of hidden macos files'
sudo find . -type f -name '._*' -delete
echo 'hidden macos files deleted'
echo 'running install'
make install
echo 'install done'
echo 'running format'
make format
echo 'format done'
echo 'running lint'
make lint
echo 'lint done'
echo 'running test'
make test
echo 'test done'
#echo 'running vulnerability check'
#trivy fs --security-checks vuln --severity HIGH,CRITICAL / > security_check.txt
#echo 'vuln check done'
echo 'running docker build & run'
make docker
echo 'docker built and running'
ENDSSH
The make targets are:
format:
	yapf -ir -vv --style pep8 .
lint:
	cd ..; pylint app --verbose --disable=R,C -sy
The commands do not fail when I replace make format or make lint in my script with the commands they run...
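Both "No such file or directory" and "Permission denied" from make here usually come down to PATH differing between an interactive login shell, sudo (which applies secure_path), and a non-interactive ssh session. A local sketch of the effect (the yapf-demo name and temp dir are invented stand-ins for a pip-installed tool in ~/.local/bin):

```shell
#!/bin/sh
# Sketch: the same command succeeds or fails purely depending on PATH.
bindir=$(mktemp -d)
cat > "$bindir/yapf-demo" <<'EOF'
#!/bin/sh
echo "formatted"
EOF
chmod +x "$bindir/yapf-demo"

# Interactive-style PATH that includes the tool's directory: works.
PATH="$bindir:$PATH" yapf-demo

# Restricted PATH, as sudo's secure_path or a non-login shell may give you:
PATH="/usr/bin:/bin" sh -c 'yapf-demo' 2>&1 || echo "not found on this PATH"
```

Possible workarounds, under that assumption: call the tool by absolute path in the makefile, or preserve the environment explicitly, e.g. sudo env "PATH=$PATH" make format.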

Remote execution of SSH command hangs in ruby using Net::SSH for a particular command

I found a similar question here, but the answer in such a question didn't work for me.
I am trying to connect to a remote ssh server from Ruby using Net::SSH.
It works fine for all the other commands in my script, and I can read their output successfully.
But when I use the command below, it gets stuck in SSH.exec!(cmd) and control never returns from that line; only pressing Ctrl+C on the command line ends the script.
sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25;'
This is the script I run from a SSH terminal with no issue:
sudo -S su root -c 'pkill cockroach'
sudo -S su root -c '
cd ~;
mv /home/ubuntu/certs /home/ubuntu/certs.back.back;
mkdir /home/ubuntu/certs;
mkdir -p /home/ubuntu/my-safe-directory;
cockroach cert create-ca --allow-ca-key-reuse --certs-dir=/home/ubuntu/certs --ca-key=/home/ubuntu/my-safe-directory/ca.key;
cockroach cert create-node localhost 34.207.235.139 172.31.17.244 $(hostname) --certs-dir /home/ubuntu/certs --ca-key /home/ubuntu/my-safe-directory/ca.key;
cockroach cert create-client root --certs-dir=/home/ubuntu/certs --ca-key=/home/ubuntu/my-safe-directory/ca.key;
'
sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25;'
This is the Ruby script that attempts to do exactly the same, but it gets stuck:
require 'net/ssh'
ssh = Net::SSH.start('34.207.235.139', 'ubuntu', :keys => './plank.pem', :port => 22)
s = "sudo -S su root -c 'pkill cockroach'"
print "#{s}... "
puts ssh.exec!(s)
s = "sudo -S su root -c '
cd ~;
mv /home/ubuntu/certs /home/ubuntu/certs.back.#{rand(1000000)};
mkdir /home/ubuntu/certs;
mkdir -p /home/ubuntu/my-safe-directory;
cockroach cert create-ca --allow-ca-key-reuse --certs-dir=/home/ubuntu/certs --ca-key=/home/ubuntu/my-safe-directory/ca.key;
cockroach cert create-node localhost 34.207.235.139 172.31.17.244 $(hostname) --certs-dir /home/ubuntu/certs --ca-key /home/ubuntu/my-safe-directory/ca.key;
cockroach cert create-client root --certs-dir=/home/ubuntu/certs --ca-key=/home/ubuntu/my-safe-directory/ca.key;
'"
print "Installing SSL certifications... "
puts "done (#{ssh.exec!(s)})"
s = "sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25;'"
print "Running start command... "
puts "done (#{ssh.exec!(s)})"
# Use this command to verify the node is running:
# ps ax | grep cockroach | grep -v grep
s = "ps ax | grep cockroach | grep -v grep"
print "#{s}... "
sleep(10)
puts "done (#{ssh.exec!(s)})"
ssh.close
exit(0)
Here is the output of the Ruby script:
C:\code2\blackstack-deployer\examples>ruby start-crdb-environment.rb
sudo -S su root -c 'pkill cockroach'...
Installing SSL certifications... done ()
Running start command...
As you can see, the command gets stuck in the line Running start command...
I tried putting the command in the background:
s = "sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25 &'"
print "Running start command... "
puts "done (#{ssh.exec!(s)})"
but what happened is that the cockroach process never started (ps ax | grep cockroach | grep -v grep returned nothing)
I figured out how to fix it.
I added > /dev/null 2>&1 at the end of the command, and it worked.
cockroach start --background --max-sql-memory=.25 --cache=.25 --advertise-addr=%net_remote_ip%:%crdb_database_port% --certs-dir=%crdb_database_certs_path%/certs --store=%name% --listen-addr=%eth0_ip%:%crdb_database_port% --http-addr=%eth0_ip%:%crdb_dashboard_port% --join=%net_remote_ip%:%crdb_database_port% > /dev/null 2>&1;
Are there any logs/output in the cockroach-data/logs directory (it should be located wherever you run the start command from)?
Or perhaps try redirecting stdout+stderr to a file and seeing if there is any output there.
My hypothesis is that the CRDB process isn't starting correctly and so control isn't being returned to the terminal. The cockroachdb docs say that the --background flag only returns control when the crdb process is ready to accept connections. And the question/answer you linked noted that "SSH.exec! will block further execution until the command returns".
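The blocking behavior can be reproduced locally without cockroach or ssh at all: command substitution, like SSH.exec!, reads until EOF, and a backgrounded child that inherits stdout keeps the stream open. A sketch (the 2-second sleep stands in for the daemonized process):

```shell
#!/bin/sh
# Sketch: $( ... ) waits for EOF on the pipe, and a background child that
# inherits stdout holds the write end open until it exits.
start=$(date +%s)
out=$(sh -c 'sleep 2 & echo started')          # sleep keeps stdout open
echo "held for $(( $(date +%s) - start ))s"    # roughly 2s

start=$(date +%s)
out=$(sh -c 'sleep 2 >/dev/null 2>&1 & echo started')  # descriptors released
echo "returned after $(( $(date +%s) - start ))s"      # roughly 0s
```

This is consistent with the fix found above: redirecting the daemon's stdout/stderr to /dev/null lets the channel reach EOF immediately, so exec! returns while the process keeps running.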

Lambda gives a "No such file or directory" error (it can't find the script file) while running a bash script inside a container, but the same image works locally

I am creating a Lambda function from a Docker image. The image runs a bash script inside the container, but when I test the function it gives the following error, even though it runs successfully locally. I tested with the entrypoint both commented and uncommented. Please help me figure it out.
The dockerfile -
FROM amazon/aws-cli
USER root
ENV AWS_ACCESS_KEY_ID XXXXXXXXXXXXX
ENV AWS_SECRET_ACCESS_KEY XXXXXXXXXXXXX
ENV AWS_DEFAULT_REGION ap-south-1
# RUN mkdir /tmp
COPY main.sh /tmp
WORKDIR /tmp
RUN chmod +x main.sh
RUN touch file_path_final.txt
RUN touch file_path_initial.txt
RUN touch output_final.json
RUN touch output_initial.json
RUN chmod 777 file_path_final.txt
RUN chmod 777 file_path_initial.txt
RUN chmod 777 output_final.json
RUN chmod 777 output_initial.json
RUN yum install jq -y
# ENTRYPOINT ./main.sh ; /bin/bash
ENTRYPOINT ["/bin/sh", "-c" , "ls && ./tmp/main.sh"]
The error -
START RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Version: $LATEST
/bin/sh: ./tmp/main.sh: No such file or directory
/bin/sh: ./tmp/main.sh: No such file or directory
END RequestId: 8d689260-e500-45d7-aac8-ae260834ed96
REPORT RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Duration: 58.29 ms Billed Duration: 59 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Error: Runtime exited with error: exit status 127
Runtime.ExitError
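One plausible explanation, before reaching for a custom runtime (an assumption based on the Dockerfile shown, not on Lambda internals): WORKDIR is /tmp, so ./tmp/main.sh resolves to /tmp/tmp/main.sh, which does not exist; an absolute path does not depend on where the runtime starts the process. A local sketch with invented temp paths:

```shell
#!/bin/sh
# Sketch: ./tmp/main.sh is relative to the CURRENT working directory.
# With the working directory already at /tmp, it looks for /tmp/tmp/main.sh.
root=$(mktemp -d)
mkdir "$root/tmp"
printf '#!/bin/sh\necho ran\n' > "$root/tmp/main.sh"
chmod +x "$root/tmp/main.sh"

cd "$root/tmp"                                # stand-in for WORKDIR /tmp
sh -c './tmp/main.sh' 2>&1 || echo "relative lookup failed"
sh -c "$root/tmp/main.sh"                     # absolute path works anywhere
```

Under that assumption, ENTRYPOINT ["/bin/sh", "-c", "ls && /tmp/main.sh"] (or ./main.sh) would sidestep the working-directory dependency.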
Here is how I did it to run a C++ binary via a bash script:
#Pulling the node image from the AWS WCR PUBLIC docker hub.
FROM public.ecr.aws/lambda/provided:al2.2022.10.11.10
#Setting the working directory to /home.
WORKDIR ${LAMBDA_RUNTIME_DIR}
#Copying the contents of the current directory to the working directory.
COPY . .
#This is installing ffmpeg on the container.
RUN yum update -y
# Install sudo, wget and openssl, which is required for building CMake
RUN yum install sudo wget openssl-devel -y
# Install development tools
RUN sudo yum groupinstall "Development Tools" -y
# Download, build and install cmake
RUN yum install -y make
#RUN wget https://github.com/Kitware/CMake/releases/download/v3.22.3/cmake-3.22.3.tar.gz && tar -zxvf cmake-3.22.3.tar.gz && cd ./cmake-3.22.3 && ./bootstrap && make && sudo make install
RUN yum -y install gcc-c++ libcurl-devel cmake3 git
RUN ln -s /usr/bin/cmake3 /usr/bin/cmake
RUN ln -s /usr/bin/ctest3 /usr/bin/ctest
RUN ln -s /usr/bin/cpack3 /usr/bin/cpack
# get cmake version
RUN cmake --version
RUN echo $(cmake --version)
#This is building the project inside the container.
RUN ./build.sh
RUN chmod 755 run.sh bootstrap
#This is running the handler script.
CMD [ "run.sh" ]
You will need a bootstrap file in the root directory (from the AWS custom runtime docs):
#!/bin/sh
set -euo pipefail
# Initialization - load function handler
source $LAMBDA_RUNTIME_DIR/"$(echo $_HANDLER | cut -d. -f1).sh"
# Processing
while true
do
  HEADERS="$(mktemp)"
  # Get an event. The HTTP request will block until one is received
  EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  # Extract request ID by scraping response headers received above
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
  # Run the handler function from the script
  RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA")
  # Send the response
  curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done

Permissions when trying to zip through SSH

I don't know how to get permission to run the zip command on the remote server. When I am on the server, running sudo -u the_user bash first fixes this, but running the command through an ssh connection generates the error zip I/O error: Permission denied.
This snippet produces that error:
ssh "${SERVER}" \
"bash -s" <<'ENDSSH'>&1
zip -r "${DIR_NAME}.zip" $DIR_NAME
ENDSSH
If I add sudo -u the_user bash like so:
ssh "${SERVER}" \
"bash -s" <<'ENDSSH'>&1
sudo -u the_user bash
zip -r "${DIR_NAME}.zip" $DIR_NAME
ENDSSH
...I'm getting:
sudo: sorry, you must have a tty to run sudo
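Two separate issues are visible in the snippet (assumptions from the quoting shown, not from extra logs): sudo refuses to run without a tty, which ssh -t would force; and the quoted heredoc (<<'ENDSSH') keeps ${DIR_NAME} from expanding on the local machine, so the remote shell sees the literal text and, with DIR_NAME unset there, zips an empty name. The heredoc behavior can be sketched locally:

```shell
#!/bin/sh
# Sketch: quoted vs unquoted heredocs. DIR_NAME is set only in the LOCAL
# shell (unexported), standing in for a variable defined on your machine
# but not on the server.
DIR_NAME=backup

sh -s <<'EOF'
echo "quoted heredoc sees: [${DIR_NAME}]"
EOF

sh -s <<EOF
echo "unquoted heredoc sees: [${DIR_NAME}]"
EOF
# With an unquoted heredoc the local shell substitutes "backup" before
# the text is sent; a hypothetical remote invocation could then be:
#   ssh -t "$SERVER" "sudo -u the_user zip -r '${DIR_NAME}.zip' '${DIR_NAME}'"
```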

echo string into file fails when cp succeeds

OS: Ubuntu 14.04
In the /home/ubuntu directory, I created the following script:
echo >000-default.conf.test
sudo cp 000-default.conf.test /etc/apache2/sites-enabled/000-default.conf.test
sudo echo 'this is a test'>> /etc/apache2/sites-enabled/000-default.conf.test
sudo cat /etc/apache2/sites-enabled/000-default.conf.test
When I run the script, I get the following error message:
./test_f.sh: line 3: /etc/apache2/sites-enabled/000-default.conf.test: Permission denied
Any ideas why I am getting the error message when the copy operation is succeeding?
Sure.
Redirecting output into files is done by the shell, not by sudo. So if the shell is running as an unprivileged user, the >> redirection is performed before sudo acquires privileges.
You can use the following approach:
echo >000-default.conf.test
sudo cp 000-default.conf.test /etc/apache2/sites-enabled/000-default.conf.test
echo 'this is a test' | sudo tee -a /etc/apache2/sites-enabled/000-default.conf.test >/dev/null
sudo cat /etc/apache2/sites-enabled/000-default.conf.test
By the way, instead of
echo >000-default.conf.test
you can use
touch 000-default.conf.test
or even
>000-default.conf.test
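The mechanism in the explanation above can be demonstrated without sudo at all: the calling shell opens the redirection target before the command runs, so even a nonexistent command still creates the file (a sketch; the command name is invented):

```shell
#!/bin/sh
# Sketch: redirection is performed by the invoking shell, not the command.
dir=$(mktemp -d)
no_such_command_xyz > "$dir/created_anyway" 2>/dev/null || true
test -e "$dir/created_anyway" && echo "file exists"
# Likewise, `sudo echo x >> file` elevates only echo; the unprivileged
# shell opens file. `echo x | sudo tee -a file` moves the open into a
# privileged process, which is why the tee form works.
```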
