Docker permissions are different when running on CodeBuild - ruby

I am using ruby:2.5.1-alpine for my application's container image. When I run make release locally, which runs the following commands:
docker-compose build --pull release
docker-compose up --abort-on-container-exit test
The test stage runs the migrations, linter and specs. It runs without any issues. When I run the exact same commands on CodeBuild, which uses aws/codebuild/ruby:2.5.1 as the host environment, I get the following error:
Running RuboCop...
Inspecting 82 files
.................................................................................W
Offenses:
bin/console:1:1: W: Lint/ScriptPermission: Script file console doesn't have execute permission.
I checked the git permissions and everything looks kosher:
edit [...master ] git ls-tree HEAD bin/
100755 blob ad9a02fe6ed489beb105295de771cab5fa87a6af bin/console
100755 blob 9d87e9579b9c16c42d65301e8540888e044ba25d bin/run
100755 blob cf5febb7c6dd34aebdb862792fa147d06a9c5764 bin/setup
edit [...master ]
I added a debug statement to see what the permissions are at the time the tests run, and here is where it diverges.
Locally I get:
test_1_4d039799a73c | File permissions for bin/console are
#<File::Stat ino=10432530, **mode=0100755**, nlink=1, uid=1000, gid=1000,...>
And on the CodeBuild server I get:
File permissions for bin/console are
#<File::Stat ino=525319, **mode=0100666**, nlink=1, uid=0, gid=0, ...>
As I pasted the above, I also noticed that the UID and the GID are different. So it looks like the permissions are not being set correctly here:
RUN addgroup -g 1000 app && adduser -u 1000 -G app -D app
RUN chown -R app:app $APP_ROOT
WORKDIR $APP_ROOT
This was part of the issue:
https://forums.docker.com/t/not-able-to-change-permissions-for-files-and-folders-created-on-mounted-volumes/45769
After removing the volume for the stage that runs on CodeBuild, the UID and GID are correct but the permissions themselves are still off.
codebuild_1 | File permissions for bin/console are
#<File::Stat ino=2755035, mode=0100666, nlink=1, uid=1000, gid=1000...>
Not sure how to go about debugging this.
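One workaround I'm considering (just a sketch, not a verified fix) is to stop depending on the checked-out file modes and re-apply the execute bit inside the image, somewhere after the source is copied into $APP_ROOT:
RUN chmod +x bin/console bin/run bin/setup
That way the container would not care whether CodeBuild's checkout preserves the 0755 mode.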

Related

Setting a Laravel storage directory permission with .ebextensions

I'm working with Elastic Beanstalk .ebextensions. A storage-permission-denied error occurs on every deployment and I have to type a command to resolve it. Does the code below (.ebextensions/chmod.config) prevent the error from occurring?
container_commands:
  01addpermission:
    command: "chmod -R 755 /var/app/current/storage"
  01clearcache:
    command: "php /var/app/current config:cache"
The code sadly will not work. The reason is that container commands run while your app is in the staging folder, not in the current folder:
The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server.
You can try to use relative paths:
container_commands:
  01addpermission:
    command: "chmod -R 755 ./storage"
  02clearcache:
    command: "php . config:cache"
The alternative is to use a postdeploy platform hook, which runs commands after your app is deployed:
Files here run after the Elastic Beanstalk platform engine deploys the application and proxy server
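For example, a minimal sketch of such a hook, assuming an Amazon Linux 2 platform: add a file like .platform/hooks/postdeploy/01_storage_permission.sh (the name is illustrative) and make it executable with chmod +x:
#!/bin/bash
# Runs after deployment, so /var/app/current already points at the new release.
chmod -R 755 /var/app/current/storage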

Laravel on ElasticBeanstalk 'log file permission denied' error keeps coming up even though the permission is set in the .ebextensions config file

I am deploying my Laravel application to the ElasticBeanstalk environment, but I am having an issue with laravel.log file permissions.
I deployed my application using the "eb deploy" command. After deploying, I accessed my application, but it throws the following error.
The stream or file "/var/app/current/storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied
To solve the issue, I ssh into the server and run the following command.
sudo -u root chmod 777 -R /var/app/current/storage/logs
The problem is solved. Now my application is working. But when I deploy my application again with the "eb deploy" command, the issue pops up again. To solve the issue in a consistent way, I tried to run the command in the .ebextensions config file as follows.
container_commands:
  01-migrations:
    command: "php artisan migrate --force"
  02-log-storage-permissions:
    command: "sudo -u root chmod -R 777 /var/app/current/storage/logs/"
I could deploy my application. But the issue still persists. It seems like the command is not working. What is wrong with my configuration and how can I fix it?
I believe this is because container_commands run while your application is in the staging folder. Thus, after you run 02-log-storage-permissions, your /var/app/current will be replaced anyway with the staging folder, so your chmod won't persist.
To rectify the issue, you can try one of two options:
Use
  02-log-storage-permissions:
    command: "sudo -u root chmod -R 777 ./storage/logs/"
to change the logs in the staging folder.
Use a postdeploy platform hook to run your script. Files there run after the Elastic Beanstalk platform engine deploys the application and proxy server.
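A sketch of that option, assuming an Amazon Linux 2 PHP platform where the application runs as the webapp user (adjust the user and the file name to your setup): an executable .platform/hooks/postdeploy/01_fix_log_permissions.sh such as
#!/bin/bash
# Runs after /var/app/current has been swapped in, so the change survives the deploy.
chown -R webapp:webapp /var/app/current/storage/logs
chmod -R 755 /var/app/current/storage/logs
This avoids the blanket 777 while still letting the application write its logs.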

How can I run multi-line commands inside the spinnaker-spinnaker-halyard-0

I am writing a bash file with some scripts to install Spinnaker in a Kubernetes cluster (minikube). Everything is working fine and Spinnaker is installed, but when I exec into the halyard container and want to run a few more commands from my bash file, it enters the halyard container and then does not execute the next commands, because I don't know how to run multiple commands inside it. I tried \ and && as well, but they don't work.
These are my commands
kubectl exec --namespace spinnaker -it spinnaker-spinnaker-halyard-0 bash
hal config features edit --artifacts true
hal config artifact github enable
GITHUB_ACCOUNT_NAME=github_user
hal config artifact github account add ${GITHUB_ACCOUNT_NAME} \
--token
hal deploy apply
If I try kubectl exec --namespace spinnaker -it spinnaker-spinnaker-halyard-0 bash \ then it runs the next command (hal config features edit --artifacts true) but shows the error "unknown flag: --artifacts".
NOTE: If I run these commands manually in the CLI then everything works fine, but I want to run them from my bash file.
I'm assuming the commands that you want to run are not stored in a file in the container. If you add these commands to a script file (e.g. config-halyard.sh) and mount a persistent volume containing this script to the Halyard container, you should be able to execute it from outside the container with this command:
kubectl exec --namespace spinnaker -it spinnaker-spinnaker-halyard-0 /bin/bash config-halyard.sh
That is assuming that the script would be in the container's root directory.
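Alternatively, a sketch that avoids mounting a volume: pipe the commands into a single non-interactive exec session (drop -t, since there is no TTY when piping). Commands that prompt for input, such as the --token flag above, would still need to be handled separately:
cat <<'EOF' | kubectl exec --namespace spinnaker -i spinnaker-spinnaker-halyard-0 -- bash
hal config features edit --artifacts true
hal config artifact github enable
hal deploy apply
EOF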

How to change file ownership in Laravel via Docker using PowerShell

I'm using Docker on Windows. After I created an image from my Dockerfile in my Laravel project with docker build -t my-laravel-image, I tried to run it with docker run -p 8000:8000 my-laravel-image but got the error ERR_ADDRESS_INVALID in the browser.
After some digging I found that it is because /storage and /vendor don't have the right write permissions. However, I read on a Linux forum that giving permissions to everyone with chmod 777 is bad, so I should change the owner with chown instead. I couldn't find a command that would do the job in PowerShell, so I'm asking for help.
the tutorial I am following is https://www.techiediaries.com/docker-compose-laravel/
EDIT
I tried ICACLS "C:\Users\Dominykas\Projects\laravel" /setowner "administrator" and it succeeded, but I get the same error.
docker exec -it <your container> chown -R myuser:mygroup laraveldir
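For instance, assuming the image follows the common PHP convention of running the app as www-data under /var/www (the container name and paths are illustrative):
docker exec -it my-laravel-container chown -R www-data:www-data /var/www/storage /var/www/vendor
Note that this only changes the running container; to make the ownership persistent across container recreation, the same chown can be done with a RUN instruction in the Dockerfile.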

docker command not found when executed over bitbucket ssh pipeline

I'm using a Bitbucket pipeline to deploy my Laravel application. When I push to my repo it starts to build and works perfectly until the docker exec command, which sends an inline command to execute inside the php container; there I get the error
bash: line 3: docker: command not found
which is very weird, because when I run the command directly on the same server in the same directory it works perfectly. Docker is installed on the server and, as you can see inside execute.sh, docker-compose works with no issues; however, when running over the pipeline I get the error. Notice the pwd to make sure the command is executed in the right directory.
bitbucket-pipelines.yml
image: php:7.3
pipelines:
  branches:
    testing:
      - step:
          name: Deploy to Testing
          deployment: Testing
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip openssh-client
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer require phpunit/phpunit
            - vendor/bin/phpunit laravel/tests/Unit
            - ssh do.server.net 'bash -s' < execute.sh
Inside execute.sh it looks like this:
cd /home/docker/docker-laravel
docker-compose build && docker-compose up -d
pwd
docker exec -ti php sh -c "php helpershell.php"
exit
And the output from bitbucket pipeline build result looks like this :
Successfully built 1218483bd067
Successfully tagged docker-laravel_php:latest
Building nginx
Step 1/1 : FROM nginx:latest
---> 4733136e5c3c
Successfully built 4733136e5c3c
Successfully tagged docker-laravel_nginx:latest
Creating php ...
Creating mysql ...
Creating mysql ... done
Creating php ... done
Creating nginx ...
Creating nginx ... done
/home/docker/docker-laravel
bash: line 3: docker: command not found
I think that part of the reason this is happening is that docker-compose and docker are two separate commands; just because one works does not mean they both work. Also, you might want to check the indentation of your bitbucket-pipelines.yml file, because YAML can be pretty finicky.
See here for sample structure: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
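A quick way to check that over the same channel the pipeline uses (a diagnostic sketch, not a fix):
ssh do.server.net 'which docker; which docker-compose; echo $PATH'
If docker-compose resolves but docker does not, that confirms the two commands are installed or exposed differently on the server.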
Are you defining docker as a service in the bitbucket pipeline, according to the documentation, with a top level definitions entry? Like so:
definitions:
  services:
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1GB to 512MB
Alternatively, if docker is included and ready to use directly in the image the pipeline is running, you might try removing the services key from your step, as that could be conflicting with the docker installed on the image (and since you haven't instantiated the docker service via the top-level definitions entry posted above, the pipeline may end up in a state where it thinks docker isn't set up).

Resources