AWS Docker image not being pulled completely - bash

I have a custom Tomcat Docker image to which we add setenv.sh at build time.
After updating setenv.sh recently and redeploying, the change is not visible in AWS, but if I docker pull the built image locally the change is present.
Is there any reason why this could be happening?
FROM tomcat:7.0-jdk8
...
# Copy Application
COPY ./platform-war/ ./platform-war/
COPY ./scripts/ ./scripts/
...
# Tomcat additional env
RUN /bin/bash -c "ln -s $BASE_DIR/scripts/setenv.sh /usr/local/tomcat/bin/setenv.sh"
# Make scripts executable
RUN /bin/bash -c "chmod -R +x $BASE_DIR/scripts"
Inside the image in AWS, the link works as expected:
-rwxr-xr-x 1 root root 1975 Jul 2 2021 digest.sh
-rwxr-xr-x 1 root root 3718 Jul 2 2021 setclasspath.sh
lrwxrwxrwx 1 root root 32 Feb 3 23:15 setenv.sh -> /opt/base/scripts/setenv.sh
-rwxr-xr-x 1 root root 1912 Jul 2 2021 shutdown.sh
Yet the content of setenv.sh is not the expected one; it is an old version.
The image otherwise looks updated, since recent code changes are present.
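A common cause of this symptom is that the AWS host keeps running a previously pulled copy of the same tag. A minimal sketch for checking, with placeholder digest values; in practice you would read the real digests on each side (e.g. with docker inspect), and the image name in the comment is illustrative:

```shell
# Placeholder digests; in practice obtain them with, e.g.:
#   docker inspect --format '{{index .RepoDigests 0}}' my-tomcat:latest
# on the local machine and on the AWS host, then compare.
local_digest="sha256:1111111111111111"
remote_digest="sha256:2222222222222222"

if [ "$local_digest" = "$remote_digest" ]; then
  echo "digests match: both hosts run the same image build"
else
  echo "digests differ: the AWS host is running a stale image"
fi
```

If the digests differ, forcing a fresh pull of the tag (or deploying by digest instead of by tag) resolves the mismatch.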

Laravel storage sym:link not working in local environment

I am using the default Laravel folder structure and the public filesystem disk:
'public' => [
'driver' => 'local',
'root' => storage_path('app/public'),
'url' => env('APP_URL') . '/storage',
'visibility' => 'public',
],
Everything runs on docker and the complete laravel folder is mounted into /var/www/html:
#... Laravel PHP-FPM service definition
volumes:
- "./:/var/www/html"
#...
When I run php artisan storage:link and then cd /var/www/html/public and ls -la I see that the symlink exists:
lrwxr-xr-x 1 root root 32 May 5 11:19 storage -> /var/www/html/storage/app/public
If I then check whether the linked folder also exists in the container (cd /var/www/html/storage/app/public), everything is there as expected.
Also, checking ls -la /var/www/html/public/storage/ directly shows me the content of the linked folder, so everything is working from the symlink perspective:
# ls -la /var/www/html/public/storage/
total 512
drwxr-xr-x 4 www-data www-data 128 May 5 11:21 .
drwxr-xr-x 11 www-data www-data 352 Apr 19 14:03 ..
-rw-r--r-- 1 www-data www-data 14 Feb 16 16:58 .gitignore
-rw-r--r-- 1 www-data www-data 519648 May 5 10:58 sample.png
However, when opening the url /storage/sample.png the server returns 404.
The remaining contents of /var/www/html/public do work fine, so for example /var/www/html/public/test.png is visible under localhost/test.png.
On production, everything works fine though. Any ideas why the symbolic link is not working on the local system while the link is actually correct?
I already tried removing the storage link and setting the symlink again.
You mentioned that you are using Docker. I think the reason it doesn't work locally but does in production could be a configuration difference between the two deployments.
For it to work, the Nginx container must have access to the storage folder, since it is supposed to serve assets located there. I'm guessing that is currently not the case and only the public folder is mounted.
Check whether the entire project, or both ./public and ./storage, are mounted into the Nginx container.
Something like this:
services:
#...
nginx:
#...
volumes:
- "./public:/var/www/html/public:ro"
- "./storage/app:/var/www/html/storage/app:ro"
#...
Add these lines to the Nginx configuration:
location ^~ /storage/ {
    root /var/www/html/public;
}
Note that with the root directive the location path is appended to it, so the root must point at the public folder, not at public/storage/. Every request that starts with /storage/ will then be served directly from the shared path.
It's important to make sure your symlink is created from inside the docker container.
If you run storage:link from outside of your container, the symlink target will point at a path on your local machine, which may not exist inside the container.
Get into the container by running:
docker exec -it name_of_your_php_container bash
Once you are inside, you should be able to run php artisan
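Alternatively, a sketch of creating the link by hand with a relative target, so it resolves to the same folder whether it is read on the host or inside the container (the /tmp paths are illustrative stand-ins for the project root):

```shell
# Illustrative layout; in the real project these live under /var/www/html.
mkdir -p /tmp/laravel-demo/storage/app/public /tmp/laravel-demo/public

# Relative target: resolves correctly on both sides of the bind mount,
# unlike the absolute path that artisan storage:link records.
ln -sfn ../storage/app/public /tmp/laravel-demo/public/storage

readlink /tmp/laravel-demo/public/storage   # -> ../storage/app/public
```

Because the target is relative to the link's own directory, the same link works under any mount prefix.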

Unable to change permissions, after cloning and adding new files

drwxrwxr-x 2 rynostajcar-130991 rynostajcar-130991 4.0K Feb 20 16:13 .
drwxrwxr-x 4 rynostajcar-130991 rynostajcar-130991 4.0K Feb 20 16:13 ..
-rwxrwxr-x 1 rynostajcar-130991 rynostajcar-130991 347 Feb 20 16:13 console
-rw-r--r-- 1 root root 47 Feb 20 16:17 schedule
-rwxrwxr-x 1 rynostajcar-130991 rynostajcar-130991 131 Feb 20 16:13 setup
[16:33:12] (master) bin
// ♥ whoami
rynostajcar-130991
I recently cloned a repository I made this morning onto another computer and added a new file, schedule. I noticed I'm now unable to change the permissions of that file; I assume it's because the file is owned by root. I'm still new, so could someone explain how to change the owner from root to rynostajcar?
After looking around I found that my problem was the owner: I cannot change the file's permissions while it is owned by root. I used
sudo chown rynostajcar-130991:rynostajcar-130991 ./schedule
which allowed me to change the owner from root to rynostajcar-130991 and then change the permissions of my file schedule. I'm still having the problem that new files are always owned by root, so fixing that is what I'm working on next.
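A sketch of the diagnosis (the file name is illustrative): inspect the owner first, then change it; chown itself needs root privileges, which is why the sudo is required:

```shell
# Create a file and inspect its owner and group (GNU stat syntax).
touch /tmp/schedule-demo
stat -c '%U:%G' /tmp/schedule-demo   # prints owner:group, e.g. root:root

# Changing the owner requires root privileges:
# sudo chown rynostajcar-130991:rynostajcar-130991 /tmp/schedule-demo
```

If new files keep coming out root-owned, check whether they are being created by a root process (e.g. via sudo or a container running as root).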

No such file or directory when run as service?

When I run my application from a terminal with sudo -u www-data ./scarga and open the browser, the template file is served fine and everything is OK. The command is executed from the /var/www/html/scarga.local/ directory.
When I run my application as a service with sudo service scarga start, it says: open ./resources/views/index.html: no such file or directory
File with HTTP handler: https://pastebin.com/MU7YDAWV
The scarga.service file: https://pastebin.com/eBL3jJFx
Tree of project: https://pastebin.com/rFVa8A3P
Rights for the index.html file:
-rwxr-xr-x 1 www-data www-data 3586 Jun 28 14:27 index.html
Why does this happen, and how can I solve it?
Thank you!
You need to set the correct working directory in your unit file using WorkingDirectory= - presumably:
WorkingDirectory=/var/www/html/scarga.local
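For reference, a minimal unit file sketch; the Description, User, and binary path are assumptions based on the question (the actual scarga.service is only available via the pastebin link):

```
[Unit]
Description=scarga web service
After=network.target

[Service]
User=www-data
# Relative paths like ./resources/views/index.html resolve against this:
WorkingDirectory=/var/www/html/scarga.local
ExecStart=/var/www/html/scarga.local/scarga
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Without WorkingDirectory=, systemd starts the process from / (the root directory), so the relative path in the open() call fails even though the file exists.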

How can I put a Mac OS X .app under version control as one file?

I am quite new to Git; I use SourceTree on Mac OS X.
When I start tracking an .app file, Git treats it as multiple files inside the .app content.
Is there any way to make another Git client only get that .app file as an application package rather than as multiple files?
These .app things are not files; they're directories. To convince yourself of this, try running
ls -la /Applications/
and note the d at the beginning of the permission flags corresponding to *.app entries:
...
drwxr-xr-x+ 3 root wheel 102 31 Jan 16:32 App Store.app
drwxr-xr-x+ 3 root wheel 102 31 Jan 16:32 Automator.app
drwxr-xr-x+ 3 root wheel 102 31 Jan 16:32 Calculator.app
drwxr-xr-x+ 3 root wheel 102 31 Jan 16:32 Calendar.app
drwxr-xr-x+ 3 root wheel 102 31 Jan 16:32 Chess.app
drwxr-xr-x+ 3 root wheel 102 31 Jan 16:32 Contacts.app
...
As long as Unix sees those as directories, so will Git. If you want Git to treat a *.app as a single file, you'll have to archive its contents in some fashion or another.
.app, .framework, .bundle, .pages, and the like are all application packages: directories/folders on your disk.
Both SVN and Git see them as directories, because they have to support Windows/Linux as well. Some Git clients might show an application package as a "file", but I think showing it as a directory is better.
Another way to store a .app as a single file is to zip it and commit the archive; that keeps things clean.
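A sketch of the archive-before-commit approach; the bundle name and contents are made up for illustration, and tar stands in for any archiver:

```shell
# Build a stand-in .app bundle (which is just a directory tree) for the demo.
mkdir -p /tmp/appdemo/MyTool.app/Contents
echo '<plist/>' > /tmp/appdemo/MyTool.app/Contents/Info.plist

# Archive the bundle into a single file; commit the archive, not the bundle.
cd /tmp/appdemo
tar -czf MyTool.app.tgz MyTool.app

# The archive now travels through Git as one opaque file.
tar -tzf MyTool.app.tgz
```

The trade-off is that Git can no longer diff or merge the bundle's contents; it tracks the archive as a single binary blob.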

"(.hg not found)" when cloning from cygwin-hosted repository via ssh to Mac and FreeBSD clients

I am running an archival filesystem on a Windows server that does automatic offsite replication which makes it the ideal place to host Mercurial repositories and be sure of their safety. This filesystem is Windows-only so I have no choice but to use Windows as the host OS.
Setup:
Server ('ungoliant', internal to my network):
Windows 7 master repository host machine.
Cygwin ssh daemon.
hg 1.9.3
Created new repository "/cygdrive/j/mercurial/rcstudio" on Windows with "hg init".
Clients:
Mac OSX 10.7.4 running hg 2.1.2
FreeBSD 8.3 running hg 2.1.2
Problem on Clients (identical on Mac and FreeBSD):
$ hg clone ssh://cjp@ungoliant//cygdrive/j/mercurial/rcstudio
running ssh cjp@ungoliant 'hg -R /cygdrive/j/mercurial/rcstudio serve --stdio'
remote: abort: There is no Mercurial repository here (.hg not found)!
abort: no suitable response from remote hg!
I've confirmed that the path and the URI are correct. There absolutely is an ".hg" directory there:
$ ls -la
total 0
drwxr-xr-x 3 cjp staff 102 Jun 5 17:41 .
drwxr-xr-x 41 cjp staff 1394 Jun 5 16:30 ..
$ scp -r cjp@ungoliant:/cygdrive/j/mercurial/rcstudio/.hg .
$ ls -la
total 0
drwxr-xr-x 3 cjp staff 102 Jun 5 17:41 .
drwxr-xr-x 41 cjp staff 1394 Jun 5 16:30 ..
drwxr-xr-x 5 cjp staff 170 Jun 5 17:41 .hg
$ ls -la .hg
total 16
drwxr-xr-x 5 cjp staff 170 Jun 5 17:41 .
drwxr-xr-x 3 cjp staff 102 Jun 5 17:41 ..
-rw-r--r-- 1 cjp staff 57 Jun 5 17:41 00changelog.i
-rw-r--r-- 1 cjp staff 33 Jun 5 17:41 requires
drwxr-xr-x 2 cjp staff 68 Jun 5 17:41 store
I've found plenty of stackoverflow questions where the issue was an improperly formatted ssh URI ... mine is formatted correctly and describes an absolute path.
I have confirmed all the following:
hg commands run fine both on the server and through ssh.
If I paste the absolute path I am able to confirm the existence of the .hg directory through ssh.
Other hg commands manually issued through ssh succeed (e.g.: ssh cjp@ungoliant 'cd /cygdrive/j/mercurial/rcstudio; hg diff;').
On the host machine I can clone locally from that same absolute path.
I'm stumped here. The Mercurial docs make it sound like 2.1.2 should be able to clone from 1.9.3 parents, so it doesn't appear to be a version conflict.
Would very much appreciate your help! Thanks!
What I have ended up with is a workaround and not an answer.
My own ignorance of remote ssh command processing led to a misunderstanding. If I ssh'd into the machine, which hg returned the Cygwin-aware binary /usr/bin/hg, but if I ran a remote command, ssh cjp@ungoliant 'which hg', it returned the path to a Windows binary I also have installed on that machine.
I played with my .bashrc file and could not get remote command execution to prioritize the cygwin binary. The PATH doesn't even include the Windows PATH entries, so I started hacking around the problem and eventually decided to stop fighting with it.
I have since abandoned the entire idea because I'm convinced that even if I get it to work it will be a delicate kludge and not at all worth the time I've spent on it. I now host my repositories on a BSD box and I have a cron on the Windows machine that pulls from that repository onto the archival filesystem once every 24 hours. It's simple and worked like a charm with zero problems.
I had the same issue and solved it by adding the --remotecmd option, like this:
hg clone --remotecmd /usr/bin/hg ssh://me@mywindowspc//cygdrive/m/hgrepo
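The same override can be made permanent in the client's ~/.hgrc via the ui.remotecmd setting, so every ssh:// operation against that host uses the Cygwin binary without repeating the flag:

```
[ui]
# Path to the hg executable on the remote side of ssh:// URLs.
remotecmd = /usr/bin/hg
```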
When you attempt an scp, you would do the following:
scp username@ipaddr:/cygdrive/d/test/foo.txt ./myDestFolder
Unfortunately, for hg that does not work. Instead try:
hg clone ssh://username@ipaddr/D:/testRepo