I have made changes to my Merb application and deployed them to UAT for testing, but I am getting
Permission denied - /mnt/project-name/config/../tmp/ruby-inline/.ruby_inline
I checked the permissions on that path, following the guidance for a similar error:
Permission denied - /tmp/.ruby_inline/Inline_ImageScience_cdab.c
But I wasn't able to solve it, so I reverted my changes and deployed the old SHA, which had been running fine, yet I get the same "permission denied" error with the old SHA.
I understand that the issue is not with the changes I made but with something else, and I can't work out what is going wrong or how to fix it. Please help me with this. Thanks.
Adding the permissions on this path below ...
ls -l /mnt/project-name/config/../tmp/ruby-inline/.ruby_inline
-rw-r--r-- 1 nobody nogroup 24571 2013-03-13 18:54 Inline_RawParseTree_ab80.c
-rwxr-xr-x 1 nobody nogroup 33465 2013-03-13 18:54 Inline_RawParseTree_ab80.so
ls -l /mnt/project-name/config/../tmp/
lrwxrwxrwx 1 root root 22 2013-03-13 18:54 pids -> /project-name/shared/pids
-rw-r--r-- 1 root root 69 2013-03-13 18:55 restart.txt
drwx------ 3 nobody nogroup 4096 2013-03-13 18:54 ruby-inline
I don't understand what the issue is. Does cap deploy change the file permissions?
This is a permission problem. The user that your application is running as is not allowed to write to /tmp/.ruby_inline. You need to either fix the permissions or ensure that the application is running as a user that has those permissions.
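A minimal sketch of both options (the user name merbuser is a placeholder; use whatever ps actually reports for your application processes):
ps aux | grep merb                         # check which user the application processes run as
# Option 1: give the inline cache to that user (replace merbuser with the real account)
sudo chown -R merbuser:merbuser /mnt/project-name/tmp/ruby-inline
# Option 2: leave the ownership (nobody:nogroup) alone and start the app as that user instead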
I have a custom Tomcat Docker image in which we add setenv.sh while building the image.
After recently updating setenv.sh and redeploying, the change is not visible in AWS, but if I docker pull the built image locally, the change is present.
Is there any reason why this could be happening?
FROM tomcat:7.0-jdk8
...
# Copy Application
COPY ./platform-war/ ./platform-war/
COPY ./scripts/ ./scripts/
...
# Tomcat additional env
RUN /bin/bash -c "ln -s $BASE_DIR/scripts/setenv.sh /usr/local/tomcat/bin/setenv.sh"
# Make scripts executable
RUN /bin/bash -c "chmod -R +x $BASE_DIR/scripts"
And inside the image in AWS the link is working as expected:
-rwxr-xr-x 1 root root 1975 Jul 2 2021 digest.sh
-rwxr-xr-x 1 root root 3718 Jul 2 2021 setclasspath.sh
lrwxrwxrwx 1 root root 32 Feb 3 23:15 setenv.sh -> /opt/base/scripts/setenv.sh
-rwxr-xr-x 1 root root 1912 Jul 2 2021 shutdown.sh
Yet the content of setenv.sh is not the expected one; it is an old version.
The image otherwise looks updated, because it contains the recent code changes.
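For what it's worth, a hedged diagnostic sketch (image and container names below are placeholders): compare the image ID of what you built locally with the image the AWS container is actually running, and read the file through the symlink inside that container.
docker image inspect --format '{{.Id}}' my-registry/platform:latest    # ID of the freshly built/pulled image
docker ps --format '{{.ID}} {{.Image}}'                                # which image the running container was started from
docker exec <container-id> cat /usr/local/tomcat/bin/setenv.sh         # what the container actually resolves through the symlink
If the IDs differ, the AWS deployment is still running an older build of the same tag rather than the image you just pushed.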
drwxrwxr-x 2 rynostajcar-130991 rynostajcar-130991 4.0K Feb 20 16:13 .
drwxrwxr-x 4 rynostajcar-130991 rynostajcar-130991 4.0K Feb 20 16:13 ..
-rwxrwxr-x 1 rynostajcar-130991 rynostajcar-130991 347 Feb 20 16:13 console
-rw-r--r-- 1 root root 47 Feb 20 16:17 schedule
-rwxrwxr-x 1 rynostajcar-130991 rynostajcar-130991 131 Feb 20 16:13 setup
[16:33:12] (master) bin
// ♥ whoami
rynostajcar-130991
I recently cloned a repository I made this morning onto another computer and added a new file, 'schedule'. I noticed I'm now unable to change the permissions of that file; I assume it's because the file is owned by root. I'm still new, so could someone explain how to change the owner from root to rynostajcar?
After looking around I found out my problem was the owner: I cannot change the file permissions while the file is owned by root. I used
sudo chown rynostajcar-130991:rynostajcar-130991 ./schedule
which allowed me to change the owner from root to rynostajcar-130991 and then change the permissions of my file schedule. I'm still having problems with new files always ending up owned by root, so I'm working on how to fix that next.
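For anyone else hitting this, a minimal before/after sketch (file name and owner taken from the listing above):
ls -l schedule
# -rw-r--r-- 1 root root 47 Feb 20 16:17 schedule      <- owned by root, so chmod fails for the normal user
sudo chown rynostajcar-130991:rynostajcar-130991 schedule
chmod u+x schedule                                      # now allowed, since the file is owned by rynostajcar-130991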
When I run my application from the terminal with sudo -u www-data ./scarga and open the browser, the template file is served fine and everything is OK. The command is executed from the /var/www/html/scarga.local/ directory.
When I run my application with sudo service scarga start, it says: open ./resources/views/index.html: no such file or directory
File with HTTP handler: https://pastebin.com/MU7YDAWV
The scarga.service file: https://pastebin.com/eBL3jJFx
Tree of project: https://pastebin.com/rFVa8A3P
Permissions for the index.html file:
-rwxr-xr-x 1 www-data www-data 3586 Jun 28 14:27 index.html
Why does this happen and how do I solve it?
Thank you!
You need to set the correct working directory in your service unit using WorkingDirectory=, presumably:
WorkingDirectory=/var/www/html/scarga.local
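For reference, a minimal sketch of what the unit file might look like with that directive (paths and names are assumed from the question, not taken from the actual scarga.service):
[Unit]
Description=scarga

[Service]
User=www-data
WorkingDirectory=/var/www/html/scarga.local
ExecStart=/var/www/html/scarga.local/scarga
Restart=on-failure

[Install]
WantedBy=multi-user.target
After editing the unit, reload and restart it: sudo systemctl daemon-reload && sudo systemctl restart scarga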
Using Rails 3.1 with the paperclip gem on nginx I get the following 500 error when attempting to upload images:
Permission denied - /webapps/my_rails_app/public/system
Observing the guidance offered elsewhere on this issue, I have modified my directory permissions such that
drwxr-xr-x 3 www-data www-data 4096 Mar 10 17:57 system
And, it would appear, this permission structure is maintained for all public sub-directories.
Yet, having restarted nginx, the error persists.
Do I need to modify nginx.conf, or take another crack at setting permissions on the affected directory?
When I've run into this in the past, it's generally because the Rails application was started by a user or process that is not part of the www-data group and is not the www-data user. I'd first check to confirm that www-data is in fact the user running your app.
This can be done using ps awwwx and perhaps some grep logic to find your app in the process stack.
I might suggest that you change the directory permissions to allow any member of the www-data group to write to the directory as well.
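A rough sketch of both steps (the grep pattern is just a placeholder for however your app appears in the process list):
ps awwwx | grep -i my_rails_app                                      # confirm the user/group the app runs as
sudo chown -R www-data:www-data /webapps/my_rails_app/public/system
sudo chmod -R g+w /webapps/my_rails_app/public/system                # let any member of www-data write there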
I am running an archival filesystem on a Windows server that does automatic offsite replication, which makes it an ideal place to host Mercurial repositories and be sure of their safety. This filesystem is Windows-only, so I have no choice but to use Windows as the host OS.
Setup:
Server ('ungoliant', internal to my network):
Windows 7 master repository host machine.
Cygwin ssh daemon.
hg 1.9.3
Created new repository "/cygdrive/j/mercurial/rcstudio" on Windows with "hg init".
Clients:
Mac OSX 10.7.4 running hg 2.1.2
FreeBSD 8.3 running hg 2.1.2
Problem on Clients (identical on Mac and FreeBSD):
$ hg clone ssh://cjp@ungoliant//cygdrive/j/mercurial/rcstudio
running ssh cjp@ungoliant 'hg -R /cygdrive/j/mercurial/rcstudio serve --stdio'
remote: abort: There is no Mercurial repository here (.hg not found)!
abort: no suitable response from remote hg!
I've confirmed that the path and the URI are correct. There absolutely is an ".hg" directory there:
$ ls -la
total 0
drwxr-xr-x 3 cjp staff 102 Jun 5 17:41 .
drwxr-xr-x 41 cjp staff 1394 Jun 5 16:30 ..
$ scp -r cjp@ungoliant:/cygdrive/j/mercurial/rcstudio/.hg .
$ ls -la
total 0
drwxr-xr-x 3 cjp staff 102 Jun 5 17:41 .
drwxr-xr-x 41 cjp staff 1394 Jun 5 16:30 ..
drwxr-xr-x 5 cjp staff 170 Jun 5 17:41 .hg
$ ls -la .hg
total 16
drwxr-xr-x 5 cjp staff 170 Jun 5 17:41 .
drwxr-xr-x 3 cjp staff 102 Jun 5 17:41 ..
-rw-r--r-- 1 cjp staff 57 Jun 5 17:41 00changelog.i
-rw-r--r-- 1 cjp staff 33 Jun 5 17:41 requires
drwxr-xr-x 2 cjp staff 68 Jun 5 17:41 store
I've found plenty of Stack Overflow questions where the issue was an improperly formatted ssh URI ... mine is formatted correctly and describes an absolute path.
I have confirmed all the following:
hg commands run fine both on the server and through ssh.
If I paste the absolute path I am able to confirm the existence of the .hg directory through ssh.
Other hg commands manually issued through ssh succeed (e.g.: ssh cjp@ungoliant 'cd /cygdrive/j/mercurial/rcstudio; hg diff;').
On the host machine I can clone locally from that same absolute path.
I'm stumped here. The Mercurial docs make it sound like 2.1.2 should be able to clone from 1.9.3 parents, so it doesn't appear to be a version conflict.
Would very much appreciate your help! Thanks!
What I have ended up with is a workaround and not an answer.
My own ignorance of remote ssh command processing led to a misunderstanding. If I ssh'd to the machine, "which hg" returned the cygwin-aware binary "/usr/bin/hg", but if I ran a remote command, "ssh cjp@ungoliant 'which hg'", it returned the path to a Windows binary I also have installed on that machine.
I played with my .bashrc file and could not get remote command execution to prioritize the cygwin binary. The PATH doesn't even include the Windows PATH entries, so I started hacking around the problem and eventually decided to stop fighting with it.
I have since abandoned the entire idea because I'm convinced that even if I get it to work it will be a delicate kludge and not at all worth the time I've spent on it. I now host my repositories on a BSD box and I have a cron on the Windows machine that pulls from that repository onto the archival filesystem once every 24 hours. It's simple and worked like a charm with zero problems.
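For completeness, the scheduled pull on the archival side is essentially a one-liner along these lines (run inside the clone, so it pulls from the default path recorded in .hg/hgrc):
cd /cygdrive/j/mercurial/rcstudio && hg pull -u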
I had the same issue and solved it by adding the --remotecmd option, like this:
hg clone --remotecmd /usr/bin/hg ssh://me@mywindowspc//cygdrive/m/hgrepo
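If you'd rather not pass the option on every command, the same setting can live in your ~/.hgrc via ui.remotecmd:
[ui]
remotecmd = /usr/bin/hg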
When you attempt an scp you would do the following:
scp username@ipaddr:/cygdrive/d/test/foo.txt ./myDestFolder
Unfortunately for hg that does not work. Instead try:
hg clone ssh://username@ipaddr/D:/testRepo