Redmine file upload error - ruby

I have a problem with file uploads in Redmine. The error is: 500 internal server error.
There is no information in the logs (apache log & production.log).
What I have tried:
Changed permissions on /usr/share/redmine/files to 777 (no result)
Changed permissions on /usr/share/redmine to 777 (no result)
Checked the owner of /usr/share/redmine (it is admin)
Checked PassengerDefaultUser (it is admin)
Result of "ps -ef | grep Passenger | grep -v grep" command:
root 27534 26624 0 17:23 ? 00:00:00 PassengerWatchdog
admin 27537 27534 0 17:23 ? 00:00:01 PassengerHelperAgent
nobody 27543 27534 0 17:23 ? 00:00:00 PassengerLoggingAgent
admin 27588 1 0 17:23 ? 00:00:04 Passenger RackApp: /usr/share/redmine
System: Apache/2.4.10 (Debian) with mod_fcgid/2.3.9 and Phusion Passenger 4.0.53; Redmine 2.5.2.stable
I have tried everything I can think of, with no result. :-(
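One way to get more detail on the 500 (a sketch, assuming the standard Redmine layout under /usr/share/redmine and that Passenger runs the app as admin) is to confirm the app user can actually write to the attachments directory and to watch the Rails log while retrying the upload:
sudo -u admin touch /usr/share/redmine/files/upload_test && echo writable   # can the app user write here?
tail -f /usr/share/redmine/log/production.log                               # retry the upload while this runs
If the touch fails, it is an ownership/permissions problem after all; if it succeeds and production.log stays silent, the 500 is probably being thrown before the request reaches Rails (e.g. by Passenger or Apache).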

Related

Writing to a mounted Windows share

I am using Ubuntu 20.04 and I am trying to write to a mounted Windows share. This is the command I am using to mount the share:
sudo mount.cifs //192.168.1.5/tv /mnt/tv -o username=xxxxxxxxxx,password=xxxxxxxxx,file_mode=0777,dir_mode=0777
I am able to view the contents of the Windows share in Ubuntu:
darren@homeserver:~$ ls -l /mnt/tv/
total 0
drwxrwxrwx 2 root root 0 Jun 30 15:33 '$RECYCLE.BIN'
drwxrwxrwx 2 root root 0 Jan 1 2019 MSOCache
drwxrwxrwx 2 root root 0 Apr 28 00:38 'Plex dance'
drwxrwxrwx 2 root root 0 Dec 30 2019 'System Volume Information'
drwxrwxrwx 2 root root 0 Jun 24 15:37 'TV Shows'
-rwxrwxrwx 1 root root 0 Jan 1 2019 desktop.ini
But if I try to create a test file I get this error:
[ Error writing lock file /mnt/tv/.test.swp: Permission denied ]
I have the Windows share permissions set to "Everyone".
Any thoughts?
Try this configuration:
-fstype=cifs,credentials=<fileWithCred>,vers=3.0,dir_mode=0777,file_mode=0777,noserverino ://<IP-Winshare>/Path
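That answer is in autofs map-entry format; translated back into a one-off mount.cifs call, the relevant additions over the original command are vers=3.0, noserverino, and a credentials file. A minimal sketch (the credentials file path and the uid/gid mapping to the darren account are assumptions):
sudo mount.cifs //192.168.1.5/tv /mnt/tv -o credentials=/root/.smbcredentials,vers=3.0,uid=darren,gid=darren,file_mode=0777,dir_mode=0777,noserverino
With file_mode/dir_mode already at 0777, the Permission denied is coming back from the Windows side, so forcing a newer SMB dialect and authenticating as an account that has write rights on the share is usually what changes the outcome.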

HDFS NFS locations using weird numerical username values for directory permissions

I'm seeing nonsense values for user names in folder permissions on NFS-mounted HDFS locations, while the HDFS locations themselves (using Hortonworks HDP 3.1) appear fine. E.g.
➜ ~ ls -lh /nfs_mount_root/user
total 6.5K
drwx------. 3 accumulo hdfs 96 Jul 19 13:53 accumulo
drwxr-xr-x. 3 92668751 hadoop 96 Jul 25 15:17 admin
drwxrwx---. 3 ambari-qa hdfs 96 Jul 19 13:54 ambari-qa
drwxr-xr-x. 3 druid hadoop 96 Jul 19 13:53 druid
drwxr-xr-x. 2 hbase hdfs 64 Jul 19 13:50 hbase
drwx------. 5 hdfs hdfs 160 Aug 26 10:41 hdfs
drwxr-xr-x. 4 hive hdfs 128 Aug 26 10:24 hive
drwxr-xr-x. 5 h_etl hdfs 160 Aug 9 14:54 h_etl
drwxr-xr-x. 3 108146 hdfs 96 Aug 1 15:43 ml1
drwxrwxr-x. 3 oozie hdfs 96 Jul 19 13:56 oozie
drwxr-xr-x. 3 882121447 hdfs 96 Aug 5 10:56 q_etl
drwxrwxr-x. 2 spark hdfs 64 Jul 19 13:57 spark
drwxr-xr-x. 6 zeppelin hdfs 192 Aug 23 15:45 zeppelin
➜ ~ hadoop fs -ls /user
Found 13 items
drwx------ - accumulo hdfs 0 2019-07-19 13:53 /user/accumulo
drwxr-xr-x - admin hadoop 0 2019-07-25 15:17 /user/admin
drwxrwx--- - ambari-qa hdfs 0 2019-07-19 13:54 /user/ambari-qa
drwxr-xr-x - druid hadoop 0 2019-07-19 13:53 /user/druid
drwxr-xr-x - hbase hdfs 0 2019-07-19 13:50 /user/hbase
drwx------ - hdfs hdfs 0 2019-08-26 10:41 /user/hdfs
drwxr-xr-x - hive hdfs 0 2019-08-26 10:24 /user/hive
drwxr-xr-x - h_etl hdfs 0 2019-08-09 14:54 /user/h_etl
drwxr-xr-x - ml1 hdfs 0 2019-08-01 15:43 /user/ml1
drwxrwxr-x - oozie hdfs 0 2019-07-19 13:56 /user/oozie
drwxr-xr-x - q_etl hdfs 0 2019-08-05 10:56 /user/q_etl
drwxrwxr-x - spark hdfs 0 2019-07-19 13:57 /user/spark
drwxr-xr-x - zeppelin hdfs 0 2019-08-23 15:45 /user/zeppelin
Notice that users ml1 and q_etl show numerical user values when running ls on the NFS locations, rather than their user names.
Even doing something like...
[hdfs@HW04 ml1]$ hadoop fs -chown ml1 /user/ml1
does not change the NFS permissions. Even more annoying, when trying to change the NFS mount permissions as root, we see
[root@HW04 ml1]# chown ml1 /nfs_mount_root/user/ml1
chown: changing ownership of ‘/nfs_mount_root/user/ml1’: Permission denied
This causes real problems, since the differing uid means that I can't write to these dirs even as the "correct" user. Not sure what to make of this. Does anyone with more Hadoop experience have debugging suggestions or fixes?
UPDATE:
Doing a bit more testing / debugging, found that the rules appear to be...
If the NFS server node has no uid (or gid?) that matches the uid of the user on the node accessing the NFS mount, we get the weird uid values as seen here.
If there is a uid associated with the requesting user's name on that node, then that is the user we see assigned to the location when accessing via NFS (even if that uid on the NFS server node does not actually belong to the requesting user), e.g.
[root@HW01 ~]# clush -ab id ml1
---------------
HW[01,04] (2)
---------------
uid=1025(ml1) gid=1025(ml1) groups=1025(ml1)
---------------
HW[02-03] (2)
---------------
uid=1027(ml1) gid=1027(ml1) groups=1027(ml1)
---------------
HW05
---------------
uid=1026(ml1) gid=1026(ml1) groups=1026(ml1)
[root@HW01 ~]# exit
logout
Connection to hw01 closed.
➜ ~ ls -lh /hdpnfs/user
total 6.5K
...
drwxr-xr-x. 6 atlas hdfs 192 Aug 27 12:04 ml1
...
➜ ~ hadoop fs -ls /user
Found 13 items
...
drwxr-xr-x - ml1 hdfs 0 2019-08-27 12:04 /user/ml1
...
[root@HW01 ~]# clush -ab id atlas
---------------
HW[01,04] (2)
---------------
uid=1027(atlas) gid=1005(hadoop) groups=1005(hadoop)
---------------
HW[02-03] (2)
---------------
uid=1024(atlas) gid=1005(hadoop) groups=1005(hadoop)
---------------
HW05
---------------
uid=1005(atlas) gid=1006(hadoop) groups=1006(hadoop)
If you are wondering why I have users on the cluster with varying uids across the cluster nodes, see the problem posted here: How to properly change uid for HDP / ambari-created user? (note that these odd uid settings for hadoop service users were set up by Ambari by default).
After talking with someone more knowledgeable in HDP Hadoop, I found that the problem is that when Ambari was set up and run to initially install the Hadoop cluster, there may have been other preexisting users on the designated cluster nodes.
Ambari creates its various service users by giving each one the next available UID from a node's block of user UIDs. However, prior to installing Ambari and HDP on the nodes, I created some users on the to-be namenode (and others) in order to do some initial maintenance checks and tests. I should have just done this as root. Adding these extra users offset the UID counter on those nodes, so as Ambari created users and incremented the UIDs, it started from different counter values on different nodes. Thus, the UIDs did not sync across the cluster and caused problems with HDFS NFS.
To fix this, I...
Used Ambari to stop all running HDP services
Went to Service Accounts in Ambari and copied all of the expected service user names
For each user, ran something like id <service username> to get its group(s). For service groups (which may have multiple members), you can do something like grep 'group-name-here' /etc/group. I recommend doing it this way, as the Ambari docs on default users and groups do not have some of the info that you can get here.
Used userdel and groupdel to remove all the Ambari service users and groups
Then recreated all the groups across the cluster
Then recreated all the users across the cluster (you may need to specify the UID if some nodes have users that others don't)
Restarted the HDP services (hopefully everything still runs as if nothing happened, since HDP should be looking up users by their literal name strings, not their UIDs)
For the last few steps, you can use something like clustershell, e.g.
# remove user
$ clush -ab userdel <service username>
# check that the UID you want to use is actually available on all nodes
$ clush -ab id <some specific UID you want to use>
# assign that UID to a new service user
$ clush -ab useradd --uid <the specific UID> --gid <groupname> <service username>
To get the next available UID and GID on each node (so values that are free everywhere can be chosen), I used...
# for UID
getent passwd | awk -F: '($3>1000) && ($3<10000) && ($3>maxuid) { maxuid=$3; } END { print maxuid+1; }'
# for GID (scan the group database so all existing groups are counted, not just primary groups in passwd)
getent group | awk -F: '($3>1000) && ($3<10000) && ($3>maxgid) { maxgid=$3; } END { print maxgid+1; }'
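Since the value you pick has to be free on every node, one approach is to push the same snippet through clush and take the highest number reported. A sketch (the escaping assumes a bash shell on the calling node):
# report the next free UID in the 1000-10000 range on every node
clush -ab "getent passwd | awk -F: '(\$3>1000) && (\$3<10000) && (\$3>m) {m=\$3} END {print m+1}'"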
Ambari also creates /home dirs for some of the users. Once you are done recreating the users, you will need to fix the permissions on those dirs (something like clush can be used there as well).
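For example, a minimal sketch of that last step (the /home path and the group are placeholders, not values taken from Ambari):
# restore ownership of a recreated service user's home dir on every node
clush -ab chown -R <service username>:<groupname> /home/<service username>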
* Note that this was a huge pain, and you would need to manually correct the UIDs of users whenever you added another cluster node. I did this for a test cluster, but for production (or even a larger test) you should just use Kerberos or SSSD + Active Directory.

session.save_path incorrect in magento + memcache for session

I am trying to configure Magento to use memcache for sessions. I have installed memcached and also php5-memcache. I have also added "extension=memcache.so" in memcache.ini.
I have made sure the memcached instance is running on localhost, port 11213. However, when I try to log in to the Magento admin I get an error -
Warning: Unknown: Failed to write session data (memcache). Please verify that the current setting of session.save_path is correct (tcp://127.0.0.1:11213?persistent=0&weight=2&timeout=10&retry_interval=10) in Unknown on line 0
The following is the memcache configuration in local.xml -
<session_save><![CDATA[memcache]]></session_save>
<session_save_path><![CDATA[tcp://127.0.0.1:11213?persistent=0&weight=2&timeout=10&retry_interval=10]]></session_save_path>
The following is the grep output for memcached:
www-data 1329 1 0 08:13 ? 00:00:00 /usr/bin/memcached -d -m 64 -p 11213 -u www-data -l 127.0.0.1
www-data 1511 1 0 08:18 ? 00:00:00 /usr/bin/memcached -d -m 64 -p 11211 -u www-data -l 127.0.0.1
www-data 1518 1 0 08:18 ? 00:00:00 /usr/bin/memcached -d -m 64 -p 11212 -u www-data -l 127.0.0.1
I have been fiddling with this for a couple of days now and I am not sure what the issue is. Any help is appreciated.
Thanks,
G
Please note there is a difference between memcache and memcached. I’ve found that the Magento sessions integration expects you to use this:
<session_save><![CDATA[memcached]]></session_save>
You should install the PHP memcached libraries, as well.
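On the Debian/Ubuntu, PHP 5 era stack described in the question, that would look something like this (the package name is an assumption for that distribution):
sudo apt-get install php5-memcached   # the memcached extension, distinct from php5-memcache
sudo service apache2 restart          # reload PHP so the new extension is picked up
Note that the memcached extension typically expects a plain host:port save path (e.g. 127.0.0.1:11213) rather than the tcp://...?persistent=... URL used by the memcache extension.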

TCPServer Error: Address already in use - bind(2)

Jekyll was working fine for me a few weeks back, but now all of a sudden it gives me the following error:
TCPServer Error: Address already in use - bind(2)
INFO WEBrick::HTTPServer#start: pid=7300 port=4000
% lsof -i :4000
<fetches nothing>
Even though nothing is running on the port. Below are the details:
% jekyll --version
Jekyll 0.11.2
% where jekyll
/home/bhaarat/.rvm/gems/ruby-1.9.2-p290/bin/jekyll
/usr/bin/jekyll
% ruby --version
ruby 1.9.2p290 (2011-07-09 revision 32553) [i686-linux]
% rvm --version
rvm 1.10.0
Here is the output
% jekyll --server
Configuration from /home/bhaarat/blog/omnipresent.github.com/_config.yml
Auto-regenerating enabled: /home/bhaarat/blog/omnipresent.github.com -> /home/bhaarat/blog/omnipresent.github.com/_site
[2012-04-21 13:46:40] regeneration: 38 files changed
[2012-04-21 13:46:40] INFO WEBrick 1.3.1
[2012-04-21 13:46:40] INFO ruby 1.9.2 (2011-07-09) [i686-linux]
[2012-04-21 13:46:40] WARN TCPServer Error: Address already in use - bind(2)
[2012-04-21 13:46:40] INFO WEBrick::HTTPServer#start: pid=7382 port=4000
I know the address isn't in use and jekyll is probably breaking for some other reason but throwing that error. What are my options? I've tried re-installing as well.
Type this in your terminal to find out the PID of the process that's using port 3000 (substitute 4000 for Jekyll's default):
$ lsof -wni tcp:3000
Then, use the number in the PID column to kill the process:
$ kill -9 PID
I was not qualified to post a comment, so I added a new answer.
I encountered this problem on Mac OS X 10.10.3, and I had never installed/used Jekyll before. I was not able to start the jekyll server on its default port, 4000. The reason was that the port was the same one NoMachine used. With
$ sudo lsof -wni tcp:4000
Note: Running this command without sudo will have no output.
I saw this output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nxd 449 nx 3u IPv4 0x8d22************ 0t0 TCP *:terabase (LISTEN)
nxd 449 nx 4u IPv6 0x8d22************ 0t0 TCP *:terabase (LISTEN)
The port 4000 was occupied by nxd, which was the process started by NoMachine. And
$ sudo kill -9 449
would not work, because NoMachine's nxd process would keep restarting, with a new PID.
Therefore, I had to either:
Change my jekyll server port in the site's _config.yml to another spare one. I appended the line below to _config.yml and it worked.
port: 3000 # change server port to 3000
or
Change NoMachine's default nxd port, or uninstall NoMachine.
Ctrl-Z doesn't terminate a program, but rather suspends it and sends it to the background. You can resume the program with the "fg" command. To actually terminate it, use Ctrl-C.
The actual error message seems to be bogus and can be ignored. I am getting the same error message "address in use" but jekyll works fine anyway at the expected port.
I met this problem recently.
I tried all the methods mentioned above, and even restarted my computer, but still couldn't solve it!!! Then I removed jekyll and installed a new version, and it just worked:
gem uninstall jekyll && gem install jekyll (you may need superuser privileges).
If you really get annoyed by similar bugs, this method is worth a try...
Based on the top answer, I came up with a simple one-liner:
lsof -wnti tcp:3000 | xargs kill -9
The -t flag makes lsof print only the PIDs of the processes listening on the given port (3000 here), and xargs passes them straight to kill -9.
Check that you don't have another terminal open where you are already running a server.
If that is the case, do a CTRL-C to shut down the server, and that will free the port/address.
First you need to find the PID of the process that's using port 3000:
$ ps -ef
Output like this:
1003 4953 2614 0 08:51 pts/0 00:00:00 -bash
1003 5634 1 0 08:56 pts/0 00:00:00 spring server | moviestore | started 2 hours ago
1003 5637 5634 0 08:56 ? 00:00:01 spring app | moviestore | started 2 hours ago | development mode
1003 6078 4953 0 09:03 pts/0 00:00:03 puma 3.6.0 (tcp://localhost:3000) [moviestore]
1003 6117 2614 0 09:03 pts/1 00:00:00 -bash
root 6520 2 0 09:57 ? 00:00:00 [kworker/u8:2]
root 6936 1225 0 11:09 ? 00:00:00 [lightdm] <defunct>
1003 7084 1 0 11:09 ? 00:00:00 /usr/bin/python /usr/share/apport/apport-gtk
1003 7475 1 0 11:10 ? 00:00:00 /usr/bin/python /usr/share/apport/apport-gtk
root 8739 1225 1 11:29 tty8 00:00:11 /usr/bin/X :1 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch
root 8853 1225 0 11:29 ? 00:00:00 lightdm --session-child 13 22
1002 8943 1 0 11:30 ? 00:00:00 /usr/bin/gnome-keyring-daemon --daemonize --login
1002 8954 8853 0 11:30 ? 00:00:00 gnome-session --session=ubuntu
1002 8992 8954 0 11:30 ? 00:00:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session --session=ubuntu
1002 8995 1 0 11:30 ? 00:00:00 /usr/bin/dbus-launch --exit-with-session gnome-session --session=ubuntu
1002 8996 1 0 11:30 ? 00:00:00 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
1002 9007 8954 0 11:30 ? 00:00:00 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
1002 9015 1 0 11:30 ? 00:00:00 /usr/lib/gvfs/gvfsd
1002 9018 8954 1 11:30 ? 00:00:07 compiz
1002 9021 1 0 11:30 ? 00:00:00 /usr/lib/x86_64-linux-gnu/gconf/gconfd-2
1002 9028 8954 0 11:30 ? 00:00:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1
1002 9029 8954 0 11:30 ? 00:00:01 nautilus -n
1002 9030 8954 0 11:30 ? 00:00:00 /usr/lib/gnome-settings-daemon/gnome-fallback-mount-helper
1002 9031 8954 0 11:30 ? 00:00:00 nm-applet
1002 9032 8954 0 11:30 ? 00:00:02 /opt/mTrac/mTrac
1002 9033 8954 0 11:30 ? 00:00:00 bluetooth-applet
1002 9045 9032 0 11:30 ? 00:00:00 /opt/mTrac/mTrac --type=zygote --no-sandbox
1002 9050 1 0 11:30 ? 00:00:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor
1002 9054 1 0 11:30 ? 00:00:00 /usr/bin/pulseaudio --start --log-target=syslog
1002 9057 1 0 11:30 ? 00:00:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
1002 9062 1 0 11:30 ? 00:00:00 /usr/lib/gvfs/gvfs-afc-volume-monitor
Here you can see:
1003 6078 4953 0 09:03 pts/0 00:00:03 puma 3.6.0 (tcp://localhost:3000) [moviestore]
localhost:3000 has PID 6078.
Kill that process with:
$ sudo kill 6078
Then run:
$ rails s
We can use the fuser command:
fuser -k 3000/tcp
Workaround:
In the _site directory, run: python -m SimpleHTTPServer 8080
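On Python 3 the module was renamed, so the same workaround looks like this:
cd _site && python3 -m http.server 8080   # Python 3 equivalent of SimpleHTTPServer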

OSX Server: apache2 running but “Problem loading page”

I have set up OS X Server 10.6 with all updates installed and started apache2, which is running:
sudo apachectl graceful
I see in /var/log/apache2/error_log:
[Fri Dec 17 10:11:49 2010] [notice] Apache/2.2.15 (Unix) configured -- resuming normal operations
Also
ps -ef | grep httpd
shows several processes:
0 49388 1 0 0:00.05 ?? 0:00.07 /usr/sbin/httpd -D FOREGROUND
70 49389 49388 0 0:00.00 ?? 0:00.00 /usr/sbin/httpd -D FOREGROUND
70 49390 49388 0 0:00.00 ?? 0:00.00 /usr/sbin/httpd -D FOREGROUND
70 49391 49388 0 0:00.00 ?? 0:00.00 /usr/sbin/httpd -D FOREGROUND
70 49392 49388 0 0:00.00 ?? 0:00.00 /usr/sbin/httpd -D FOREGROUND
...
In httpd.conf I edited DocumentRoot:
ServerName bioinfo.mni.fh-giessen.de:80
DocumentRoot "/Volumes/ServerHD2/Web_Documents"
ErrorLog "/var/log/apache2/error_log"
<Directory "/Volumes/ServerHD2/Web_Documents">
Order Allow,Deny
Allow from All
</Directory>
Syntax is OK:
apachectl configtest
Syntax OK
Yet I get timeouts at http://bioinfo.mni.fh-giessen.de:
Problem loading page
Any clue?
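Since these are timeouts rather than 403/404 errors, one quick check over ssh is to compare a request to localhost with one to the public hostname; a sketch:
curl -v http://localhost/                       # does Apache answer at all on this box?
curl -v http://bioinfo.mni.fh-giessen.de/       # does the public name and route reach the same Apache?
If the first works and the second times out, the firewall or DNS is more likely at fault than httpd.conf.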
Are you sure you edited the right file? OS X Server has a way of not keeping to standards. The actual configuration entries are not in /etc/httpd/httpd.conf, but in subdirectories of /etc/apache2/sites. Check out this FAQ.
I would recommend using the Server Admin utility to set up the server. It will be a lot easier and quicker than trying to get the configuration right manually. You can always add or change rules later, once the site is up and running.
No direct local GUI access is possible for a few days, and no access via Remote Desktop either. My only way in is ssh right now, but I will be happy to try the Server Admin utility in a few days. For now I am restricted to the command line. Yes, I edited both /etc/apache2/httpd.conf and /etc/apache2/sites/0000_any_80_.conf, to no avail. Is
sudo serveradmin fullstatus web
in any way instructive:
web:readWriteSettingsVersion = 1
web:totalKBytes = 0
web:emailRulesRunning = no
web:boundToKerberos = yes
web:teamsRunning = yes
web:postfixRunning = no
web:servicePortsRestrictionInfo = _empty_array
web:health = _empty_dictionary
web:currentThroughput = 0
web:passwordResetRunning = no
web:ApacheMode = 2
web:statusMessage = ""
web:apacheVersion = "Unknown"
web:state = "RUNNING"
web:setStateVersion = 1
web:apacheState = "RUNNING"
web:proxyState = "STOPPED"
web:htCacheCleanRunning = no
web:calendarRunning = yes
web:servicePortsAreRestricted = "YES"
web:currentRequestsBy10 = 0
web:logPaths:logPathsArray = _empty_array
web:totalRequests = 0
web:startedTime = ""
?
