Laravel permission issue with caching

I use CentOS 7.
I have an issue with OS file permissions. All my Supervisor processes run as the root user.
[root@ip-172-31-9-100 example.com]# ll storage/framework/cache/data/
drwxr-xr-x 3 apache apache 16 Sep 22 11:29 00
drwxr-xr-x 5 apache apache 36 Sep 26 10:27 02
drwxr-xr-x 3 apache apache 16 Sep 23 15:14 03
drwxr-xr-x 3 apache apache 16 Sep 22 11:30 04
drwxr-xr-x 3 apache apache 16 Sep 22 12:55 05
drwxr-xr-x 3 root root 16 Sep 22 10:47 06
drwxr-xr-x 3 apache apache 16 Sep 23 16:39 08
My supervisor configuration:
[program:api-horizon]
process_name=%(program_name)s
command=php /var/www/html/example.com/artisan horizon
autostart=true
autorestart=true
user=root
redirect_stderr=true
stopwaitsecs=86400
The apache:apache entries are created by the Laravel project (web requests) and the root:root entry is created by the Supervisor processes. When the Laravel project tries to use a cache file owned by root:root, I get a permission error:
[2021-09-23 09:00:05] production.ERROR: Unable to create lockable
file:
/var/www/html/example.com/storage/framework/cache/data/e9/a0/e9a039230d7835a69038c5a295dc7bfa88213125.
Please ensure you have permission to create files in this location.
{"userId":605,"exception":"[object] (Exception(code: 0): Unable to
create lockable file:
/var/www/html/example.com/storage/framework/cache/data/e9/a0/e9a039230d7835a69038c5a295dc7bfa88213125.
Please ensure you have permission to create files in this location. at
/var/www/html/example.com/vendor/laravel/framework/src/Illuminate/Filesystem/LockableFile.php:73)
Please help me fix this permission issue. Thanks.

Update your configuration to run as the apache user:
[program:api-horizon]
process_name=%(program_name)s
command=php /var/www/html/example.com/artisan horizon
autostart=true
autorestart=true
user=apache ; instead of root
redirect_stderr=true
stopwaitsecs=86400
Then restart supervisor:
sudo supervisorctl reread && sudo supervisorctl update && sudo supervisorctl restart all
This way, Supervisor will run the process as the apache user instead of root. The files it creates will be readable and writable by apache, which should fix your problem.
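To double-check which user the process is actually running as after the restart (program and command names taken from the config above):
sudo supervisorctl status api-horizon
ps aux | grep "artisan horizon"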
Technically, the role of Supervisor is to ensure a process is always running. You have to configure it to run the command the same way it would run if you were executing it manually.
In your case, php /var/www/html/example.com/artisan horizon should be executed by the user who owns the project, which is apache.
If you don't, then everything Horizon does will be done by root.
Horizon will work (root can do anything), but when your apache user tries to read data or write to directories that root created... you guessed it: permission denied.
Don't forget to fix your permissions, because the root-owned directories and files will stay and keep causing errors even after you change the configuration.
Execute sudo chown -R apache:apache /var/www/html/example.com/ so everything inside this directory is owned by apache again.
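If you prefer to touch only the writable paths, a narrower sketch (these are the standard Laravel locations; adjust to your project):
sudo chown -R apache:apache /var/www/html/example.com/storage /var/www/html/example.com/bootstrap/cache
php /var/www/html/example.com/artisan cache:clear
The cache:clear call is optional; it simply drops the stale root-owned cache entries so they are regenerated with the right owner.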

Related

HDFS fails to start with Hadoop 3.2 : bash v3.2+ is required

I'm building a small Hadoop cluster composed of 2 nodes: 1 master + 1 worker. I'm using the latest version of Hadoop (3.2), and everything is executed by the root user. During the installation I was able to run hdfs namenode -format. The next step is to start the HDFS daemon with start-dfs.sh.
$ start-dfs.sh
Starting namenodes on [master]
bash v3.2+ is required. Sorry.
Starting datanodes
bash v3.2+ is required. Sorry.
Starting secondary namenodes [master]
bash v3.2+ is required. Sorry.
Here are the logs generated in the journal:
$ journalctl --since "1 min ago"
-- Logs begin at Thu 2019-08-29 11:12:27 CEST, end at Thu 2019-08-29 11:46:40 CEST. --
Aug 29 11:46:40 master su[3329]: (to root) root on pts/0
Aug 29 11:46:40 master su[3329]: pam_unix(su-l:session): session opened for user root by root(uid=0)
Aug 29 11:46:40 master su[3329]: pam_unix(su-l:session): session closed for user root
Aug 29 11:46:40 master su[3334]: (to root) root on pts/0
Aug 29 11:46:40 master su[3334]: pam_unix(su-l:session): session opened for user root by root(uid=0)
Aug 29 11:46:40 master su[3334]: pam_unix(su-l:session): session closed for user root
Aug 29 11:46:40 master su[3389]: (to root) root on pts/0
Aug 29 11:46:40 master su[3389]: pam_unix(su-l:session): session opened for user root by root(uid=0)
Aug 29 11:46:40 master su[3389]: pam_unix(su-l:session): session closed for user root
As I'm using Zsh (with Oh-my-Zsh), I logged into a bash console to give it a try. Sadly, I got the same result. In fact, this error happens for all sbin/start-*.sh scripts. However, the hadoop and yarn commands work like a charm.
Since I didn't find much information on this error on the Internet, here I am. I'd be glad to have any advice!
Other technical details
Operating system info:
$ lsb_release -d
Description: Debian GNU/Linux 10 (buster)
$ uname -srm
Linux 4.19.0-5-amd64 x86_64
Available Java versions (tried with both):
$ update-alternatives --config java
There are 2 choices for the alternative java (providing /usr/bin/java).
Selection Path Priority Status
------------------------------------------------------------
0 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 auto mode
* 1 /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java 1081 manual mode
2 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 manual mode
Some ENV variables you might be interested in:
$ env
USER=root
LOGNAME=root
HOME=/root
PATH=/root/bin:/usr/local/bin:/usr/local/hadoop/bin:/usr/local/hadoop/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/usr/bin/zsh
TERM=rxvt-unicode
JAVA_HOME=/usr/lib/jvm/adoptopenjdk-8-hotspot-amd64
HADOOP_HOME=/usr/local/hadoop
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
ZSH=/root/.oh-my-zsh
Output of the Hadoop executable:
$ hadoop version
Hadoop 3.2.0
Source code repository https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
Compiled by sunilg on 2019-01-08T06:08Z
Compiled with protoc 2.5.0
From source with checksum d3f0795ed0d9dc378e2c785d3668f39
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0.jar
My Zsh and Bash installation:
$ zsh --version
zsh 5.7.1 (x86_64-debian-linux-gnu)
$ bash --version
GNU bash, version 5.0.3(1)-release (x86_64-pc-linux-gnu)
# only available in a console using bash
$ echo ${BASH_VERSINFO[@]}
5 0 3 1 release x86_64-pc-linux-gnu
TL;DR: use a different user (e.g. hadoop) instead of root.
I found a solution, but not a deep understanding of what is going on. Sad as that is, here is the solution I found:
Running with root user:
$ start-dfs.sh
Starting namenodes on [master]
bash v3.2+ is required. Sorry.
Starting datanodes
bash v3.2+ is required. Sorry.
Starting secondary namenodes [master_bis]
bash v3.2+ is required. Sorry
Then I created a hadoop user and gave this user privileges on the Hadoop installation (R/W access). After logging in with this new user, I get the following output for the command that was causing me trouble:
$ start-dfs.sh
Starting namenodes on [master]
Starting datanodes
Starting secondary namenodes [master_bis]
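For reference, the user setup was along these lines (the exact commands are not in the original post; HADOOP_HOME is taken from the env output above):
sudo useradd -m -s /bin/bash hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop
su - hadoop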
Moreover, I noticed that processes created by start-yarn.sh were not listed in the output of jps while using Java 11. Switching to Java 8 solved my problem (don't forget to update all $JAVA_HOME variables, both in /etc/environment and hadoop-env.sh).
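Concretely, that means pointing both files at the Java 8 installation (the path below is the one from the update-alternatives output above):
# /etc/environment
JAVA_HOME=/usr/lib/jvm/adoptopenjdk-8-hotspot-amd64
# $HADOOP_CONF_DIR/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/adoptopenjdk-8-hotspot-amd64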
Success \o/. However, I'd be glad to understand why the root user cannot do this. I know using root is a bad habit, but in an experimental environment we are not interested in a clean, close-to-production setup. Any information about this would be appreciated :).
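For anyone curious, one way to see where the message comes from is to search Hadoop's shell libraries for it (a hedged suggestion; paths taken from HADOOP_HOME above):
grep -rn "bash v3.2" /usr/local/hadoop/libexec /usr/local/hadoop/sbin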
Try
chsh -s /bin/bash
to change root's default shell back to bash.
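After logging in again, you can confirm the change (standard shell, nothing Hadoop-specific):
grep '^root:' /etc/passwd
echo $SHELL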

Hiveserver2: Failed to create/change scratchdir permissions to 777: Could not create FileClient

I'm running a MapR Community Edition Hadoop cluster (M3).
Unfortunately, the HiveServer2 service crashes and, according to the log file in /opt/mapr/hive/hive-0.13/logs/mapr/hive.log, there's a problem with permissions on the scratch directory:
2015-02-24 21:21:08,187 WARN [main]: server.HiveServer2 (HiveServer2.java:init(74)) - Failed to create/change scratchdir permissions to 777: Could not create FileClient java.io.IOException: Could not create FileClient
I checked the settings for the scratch directory using hive -e 'set;' | grep scratch:
hive.exec.scratchdir=/user/mapr/tmp/hive/
hive.scratch.dir.permission=700
I notice that hive.scratch.dir.permission is set to 700 and the error message suggests that it wants to change this to 777. However, according to the filesystem, /mapr/my.cluster.com/user/mapr/tmp has 777 permissions and belongs to the mapr user.
mapr@hadoop01:/mapr/my.cluster.com/user/mapr/tmp$ ls -al
total 2
drwxr-xr-x 3 mapr mapr 1 Feb 22 10:39 .
drwxr-xr-x 5 mapr mapr 3 Feb 24 08:40 ..
drwxrwxrwx 56 mapr mapr 54 Feb 23 10:20 hive
Judging by the filesystem permissions, I would expect the mapr user to be able to do whatever it wants with this folder, so I don't understand the error message.
I'm curious whether anyone has seen this before and, if so, how you fixed it.
Update:
I had a look at the source code and noticed some relevant comments just before the warning:
// When impersonation is enabled, we need to have "777" permission on root scratchdir, because
// query specific scratch directories under root scratchdir are created by impersonated user and
// if permissions are not "777" the query fails with permission denied error.
I set the following properties in hive-site.xml:
<property>
<name>hive.scratch.dir.permission</name>
<value>777</value>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive/</value>
</property>
... and created the /tmp/hive/ folder in HDFS with 777 permissions:
mapr@hadoop01:~$ hadoop fs -ls -d /tmp/hive
drwxrwxrwx - mapr mapr 0 2015-02-27 08:38 /tmp/hive
Although this looked promising, I still got the same warning in hive.log.
Update the permissions of your /tmp/hive HDFS directory to 777:
hadoop fs -chmod 777 /tmp/hive
Or remove /tmp/hive entirely; the temporary directories will be recreated as needed anyway:
hadoop fs -rm -r /tmp/hive;
rm -rf /tmp/hive
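If query-specific subdirectories already exist under /tmp/hive, a recursive variant plus a quick check may also help (a sketch, not from the original answer):
hadoop fs -chmod -R 777 /tmp/hive
hadoop fs -ls -d /tmp/hive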

Restarting Amazon EMR cluster

I'm using Amazon EMR (Hadoop 2 / AMI version 3.3.1) and I would like to change the default configuration (for example, the replication factor). For the change to take effect I need to restart the cluster, and that's where my problems start.
How do I do it? The script I found at ./.versions/2.4.0/sbin/stop-dfs.sh doesn't work. The slaves file ./.versions/2.4.0/etc/hadoop/slaves is empty anyway. There are some scripts in init.d:
$ ls -l /etc/init.d/hadoop-*
-rwxr-xr-x 1 root root 477 Nov 8 02:19 /etc/init.d/hadoop-datanode
-rwxr-xr-x 1 root root 788 Nov 8 02:19 /etc/init.d/hadoop-httpfs
-rwxr-xr-x 1 root root 481 Nov 8 02:19 /etc/init.d/hadoop-jobtracker
-rwxr-xr-x 1 root root 477 Nov 8 02:19 /etc/init.d/hadoop-namenode
-rwxr-xr-x 1 root root 1632 Oct 27 21:12 /etc/init.d/hadoop-state-pusher-control
-rwxr-xr-x 1 root root 484 Nov 8 02:19 /etc/init.d/hadoop-tasktracker
But if I, for example, stop the namenode, something starts it again immediately. I looked for documentation; Amazon provides a 600-page user guide, but it's more about how to use the cluster than about maintenance.
EMR 3.x.x uses traditional SysVinit scripts for managing services; ls /etc/init.d/ lists them. You can restart a service like so:
sudo service hadoop-namenode restart
But if I for example stop the namenode something will start it again
immediately.
However, EMR also has a process called service-nanny that monitors Hadoop-related services and ensures all of them are always running. This is the mystery process that brings the namenode back.
So, to truly restart a service, you need to stop service-nanny for a while and then restart/stop the necessary processes. Once you bring service-nanny back, it will resume its job. You might run commands like:
sudo service service-nanny stop
sudo service hadoop-namenode restart
sudo service service-nanny start
Note that this behavior is different on 4.x.x and 5.x.x AMIs, where upstart is used to stop/start applications and service-nanny no longer restarts them.
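On those newer releases the equivalent would look roughly like the following (the exact service name varies by release and application; initctl list | grep hadoop shows what is available on your cluster):
sudo stop hadoop-hdfs-namenode
sudo start hadoop-hdfs-namenode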

Cloudera's CDH4 WordCount hadoop tutorial - issues

I am following this tutorial:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/HadoopTutorial/CDH4/Hadoop-Tutorial/ht_topic_5_2.html
It says the following:
javac -cp classpath -d wordcount_classes WordCount.java
where classpath is:
CDH4 - /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/*
CDH3 - /usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u4-core.jar
I have downloaded the "cloudera-quickstart-demo-vm-4.2.0-vmware" image.
Running as user cloudera.
[cloudera@localhost wordcount]$ javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* -d wordcount_classes WordCount.java
incorrect classpath: /usr/lib/hadoop/*
incorrect classpath: /usr/lib/hadoop/client-0.20/*
----------
1. ERROR in WordCount.java (at line 8)
import org.apache.hadoop.fs.Path;
^^^^^^^^^^
When checking the classpath folder:
[cloudera@localhost wordcount]$ ls -l /usr/lib/hadoop
total 3500
drwxr-xr-x. 2 root root 4096 Apr 22 14:37 bin
drwxr-xr-x. 2 root root 4096 Apr 22 14:33 client
drwxr-xr-x. 2 root root 4096 Apr 22 14:33 client-0.20
drwxr-xr-x. 2 root root 4096 Apr 22 14:36 cloudera
drwxr-xr-x. 2 root root 4096 Apr 22 14:30 etc
-rw-r--r--. 1 root root 16536 Feb 15 14:24 hadoop-annotations-2.0.0-cdh4.2.0.jar
lrwxrwxrwx. 1 root root 37 Apr 22 14:30 hadoop-annotations.jar -> hadoop-annotations-2.0.0-cdh4.2.0.jar
-rw-r--r--. 1 root root 46855 Feb 15 14:24 hadoop-auth-2.0.0-cdh4.2.0.jar
lrwxrwxrwx. 1 root root 30 Apr 22 14:30 hadoop-auth.jar -> hadoop-auth-2.0.0-cdh4.2.0.jar
-rw-r--r--. 1 root root 2266171 Feb 15 14:24 hadoop-common-2.0.0-cdh4.2.0.jar
-rw-r--r--. 1 root root 1212163 Feb 15 14:24 hadoop-common-2.0.0-cdh4.2.0-tests.jar
lrwxrwxrwx. 1 root root 32 Apr 22 14:30 hadoop-common.jar -> hadoop-common-2.0.0-cdh4.2.0.jar
drwxr-xr-x. 3 root root 4096 Apr 22 14:36 lib
drwxr-xr-x. 2 root root 4096 Apr 22 14:33 libexec
drwxr-xr-x. 2 root root 4096 Apr 22 14:31 sbin
What am I doing wrong?
This is directly from the Cloudera Quickstart VM with CDH4 installed.
Following the "Hadoop Tutorial" .
It even says
**Prerequisites**
Ensure that CDH is installed, configured, and running. The easiest way to get going quickly is to use a CDH4 QuickStart VM
Which is exactly from where I am running this tutorial from - the CDH4 QuickStart VM.
What am I doing wrong?
Update:
Version information:
[cloudera@localhost cloudera]$ cat cdh_version.properties
# Autogenerated build properties
version=2.0.0-cdh4.2.0
git.hash=8bce4bd28a464e0a92950c50ba01a9deb1d85686
cloudera.hash=8bce4bd28a464e0a92950c50ba01a9deb1d85686
cloudera.base-branch=cdh4-base-2.0.0
cloudera.build-branch=cdh4-2.0.0_4.2.0
cloudera.pkg.version=2.0.0+922
cloudera.pkg.release=1.cdh4.2.0.p0.12
cloudera.cdh.release=cdh4.2.0
cloudera.build.time=2013.02.15-18:39:29GMT
cloudera.pkg.name=hadoop
CLASSPATH ENV:
[cloudera@localhost bin]$ echo $CLASSPATH
:/usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/*
EDIT!!
So I think I figured it out.
This seems to be a new issue with the Cloudera CDH4 QuickStart VM; see this post, dated yesterday, where another person was having the exact same problem.
It seems that javac does not handle wildcards properly in exported classpaths.
I had to do the following:
export CLASSPATH=/usr/lib/hadoop/client-0.20/\*:/usr/lib/hadoop/\*
Then run javac -d without a -cp override:
javac -d wordcount_classes/ WordCount.java
Only warnings will appear.
I wonder if Cloudera needs to fix their QuickStart VM.
You need a CLASSPATH variable that includes those directories under /usr/lib/hadoop if you want javac to find them. You can set this environment variable as follows:
$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/*
javac will now find those libs. If you get any additional complaints about classpath entries, you can append more paths to the line above, using a colon (:) as the delimiter.
You could put this in a bash script, but it is best practice to set the correct environment variables at runtime so you get exactly what you want. In this case the WordCount setup or the CDH4 environment may already be setting it, but it is best to just set it yourself.
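For instance, if the compiler also complained about classes under /usr/lib/hadoop/lib (a directory visible in the listing above), you could append that path too; a hypothetical example:
export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/*
javac -d wordcount_classes WordCount.java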
I spent some time searching for a solution to the same issue (also using the CDH4 VM), so I shall leave my solution here in the hope that it helps others.
Unfortunately, neither of the solutions above worked in my case.
However, I was able to successfully compile the example by closing my terminal and opening a new one. My issue was having previously switched to the 'cloudera' user with 'sudo su cloudera' as mentioned in the tutorial.
Reference:
http://community.cloudera.com/t5/Apache-Hadoop-Concepts-and/Classpath-Problem-on-WordCount-Tutorial-QuickStart-VM-4-4-0-1/td-p/3613

hbase 0.90.5 does not work after replacing hadoop*.jar in hbase/lib/

I have Debian 6.03 and a problem with those best friends, HBase and Hadoop.
Step by step, I want a working configuration of HBase (standalone for the first step) and Hadoop:
wget http://www.sai.msu.su/apache//hbase/hbase-0.90.5/hbase-0.90.5.tar.gz
tar xzfv hbase-0.90.5.tar.gz
sudo mv hbase-0.90.5 /usr/local/
sudo ln -s hbase-0.90.5/ hbase
sudo chown -R hduser:hadoop hbase*
lrwxrwxrwx 1 hduser hadoop 13 Jan 21 10:11 hbase -> hbase-0.90.5/
drwxr-xr-x 8 hduser hadoop 4096 Jan 21 10:11 hbase-0.90.5
dan@master:/usr/local/hbase$ su hduser
hduser@master:/usr/local/hbase$ bin/start-hbase.sh
starting master, logging to /usr/local/hbase/bin/../logs/hbase-hduser-master-master.out
hduser@master:/usr/local/hbase$ bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.5, r1212209, Fri Dec 9 05:40:36 UTC 2011
hbase(main):001:0> list
TABLE
0 row(s) in 0.8560 seconds
But after swapping the Hadoop core v1.0 jar into HBase's lib/ folder, I got:
hduser@master:/usr/local/hbase$ bin/stop-hbase.sh
hduser@master:/usr/local/hbase$ cp ../hadoop/hadoop-core-1.0.0.jar lib/
hduser@master:/usr/local/hbase$ rm lib/hadoop-core-0.20-append-r1056497.jar
hduser@master:/usr/local/hbase$ bin/start-hbase.sh
starting master, logging to /usr/local/hbase/bin/../logs/hbase-hduser-master-master.out
hduser@master:/usr/local/hbase$ bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.5, r1212209, Fri Dec 9 05:40:36 UTC 2011
hbase(main):001:0> list
TABLE
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
Why do I need ZooKeeper in standalone mode after replacing hadoop-core*.jar?
How do I fix it?
Have you configured hbase-env.sh so HBase manages ZooKeeper itself?
Have you configured the ZooKeeper quorum in hbase-site.xml?
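For a standalone setup, those two settings would typically look like this (a sketch; the quorum value is illustrative):
# conf/hbase-env.sh - let HBase manage its own ZooKeeper
export HBASE_MANAGES_ZK=true
And in conf/hbase-site.xml:
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>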
I had the same problem and solved it by configuring YARN and MapReduce.
Try this post.
