C:\Projects\k>mysqldump --tab=c:\temp\multifile -ps -us s
mysqldump: Got error: 1: Can't create/write to file 'c:\temp\multifile\archive.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'
How can I fix it on Windows? I don't have any limitations for this user...
Windows error 13 is "permission denied". Maybe the file already exists and you can't delete it, which would be required to create a new file with that name.
One possible reason: this is the output when I run mysqldump as user work:
$ ll
total 908
-rw-rw-r-- 1 work work 1824 Apr 28 14:47 test.sql
-rw-rw-rw- 1 mysql mysql 922179 Apr 28 14:47 test.txt
The test.sql file is created by user work, but test.txt is created by user mysql, which is why you get "permission denied". In this situation, you should chmod your directory so the mysql user can write to it.
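On Linux, for example, a rough sketch of opening up the dump directory so the server account can write its .txt files there (the directory path is just a placeholder):
# quick and permissive: let anyone, including the mysqld server (user mysql), write here
chmod 777 /path/to/dumpdir
# or, less permissively, hand the directory to the mysql group
sudo chgrp mysql /path/to/dumpdir
sudo chmod 775 /path/to/dumpdir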
It is best that --tab be used only for dumping a local server. If you use it with a remote server, the --tab directory must exist on both the local and remote hosts, and the .txt files will be written by the server in the remote directory (on the server host), whereas the .sql files will be written by mysqldump in the local directory (on the client host).
Refer: Dumping Data in Delimited-Text Format with mysqldump
You need the FILE privilege in order to be allowed to use SELECT...INTO OUTFILE, which seems to be what mysqldump --tab uses to generate the tab-separated dump.
This privilege is global, which means it may only be granted "ON *.*":
GRANT FILE ON *.* TO 'backup'@'%';
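To check whether the account already has the privilege, you can inspect its grants from the shell (the account name 'backup' is the example from above):
mysql -u root -p -e "SHOW GRANTS FOR 'backup'@'%';"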
Refer: Which are the proper privileges to mysqldump for the Error Access denied when executing 'SELECT INTO OUTFILE'.?
First few ideas: is there enough space? Does mysql have rights to write there?
Another thing I found: turning off anti-virus scanning of the windows/temp folder resolved the issue for me.
Hope it helps
I had a similar error on Win7 and Win2008 R2: error 13 "permission denied". I tried all the suggested solutions; none worked.
I created a folder, e.g. c:\temp, and gave full control to the logged-in user. Issue resolved.
Right-click the target folder --> Properties --> Security --> Edit..., and make sure permissions are editable under the active user.
I observed that this problem also occurs if the "Allow" permissions are grayed out for the target folder, even under the active user. After creating a new folder and granting full control, all permissions are editable and there is no more permission error.
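The same full-control grant can also be applied from an elevated command prompt instead of the Properties dialog; a hedged sketch, with the folder path as an example:
rem grant the current user full control, inherited by subfolders and files
icacls c:\temp /grant %USERNAME%:(OI)(CI)F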
I had a similar issue and resolved it by adding "everyone" permissions to the target folder.
Operating system: Windows 10
Database: Postgres 13
When trying to execute an SQL file with mydatabase=> \i C:\tmp\sqlfile.sql I get a
C:: Permission denied
error. The file permissions have been set to read / write for all. No matter where I put the file, bin directory, public directory, tmp directory, I get the same error.
I have created a user bill and logged in as bill, and I still get the same error. Replacing the '\' with '/' did not help. The postgres account does not work either.
Is there a system setting that needs to be set to allow Postgres to read a file?
I'm trying to execute several different PostgreSQL commands inside of different bash scripts. I thought I had the .pgpass file properly configured, but when I try to run pg_dump, vacuumdb, or reindexdb, I get errors about how a password isn't being supplied. For my bash script to execute properly, I need these commands to return an exit code of 0.
I'm running PostgreSQL 9.5.4 on macOS 10.12.6 (16G1408).
In an admin user account [neither root nor postgres], I have a .pgpass file in ~. The .pgpass file contains:
localhost:5432:*:postgres:DaVinci
The user is indeed postgres and the password is indeed DaVinci.
Permissions on the .pgpass file are 600.
In the pg_hba.conf file, I have:
# pg_hba.conf file has been edited by DaVinci Project Server. Hence, it is recommended to not edit this file manually.
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 md5
host    all       all   127.0.0.1/32  md5
host    all       all   ::1/128       md5
So, for example, from a user account [neither root nor postgres], I run:
/Library/PostgreSQL/9.5/pgAdmin3.app/Contents/SharedSupport/pg_dump --host localhost --username postgres testworkflow13 --blobs --file /Users/username/Desktop/testdestination1/testworkflow13_$(date "+%Y_%m_%d_%H_%M").backup --format=custom --verbose --no-password
And I get the following error:
pg_dump: [archiver (db)] connection to database "testworkflow13" failed: fe_sendauth: no password supplied
I get the same result if I run this with sudo as well.
Curiously, pg_dump does execute and does export a .backup file to the testdestination1 directory, but since it throws an error, the bash script it runs in is halted.
Where am I going wrong? How can I make sure that the .pgpass file is being properly read so that the --no-password flag in the command works?
Please start by reading the official docs.
Also, even though this topic is more than 2 years old, I strongly suggest updating to at least version 10; in any case, nothing relevant has changed around .pgpass.
.pgpass needs to be chmod 600, fine, but the user that uses it must be able to read it, so that user must be the owner of the file.
Please remove the --no-password flag; it just confuses things and is not needed.
Using 127.0.0.1 instead of localhost clarifies where you are connecting; "usually" they are the same.
... from a user account [neither root nor postgres] ...
The user you are running the command as must have read access to .pgpass, as said, so you have to make sure of that and provide that file to that user; maybe using the PGPASSFILE env variable could be useful for you.
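As a rough sketch (database and user names are from the question; the output file name is just an example):
# the file must be owned by the user running the command and be mode 600
chmod 600 ~/.pgpass
# point libpq at the file explicitly
export PGPASSFILE="$HOME/.pgpass"
pg_dump --host localhost --username postgres --format=custom --file test.backup testworkflow13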
Another way is to use a .pg_service.conf file, with or without the .pgpass; from what you have written, it looks like that may be more appropriate.
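A minimal ~/.pg_service.conf sketch, with the service name chosen arbitrarily for the example:
[davinci]
host=127.0.0.1
port=5432
dbname=testworkflow13
user=postgres
# the password can still come from .pgpass (its host field must match), or be set here as password=... (less secure)
You could then connect with pg_dump "service=davinci" --format=custom --file test.backup.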
You could also set PGPASSWORD in the user's environment.
Think about security: some choices look the simplest but can expose credentials. As a DBA, I'm frankly tired of people who store passwords in visible places, print them in logs or on GitHub, or set "trust" in pg_hba, and then come to me saying "PostgreSQL is insecure".. hahaha!
Final note: you do not have a pg_hba error; if you did, you would get a "pg_hba" error message.
Turns out that changing all three lines in the pg_hba.conf file to the trust method of authentication solved this.
local   all   all                 trust
host    all   all   127.0.0.1/32  trust
host    all   all   ::1/128       trust
Since the method is trust, the .pgpass file may be entirely irrelevant--I'm not sure, but at least I got it working.
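One more note: pg_hba.conf changes only take effect after the server reloads its configuration. A hedged way to do that, assuming the stock 9.5 installer layout from the question (the bin and data directory paths are assumptions):
sudo -u postgres /Library/PostgreSQL/9.5/bin/pg_ctl reload -D /Library/PostgreSQL/9.5/data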
I'm trying to copy a local directory to the root directory of my 1and1 server. I'm on a mac and I've ssh'ed into the server just fine. I looked online and saw numerous examples all along the same lines. I tried:
scp -r ~/Desktop/Projects/resume u67673257@aimhighprogram.com:/
The result in my terminal was:
I'm not sure where Kuden/homepages/29/d401405832/htdocs came from; I thought the ~ would take me to the MacBook user directory.
Any help would be appreciated, I'm not sure if I'm just missing something simple.
Thanks in advance
To scp, issue the command on your Mac, don't SSH into 1and1.
The error message is telling you that ~/Desktop/Projects/resume is not on the 1and1 server, which you know - because you're working to put it there.
More ...
scp myfile myuser@myserver:~/mypath/myuploadedfile
You would read this as:
scp myfile to myserver, logging in as myuser, and place it under the mypath directory of the myuser account with the name myuploadedfile
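Applied to the command from the question, that means running something like this from the Mac's own terminal (note the @ before the host; the remote target directory ~/resume is just an example):
# run locally on the Mac, not inside the SSH session on the 1and1 server
scp -r ~/Desktop/Projects/resume u67673257@aimhighprogram.com:~/resume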
I am new to hadoop and I have a problem.
The problem is that I want to let someone use the hdfs command, but I cannot give them the root password, so everything that needs "sudo" or "su hdfs" does not work. There are reasons why I cannot give others root permission.
I have found some solutions, like:
Create a group, give the group HDFS permissions, and add a user to it, so that the user has HDFS permissions. I tried it but failed.
So, I want a user to be able to use hdfs commands without running "sudo -su hdfs" or any other command that needs sudo permission. Could you tell me how to set the related settings or files in more detail, or point me to a useful reference website? Thank you all!
I believe that by just setting the permissions of /usr/bin/hdfs to '-rwxr-xr-x 1 root root', other accounts should be able to execute the hdfs command successfully. Have you tried it?
Rajest is right, 'hdfs command does not need sudo'; probably you are using 'sudo -su hdfs' because the command is hitting a path where only the user 'hdfs' has permissions. You must organize data for your users.
A workaround (if you are not using Kerberos) for using hdfs as any user is to execute this line before working:
export HADOOP_USER_NAME=<HDFS_USER>
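A short usage sketch (the target user name and paths are just examples):
# impersonate the hdfs superuser for this shell session (only effective without Kerberos)
export HADOOP_USER_NAME=hdfs
hdfs dfs -mkdir -p /user/alice
hdfs dfs -ls /user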
I have written a little test script that uses flock to prevent my script from running simultaneously:
#!/bin/bash
scriptname=$(basename $0)
lock="/var/run/${scriptname}"
umask 0002
exec 200>$lock
flock -n 200 || exit 1
## The code:
sleep 60
echo "Hello world"
When I run the script as my user and then try to run it as another user, I get the following error message for the lock file:
/var/run/test.lock: Permission denied
Any idea?
Kind regards,
Andreas
In a comment, you mention that
other user is in the same group. file permissions are -rw-r--r--
In other words, only the first user has write permissions on the lock file.
However, your script does:
exec 200>$lock
which attempts to open the lockfile for writing. Hence the "permission denied" error.
Opening the file for writing has the advantage that it won't fail if the file doesn't exist, but it also means that you can't easily predict who the owner of the file will be if your script is being run simultaneously by more than one user. [1]
In most linux distributions, the umask will be set to 0022, which causes newly-created files to have permissions rw-r--r--, which means that only the user which creates the file will have write permissions. That's sane security policy but it complicates using a lockfile shared between two or more users. If the users are in the same group, you could adjust your umask so that new files are created with group write permissions, remembering to set it back afterwards. For example (untested):
OLD_UMASK=$(umask)
umask 002
exec 200>"$lock"
umask $OLD_UMASK
Alternatively, you could apply the lock with only read permissions [2], taking care to ensure that the file is created first:
touch "$lock" 2>/dev/null # Don't care if it fails.
exec 200<"$lock" # Note: < instead of >
Notes:
[1]: Another issue with exec 200>file is that it will truncate the file if it does exist, so it is only appropriate for empty files. In general, you should use >> unless you know for certain that the file contains no useful information.
[2]: flock doesn't care what mode the file is open in. See man 1 flock for more information.
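Putting the second approach together with the script from the question, an untested sketch might look like this:
#!/bin/bash
scriptname=$(basename "$0")
lock="/var/run/${scriptname}"
# create the lock file if it is missing; ignore the error if another user already owns it
touch "$lock" 2>/dev/null
# open it read-only on fd 200; flock does not care about the open mode
exec 200<"$lock"
flock -n 200 || exit 1
## The code:
sleep 60
echo "Hello world"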
I was trying to use flock on a file with shared group permissions from a system account. Access behaviour changed in Ubuntu 19.10 due to an updated kernel: you must be logged in as the user who owns the file, not merely a user whose group matches the file permissions. Even sudo -u will show 'permission denied' or 'This account is currently not available'. It affects files in world-writable sticky directories, including lock files like the ones used with the flock command.
The reason for the change is to mitigate security vulnerabilities.
There is a workaround to get the older behaviour back:
create /etc/sysctl.d/protect-links.conf with the contents:
fs.protected_regular = 0
Then restart procps:
sudo systemctl restart procps.service
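Alternatively, the setting can be applied immediately with sysctl; it reverts on reboot unless the file above is kept in place:
sudo sysctl -w fs.protected_regular=0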
Run the whole script with sudo /path/script.sh instead of just /path/script.sh.