I am trying to write
echo 3 > /proc/sys/vm/drop_caches
from code, but I am getting a permission denied error.
I have tried using the CAP_SYS_ADMIN capability, but it didn't work, and I don't want to set the process uid to 0 (root).
Can anyone provide a solution for this? Thanks!
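For what it's worth, the open() on /proc/sys/vm/drop_caches is still an ordinary file-permission (DAC) check, so CAP_SYS_ADMIN alone will not get past it. A minimal sketch, assuming drop_caches_writer is your own binary and that CAP_DAC_OVERRIDE is the missing piece (worth verifying on your kernel):
sudo setcap cap_dac_override,cap_sys_admin+ep ./drop_caches_writer
./drop_caches_writer
The binary then only needs to open the file and write "3" to it, without ever running as uid 0.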
Operating system: Windows 10
Database: Postgres 13
When trying to execute an SQL file with mydatabase=> \i C:\tmp\sqlfile.sql I get a
C:: Permission denied
error. The file permissions have been set to read/write for all. No matter where I put the file (bin directory, public directory, tmp directory), I get the same error.
I have created a user bill and logged in as bill, and I still get the same error. Replacing '\' with '/' made no difference. The postgres account does not work either.
Is there a system setting that needs to be set to allow Postgres to read a file?
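For what it's worth, two things worth trying, as a sketch (psql -f is the command-line equivalent of \i; the icacls check assumes the directory is C:\tmp as above):
psql -U bill -d mydatabase -f C:/tmp/sqlfile.sql
icacls C:\tmp
The second command lists the directory's ACLs, which shows whether the account psql runs under can actually traverse and read it.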
When installing tripwire on Debian, it asks me whether I want to create a site key and a local key, and finally I need to click 'OK' when it completes.
Is there a way I can install tripwire without creating any keys and without having to answer the 'OK' at the end?
I'm using Digital Ocean's 'user data' where I copy & paste a bunch of bash commands so I can deploy a new droplet quickly.
Edit:
Looks like I was able to mute the prompts, but I still get this:
Setting up tripwire (2.4.2.2-4) ...
chmod: cannot access ‘/etc/tripwire/site.key’: No such file or directory
chmod: cannot access ‘/etc/tripwire/debian-512mb-nyc2-01-local.key’: No such file or directory
How can I avoid the chmod: cannot access errors?
To just suppress the errors, redirect stderr to /dev/null in your user-data script. Or, if you want a log of the errors, redirect it to a file so you can review it after startup.
chmod 600 /etc/tripwire/site.key 2>/dev/null
or
chmod 600 /etc/tripwire/site.key &>/tmp/chmod.log
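If the goal is a fully unattended install in user data, you can also suppress the dialogs up front; a minimal sketch, assuming apt-get is how the droplet installs tripwire:
export DEBIAN_FRONTEND=noninteractive
apt-get install -y tripwire
With the noninteractive frontend, debconf answers the prompts with their defaults instead of asking, so nothing blocks the boot script.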
I have written a little test script to prevent running my script simultaneously with flock:
#!/bin/bash
scriptname=$(basename "$0")
lock="/var/run/${scriptname}"
umask 0002
exec 200>"$lock"
flock -n 200 || exit 1
## The code:
sleep 60
echo "Hello world"
When I run the script as my user and then try to run it as another user, I get the following error message for the lock file:
/var/run/test.lock: Permission denied
Any idea?
Kind regards,
Andreas
In a comment, you mention that
other user is in the same group. file permissions are -rw-r--r--
In other words, only the first user has write permissions on the lock file.
However, your script does:
exec 200>$lock
which attempts to open the lockfile for writing. Hence the "permission denied" error.
Opening the file for writing has the advantage that it won't fail if the file doesn't exist, but it also means that you can't easily predict who the owner of the file will be if your script is being run simultaneously by more than one user. [1]
In most Linux distributions, the umask will be set to 0022, which causes newly created files to have permissions rw-r--r--, meaning that only the user who creates the file has write permission. That's sane security policy, but it complicates using a lockfile shared between two or more users. If the users are in the same group, you could adjust your umask so that new files are created with group write permission, remembering to set it back afterwards. For example (untested):
OLD_UMASK=$(umask)
umask 002
exec 200>"$lock"
umask $OLD_UMASK
Alternatively, you could apply the lock with only read permissions [2], taking care to ensure that the file is created first:
touch "$lock" 2>/dev/null # Don't care if it fails.
exec 200<"$lock" # Note: < instead of >
Notes:
[1]: Another issue with exec 200>file is that it will truncate the file if it does exist, so it is only appropriate when the file's existing contents don't matter. In general, you should use >> unless you know for certain that the file contains no useful information.
[2]: flock doesn't care what mode the file is open in. See man 1 flock for more information.
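Putting the two notes together, a sketch of the original script adjusted for multiple users (untested; it keeps the original /var/run location):
#!/bin/bash
scriptname=$(basename "$0")
lock="/var/run/${scriptname}"
touch "$lock" 2>/dev/null   # create the file if missing; ignore failure if another user already owns it
exec 200<"$lock"            # open read-only, so the second user needs no write permission [2]
flock -n 200 || exit 1
## The code:
sleep 60
echo "Hello world"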
I was trying to use flock on a file with shared group permissions from a system account. Access behaviour changed in Ubuntu 19.10 due to an updated kernel: you must be the user who owns the file; a user whose group merely matches the file permissions is no longer enough. Even sudo -u will show 'permission denied' or 'This account is currently not available'. The restriction applies to regular files in world-writable sticky directories, like the lock files used with the flock command (a companion fs.protected_fifos setting covers FIFOs).
The change was made to close security vulnerabilities.
There is a workaround to get the older behaviour back in:
create /etc/sysctl.d/protect-links.conf with the contents:
fs.protected_regular = 0
Then restart procps:
sudo systemctl restart procps.service
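To check whether this restriction is active on your machine, and to load the new file without restarting procps:
sysctl fs.protected_regular
sudo sysctl -p /etc/sysctl.d/protect-links.conf
A value of 1 or 2 from the first command means the restriction is in effect; 0 means the old behaviour.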
Run the whole script with sudo /path/script.sh instead of just /path/script.sh.
I am trying to transfer a file to an ec2 instance. I followed the Amazon's documentation, this is what my command looked like:
scp -i [the key's location] Documents/[the file's location] ec2-user@[public dns]:[home/[destination]]
where I replaced all the placeholders with the proper values. I am sure it's the correct key and it has permissions 400. When I run the command, it tells me the RSA key fingerprint and asks me if I want to continue connecting. I type yes and it replies with
Permission denied (publickey,gssapi-with-mic)
lost connection
I have looked at many of the similar questions on Stack Overflow and can't find the correct way to do it.
Also, SSH traffic is enabled on port 22.
The example Amazon provided is correct. It sounds like a folder permission issue: if the folder you are trying to copy to was created by another user, chances are you don't have permission to copy to it or edit it.
If you have sudo abilities, you can try opening up access for yourself. Though it's not recommended to leave it this way, you could try this command:
sudo chmod 777 /folderlocation
That gives complete read/write/execute permissions to anyone (hence why you shouldn't leave it at 777), but it will give you the chance to test your scp command and rule out permissions.
Afterwards, if you aren't familiar with permissions, I suggest reading up on them; here is an example: http://www.tuxfiles.org/linuxhelp/filepermissions.html It is generally suggested you lock the folder down as much as possible, depending on the type of information held within.
If that was not the cause some other things you might want to check:
are you in the directory of your key when executing the 'scp -i keyname' command?
do you have permissions to use the folder you are transferring from?
Best of luck.
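If the folder permissions check out, running scp in verbose mode shows which key is actually offered and why the server refuses it (the paths here are placeholders):
scp -v -i /path/to/key.pem ./file.txt ec2-user@ec2-xx-yy-zz-tt.compute-1.amazonaws.com:~/
Look for the 'Offering public key' and 'Authentications that can continue' lines in the output.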
The problem may be the user name. I copied a file to my Amazon instance and first tried the command:
scp -r -i ../.ssh/Amazon_server_key_pair.pem ./empty.test ec2-user@ec2-xx-yy-zz-tt.compute-1.amazonaws.com:~
and got the error: Permission denied (publickey).
I then realized that my instance is an Ubuntu environment, so the user is 'ubuntu'. The correct command that worked for me is:
scp -r -i ../.ssh/Amazon_server_key_pair.pem ./empty.test ubuntu@ec2-xx-yy-zz-tt.us-west-2.compute.amazonaws.com:~
The file "empty.test" is a text file containing the text "testing ...". Replace the server address with your own instance's Public DNS; I have replaced the IP of my instance with xx-yy-zz-tt.
I had to use ubuntu@ instead of ec2-user@, because when I SSH in I see ubuntu@ in my terminal; try the name you see at your prompt.
You also have to set the permissions for the .pem file on your computer:
chmod 400 /path/my-key-pair.pem
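The default login name depends on the AMI, so try the documented one for your image; a few common defaults (verify against your AMI's documentation):
ssh -i /path/my-key-pair.pem ec2-user@your-public-dns   # Amazon Linux
ssh -i /path/my-key-pair.pem ubuntu@your-public-dns     # Ubuntu
ssh -i /path/my-key-pair.pem centos@your-public-dns     # CentOS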
The command below copies a file from your computer to the EC2 instance:
scp -i ~/location_of_your_ec2_key_pair.pem ~/location_of_transfer_file/sample.txt ubuntu@ec2_your_ec2_instance.compute.amazonaws.com:~/folder_to_which_it_needs_to_be_copied
The command below copies a file from the EC2 instance to your computer:
scp -i ~/location_of_your_ec2_key_pair.pem ubuntu@ec2_your_ec2_instance.compute.amazonaws.com:~/location_of_transfer_file/sample.txt ~/folder_to_which_it_needs_to_be_copied
I was facing the same problem. I hope this works for you:
scp -rp -i yourfile.pem ~/local_directory username@instance_url:directory
Permission should also be correct to make this work.
It might be that one is using the wrong username. That happened to me; it was the same error message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection
C:\Projects\k>mysqldump --tab=c:\temp\multifile -ps -us s
mysqldump: Got error: 1: Can't create/write to file 'c:\temp\multifile\archive.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'
How can I fix this on Windows? I don't have any limitations for this user...
Windows error 13 is "permission denied". Maybe the file already exists and you can't delete it, which would be required to create a new file with that name.
Possible reasons:
This is the output when I run mysqldump as user work:
$ ll
total 908
-rw-rw-r-- 1 work work 1824 Apr 28 14:47 test.sql
-rw-rw-rw- 1 mysql mysql 922179 Apr 28 14:47 test.txt
The test.sql is created by user work, but the test.txt is created by user mysql, hence the "permission denied". In this situation, you should chmod your directory so the mysql user can write to it.
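A sketch of that fix (the dump directory path is a placeholder); note it is the server's mysql account, not your shell user, that writes the .txt files:
sudo chown mysql:mysql /path/to/dumpdir
or, temporarily:
sudo chmod 777 /path/to/dumpdir
Remember to tighten the permissions again after the dump completes.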
It is best that --tab be used only for dumping a local server. If you use it with a remote server, the --tab directory must exist on both the local and remote hosts, and the .txt files will be written by the server in the remote directory (on the server host), whereas the .sql files will be written by mysqldump in the local directory (on the client host).
Refer: Dumping Data in Delimited-Text Format with mysqldump
You need the FILE privilege in order to be allowed to use SELECT...INTO OUTFILE, which seems to be what mysqldump --tab uses to generate the tab-separated dump.
This privilege is global, which means it may only be granted "ON *.*":
GRANT FILE ON *.* TO 'backup'@'%';
Refer: Which are the proper privileges to mysqldump for the Error Access denied when executing 'SELECT INTO OUTFILE'.?
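A quick way to verify, as a sketch (the backup account name follows the example above):
mysql -u backup -p -e "SHOW GRANTS FOR CURRENT_USER();"
mysql -u backup -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"
The first shows whether the global FILE privilege is actually present; the second shows whether the server restricts the directories OUTFILE may write to, which can also cause write failures on newer servers.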
A first few ideas: is there enough space? Does MySQL have the rights to write there?
Another thing I found: turning off the anti-virus scanning of the windows/temp folder resolved the issue for me.
Hope it helps.
I had a similar error on Windows 7 and Windows Server 2008 R2: error 13, "permission denied". I tried all the suggested solutions; none worked.
I created a folder, e.g. c:\temp, and gave full control to the logged-in user. Issue resolved.
Right-click the target folder --> Properties --> Security --> Edit..., and make sure the permissions are editable under the active user.
I observed that this problem also occurs if the "Allow" permissions are grayed out for the target folder, even under the active user. After creating a new folder and granting full control, all permissions are editable and there are no more permission errors.
I had a similar issue and resolved it by adding "Everyone" permissions to the target folder.