I am currently trying to set up a Squid proxy for caching purposes on a server running Ubuntu 20.04, but it fails to start with the following errors:
2021/03/28 15:37:38| Created PID file (/var/run/squid.pid)
2021/03/28 15:37:38 kid1| Set Current Directory to /mnt/Biggy/squid
2021/03/28 15:37:38 kid1| Creating missing swap directories
2021/03/28 15:37:38 kid1| Not currently OK to rewrite swap log.
2021/03/28 15:37:38 kid1| storeDirWriteCleanLogs: Operation aborted.
2021/03/28 15:37:38 kid1| FATAL: Failed to make swap directory /mnt/Biggy/squid: (13) Permission denied
2021/03/28 15:37:38 kid1| Squid Cache (Version 4.10): Terminated abnormally.
The folder in question was given the correct ownership and permissions:
drwxrwxr-x 2 proxy proxy 4096 Mar 28 14:50 squid
The relevant directives in squid.conf are below:
cache_store_log /mnt/Biggy/squid/store.log
coredump_dir /mnt/Biggy/squid
cache_dir ufs /mnt/Biggy/squid 10000 16 256
cache_effective_user proxy
cache_effective_group proxy
I checked the user, group, permissions, etc., and they all seem correct. There is, however, another important point: if I change cache_effective_user to my own username and take ownership of the folder /mnt/Biggy/squid/, then it works. I would, however, prefer not to do that.
Does anyone have a suggestion? I don't mind deleting the /mnt/Biggy/squid/ folder if needed but it has to be stored on /mnt/Biggy.
Thank you
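One thing worth checking, since the ownership and mode bits already look right: the proxy user also needs execute (traverse) permission on every parent directory, and on Ubuntu the squid package is normally confined by an AppArmor profile that only allows the default cache locations, which would put /mnt/Biggy outside it. A rough diagnostic sketch (the profile name usr.sbin.squid is the stock Ubuntu one and is an assumption here):
sudo -u proxy test -w /mnt/Biggy/squid && echo writable || echo blocked   # plain POSIX check as the proxy user
namei -l /mnt/Biggy/squid        # every parent directory needs at least --x for the proxy user
sudo aa-status | grep -i squid   # a squid profile in enforce mode likely does not include /mnt/Biggy
If AppArmor turns out to be the blocker, adding the new cache path to the local override (commonly /etc/apparmor.d/local/usr.sbin.squid) and reloading the profile is usually preferable to changing ownership.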
While trying to execute an rsync command from the macOS Catalina terminal, I was able to successfully copy the data from the volume to a local drive.
This is the command I am using:
rsync -avxhPE /Volumes/pathtofolder/assets/. ./assets
But the same command is not working in Jenkins. I am getting the following error:
+ rsync -avxhPE /Volumes/pathtofolder/assets/. ./assets
19:28:22 building file list ...
19:28:24 0 files...
rsync: opendir "/Volumes/pathtofolder/assets/." failed: Operation not permitted (1)
19:28:24 1 file to consider
19:28:24 ./
19:28:24
19:28:24 sent 83 bytes received 26 bytes 43.60 bytes/sec
19:28:24 total size is 0 speedup is 0.00
19:28:24 rsync error: some files could not be transferred (code 23) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-54/rsync/main.c(996) [sender=2.6.9]
19:28:24 Build step 'Execute shell' marked build as failure
19:28:24 Finished: FAILURE
The user who runs Jenkins does not have permission to read the directory. Check this line:
rsync: opendir "/Volumes/pathtofolder/assets/." failed: Operation not permitted (1)
This is a consequence of new user privacy protections in macOS 10.15. See WWDC 2019 Session 701 Advances in macOS Security for all the details. As a user, you can grant access to the tool by adding it to the list in
System Preferences > Security & Privacy > Privacy > Files and Folders.
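A quick way to tell the macOS privacy (TCC) restriction apart from an ordinary Unix permission problem is to reproduce the failure as the account the Jenkins build runs under (the username jenkins below is an assumption):
sudo -u jenkins ls /Volumes/pathtofolder/assets   # "Operation not permitted" despite correct rwx bits points at TCC
ls -le /Volumes/pathtofolder/assets               # an ordinary permission or ACL problem would show up here instead
If the directory reads fine interactively but not from the job, granting Full Disk Access (or Files and Folders access) to the binary that actually runs the build - for example the java process behind the Jenkins agent - is the usual remedy.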
I am attempting to set up gpg preset passphrase caching using the gpg agent so I can automate my file encryption process. In order for gpg-agent to run and properly cache the passphrase, it seems there needs to be an S.gpg-agent socket located within the ~/.gnupg/ directory; that socket gets generated in the root directory when I set up gpg and gpg-agent.
What I have done (and what seemed to work in the past) is start everything up as root, copy the contents of the /.gnupg directory over to my less-privileged user, and grant that user permissions on the socket and directory. The commands I ran to start the gpg-agent daemon and cache the passphrase:
gpg-agent --homedir /home/<user>/.gnupg --daemon
/usr/libexec/gpg-preset-passphrase --preset --passphrase <passphrase> <keygrip>
The gpg-agent process seems to be running just fine, but I get the error below from the second command:
gpg-preset-passphrase: can't connect to `/home/<user>/.gnupg/S.gpg-agent': Connection refused
gpg-preset-passphrase: caching passphrase failed: Input/output error
I have made sure the socket exists in the directory with proper permissions, and this process runs as root. It seems that the socket is still inherently tied to root even if I copy it and modify its permissions. So my questions are:
How exactly does this socket get initialized?
Is there a way to do so manually as another user?
To add, the agent process seems to run just fine for both users, but where I get a little hazy is how gpg-preset-passphrase uses the socket, and whether it is that tool or the agent that is refusing the connection to S.gpg-agent.
I also assume that I don't need to explicitly start the agent, but figured I would do so anyway so that I could set any values, such as the homedir, if needed.
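On the question of how the socket gets initialized: copying the socket file only copies an inode, and nothing is listening on the copy, which by itself can produce exactly this "Connection refused". A rough sketch of checking and creating the socket as the unprivileged user, assuming a GnuPG 2.1+ build (older 2.0.x agents instead print/export GPG_AGENT_INFO when started):
gpgconf --list-dirs agent-socket    # where this user's tools expect to find S.gpg-agent
gpg-connect-agent /bye              # auto-launches gpg-agent for this user and creates the socket
/usr/libexec/gpg-preset-passphrase --preset <keygrip>    # then preset; the passphrase is read from stdin
Note that presetting only works if allow-preset-passphrase is set in that user's gpg-agent.conf.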
It turns out the issue was unrelated to gpg-agent and gpg-preset-passphrase.
Note: This is not a permanent solution but it did allow me to get past the issue I was facing.
After modifying /etc/selinux/config and disabling SELinux, I no longer experienced the permissions issue above. SELinux is a Linux kernel security module (I am currently running this on RHEL 7). It seems the next step will likely be to make sure these binaries and packages are allowed access for my user using audit2allow. A bit more information on this here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-allowing_access_audit2allow
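For reference, the audit2allow route mentioned above could look roughly like this, which keeps SELinux enforcing instead of disabled (the module name gpgagent_local is arbitrary):
ausearch -m AVC -ts recent | audit2allow -M gpgagent_local   # build a local policy module from the logged denials
semodule -i gpgagent_local.pp                                # load the module
# setenforce 0 / setenforce 1 toggles permissive mode temporarily while testing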
Having trouble installing Postgres 10 on my Mac (macOS 10.12.5).
I've tried installing 2 ways:
(1) Downloading Postgres.app
(2) `brew install postgresql`
and tried to manually run a bunch of variations of these commands for initdb:
$ initdb /Applications/Postgres.app/Contents/Versions/latest/bin/ver-10
$ initdb /usr/local/var/postgres -E utf8
All yield the same error: FATAL: could not create semaphores: Invalid argument.
Full trace:
The files belonging to this database system will be owned by user "foo".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.UTF-8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /usr/local/var/postgres ... ok
creating subdirectories ... ok
selecting default max_connections ... 10
selecting default shared_buffers ... 400kB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... 2018-03-12 20:49:58.654 PDT [98144] FATAL: could not create semaphores: Invalid argument
2018-03-12 20:49:58.654 PDT [98144] DETAIL: Failed system call was semget(1, 17, 03600).
child process exited with exit code 1
initdb: removing contents of data directory "/usr/local/var/postgres"
I'm not sure what FATAL: could not create semaphores: Invalid argument means. I've seen a lot of other answers related to insufficient space, but not this one.
Thanks in advance!
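"Invalid argument" from semget() usually means a semaphore set with the same key already exists with a different size (often left behind by another or a crashed Postgres instance), or the System V semaphore limits are too low; it is not a disk-space problem. A hedged set of checks on macOS (the kern.sysv.* names are the stock ones):
sysctl kern.sysv.semmni kern.sysv.semmns kern.sysv.semmsl   # current SysV semaphore limits
ipcs -s                                                     # semaphore sets already allocated
ps aux | grep postgres                                      # e.g. Postgres.app and a Homebrew install running side by side can conflict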
I had to remove the Postgres I had installed via Brew, then reinstall Postgres.app and restart my computer; after that, opening the app and clicking 'Initialize' fixed it.
I am trying to set up Ambari on a single-node cluster.
The Ambari setup was done as the root user.
I tried all the posts related to this, changed the permissions, and set up passwordless SSH as described here:
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Installing_HDP_AMB/content/_set_up_password-less_ssh.html
cd ~/.ssh
rm -rf /root/.ssh
ssh-keygen -t dsa
cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys
cat /root/.ssh/authorized_keys
Copied the key from the line above into Ambari during the cluster setup step.
ambari-server restart
When I try to Register and Confirm in Install Options, I get the error below.
However, I am able to do "ssh root@hadoop.maxsjohn.com" without giving the password.
==========================
Creating target directory...
==========================
Command start time 2017-03-13 03:35:43
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
SSH command execution finished
host=hadoop.maxsjohn.com, exitcode=255
Command end time 2017-03-13 03:35:43
ERROR: Bootstrap of host hadoop.maxsjohn.com fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
STDOUT:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
So, coming in a year later, I got a very similar error but with a multi-host cluster. In case it helps, I found this error happens for the host running Ambari Server when the private key file chosen on the 'Install Options' page in the 'Cluster Install Wizard' is incorrect (in my case I re-created the keys but neglected to update Ambari). From the host OS perspective the passwordless SSH works just fine, but Ambari fails to install the host until the corresponding SSH private key file is chosen.
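A quick way to confirm that the key file handed to the wizard matches what is on the target host is to compare fingerprints and then test with exactly that identity (the paths below are the ones from the question):
ssh-keygen -lf /root/.ssh/id_dsa.pub         # fingerprint of the generated key
ssh-keygen -lf /root/.ssh/authorized_keys    # fingerprints of the keys the host will accept
ssh -i /root/.ssh/id_dsa -o IdentitiesOnly=yes root@hadoop.maxsjohn.com true   # forces this key only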
I suspect the password cannot be blank; you need to set a password. If this is for your learning, I would suggest taking a copy of the VM from the Hortonworks site and using it. You don't have to go through the pain of installing and configuring. Here is the link
I've joined a new company, and to get up to speed I've been playing with Vagrant for my VM. I had my system nearly set up, and then a weird error forced me to shut off my laptop without disconnecting via vagrant destroy. Now, when trying to get set up again, I run vagrant up and get the following error message.
[default] Running provisioner: Vagrant::Provisioners::ChefClient...
[default] Creating folder to hold client key...
[default] Uploading chef client validation key...
[default] Generating chef JSON and uploading...
[default] Running chef-client...
stdin: is not a tty
[Wed, 16 Jan 2013 05:20:20 -0500] INFO: *** Chef 0.10.2 ***
[Wed, 16 Jan 2013 05:20:20 -0500] INFO: Client key /etc/chef/client.pem is not present - registering
[Wed, 16 Jan 2013 05:20:21 -0500] INFO: HTTP Request Returned 409 Conflict: Client already exists.
[Wed, 16 Jan 2013 05:20:22 -0500] INFO: HTTP Request Returned 403 Forbidden: Merb::ControllerExceptions::Forbidden
[Wed, 16 Jan 2013 05:20:22 -0500] FATAL: Stacktrace dumped to /srv/chef/file_store/chef-stacktrace.out
[Wed, 16 Jan 2013 05:20:22 -0500] FATAL: Net::HTTPServerException: 403 "Forbidden"
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
chef-client -c /tmp/vagrant-chef-1/client.rb -j /tmp/vagrant-chef-1/dna.json
Now, from my own research, I see that this means a client already exists with the name specified, so I decided to manually shut it down. I tried to list all the knife clients with knife client list, but then got the following message:
WARNING: No knife configuration file found
ERROR: Your private key could not be loaded from /etc/chef/client.pem
Check your configuration file and ensure that your private key is readable
Strange. I know knife.rb exists; I see it when I ls, so I don't know how the knife configuration file couldn't exist. Apparently I can't see my knife clients without this private key. I'm completely new to Vagrant, Knife, AND Chef, so I'm stumped.
Thoughts?
So the convention is that your knife.rb is located in ~/.chef/knife.rb or /etc/chef/knife.rb - I prefer the former, as it keeps it in my home folder, and constrained to MY user account.
I will also typically keep my Chef Server client certificate there as well.
Once you are able to execute a knife client list successfully, then you will be able to identify and remove the offending client certificate. (You might also be able to use the Web UI in the interim).
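A minimal sketch of getting to that point, assuming a home-folder config; the client name, key paths, and server URL below are placeholders, not values from this setup:
mkdir -p ~/.chef
cat > ~/.chef/knife.rb <<'EOF'
node_name       "myuser"                          # the client name knife authenticates as
client_key      "#{ENV['HOME']}/.chef/myuser.pem" # that client's private key
chef_server_url "https://chef.example.com"
EOF
knife client list                        # should now authenticate with the key above
knife client delete <offending-client>   # remove the stale client so chef-client can re-register
knife node delete <offending-node>       # the matching node object usually needs removing as well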
Having Vagrant remove the client's cert on destroy was a suggested feature but was never implemented, leaving it to the operator to make that decision.
Additionally - it looks like you're using a VERY old version of Chef - 0.10.2 - and we've just had 10.18.2 released today. Something to consider.