I'm trying to copy files from a Windows share to my node's cache. Apparently there's no way to do that with remote_file, so my alternative thought was to mount the Windows share as a local drive and access the files I need via the file resource. However, even though Chef tells me the mount succeeded, I'm unable to see the share or access it on the node.
mount "H:" do
device "\\\\ \\software"
action :mount
end
Just like https://tickets.opscode.com/browse/CHEF-1267 suggests.
However this isn't working:
Recipe: ossec::default
  * mount[H:] action mount
[2014-06-04T07:37:03-07:00] INFO: Processing mount[H:] action mount (ossec::default line 20)
[2014-06-04T07:37:03-07:00] INFO: mount[H:] mounted
    - mount to H:
[2014-06-04T07:37:07-07:00] INFO: Chef Run complete in 3.8376 seconds
[2014-06-04T07:37:07-07:00] INFO: Running report handlers
[2014-06-04T07:37:07-07:00] INFO: Report handlers complete
Chef Client finished, 1 resources updated
Based on this output the share is getting mounted, however it is not available on the Windows node.
This is normal. Windows drive mappings are not shared across sessions, so drives mapped in the session where Chef is running are not visible in any other session. In addition, mappings are not persistent by default, so a mapping made in one Chef run will not (by default) be available in later sessions.
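If the end goal is simply to get files from the share onto the node, one workaround is to map, copy, and unmap inside a single session, for example from one batch or powershell_script resource, so the mapping never has to outlive the Chef run. A rough cmd sketch, where the server name, credentials, drive letter, and target path are placeholders for your environment:

rem map the share, copy what is needed into the Chef cache, then drop the mapping
rem (robocopy exits non-zero even on success, so check its exit code if you wrap this in a resource)
net use Z: \\fileserver\software /user:MYDOMAIN\chefsvc MyP@ssw0rd
robocopy Z:\ C:\chef\cache\software /E
net use Z: /delete

Because the mapping and the copy happen in the same session, the scoping issue described above does not apply.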
I recently upgraded a system. After the reboot I was not able to log in again; all users were rejected with "Login incorrect". systemd with journaling was running and writing error messages to files in /var/log/journal as usual.
So I booted the system from a recovery USB stick (same distribution), mounted the root device of the failed system at /mnt, and tried to analyze the logs with journalctl --root=/mnt/var/log/journal -xe. journalctl did not find any journal files.
Question: how can I read the systemd journal content of a dead system using a recovery system?
Have fun
I may be a bit late, but I stumbled upon this question and here is what I found:
journalctl logs are located in /var/log/journal/*
journalctl can read foreign journal files with the following switches:
--file= followed by a *.journal file of your choice; this option may be used multiple times
--root= followed by the root directory of your choice, typically a mounted partition
--image= followed by a disk image
So individual files are read with --file=, and a whole mounted system with --root=, as in the sketch below.
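For the recovery-stick case above, a minimal sketch, assuming the dead system's root partition is /dev/sda2 (device name and mount point are placeholders):

# mount the dead system's root filesystem from the recovery system
mount /dev/sda2 /mnt
# point --root= at the mounted root, not at the journal directory itself;
# journalctl then looks under /mnt/var/log/journal on its own
journalctl --root=/mnt -xe
# or read individual journal files directly (--file accepts a glob)
journalctl --file='/mnt/var/log/journal/*/system.journal' -xe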
I am using the user data of an EC2 instance to power up my auto-scaling instances and run the application. I am running a Node.js application.
But it is not working properly. I debugged and checked the instance's cloud monitor output, and it says:
pm2 command not found
After reading and investigating a lot, I found that the directory containing the command is not on root's PATH.
When the script runs as EC2 user data, the PATH is
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
After SSHing in as ec2-user it is
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
After switching to root with sudo su it is
/root/.nvm/versions/node/v10.15.3/bin:/sbin:/bin:/usr/sbin:/usr/bin
The command is only found with the last of these PATHs.
So what is the way (or script) to run the command as root during instance launch via user data?
First, a caveat: starting your application with user data is not recommended, because per the AWS documentation there is no assurance that the instance will only come up after successful execution of the user data; even if the user data fails, the instance will still spin up.
For your problem, I assume that if you give the complete absolute path to the binary, it will work:
/root/.nvm/versions/node/v10.15.3/bin/pm2
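For example, a user-data script along these lines should work; the nvm path comes from the question, and the app path and name are placeholders (note that pm2 usually also needs node resolvable, so extending PATH is safer than calling the pm2 binary by absolute path alone):

#!/bin/bash
# user data runs as root with a minimal PATH, so add the nvm bin directory explicitly
# (adjust the node version to match your install)
export PATH="/root/.nvm/versions/node/v10.15.3/bin:$PATH"
pm2 start /home/ec2-user/app/app.js --name myapp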
A better solution for this approach: create a service file for your application startup and start the application with systemd, for example as sketched below.
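A rough sketch of that idea, written as a one-off shell snippet; the unit name and app path are placeholders, and the node path is the one from the question (pm2's own pm2 startup generator is another option):

cat >/etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=My Node.js application
After=network.target

[Service]
ExecStart=/root/.nvm/versions/node/v10.15.3/bin/node /home/ec2-user/app/app.js
Restart=always
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now myapp.service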
During the vagrant up command I saw that it generates a settler_default_xxxxx folder in my virtual machines folder, then renames the folder to homestead-7, and then tries to run the machine and fails with:
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'poweroff' state.
When I look at the machine, I see that it has the wrong hard disk attached. The disk still points to the path in the settler_ folder, which no longer exists. I have to manually remove the attached disk and attach the right one from the homestead-7 folder instead (roughly the VBoxManage commands sketched below).
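The manual fix amounts to something like the following VBoxManage commands; the VM name, controller name, port, and disk filename are placeholders, and the VirtualBox GUI achieves the same thing:

rem detach the stale disk that still points at the old settler_ path
VBoxManage storageattach homestead-7 --storagectl "SATA Controller" --port 0 --device 0 --medium none
rem attach the disk that actually lives in the homestead-7 folder
VBoxManage storageattach homestead-7 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium "V:\homestead-7\box-disk001.vmdk"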
Why doesn't Vagrant update the disk attachment to point at the homestead-7 folder? Am I doing something wrong? If it's a bug, is it Vagrant's, Laravel's, or VirtualBox's?
My config:
Vagrant 1.8.1
VirtualBox 5.0.14
VAGRANT_HOME points to V:\VAGRANT_HOME
VirtualBox File -> Preferences -> Default Machine Folder points to V:\
Computer:
Windows 10 64-bit
8GB RAM
I am trying to set up a RabbitMQ cluster on Windows servers, and this requires using a shared Erlang cookie file. According to the documentation, all I need to do is ensure that the root directories on the different machines contain the same .erlang.cookie file. So what I did was find these files on both machines and overwrite them with the same shared version.
After that, all rabbitmqctl commands failed on the machine with the new file version with an "unable to connect to node..." error message. I tried to restart the RabbitMQ Windows service, but rabbitmqctl still complained. I even reinstalled RabbitMQ on that machine, but then .erlang.cookie was reset back to the old version. Whenever I tried to use the new version of the cookie file, rabbitmqctl failed. When I restored the old version, it worked fine.
Basically I am stuck and cannot proceed with the cluster setup until I resolve this issue. Any help is appreciated.
UPDATE: Received an answer from RabbitMQ:
"rabbitmqctl will pick up the cookie from the user home directory while the service will pick it up from C:\windows. So you will need to synchronise those with each other, as well as with the other machine."
This basically means that the cookie file needs to be replaced in two places: C:\Windows and the current user's home directory.
You have the above correct. The service will use the cookie at C:\Windows and when you use rabbitmqctl.bat to query the status it is using the cookie in your user directory (%USERPROFILE%).
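In cmd terms, on each node, something like the following; the service name assumes a default Windows service install:

rem after replacing C:\Windows\.erlang.cookie with the shared cluster cookie,
rem copy the same file over the per-user cookie so rabbitmqctl.bat can connect
copy /Y C:\Windows\.erlang.cookie "%USERPROFILE%\.erlang.cookie"
rem restart the service so it picks up the new cookie
net stop RabbitMQ
net start RabbitMQ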
When the cookies don't match, the error looks like this:
C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-2.8.2\sbin>rabbitmqctl.bat status
Status of node 'rabbit@PC-FOOBAR' ...
Error: unable to connect to node 'rabbit@PC-FOOBAR': nodedown
DIAGNOSTICS
===========
nodes in question: ['rabbit@PC-FOOBAR']
hosts, their running nodes and ports:
- PC-FOOBAR: [{rabbit,49186},{rabbitmqctl30566,63150}]
current node details:
- node name: 'rabbitmqctl30566@pc-foobar'
- home dir: U:\
- cookie hash: Vp52cEvPP1PukagWi5S/fQ==
There is one more gotcha for RabbitMQ cookies on Windows: if you have %HOMEDRIVE% and %HOMEPATH% environment variables set (as we do in our current test environment, which is what sets the home dir above to U:\), then RabbitMQ will look for the cookie there, and if there isn't one it makes one up and writes it there. This left me banging my head on my desk for quite a while when trying to get this working. Once I found this gotcha it was obvious the cookie files were the problem (as documented); they were just in an odd location (not documented, AFAIK).
Hope this solves someone's pain setting up RabbitMQ clustering on Windows.
I have just installed RabbitMQ on my Windows XP PC. I have fulfilled the Erlang OTP R15 prerequisite as well.
RabbitMQ seems to be working. I did a simple test using pika in Python and it seems to work. The service is running.
The problem is that I cannot do anything with rabbitmqctl.bat. I always get the response:
Status of node rabbit@MYPCNAME ...
Error: unable to connect to node rabbit@MYPCNAME: nodedown
diagnostics:
- nodes and their ports on MYPCNAME: [{rabbit,3097},{rabbitmqctl17251,1132}]
- current node: rabbitmqctl17251@mypcname
- current node home dir: C:\Documents and Settings\Myuser
- current node cookie hash: NOTSUREIFTHISISSENSITIVESOREMOVED==
In my rabbitmq log file I get:
=ERROR REPORT==== 12-Feb-2012::17:01:22 ===
** Connection attempt from disallowed node rabbitmqctl17251@mypcname **
From various forums I deduce this has something to do with cookies. What cookies are we talking about? What do I need to do to be able to manage my RabbitMQ instance using rabbitmqctl.bat? Please word your answer in a way that a non-Erlang, non-functional programmer would understand.
I had the same problem; these instructions, straight out of the manual installation guide, solved it:
Synchronise Erlang Cookies (when running a manually installed Windows Service)
Erlang Security Cookies used by the service account and the user
running rabbitmqctl.bat must be synchronised for rabbitmqctl.bat to
function.
To ensure Erlang cookie files contain the same string, copy the .erlang.cookie file from the Windows directory (normally C:\WINDOWS\.erlang.cookie) to replace the user .erlang.cookie. The user cookie will be in the user's home directory (%HOMEDRIVE%%HOMEPATH%), e.g. C:\Documents and Settings\%USERNAME%\.erlang.cookie or C:\Users\%USERNAME%\.erlang.cookie (Windows Vista and later).
Shortcut command for @Lining's answer:
copy C:\Windows\.erlang.cookie %HOMEDRIVE%%HOMEPATH%\.erlang.cookie
Try creating a file called .erlang.cookie in your $HOME directory and putting a simple passphrase in it.
Then restart RabbitMQ and it might work. If it doesn't, then RabbitMQ is doing something to make sure you cannot put a system-wide cookie in place.
It worked for me after copying the .erlang.cookie file from C:\Windows into the C:\Documents and Settings\username folder, because as I understand it the cookies need to be the same.