How do you restart Samba on OS X 10.6.7? I've looked everywhere and can't find any docs.
The reason for asking is that on occasion Samba just hangs and I have to forcibly restart the Mac to fix it (the restart command just hangs the Mac).
On a terminal, sudo killall smbd will terminate all smbd instances.
They will be respawned on demand according to my tests (i.e. kill all smbd, then try to browse the services with smbclient -L hostname, and the daemons are restarted automagically).
You can check for smbd processes easily with ps uax | grep smbd
Edit: if you really need to assassinate blocked smbd processes, use killall -9 smbd (which is not recommended; see the section on signals).
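Putting the above together, here is a minimal sketch of the kill-and-verify sequence (assuming the stock launchd-managed Samba on OS X 10.6; hostname is a placeholder for your machine's name):
$ sudo killall smbd              # ask all smbd instances to exit
$ smbclient -L hostname          # browsing the shares respawns smbd on demand
$ ps aux | grep '[s]mbd'         # the [s] trick keeps grep itself out of the output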
Related
I want to set up a pseudo-distributed HBase environment on my macOS Sierra (10.12.4) machine, which requires ssh to be installed and to allow logging in with ssh localhost without a password. But sometimes I get an error when I use ssh to log in. That is the background; the actual question is: where can I find the debug logs of sshd, so I can figure out why the login fails?
As far as I know, macOS already has sshd installed and uses launchd to manage it, and I know one way to output debug logs is sshd -E /var/log/sshd.log. But when I reviewed the /etc/ssh/sshd_config configuration, I found these two lines:
#SyslogFacility AUTH
#LogLevel INFO
I guess these two lines are used to configure the debug mode, so I removed the # in front of them, set LogLevel to DEBUG3, and then restarted sshd:
$ launchctl unload -w /System/Library/LaunchDaemons/ssh.plist
$ launchctl load -w /System/Library/LaunchDaemons/ssh.plist
Then I set the log path in /etc/syslog.conf:
auth.*<tab>/var/log/sshd.log
(<tab> here stands for a tab character), and reloaded the config:
$ killall -HUP syslogd
But no sshd.log file appears in the /var/log folder when I execute ssh localhost. I also tried configuring /etc/asl.conf:
> /var/log/sshd.log format=raw
? [= Facility auth] file sshd.log
The result was the same. Can someone help me?
Apple, as usual, decided to re-invent the wheel.
In super-user window
# log config --mode "level:debug" --subsystem com.openssh.sshd
# log stream --level debug 2>&1 | tee /tmp/logs.out
In another window
$ ssh localhost
$ exit
Back in the super-user window
^C (interrupt)
# grep sshd /tmp/logs.out
2019-01-11 08:53:38.991639-0500 0x17faa85 Debug 0x0 37284 sshd: (libsystem_network.dylib) sa_dst_compare_internal <private>#0 < <private>#0
2019-01-11 08:53:38.992451-0500 0xb47b5b Debug 0x0 57066 socketfilterfw: (Security) [com.apple.securityd:unixio] open(/usr/sbin/sshd,0x0,0x1b6) = 12
...
...
In super-user window, restore default sshd logging
# log config --mode "level:default" --subsystem com.openssh.sshd
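If you would rather query the persisted unified log after the fact instead of streaming it live, something like this should also work (a sketch; adjust the time window as needed):
$ sudo log show --predicate 'process == "sshd"' --info --debug --last 30m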
You can find it in /var/log/system.log. It is better to filter by "sshd":
cat /var/log/system.log | grep sshd
Try this
cp /System/Library/LaunchDaemons/ssh.plist /Library/LaunchDaemons/ssh.plist
Then
vi /Library/LaunchDaemons/ssh.plist
And add your -E as shown below
<array>
<string>/usr/sbin/sshd</string>
<string>-i</string>
<string>-E</string>
<string>/var/log/system.log</string>
</array>
And lastly, restart sshd. Now you will see the sshd logs in /var/log/system.log:
launchctl unload /System/Library/LaunchDaemons/ssh.plist && launchctl load -w /Library/LaunchDaemons/ssh.plist
I also had an ssh issue that I wanted to debug further and was not able to figure out how to get the sshd debug logs to appear in any of the usual places. I resorted to editing the /System/Library/LaunchDaemons/ssh.plist file to add a -E <log file location> parameter (/tmp/sshd.log, for example). I also edited /etc/ssh/sshd_config to change the LogLevel. With these changes, I was able to view the more verbose logs in the specified log file.
I don't have much experience with macOS, so I'm sure there is a more correct way to configure this, but for lack of a better approach this got me the logs I was looking for.
According to Apple's developer website, logging behavior has changed in macOS 10.12 and up:
Important:
Unified logging is available in iOS 10.0 and later, macOS 10.12 and later, tvOS 10.0 and later, and watchOS 3.0 and later, and supersedes ASL (Apple System Logger) and the Syslog APIs. Historically, log messages were written to specific locations on disk, such as /etc/system.log. The unified logging system stores messages in memory and in a data store, rather than writing to text-based log files.
Unfortunately, unless someone comes up with a pretty clever way to extract the log entries from memory or this mysterious "data store", I think we're SOL :/
There are some sshd log entries in
/var/log/system.log
for example
Apr 26 19:00:11 mac-de-mamie com.apple.xpc.launchd[1] (com.openssh.sshd.7AAF2A76-3475-4D2A-9EEC-B9624143F2C2[535]): Service exited with abnormal code: 1
Not very instructive. I doubt more can be obtained. LogLevel VERBOSE and LogLevel DEBUG3 in sshd_config do not help.
According to man sshd_config:
"Logging with a DEBUG level violates the privacy of users and is not recommended."
By the way, I relaunched sshd not with launchctl but via System Preferences > Sharing, ticking Remote Login.
There, I noticed the option: Allow access for ...
I suspect this setting lives OUTSIDE /etc/ssh/sshd_config
(easy to check, but I have no time).
Beware that Mac OS X is not Unix: Apple developers can do many strange things behind the scenes without any care for us command-line users.
I am trying to force-shutdown multiple Mac computers every night, all of which are connected to a server. I am unsure whether the best way to do this is running a sudo shutdown command through a for loop over IP addresses, ssh'ing into each machine, or some other method. Any advice would be appreciated!
I don't know any better method than ssh.
Generate your ssh key and install it in the root account of each of those Macs, in the file /var/root/.ssh/authorized_keys2.
Ensure each of your Macs has the line "PermitRootLogin yes" uncommented in the file /etc/ssh/sshd_config; if not, change it and relaunch sshd.
And finally, use ssh to run the shutdown command.
Here is the command line in the bash shell:
for host in host01 host02 host03; do ssh root@$host "shutdown -h now"; done
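If the list of machines grows, a small script may be easier to maintain. A rough sketch, assuming the root SSH keys from the steps above are installed and the hostnames (or IPs) are listed one per line in a file called hosts.txt (the file name and timeout are just illustrative choices):
#!/bin/bash
# Shut down every Mac listed in hosts.txt, one hostname or IP per line.
while read -r host; do
    # -n keeps ssh from swallowing the rest of the host list on stdin
    ssh -n -o ConnectTimeout=5 "root@$host" "shutdown -h now" \
        || echo "failed to reach $host" >&2
done < hosts.txt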
I have set up an environment on AWS EC2 based on Ubuntu 14.04 and configured vncserver on it. After everything is done, I am able to connect to the EC2 instance with VNC Viewer and see the desktop. However, after a period of idle time in vncviewer, the connection is dropped and I get the error
"Too many authentication failures"
After I restart the vncserver by ssh'ing into the EC2 instance, I am able to use vncviewer to connect to it again. Is there a way to avoid the error so that the connection is not dropped?
I faced the same scenario. For me this happened because multiple sessions of vncserver were running on my server. Do the following steps...
Step 1: Check for multiple VNC sessions running on your server.
You will see multiple process IDs running. (If not, still proceed to the next steps)
$ pgrep vnc
72063
119177
This is because you have run the vncserver command multiple times on the server.
Step 2: Kill all processes from step 1
$ kill 72063
$ kill 119177
Step 3: Restart the VNC session
$ vncserver
Step 4: Verify whether it is working.
$ nc 104.197.91.140 5901
// alternatively you can use telnet
$ telnet 104.197.91.140 5901
// the response should look like this
RFB 003.008
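The same steps, collapsed into one rough shell sketch (the IP address is the example one from above; adjust the display/port for your setup):
# kill any existing VNC server processes (steps 1-2)
for pid in $(pgrep vnc); do kill "$pid"; done
# start a fresh session (step 3)
vncserver
# verify the listener is back on display :1 / port 5901 (step 4)
nc -z 104.197.91.140 5901 && echo "VNC is listening"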
Simply try loading the VNC viewer session again
You might try these commands:
# echo $DISPLAY
# ps -aef | grep sesman
# netstat -natp | grep vnc
If memory serves, once you get to more than ten no-longer-established VNC sessions, some VNC clients no longer allow additional connections. In this case, you need to kill the vnc processes whose connections no longer have the ESTABLISHED status.
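If you need to clean those up, the vncserver wrapper itself can usually do it. A sketch, assuming a TigerVNC/TightVNC-style vncserver and :2 as an example stale display:
$ vncserver -list        # list this user's sessions (TigerVNC; some builds lack -list)
$ vncserver -kill :2     # kill a specific stale display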
How can I connect to my OVH VPS server from OS X?
I've tried to connect using Chicken of the VNC, but the connection failed.
Do any OVH VPS servers have remote desktop connection available?
Thank you for your help!
First, you can detach your scripts from your terminal: run them in the background (&) with nohup, from a bash shell launched inside your initial session. When you close your session, your scripts will be re-parented to init:
$ bash
$ nohup my_command &
$ exit
For the second part of the question:
Chicken is not a terminal???
What do you want to do?
If you want to launch GUI programs, use ssh -X and set up X11 on both your Mac and your server (there are many posts about this).
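For example, a sketch (user@your-server is a placeholder; it assumes an X server such as XQuartz on the Mac and X11Forwarding enabled in the server's sshd_config):
$ ssh -X user@your-server
$ xclock &               # any X11 program; its window should appear on the Mac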
I've gotten sick of how many steps it takes me to get started in the morning. Yes it only takes me a few minutes to start up my whole environment, but I'd really rather just run a single command on boot-up and be ready to go immediately.
I'm writing a Rails app connected to SQL Server. To develop for it, I have a local copy of the DB that I run on a VM. My manual process goes like this:
Run VirtualBox.
Start the VM.
When the VM is done booting:
Open terminal
Run `rails s`
When rails is done starting:
open browser
navigate to localhost:3000 and start developing
Run Sublime
I'd love to do this in one script:
VirtualBox Windows7 &
sublime &
google-chrome &
But I can't figure out how to run this command only once the VM is done booting:
gnome-terminal --working-directory=git/my_project --tab -e 'rails s' --tab -e 'git status'
Also, it'd be nice (but not necessary) to have chrome start after rails s has succeeded.
Is this even possible?
I'm not opposed to polling, but it feels like this is something VirtualBox should be able to do a bit more naturally.
EDIT
From Comment:
I'm using Host-Only network with two Bridged Interfaces (one for wireless and one for wired) available. (It allows me to use the VM whether or not I'm connected to a network, and lets me freely switch between wired and wireless without noticing the difference).
Here is how I would do it:
In the VM, create a script which finds the default gateway and keeps pinging it, and add it to the user's startup. (This needs parsing of ipconfig /all, which can be done with VBScript or Python.)
On the host, look at the network interface between the host and the VM. Find the default gateway on the host (parse the route -n output in a bash script). Since both use the same physical interface, the gateway will be the same (assuming NAT and ONE physical interface). Use tcpdump to wait for the ping packets to the gateway.
"Default gateway" was chosen because that was something host & VM can find out independent of each other. Other alternative was to hard-code host's address.
Once tcpdump on the host exits, it means that the VM is alive and has booted up to the Windows desktop.
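A rough host-side sketch of that idea (eth0, the packet count, and the gateway parsing are assumptions to adapt to your setup):
#!/bin/bash
# Find the host's default gateway from the routing table.
GW=$(route -n | awk '$1 == "0.0.0.0" {print $2; exit}')
# Block until a few ICMP echo requests to the gateway are captured,
# i.e. the guest's startup ping script is running.
sudo tcpdump -i eth0 -c 3 "icmp and dst host $GW" > /dev/null 2>&1
echo "VM is up; continue with the rest of the workflow"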
I looked into this line of inquiry before, and I think Devil's Pie is the closest you can get to setting that up:
http://burtonini.com/blog/computers/devilspie
You could try starting with this (VBoxManage startvm):
How to automatically start and shut down VirtualBox machines?
and then look at some working scripts to add to your init.d and/or rc.local, to finish the rest of the job in order once your VM is up:
Get To Know Linux: The /etc/init.d Directory
I needed to orchestrate something similar. I'm using a Windows VM (guest) as a proxy (it runs a Windows-only corporate VPN client) for my Linux laptop (host). The approach is to fully automate the guest and wait until it's ready:
The host must have no funky routes (yet)
The VM starts and runs a powershell script (via Windows Task Scheduler, run-on-startup) that connects the VPN client and sets up ICS (Internet Connection Sharing, basically routing).
The host now adds funky routes that send some traffic via the VM's host-only interface. If it added these routes too soon, step 2 would not work.
The VM also runs squid (http proxy) and its port is NAT port forwarded from the host, so localhost:3128 actually goes to the guest. So a curl using this proxy goes to the corporate network and indicates whether the guest is fully up and connected.
(Squid is also useful as a backup to this complicated but very convenient mechanism, I can still ssh via corkscrew, etc)
So, I run this script on the host (simplified version shown):
#!/bin/bash
VM=vm #Name of the Virtual Machine
SCRIPT_DIR=/some/dir
PROXY_ADDRESS=localhost:3128
REMOTE_CURL_HOST=any.corporate.hostname
function waitloop() {
    echo -n "Waiting to hear from $REMOTE_CURL_HOST "
    while ! curl -s -m 5 --proxy $PROXY_ADDRESS $REMOTE_CURL_HOST > /dev/null ; do
        echo -n .
        sleep 10
    done
    echo "!"
}
# a separate script that takes down my routes, you may not need this.
bash $SCRIPT_DIR/network-config-vboxnet0.sh down
# error is OK if it's already running
vboxmanage startvm $VM
waitloop && bash $SCRIPT_DIR/network-config-vboxnet0.sh up && echo "Completed"
Essentially, the script waits until curl works through the VM.