Suspicious Activity in system.log (OS X / macOS)

A Mac user was having some clock errors and thought they had seen someone controlling their screen remotely via VNC. I went through the system.log, and most of this activity shows up at times when the laptop was supposedly off and unplugged (no battery) and the user was asleep.
System.log file here- https://ghostbin.com/paste/mcukf
These were the lines that interested me.
A Java connection apparently causing the clock to be off:
23:54:32 Ushas-Air Java Updater[531]: Original euid:501
Apr 24 23:54:32 Ushas-Air com.apple.xpc.launchd[1] (com.apple.preference.datetime.remoteservice[366]): Service exited due to signal: Killed: 9 sent by com.apple.preference.datetime.re[366]
Apr 24 23:54:32 Ushas-Air Java Updater[531]: Host name is javadl-esd-secure.oracle.com
Apr 24 23:54:32 Ushas-Air Java Updater[531]: Feed URL: https
Apr 24 23:54:32 Ushas-Air Java Updater[531]: Hostname check passed. Valid Oracle hostname
Apr 24 23:54:33 Ushas-Air com.apple.xpc.launchd[1] (com.apple.bsd.dirhelper[523]): Endpoint has been activated through legacy launch(3) APIs. Please switch to XPC or bootstrap_check_in(): com.apple.bsd.dirhelper
Apr 24 23:54:36 Ushas-Air java[541]: objc[541]: Class JavaLaunchHelper is implemented in both /Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin/java (0x1023604c0) and /Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/jli/./libjli.dylib (0x119327480). One of the two will be used. Which one is undefined.
Instances of IMRemoteURLConnectionAgent appearing:
Apr 25 00:14:11 Ushas-MacBook-Air com.apple.xpc.launchd[1] (com.apple.imfoundation.IMRemoteURLConnectionAgent): Unknown key for integer: _DirtyJetsamMemoryLimit
Apr 25 00:01:22 Ushas-MacBook-Air com.apple.xpc.launchd[1] (com.apple.imfoundation.IMRemoteURLConnectionAgent): Unknown key for integer: _DirtyJetsamMemoryLimit
Apr 25 00:05:57 Ushas-MacBook-Air com.apple.xpc.launchd[1] (com.apple.preferences.users.remoteservice[762]): Service exited due to signal: Killed: 9 sent by com.apple.preferences.users.remo[762]
Multiple cache deletes requested afterwards:
Apr 25 00:01:27 Ushas-MacBook-Air logd[57]: _handle_cache_delete_with_urgency(0x7fdf19412a60, 3, 0)
Apr 25 00:01:27 Ushas-MacBook-Air logd[57]: _handle_cache_delete_with_urgency(0x7fdf19412a60, 3, 0)
Apr 25 00:01:31 Ushas-MacBook-Air com.apple.preferences.icloud.remoteservice[700]: BUG in libdispatch client: kevent[EVFILT_MACHPORT] monitored resource vanished before the source cancel handler was invoked
Apr 25 00:01:33 Ushas-MacBook-Air logd[57]: _handle_cache_delete_with_urgency(0x7fdf19658620, 3, 0)
Apr 25 00:01:33 Ushas-MacBook-Air logd[57]: _volume_contains_cached_data(is /private/var/db/diagnostics/ in /) - YES
Apr 25 00:01:34 Ushas-MacBook-Air logd[57]: 239517600 bytes of purgeable space from log files
Apr 25 00:01:34 Ushas-MacBook-Air logd[57]: _purge_uuidtext only runs at urgency 0 (3)
Apr 25 00:01:34 Ushas-MacBook-Air logd[57]: 0 bytes of purgeable space from uuidtext files
And it appears to launch the FamilyCircle framework:
Apr 24 23:56:11 Ushas-Air com.apple.xpc.launchd[1] (com.apple.imfoundation.IMRemoteURLConnectionAgent): Unknown key for integer: _DirtyJetsamMemoryLimit
Apr 24 23:56:16 --- last message repeated 1 time ---
Apr 24 23:56:16 Ushas-Air familycircled[615]: objc[615]: Class FAFamilyCloudKitProperties is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircle (0x7fffbe466a60) and /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/Resources/familycircled (0x10aa01178). One of the two will be used. Which one is undefined.
Apr 24 23:56:16 Ushas-Air familycircled[615]: objc[615]: Class FAFamilyMember is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircle (0x7fffbe466880) and /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/Resources/familycircled (0x10aa01268). One of the two will be used. Which one is undefined.
Apr 24 23:56:16 Ushas-Air familycircled[615]: objc[615]: Class FAFamilyCircle is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircle (0x7fffbe466a10) and /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/Resources/familycircled (0x10aa01358). One of the two will be used. Which one is undefined.
Activity related to Find My Friends. The Mac's owner doesn't use Find My Friends or own an iPhone:
Apr 25 00:30:00 Ushas-MacBook-Air syslogd[40]: Configuration Notice:
ASL Module "com.apple.mobileme.fmf1.internal" sharing output destination "/var/log/FindMyFriendsApp/FindMyFriendsApp.asl" with ASL Module "com.apple.mobileme.fmf1".
Output parameters from ASL Module "com.apple.mobileme.fmf1" override any specified in ASL Module "com.apple.mobileme.fmf1.internal".
Apr 25 00:30:00 Ushas-MacBook-Air syslogd[40]: Configuration Notice:
ASL Module "com.apple.mobileme.fmf1.internal" sharing output destination "/var/log/FindMyFriendsApp" with ASL Module "com.apple.mobileme.fmf1".
Output parameters from ASL Module "com.apple.mobileme.fmf1" override any specified in ASL Module "com.apple.mobileme.fmf1.internal".
The keybagd log being shared with com.apple.mkb:
Apr 25 00:30:00 Ushas-MacBook-Air syslogd[40]: Configuration Notice:
ASL Module "com.apple.mkb.internal" sharing output destination "/private/var/log/keybagd.log" with ASL Module "com.apple.mkb".

Related

Failed to start Elasticsearch. Error opening log file '/gc.log': Permission denied

Dear StackOverflow community,
I was running Kibana/Elasticsearch without a problem until I installed a Kibana plugin. Then the service failed, and I noticed the underlying problem was that Elasticsearch had stopped. I tried several ways to fix it and eventually reinstalled everything, but Elasticsearch still refuses to launch, even from a fresh installation.
Installation on Debian 9 using apt install.
systemctl start elasticsearch.service
results in:
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
[0.000s][error][logging] Error opening log file '/gc.log': Permission denied
Full log from journalctl -xe:
-- Unit elasticsearch.service has begun starting up.
Feb 07 14:09:06 Debian-911-stretch-64-minimal kibana[576]: {"type":"log","#timestamp":"2020-02-07T13:09:06Z","tags":["warning","elasticsearch","admin"],"pid":576,"message":"Unable to revive connection: http://localhost:9200/"}
Feb 07 14:09:06 Debian-911-stretch-64-minimal kibana[576]: {"type":"log","#timestamp":"2020-02-07T13:09:06Z","tags":["warning","elasticsearch","admin"],"pid":576,"message":"No living connections"}
Feb 07 14:09:06 Debian-911-stretch-64-minimal kibana[576]: {"type":"log","#timestamp":"2020-02-07T13:09:06Z","tags":["warning","elasticsearch","admin"],"pid":576,"message":"Unable to revive connection: http://localhost:9200/"}
Feb 07 14:09:06 Debian-911-stretch-64-minimal kibana[576]: {"type":"log","#timestamp":"2020-02-07T13:09:06Z","tags":["warning","elasticsearch","admin"],"pid":576,"message":"No living connections"}
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: output:
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: [0.000s][error][logging] Error opening log file '/gc.log': Permission denied
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: [0.000s][error][logging] Initialization of output 'file=/var/log/elasticsearch/gc.log' using options 'filecount=32,filesize=64m' failed.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: error:
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: Error: Could not create the Java Virtual Machine.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: Error: A fatal exception has occurred. Program will exit.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:118)
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:86)
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:92)
Feb 07 14:09:06 Debian-911-stretch-64-minimal systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Feb 07 14:09:06 Debian-911-stretch-64-minimal systemd[1]: Failed to start Elasticsearch.
-- Subject: Unit elasticsearch.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit elasticsearch.service has failed.
The mentioned gc.log file was not in that folder, and the directory permissions were:
drwxr-s--- 2 elasticsearch elasticsearch 4096 Jan 15 13:20 elasticsearch
I created the file and played with its permissions until I had these:
-rwxrwxrwx 1 root elasticsearch 0 Feb 7 15:19 gc.log
...and even changed the ownership:
-rwxrwxrwx 1 root root 0 Feb 7 15:19 gc.log
But no success; I'm still having the same issue.
Thanks
Make sure you are running CMD as Administrator.
This error also happens if you are using Docker and running the container as a different user. You have to add the --group-add flag to the docker command or set the TAKE_FILE_OWNERSHIP environment variable as mentioned here.
Using docker-compose:
user: "1007:1007"
group_add:
  - 0
Using docker:
--group-add 0
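Put together, a minimal docker run sketch (the image tag, UID, and port mapping here are illustrative, not from the thread):
# run as a non-root user but keep the root group (GID 0) so the process
# can still write the image's config/log directories
docker run -d --name elasticsearch \
  --user 1007:1007 \
  --group-add 0 \
  -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:7.6.0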
Firstly, I don't know why the gc.log file was not present. Have you changed the logs folder path or something? The gc.log path can be set in the jvm.options file. By default, ES logs and Java garbage-collection logs are written into the logs folder inside the $ES_HOME directory.
From a user perspective, Elasticsearch can't be run as the root user. The ES directory details show you have an elasticsearch user created and are trying to run the cluster as that user.
The problem here can be solved by changing the permissions of the files inside the ES directory to the user they belong to. Right now the gc.log file is owned by the root user, so it cannot be accessed by the elasticsearch user.
Try this: sudo chown <user> <path/to/es/directory> -R
Here it becomes: sudo chown elasticsearch elasticsearch/ -R
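If it helps, you can verify ownership afterwards; the paths below assume the Debian package layout (config, data, and log directories):
ls -ld /etc/elasticsearch /var/lib/elasticsearch /var/log/elasticsearch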
If the issue still persists, check the jvm.options file to see whether it's all configured correctly. Unless you change the -Xloggc:logs/gc.log option, the gc.log won't be pushed to /var/log.
Feb 09 17:09:02 server elasticsearch[2199]: Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Your log says the option is given as file=/var/log/elasticsearch/gc.log. Correct any wrong configuration as per the documentation: https://www.elastic.co/guide/en/elasticsearch/reference/master/jvm-options.html
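For reference, the JDK 9+ GC-logging line that produces the option quoted in your error would look like this in jvm.options (a sketch reconstructed from the error message above; the 9-: prefix applies the flag only on Java 9 and later):
9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m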
sudo systemctl -l status elasticsearch.service
Returns this log:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/elasticsearch.service.d
└─override.conf
Active: failed (Result: exit-code) since Sun 2020-02-09 17:09:02 CET; 2min 48s ago
Docs: http://www.elastic.co
Process: 2199 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 2199 (code=exited, status=1/FAILURE)
Feb 09 17:09:02 server elasticsearch[2199]: Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Feb 09 17:09:02 server elasticsearch[2199]: Error: Could not create the Java Virtual Machine.
Feb 09 17:09:02 server elasticsearch[2199]: Error: A fatal exception has occurred. Program will exit.
Feb 09 17:09:02 server elasticsearch[2199]: at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:118)
Feb 09 17:09:02 server elasticsearch[2199]: at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:86)
Feb 09 17:09:02 server elasticsearch[2199]: at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Feb 09 17:09:02 server elasticsearch[2199]: at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:92)
Feb 09 17:09:02 server systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Feb 09 17:09:02 server systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Feb 09 17:09:02 server systemd[1]: Failed to start Elasticsearch.
At this point I'm doing a fresh install. I haven't been able to find the solution, and I need to continue working...

Is there any solution to the XFS lockup in linux?

Apparently there is a known problem with XFS locking up the kernel/processes and corrupting volumes under heavy traffic.
Some web pages talk about it, but I was not able to figure out which pages are recent and might have a solution.
My company's deployments have Debian with kernel 3.4.107, xfsprogs 3.1.4, and large storage arrays.
We have large data (PB) and high throughput (GB/sec) using async IO to several large volumes.
We constantly experience these unpredictable lockups on several systems.
Kernel logs/dmesg show something like the following:
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986515] INFO: task Sr2dReceiver-5:46829 blocked for more than 120 seconds.
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986518] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986520] Sr2dReceiver-5 D ffffffff8105b39e 0 46829 7284 0x00000000
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986524] ffff881e71f57b38 0000000000000082 000000000000000b ffff884066763180
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986529] 0000000000000000 ffff884066763180 0000000000011180 0000000000011180
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986532] ffff881e71f57fd8 ffff881e71f56000 0000000000011180 ffff881e71f56000
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986536] Call Trace:
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986545] [<ffffffff814ffe9f>] schedule+0x64/0x66
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986548] [<ffffffff815005f3>] rwsem_down_failed_common+0xdb/0x10d
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986551] [<ffffffff81500638>] rwsem_down_write_failed+0x13/0x15
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986555] [<ffffffff8126b583>] call_rwsem_down_write_failed+0x13/0x20
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986558] [<ffffffff814ff320>] ? down_write+0x25/0x27
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986572] [<ffffffffa01f29e0>] xfs_ilock+0xbc/0x12e [xfs]
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986580] [<ffffffffa01eec71>] xfs_rw_ilock+0x2c/0x33 [xfs]
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986586] [<ffffffffa01eec71>] ? xfs_rw_ilock+0x2c/0x33 [xfs]
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986593] [<ffffffffa01ef234>] xfs_file_aio_write_checks+0x41/0xfe [xfs]
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986600] [<ffffffffa01ef358>] xfs_file_buffered_aio_write+0x67/0x179 [xfs]
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986603] [<ffffffff8150099a>] ? _raw_spin_unlock_irqrestore+0x30/0x3d
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986611] [<ffffffffa01ef81d>] xfs_file_aio_write+0x163/0x1b5 [xfs]
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986614] [<ffffffff8106f1af>] ? futex_wait+0x22c/0x244
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986619] [<ffffffff8110038e>] do_sync_write+0xd9/0x116
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986622] [<ffffffff8150095f>] ? _raw_spin_unlock+0x26/0x31
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986634] [<ffffffff8106f2f1>] ? futex_wake+0xe8/0xfa
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986637] [<ffffffff81100d1d>] vfs_write+0xae/0x10a
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986639] [<ffffffff811015b3>] ? fget_light+0xb0/0xbf
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986642] [<ffffffff81100dd3>] sys_pwrite64+0x5a/0x79
2016 Mar 24 04:42:34 hmtmzhbgb01-ssu-1 kernel: [2358750.986645] [<ffffffff81506912>] system_call_fastpath+0x16/0x1b
Lockups leave the system in a bad state. The processes in D state that hang cannot even be killed with signal 9.
The only way to resume operations is to reboot, repair XFS and then the system works for another while.
But occasionally after the lockup we cannot even repair some volumes, as they get totally corrupted and we need to rebuild them with mkfs.
As a last resort, we now run xfs_repair periodically, and this has reduced the frequency of lockups and data loss to a certain extent.
But the incidents still occur often enough, so we need some solution.
I was wondering if there is a solution for this with kernel 3.4.107, e.g. some patch that we may apply.
Due to the large number of deployments and other software issues, we cannot upgrade the kernel in the near future.
However, we are working towards updating our applications so that we can run on kernel 3.16 in our next releases.
Does anyone know if this XFS lockup problem was fixed in 3.16?
Some people have experienced this, but it was not a problem with XFS: it was because the kernel was unable to flush dirty pages within the 120-second window. Have a look here, but please check the default numbers they cite against your own system.
http://blog.ronnyegner-consulting.de/2011/10/13/info-task-blocked-for-more-than-120-seconds/
and here
http://www.blackmoreops.com/2014/09/22/linux-kernel-panic-issue-fix-hung_task_timeout_secs-blocked-120-seconds-problem/
You can see what your dirty cache ratio is by running this:
sysctl -a | grep dirty
or
cat /proc/sys/vm/dirty_ratio
The best write up on this I could find is here...
https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
Essentially you need to tune your application to make sure that it can write the dirty buffers to disk within the time period, or change the timer period, etc.
You can also see some interesting parameters as follows:
sysctl -a | grep hung
You could increase the timeout permanently using /etc/sysctl.conf as follows...
kernel.hung_task_timeout_secs = 300
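For example, a conservative starting point in /etc/sysctl.conf might look like this (the values are illustrative, not recommendations; apply with sudo sysctl -p):
# start background writeback earlier and cap dirty memory so flushes
# complete inside the hung-task window
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
kernel.hung_task_timeout_secs = 300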
Does anyone know if this XFS lockup problem was fixed in 3.16?
It is said so in A Short Guide to Kernel Debugging:
Searching for "xfs splice deadlock" turns up an email thread from 2011 that describes this problem. However, bisecting the kernel source repository shows that the bug wasn't really addressed until April 2014 (8d02076), for release in Linux 3.16.

Bots can't be created from Xcode Server hosted repository

After creating a new repository on my Xcode Server, I can't access it over SSH, but I can perform both the git clone command and the git push command using the HTTPS protocol.
Furthermore, I encounter the following error when I try to create an Xcode Bot:
Oct 25 12:43:46 mokii.com xcsbuildd[99898]: XCSCheckoutIntegrationStep.m:160 [XCSCheckoutIntegrationStep logUnderlyingErrorForError:]
[SourceControl, Error] SSL error: received early EOF (-1)
Oct 25 12:43:46 mokii.com xcsbuildd[99898]: XCSCheckoutIntegrationStep.m:119 [XCSCheckoutIntegrationStep enqueueOperations]
[SourceControl, Error] Error checkout/clone Error Domain=com.apple.dt.SourceControlErrorDomain Code=-1 "SSL error: received early EOF (-1)" UserInfo=0x7fcf244d3cd0 {com.apple.dt.sourcecontrol.UnderlyingErrorString=SSL error: received early EOF (-1), NSLocalizedDescription=SSL error: received early EOF (-1)}
Oct 25 12:43:46 mokii.com xcsbuildd[99898]: XCSIntegrationExecutor.m:229 [XCSIntegrationExecutor integrationStep:didFinishWithError:result:]
[BuildService, Error] XCSCheckoutIntegrationStep finished integration with an error: Error Domain=com.apple.dt.SourceControlErrorDomain Code=-1 "SSL error: received early EOF (-1)" UserInfo=0x7fcf23e117f0 {com.apple.dt.sourcecontrol.UnderlyingErrorString=SSL error: received early EOF (-1), NSLocalizedDescription=SSL error: received early EOF (-1), XCSErrorFixItType=scm-failure}
When I try to execute the git clone command against the hosted repository in Terminal.app, another error occurs:
larryhou:repo larryhou$ git clone ssh://jason@mokii.com/git/HostedRepo.git
Cloning into 'HostedRepo'...
Password:
fatal: '/git/HostedRepo.git' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
And I can find additional information in the Console.app:
Marker - Oct 25, 2014, 12:25:13 PM
Oct 25 12:25:15 --- last message repeated 1 time ---
Oct 25 12:25:15 mokii com.apple.xpc.launchd[1] (com.openssh.sshd.4EA7979A-127B-452C-832D-3A9A7FCB5A04): Service instances do not support events yet.
Oct 25 12:25:16 mokii.com kdc[380]: AS-REQ jason@MOKII.COM from 127.0.0.1:62481 for krbtgt/MOKII.COM@MOKII.COM
Oct 25 12:25:16 --- last message repeated 1 time ---
Oct 25 12:25:16 mokii.com kdc[380]: Client sent patypes: REQ-ENC-PA-REP
Oct 25 12:25:16 mokii.com kdc[380]: user has no SRP keys
Oct 25 12:25:16 mokii.com kdc[380]: Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ
Oct 25 12:25:16 mokii.com kdc[380]: AS-REQ jason@MOKII.COM from 127.0.0.1:58943 for krbtgt/MOKII.COM@MOKII.COM
Oct 25 12:25:16 --- last message repeated 1 time ---
Oct 25 12:25:16 mokii.com kdc[380]: Client sent patypes: ENC-TS, REQ-ENC-PA-REP
Oct 25 12:25:16 mokii.com sandboxd[508] ([380]): kdc(380) deny file-read-data /private/etc/krb5.conf
Oct 25 12:25:16 mokii.com kdc[380]: ENC-TS pre-authentication succeeded -- jason@MOKII.COM
Oct 25 12:25:16 mokii.com kdc[380]: DSUpdateLoginStatus: Unable to synchronize login time for jason: 77009
Oct 25 12:25:17 mokii.com kdc[380]: Client supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96
Oct 25 12:25:17 mokii.com kdc[380]: Requested flags: forwardable
Oct 25 12:25:17 mokii.com kdc[380]: TGS-REQ jason@MOKII.COM from 127.0.0.1:60555 for host/mokii.com@MOKII.COM [canonicalize, forwardable]
Oct 25 12:25:17 mokii.com kdc[380]: TGS-REQ jason@MOKII.COM from 127.0.0.1:59504 for host/mokii.com@MOKII.COM [forwardable]
Oct 25 12:25:17 mokii.com kdc[380]: TGS-REQ jason@MOKII.COM from 127.0.0.1:49478 for ldap/mokii.com@MOKII.COM [canonicalize, forwardable]
Oct 25 12:25:17 mokii.com kdc[380]: TGS-REQ jason@MOKII.COM from 127.0.0.1:58173 for ldap/mokii.com@MOKII.COM [forwardable]
Oct 25 12:25:17 mokii.com sshd[61715]: Accepted keyboard-interactive/pam for jason from 192.168.2.3 port 58668 ssh2
Oct 25 12:25:17 mokii.com sshd[61722]: Received disconnect from 192.168.2.3: 11: disconnected by user
Oct 25 12:25:17 mokii com.apple.xpc.launchd[1] (com.openssh.sshd.4EA7979A-127B-452C-832D-3A9A7FCB5A04[61715]): Service exited with abnormal code: 255
You have got the SSH URL wrong. You cannot use SSH simply by swapping the protocol and leaving the rest of the URL in the same form as the HTTPS one. Here is a step-by-step guide to using SSH with Git on Xcode Server and setting up the bots:
http://ikennd.ac/blog/2013/10/xcode-bots-common-problems-and-workarounds/
One more that is a bit newer and might be more accurate:
https://honzadvorsky.com/articles/2015-08-04-xcs_tutorials_1_getting_started/
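For example (a hypothetical URL, assuming the default OS X Server layout described in those posts, where hosted repositories live under a full server-side path rather than /git):
git clone ssh://jason@mokii.com/Library/Server/Xcode/Repositories/git/HostedRepo.git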

What's the safest way to shut down MongoDB when running as a Windows Service?

I have a single instance of MongoDB 2.4.8 running on Windows Server 2012 R2. MongoDB is installed as a Windows Service. I have journalling enabled.
The MongoDB documentation suggests that the MongoDB service should just be shut down via the Windows Service Control Manager:
net stop MongoDB
When I did this recently, the following was logged and I ended up with a non-zero byte mongod.lock file on disk. (I used the --repair option to fix this but it turns out this probably wasn't necessary as I had journalling enabled.)
Thu Nov 21 11:08:12.011 [serviceShutdown] got SERVICE_CONTROL_STOP request from Windows Service Control Manager, will terminate after current cmd ends
Thu Nov 21 11:08:12.043 [serviceShutdown] now exiting
Thu Nov 21 11:08:12.043 dbexit:
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: going to close listening sockets...
Thu Nov 21 11:08:12.043 [serviceShutdown] closing listening socket: 1492
Thu Nov 21 11:08:12.043 [serviceShutdown] closing listening socket: 1500
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: going to flush diaglog...
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: going to close sockets...
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: waiting for fs preallocator...
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: lock for final commit...
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: final commit...
Thu Nov 21 11:08:12.043 [conn1333] end connection 127.0.0.1:51612 (18 connections now open)
Thu Nov 21 11:08:12.043 [conn1331] end connection 127.0.0.1:51610 (18 connections now open)
...snip...
Thu Nov 21 11:08:12.043 [conn1322] end connection 10.1.2.212:53303 (17 connections now open)
Thu Nov 21 11:08:12.043 [conn1337] end connection 127.0.0.1:51620 (18 connections now open)
Thu Nov 21 11:08:12.839 [serviceShutdown] shutdown: closing all files...
Thu Nov 21 11:08:14.683 [serviceShutdown] Progress: 5/163 3% (File Closing Progress)
Thu Nov 21 11:08:16.012 [serviceShutdown] Progress: 6/163 3% (File Closing Progress)
...snip...
Thu Nov 21 11:08:52.030 [serviceShutdown] Progress: 143/163 87% (File Closing Progress)
Thu Nov 21 11:08:54.092 [serviceShutdown] Progress: 153/163 93% (File Closing Progress)
Thu Nov 21 11:08:55.405 [serviceShutdown] closeAllFiles() finished
Thu Nov 21 11:08:55.405 [serviceShutdown] journalCleanup...
Thu Nov 21 11:08:55.405 [serviceShutdown] removeJournalFiles
Thu Nov 21 11:09:05.578 [DataFileSync] ERROR: Client::shutdown not called: DataFileSync
The last line is my main concern.
I'm also interested in how MongoDB is able to take longer to shut down than Windows normally allows for a service stop. At what point is it safe to shut down the machine without checking the log file?
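For comparison, the other clean-shutdown path I'm aware of is the shutdownServer command issued locally from the mongo shell (a sketch; I haven't verified it behaves any differently from the SCM stop):
mongo admin --eval "db.shutdownServer()"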

Gwan stops working every night

I have an Arch 64-bit VPS on DigitalOcean. I installed G-WAN and ran it in daemon mode. It stops running every midnight.
Here is the log file:
[Wed Apr 24 06:10:28 2013 GMT] memory footprint: 3.78 MiB
[Thu, 25 Apr 2013 00:00:19 GMT] * child abort(8) coredump
[Thu, 25 Apr 2013 00:00:19 GMT] * child abort(8) coredump
[Thu, 25 Apr 2013 00:00:19 GMT] * child abort(8) coredump
[Thu, 25 Apr 2013 00:00:19 GMT] * child died 3 times within 3 seconds
[Thu Apr 25 12:39:39 2013 GMT] memory footprint: 3.77 MiB.
[Thu Apr 25 12:39:56 2013 GMT] loaded maintenance script /opt/gwan_linux64-bit/0.0.0.0_8080/#0.0.0.0/csp/crash.c 43.14 KiB MD5:820cf6b4-2152b838-08a13fcb-5f0dc4be
[Fri, 26 Apr 2013 00:00:10 GMT] * child abort(8) coredump
[Fri, 26 Apr 2013 00:00:10 GMT] * child abort(8) coredump
[Fri, 26 Apr 2013 00:00:10 GMT] * child abort(8) coredump
[Fri, 26 Apr 2013 00:00:10 GMT] * child died 3 times within 3 seconds
This problem does not happen on all platforms, and so far all the user reports we received involved hypervisors, which alter CPU and OS behavior in erratic and undocumented ways (not to mention the additional bugs they inject into the system).
UPDATE
This new problem, in 4-year-old code that has worked fine so far, is a platform issue for which we have found a workaround, to be published with the next release in a few weeks.
