Having trouble with the rsync version on Mac when using it with lsyncd - macos

I am trying to synchronize a folder on my Mac with one on an Ubuntu Server 16.06 system that I built. The problem is with the rsync version. Here is what I did: I created a file in /var/log/lsyncd/ and called it lsyncd.conf.lua. The following are its contents:
settings {
    logfile = "/var/log/lysncd/lsyncd.log", -- Sets the log file
    statusFile = "/var/log/lsyncd/lsyncd-status.log" -- Sets the status log file
}
sync {
    default.rsyncssh, -- Uses the rsyncssh defaults. Take a look here: https://github.com/axkibe/lsyncd/blob/master/default-rsyncssh.lua
    delete = false, -- Doesn't delete files on the remote host even though they're deleted at the source. This might be beneficial for some, not for others.
    source = "/Users/alixchristakaze/Desktop", -- Your source directory to watch
    host = "", -- The remote host (use hostname or IP)
    targetdir = "", -- The target dir on the remote host; keep in mind this is an absolute path
    rsync = {
        binary = "/usr/local/bin/rsync",
        archive = true, -- Use the archive flag in rsync
        perms = true, -- Keep the permissions
        owner = true, -- Keep the owner
        _extra = {"-a"}, -- Sometimes permissions and owners aren't copied correctly, so _extra can be used for any rsync flag
    },
    maxProcesses = 1 -- We only want to use a maximum of 1 rsync process at a time
}
I get this error when I run this command: lsyncd /var/log/lsyncd/lsyncd.conf.lua
Warn: Using /dev/fsevents which is considered an OSX internal interface.
Warn: Functionality might break across OSX versions (This is for 10.5.X)
Warn: A hanging Lsyncd might cause Spotlight/Timemachine doing extra work.
Error: Cannot access /dev/fsevents monitor! (1:Operation not permitted)
As you can see, I added the line binary = "/usr/local/bin/rsync", hoping it would tell the system to use the rsync I installed through Homebrew. However, I am getting the error above.
I have the latest macOS version, 10.12.3, and my rsync version is 3.1.2. What am I doing wrong?

Related

File.exists() sometimes wrong on Windows 10 with Java 8.191

I have a bunch of unit tests which contain code like:
File file = new File("src/main/java/com/pany/Foo.java");
assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
This test is suddenly failing when running it with Maven Surefire and -DforkCount=0. With -DforkCount=1, it works.
Things I tried so far:
The file does exist. Windows Explorer, command line (copy & paste), text editors, Cygwin can all find it and show the contents. That's why I think it's not a permission problem.
It's not modified by the unit tests or anything else. Git shows no modifications for the last two months.
I've checked the file system, it's clean.
I've tried other versions of Java 8, namely 8u171 and 8u181. Same problem.
I've run Maven from within Cygwin and the command prompt. Same result.
Reboot :-) No effect :-(
More details:
When I see this problem, I also start seeing "The forked VM terminated without properly saying goodbye. VM crash or System.exit called?" in other projects. That's why I tried forkCount=0, which often helps in this case to find out why the forked VM crashed.
This has started recently, maybe around the October 2018 update of Windows 10. Before that, the builds were rock solid for about three years. My machine was switched to Windows 10 late 2017, I think.
I'm using Maven 3.6 and can't easily try an older version because of an important bug that was fixed with it. I did see the VM crash above with Maven 3.5.2 as well.
It's always the same files which fail (so it's stable).
ulimit (from Cygwin) says:
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 2032
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited
I'm wondering if the "open files" limit of 256 only applies to Cygwin processes or whether that's something Cygwin reads from Windows.
Let me know if you need anything else. I'm running out of ideas for what else to try.
Update 1
Bernhard asked me to print absolute names. My answer was that I was already using absolute names but I was wrong. The actual code was:
File file = new File("src/main/java/com/pany/Foo.java");
if (!file.exists()) {
log.debug("Missing file {}", file.getAbsolutePath());
... fail ...
}
... do something with file...
I have now changed this to:
File file = new File("src/main/java/com/pany/Foo.java").getAbsoluteFile();
if (!file.exists()) {
log.debug("Missing file {}", file);
}
and that fixed the problem. I just can't understand why.
When Maven creates a forked VM to run the tests with Surefire, it can set that VM's current directory. So it makes sense that the tests work when forked but fail when running in the same VM (since that VM was created in the root folder of the multi-module build). But why does making the path absolute before the call to exists() fix the issue?
Some background: each process has a notion of a "current directory". When a process is started from the command line, it's the directory in which the command was executed. When it is started from the UI, it's usually the folder in which the program (the .exe file) resides.
In the command prompt or Bash, you can change this folder with cd for the process that runs the shell.
When Maven builds a multi-module project, it has to change this for each module (so that the relative path src/main/java/ always points to the right place). Unfortunately, Java doesn't have a "set current directory" method anywhere. You can only specify one when creating a new process, and you can modify the system property user.dir.
That's why new File("a").exists() and new File("a").getAbsoluteFile().exists() work differently.
The latter will use new File(System.getProperty("user.dir"), "a") to determine the path, while the former uses the Windows API function _wgetdcwd (docs), which in turn reads a field of the Windows process to get the current directory. In our case, that's always the folder in which Maven was originally started, because Java doesn't update that process field when someone changes user.dir, and Maven can only change the property to "simulate" changing folders.
WinNTFileSystem_md.c calls fileToNTPath(). That's defined in io_util_md.c and calls pathToNTPath(). For relative paths, it will call currentDirLength() which calls currentDir() which calls _wgetdcwd().
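To make that difference concrete, here is a minimal sketch (the module path is hypothetical, and changing user.dir at runtime is exactly the unsupported trick described above) contrasting the two calls: on Windows, the relative lookup goes through the process's real working directory, while the absolute one goes through user.dir:
import java.io.File;

public class CwdDemo {
    public static void main(String[] args) {
        // Simulate what Surefire does in a non-forked run: only the property changes;
        // the real process working directory stays where the JVM was started.
        System.setProperty("user.dir", "C:/work/project/some-module"); // hypothetical path

        File relative = new File("src/main/java/com/pany/Foo.java");

        // exists() on a relative File resolves against the process's current
        // directory (via _wgetdcwd on Windows), which setProperty did not touch.
        System.out.println("relative.exists() = " + relative.exists());

        // getAbsoluteFile() resolves against System.getProperty("user.dir"),
        // so this one reflects the "simulated" module directory.
        File absolute = relative.getAbsoluteFile();
        System.out.println("absolute          = " + absolute);
        System.out.println("absolute.exists() = " + absolute.exists());
    }
}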
See also:
https://github.com/openjdk-mirror/jdk7u-jdk/blob/jdk7u6-b08/src/windows/native/java/io/WinNTFileSystem_md.c
https://github.com/openjdk-mirror/jdk7u-jdk/blob/jdk7u6-b08/src/windows/native/java/io/io_util_md.c
and here is the place where the Surefire plugin modifies the property user.dir: https://github.com/apache/maven-surefire/blob/56d41b4c903b6c134c5e1a2891f9f08be7e5039f/maven-surefire-common/src/main/java/org/apache/maven/plugin/surefire/AbstractSurefireMojo.java#L1060
When not forking, it's copied into the current VM's System properties: https://github.com/apache/maven-surefire/blob/56d41b4c903b6c134c5e1a2891f9f08be7e5039f/maven-surefire-common/src/main/java/org/apache/maven/plugin/surefire/AbstractSurefireMojo.java#L1133
I have checked this by printing out the system properties in some simple tests.
During tests run via the maven-surefire-plugin, user.dir is changed to the root of the appropriate module in a multi-module build.
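As an illustration, a trivial test along these lines (a minimal sketch, assuming JUnit 4 as in the snippet further down) shows the value Surefire puts into the property:
import org.junit.Test;

public class UserDirDumpTest {
    @Test
    public void printUserDir() {
        // When run by maven-surefire-plugin in a multi-module build, this prints the
        // root of the current module, because Surefire overrides the user.dir property.
        System.out.println("user.dir = " + System.getProperty("user.dir"));
    }
}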
But as I mentioned already, there is a system property basedir available which can be used to correctly handle the location for tests that need to access files via File. The basedir property points to the location of the pom.xml of the appropriate module.
Unfortunately, the basedir property is not set by IntelliJ IDEA during the test run.
This can be solved with a setup like this:
private String basedir;
@Before
public void before() {
this.basedir = System.getProperty("basedir", System.getProperty("user.dir", "Need to think about the default value"));
}
@Test
public void testX() {
File file = new File(this.basedir, "src/main/java/com/pany/Foo.java");
assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
}
This will work in Maven Surefire with -DforkCount=0 as well as -DforkCount=1, and in IDEs (checked only with IntelliJ IDEA).
And yes, it is an issue that the Maven Surefire plugin changes user.dir.
Perhaps we can convince IDEs to support the basedir property as well?
Aaron, we develop Surefire. We can help you if you provide the path for this:
assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
Please post the actual path, the expected path, and the basedir where your POM resides.
Theory will not help here. We test the whole spectrum of JDKs 7-12, but we do not have the Cygwin+Windows combination, which must be considered.
The code setting user.dir in Surefire that you mentioned has existed for a decade.

CentOS automatic installation fails with "Installation Destination" error

I'm trying to install CentOS via the netboot installer on the ppc64le platform, but the automated installation (kickstart) fails with the following error:
Starting installer, one moment...
find_file: stat /proc/device-tree/chosen/bootpath, No such file or directory
anaconda 21.48.22.56-1 for CentOS AltArch 7 started.
* installation log files are stored in /tmp during the installation
* shell is available on TTY2
* when reporting a bug add logs from /tmp as separate text/plain attachments
Starting automated install..........
Generating updated storage configuration
storage configuration failed: failed to find a suitable stage1 device
================================================================================
================================================================================
Installation
1) [x] Language settings (English (United States))
2) [x] Timezone settings (UTC timezone)
3) [x] Installation source (http://mirror.centos.org/altarch/7/os/ppc64le/)
4) [x] Software selection (Custom software selected)
5) [!] Installation Destination (Error checking storage configuration)
6) [x] Kdump (Kdump is enabled)
7) [x] Network configuration (Wired (enp0s1) connected)
** (anaconda:1253): WARNING **: Could not open X display
The installation was stopped due to incomplete spokes detected while running in non-interactive cmdline mode. Since there cannot be any questions in cmdline mode, edit your kickstart file and retry installation.
The exact error message is:
The following mandatory spokes are not completed:
Installation Destination.
The installer will now terminate.
[root@llmtul01b qemu]#
And the kickstart file looks like:
url --url="http://mirror.centos.org/altarch/7/os/ppc64le/"
install
keyboard us
rootpw --lock --iscrypted locked
timezone --isUtc --nontp UTC
selinux --enforcing
firewall --disabled
network --bootproto=dhcp --device=link --activate --onboot=on
reboot
bootloader --disable
lang en_US
# Repositories to use
repo --name="CentOS" --baseurl=http://mirror.centos.org/altarch/7/os/ppc64le/ --cost=100
## Uncomment for rolling builds
repo --name="Updates" --baseurl=http://mirror.centos.org/altarch/7/updates/ppc64le/ --cost=100
# Disk setup
zerombr
clearpart --all --initlabel
part / --fstype ext4 --size=3000
I'm looking for the option to select the installation destination through the kickstart file.
This problem was solved by adding one more partition, which is specific to the ppc64le platform.
The updated and working kickstart file looks like this:
url --url="http://mirror.centos.org/altarch/7/os/ppc64le/"
install
keyboard us
rootpw --lock --iscrypted locked
timezone --isUtc --nontp UTC
selinux --enforcing
firewall --disabled
network --bootproto=dhcp --device=link --activate --onboot=on
reboot
bootloader --disable
lang en_US
# Repositories to use
repo --name="CentOS" --baseurl=http://mirror.centos.org/altarch/7/os/ppc64le/ --cost=100
## Uncomment for rolling builds
repo --name="Updates" --baseurl=http://mirror.centos.org/altarch/7/updates/ppc64le/ --cost=100
# Disk setup
zerombr
clearpart --all --initlabel
part / --fstype ext4 --size=3000
part prepboot --fstype "PPC PReP Boot" --size=10

Qt 5.3.1: macdeployqt tries to include everything on my hard drive; how to fix?

I'm running Qt 5.3.1 on Mac OS X 10.8 and 10.9. I'm trying to use the macdeployqt tool to bundle libraries and plugins with my executable, but it's apparently trying to include everything on my hard drive.
I invoke it with:
/Applications/Qt/5.3.1/5.3/clang_64/bin/macdeployqt /Users/adamwilt/Desktop/temp/DesktopPixie.app -verbose=2
-qmldir=/Users/adamwilt/Desktop/temp/DesktopPixie.app/Contents/Resources/qml/DesktopPixie
and get lots of normal, expected log messages like:
Log: copy: "/Applications/Qt/5.3.1/5.3/clang_64/qml/QtQuick/Controls/Styles/Base/images" "/Users/adamwilt/Desktop/temp/DesktopPixie.app/Contents/Resources/qml/QtQuick/Controls/Styles/Base/images"
...and all runs fine until I start getting messages like:
Log: copy: "" "/Users/adamwilt/Desktop/temp/DesktopPixie.app/Contents/Resources/qml/MessageBoxUI/Enums"
ERROR: file copy failed from "/do-gst"
ERROR: to "/Users/adamwilt/Desktop/temp/DesktopPixie.app/Contents/Resources/qml/MessageBoxUI/Enums/do-gst"
ERROR: file copy failed from "/rescuepro.properties"
ERROR: to "/Users/adamwilt/Desktop/temp/DesktopPixie.app/Contents/Resources/qml/MessageBoxUI/Enums/rescuepro.properties"
ERROR: file copy failed from "/rescuepro34act.lic"
and then the fun begins:
Log: copy: "/Applications" "/Users/adamwilt/Desktop/temp/DesktopPixie.app/Contents/Resources/qml/MessageBoxUI/Enums/Applications"
Log: copied: "/Applications/License.rtf"
Log: to "/Users/adamwilt/Desktop/temp/DesktopPixie.app/Contents/Resources/qml/MessageBoxUI/Enums/Applications/License.rtf"
Log: copy: "/Applications/0xED-1.0.7.app" "/Users/adamwilt/Desktop/temp/DesktopPixie.app/Contents/Resources/qml/MessageBoxUI/Enums/Applications/0xED-1.0.7.app"
It appears it's trying to copy everything (that is, "") into the app, but I ^C out of it while I still have disk space left!
It didn't do this at first; then it did so for a week; then it stopped; now it's back. This is a distributed development project; I'm doing the QML stuff and folks elsewhere are doing the C++. It's possible the other folks are changing a configuration somewhere that's causing this to occur, but I'm the only one doing Mac deployments, so I'm the only one who sees this.
Is there a config file of some sort for macdeployqt where this might be getting triggered? Or is it more likely a problem related to the C++ object MessageBoxUI, which defines several Q_ENUMs (apparently correctly, as it compiles without warnings or errors, and appears to run properly), since that's the target directory when it starts going mad?
Sussed it (well enough that I can go on):
The Qt app I'm trying to deploy lives in ~/Desktop/temp. If I set my current directory to ~/Desktop or ~/Desktop/temp before I run macdeployqt, it runs as expected.
If I run it from my home directory ~, or from ~/Documents, macdeployqt tries to pull in everything it finds in /Applications, /opt, and/or other directories unassociated with the app.
Setting cwd to most other paths, including ~/Downloads, /Applications, subdirectories of ~/Documents, etc. also seems to work; it's only running macdeployqt from ~ and ~/Documents that causes the problem.
No proper explanation as to why this is happening, but just a warning in case someone else sees this sort of weird behavior: try changing to the app's location before running macdeployqt.

Automap library issue in Windows 7 (with R 3.0.1)

I installed the sp and automap libraries into my R 3.0.1 64-bit under Windows 7 (via the install.packages command). Installing them did not display any errors, and library(sp) works fine; however, when I try to execute library(automap) I get the following error:
> library(automap)
Error in gzfile(file, "rb") : cannot open the connection
In addition: Warning messages:
1: In read.dcf(file.path(p, "DESCRIPTION"), c("Package", "Version")) :
cannot open compressed file 'C:/Program Files/R/R-3.0.1/library/sp/DESCRIPTION', probable reason 'No such file or directory'
2: In gzfile(file, "rb") :
cannot open compressed file '', probable reason 'Invalid argument'
I looked at the path and indeed there is no DESCRIPTION file (or folder) there. There is just a libs folder, under which there is an x64 folder, and inside that the file sp.dll.
Any idea what could cause this?
I would definitely try to run R as administrator, both for installing the packages and loading them. This could solve your problem.
This probably has to do with file permissions. When you install the packages as admin in a location where only admin can read/write, running R as a normal user means you do not have the file permissions needed to load the package. Running R as admin will solve this, as admin does have the correct permissions.
Alternatively, you could install your R packages in a location where a normal user has read/write permissions, e.g. C:/Users/UserName (or something like that; I do not have my Windows machine accessible right now).

Turning off iODBC tracing

I'm using iODBC on OS X 10.6.8 against MySQL (mysql-connector-odbc-5.1.8) from a C program that I'm writing, but tracing of all ODBC library calls, which is supposed to be turned off by default, is turned on.
I have found a set of odbc.ini and odbcinst.ini files in /etc and in /Library/ODBC/, but none of them contains "Trace = yes", and adding an [ODBC] section with "Tracing = no" to any of these files doesn't seem to do anything. I also do not have any private .odbc.ini or .odbcinst.ini files in the working directory nor in my home directory nor anywhere else.
The only way I can turn tracing off is to call SQLSetConnectAttr() to set SQL_ATTR_TRACE to SQL_OPT_TRACE_OFF after allocating a connection handle, but at that point, the trace file, sql.log, has already been created in the working directory.
Any help with tracking down where tracing is turned on (it's supposed to be off by default), or alternatively with how to turn it off so that the log file never gets created, would be appreciated.
I'm not sure why you would be using ODBC instead of the standard connector, but have you tried setting the TraceFile option to /dev/null in odbc.ini? That may at least remove the file if you can't get Trace = OFF to work by itself.
[ODBC]
Trace = OFF
TraceFile = /dev/null
Don't have my Mac at the office to test this, but seems like it should work.
The default settings files for ODBC on Mac OS X are found at --
/Library/ODBC/odbc.ini
/Library/ODBC/odbcinst.ini
/Users/*/Library/ODBC/odbc.ini
/Users/*/Library/ODBC/odbcinst.ini
The first two are for system-level settings and DSNs; the latter two are for user-level.
Some buggy installers and libraries create files at --
/Users/*/.odbc.ini
/Users/*/.odbcinst.ini
/etc[/*]/.odbc.ini
/etc[/*]/.odbcinst.ini
/etc[/*]/odbc.ini
/etc[/*]/odbcinst.ini
These can lead to trouble. This command will reveal all potentially trouble-making files --
sudo find / \( -name '.odbc*.ini' -or -name 'odbc*.ini' \) -ls
Best is to --
blend the content of existing non-default files into the default locations, and drop the non-default files
create symlinks from the buggy .odbc[inst].ini locations to the default files
update the iODBC components on your Mac
enjoy trouble free ODBC use
(ObDisclaimer: I am an employee of OpenLink Software, who maintain and support the iODBC project, which is the ODBC Driver Manager chosen by Apple for Mac OS X, bundled since Jaguar (10.2.x).)
