I'm trying to run a java_test that runs docker via ProcessBuilder.
Simplified, the test code is as follows:
@Test
public void testDockerExecutable() {
    System.out.println("======== running docker ==============");
    try {
        Process p = new ProcessBuilder("docker", "version")
                .inheritIO()
                .start();
    } catch (IOException e) {
        e.printStackTrace();
        throw new RuntimeException(e);
    }
}
Running docker version straight from the shell gives this output:
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Tue Mar 28 00:40:02 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Fri Mar 24 00:00:50 2017
 OS/Arch:      linux/amd64
 Experimental: true
But running the test gives this output:
WARNING: Streamed test output requested. All tests will be run locally, without sharding, one at a time.
INFO: Found 1 test target...
JUnit4 Test Runner
.======== here ==============
java.io.IOException: Cannot run program "docker": error=2, No such file or directory
I know that I need to somehow import docker into the runfiles environment (just like local_jdk does), but how do I do that? Also, unlike the JDK, which only requires read permissions, docker needs write permissions to its lib folder.
My environment is macOS Sierra and Bazel HEAD (68028317c1d3d831a24f90e2b25d1410ce045c54).
I tried it with java_test. The "local" attribute did not affect the failure (I tried both True and False).
Update: works on Linux.
I tried running this on Linux and it works well with both "local" = True and "local" = False. It seems to be something related to macOS.
The Linux sandbox mounts some directories by default (quoting the docs):
We currently also mount /bin, /etc, /usr (except /usr/local), and every directory starting with /lib, to allow running local tools. In the future, we are planning to provide a shell with a set of Linux utilities, and to require that all other tools are specified as inputs.
I assume the docker binary is in one of those standard locations, and Bazel finds it there.
Maybe on macOS the binary is somewhere else, and only exporting PATH to the test environment reveals it?
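One quick way to check that hypothesis on the Mac host (a small sketch, not from the original thread; Docker for Mac typically links the client into /usr/local/bin):

# Where does the docker client actually live on this machine?
which docker
# Typical output for Docker for Mac: /usr/local/bin/docker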
That aside, the best practice would be to make the test explicitly depend on some 'docker' target; then Bazel will make sure the binary is there. You can use a local_repository (or its new_ variant) rule to hook it up.
It seems this does the trick:
$ bazel test --test_env=PATH <target>
It would be interesting to understand why it works on Linux without this.
Starting 2020-12-09, VSCode's Rust Analyzer extension no longer loads for me. On launch, it prints out this error message:
Cannot activate rust-analyzer: bootstrap error. See the logs in "OUTPUT > Rust Analyzer Client" (should open automatically). To enable verbose logs use { "rust-analyzer.trace.extension": true }
Enabling extension tracing produces the following diagnostic just before failing:
INFO [12/10/2020, 10:03:22 AM]: Using server binary at c:\Users\<user>\AppData\Roaming\Code\User\globalStorage\matklad.rust-analyzer\rust-analyzer-windows.exe
DEBUG [12/10/2020, 10:03:22 AM]: Checking availability of a binary at c:\Users\<user>\AppData\Roaming\Code\User\globalStorage\matklad.rust-analyzer\rust-analyzer-windows.exe
DEBUG [12/10/2020, 10:03:22 AM]: c:\Users\<user>\AppData\Roaming\Code\User\globalStorage\matklad.rust-analyzer\rust-analyzer-windows.exe --version: {
status: 3221225506,
signal: null,
output: [ null, '', '' ],
pid: 1648,
stdout: '',
stderr: ''
}
where <user> is the name of the user account I use to log into the system¹.
The status value reported in the error diagnostic (3221225506) translates to 0xC0000022 (STATUS_ACCESS_DENIED). Navigating to the binary from within VSCode's integrated terminal and trying to execute rust-analyzer-windows.exe --version doesn't produce any output, which seems to reinforce the idea that running this executable from VSCode is somehow blocked.
It appears that something changed with respect to access rights for executing the server binary from within VSCode. Between Rust Analyzer working and no longer working, I didn't update Rust, rustup, VSCode, or any extensions.
I did install 2020-12 Cumulative Update for Windows 10 Version 20H2 for x64-based Systems (KB4592438), though, and the time Rust Analyzer started failing coincides with the time the update got installed. That could literally just be a coincidence.
What additional steps can I take to get to the root cause of the issue, and how do I get Rust Analyzer working again?
Version information:
Rust Analyzer (stable): v0.2.408
Windows 10 Pro: Version 10.0.19042 Build 19042
VSCode: 1.51.1 (user setup)
¹ This is also the user account VSCode runs under, including all of its spawned processes. Navigating to the path from a command prompt running under this account reveals that rust-analyzer-windows.exe is present, and executing rust-analyzer-windows.exe --version prints a version identifier, as expected.
Unfortunately, I didn't quite get to investigate the root cause of this.
A system reboot that was forced upon me appears to have restored World Peace.
Clearing the proxy config worked for me.
I'm not sure this covers all situations, but it might be related to the network.
mvn clean works fine from the terminal. Even when I execute it from a bash (.sh) file by double-clicking, it works fine.
But when I trigger the same script using crontab, I get the error mvn: command not found.
The bash (.sh) file has this code:
#!/bin/bash
cd /Users/testautomation/Documents/Automation/TFS/Mem_Mobile
mvn clean
Output of crontab -l
0 14 * * * /Users/testautomation/Documents/Automation/Schedule/Execute.sh
Error
From testautomation@Tests-iMac.xxx.local Wed Jun 12 14:44:01 2019
Return-Path: <testautomation@Tests-iMac.xxx.local>
X-Original-To: testautomation
Delivered-To: testautomation@Tests-iMac.xxx.local
Received: by Tests-iMac.xxx.local (Postfix, from userid 501)
id 0BE233001CB411; Wed, 12 Jun 2019 14:44:00 +1000 (AEST)
From: testautomation@Tests-iMac.xxx.local (Cron Daemon)
To: testautomation@Tests-iMac.xxx.local
Subject: Cron <testautomation@Tests-iMac> /Users/testautomation/Documents/Automation/Schedule/Execute.sh
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=testautomation>
X-Cron-Env: <USER=testautomation>
X-Cron-Env: <HOME=/Users/testautomation>
Message-Id: <20190612044401.0BE233001CB411@Tests-iMac.xxx.local>
Date: Wed, 12 Jun 2019 14:44:00 +1000 (AEST)
/Users/testautomation/Documents/Automation/Schedule/Execute.sh: line 3: mvn: command not found
I have installed Maven using Homebrew.
mvn -version output:
Tests-iMac:~ testautomation$ mvn -version
Apache Maven 3.6.1 (d66c9c0b3152b2e69ee9bac180bb8fcc8e6af555; 2019-04-05T06:00:29+11:00)
Maven home: /usr/local/Cellar/maven/3.6.1/libexec
Java version: 1.8.0_212, vendor: Oracle Corporation, runtime: /Library/Java/JavaVirtualMachines/jdk1.8.0_212.jdk/Contents/Home/jre
Default locale: en_AU, platform encoding: UTF-8
OS name: "mac os x", version: "10.14.5", arch: "x86_64", family: "Mac"
Use the complete path to mvn in the script:
#!/bin/bash
cd /Users/testautomation/Documents/Automation/TFS/Mem_Mobile
/usr/local/Cellar/maven/3.6.1/bin/mvn clean
Or the script below should also work:
#!/bin/bash
cd /Users/testautomation/Documents/Automation/TFS/Mem_Mobile
/usr/local/bin/mvn clean
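If you are not sure which absolute path applies on your machine, you can check it from the terminal (a quick verification step, not from the original answer):

# Print the full path of the mvn binary your interactive shell resolves
which mvn
# Typically /usr/local/bin/mvn for a Homebrew install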
As described here, you might need to add environment variables (MAVEN_HOME) and complete the PATH (with the directory containing mvn).
Or at least do the same in your script.sh; in other words, don't assume that you inherit all of your user-session environment in a cron session.
Cron runs a non-interactive, non-login shell.
0 14 * * * . $HOME/.profile; /Users/testautomation/Documents/Automation/Schedule/Execute.sh
The above crontab entry is well suited if you later expand your shell script to rely on more profile variables. Loading the bash profile updates the environment variables, including PATH, so that it contains mvn's location (which was most likely added to it during the Maven installation).
. is a synonym for source.
Crontab runs with a very limited shell environment. Only a few variables such as HOME, LOGNAME and SHELL are set, which is why we could use $HOME to locate the profile above.
PATH is supposed to contain the location of your mvn binary, but in the cron environment it is limited to the main system binary paths (/usr/bin:/bin, as the X-Cron-Env header above shows); hence the absolute path to your mvn binary works, as answered before.
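Another option along the same lines is to have the script set up PATH itself, so it does not depend on the cron environment at all (a minimal sketch, assuming the Homebrew-installed mvn is linked into /usr/local/bin as in the earlier answer):

#!/bin/bash
# Extend PATH inside the script so cron's minimal PATH (/usr/bin:/bin) is enough
export PATH="/usr/local/bin:$PATH"
cd /Users/testautomation/Documents/Automation/TFS/Mem_Mobile
mvn clean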
A nice alternative is to declare the path to mvn as a variable and call it via "$mvn":
mvn="/path/to/mvn"   # no spaces around the equals sign
and then call "$mvn" clean in the script.
I am using an earlier build for Windows 64-bit, downloaded from here:
dl.dropboxusercontent.com/u/63393258/osm2pgsql_testRelease.zip
from this website:
awcull.com/2015/09/30/postgis-osm2pgsql-windows.html
but it crashes when I import a large PBF containing the whole of Europe, downloaded from download.geofabrik.de.
I'm tired of this... I tried slim and non-slim mode, I tried modifying the cache size; nothing has worked so far. Our server has 32 GB of RAM.
Where can I download the latest osm2pgsql build for Windows 64-bit? Alternatively, which compiler do you suggest for making my own build on Windows Server 2012 64-bit? Thanks.
The command I ran osm2pgsql with the last time it crashed was:
PS C:\OSM\rendering> osm2pgsql -U postgres -m -d osm -p osm -E 3857 -s -C 25000 -S C:\OSM\osm2pgsql\default.style C:\OSM\Data\europe-latest.osm.pbf
It crashed with the standard Windows "the application has stopped working" dialog, with these details:
Problem signature:
Problem Event Name: APPCRASH
Application Name: osm2pgsql.exe
Application Version: 0.0.0.0
Application Timestamp: 53ea21fd
Fault Module Name: ntdll.dll
Fault Module Version: 6.3.9600.18438
Fault Module Timestamp: 57ae642e
Exception Code: c00000fd
Exception Offset: 0000000000030d02
OS Version: 6.3.9600.2.0.0.272.7
Locale ID: 1033
Additional Information 1: 33ad
Additional Information 2: 33ad00700702b0ab4dc632df7667ec82
Additional Information 3: 2ebb
Additional Information 4: 2ebbf5b91303f76c5b7f75f6255100fa
Now I'm trying without the "-C" option, but I bet it will crash again...
PS C:\OSM\rendering> osm2pgsql -U postgres -m -d osm -p osm -E 3857 -s -S C:\OSM\osm2pgsql\default.style C:\OSM\Data\europe-latest.osm.pbf
Necromancing.
The latest build (Continuous Integration) can always be found on AppVeyor.
You need to get the current build (or, from the history, a specific older build by its git commit hash).
https://ci.appveyor.com/project/openstreetmap/osm2pgsql
=> Environment arch x64
=> Artifacts tab
=> Download osm2pgsql_Release_x64.zip
The link might break; if it does, google "appveyor osm2pgsql" and it should usually be the first result.
Download from gis.stackexchange
Here is the Github link
Here is the Hot-Installer reference.
I am experimenting with creating an EC2 instance to host a Perforce server. My instance is configured with the following user data:
#!/bin/bash
# Add a newline to the ec2-user prompt string
echo PS1=\"\\n\$PS1\" >> /home/ec2-user/.bashrc
# Update all packages
yum update –y
# Install Perforce packages
# The RHEL/7 part of the baseurl should be replaced with
# the latest RHEL version that both Amazon and Perforce support
rpm –import https://package.perforce.com/perforce.pubkey
cd /etc/yum.repos.d/
echo [perforce] > perforce.repo
echo name=Perforce >> perforce.repo
echo baseurl=http://package.perforce.com/yum/rhel/7/x86_64 >> perforce.repo
echo enabled=1 >> perforce.repo
echo gpgcheck=1 >> perforce.repo
yum install –y helix-p4d
# Make directories for the server, owned by new “perforce” user
cd /opt/perforce/servers/
mkdir danware
cd danware
mkdir danware-db danware-chkpts journal
chown –R perforce:perforce danware
I have tested each of the above commands and know that they work when executed manually in this order. However, some aspect of Amazon's base64 encode/decode system seems to be getting in the way. When I go to "Actions > Instance Settings > View/Change User Data" in the EC2 Console after launching (and passing all system checks), I see the user data with almost every hyphen "-" replaced by some strange "a"-like character.
However, I'm not sure that this is the issue, because the log file at /var/log/cloud-init-output.log gives me the following output (I replaced some repetitive text with [...] to save space). Note the line that says Failed running /var/lib/cloud/instance/scripts/part-001. I have verified that this part-001 file actually does have the correctly displayed hyphen characters.
[...]
Cloud-init v. 0.7.6 running 'modules:final' at Fri, 09 Sep 2016 06:23:39 +0000. Up 86.66 seconds.
Loaded plugins: priorities, update-motd, upgrade-helper
No Match for argument: –y
No packages marked for update
RPM version 4.11.2
Copyright (C) 1998-2002 - Red Hat, Inc.
This program may be freely redistributed under the terms of the GNU GPL
Usage: rpm [-aKfgpqVcdLilsiv?] [-a|--all] [-f|--file] [-g|--group] [...]
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> [...]
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
helix-p4d x86_64 2016.1-1429894 perforce 24 k
Installing for dependencies:
helix-cli x86_64 2016.1-1429894 perforce 8.8 k
helix-cli-base x86_64 2016.1-1429894 perforce 1.4 M
helix-p4d-base x86_64 2016.1-1429894 perforce 3.1 k
helix-p4d-base-16.1 x86_64 2016.1-1429894 perforce 2.4 M
helix-p4dctl x86_64 2016.1-1429894 perforce 1.2 M
Transaction Summary
================================================================================
Install 1 Package (+5 Dependent packages)
Total download size: 5.0 M
Installed size: 13 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2016-09-09.06-23.dRP_r2.yumtx
/var/lib/cloud/instance/scripts/part-001: line 22: cd: /opt/perforce/servers/: No such file or directory
chown: invalid user: ‘–R’
Sep 09 06:23:41 cloud-init[2517]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Sep 09 06:23:41 cloud-init[2517]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Sep 09 06:23:41 cloud-init[2517]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Cloud-init v. 0.7.6 finished at Fri, 09 Sep 2016 06:23:41 +0000. Datasource DataSourceEc2. Up 88.53 seconds
Even more annoyingly, I assumed that the early No Match for argument: –y line in the log file was referring to the yum update -y line from my user data. Sure enough, just running the example user data script from the EC2 documentation page, which also uses yum update -y, gives me the same error/warning! Amazon's own example script doesn't work!? So can anyone answer A) why AWS is not displaying the user data code correctly, and B) why my user data is yielding the errors shown above? The help is much appreciated!
For lines such as
yum update –y
the character you are using is an EN DASH (U+2013).
The usual character for a hyphen is HYPHEN-MINUS (U+002D).
Fix your user data source to use a hyphen-minus and have another go.
I checked the character codes by cutting and pasting into this online site: http://www.fileformat.info/info/unicode/char/search.htm?q=-&preview=entity
I don't know if you can see the difference, but this is your dash:
yum update –y
and this is a hyphen-minus:
yum update -y
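If you would rather check the file locally than paste it into a website, here is a quick sketch (user-data.sh is just a placeholder name for wherever you keep the script) that flags every line containing non-ASCII bytes such as an EN DASH:

# List lines with bytes outside printable ASCII; an EN DASH will show up here
LC_ALL=C grep -n '[^[:print:][:space:]]' user-data.sh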
Here's what I see when I run cpan to install HTTP::Server::Brick.
cpan shell -- CPAN exploration and modules installation (v1.9800)
Enter 'h' for help.
cpan> install HTTP::Server::Brick
Database was generated on Fri, 13 Jul 2012 03:26:42 GMT
Running install for module 'HTTP::Server::Brick'
Running make for A/AU/AUFFLICK/HTTP-Server-Brick-0.1.4.tar.gz
Checksum for C:\strawberry\cpan\sources\authors\id\A\AU\AUFFLICK\HTTP-Server-Brick-0.1.4.tar.gz ok
Scanning cache C:\strawberry\cpan\build for sizes
............................................................................DONE
CPAN.pm: Building A/AU/AUFFLICK/HTTP-Server-Brick-0.1.4.tar.gz
Created MYMETA.yml and MYMETA.json
Creating new 'Build' script for 'HTTP-Server-Brick' version '0.1.4'
Building HTTP-Server-Brick
AUFFLICK/HTTP-Server-Brick-0.1.4.tar.gz
C:\strawberry\perl\bin\perl.exe ./Build -- OK
Running Build test
t\00.load.t ....... 1/1 # Testing HTTP::Server::Brick v0.1.4
t\00.load.t ....... ok
t\pod-coverage.t .. skipped: Test::Pod::Coverage 1.04 required for testing POD coverage
t\pod.t ........... skipped: Test::Pod 1.14 required for testing POD
t\serving.t ....... 1/281 #
#
# Using port: 85432 and host: 127.0.0.1 for test server.
# If these are not suitable settings on your machine, set the environment
# variables HSB_TEST_PORT and HSB_TEST_HOST to something suitable.
#
# Configuring server
# Starting server
t\serving.t ....... 4/281
It's really quite simple... why port 85432? It's outside the 16-bit unsigned integer range! I can't even enter localhost:85432 in any URL bar; Chrome just sends me straight to a Google search.
That is pretty strange. I took a look at the source for the test in question, and it has this:
my $port = $ENV{HSB_TEST_PORT} || 85432;
I have no idea why the author chose that as the default port number, since, as you know, it's invalid. I guess there's zero chance of the port being in use, though!
My suggestion would be to set the environment variable HSB_TEST_PORT to something more reasonable and try to install again, and file a bug report in the meantime.
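For example (a sketch, not from the original answer; 8432 is just an arbitrary valid port, and on Windows cmd you would run set HSB_TEST_PORT=8432 first and then cpan HTTP::Server::Brick):

# Run the install with a sane test port in the environment (Unix-like shell)
HSB_TEST_PORT=8432 cpan HTTP::Server::Brick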