I want to use Grunt and rsync to deploy some code from my computer (Windows) to a server (Linux).
My Gruntfile.js is
module.exports = function(grunt) {
  grunt.initConfig({
    rsync: {
      options: {
        args: ['--verbose', '--chmod=777'],
        exclude: ['*.git', 'node_modules'],
        recursive: true
      },
      production: {
        options: {
          src: './bitzl.com',
          dest: '/home/marcus/bitzl.com',
          host: 'marcus@bitzl.com',
          syncDest: true
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-rsync');
};
Please note that I use marcus's home directory and chmod=777 just to simplify testing.
Running grunt rsync fails:
Running "rsync:production" (rsync) task
rsyncing ./bitzl.com >>> /home/marcus/bitzl.com
Shell command was: rsync ./bitzl.com marcus@bitzl.com:/home/marcus/bitzl.com --rsh ssh --recursive --delete --delete-excluded --exclude=*.git --exclude=node_modules --verbose --chmod=777
ERROR
Error: rsync exited with code 12
Warning: Task "rsync:production" failed. Use --force to continue.
Error: Task "rsync:production" failed.
at Task.<anonymous> (D:\git\bitzl.com\node_modules\grunt\lib\util\task.js:200:15)
at null._onTimeout (D:\git\bitzl.com\node_modules\grunt\lib\util\task.js:236:33)
at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
Aborted due to warnings.
However, running the rsync command from above without Grunt works fine:
rsync ./bitzl.com marcus@bitzl.com:/home/marcus/bitzl.com --rsh ssh --recursive --delete --delete-excluded --exclude=*.git --exclude=node_modules --verbose --chmod=777
Rsync prompts for the passphrase of my private key (via Grunt it doesn't) and syncs like a breeze.
Authentication on the server works via public key (with passphrase). Password authentication would be fine, too.
My guess is that somehow the passphrase prompt breaks and rsync fails with a protocol error (that's exit code 12).
What am I missing to get grunt-rsync working on Windows?
Update:
From a Linux VM (Ubuntu 12.04 via VirtualBox/Vagrant) it works as one would expect.
It turned out to be a bug in grunt-rsync. A pull request I filed for rsyncwrapper has been accepted, and with the current version of grunt-rsync everything works as expected.
If you still experience problems, make sure you have at least grunt-rsync version 0.6.0 installed.
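To check which version you have installed, and to update it if needed, the standard npm commands suffice:

npm ls grunt-rsync
npm install grunt-rsync@latest --save-dev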
Related
I am trying to run the collection-examples-as NEAR example, but when I run yarn deploy it gives me the following error:
$ near dev-deploy --wasmFile="./contract.wasm"
Starting deployment. Account id: dev-1637744501224-6323200, node: https://rpc.testnet.near.org, helper: https://helper.testnet.near.org, file: ./contract.wasm
An error occurred
Error: ENOENT: no such file or directory, open './contract.wasm'
[Error: ENOENT: no such file or directory, open './contract.wasm'] {
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: './contract.wasm'
}
error Command failed with exit code 1.
I think the error is that it can't find the path to ./contract.wasm, so I ran yarn build, then tried to deploy again using yarn deploy, but I got another error:
$ near dev-deploy --wasmFile="./contract.wasm"
Starting deployment. Account id: dev-1637744501224-6323200, node: https://rpc.testnet.near.org, helper: https://helper.testnet.near.org, file: ./contract.wasm
An error occurred
Error: Can not sign transactions for account dev-1637744501224-6323200 on network default, no matching key pair found in InMemorySigner(MergeKeyStore(UnencryptedFileSystemKeyStore(/home/rasha/.near-credentials), UnencryptedFileSystemKeyStore(/home/rasha/collection-examples-as/neardev))).
{
type: 'KeyNotFound',
context: undefined
}
Any help or suggestions?
There's an issue with old versions of near-cli and dev-deploy.
As a workaround, you can try running the latest CLI directly from your terminal:
near dev-deploy --wasmFile="./contract.wasm" -f
Just make sure you have the latest version of near-cli installed, currently 2.2.0. You can check your version with near --version.
Be sure to run yarn build first, so you'll have the compiled contract.wasm file.
Note: when you run yarn deploy, it uses the old near-cli version pinned in package.json (probably some old version like 1.6.0).
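If you want yarn deploy itself to use the new version, a minimal sketch of the relevant package.json entries might look like this (the deploy script line is taken from the output above; the version pin follows the currently-latest 2.2.0 mentioned earlier):

{
  "devDependencies": {
    "near-cli": "^2.2.0"
  },
  "scripts": {
    "deploy": "near dev-deploy --wasmFile=\"./contract.wasm\""
  }
}

Run yarn install afterwards so the updated pin takes effect.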
You might also want to check this GitHub issue (dev-deploy error): https://github.com/near/create-near-app/issues/1408
The introduction
I am currently trying to build a Docker image with all of my Node project dependencies, so I can use it to run the tests on Bitbucket Pipelines.
The reason I decided to create an image is that I want to control which versions of the dependencies I have, and when to upgrade them.
The implementation
After building the image using the following Dockerfile:
FROM selenium/standalone-chrome-debug
LABEL name="nodejs-chrome-java"
USER root

# Install Java 8
RUN set -x \
    && apt-get update \
    && apt-get install -y \
        ca-certificates-java \
        openjdk-8-jre-headless \
        openjdk-8-jre \
        openjdk-8-jdk-headless \
        openjdk-8-jdk \
    && apt-get clean
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
RUN export JAVA_HOME

# Install Node.js 14 and npm
RUN set -x \
    && curl -sL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get update \
    && apt-get install -y nodejs \
    && npm install -g npm@latest \
    && apt-get clean

# Make node available
RUN set -x \
    && touch ~/.bashrc \
    && echo 'alias nodejs=node' > ~/.bashrc

# Install PhantomJS
RUN set -x \
    && apt-get update \
    && apt-get install -y \
        phantomjs \
    && apt-get clean

# Set PhantomJS to run headless
ENV QT_QPA_PLATFORM offscreen

RUN mkdir /logs
RUN touch /logs/selenium.log
I ran the Docker image using the following command:
docker run -it --entrypoint /bin/bash -v /my/project:/project -w /project <DOCKER_IMAGE_ID>
And I realised that in order to have selenium running, I would have to run the following command:
/opt/bin/start-selenium-standalone.sh
Which yields the following output:
22:14:13.034 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
22:14:13.304 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
2020-06-29 22:14:13.466:INFO::main: Logging initialized #1128ms to org.seleniumhq.jetty9.util.log.StdErrLog
22:14:14.228 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
22:14:14.547 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
However, because I do not want the shell inside the container to block, I tried the following instead:
/opt/bin/start-selenium-standalone.sh > /logs/selenium.log 2>&1 &
This indeed writes the same output as before into the log file I defined when creating the Docker image (/logs/selenium.log). So far, so good 👍
And if I then run my tests using the npm test command, all tests pass successfully. 🎉
Given this outcome, and because I wouldn't be able to run this command manually when the image is used within Bitbucket Pipelines, I decided to start the standalone Selenium server in the background from the bash script that gets executed when I call npm test, like so:
#!/bin/bash

printf "Starting Selenium Server"
/opt/bin/start-selenium-standalone.sh > /logs/selenium.log 2>&1 &
PROCESS_ID=$!

retry=0
maxRetries=10
until [ ${retry} -ge ${maxRetries} ]
do
    grep "Selenium Server is up and running on port 4444" /logs/selenium.log > /dev/null \
        && echo \
        && break;
    retry=$((retry + 1))
    printf . ;
    sleep 1
done

if [ ${retry} -ge ${maxRetries} ]; then
    echo "Failed after ${maxRetries} attempts!"
    exit 1
fi

printf "Running UI tests...\n"
CONFIG_FILE_PATH='../../config_test.json' ./node_modules/.bin/nightwatch
The problem
When the command is included in the bash script and executed from there, it does not seem to behave the same way as when it's run manually, and I get the following in the log file:
22:29:12.814 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
22:29:13.093 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
2020-06-30 22:29:13.253:INFO::main: Logging initialized #1602ms to org.seleniumhq.jetty9.util.log.StdErrLog
22:29:14.058 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
22:29:14.381 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
22:29:21.121 INFO [ActiveSessionFactory.apply] - Capabilities are: {
"acceptSslCerts": true,
"browserName": "chrome",
"chromeOptions": {
"w3c": false,
"args": [
"headless",
"no-sandbox"
]
},
"javascriptEnabled": true,
"name": "Route Subdomain Dashboard Test"
}
22:29:21.128 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
/my/project/node_modules/chromedriver/lib/chromedriver/chromedriver: 1: /my/project/node_modules/chromedriver/lib/chromedriver/chromedriver: Syntax error: ")" unexpected
22:29:41.214 ERROR [OsProcess.checkForError] - org.apache.commons.exec.ExecuteException: Process exited with an error: 2 (Exit value: 2)
And the following output from my running tests:
⠴ Connecting to 127.0.0.1 on port 4444...
Response 500 POST /wd/hub/session (20425ms)
{
value: {
error: [
"Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:25:53'",
"System info: host: '75136e9ec116', ip: '172.17.0.2', os.name: 'Linux', os.arch: 'amd64', os.version: '4.19.76-linuxkit', java.version: '1.8.0_252'",
'Driver info: driver.version: unknown'
],
message: 'Timed out waiting for driver server to start.'
},
status: 13
⚠ Error connecting to 127.0.0.1 on port 4444.
_________________________________________________
TEST FAILURE: 1 error during execution; 0 tests failed, 0 passed (24.402s)
✖ route-subdomain-dashboard-test
An error occurred while retrieving a new session: "Timed out waiting for driver server to start."
Error: An error occurred while retrieving a new session: "Timed out waiting for driver server to start."
at endReadableNT (_stream_readable.js:1224:12)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
Error: An error occurred while retrieving a new session: "Timed out waiting for driver server to start."
at endReadableNT (_stream_readable.js:1224:12)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
The question
Is there anything I am missing that would allow the command to be successfully executed through the bash script? Why am I seeing such disparate results?
The appreciation
Sorry for the long post, but I wanted to provide as much context as possible, since this seems like a very tricky problem that I've been struggling with for quite some time now.
Many thanks in advance 🙏
Updates
02/10/2020 - I realised today that once inside the Docker container, if I execute the npm test command in order to run the tests, I get the issue described under the section named The problem. However, if I directly run the bash script that the npm test command calls under the hood, the tests execute successfully. Could it be the way that npm executes the scripts?
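A note on that last hypothesis: npm runs package.json scripts through /bin/sh by default, and on Debian/Ubuntu-based images /bin/sh is dash rather than bash, so a script relying on bash behaviour can act differently under npm test than when launched directly. A minimal way to rule that out (the script name run-tests.sh is a placeholder for whatever npm test currently calls) is to invoke bash explicitly in package.json:

{
  "scripts": {
    "test": "bash ./run-tests.sh"
  }
}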
While trying to install Puppeteer with npm i puppeteer, I get this error. My Node version is v12.16.3. I'm on a Windows machine.
ERROR: Failed to set up Chromium r756035! Set "PUPPETEER_SKIP_DOWNLOAD" env variable to skip download.
Error: Client network socket disconnected before secure TLS connection was established
at connResetException (internal/errors.js:608:14)
at TLSSocket.onConnectEnd (_tls_wrap.js:1514:19)
at Object.onceWrapper (events.js:416:28)
at TLSSocket.emit (events.js:322:22)
at endReadableNT (_stream_readable.js:1187:12)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
-- ASYNC --
at BrowserFetcher.<anonymous> (C:\Users\Red Viper\AppData\Roaming\npm\node_modules\puppeteer\lib\helper.js:116:19)
at fetchBinary (C:\Users\Red Viper\AppData\Roaming\npm\node_modules\puppeteer\install.js:148:8)
at download (C:\Users\Red Viper\AppData\Roaming\npm\node_modules\puppeteer\install.js:54:9) {
code: 'ECONNRESET',
path: null,
host: 'storage.googleapis.com',
port: 443,
localAddress: undefined
}
Use --unsafe-perm. It works for me!
npm i puppeteer -g --unsafe-perm
Run the following command from the terminal:
Mac:
export PUPPETEER_SKIP_DOWNLOAD='true'
Windows:
SET PUPPETEER_SKIP_DOWNLOAD='true'
Both will set the environment variable, and you should be fine with execution.
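Keep in mind that skipping the download means Puppeteer has no bundled Chromium to launch, so you'll need to point it at a browser that's already installed via executablePath. A minimal sketch (the Chrome path below is an assumption; adjust it for your machine):

const puppeteer = require('puppeteer');

(async () => {
  // With PUPPETEER_SKIP_DOWNLOAD set there is no bundled Chromium,
  // so launch an existing local Chrome install instead (path is an assumption).
  const browser = await puppeteer.launch({
    executablePath: 'C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe'
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
})();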
For me, the reason was that this address was banned in my country. I had to use a proxy to avoid this error.
You're probably behind a proxy; maybe you can contact the network/security admin in your organization.
This was the case for me, and the issue didn't appear in my home network.
I have generated a Yocto image to be used on all my target devices. When that image is running on the target devices, it must be able to update itself from a remote RPM repository over HTTPS.
To try doing that, I have added a dnf bbappend to my custom layer:
$ cat recipes-devtools/dnf/dnf_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"

SRC_URI += " \
    file://yocto-adv-rpm.repo \
"

do_install_append () {
    install -d ${D}/etc/yum.repos.d
    install -m 0600 ${WORKDIR}/yocto-adv-rpm.repo ${D}/etc/yum.repos.d/yocto-adv-rpm.repo
}

FILES_${PN} += "/etc/yum.repos.d"
This is the content of repository configuration file included by dnf bbappend recipe:
$ cat recipes-devtools/dnf/files/yocto-adv-rpm.repo
[yocto-adv-rpm]
name=Rocko Yocto Repo
baseurl=https://storage.googleapis.com/my_repo/
gpgkey=https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
enabled=1
gpgcheck=1
This repository configuration breaks the image build process. When I try to build the myimage recipe, I always get this error:
ERROR: myimage-1.0-r0 do_rootfs: [log_check] myimage: found 1 error message in the logfile:
[log_check] Failed to synchronize cache for repo 'yocto-adv-rpm', disabling.
ERROR: myimage-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/yocto/yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/log.do_rootfs.731
ERROR: Task (/home/yocto/yocto/sources/meta-mylayer/recipes-images/myimage.bb:do_rootfs) failed with exit code '1'
However, when I replace "https" with "http" in the "baseurl" variable:
baseurl=http://storage.googleapis.com/my_repo/
then the myimage recipe builds fine.
The host machine can download files from the https repository using wget:
$ wget https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
The previous command works fine, so the problem is not related to the host machine. I think it must be something related to Google's certificates and the Yocto build environment.
I found some relevant information inside this file:
yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/dnf.librepo.log
The relevant part:
15:56:41 lr_download: Downloading started
15:56:41 check_transfer_statuses: Transfer finished: repodata/repomd.xml (Effective url: https://storage.googleapis.com/my_repo/repodata/repomd.xml)
15:56:41 check_finished_transfer_status: Fatal error - Curl code (77): Problem with the SSL CA cert (path? access rights?) for https://storage.googleapis.com/my_repo/repodata/repomd.xml [error setting certificate verify locations:
CAfile: /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/ca-certificates.crt
CApath: none]
15:56:41 lr_yum_download_repomd: repomd.xml download was unsuccessful
Can anyone provide some useful advice to help fix this?
Thank you in advance for your time! :-)
I finally fixed my issue by completely removing my dnf bbappend recipe from my custom layer and adding this variable to my distro.conf file:
PACKAGE_FEED_URIS = "https://storage.googleapis.com/my_repo/"
After that, at the end of the build process the image contains a valid /etc/yum.d/oe-remote-repo file and all the necessary stuff to manage it. There is no need to copy "ca-certificates.crt" manually at all.
Also, it's important to execute this command after finishing the build of the image:
$ bitbake package-index
This command generates a "repodata" directory within the package feed, which the target device needs once it uses the repo to update packages with the dnf client.
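On the target device itself, pulling updates from the feed is then the standard dnf flow (assuming the device can reach the repo over HTTPS):

dnf makecache
dnf upgrade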
I found a temporary hack to fix my issue:
$ cp /etc/ssl/certs/ca-certificates.crt /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/
After that, I was finally able to build the image using the "https" repo.
Now I am in the process of fixing this issue in the right way. I'll come back with the final solution.
I'm trying to use gulp to handle my rsync task from a local dev environment to a running Vagrant machine.
The gulp task is set up like this:
var gulp = require('gulp'); // required here for completeness; the rest of the gulpfile already uses it
var rsync = require('rsyncwrapper').rsync;
var secrets = require('./secrets.json');

// ###Rsync
// Ran from gulp
gulp.task('deploy', function() {
  rsync({
    ssh: true,
    src: './website/',
    dest: secrets.servers.dev.rsyncDest,
    recursive: true,
    syncDest: true,
    exclude: ['node_modules'],
    args: ['--verbose'],
    privateKey: './.vagrant/machines/default/virtualbox/private_key',
    onStdout: function (data) {
      console.log(data.toString());
    }
  }, function (error, stdout, stderr, cmd) {
    if (error) {
      // failed
      console.log(error.message);
    } else {
      // success
      console.log("folder synced!");
    }
  });
});
The secrets.json contains the path to my destination vagrant machine:
{
  "servers": {
    "dev": {
      "rsyncDest": "vagrant@192.168.2.101:/opt/webiste"
    }
  }
}
The rest of my gulpfile works without issue, and a normal vagrant rsync also transfers the files across.
However, when I run my deploy task, I simply get: rsync exited with code 12.
After some googling I found that this means the protocol stream failed, but I am unsure even where to begin fixing this issue.
Any help would be greatly appreciated!
I encountered the same problem. It occurred when I destroyed and rebuilt my Vagrant machine but left the old key in the ~/.ssh/known_hosts file. The error message from gulp-rsync isn't helpful. If you try to log in directly you'll get an error like this:
>> ssh vagrant@192.168.50.104
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
87:9b:39:02:b8:96:6b:21:01:fa:b5:42:5f:0a:0b:f7.
Please contact your system administrator.
Add correct host key in /Users/me/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/me/.ssh/known_hosts:36
RSA host key for 192.168.50.104 has changed and you have requested strict checking.
Host key verification failed.
To fix it, edit the ~/.ssh/known_hosts file and remove the line for your Vagrant machine's host key. In my case, it was the line beginning with 192.168.50.104.
The next time you run gulp-rsync, you'll have to type yes at the prompt.
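Alternatively, instead of editing ~/.ssh/known_hosts by hand, you can remove the stale entry with standard OpenSSH tooling:

ssh-keygen -R 192.168.50.104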