I am trying to upload a project to Firebase Hosting. I recently reinstalled macOS Sierra, so I have fresh installs of Node and firebase-tools.
However, I get the following error when running firebase deploy --debug:
[2017-08-10T21:54:40.043Z] ----------------------------------------------------------------------
[2017-08-10T21:54:40.046Z] Command: /usr/local/bin/node /usr/local/bin/firebase deploy --debug
[2017-08-10T21:54:40.047Z] CLI Version: 3.9.2
[2017-08-10T21:54:40.047Z] Platform: darwin
[2017-08-10T21:54:40.047Z] Node Version: v8.3.0
[2017-08-10T21:54:40.048Z] Time: Thu Aug 10 2017 16:54:40 GMT-0500 (CDT)
[2017-08-10T21:54:40.048Z] ----------------------------------------------------------------------
[2017-08-10T21:54:40.058Z] > command requires scope: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"]
[2017-08-10T21:54:40.059Z] > authorizing via signed-in user
[2017-08-10T21:54:40.061Z] >>> HTTP REQUEST GET https://admin.firebase.com/v1/projects/dornfeld-design
Thu Aug 10 2017 16:54:40 GMT-0500 (CDT)
[2017-08-10T21:54:40.471Z] <<< HTTP RESPONSE 200 server=nginx, date=Thu, 10 Aug 2017 21:54:40 GMT, content-type=application/json; charset=utf-8, content-length=121, connection=close, x-content-type-options=nosniff, strict-transport-security=max-age=31536000; includeSubdomains, cache-control=no-cache, no-store
[2017-08-10T21:54:40.472Z] >>> HTTP REQUEST GET https://admin.firebase.com/v1/database/name-of-project/tokens
Thu Aug 10 2017 16:54:40 GMT-0500 (CDT)
[2017-08-10T21:54:41.061Z] <<< HTTP RESPONSE 200 server=nginx, date=Thu, 10 Aug 2017 21:54:40 GMT, content-type=application/json; charset=utf-8, content-length=429, connection=close, x-content-type-options=nosniff, strict-transport-security=max-age=31536000; includeSubdomains, cache-control=no-cache, no-store
What's the best way to proceed? I have tried reinstalling firebase-tools with three different versions of Node.
Additional comments:
I had to install firebase-tools with the --unsafe-perm flag to get it to install.
firebase init only created a .firebaserc file and a firebase.json file containing just {}.
I manually created the firebase.json file with the proper public folder, etc.
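For reference, the install that finally succeeded was along these lines (a sketch; whether sudo is needed depends on your npm prefix):
sudo npm install -g firebase-tools --unsafe-perm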
Steps taken to correct the problem:
Uninstall Node: cd /usr/local/lib, then sudo rm -rf node_modules
Install Homebrew: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Install Yarn: brew install yarn
Install the Firebase CLI: yarn add firebase-tools
Go to the project folder, firebase login, firebase init
a. init creates the .firebaserc file correctly
b. firebase.json is still blank, i.e. {}
Create firebase.json manually:
{
  "hosting": {
    "public": "build/unbundled",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [{
      "source": "**",
      "destination": "/index.html"
    }]
  }
}
firebase deploy now works properly.
This is not a duplicate; at least I have the right version of Composer, and I have also read the three Stack Overflow Composer questions.
I tried to deploy the business network definition. The Basic-Sample-Network package came from the git repository, as the basic-sample-network.bna archive file. Here's the command:
composer network deploy -p hlfv1.json -a basic-sample-network.bna -i admin -s adminpw
The error (seems to be a generic error message):
Identifier: basic-sample-network#0.1.3
Description: The Hello World of Hyperledger Composer samples
events.js:160
throw er; // Unhandled 'error' event
^
Error: event message must be properly signed by an identity from the same organization as the peer: [failed deserializing event creator: [Expected MSP ID Org1MSP, received ]]
at ClientDuplexStream._emitStatusIfDone (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._receiveStatus (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:169:8)
at /usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:634:14
Here's my environment:
composer -v :
composer-cli v0.10.0
composer-admin v0.10.0
composer-client v0.10.0
composer-common v0.10.0
composer-runtime-hlf v0.10.0
composer-runtime-hlfv1 v0.10.0
npm -v : 3.10.10
hlfv1.json profile:
{
  "type": "hlfv1",
  "orderers": [
    { "url": "grpc://localhost:7050" }
  ],
  "ca": {
    "url": "http://localhost:7054",
    "name": "ca.org1.example.com"
  },
  "peers": [
    {
      "requestURL": "grpc://localhost:7051",
      "eventURL": "grpc://localhost:7053"
    }
  ],
  "keyValStore": "${HOME}/.composer-credentials",
  "channel": "composerchannel",
  "mspID": "Org1MSP",
  "timeout": "300"
}
Note: I used the fabric-tools scripts to start the fabric and to generate the profile (hlfv1.json).
docker ps:
570ae25a586e hyperledger/fabric-peer:x86_64-1.0.0 "peer node start -..." 30 minutes ago Up 30 minutes 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
513431e5d217 hyperledger/fabric-ca:x86_64-1.0.0 "sh -c 'fabric-ca-..." 31 minutes ago Up 31 minutes 0.0.0.0:7054->7054/tcp ca.org1.example.com
2e7bf444481d hyperledger/fabric-couchdb:x86_64-1.0.0 "tini -- /docker-e..." 31 minutes ago Up 31 minutes 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb
5d5ba67cc602 hyperledger/fabric-orderer:x86_64-1.0.0 "orderer" 31 minutes ago Up 30 minutes 0.0.0.0:7050->7050/tcp orderer.example.com
If you follow the developer tutorial you will see that the id PeerAdmin must be used to deploy the network.
https://hyperledger.github.io/composer/tutorials/developer-guide.html
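For example, re-running the deploy from the question with the PeerAdmin identity looks something like this (a sketch; the secret is arbitrary for PeerAdmin in this setup):
composer network deploy -p hlfv1.json -a basic-sample-network.bna -i PeerAdmin -s randomString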
Problem fixed.
The confusion was with the connectionProfileName (the -p option). Based on reading the documentation, I thought I had to provide a file with connection information (even though the provided profile contains all the necessary information); I didn't know it was actually picking up the profile from the ~/.composer-connection-profiles folder based on the name provided in the -p option.
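So the fix is simply to install the profile under that folder and refer to it by name; a sketch (assuming the conventional connection.json filename):
mkdir -p ~/.composer-connection-profiles/hlfv1
cp hlfv1.json ~/.composer-connection-profiles/hlfv1/connection.json
composer network deploy -p hlfv1 -a basic-sample-network.bna -i PeerAdmin -s randomString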
I'm trying to set up the development environment for Composer by following the steps in the tutorial. I was able to generate the .bna file successfully and use it in the online playground. But when I try to deploy the .bna file to the Fabric v1.0 running locally, I get the error below.
ubuntu@ip-172-31-8-83:~/fabric-tools/my-network$ cd dist
ubuntu@ip-172-31-8-83:~/fabric-tools/my-network/dist$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString
Deploying business network from archive: my-network.bna
Business network definition:
Identifier: my-network#0.0.1
Description: The Hello World of Hyperledger Composer samples
events.js:160
throw er; // Unhandled 'error' event
^
Error: event message must be properly signed by an identity from the same organization as the peer: [failed deserializing event creator: [Expected MSP ID Org1MSP, received ]]
at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.11.0/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._receiveStatus (/home/ubuntu/.nvm/versions/node/v6.11.0/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:169:8)
at /home/ubuntu/.nvm/versions/node/v6.11.0/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:634:14
My docker images are as follows:
ubuntu@ip-172-31-8-83:~/fabric-tools/my-network/dist$ docker ps -a
CONTAINER ID   IMAGE                                          COMMAND                  CREATED          STATUS          PORTS                                            NAMES
80c9949edf73   hyperledger/fabric-peer:x86_64-1.0.0-beta      "peer node start -..."   19 minutes ago   Up 19 minutes   0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp   peer0.org1.example.com
126f6381cc90   hyperledger/fabric-couchdb:x86_64-1.0.0-beta   "tini -- /docker-e..."   19 minutes ago   Up 19 minutes   4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp       couchdb
924081546fa1   hyperledger/fabric-ca:x86_64-1.0.0-beta        "sh -c 'fabric-ca-..."   19 minutes ago   Up 19 minutes   0.0.0.0:7054->7054/tcp                           ca.example.com
d13f2c8e8421   hyperledger/fabric-orderer:x86_64-1.0.0-beta   "orderer"                19 minutes ago   Up 19 minutes   0.0.0.0:7050->7050/tcp                           orderer.example.com
Node version is: 4.2.6
npm version is: 3.5.2
Docker version 17.06.0-ce, build 02c1d87
Can someone tell me how to resolve this?
You need to be at a newer Node level. You're currently at 4.2.6; you need to be at Node version 6.x:
https://hyperledger.github.io/composer/unstable/installing/development-tools.html
I assume you're using Ubuntu 14.04 or 16.0x LTS.
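Since the stack trace shows composer-cli installed under ~/.nvm, one way to move to 6.x is via nvm (a sketch; assumes nvm is already installed):
nvm install 6                  # installs the latest 6.x release
nvm use 6
node -v                        # should now print v6.x
npm install -g composer-cli    # reinstall the CLI under the new Node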
I am trying to send an email from my terminal (bash). I did:
echo "text" | mail -vs "subject" "myself#address.com"
The verbose flag returns
Mail Delivery Status Report will be mailed to <Me>
But neither the Mail Delivery Status Report nor the email is received in my inbox. The Mail Delivery Status Report seems to be saved in the file /var/mail/Me. Here is the last report I received:
--EAB82C90F6D.1445818910/Remis-MacBook-Pro.local
Content-Description: Delivery report
Content-Type: message/delivery-status
Reporting-MTA: dns; Remis-MacBook-Pro.local
X-Postfix-Queue-ID: EAB82C90F6D
X-Postfix-Sender: rfc822; remi@Remis-MacBook-Pro.local
Arrival-Date: Sun, 25 Oct 2015 17:21:49 -0700 (PDT)
Final-Recipient: rfc822; myself@address.com
Action: delayed
Status: 4.4.1
Diagnostic-Code: X-Postfix; delivery temporarily suspended: connect to
alt2.gmail-smtp-in.l.google.com[173.194.219.26]:25: Connection refused
--EAB82C90F6D.1445818910/Remis-MacBook-Pro.local
Content-Description: Message Headers
Content-Type: text/rfc822-headers
Return-Path: <remi@Remis-MacBook-Pro.local>
Received: by Remis-MacBook-Pro.local (Postfix, from userid 501)
id EAB82C90F6D; Sun, 25 Oct 2015 17:21:49 -0700 (PDT)
To: myself@address.com
Message-Id: <20151026002149.EAB82C90F6D@Remis-MacBook-Pro.local>
Date: Sun, 25 Oct 2015 17:21:49 -0700 (PDT)
From: remi@Remis-MacBook-Pro.local (Remi)
--EAB82C90F6D.1445818910/Remis-MacBook-Pro.local--
What is going wrong?
I am on Mac OS X El Capitan, version 10.11.
Mixing these instructions and these instructions, I added
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_use_tls = yes
smtp_sasl_mechanism_filter = plain
to /etc/postfix/main.cf
I also created /etc/postfix/sasl_passwd and wrote
[smtp.gmail.com]:587 username@gmail.com:password
in it. I then ran
sudo chmod 600 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
sudo launchctl stop org.postfix.master
sudo launchctl start org.postfix.master
and changed my google account settings to allow "less secure apps" (see instructions here). And it worked!
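To verify the relay after these changes, something like the following helps (a sketch; the log location can vary by OS X version):
echo "test body" | mail -s "test subject" myself@address.com
mailq                        # the queue should drain once the relay accepts mail
tail -f /var/log/mail.log    # watch for the handshake with smtp.gmail.com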
I tried running Hubot on Heroku, but I gave up because I'd prefer not to give out my credit card number.
Instead, I tried running Hubot on my mac. It gave an error, like this:
$ ./bin/hubot
hubot-sample> [Fri Jun 05 2015 11:41:52 GMT+0900 (JST)] ERROR hubot-heroku-keepalive included, but missing HUBOT_HEROKU_KEEPALIVE_URL. `heroku config:set HUBOT_HEROKU_KEEPALIVE_URL=$(heroku apps:info -s | grep web_url | cut -d= -f2)`
[Fri Jun 05 2015 11:41:52 GMT+0900 (JST)] INFO Using default redis on localhost:6379
I think this error occurred because Hubot is looking for Heroku. How do I remove this?
Try this:
cd <your-hubot-project-dir>
npm uninstall hubot-heroku-keepalive --save
Then find and remove the line that contains "hubot-heroku-keepalive" from the file external-scripts.json.
Run bin/hubot again.
Some of our Docker images require downloading large binaries from a Nexus server or from the Internet; Nexus is responsible for distributing our Java, Node.js, and mobile (Android and iOS) apps. For instance, we use either the ADD or the RUN instruction to download:
RUN curl -o docker https://get.docker.com/builds/Linux/x86_64/docker-latest
Considering that docker build walks the instructions and invalidates its cache based on, among other things, the mtime of files in the build context (https://stackoverflow.com/a/26612694/433814), what approach takes advantage of the caching mechanism while building those images and avoids re-downloading an entire binary?
A related question: if the resource changes, Docker will not download the latest version.
Solution
Docker will NOT look at any caching mechanism before downloading with RUN curl or ADD; it will repeat the download step. However, Docker invalidates the cache if the mtime of a file in the build context has changed (https://stackoverflow.com/a/26612694/433814), among other things (https://github.com/docker/docker/blob/master/pkg/tarsum/versioning.go#L84).
Here's a strategy I've been working on to solve this problem when building Dockerfiles whose dependencies come from file storage or a repository such as Nexus or Amazon S3: retrieve the ETag of the resource, cache it, and update the mtime of a cache-flag file (https://gist.github.com/marcellodesales/721694c905dc1a2524bc#file-s3update-py-L18). It follows the approach used in Python (https://stackoverflow.com/a/25307587) and Node.js (http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/) projects.
Here's what we can do (a Dockerfile sketch follows this list):
Get the ETag of the resource and save it outside of the Dockerfile.
Use an ADD instruction to add the cacheable file prior to the download.
Docker will check the file's mtime metadata to decide whether or not to invalidate the cache.
Use a RUN instruction as usual to download the content.
If the previous instruction was invalidated, Docker will re-download the file; if not, the cache will be used.
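Put together, the Dockerfile this strategy assumes looks roughly like the following sketch (written as a shell heredoc so it can be pasted as-is; the base image matches the build logs below):
cat > Dockerfile <<'EOF'
FROM core.registry.docker.corp.intuit.net/runtime/java:7
WORKDIR /opt
# cache flag: this layer is invalidated only when docker.etag changes
ADD docker.etag /tmp/docker.etag
RUN cat /tmp/docker.etag
# runs again only when the layer above was invalidated
RUN curl -o docker https://get.docker.com/builds/Linux/x86_64/docker-latest
EOF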
Here's a setup to demo this strategy:
Example
Create a web server that handles HEAD requests and returns an ETag header, as real servers usually do.
This simulates the Nexus or S3 file storage.
Build an image and verify that the dependent layer downloads the resource the first time,
caching the current value of the ETag.
Rebuild the image and verify that the dependent layer uses the cached value.
Change the ETag value returned by the web server handler to simulate a change.
In addition, persist the change iff the file has changed (in this case, yes).
Rebuild the image and verify that the dependent layer is invalidated, triggering a download.
Rebuild the image again and verify that the cache was used.
1. Node.js server
Suppose you have the following Node.js server serving files. Let's implement a HEAD operation and return a value.
// You'll see the client-side's output on the console when you run it.
var restify = require('restify');
// Server
var server = restify.createServer({
name: 'myapp',
version: '1.0.0'
});
server.head("/", function (req, res, next) {
res.writeHead(200, {'Content-Type': 'application/json; charset=utf-8',
'ETag': '"{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8555}}"'});
res.end();
return next();
});
server.get("/", function (req, res, next) {
res.writeHead(200, {'Content-Type': 'application/json; charset=utf-8',
'ETag': '"{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8555}}"'});
res.write("The file to be downloaded");
res.end();
return next();
});
server.listen(8181, function () {  // port matches the output below
console.log('%s listening at %s', server.name, server.url);
});
// Client
var client = restify.createJsonClient({
url: 'http://localhost:8181',
version: '~1.0'
});
client.head('/', function (err, req, res, obj) {
if (err) console.log("An error occurred:", err);
else console.log('HEAD / returned headers: %j', res.headers);
});
Executing this will give you:
mdesales@ubuntu [11/27/2014 11:10:49] ~/dev/icode/fuego/interview (feature/supportLogAuditor *) $ node testserver.js
myapp listening at http://0.0.0.0:8181
HEAD / returned headers: {"content-type":"application/json; charset=utf-8",
"etag":"\"{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8555}}\"",
"date":"Thu, 27 Nov 2014 19:10:50 GMT","connection":"keep-alive"}
2. Build an image based on ETag value
Consider the following build script that caches the ETag Header in a file.
#!/bin/sh
# Delete the existing first, and get the headers of the server to a file "headers.txt"
# Grep the ETag to a "new-docker.etag" file
# If the file exists, verify if the ETag has changed and/or move/modify the mtime of the file
# Proceed with the "docker build" as usual
rm -f new-docker.etag
curl -I -D headers.txt http://192.168.248.133:8181/ && \
grep -o 'ETag[^*]*' headers.txt > new-docker.etag && \
rm -f headers.txt
if [ ! -f docker.etag ]; then
cp new-docker.etag docker.etag
else
old=$(cat docker.etag)
new=$(cat new-docker.etag)
echo "Old ETag = $old"
echo "New ETag = $new"
if [ "$old" != "$new" ]; then
mv new-docker.etag docker.etag
touch -t 200001010000.00 docker.etag
fi
fi
docker build -t platform.registry.docker.corp.intuit.net/container/mule:3.4.1 .
3. Rebuilding and using cache
Building this would result in the following, considering the current cache is used.
mdesales@ubuntu [11/27/2014 11:54:08] ~/dev/github-intuit/docker-images/platform/mule-3.4 (master) $ ./build.sh
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8555}}"
Date: Thu, 27 Nov 2014 19:54:16 GMT
Connection: keep-alive
Old ETag = ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8555}}"
New ETag = ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8555}}"
Sending build context to Docker daemon 51.71 kB
Sending build context to Docker daemon
Step 0 : FROM core.registry.docker.corp.intuit.net/runtime/java:7
---> 3eb1591273f5
Step 1 : MAINTAINER Marcello_deSales@intuit.com
---> Using cache
---> 9bb8fff83697
Step 2 : WORKDIR /opt
---> Using cache
---> 3e3c96d96fc9
Step 3 : ADD docker.etag /tmp/docker.etag
---> Using cache
---> db3f82289475
Step 4 : RUN cat /tmp/docker.etag
---> Using cache
---> 0d4147a5f5ee
Step 5 : RUN curl -o docker https://get.docker.com/builds/Linux/x86_64/docker-latest
---> Using cache
---> 6bd6e75be322
Successfully built 6bd6e75be322
4. Simulating the ETag change
Changing the value of the ETag on the server and restarting the server to simulate an update results in the cache-flag file being updated and the cache invalidated. For instance, the ETag was changed to "465fb0d9b9f143ad691c7c3bcf3801b47284f8333". Rebuilding triggers a new download because the ETag file was updated, and Docker verifies that during the ADD instruction. Here, step #5 will run again.
mdesales@ubuntu [11/27/2014 11:54:16] ~/dev/github-intuit/docker-images/platform/mule-3.4 (master) $ ./build.sh
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8333}}"
Date: Thu, 27 Nov 2014 19:54:45 GMT
Connection: keep-alive
Old ETag = ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8555}}"
New ETag = ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8333}}"
Sending build context to Docker daemon 50.69 kB
Sending build context to Docker daemon
Step 0 : FROM core.registry.docker.corp.intuit.net/runtime/java:7
---> 3eb1591273f5
Step 1 : MAINTAINER Marcello_deSales@intuit.com
---> Using cache
---> 9bb8fff83697
Step 2 : WORKDIR /opt
---> Using cache
---> 3e3c96d96fc9
Step 3 : ADD docker.etag /tmp/docker.etag
---> ac3b200c8cdc
Removing intermediate container 4cf0040dbc43
Step 4 : RUN cat /tmp/docker.etag
---> Running in 4dd38d30549a
ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8333}}"
---> 4fafbeac2180
Removing intermediate container 4dd38d30549a
Step 5 : RUN curl -o docker https://get.docker.com/builds/Linux/x86_64/docker-latest
---> Running in de920c7a2e28
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13.5M 100 13.5M 0 0 1361k 0 0:00:10 0:00:10 --:--:-- 2283k
---> 95aff324da85
Removing intermediate container de920c7a2e28
Successfully built 95aff324da85
5. Reusing the Cache again
Considering that the ETag hasn't changed, the cache-flag file remains the same and Docker does a super fast build using the cache.
mdesales@ubuntu [11/27/2014 11:54:56] ~/dev/github-intuit/docker-images/platform/mule-3.4 (master) $ ./build.sh
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8333}}"
Date: Thu, 27 Nov 2014 19:54:58 GMT
Connection: keep-alive
Old ETag = ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8333}}"
New ETag = ETag: "{SHA1{465fb0d9b9f143ad691c7c3bcf3801b47284f8333}}"
Sending build context to Docker daemon 51.71 kB
Sending build context to Docker daemon
Step 0 : FROM core.registry.docker.corp.intuit.net/runtime/java:7
---> 3eb1591273f5
Step 1 : MAINTAINER Marcello_deSales@intuit.com
---> Using cache
---> 9bb8fff83697
Step 2 : WORKDIR /opt
---> Using cache
---> 3e3c96d96fc9
Step 3 : ADD docker.etag /tmp/docker.etag
---> Using cache
---> ac3b200c8cdc
Step 4 : RUN cat /tmp/docker.etag
---> Using cache
---> 4fafbeac2180
Step 5 : RUN curl -o docker https://get.docker.com/builds/Linux/x86_64/docker-latest
---> Using cache
---> 95aff324da85
Successfully built 95aff324da85
This strategy has been used to build Node.js, Java, and other app servers and pre-built dependencies.
I use a similar but simpler approach:
Let's say I want to add a binary named mybin that can be downloaded from: http://www.example.com/pub/mybin
I do the following in my Jenkins job:
wget -N http://www.example.com/pub/mybin
And in my Dockerfile I have:
COPY mybin /usr/local/bin/
The option -N downloads the binary only when it has changed on the server. The second time I run the wget job I get:
...
Length: 12262118 (12M) [application/octet-stream]
Server file no newer than local file ‘mybin’ -- not retrieving.
And docker build uses the cache.
If the binary changes on the server (when the time stamp changes), wget downloads the binary again which invalidates the cache for the COPY command.
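So the whole Jenkins step boils down to something like this (a sketch; the URL is the example one above, and the image tag is hypothetical):
wget -N http://www.example.com/pub/mybin    # re-downloads only if the server copy is newer
docker build -t myimage .                   # the COPY mybin layer stays cached while mybin is unchanged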