Since the 0.15.0 release and the switch to cards, has anyone figured out how to access the same network both locally via the CLI and in the Playground, against the same Fabric runtime?
So far, I have been able to install my network's runtime, then start and ping it on the Playground's Fabric after creating the PeerAdmin card using the script that came with Playground.
However, importing the newly deployed network's admin card into the Playground fails. If, instead, I deploy the network via the Playground, export the admin card, download/import it, and then try to composer ping it, the command just sits there and eventually times out. This is on macOS High Sierra. So what gives, and what can be done?
Thanks much!
If I understood your issue correctly, this is how you can solve it:
1) Create your business network in Playground.
2) Export the business network card from Playground (the download button on the card), which produces a {nameOfUser}.card file.
3) Transfer this card to wherever you have Fabric/Playground installed.
4) Run the command: composer card import -f {nameOfUser}.card
5) Your business network card should now appear under {usersHome}/.composer/cards/user#network-name.
6) Inside the /cards folder you should see two folders: "PeerAdmin", which was created if you followed the setup, and your newly imported card.
7) Copy connection.json from "PeerAdmin" into your new card's folder, replacing the one that is there. (This is the most important step.)
8) Run composer-rest-server and, when asked for the network card, use user#network-name, i.e. the folder you copied into.
With all these steps I successfully created and ran the REST server. You can now access it at http://{IP}:3000/explorer.
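Putting the steps together, a minimal command-line sketch could look like this (the file and folder names are placeholders; use whatever actually appears under ~/.composer/cards on your machine):
# import the card downloaded from Playground (the file name is an example)
composer card import -f user.card
# overwrite the imported card's connection profile with the one from the PeerAdmin card
cp ~/.composer/cards/PeerAdmin/connection.json ~/.composer/cards/user#network-name/connection.json
# start the REST server against the imported card (-c passes the card non-interactively)
composer-rest-server -c user#network-name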
You can share Business Network Cards between Playground and the CLI. However, it can be a bit more difficult if you are running Playground in a Docker container.
With the CLI you connect to your Fabric servers on localhost, and Docker deals with the port forwarding into the containers for the Fabric.
The Fabric containers (and the Playground, if you start it in a container) connect with each other on 'fake' addresses managed by docker-compose, e.g. orderer.example.com:7050.
So if you start composer-playground using the CLI, any card you export will have localhost as the addresses of the Fabric servers, and other CLI commands will be able to utilise it. If, however, you are using Playground in a container, the card will use the fake addresses and you will not be able to connect from the CLI straight away.
I assume you are using Playground in a container and hence having the problem. If you find the connection.json in a location similar to ~/.composer/cards/admin#*xxxxxx*/connection.json and edit the addresses of the Fabric servers to be localhost, you should be able to use the CLI as expected.
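For example, a quick way to make that edit from the command line (the host names below are the defaults from the Composer development Fabric, and the card folder name is a placeholder):
# rewrite the docker-compose host names in the card's connection profile to localhost
# (on macOS, sed needs an empty suffix: sed -i '' '...')
sed -i 's/orderer.example.com/localhost/g; s/peer0.org1.example.com/localhost/g; s/ca.org1.example.com/localhost/g' ~/.composer/cards/admin#my-network/connection.json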
I added my Laravel project to Docker, as shown in the first picture.
I pushed my images to a repository on Docker Hub, as shown in the second picture.
Now I want to run my application on another PC. I am trying to pull my images, but I don't know how to run the project in the browser.
Another question: is there a way to test from my own PC whether the project will work on another PC, something like a virtual machine?
If you want your project to be accessible globally, the easiest way is to use ngrok. It requires Node >= 10.19.0.
Two simple steps to share your locally running app with others:
# install ngrok globally
npm install ngrok -g
# expose a locally running application at port 8000 to the internet
ngrok http 8000
If you want to share it locally, both PCs should be connected to the same network, and you can simply enter http://{IP address of the PC running the app}:{PORT} in a browser to access the application.
To check the IP address, use the ifconfig command. Let's say it gives you 192.168.100.1, and the app is running on port 8000. To access the app on the other PC, enter the following URL in the browser:
http://192.168.100.1:8000
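As a quick sketch (using the example address and port from above):
# on the PC running the app: find its LAN address
ifconfig | grep "inet "
# on the other PC (same network): check that the app responds before opening the browser
curl http://192.168.100.1:8000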
I am getting a connection refused error while trying to deploy a new business network on the Hyperledger Composer Playground, which is running locally in my environment. The environment is behind a corporate proxy and I have set up both proxies (HTTP and HTTPS) on the system.
The exact error is mentioned below:
{"error":
{"code":"ECONNREFUSED","errno":"ECONNREFUSED","syscall":"connect","address":"151.101.0.162","port":443}}
I have followed the links given below to set up Hyperledger Fabric and the corresponding Docker containers.
https://hyperledger.github.io/composer/installing/development-tools.html
https://hyperledger.github.io/composer/tutorials/developer-guide.html
At the end of the setup, I see that there are five Docker containers running on the system:
composer, peer0.org1.example.com, orderer.example.com, ca.org1.example.com and couchdb
When I follow the developer guide and try deploying the new business network on the online Bluemix Composer Playground platform, it works. I was able to see the transactions, commodities, etc. for the sample application.
However, I see the above error when I try deploying the new business network on the local Composer Playground.
The development environment on which Composer and Fabric are hosted is Ubuntu 14.04, with Python 2.7, Node v6.11.0, npm 5.0.4 and docker-compose v1.13.0.
Can somebody kindly suggest a workaround, or direct me to links that would help resolve this problem?
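For reference, the proxies are set in the usual way, roughly like this (the proxy host and port below are placeholders):
# shell environment
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
# the Docker daemon on Ubuntu 14.04 reads the same variables from /etc/default/docker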
I'm running an Ubuntu server on Amazon EC2, and I'm using Node-RED to create an IoT project in the cloud.
I succeeded in configuring one machine so that it works for my project. My problem is when I clone this machine (creating an Amazon Machine Image of my original server and launching it as a new machine). I don't know why all the nodes that I created in the Node-RED graphical interface disappear when I clone my Ubuntu server. On the cloned server I just see a blank page when I access Node-RED, as if I had never created any node on the original server.
I think this is a problem with Node-RED, because I'm also running a Kibana instance on the same server and all of Kibana's graphical configuration is preserved on the cloned server.
Does anyone know why this is happening? Is there a specific configuration in Node-RED that I have to change to allow its graphical interface to be cloned?
Note: I know I could just export everything that I did on the original server to my cloned server using the Node-RED import/export tools... But I'm planning to clone my original server many times, so it would be better if everything were exactly the same when I clone the machine, without the need for manual work.
Node-RED stores the flow in a file in the ~/.node-red/ directory of the user running that instance; the file name is based on the host name of the machine.
e.g. on a raspberry pi the default flow file is called:
/home/pi/.node-red/flows_raspberrypi.json
So assuming that the host name gets changed when you "clone" the machine, Node-RED will not be able to find a flow file matching the new host name and will therefore start with an empty flow.
There are a few ways to work around this.
If you start Node-RED manually from the command line, you can specify the flow file as the last argument: node-red flow.json
If you are running Node-RED as a service, you can edit ~/.node-red/settings.js to include a flowFile key that holds the name of the flow file to use (see the sketch below).
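Putting that together, a quick sketch (the flow file names are just examples):
# the default flow file name embeds the host name, which explains the "empty" editor on the clone
ls ~/.node-red/flows_*.json
hostname
# one option (my assumption, not a required step): copy the original flow file to the name the new host expects
cp ~/.node-red/flows_originalhost.json ~/.node-red/flows_$(hostname).json
# or start Node-RED with an explicit flow file so the name no longer depends on the host
node-red ~/.node-red/flows.json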
After a long time of using LAMP and WAMP, I've decided to try out Docker (buying new hard drives today, so why not?).
I've managed to create containers for my website and everything works fine.
Content is updated and the database is saved to a folder (so it is more or less persistent). However, I've read that it is possible to automatically start the project's containers using the Docker integration inside PhpStorm.
And here are the problems:
I am using Windows 10 Professional with Hyper-V enabled
Docker running as a service
Docker on Windows is using NPIPE (named pipes)
PhpStorm only works with tcp:// or unix:// URIs
Tried to use socat to map the pipe to TCP and failed (either the device is busy, or it is unable to send the 'send' command, or any other error, you name it)
Tried to start the Docker daemon using a configuration file with hosts set to both the pipe and tcp - failed again (I guess that only works for Azure)
Can someone give me a link to a detailed configuration of Docker on Windows, or should I just fall back to WAMP? I REALLY don't want to install VMware or VirtualBox on my machine, nor do I want to use out-of-the-box solutions for hosting a local WAMP server (XAMPP, Open Server, Denver, etc.); I just don't trust them.
Here's what we have:
1) https://www.jetbrains.com/help/phpstorm/docker.html
2) https://www.jetbrains.com/help/phpstorm/docker-2.html
3) https://confluence.jetbrains.com/display/PhpStorm/Docker+Support+in+PhpStorm
4) https://github.com/JetBrains/phpstorm-workshop - you can check out the docker branch. This project contains some examples/tutorials you can try right inside the IDE.
If that doesn't help at all, please attach/describe the error message you're getting in the IDE.
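As an extra sanity check (an assumption about your setup, not something from the docs above): once the daemon is listening on a tcp:// endpoint, verify it from a terminal before pointing PhpStorm at it.
# check that the daemon answers on the TCP endpoint PhpStorm will use
# (2375 is the conventional unencrypted port; adjust if yours differs)
docker -H tcp://localhost:2375 version
# or point the whole PowerShell session at it and run any docker command
$env:DOCKER_HOST = "tcp://localhost:2375"
docker info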
My computer runs macOS. I'm logged into an Ubuntu machine over SSH, and from there I've SSH'ed into an OpenStack instance of Ubuntu 14.04. From this OpenStack instance I've been following a Docker Compose tutorial from the Docker docs: https://docs.docker.com/compose/gettingstarted/
I'm on Step 4, and I'm successfully running a server on http://0.0.0.0:5000/
However, I don't know how to view it in a GUI browser (Google Chrome) on my MacBook, because whenever I go to http://0.0.0.0:5000/ it says the server is not found, which makes sense because it's not running on my computer.
I read something about port forwarding, but I'm not sure that's right here. I'm fairly new, so please help!
Also, is this the right way to use an OpenStack machine, i.e. using your own computer's web browser to view the web app?
I solved it myself. It turns out that on OpenStack you need to create a security group and then add it to your instance. When you create the security group, you can add a rule for the port you want to make publicly accessible. Then you can view the web app on any computer by entering your OpenStack floating IP, followed by a colon and the public port.
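For anyone else, roughly the equivalent with the OpenStack CLI would be something like this (the group and server names are placeholders, I'm assuming port 5000 from the tutorial, and the same can be done in the Horizon dashboard):
# create a security group and allow inbound TCP on the app's port
openstack security group create web-app
openstack security group rule create --protocol tcp --dst-port 5000 web-app
# attach the group to the instance
openstack server add security group my-instance web-app
# then browse to http://<floating-ip>:5000/ from the laptop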