I've followed the documentation on how to install the Hyperledger Composer & Playground locally and it works. However, if I reboot my computer and want to restart the Hyperledger Composer playground, I don't see how to do it, other than re-downloading the docker images and starting over from scratch.
How can you restart the Playground so as to pick back up where you left off?
If you are using a Mac, just follow these steps:
Open terminal
Type this command and press enter: ./composer.sh
That's it. Hyperledger Composer will now be restarted locally.
You need to change the docker-compose.yml in order to start where you left off; otherwise your data will be lost after a reboot. See http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html and the section called "A Note on Data Persistence", which provides more detail on this.
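For illustration (a minimal sketch based on that documentation; the host paths are assumptions and should be adapted to your setup), mounting the peer's production directory and the CouchDB data directory onto the host is what preserves the ledger data across reboots:

# in the peer container's service definition in docker-compose.yml
volumes:
  - /var/hyperledger/peer0:/var/hyperledger/production

# in the CouchDB container's service definition
volumes:
  - /var/hyperledger/couchdb0:/opt/couchdb/data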
After a reboot you just need to go to the fabric-tools directory:
run ./startFabric.sh
run composer-playground
If you followed the tutorial, your system has already downloaded all the required Docker images. The
docker images
command will show you all the downloaded images.
You just need to go to the fabric-dev-servers folder and then start the network using
./teardownFabric.sh
./startFabric.sh
./createPeerAdminCard.sh
Then verify that all the containers are running using the command
docker ps
Then you can simply start Composer Playground with
composer-playground
Head to http://localhost:8080
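Putting the steps together as one sequence (a sketch; note that ./teardownFabric.sh removes the existing network, so only run it if you want a clean slate rather than your previous state):

cd fabric-dev-servers
./teardownFabric.sh        # optional: wipes the old network for a clean start
./startFabric.sh           # brings the Fabric containers back up
./createPeerAdminCard.sh   # recreates the PeerAdmin business network card
docker ps                  # verify that all containers are running
composer-playground        # then head to http://localhost:8080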
Environment: Windows 11 + Docker Desktop 4.12.0
I've been digging into this the entire morning. There doesn't seem to be a way of installing Laravel in a Docker image; you must curl it in your WSL2 distro. Trying the command curl -s https://laravel.build/example-app | bash on a Docker container's command line immediately returns the dreaded "Docker is not running" error message.
Some suggest that I need to turn on my "WSL2 integration" checkbox in Docker Desktop settings, but that didn't help.
So what if I download the official Ubuntu image from Docker Hub and run it as a container? Can I download (curl) Laravel inside that container?
And while we are here: how does the Bitnami Laravel image differ from the standard procedure given in the Laravel documentation? I like it because I can download it as a normal Docker image and create as many containers as I want, but I'm unsure how this connects to or contrasts with the official Laravel method.
If it helps anyone: the curl -s https://laravel.build/example-app | bash command downloads several Docker images, including MariaDB, Redis, mailhog, etc., and therefore needs Docker to be running on the host machine (which is not available inside a container; that's why you can't run the curl command there). Once everything is downloaded, it creates a Docker Compose setup with (no pun intended) one container for each of these images. You can also customize the list of images/containers that your Laravel application needs by passing the list of services in the curl command, like this: curl -s https://laravel.build/example-app?with=mysql,redis. Thanks @apokryfos for the helpful comment. Once these containers are running, you can use VS Code (together with Git) to connect to them and do your development work.
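For example (a sketch; example-app and the service list are placeholders, and quoting the URL keeps the shell from mangling the ?with= query string):

curl -s "https://laravel.build/example-app?with=mysql,redis" | bash
cd example-app
./vendor/bin/sail up -d    # start the containers defined in the generated docker-compose.yml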
Of course, you can still use the old-school method of Laravel development: install XAMPP or one of its cousins on your machine and then use the composer create-project command from the terminal to create the project on your local file system. Then host your database and website on locally running instances of Apache and MariaDB.
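A minimal sketch of that route (assuming PHP and Composer are already installed locally; example-app is a placeholder):

composer create-project laravel/laravel example-app
cd example-app
php artisan serve          # quick local server; or point Apache's document root at example-app/public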
I have yet to check out Bitnami Laravel image and how it works.
I have entered the following command into the WSL terminal:
docker compose build --no-cache && docker compose up
This is what happened:
I have not downloaded anything outside of Docker on this computer and I have cloned this "backend" from the repository.
I have no experience in Docker or Laravel.
What methods should I start with to fix this?
The -g option in the groupadd command needs to be numerical; you can't use the word sail.
See Ubuntu's documentation about that command and option here.
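If this is Laravel Sail's Dockerfile (an assumption based on the sail group name), the usual cause is that the WWWGROUP build argument is empty, so groupadd ends up parsing the group name as its -g value. Setting numeric ids in your .env should fix it, for example:

# .env — WWWUSER/WWWGROUP are the variables Sail's docker-compose.yml passes to the build
WWWUSER=1000
WWWGROUP=1000

# which makes the Dockerfile's groupadd receive a numeric gid:
# groupadd --force -g 1000 sail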
When I first install Docker I can choose to open the Quick Start Guide interactive window (and once done with the quick start guide I can get there again by right-clicking the Docker icon and selecting Quick Start Guide from the menu). In this window/environment I can do anything Docker-related, as shown:
I can pull containers and run containers right there on that quick start guide command line. However, when I open a terminal (be it PS, Git Bash, or a normal Windows CMD) I can't seem to run docker there, as shown:
So I'm not sure what I'm missing. Thanks for any feedback!
I found the solution, at least for my situation. The problem was that the environment variable DOCKER_HOST (left over from Docker Toolbox) was being set, which is not the case when I use the quick start guide. The solution was to remove this environment variable from the system/user settings, and voilà, I could run docker in the terminal.
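In case it helps anyone else, this is roughly how to check for and remove the variable from PowerShell (a sketch; open a new terminal afterwards so the change takes effect):

echo $env:DOCKER_HOST                                                # check whether it is set
[Environment]::SetEnvironmentVariable('DOCKER_HOST', $null, 'User')  # remove it from the persistent user settings
Remove-Item Env:DOCKER_HOST                                          # remove it from the current session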
I installed Docker using the instructions here, downloading from Docker Hub:
https://docs.docker.com/docker-for-mac/install/
But when I run docker-compose I get this error
pyenv: docker-compose: command not found
The `docker-compose' command exists in these Python versions:
3.6.5/envs/myenv
Also, docker-compose is available under /Users
which docker-compose
/Users/<username>/.pyenv/shims/docker-compose
This link says docker-compose need not be installed explicitly on Mac, as it is part of Docker Desktop for Mac:
https://docs.docker.com/compose/install/
Is something wrong with my installation?
I ran into the same issue on macOS today. It turned out that you need to run the installed app once; it does some additional downloading and setup, which includes setting up your path variables.
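Concretely, something like this (a sketch, assuming the default install location):

open -a Docker            # launch Docker Desktop once so it finishes its setup
which docker-compose      # should now resolve to Docker Desktop's copy rather than a pyenv shim
docker-compose version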
docker-compose is a utility that is now a subcommand of docker on Mac,
so instead of docker-compose up, it's now docker compose up.
If you install Docker from the official website, then Compose comes bundled with it on Mac, so you just need to upgrade; the documentation there covers this.
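A quick way to check which flavor you have (both commands are harmless to try):

docker compose version     # Compose V2, bundled with current Docker Desktop
docker-compose version     # the legacy standalone V1 binary, if installed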
I am trying to use the Cloud9 IDE on a server. I added the SSH key, and when I try to SSH into the server the error message
Could not execute node.js on root@xxx.xxx.xxx.xxx
appears.
I have nodejs installed on the server, v0.10.25
You need to install the package "nodejs-legacy".
apt-get install nodejs-legacy
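For background: nodejs-legacy simply provides /usr/bin/node as an alias for /usr/bin/nodejs (Debian renamed the binary to avoid a name clash), and Cloud9 looks for node. A manual symlink achieves the same thing:

# equivalent to installing nodejs-legacy
sudo ln -s /usr/bin/nodejs /usr/bin/node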
The SSH dialog allows you to set the path to the Node.js binary. That should solve your issue.
I found it necessary to scroll down and click on the Advanced tab. Then I entered /usr/bin/nodejs, taken from the "which nodejs" output in my SSH session. This worked for me even though the documentation states AWS tries to do this by default. That left me in the AWS file workspace on the server I SSHed into, as desired.
nodejs was not installed on my EC2 instance, so I installed it using the instructions from the following link, and it worked perfectly: Tutorial: Setting Up Node.js on an Amazon EC2 Instance
sudo apt-get install nodejs worked for me