Cannot ping business network after restart - hyperledger-composer

I followed the instructions on https://hyperledger.github.io/composer/latest/installing/development-tools.html and https://hyperledger.github.io/composer/latest/tutorials/developer-tutorial.html. After a restart of the Ubuntu server I started the fabric again and tried to ping the network:
cd ~/fabric-dev-servers
export FABRIC_VERSION=hlfv12
./startFabric.sh
$ composer network ping --card admin@tutorial-network
Error: Error trying to ping. Error: make sure the chaincode tutorial-network has been successfully instantiated and try again: getccdata composerchannel/tutorial-network responded with error: could not find chaincode with name 'tutorial-network'
Command failed

As advised by @david_k, I also installed and started the network, which works well:
$ cd ~/fabric-dev-servers/fabric-scripts/hlfv12/composer
$ docker-compose stop
$ docker-compose start
$ cd ~/tutorial-network
$ composer network install --card PeerAdmin@hlfv1 --archiveFile tutorial-network@0.0.1.bna
$ composer network start --networkName tutorial-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
$ composer network ping --card admin@tutorial-network
$ composer-rest-server

Related

Laravel Dusk running in Ubuntu 20.04 Error Chrome failed to start: exited abnormally

When I run
$ php artisan dusk tests/Browser/ExampleTest.php
It prompts the error:
Tests\Browser\ExampleTest::testExample
Facebook\WebDriver\Exception\UnknownServerException: unknown error:
Chrome failed to start: exited abnormally (unknown error:
DevToolsActivePort file doesn't exist) (The process started from
chrome location /snap/bin/chromium is no longer running, so
ChromeDriver is assuming that Chrome has crashed.) (Driver info:
chromedriver=2.45.615279
(12b89733300bd268cff3b78fc76cb8f3a7cc44e5),platform=Linux
5.4.0-107-generic x86_64)
I checked the ChromeDriver; it is installed correctly.
$ php artisan dusk:chrome-driver
ChromeDriver binary successfully installed for version 100.0.4896.60.
Then I googled the error. The answers said to check the Chrome version:
$ /usr/bin/chromium-browser --version
/usr/bin/chromium-browser: 12: xdg-settings: not found cannot create
user data directory: /home/shiro/snap/chromium/1952: Permission
denied
My goal is to run Laravel Dusk on Ubuntu 20.04.
Make sure you REMOVE chromium-browser and INSTALL the snap stable version that MATCHES your Laravel Dusk ChromeDriver.
Below are the steps I ran:
First I tried to fix the Chromium issue by installing via snap, which led to the next error:
$ sudo snap refresh --edge chromium
error: cannot communicate with server: Post
http://localhost/v2/snaps/chromium: dial unix /run/snapd.socket:
connect: no such file or directory
To solve that error, I needed to update the packages:
$ sudo add-apt-repository ppa:saiarcot895/chromium-beta
$ sudo apt-get update
$ sudo apt-get install chromium-browser
Finally, it showed the version. However, it did not match my Laravel Dusk ChromeDriver, so my first issue remained and I still couldn't run Dusk:
$ /usr/bin/chromium-browser --version
Chromium 97.0.4692.20 Ubuntu 20.04
Next, install the snap version of Chromium and remove the apt chromium-browser:
$ systemctl start snapd.service
$ sudo snap install chromium
$ sudo apt remove chromium-browser
IMPORTANT NOTE
$ /usr/bin/chromium-browser --version
-bash: /usr/bin/chrome: No such file or directory
Boom~ It works~
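The root cause above was a mismatch between the browser and ChromeDriver major versions. A small sketch for comparing the two in a script; the version strings below are illustrative stand-ins for the output of `chromium --version` and `chromedriver --version`:

```shell
# Extract and compare the major versions of the browser and ChromeDriver.
# These strings are placeholders; in practice capture them from the
# actual --version output of each binary.
browser_version="Chromium 100.0.4896.60 snap"
driver_version="ChromeDriver 100.0.4896.60"
browser_major=$(echo "$browser_version" | grep -oE '[0-9]+' | head -n1)
driver_major=$(echo "$driver_version" | grep -oE '[0-9]+' | head -n1)
if [ "$browser_major" = "$driver_major" ]; then
  echo "versions match (major $browser_major)"
else
  echo "mismatch: browser $browser_major vs driver $driver_major"
fi
```

If the two major versions differ, `php artisan dusk:chrome-driver <major>` can reinstall a matching driver.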

installing fbprophet in docker file docker build on M1 Mac does not work

Does anyone know how to install fbprophet in a Dockerfile on a Mac M1 Pro? I am unable to install it; the build fails with the error below.
These are the instructions saved in a text file named "dockerfile":
FROM jupyter/scipy-notebook:python-3.9.7
RUN pip install fbprophet
ENV PYTHONPATH "${PYTHONPATH}:/home/jovyan/work"
RUN echo "export PYTHONPATH=/home/jovyan/work" >> ~/.bashrc
WORKDIR /home/jovyan/work
Then I ran "docker build -t fbprophet-notebook:latest ." and the output shows:
Successfully built pymeeus
#6 18.13 Failed to build fbprophet pystan
#6 18.57 Installing collected packages: setuptools-git, pymeeus, korean-lunar-calendar, ephem, pystan, hijri-converter, convertdate, LunarCalendar, holidays, cmdstanpy, fbprophet
#6 18.97 Running setup.py install for pystan: started
#6 22.85 Running setup.py install for pystan: finished with status 'error'
I tried running this command:
docker run -dit --rm --name adv_dsi_lab_2 -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes -v "$PWD":/home/jovyan/work fbprophet-notebook:latest .
The result was:
(base) user@user-MacBook-Pro adv_dsi_lab_2 % docker run -dit --rm --name adv_dsi_lab_2 -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes -v "$PWD":/home/jovyan/work fbprophet-notebook:latest .
Unable to find image 'fbprophet-notebook:latest' locally
docker: Error response from daemon: pull access denied for fbprophet-notebook, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Where did I go wrong?
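Two things are going on here. The docker run error is a consequence of the build failure: the failed build never produced the fbprophet-notebook:latest image, so Docker tried to pull it from a registry instead. For the build itself, one possible workaround (a sketch, not verified on an M1 here): fbprophet was renamed to prophet at version 1.0, and newer prophet releases use cmdstanpy rather than pystan, avoiding the pystan source build that fails above.

```dockerfile
# Sketch: install the renamed "prophet" package instead of fbprophet,
# assuming a prophet release that ships without the pystan dependency.
FROM jupyter/scipy-notebook:python-3.9.7
RUN pip install prophet
ENV PYTHONPATH "${PYTHONPATH}:/home/jovyan/work"
RUN echo "export PYTHONPATH=/home/jovyan/work" >> ~/.bashrc
WORKDIR /home/jovyan/work
```

In notebooks the import then becomes `from prophet import Prophet` rather than `from fbprophet import Prophet`.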

Error running systemctl to start service in Amazon Linux 2

I am trying to build a simple Apache/PHP server using the Amazon Linux 2 image. I have the following:
Dockerfile
FROM amazonlinux:2
RUN amazon-linux-extras install epel -y &&\
amazon-linux-extras install php7.4 -y &&\
yum update -y &&\
yum install httpd -y
COPY --chown=root:root docker/script/startup /startup
ENTRYPOINT /startup
startup
#!/usr/bin/env bash
mkdir -p /run/dbus # Added this based on other SO question
dbus-daemon --system # Added this based on other SO question
systemctl enable dbus # Added this based on other SO question
systemctl start dbus # Added this based on other SO question
systemctl status dbus # Added this based on other SO question
systemctl enable httpd
systemctl start httpd
systemctl status httpd
/bin/bash
docker-compose.yml
web:
  build: .
  container_name: "${APP_NAME}-app"
  environment:
    VIRTUAL_HOST: "${WEB_HOST}"
  env_file:
    - ./.env-local
  working_dir: "/${APP_NAME}/app"
  restart: "no"
  privileged: true # Added this based on other SO question
  volumes:
    - "./app:/${APP_NAME}/app:ro"
    - ./docker:/docker
    - "./conf:/${APP_NAME}/conf:ro"
    - "./vendor:/${APP_NAME}/vendor:ro"
    - "./conf:/var/www/conf:ro"
    - "./web:/var/www/html/"
  depends_on:
    - composer
I run this with the following command:
docker run -it web bash
And this is what it gives me:
Failed to get D-Bus connection: Operation not permitted
Failed to get D-Bus connection: Operation not permitted
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service, pointing to /usr/lib/systemd/system/httpd.service.
Failed to get D-Bus connection: Operation not permitted
Failed to get D-Bus connection: Operation not permitted
I don't understand why I'm getting this or how to resolve it.
I suggest avoiding systemd service units in a Docker image. Instead, use a crontab script with the @reboot directive/selector.
In addition, D-Bus is centrally managed by the kernel and is not allowed at the container level.
If the Docker service is up, then you probably have dbus active and running on the host.
You can add capabilities to the root user running in the container.
As a last resort, try to disable SELinux in your Docker image.
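Following the advice to avoid systemd inside the container, the startup script can launch Apache directly as the container's foreground process instead of calling systemctl. A minimal sketch, assuming the httpd binary location installed by the Amazon Linux package:

```shell
#!/usr/bin/env bash
# Run httpd as the container's main process in the foreground instead of
# asking systemd (which isn't running inside the container) to manage it.
exec /usr/sbin/httpd -DFOREGROUND
```

With this as the ENTRYPOINT, the dbus/systemctl calls in the original startup script become unnecessary.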
I was running into the same issue trying to run systemctl from within the Amazon Linux 2 Docker image.
Dockerfile:
FROM amazonlinux:latest
# update and install httpd 2.4.53, php 7.4.28 with php extensions
RUN yum update -y; yum clean all
RUN yum install -y httpd amazon-linux-extras
RUN amazon-linux-extras enable php7.4
RUN yum clean metadata
RUN yum install -y php php-{pear,cli,cgi,common,curl,mbstring,gd,mysqlnd,gettext,bcmath,json,xml,fpm,intl,zip}
# update website files
WORKDIR /var/www/html
COPY phpinfo.php /var/www/html
RUN chown -R apache:apache /var/www
CMD ["/usr/sbin/httpd","-DFOREGROUND"]
EXPOSE 80
EXPOSE 443
$ docker build -t azl1 .
$ docker run -d -p 8080:80 --name azl1_web azl1
Pointing a browser to IP:8080/phpinfo.php brought up the normal phpinfo page as expected, confirming a successful PHP 7.4.28 installation.

Installing Composer with Vagrant

I have successfully installed Vagrant along with some boxes on my Windows PC. I've got to say it works awesome, creating and destroying VMs with different configurations on the fly.
The only problem I'm facing now is that I want to install Composer. But Composer requires you to point to php.exe to do so. I don't want to install PHP on my computer, otherwise there is no point in using Vagrant, right? How do I tackle this problem?
I've seen some articles about using Puppet, but I couldn't make much sense out of them.
Thanks in advance.
You just need to install PHP (and curl) in your vagrant box. For instance, execute vagrant ssh to get SSH access to your box and execute the following commands:
$ sudo apt-get install -y php5-cli curl
$ curl -Ss https://getcomposer.org/installer | php
$ sudo mv composer.phar /usr/bin/composer
Now you're ready to use the composer command in your vagrant box.
You can improve this by making this part of provisioning, the step where a box is set up when running vagrant up. To do this, put the above commands in a shell file (for instance project/vagrant/provision.sh):
sudo apt-get install -y php5-cli curl > /dev/null
curl -Ss https://getcomposer.org/installer | php > /dev/null
sudo mv composer.phar /usr/bin/composer
Now, configure this shell file as a provision step in your Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/trusty64"
# configure the shell file as a provision step:
config.vm.provision :shell, path: "vagrant/provision.sh"
end
Now, when running vagrant up, the shell file is executed and php & composer are installed.
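Since the provision script also re-runs whenever you call vagrant provision, it can help to make it idempotent. A minimal sketch; the install_composer helper name is mine, not part of the original answer:

```shell
# Only download and install composer if it is not already on the PATH,
# so repeated provisioning runs are harmless.
install_composer() {
  if command -v composer >/dev/null 2>&1; then
    echo "composer already installed"
    return 0
  fi
  curl -sS https://getcomposer.org/installer | php
  sudo mv composer.phar /usr/bin/composer
}
```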
You can also choose to use a box with php and composer pre-installed, like laravel/homestead.
There is also a vagrant box with composer pre-installed. Here is the Github for this box: https://github.com/Swader/homestead_improved.
With Git Bash for windows, navigate to the folder where /homestead_improved was installed.
Run vagrant up, then vagrant ssh to get inside the VM.
Once inside the virtual machine, cd into the Code directory. You can now use Composer; for example, composer global require "laravel/installer=~1.1" installs the Laravel installer.
Commands to follow when you are in Vagrant Homestead in order to update Composer:
vagrant ssh
cd code (where my laravel projects are)
composer self-update --2 (the --2 flag selects the major version to update to, here version 2)

Boot2Docker: can't get ports forwarding to work

I'm playing with boot2docker (docker 1.6) on Windows 8.1. I wanted to make myself a machine container to play with Ruby, and I want to be able to connect to the Rails server from my Windows host. To start with small steps, first I want to connect to my container from my boot2docker VM. I attach my Dockerfile below; it builds without a problem and I can run a container from it. I do it like so:
docker run -it -p 3000:3000 3564860f7afd /bin/bash
Then in this container I say:
cd ~/myapp && bundle exec rails server -d
And to see if everything is working I do:
~/myapp$ sudo apt-get install wget && wget localhost:3000
and I get HTTP 500, which is OK; I just wanted to check if the server is running. Then I exit using ctrl+p, ctrl+q. But then on the boot2docker machine I again do
wget localhost:3000
and get
Connecting to localhost:3000 (127.0.0.1:3000)
wget: error getting response: Connection reset by peer
So it seems like port 3000 is not correctly forwarded to the boot2docker VM. What have I done wrong? What did I miss? I googled extensively and tried a couple of things, like explicitly exposing the port from the Dockerfile or adding the -P switch to run, but I always end up the same way - it's not working.
Any help will be greatly appreciated.
UPDATE 02.05.2015
I have also tried the things described in the comment from Markus W Mahlberg and the response from VonC. My VM configuration seems to be OK; I also checked in the VirtualBox GUI and it seems fine. Some more info: when I start
boot2docker ssh -vnNTL 3000:localhost:3000
and then open localhost:3000 on my Windows host, I see trace logs in the boot2docker console that look like this:
debug1: channel 1: free: direct-tcpip: listening port 3000 for localhost port 3000, connect from 127.0.0.1 port 50512 to 127.0.0.1 port 3000, nchannels 3
Chrome tells me that the response was empty. From checking the logs on container I know that request never got to it.
End of update
Update 03.05.2015
I think that my problem has not so much to do with boot2docker or Docker as with my computer configuration. I've been over my docker/boot2docker configuration so many times that it is rather unlikely that I've made a mistake there.
In desperation I've reinstalled boot2docker and VirtualBox, still with no effect. Any ideas how to debug what can be wrong with my configuration? The only other idea I have is to try doing the same on another machine. But even if that works, my original problem is no less annoying.
End of update
Here is my dockerfile:
FROM ubuntu
MAINTAINER anonymous <anonymous@localhost.com>
LABEL Description="Ruby container"
# based on https://gorails.com/setup/ubuntu/14.10
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& groupadd anonymous \
&& useradd anonymous -m -g anonymous -G sudo
ENV HOME /home/anonymous
USER anonymous
RUN git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
RUN echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
RUN echo 'eval "$(rbenv init -)"' >> ~/.bashrc
RUN exec $SHELL
RUN git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
RUN echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
RUN exec $SHELL
RUN git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash
ENV PATH "$HOME/.rbenv/bin:$HOME/.rbenv/plugins/ruby-build/bin:$PATH"
RUN rbenv install 2.2.1
RUN rbenv global 2.2.1
ENV PATH "$HOME/.rbenv/shims:$PATH"
RUN echo 'gem: --no-ri --no-rdoc' > ~/.gemrc
RUN gem install bundler
RUN git config --global color.ui true
RUN git config --global user.name "mindriven"
RUN git config --global user.email "3dcreator.pl@gmail.com"
RUN ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -C "3dcreator.pl@gmail.com"
RUN sudo apt-get -qy install software-properties-common python-software-properties
RUN sudo add-apt-repository ppa:chris-lea/node.js
RUN sudo apt-get -y install nodejs
RUN gem install rails -v 4.2.0
RUN ~/.rbenv/bin/rbenv rehash
RUN rails -v
RUN sudo apt-get -qy install mysql-server mysql-client
RUN sudo apt-get install libmysqlclient-dev
RUN rails new ~/myapp -d mysql
RUN sudo /etc/init.d/mysql start && cd ~/myapp && rake db:create
See Boot2docker workarounds:
You can use VBoxManage.exe commands to open those ports on the boot2docker VM level, in order for your actual VM host to access them.
By default, only the port 2222 is open, for boot2docker ssh to work and open an interactive ssh boot2docker session.
Just make sure VirtualBox is in your PATH.
VBoxManage modifyvm: works when the boot2docker VM isn't started yet, or after a boot2docker stop,
VBoxManage controlvm: works when the boot2docker VM is running, after a boot2docker start.
Let's say your Docker container exposes port 8000 and you want to access it from your other computers on your LAN. You can do it temporarily, using ssh:
Run following command (and keep it open):
$ boot2docker ssh -vnNTL 8000:localhost:8000
or you can set up a permanent VirtualBox NAT Port forwarding:
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port8000,tcp,,8000,,8000";
If the vm is already running, you should run this other command:
$ VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port8000,tcp,,8000,,8000";
Now you can access your container from your host machine under
localhost:8000
That way, you don't have to mess around with the VirtualBox GUI, selecting the computer called boot2docker-vm from the list on the left, choosing Settings from the Machine menu (or press Command-S on a Mac), selecting the Network icon at the top, and finally clicking the Port Forwarding button.
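To check which NAT rules are currently configured, VBoxManage showvminfo "boot2docker-vm" --machinereadable prints one Forwarding(n)=... line per rule. A sketch of pulling the ports out of such a line; the rule string here is an illustrative sample, not captured from a real VM:

```shell
# Sample Forwarding line in the machinereadable format; the quoted value's
# fields are: name,protocol,hostip,hostport,guestip,guestport
rule='Forwarding(0)="tcp-port8000,tcp,,8000,,8000"'
host_port=$(echo "$rule" | cut -d, -f4)
guest_port=$(echo "$rule" | cut -d, -f6 | tr -d '"')
echo "host port $host_port -> guest port $guest_port"
```

If the expected rule is missing from the list, the modifyvm/controlvm command above did not take effect.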
boot2docker on Windows (and OSX) is running a VirtualBox VM with Linux in it. By default it exposes only the ports necessary to ssh into the VM. You'll need to modify the VM to get it to expose more ports.
Adding ports to the VM is more about configuring VirtualBox and less about boot2docker (it is a property of the VM, not the software running inside it). Please see the VirtualBox documentation for "port forwarding" and other network configuration. https://www.virtualbox.org/manual/ch06.html
Yes, you need to open the ports on the VirtualBox machine via the Port Forwarding settings described above.
