Unable to pull hyperledger/cello-api-engine image - hyperledger-cello

Setup of Hyperledger-cello:
Cloned Hyperledger-cello 0.9.0
sudo SERVER_PUBLIC_IP=xx.xx.xx.xx make start
I am facing the following issue:
ERROR: pull access denied for hyperledger/cello-api-engine, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Makefile:211: recipe for target 'start-docker-compose' failed
I tried changing name of image in docker-compose.yml file, but stuck with the same issue.
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]n
ERROR: pull access denied for hyperledger/cello-api-engine, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Makefile:211: recipe for target 'start-docker-compose' failed
make[1]: *** [start-docker-compose] Error 1

Hey, the problem is that master is not the most stable release, so to get it to work they have changed things a bit. After cloning you need to do the following:
make docker
This will build all the images; the documentation says that for now this step is mandatory.
Once that has finished, run:
make start
This will start the network.
If you are looking for a more stable build you can check out the 0.9.0 tag.
Take into account that this project is still in incubation and they are preparing for the 1.0.0 release; on the master branch you may notice that documentation and other pieces are still missing.

There is no cello-api-engine in docker hub.
You can check the list here:
cello docker hub
If you change cello-api-engine to cello-engine, that error will not appear. But there is also no cello-dashboard among the Docker Hub images for the next step; there are only cello-operator-dashboard and cello-user-dashboard.
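If you do rename the image, the change to docker-compose.yml can be scripted; a minimal sketch (the compose content below is a hypothetical stand-in for Cello's real file, written to a temp dir so this is safe to run):

```shell
# Minimal stand-in for Cello's docker-compose.yml (hypothetical content)
WORK="$(mktemp -d)"
cat > "$WORK/docker-compose.yml" <<'EOF'
services:
  api-engine:
    image: hyperledger/cello-api-engine:latest
EOF
# Point the service at an image name that does exist on Docker Hub
sed -i 's|hyperledger/cello-api-engine|hyperledger/cello-engine|' "$WORK/docker-compose.yml"
grep 'image:' "$WORK/docker-compose.yml"
```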

No need to change any image name in the .yaml files: after running setup-master, it is going to create the images on your machine. That takes care of the images part.
If anyone faces an issue, let me know :)

Related

I am trying to build using docker desktop on Mac

But I get the following error, even though I have cloned the package and have access rights to it.
failed to load cache key: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
EDIT
This error seems to be due to Docker on the M1 chip and not because of any repo access. The error is the same when I run the getting-started guide. I can't build anything using Docker Desktop on Mac.
EDIT 2
I tried building with sudo and it worked fine. The problem seems to be with Repository builds, as they don't use sudo; I have been setting DOCKER_BUILD_SUDO=sudo but the issue persists.
The problem is not access to the repository, but to the image registry:
pull access denied ... insufficient_scope
If you have the credentials, issue a docker login command prior to building. If not, you'll need to either ask the maintainer for them, or you're out of luck unless you find an alternative (public) image and substitute it in the Dockerfile.
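As a side note, the registry you need to docker login to is the host part of the image reference; a small sketch with a hypothetical image name (this only applies to fully-qualified references — plain Docker Hub names like httpd have no registry prefix):

```shell
# Hypothetical private image reference; the registry host is everything
# before the first slash, and that host is what you pass to `docker login`
IMAGE="registry.example.com/team/app:1.0"
REGISTRY="${IMAGE%%/*}"
echo "docker login $REGISTRY"
```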

How can I solve "We were unable to install a project on your server" in forge laravel?

If I install a repository in Forge, I get an error like this:
Cloning into 'testshop.co.id'...
GitLab: The project you were looking for could not be found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
The full error looks like this:
How can I solve this error?
Double-check your actual ssh URL and see if:
GitLab recognizes you (display a "Welcome" message)
ssh -T git@your.gitlab.server
GitLab has you listed as the owner or member of the project you want to access (for that, you need to go to the GitLab web pages interface)
Are you sure that your repository link is correct? The error says that your repository cannot be found.
It should be something along the lines of UserName/ProjectName.
To confirm, go to whatever source control you use and look at the URL; for my GitHub project:
https://github.com/KyleWardle/RomanNumerals
The link you would put in forge would be
KyleWardle/RomanNumerals
My Forge:
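Deriving the Forge value from a clone URL can be sketched like this (the URL is from the answer above; the parsing itself is my addition):

```shell
# Strip the host prefix and the .git suffix to get the
# "UserName/ProjectName" form that Forge expects
URL="https://github.com/KyleWardle/RomanNumerals.git"
REPO="${URL#https://github.com/}"
REPO="${REPO%.git}"
echo "$REPO"
```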

How to push container to Google Container Registry (unable to create repository)

EDIT: I'm just going to blame this on platform inconsistencies. I have given up on pushing to the Google Cloud Container Registry for now, and have created an Ubuntu VM where I'm doing it instead. I have voted to close this question as well, for the reasons stated previously, and also as this should probably have been asked on Server Fault in the first place. Thanks for everyone's help!
running $ gcloud docker push gcr.io/kubernetes-test-1367/myapp results in:
The push refers to a repository [gcr.io/kubernetes-test-1367/myapp]
595e622f9b8f: Preparing
219bf89d98c1: Preparing
53cad0e0f952: Preparing
765e7b2efe23: Preparing
5f2f91b41de9: Preparing
ec0200a19d76: Preparing
338cb8e0e9ed: Preparing
d1c800db26c7: Preparing
42755cf4ee95: Preparing
ec0200a19d76: Waiting
338cb8e0e9ed: Waiting
d1c800db26c7: Waiting
42755cf4ee95: Waiting
denied: Unable to create the repository, please check that you have access to do so.
$ gcloud init results in:
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
[core]
account = <my_email>@gmail.com
disable_usage_reporting = True
project = kubernetes-test-1367
Your active configuration is: [default]
Note: this is a duplicate of Kubernetes: Unable to create repository, but I tried his solution and it did not help me. I've tried appending :v1, /v1, and using us.gcr.io
Edit: Additional Info
$ gcloud --version
Google Cloud SDK 116.0.0
bq 2.0.24
bq-win 2.0.18
core 2016.06.24
core-win 2016.02.05
gcloud
gsutil 4.19
gsutil-win 4.16
kubectl
kubectl-windows-x86_64 1.2.4
windows-ssh-tools 2016.05.13
$ gcloud components update
All components are up to date.
$ docker -v
Docker version 1.12.0-rc3, build 91e29e8, experimental
The first image push requires admin rights for the project. I had the same problem trying to push a new container to GCR for a team project, which I could resolve by updating my permissions.
You might also want to have a look at docker-credential-gcr. Hope that helps.
What version of gcloud and Docker are you using?
Looking at your requests, it seems as though the Docker client is not attaching credentials, which would explain the access denial.
I would recommend running gcloud components update and seeing if the issue reproduces. If it still does, feel free to reach out to us on gcr-contact at google.com so we can help you debug the issue and get your issue resolved.
I am still not able to push a docker image from my local machine, but authorizing a compute instance with my account and pushing an image from there works. If you run into this issue, I recommend creating a Compute Engine instance (for yourself), authorizing an account with gcloud auth that can push containers, and pushing from there. I have my source code in a Git repository that I can just pull from to get the code.
Thanks for adding your Docker version info. Does downgrading Docker to a more stable release (e.g. 1.11.2) help at all? Have you run 'docker-machine upgrade'?
It seems like you're trying to run gcloud docker push from a Google Compute Engine instance without a proper security scope of read/write access to Google Cloud Storage (which is where Google Container Registry stores your container images behind the scenes).
Try to create another instance, but this time with proper access scopes, i.e.:
gcloud compute --project "kubernetes-test-1367" instances create "test" --zone "us-east1-b" --machine-type "n1-standard-1" --network "default" --scopes default="https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management","https://www.googleapis.com/auth/devstorage.full_control" --image "/debian-cloud/debian-8-jessie-v20160629" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "test-1"
Once you create new instance, ssh into it and then try to re-run the gcloud docker push gcr.io/kubernetes-test-1367/myapp command
I checked for
gcloud auth list
to see my application is the active account and not my personal Google account. After setting
gcloud config set account example@gmail.com
I was able to push
gcloud docker -- push eu.gcr.io/$PROJECT_ID/my-docker:v1
So I could continue with http://kubernetes.io/docs/hellonode/
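For reference, the full tag passed to gcloud docker -- push is just the registry host, project ID, image name and version joined together; a sketch using the project ID from the question (image name and version are illustrative):

```shell
# Compose the registry path: <registry-host>/<project-id>/<image>:<version>
PROJECT_ID="kubernetes-test-1367"
IMAGE="my-docker"
TAG="eu.gcr.io/${PROJECT_ID}/${IMAGE}:v1"
echo "$TAG"
```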
I had a similar issue and it turned out that I had to enable billing for the project. When you have a new Google Cloud account you can enable only so many projects with billing. Once I did that it worked.
Also, this could be the cause of the problem (it was in my case):
Important: make sure the Compute Engine API is enabled for your project.
Source: https://pinrojas.com/2016/09/12/your-personal-kubernetes-image-repo-in-a-few-steps-gcr-io/
If anyone is still having this problem while trying to push a docker image to gcr, even though they've authenticated an account that should have the permission to do so, try running gcloud auth configure-docker and pushing again.
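What gcloud auth configure-docker does, roughly, is register gcloud as a credential helper for the GCR hosts in ~/.docker/config.json. The sketch below writes an illustrative copy to a temp file rather than touching the real config (the real file may contain more entries):

```shell
# Illustrative credHelpers section, as configure-docker would write it
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}
EOF
# With these entries, `docker push` asks gcloud for credentials automatically
grep -c 'gcloud' "$CFG"
```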

Failed to initialize central HHBC repository: Failed to initialize schema

The complete error is:
Failed to initialize central HHBC repository:
Failed to initialize schema in /home/shreeram/.hhvm.hhbc:
I am trying to configure HHVM and Apache 2.
For that I am following this link: how-to-setup-hhvm-on-ubuntu-14-04-server-with-apache-2-4-part-1/
In the above link I am stuck at the step where I run this command in the terminal:
curl -sS https://getcomposer.org/installer | php
The result of that command is the error mentioned above.
The shreeram directory has both read and write permissions.
Could anyone help me understand what I am missing there?
Are you sure permissions are correct on /home/shreeram, and that /home/shreeram/.hhvm.hhbc is readable and writable by the user running php? This issue really does sound like a permissions problem.
As the same user that was running php, does touch /home/shreeram/.hhvm.hhbc work? What about echo > /home/shreeram/.hhvm.hhbc?
If that's all fine, try rm /home/shreeram/.hhvm.hhbc and then try to install Composer again. Although it's typically a permission error, there are cases when the repo can become corrupt (particularly if the enclosing directory is on NFS or some other network filesystem) and you can just remove it and start over.
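The touch/echo/rm checks above can be rehearsed safely first; a sketch that uses a temp dir instead of the real /home/shreeram:

```shell
# Reproduce the permission checks on a scratch copy of the repo path
DIR="$(mktemp -d)"
REPO="$DIR/.hhvm.hhbc"
touch "$REPO"        # can we create it?
echo ok > "$REPO"    # can we write to it?
cat "$REPO"          # can we read it back?
rm "$REPO"           # removing a corrupt repo lets HHVM recreate it
```

If any of these steps fails on the real path, it confirms a permissions problem for the user running php.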

Docker - pull from docker repo fails (EOF / 403) but download from RH repo works

System info :
RHEL 7.1 (fresh install)
Docker 1.6.2
We're using the Docker rpm provided by RH in their "bonus" dvd's.
Issue :
When I pull an image through docker, it only works when it's on the Red Hat repo.
# docker pull openshift3/mysql-55-rhel7
Trying to pull repository registry.access.redhat.com/openshift3/mysql-55-rhel7
...
bb8bf2124de9: Download complete
65de4a13fc7c: Download complete
85400654aa47: Download complete
c537da9944e0: Download complete
6d97b1e161bb: Download complete
0d0dc8d923d6: Download complete
e4ba106b746b: Download complete
Status: Downloaded newer image for registry.access.redhat.com/openshift3/mysql-55-rhel7:latest
When I pull an image from the Docker repo... it fails. But - which is imho really weird - with different errors.
So first I pull httpd
# docker pull httpd
Trying to pull repository registry.access.redhat.com/httpd ... not found latest: Pulling from docker.io/httpd
64e5325c0d9d: Pulling fs layer
bf84c1d84a8f: Download complete
6c1a7f5286ab: Download complete
…
ee4d515e8896: Download complete
de94ed779434: Download complete
de94ed779434: Error pulling image (latest) from docker.io/httpd, ApplyLayer exit status 1 stdout: stderr: unexpected EOF
FATA[0040] Error pulling image (latest) from docker.io/httpd, ApplyLayer exit status 1 stdout: stderr: unexpected EOF
But, pulling the hello-world gives
# docker pull hello-world
Trying to pull repository registry.access.redhat.com/hello-world ... not found
latest: Pulling from docker.io/hello-world
a8219747be10: Pulling fs layer
a8219747be10: Error pulling dependent layers
91c95931e552: Error pulling image (latest) from docker.io/hello-world, Server error: Status 403 while fetching image layer (a821974FATA[0010] Error pulling image (latest) from docker.io/hello-world, Server error: Status 403 while fetching image layer (a8219747be10611d65b7c693f48e7222c0bf54b5df8467d3f99003611afa1fd8)
I'm on a corporate network and applied what's in this blog concerning proxies and certificates to get it running.
service docker stop
rm -r /var/lib/docker/*
service docker start
Worked for me. Note that this will very likely clear all your local docker images and containers.
There are a couple of things you can do to mitigate this:
If you're seeing this error while pulling large images from a private repo, it could mean that the private repo is busy, as someone else might be pulling at the same time.
If the image you're pulling is huge and your machine is under-powered, you could still see this issue even when pulling from a public repo.
If none of the above applies to you, then I think you've hit a strange bug in Docker 1.7; your best bet is to upgrade the client to 1.11 or more recent.
I also received unexpected EOF when pulling a large image (>2GB). However, the answer for us was to increase the size of the file system where incoming images are cached before being moved to where the docker daemon stores them.
This sounds irrelevant to the real issue of this question, but it might be useful for someone else to check in future.
