Copy a Docker image to the Heroku registry through Azure DevOps - heroku

I have a build pipeline (in Azure DevOps) that pushes an image to Docker Hub. I would also like to push the same image to the Heroku registry.
I tried to follow the Heroku documentation, but it asks for a login, and I didn't find any way to log in to Heroku through the Azure pipeline. Is there a way to log in to Heroku using a token? Is there any other way to push the Docker image to Heroku?
Azure pipeline: https://dev.azure.com/abhishekgoenkapublic/github-projects/_build?definitionId=3
Docker image: https://hub.docker.com/r/abhishek1950/mean
GitHub Project: https://github.com/abhishekgoenka/mean

Heroku provides an environment variable for passing an access token to its commands.
The HEROKU_API_KEY variable holds the access token. You can generate a token through the Heroku dashboard:
https://devcenter.heroku.com/articles/authentication
To pass the token to our agent job, we have to configure this variable in the pipeline: in the Variables tab, create a new secret variable named HEROKU_API_KEY.
Having configured the environment variable, we can add the necessary steps and commands.
Heroku Container Login: log in via the heroku CLI.
Docker Push: push the Docker image to Heroku.
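Put together, the two steps can run as a script task along these lines. This is a minimal sketch, assuming the heroku CLI is available on the agent; the Heroku app name is a placeholder:
# HEROKU_API_KEY comes from the secret pipeline variable configured above
echo "$HEROKU_API_KEY" | docker login --username=_ --password-stdin registry.heroku.com
# Retag the Docker Hub image for Heroku's registry and push it
docker tag abhishek1950/mean registry.heroku.com/<your-heroku-app>/web
docker push registry.heroku.com/<your-heroku-app>/web
# Release the pushed image so the app starts running it
heroku container:release web --app <your-heroku-app>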

Related

heroku CLI auth by token

Every time after building and pushing a Docker image from the GitLab registry to the Heroku registry, I need to execute heroku container:release web to make Heroku run the image (release), but I want to automate this.
I added the heroku CLI installation to my .gitlab-ci.yml, but I can't authenticate the heroku CLI with a token.
When I try to set HEROKU_API_KEY=token and run heroku login, I get the error Error: Cannot log in with HEROKU_API_KEY set.
I also tried this with HEROKU_DEBUG on, but the debug output didn't help me.
I can't use ~/.netrc.
Is there any way to authenticate the heroku CLI or automate releasing Docker images on Heroku?
Current .gitlab-ci.yml:
before_script:
- apt install snapd
- snap install --classic heroku
- HEROKU_API_KEY=$HEROKU_API_TOKEN heroku login
- docker login -u $REGISTRY_UNAME -p $REGISTRY_PWD registry.gitlab.com
- docker login --username=_ --password=$HEROKU_PWD registry.heroku.com
script:
# a lot of tag & push lines
- heroku container:release web
If you have set the HEROKU_API_KEY environment variable, you don't have to log in at all: the Heroku CLI commands will use the API key if it is present.
Make sure to use heroku authorizations:create to create never-expiring tokens. Check this out for a detailed explanation.
Ref: https://github.com/heroku/cli/issues/502#issuecomment-309099883
Note that the git commands like git push heroku master won't use the API key. See this for more info.
The problem was solved by changing the account password, which invalidates existing tokens, and then creating a new token.
Running HEROKU_API_KEY=<token> heroku container:release web again then succeeded.
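In other words, the fix for the job above is to drop the explicit heroku login step and let the CLI read the variable directly. A rough sketch of the relevant commands (the app name is a placeholder):
# No `heroku login` needed -- the CLI picks up HEROKU_API_KEY automatically
docker login --username=_ --password=$HEROKU_API_TOKEN registry.heroku.com
# ... tag & push lines ...
HEROKU_API_KEY=$HEROKU_API_TOKEN heroku container:release web --app <your-app>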

Unable to push image to GCR from Jenkins Pipeline

I am running a VM in Google Cloud that hosts a Jenkins server (within a Docker container). I am trying to build a Docker image for my application and push it to Google Container Registry using a Jenkins pipeline.
I installed all the required Jenkins plugins:
Google OAuth Credentials Plugin,
Docker Pipeline Plugin,
Google Container Registry Auth Plugin
Created a service account + key with the Storage Admin and Object Viewer roles. Downloaded the JSON file.
Created a credential in Jenkins using the Google project name as the ID and the JSON key.
My pipeline code for build looks like this:
stage('Build Image') {
app = docker.build("<gcp-project-id>/<myproject>")
}
My pipeline code for the push looks like this:
stage('Push Image') {
docker.withRegistry('https://us.gcr.io', 'gcr:<gcp-project-id>') {
app.push("${commit_id}")
app.push("latest")
}
}
However, the build fails at the last step with this error:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I have spent several hours trying to figure this out. Any help would be greatly appreciated!
Create a service account in GCP with permission to push images, then copy the credential JSON file and save it as a credential inside Jenkins; reference the credentials ID inside your pipeline as below and it should push images to GCR:
withCredentials([file(credentialsId: 'gcr', variable: 'GC_KEY')]) {
    // Log Docker in to GCR using the service-account JSON key
    sh "cat '$GC_KEY' | docker login -u _json_key --password-stdin https://eu.gcr.io"
    sh "gcloud auth activate-service-account --key-file='$GC_KEY'"
    sh "gcloud auth configure-docker"
    // Capture an access token in case later steps need it
    GCLOUD_AUTH = sh (
        script: 'gcloud auth print-access-token',
        returnStdout: true
    ).trim()
    echo "Pushing image to GCR"
    sh "docker push eu.gcr.io/${google_projectname}/${image_name}:${image_tag}"
}
Additionally, I have defined some variables used above (google_projectname, image_name, image_tag).
I have an identical problem. I found out that Jenkins doesn't seem to use those credentials: under Usage it says 'This credential has not been recorded as used anywhere.' When used with the gcloud util, the service account and key work fine, so the problem is somewhere in Jenkins.

Automate Heroku CLI login

I'm developing a bash script to automatically clone some projects and perform other tasks in dev VMs, but we have one project in Heroku and its repository lives there. In my .sh file I have:
> heroku login
And this prompts for credentials. I read the help included with the binary and the documentation, but I couldn't find anything that automatically supplies the username and password. I want something like this:
> heroku login -u someUser -p mySecurePassword
Does anything like this exist?
The Heroku CLI only uses your username and password to retrieve your API key, which it stores in your ~/.netrc file (%HOMEPATH%\_netrc on Windows).
You can manually retrieve your API key and add it to your ~/.netrc file:
Log into the Heroku web interface
Navigate to your Account settings page
Scroll down to the API Key section and click the Reveal button
Copy your API key
Open your ~/.netrc file, or create it, with your favourite text editor
Add the following content:
machine api.heroku.com
login <your-email@address>
password <your-api-key>
machine git.heroku.com
login <your-email@address>
password <your-api-key>
Replace <your-email@address> with the email address registered with Heroku, and <your-api-key> with the API key you copied from Heroku.
This should manually accomplish what heroku login does automatically. However, I don't recommend this. Running heroku login does the same thing more easily and with fewer opportunities to make a mistake.
If you decide to copy ~/.netrc files between machines or accounts you should be aware of two major caveats:
This file is used by many other programs; be careful to only copy the configuration stanzas you want.
Your API key offers full programmatic access to your account. You should protect it as strongly as you protect your password.
Please be very careful if you intend to log into Heroku using any mechanism other than heroku login.
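That said, since the original question was about a bash script, here is a minimal sketch of scripting the ~/.netrc setup (the email and key are placeholders; the caveats above still apply):
# Append Heroku entries to ~/.netrc (replace the placeholders)
cat >> ~/.netrc <<EOF
machine api.heroku.com
login you@example.com
password YOUR_API_KEY
machine git.heroku.com
login you@example.com
password YOUR_API_KEY
EOF
chmod 600 ~/.netrc  # the file must not be readable by other users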
You can generate a non-expiring OAuth token then pass it to the CLI via an environment variable. This is useful if you need to run Heroku CLI commands indefinitely from a scheduler and you don't want the login to expire. Do it like this (these are not actual Tokens and IDs, BTW):
$ heroku authorizations:create
Creating OAuth Authorization... done
Client: <none>
ID: 80fad839-876b-4ea0-a41e-6a9a2fb0cf97
Description: Long-lived user authorization
Scope: global
Token: ddf4a0e5-9294-4c5f-8820-b51c52fce4f9
Updated at: Fri Aug 02 2019 21:26:09 GMT+0100 (British Summer Time) (less than a minute ago)
Get the token (not the ID) from that authorization and pass it to your CLI:
$ HEROKU_API_KEY='ddf4a0e5-9294-4c5f-8820-b51c52fce4f9' heroku run ls --app my-app
Running ls on ⬢ my-app... up, run.2962 (Hobby)
<some file names>
$
By the way, this also solves the problem of using the Heroku CLI when you have MFA enabled on your Heroku account but your machine doesn't have a web browser, e.g. if you are working on an EC2 box via SSH:
$ heroku run ls --app my-app
heroku: Press any key to open up the browser to login or q to exit:
› Error: quit
$ HEROKU_API_KEY='ddf4a0e5-9299-4c5f-8820-b51c52fce4f9' heroku run ls --app my-app
Running ls on ⬢ my-app... up, run.5029 (Hobby)
<some file names>
$
EDIT: For Windows Machines
After you run heroku authorizations:create, copy the "Token", and run the following commands:
set HEROKU_API_KEY=ddf4a0e5-9299-4c5f-8820-b51c52fce4f9
heroku run ls --app my-app
If your goal is just to get the source code, you could use a simple git client. You just need the API key.
Steps to get api key
Log into the Heroku web interface
Navigate to your Account settings page
Scroll down to the API Key section and click the Reveal button
Copy your API key
Download source code using git
Use this URL template for git clone:
https://my_user:my_password@git.heroku.com/name_of_your_app.git
In my case the user value was my email without the domain.
Example: if the email is duke@gmail.com, the user for Heroku auth will be duke.
Finally, just clone it like any other git repository:
git clone https://duke:my_password@git.heroku.com/name_of_your_app.git
I agree that Heroku should have by now provided a way to do this with their higher level CLI tool.
You can avoid extreme solutions (and you should, just as Chris mentioned in his answer) by simply using curl and the Heroku API. Heroku allows you to use your API token (obtainable through your user settings / profile page on the Heroku dashboard).
You can then use the API to achieve whatever you wanted to do with their command-line tool.
For example, if I wanted to get all config vars for an app I would write a script that did something like the following:
-H "Accept: application/vnd.heroku+json; version=3" \
-H "Authorization: Bearer YOUR_TOKEN```
If YOUR_APP_NAME had only one config variable called my_var, the response of the above call would be:
{
"my_var": "some_value"
}
I find myself using this all the time in CI tools that need access to Heroku information or resources.

How to push container to Google Container Registry (unable to create repository)

EDIT: I'm just going to blame this on platform inconsistencies. I have given up on pushing to the Google Cloud Container Registry for now, and have created an Ubuntu VM where I'm doing it instead. I have voted to close this question as well, for the reasons stated previously, and also as this should probably have been asked on Server Fault in the first place. Thanks for everyone's help!
running $ gcloud docker push gcr.io/kubernetes-test-1367/myapp results in:
The push refers to a repository [gcr.io/kubernetes-test-1367/myapp]
595e622f9b8f: Preparing
219bf89d98c1: Preparing
53cad0e0f952: Preparing
765e7b2efe23: Preparing
5f2f91b41de9: Preparing
ec0200a19d76: Preparing
338cb8e0e9ed: Preparing
d1c800db26c7: Preparing
42755cf4ee95: Preparing
ec0200a19d76: Waiting
338cb8e0e9ed: Waiting
d1c800db26c7: Waiting
42755cf4ee95: Waiting
denied: Unable to create the repository, please check that you have access to do so.
$ gcloud init results in:
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
[core]
account = <my_email>@gmail.com
disable_usage_reporting = True
project = kubernetes-test-1367
Your active configuration is: [default]
Note: this is a duplicate of Kubernetes: Unable to create repository, but I tried that solution and it did not help me. I've tried appending :v1, /v1, and using us.gcr.io.
Edit: Additional Info
$ gcloud --version
Google Cloud SDK 116.0.0
bq 2.0.24
bq-win 2.0.18
core 2016.06.24
core-win 2016.02.05
gcloud
gsutil 4.19
gsutil-win 4.16
kubectl
kubectl-windows-x86_64 1.2.4
windows-ssh-tools 2016.05.13
+
$ gcloud components update
All components are up to date.
+
$ docker -v
Docker version 1.12.0-rc3, build 91e29e8, experimental
The first image push requires admin rights for the project. I had the same problem trying to push a new container to GCR for a team project, which I could resolve by updating my permissions.
You might also want to have a look at docker-credential-gcr. Hope that helps.
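For completeness, a minimal docker-credential-gcr setup might look like this (a sketch, assuming the Cloud SDK is installed and you are authenticated):
gcloud components install docker-credential-gcr  # install the credential helper
docker-credential-gcr configure-docker           # wire it into the Docker config
docker push gcr.io/<gcp-project-id>/<myproject>  # pushes now authenticate via the helper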
What version of gcloud and Docker are you using?
Looking at your requests, it seems as though the Docker client is not attaching credentials, which would explain the access denial.
I would recommend running gcloud components update and seeing if the issue reproduces. If it still does, feel free to reach out to us on gcr-contact at google.com so we can help you debug the issue and get your issue resolved.
I am still not able to push a docker image from my local machine, but authorizing a compute instance with my account and pushing an image from there works. If you run into this issue, I recommend creating a Compute Engine instance (for yourself), authorizing an account with gcloud auth that can push containers, and pushing from there. I have my source code in a Git repository that I can just pull from to get the code.
Thanks for adding your Docker version info. Does downgrading Docker to a more stable release (e.g. 1.11.2) help at all? Have you run 'docker-machine upgrade'?
It seems like you're trying to run gcloud docker push from a Google Compute Engine instance that lacks the security scope for read/write access to Google Cloud Storage (which is where Google Container Registry stores your container images behind the scenes).
Try to create another instance, but this time with proper access scopes, i.e.:
gcloud compute --project "kubernetes-test-1367" instances create "test" --zone "us-east1-b" --machine-type "n1-standard-1" --network "default" --scopes default="https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management","https://www.googleapis.com/auth/devstorage.full_control" --image "/debian-cloud/debian-8-jessie-v20160629" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "test-1"
Once you create the new instance, ssh into it and then try to re-run the gcloud docker push gcr.io/kubernetes-test-1367/myapp command.
I checked
gcloud auth list
and saw that my application was the active account instead of my personal Google account. After setting
gcloud config set account example@gmail.com
I was able to push
gcloud docker -- push eu.gcr.io/$PROJECT_ID/my-docker:v1
So I can continue with http://kubernetes.io/docs/hellonode/
I had a similar issue, and it turned out that I had to enable billing for the project. When you have a new Google Cloud account you can enable billing for only so many projects. Once I did that, it worked.
This could also be the cause of the problem (it was in my case):
Important: make sure the Compute Engine API is enabled for your project on the
Source: https://pinrojas.com/2016/09/12/your-personal-kubernetes-image-repo-in-a-few-steps-gcr-io/
If anyone is still having this problem while trying to push a Docker image to GCR, even though they've authenticated an account that should have permission to do so, try running gcloud auth configure-docker and pushing again.
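For example (the project and image names are placeholders):
gcloud auth configure-docker  # registers gcloud as a Docker credential helper
docker push gcr.io/<project-id>/<image>:<tag>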

Amazon Web Services secret key & id not being found in my bash profile when pushing cloned Rails 4 S3 app to Heroku

I have an application I built that pushes images to Amazon S3 using Carrierwave. I forked and cloned that repo on GitHub. When trying to push it as a new application to Heroku, Heroku keeps throwing an error that it can't find my Amazon S3 secret key and ID, and it aborts the push. I have the original secret key and ID in my bash profile, and pushing to Heroku worked with that setup. I can't figure out why it isn't working with my clone of the application.
You need to set configuration variables via heroku config:set KEY=VALUE for them to be available on Heroku.
Read more at https://devcenter.heroku.com/articles/config-vars
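A minimal sketch (the variable names are assumptions; use whatever names your Carrierwave initializer actually reads):
heroku config:set AWS_ACCESS_KEY_ID=<your-key-id> AWS_SECRET_ACCESS_KEY=<your-secret> --app <your-app>
heroku config --app <your-app>  # verify the variables are set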

Resources