Couldn't connect to GitHub while authenticating to GitHub on Heroku

I created an app on Heroku, and when authenticating to GitHub it says:
Error: remote was closed, authorization was denied, or an authentication message otherwise not received before the window closed.
How can I fix that?

I was also facing the same error in Google Chrome.
Error: remote was closed, authorization was denied, or an authentication message otherwise not received before the window closed.
I tried opening in Chrome's Incognito mode and it worked for me.
Again, I tried using Firefox and it also worked! 🎉

I guess this is for pushing an app to Heroku; if so, you can use an action from the GitHub Marketplace to do it.
There is this one: Heroku Deploy.
You will have to set your Heroku API key in the repository's secrets (Settings > Secrets) first, and then set up your workflow like this:
name: Deploy
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: akhileshns/heroku-deploy@v3.0.4 # This is the action
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: "YOUR APP's NAME" # Must be unique in Heroku
          heroku_email: "YOUR EMAIL"
          buildpack: "SOME BUILDPACK" # OPTIONAL
          branch: "YOUR_BRANCH" # OPTIONAL and DEFAULT - 'HEAD' (a.k.a. your current branch)
          dontuseforce: false # OPTIONAL and DEFAULT - false
          usedocker: false # OPTIONAL and DEFAULT - false
          appdir: "" # OPTIONAL and DEFAULT - "". This is useful if the API you're deploying is in a subfolder
I hope it helps.

Related

Github actions - Workflow fails to access env variable

I have a .env file with my Firebase web API key saved in my local working directory.
I use it in my project like this:
const firebaseConfig = {
  apiKey: process.env.REACT_APP_API_KEY,
}
Now I have set up Firebase Hosting and use GitHub Actions to automatically deploy when changes are pushed to GitHub.
However, the app deployed by GitHub gives me an error in my console saying Your API key is invalid, please check you have copied it correctly, and my app won't work because it is missing the API key.
It seems like GitHub Actions is unable to access process.env.REACT_APP_API_KEY.
I think the problem is that, since the .env file is not pushed to the GitHub repo, the build server cannot resolve process.env.REACT_APP_API_KEY.
How can I tackle this issue?
The GitHub workflow was set up automatically while configuring Firebase Hosting.
Is this the cause, or is there something else to take care of?
Below is my firebase-hosting-merge.yml
# This file was auto-generated by the Firebase CLI
# https://github.com/firebase/firebase-tools
name: Deploy to Firebase Hosting on merge
'on':
  push:
    branches:
      - master
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build --prod
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: '${{ secrets.GITHUB_TOKEN }}'
          firebaseServiceAccount: '${{ secrets.FIREBASE_SERVICE_ACCOUNT_EVENTS_EASY }}'
          channelId: live
          projectId: myprojectname
        env:
          FIREBASE_CLI_PREVIEWS: hostingchannels
How can I make my .env file variables accessible to the GitHub build server?
Do I need to change my firebaseConfig? Or is there a way to make my .env file available to the build server and delete it once the build finishes?
const firebaseConfig = {
  apiKey: process.env.REACT_APP_API_KEY,
}
A quick solution here could be having a step in GitHub Actions that manually creates the .env file before you need it:
- name: Create env file
  run: |
    touch .env
    echo API_ENDPOINT="https://xxx.execute-api.us-west-2.amazonaws.com" >> .env
    echo API_KEY=${{ secrets.API_KEY }} >> .env
    cat .env
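Alternatively (a sketch, assuming you add the key as a repository secret, here called REACT_APP_API_KEY), you can skip the .env file entirely and expose the variable to the build step of the auto-generated workflow; Create React App picks up any REACT_APP_-prefixed variable from the environment at build time:

# Sketch: pass the secret straight to the build step instead of writing a .env file.
# REACT_APP_API_KEY must exist as a repository secret for this to work.
- run: npm ci && npm run build --prod
  env:
    REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }}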

Using github actions to publish documentation

What I considered:
GitHub offers GitHub Pages to host documentation from either a folder on my master branch or a dedicated gh-pages branch, but that would mean committing build artifacts
I can also let readthedocs build and host the docs for me through webhooks, but that means learning how to configure Yet Another Tool at a point in time where I'm trying to consolidate everything related to my project into GitHub Actions
I already have a docs-building process that works for me (using sphinx as the builder) and that I can also test locally, so I'd rather just leverage that instead. It has all the rules set up and drops some static html into an artifact - it just doesn't get served anywhere. Handling it in the workflow, where all the other deployment configuration of my project lives, feels better than scattering it over different tools or GitHub-specific options.
Is there already an action in the marketplace that allows me to do something like this?
name: CI
on: [push]
jobs:
  ... # do stuff like building my-project-v1.2.3.whl, testing, etc.
  release_docs:
    steps:
      - uses: actions/sphinx-to-pages@v1 # I wish this existed
        with:
          dependencies:
            - some-sphinx-extension
            - dist/my-project*.whl
          apidoc_args:
            - "--no-toc"
            - "--module-first"
            - "-o docs/autodoc"
            - "src/my-project"
          build-args:
            - "docs"
            - "public" # the content of this folder will then be served at
                       # https://my_gh_name.github.io/my_project/
In other words, I'd still like to have control over how the build happens and where the artifacts are dropped, but I don't want to have to handle the interaction with readthedocs or github-pages myself.
Actions that I tried
❌ deploy-to-github-pages: runs the docs build in an npm container - will be inconvenient to make it work with python and sphinx
❌ gh-pages-for-github-action: no documentation
❌ gh-pages-deploy: seems to target host envs like jekyll instead of static content, and correct usage with yml syntax not yet documented - I tried a little and couldn't get it to work
❌ github-pages-deploy: looks good, but correct usage with yml syntax not yet documented
✅ github-pages: needs a custom PAT in order to trigger rebuilds (which is inconvenient) and uploads broken html (which is bad, but might be my fault)
✅ deploy-action-for-github-pages: also works, and looks a little cleaner in the logs. Same limitations as the solution above, though: it needs a PAT and the served html is still broken.
The eleven other results when searching for github+pages on the action marketplace all look like they want to use their own builder, which sadly never happens to be sphinx.
If sphinx is managed with pip (requirements.txt), pipenv, or poetry, we can deploy our documentation to GitHub Pages as follows. The workflow works the same way for other Python-based static site generators like Pelican and MkDocs. Here is a simple example for MkDocs; we just add the workflow as .github/workflows/gh-pages.yml
For more options, see the latest README: peaceiris/actions-gh-pages: GitHub Actions for GitHub Pages 🚀 Deploy static files and publish your site easily. Static-Site-Generators-friendly.
name: github pages
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Upgrade pip
        run: |
          # install pip=>20.1 to use "pip cache dir"
          python3 -m pip install --upgrade pip
      - name: Get pip cache dir
        id: pip-cache
        run: echo "::set-output name=dir::$(pip cache dir)"
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
      - name: Install dependencies
        run: python3 -m pip install -r ./requirements.txt
      - run: mkdocs build
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site
I got it to work, but there is no dedicated action to build and host sphinx docs on either github pages or readthedocs as of yet, so as far as I am concerned there is quite a bit left to be desired here.
This is my current release_sphinx job that uses the deploy-action-for-github-pages action and uploads to github-pages:
release_sphinx:
  needs: [build]
  runs-on: ubuntu-latest
  container:
    image: python:3.6
    volumes:
      - dist:dist
      - public:public
  steps:
    # check out sources that will be used for autodocs, plus readme
    - uses: actions/checkout@v1
    # download wheel that was built and uploaded in the build step
    - uses: actions/download-artifact@v1
      with:
        name: distributions
        path: dist
    # didn't need to change anything here, but had to add sphinx.ext.githubpages
    # to my conf.py extensions list. that fixes the broken uploads
    - name: Building documentation
      run: |
        pip install dist/*.whl
        pip install sphinx Pallets-Sphinx-Themes
        sphinx-apidoc --no-toc --module-first -o docs/autodoc src/stenotype
        sphinx-build docs public -b dirhtml
    # still need to build and set the PAT to get a rebuild on the pages job,
    # apart from that quite clean and nice
    - name: github pages deploy
      uses: peaceiris/actions-gh-pages@v2.3.1
      env:
        PERSONAL_TOKEN: ${{ secrets.PAT }}
        PUBLISH_BRANCH: gh-pages
        PUBLISH_DIR: public
    # since gh-pages has a history, this step might no longer be necessary.
    - uses: actions/upload-artifact@v1
      with:
        name: documentation
        path: public
Shoutout to the deploy action's maintainer, who resolved the upload problem within 8 minutes of me posting it as an issue.

Run a command on CircleCI only once for a branch

I'm writing my CircleCI config right now to deploy a staging build for every new branch and then send a message to my Slack team with the link to the staging demo.
I've done the send-a-message-to-Slack part, but now CircleCI sends a message to Slack every time I push. I would like to limit that to happen only once per branch. I know there's the CIRCLE_BRANCH env variable that I can use to identify the current branch, but how do I save that variable in some kind of cache so I can run a conditional check on it and avoid running the same command twice?
I've checked the CircleCI docs; they cover caching files but don't mention anything about saving a variable to the cache.
My config.yml file for CircleCI looks like this:
slackMessage:
  docker:
    - image: circleci/node
  working_directory: ~/client
  steps:
    - attach_workspace:
        at: ~/client
    # - run: echo "$CIRCLE_BRANCH" > _branch_check
    # - restore_cache:
    #     keys:
    #       - pr-{{ checksum "_branch_check" }}
    - run:
        command: |
          PR_NUMBER=${CIRCLE_PULL_REQUEST##*/}
          # yolo=pr-`echo -n $CIRCLE_PULL_REQUEST | md5sum`
          # if [ -f "$yolo" ]; then
          #   touch $yolo
          curl -X POST <Slack API webhook curl url>
          # fi
    # - save_cache:
    #     key: pr-{{ checksum "_branch_check" }}
    #     paths:
    #       - pr-{{ checksum "_branch_check" }}
The commented lines are the save-to-cache part. With these lines commented out, CircleCI sends a message to Slack on every push. With them uncommented, the expected behavior is for CircleCI to send the Slack message only once per branch name.
Very cool question. I'll share some ideas that might work.
So, is the staging site URL created based on the branch name? Would it always be generated the same way?
If so, on CircleCI I would check for the existence of the staging URL first. If it's there, it was already created, which means this branch has already posted to Slack, and the execution can end right there.
Another idea would be to touch a .staging-created file in the filesystem and save it to the CircleCI cache using the branch name ({{ .Branch }}) as part of the key. Then, before sending the message to Slack, check for that file after the cache has been restored, as in the sketch below.
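A rough sketch of that second idea, reusing the slackMessage job from the question (the .slack-notified marker file name is just an example, and the webhook placeholder is the one from the question):

slackMessage:
  docker:
    - image: circleci/node
  working_directory: ~/client
  steps:
    - attach_workspace:
        at: ~/client
    # Restore a per-branch marker file; the key only matches if this branch
    # has already sent a notification.
    - restore_cache:
        keys:
          - slack-notified-{{ .Branch }}
    - run:
        command: |
          if [ ! -f .slack-notified ]; then
            curl -X POST <Slack API webhook curl url>
            touch .slack-notified
          fi
    # Save the marker so later pushes to the same branch skip the message.
    - save_cache:
        key: slack-notified-{{ .Branch }}
        paths:
          - .slack-notified

Since CircleCI cache keys are immutable, the marker only gets saved the first time; after that the restore step always finds it and the curl call is skipped.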

How to deploy a Go application to Bluemix?

I am using Bluemix to run apps. I can deploy a Java app to Bluemix; does anyone know how to deploy a Go app to Bluemix?
You can deploy a Go application to Bluemix, but you need to supply -b with the Go buildpack URL.
There is a sample application you can take a look at:
https://github.com/acostry/Go-on-Bluemix
You need to use a custom buildpack to deploy a Go web application. So, log in to your cloud and run the cf command below from the root folder of your application:
cf push appname -b https://github.com/cloudfoundry/cloudfoundry-buildpack-go
Actually, Bluemix now includes the Cloud Foundry Go buildpack https://github.com/cloudfoundry/go-buildpack in its catalog, so it should no longer be necessary to resort to the bring-your-own-buildpack (BYOB) feature.
API endpoint: https://api.ng.bluemix.net (API version: 2.19.0)
mbp:utils cbf$ cf buildpacks
Getting buildpacks...
buildpack                              position   enabled   locked   filename
liberty-for-java                       1          true      false    buildpack_liberty-for-java_v1.15-20150402-1422-yp.zip
sdk-for-nodejs                         2          true      false    buildpack_sdk-for-nodejs_v1.15-20150331-2231-yp.zip
noop-buildpack                         3          true      false    noop-buildpack-20140311-1519.zip
java_buildpack                         4          true      false    java-buildpack-v2.6.zip
ruby_buildpack                         5          true      false    ruby_buildpack-offline-v1.2.0.zip
nodejs_buildpack                       6          true      false    nodejs_buildpack-offline-v1.1.1.zip
go_buildpack                           7          true      false    go_buildpack-offline-v1.1.1.zip
python_buildpack                       8          true      false    python_buildpack-offline-v1.1.1.zip
php_buildpack                          9          true      false    php_buildpack-offline-v1.0.2.zip
liberty-for-java_v1-14-20150319-1159   10         true      false    buildpack_liberty-for-java_v1.14-20150319-1159-yp.zip
sdk-for-nodejs_v1-14-20150309-1555     11         true      false    buildpack_sdk-for-nodejs_v1.14-20150309-1555-yp.zip
Just to be clear, the full command would be the following:
cf push appname -b https://github.com/cloudfoundry/go-buildpack.git
You have to include the "-b" option while pushing your Go app to the Bluemix cloud.
cf push app_name -b buildpack_URL
For any other type of app, you can refer to the link below for pushing apps to Bluemix:
https://www.ng.bluemix.net/docs/#starters/byob.html.
Issue the following command with the -b option to deploy your application with your own buildpack, where buildpack_URL is the URL of the buildpack:
$ cf push app_name -b buildpack_URL
More specifically
cf push app_name -b https://github.com/cloudfoundry/go-buildpack.git
More info is below:
https://www.ng.bluemix.net/docs/#starters/byob.html.
Deploying the app to Bluemix is pretty much the same as deploying any other app, with the exception of a command-line flag to set the custom buildpack the platform should use to provision the runtime.
Log in to your Bluemix account and run this command from the root folder of your application, where appname represents a unique name for your Bluemix-hosted app:
cf push appname -b url
More details regarding creating/pushing/deploying/connecting can be found at http://www.ibm.com/developerworks/cloud/library/cl-bluemix-go-app/
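If you would rather not repeat the -b flag on every push, the buildpack can also be pinned in a manifest.yml at the root of the app (a minimal sketch; the name and memory values are placeholders):

# manifest.yml -- picked up automatically by "cf push"
applications:
  - name: appname
    memory: 128M
    buildpack: https://github.com/cloudfoundry/go-buildpack.git

With that file in place, a plain cf push is enough.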

Travis gem deployment failing "Directory nonexistent"

I don't understand why the deployment is not working. I'm getting the following error in the build console:
Preparing deploy
Found gem
/usr/lib/git-core/git-stash: 186: /usr/lib/git-core/git-stash: cannot create /home/travis/build/prismicio/ruby-kit/.git/logs/refs/stash: Directory nonexistent
Build: https://travis-ci.org/prismicio/ruby-kit/jobs/40767391
My .travis.yml:
language: ruby
rvm:
  - 2.1.1
  - 2.1.0
  - 2.0.0
  - 1.9.3
  - 1.9.2
  - jruby-19mode
script: bundle exec rspec spec
notifications:
  email:
    - example@example.com
addons:
  code_climate:
    repo_token: X
deploy:
  provider: rubygems
  api_key:
    secure: XXX
  gemspec: prismic.gemspec
  on:
    tags: true
    all_branches: true
What's wrong with the build?
The error:
/usr/lib/git-core/git-stash: 186: /usr/lib/git-core/git-stash: cannot create /home/travis/build/prismicio/ruby-kit/.git/logs/refs/stash: Directory nonexistent
may be related to how you're deploying your files to your provider; it's triggered by git stash and its DPL::Provider#cleanup process (see: releases.rb). By default, the deployment provider deploys the files from the latest commit. This is not supported for all providers, so the "releases" provider needs to skip the cleanup and deploy from the current file state instead (see @BanzaiMan's comment), which you do by adding this line:
skip_cleanup: true
This is because every provider has slightly different flags, and these are documented in the Deployment section (or, for the latest documentation, check the supported providers on GitHub).
Furthermore, the above error is basically related to a Travis CI bug (GH #1648) where File#basename strips the directory part (as per @BanzaiMan's comment), and it's not clear why this doesn't manifest in the CLI case. So this is something to fix in the near future.
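Applied to the .travis.yml from the question, the deploy section would then look roughly like this (same settings as above, with only the skip_cleanup line added):

deploy:
  provider: rubygems
  api_key:
    secure: XXX
  gemspec: prismic.gemspec
  # deploy from the current file state instead of cleaning up to the latest commit
  skip_cleanup: true
  on:
    tags: true
    all_branches: true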

Resources