Bitrise Google Play Deploy started failing, which used to work fine - bitrise

The logs around the error are below. It used to work, but a few days ago it started failing. I didn't change the deploy flow.
The only change is that we started paying; before that we were on a trial.
Though I don't think that is the cause...
Does anyone know how to get past this error?
Using app from: /Users/vagrant/deploy/app-production-[REDACTED]-bitrise-signed.apk
Configuration read successfully
Authenticating
Authenticated client created
Create new edit
editID: xxxxxxx
Edit insert created
Upload apks or app bundles
Uploading /Users/vagrant/deploy/app-production-[REDACTED]-bitrise-signed.apk 1/1
Uploaded apk version: 36
Done uploading of 1 apps
New version codes to upload: 36
Applications uploaded
Update track
Release version codes are: [36]
alpha track will be updated.
updated track: alpha
Track updated
Committing edit
Failed to commit edit, error: googleapi: Error 403: You cannot rollout this [REDACTED] because it does not allow any existing users to upgrade to the newly added APKs., forbidden
+---+---------------------------------------------------------------+----------+
| x | google-play-deploy#3.6 (exit code: 1)                         | 19.44 sec|
+---+---------------------------------------------------------------+----------+
| Issue tracker: https://github.com/bitrise-io/steps-google-play-deploy/issues |
| Source: https://github.com/bitrise-io/steps-google-play-deploy               |
+---+---------------------------------------------------------------+----------+

Related

Caddy not working in api-platform 2.6.4 distribution - panic: proto: file "pb.proto" is already registered

When I try to use api-platform version 2.6.4, I am not able to run it: when I build and start the containers and check the logs, Caddy is not working and I get the error below. Any idea? Caddy version is 2.3.0.
caddy_1 | panic: proto: file "pb.proto" is already registered
caddy_1 | See https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict
tureality_caddy_1 exited with code 2
Other people have reported having this bug and I had it too.
Fortunately, the bug has just been fixed by Dunglas himself. :)
https://github.com/api-platform/api-platform/issues/1881#issuecomment-822663193
The fix was made at the Mercure level and not in the API Platform source code itself, so you can keep your current version.
You just have to run docker-compose up again and it will work.
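In practice that amounts to something like the sketch below (a rough sketch, assuming the Compose service is named caddy as in the api-platform distribution; if the old image is still cached locally, force a fresh build first):
# rebuild the caddy image so the fixed Mercure module is picked up
docker-compose build --pull --no-cache caddy
# then start the stack again
docker-compose up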

Google App Engine flexible environment deployment failed using Google Cloud SDK on Windows 10

My production site is developed in the CodeIgniter framework and has more than 10k files. I deployed it successfully last week without any issues. Today my deployment failed, even though I only corrected one query in a script.
I got the error below:
C:\myproject>gcloud app deploy --version 13 app.yaml
Services to deploy:
descriptor: [C:\myproject\app.yaml]
source: [C:\myproject]
target project: [xyz]
target service: [uat]
target version: [13]
target url: [https://uat-dot-xyz.appspot.com]
Do you want to continue (Y/n)? Y
Beginning deployment of service [uat]...
#============================================================#
#= Uploading 0 files to Google Cloud Storage =#
#============================================================#
File upload done.
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: This deployment has too many files. New versions are limited to 10000 files for this app.
- '#type': type.googleapis.com/google.rpc.BadRequest
fieldViolations:
- description: This deployment has too many files. New versions are limited to 10000
files for this app.
field: version.deployment.files[...]
I referred to the SO postings below:
gcloud app deploy failed because deployment has too many files for PHP CodeIgnitor files
gcloud app deploy : This deployment has too many files
Approaches for overcoming 10000 file limit on Google App Engine?
Communicating between google app engine services
I enabled the .gcloudignore feature as described below and created the file in myproject:
https://cloud.google.com/sdk/gcloud/reference/topic/gcloudignore
How to include files in .gcloudignore that are ignored in .gitignore
C:\>gcloud config set gcloudignore/enabled true
Updated property [gcloudignore/enabled].
C:\>gcloud config list
[accessibility]
screen_reader = False
[compute]
region = region-name
zone = zone-name
[core]
account = xyz#domainname.com
disable_usage_reporting = True
project = xyz
[gcloudignore]
enabled = true
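(For reference, a .gcloudignore along the lines of the gcloudignore docs linked above might look like the sketch below; the entries are illustrative, not taken from this project.)
# ignore the ignore files themselves and the git metadata
.gcloudignore
.git
.gitignore
# reuse the project's existing .gitignore rules, if there is one
#!include:.gitignore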
My current Cloud SDK version is: 320.0.0
Installing components from version: 320.0.0
I could not find a solution. Why did my deployment to the GAE flexible environment using the gcloud SDK suddenly fail? (Note: my project has more than 10k files, and up to last week I didn't have this issue.)
Please help me solve this issue if I have missed anything.
Thanks in advance.
It looks like there was a change in the way App Engine deploys the files.
Running the command gcloud config set app/trigger_build_server_side false solved the issue.
As of Dec 15 this change seems to have been reverted, and normal deploys should be working as before.
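Putting the workaround together with the deploy command from the question, that is:
gcloud config set app/trigger_build_server_side false
gcloud app deploy --version 13 app.yaml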

Deploying on Netlify throws an error with my GraphQL/Gatsby/Contentful query, demands needless query parameter

At first I was getting this error on my local build server, but I managed to fix it there... the query is still the same, and Gatsby isn't throwing any errors for it locally. But every time I try to deploy on Netlify it fails with the following message:
toFormat seems to be empty, we need a fileExtension to set it.
1 | fragment GatsbyContentfulFluid_tracedSVG on ContentfulFluid {
> 2 | tracedSVG
| ^
3 | aspectRatio
4 | src
5 | srcSet
6 | sizes
7 | }
failed during stage 'building site': Build script returned non-zero exit code: 1
8 |
9 | query optbuildreposrccomponentsshopProductsJs2136335468 {
10 | products: allContentfulProduct {
11 | edges {
12 | node {
Shutting down logging, 22 messages pending
File path: /opt/build/repo/src/components/shop/Products.js
Plugin: none
This is the same error I was getting locally and I have no idea why it is occurring. There should be no reason that toFormat is a required parameter. This is using the standard gatsby-source-contentful plugin API request which has always served the image without issue in the past. If I change the request to 'fixed' instead of 'fluid' the problem goes away, but I need fluid images for this part of the site.
I emailed the Netlify staff a few days ago, but have yet to receive a reply. Any help would be greatly appreciated.
For those who are facing the same issue, I came up with a simple solution.
Remove the _tracedSVG suffix from every place in your files where you used it.
e.g.
GatsbyContentfulFixed_tracedSVG
to
GatsbyContentfulFixed
Stop your Gatsby server and run the following command:
gatsby clean && gatsby develop
Commit and push your changes (in case you are using GitHub).
On Netlify, find the option: Clear cache and deploy site.
It should fix your deployment on Netlify as well as the errors in your console :)
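If the fragment is used in many files, the rename in the first step can be done with a project-wide search and replace, for example (a rough sketch, assuming the queries live under src/ and GNU sed is available; on macOS use sed -i ''):
# strip the _tracedSVG suffix everywhere it appears, then rebuild from a clean cache
grep -rl '_tracedSVG' src/ | xargs sed -i 's/_tracedSVG//g'
gatsby clean && gatsby develop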
Two suggestions:
Local: Double-check your content for any image references that do not have a .png or .jpg suffix
Netlify: Clear cache and deploy site

Octopus Deploy : "Skipping this step as no matching targets were found"

We have several projects in Octopus Deploy Cloud and we have never had issues deploying to the existing targets.
As of today, all deployment steps get skipped with this message in the logs:
Skipping this step as no matching targets were found
This has affected all projects, all channels and all environments (we have five different environments in five different AWS accounts).
== Skipped: Step 1: Create or Update IIS Website ==
10:40:29  Verbose | Searched for targets:
                  |   * specifically matching these ids: Machines-534 and Machines-535
                  |   * that are enabled
                  |   * with no id exclusions
                  |   * with no environment exclusions
                  |   * has a role that overlaps: APII
                  |   * with no tenant exclusions
                  |   * has a health status of: Healthy or HasWarnings
10:40:29  Info    | Skipping this step as no matching targets were found
The above is part of the raw log.
All deployment targets in the given environment and channel are Healthy (green). Any idea what could be the cause of this?
You have encountered a known bug. This issue can be resolved by upgrading to 2019.10.12 or later.

Issue installing openwhisk with incubator-openwhisk-devtools

I have a blocking issue installing OpenWhisk with Docker.
I typed make quick-start right after a git pull of the incubator-openwhisk-devtools project. My OS is Fedora 29, Docker version is 18.09.0, docker-compose version is 1.22.0, and I am using Oracle JDK 8.
I get the following error:
[...]
adding the function to whisk ...
ok: created action hello
invoking the function ...
error: Unable to invoke action 'hello': The server is currently unavailable (because it is overloaded or down for maintenance). (code ciOZDS8VySDyVuETF14n8QqB9wifUboT)
[...]
[ERROR] [#tid_sid_unknown] [Invoker] failed to ping the controller: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for health-0: 30069 ms has passed since batch creation plus linger time
[ERROR] [#tid_sid_unknown] [KafkaProducerConnector] sending message on topic 'health' failed: Expiring 1 record(s) for health-0: 30009 ms has passed since batch creation plus linger time
Please note that controller-local-logs.log is never created.
If I touch controller-local-logs.log in the right directory, the log file is still empty after I run make quick-start again.
http://localhost:8888/ping gives me the right answer: pong.
http://localhost:9222 is not reachable.
Where am I going wrong?
Thank you in advance
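A first thing worth checking is whether the controller and Kafka containers that quick-start brings up are actually running, for example (a rough sketch; the container names are assumptions, so check docker ps for the real ones in your checkout):
# list the compose containers and their status
docker ps --format '{{.Names}}\t{{.Status}}'
# inspect the controller and kafka logs directly, since the log file stays empty
docker logs --tail 100 openwhisk_controller_1
docker logs --tail 100 openwhisk_kafka_1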
