Heroku docker:release: Expected response to be successful, got 422

heroku docker:release
...
Successfully built b240d9d7bf11
extracting slug from container...
creating remote slug...
language-pack: heroku-docker (ojobot_conda)
remote process types: { web: 'cd /app/user && node server.js' }
uploading slug [====================] 100% of 578 MB, 0.0s
releasing slug...
! Error: Expected response to be successful, got 422
Not sure what to do next at this point. Any help is welcome.
Thanks,
Pat.

From Heroku's error reference, a 422 can mean one of two things:
422 Unprocessable Entity invalid_params: the request failed; validate your parameters and try again.
422 Unprocessable Entity verification_needed: the request failed; enter billing information in the Heroku Dashboard before utilizing resources.
So double-check the parameters embedded in your slug (the snapshot of your application code that is ready to run on the platform).
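If the CLI output isn't enough, the Heroku Platform API can show the status of recent releases. A minimal sketch, assuming the Heroku CLI has stored credentials in ~/.netrc (which curl -n reads) and <app> is a placeholder for your app name:
# List recent releases; a failed release carries a status and
# description that may explain the 422.
curl -n -H "Accept: application/vnd.heroku+json; version=3" \
    https://api.heroku.com/apps/<app>/releases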

Error response from daemon: No such image: localstack/localstack:0.14.0

Recently I had this error
Error response from daemon: No such image: localstack/localstack:0.14.0
when setting up gnomock.
I was getting the error on this line
gmock, err = gnomock.Start(preset, gnomock.WithDebugMode(), gnomock.WithUseLocalImagesFirst())
The test passes when I run it on my machine, but when run on GitLab's runners it throws the error mentioned above.
The solution was to clear the cache of the runners.
The internet does not say much about this error. My theory is that because the mock was used in a sub-package of the project (wrapped around), localstack got an update, and somehow the Docker images were not propagated correctly.
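Clearing the stale image by hand on the runner host would look something like this (a sketch; assumes shell access to the host and the Docker executor):
# Remove the stale cached image, then pull a fresh copy.
docker rmi localstack/localstack:0.14.0
docker pull localstack/localstack:0.14.0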

Pre signing AWS S3 files

I have a bucket that allows for open files. I have uploaded a test file called test.gsm and have tried to presign the file by doing
root@server2:~# aws s3 presign s3://dovid-ft/test.gsm --expires-in 604800
https://dovid-ft.s3.amazonaws.com/test.gsm?AWSAccessKeyId=AKIAJSDPJKCCGAZ257VQ&Signature=0zbBU2B%2FKVrqgOXFQNTGh3gme%2Fo%3D&Expires=1625658191
root@server2:~#
If I then try to grab that file I get a 403.
root@server2:~# wget 'https://dovid-ft.s3.amazonaws.com/test.gsm?AWSAccessKeyId=AKIAJSDPJKCCGAZ257VQ&Signature=0zbBU2B%2FKVrqgOXFQNTGh3gme%2Fo%3D&Expires=1625658191'
--2021-06-30 07:49:21-- https://dovid-ft.s3.amazonaws.com/test.gsm?AWSAccessKeyId=AKIAJSDPJKCCGAZ257VQ&Signature=0zbBU2B%2FKVrqgOXFQNTGh3gme%2Fo%3D&Expires=1625658191
Resolving dovid-ft.s3.amazonaws.com (dovid-ft.s3.amazonaws.com)... 52.217.88.204
Connecting to dovid-ft.s3.amazonaws.com (dovid-ft.s3.amazonaws.com)|52.217.88.204|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-06-30 07:49:21 ERROR 403: Forbidden.
root@server2:~#
I also tried URL-decoding the signature in the query string to see if that would help, and it did not.
root@server2:~# wget 'https://dovid-ft.s3.amazonaws.com/test.gsm?AWSAccessKeyId=AKIAJSDPJKCCGAZ257VQ&Signature=0zbBU2B/KVrqgOXFQNTGh3gme/o=&Expires=1625658191'
--2021-06-30 07:49:37-- https://dovid-ft.s3.amazonaws.com/test.gsm?AWSAccessKeyId=AKIAJSDPJKCCGAZ257VQ&Signature=0zbBU2B/KVrqgOXFQNTGh3gme/o=&Expires=1625658191
Resolving dovid-ft.s3.amazonaws.com (dovid-ft.s3.amazonaws.com)... 52.217.32.100
Connecting to dovid-ft.s3.amazonaws.com (dovid-ft.s3.amazonaws.com)|52.217.32.100|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-06-30 07:49:37 ERROR 403: Forbidden.
root@server2:~#
Is there any way to get logs or see what the issue is and why my request is being rejected? As of now, the only way to get the file is to make it publicly available, which I don't want to do.
It turns out I was using the wrong credentials to presign the file. Why Amazon didn't throw an error when I tried to presign with the wrong credentials is beyond me.
The other half of the problem is in the wget invocation. After recreating the scenario, I also wasn't able to download the file via wget until I quoted the URL and gave an explicit output file:
wget -O test.gsm "https://yourURL"
Reference: Amazon AWS S3 signed URL via Wget
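Putting the two answers together, the working flow looks like this (a sketch; the profile name bucket-owner is hypothetical, use whichever profile actually has access to the bucket):
# Presign with the correct credentials, then fetch with the URL quoted
# so the shell doesn't split it on '&'.
aws s3 presign s3://dovid-ft/test.gsm --expires-in 604800 --profile bucket-owner
wget -O test.gsm "<the-presigned-url>"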

Unable to upgrade MarkLogic Data Hub Framework using Gradle

I am trying to follow the recommendation to upgrade the DHF using Gradle, but I am running into an issue that I cannot get my head around.
The build succeeds but the redeployment fails.
Any idea how to fix this?
Note that the login info is provided properly in gradle.properties.
> Task :hubDeploySecurity FAILED
Task ':hubDeploySecurity' is not up-to-date because:
Task has not declared any outputs despite executing actions.
Deploying app DHF with config dirs: [/src/main/hub-internal-config, /src/main/ml-config]
Executing command [com.marklogic.appdeployer.command.security.DeployPrivilegesCommand] with sort order [5]
Will read and merge resource files in each config path before saving any resources
Processing files in directory: /src/main/hub-internal-config/security/privileges
Checking to see if Configuration Management API is available at: /manage/v3
Sending JSON POST request as user 'tkadmin' (who should have the 'manage-admin' and 'security' roles) to path: /manage/v3
Error occurred while sending POST request to /manage/v3; logging request body to assist with debugging: {}
Processing file: /src/main/hub-internal-config/security/privileges/dhf-internal-data-hub.json
Processing file: /src/main/hub-internal-config/security/privileges/dhf-internal-entities.json
Processing file: /src/main/hub-internal-config/security/privileges/dhf-internal-mappings.json
Processing file: /src/main/hub-internal-config/security/privileges/dhf-internal-trace-ui.json
Processing files in directory: /src/main/ml-config/security/privileges
Checking to see if Configuration Management API is available at: /manage/v3
Sending JSON POST request as user 'tkadmin' (who should have the 'manage-admin' and 'security' roles) to path: /manage/v3
Error occurred while sending POST request to /manage/v3; logging request body to assist with debugging: {}
Merging payloads that reference the same resource
Checking to see if Configuration Management API is available at: /manage/v3
Sending JSON POST request as user 'tkadmin' (who should have the 'manage-admin' and 'security' roles) to path: /manage/v3
Error occurred while sending POST request to /manage/v3; logging request body to assist with debugging: {}
Checking for existence of resource: dhf-internal-data-hub
Sending XML GET request as user 'tkadmin' (who should have the 'manage-admin' and 'security' roles) to path: /manage/v2/privileges
Logging HTTP response body to assist with debugging: {"errorResponse": {"statusCode":401,
"status":"Unauthorized",
"message":"401 Unauthorized"
}
}
:hubDeploySecurity (Thread[Execution worker for ':',5,main]) completed. Took 0.01 secs.
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':hubDeploySecurity'.
> 401 Unauthorized: [{"errorResponse": {"statusCode":401,
"status":"Unauthorized",
"message":"401 Unauthorized"
}
}]
Assuming you have followed the DHF upgrade matrix:
https://docs.marklogic.com/datahub/5.2/upgrade.html
You probably ran Gradle with an incorrect Admin interface and Security user, so the hubUpdate REST API requests fail.
Try the commands below and see if that works:
Step 2
gradle hubUpdate -i -PmlUsername=admin -PmlPassword={admin-password} -Penvironment={env-name}
Step 4
gradle mlRedeploy -i -PmlUsername=admin -PmlPassword={admin-password} -Penvironment={env-name}
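If the 401 persists, the Manage API user can also be overridden explicitly on the command line; ml-gradle reads mlManageUsername/mlManagePassword when they are set (a sketch, assuming a standard ml-gradle setup):
gradle hubDeploySecurity -i -PmlManageUsername=admin -PmlManagePassword={admin-password}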

GitLab CI Error: Uploading artifacts to coordinator - failed - responseStatus 400 Bad Request

I'm working with GitLab (free edition) pipelines and started receiving the error below on a pipeline that had been working.
This is a minimal example from my .gitlab-ci.yml that reproduces the error (although I don't think it is related to my code):
default:
  image: node:10-alpine

stages:
  - build

build:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
  only:
    - Staging
.
.
.
The error log:
64 packages are looking for funding
run `npm fund` for details
Running after_script
Saving cache
Uploading artifacts for successful job
Uploading artifacts...
node_modules/: found 62788 matching files
WARNING: Uploading artifacts to coordinator... failed id=512111 responseStatus=400 Bad Request status=400 Bad Request token=4Dwaaa
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts to coordinator... failed id=512111 responseStatus=400 Bad Request status=400 Bad Request token=4Dwaaa
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts to coordinator... failed id=512111 responseStatus=400 Bad Request status=400 Bad Request token=4Dwaaa
FATAL: invalid argument
ERROR: Job failed: exit code 1
I found this thread on Stack Overflow, but it relates to a different status code.
There are multiple threads (1, 2, 3) about this issue on the GitLab forum, but it is hard to understand the cause of the problem and how to resolve it.
Any help will be highly appreciated.
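One thing worth ruling out first (my assumption, not something confirmed in those threads): the artifact may simply exceed the instance's maximum artifact size; a node_modules/ with 62,788 files can easily be hundreds of megabytes. Its size can be checked at the end of the job and compared against the limit in the project's CI/CD settings:
# Print the size of the artifact directory before upload.
du -sh node_modules/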

Returns 503 Service Unavailable when uploading files to Heroku

I have an Apollo Server set up on Heroku and tried to upload files using Altair, but it returns 503 Service Unavailable.
Uploading files locally succeeds, so I don't know what I am missing.
This is my mutation:
mutation($img1: Upload, $img2: Upload){
  uploadDocument(Image1: $img1, Image2: $img2, Type: 1){
    Code
    Message
  }
}
The output is 503 Service Unavailable, and this is what I get from Altair:
https://prnt.sc/p1z2c1
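To take Altair out of the equation, the same upload can be reproduced with curl using the GraphQL multipart request spec that upload-capable clients follow (a sketch; the endpoint URL and file name are placeholders):
curl https://your-app.herokuapp.com/graphql \
  -F operations='{"query":"mutation($img1: Upload){ uploadDocument(Image1: $img1, Type: 1){ Code Message } }","variables":{"img1":null}}' \
  -F map='{"0":["variables.img1"]}' \
  -F 0=@image1.jpg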
