Heroku Postgres with Spring Boot

I have a Spring Boot API deployed on Heroku. I added the Heroku Postgres add-on and followed the steps in Heroku's Spring Boot deployment guide.
However, when I deploy the application and run a query to register a user, I get this error from the API:
"timestamp": "2019-12-11T05:28:28.219+0000",
"status": 500,
"error": "Internal Server Error",
"message": "could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet",
"path": "/register"
}
For registering the user, all I am doing is:
if (userRepository.findByEmail(user.getEmail()).size() == 0) {
    userRepository.save(user);
    return SuccessCodes.REGISTRATION_SUCCESS;
} else {
    return ErrorCodes.REGISTRATION_ERROR_EMAIL_EXISTS;
}
Any idea how I can fix this?
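A common cause of SQLGrammarException ("could not extract ResultSet") on a fresh Heroku Postgres add-on is that the tables don't exist yet, or that Hibernate picks the wrong dialect. A minimal sketch of JPA settings to rule that out; the property names are standard Spring Boot, but ddl-auto=update is used here purely for illustration, not as a production recommendation:
# let Hibernate create/update the schema on startup (illustrative only)
spring.jpa.hibernate.ddl-auto=update
# make sure the Postgres dialect is used rather than a guessed default
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
If the tables do exist, check that the entity's table and column names match the actual schema; Postgres folds unquoted identifiers to lower case, which often surfaces as exactly this error after moving from another database.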

Related

Getting 403 when trying to restart Heroku app through API

I get this error when trying to "restart all dynos" through the Python heroku3 library (app.restart()):
{
"id": "forbidden",
"message": "Restarts are currently disabled. Please try again later."
}

Getting “Method Not Allowed” error when POSTing to /actuator/bus-refresh on a Spring Cloud Config server deployed in Google Cloud

I'm using Spring Cloud Config Server to get the configuration from Git, and I've deployed my service in Google Cloud.
When I run the service locally and invoke POST http://localhost:8887/actuator/bus-refresh, it runs successfully.
But when I invoke the same on the service deployed in Google Cloud, it gives "Request method 'POST' not supported":
{
"timestamp": "2020-05-31T13:42:56.641+0000",
"status": 405,
"error": "Method Not Allowed",
"message": "Request method 'POST' not supported",
"path": "/actuator/bus-refresh"
}
Steps I followed:
1. Installed RabbitMQ in Google Cloud and exposed it as a service.
2. Updated the Spring Cloud Config server with the RabbitMQ server details.
3. Built and pushed the Docker image to GC.
4. Deployed and exposed the Config server in GC.
When hitting POST on http://<externalip>:8887/actuator/bus-refresh, I get "Request method 'POST' not supported".
When hitting GET on http://<externalip>:8887/actuator/bus-refresh, the response gives me the application.properties from Git.
From local, pointing at the GC RabbitMQ with POST, it succeeds.
Below is the configuration in my Config server:
spring.application.name=my-config-server
server.port=8887
spring.cloud.config.server.git.skip-ssl-validation = true
management.endpoints.web.exposure.include=bus-refresh
spring.cloud.bus.enabled=true
spring.cloud.config.server.git.uri=****
spring.cloud.config.server.git.username=****
spring.cloud.config.server.git.password=****
# GC RabbitMQ
spring.rabbitmq.host=34.68.237.224
spring.rabbitmq.port=5672
spring.rabbitmq.username=rabbit
spring.rabbitmq.password=rabbit
What am I doing wrong?

I used the following dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bus-amqp</artifactId>
</dependency>
instead of this:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
That fixed it: the bus starter is what puts Spring Cloud Bus on the classpath and contributes the bus-refresh actuator endpoint. Without it, the POST likely falls through to the Config Server's own property-serving controller, which only supports GET; that would explain both the 405 on POST and why a GET to the same URL returns the application.properties from Git.
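After switching the dependency and redeploying, a quick sanity check (assuming curl is available; the endpoint still needs to be exposed via management.endpoints.web.exposure.include=bus-refresh, as in the configuration above):
curl -X POST http://<externalip>:8887/actuator/bus-refresh
The operation returns no body, so an empty 204 No Content response should indicate that the refresh event was published to the bus.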

Google Cloud Storage Repository Plugin

I have a K8s cluster on GCP running Elasticsearch, and now I need to create a backup.
I've installed the GCS plugin on the pods in my stateful set and tried setting it up with the following documentation:
https://github.com/elastic/elasticsearch/blob/master/docs/plugins/repository-gcs.asciidoc
When I try to configure a repository to use credentials stored in keystore I get the following response back:
{
"error": {
"root_cause": [
{
"type": "repository_exception",
"reason": "[my_backup] repository type [gcs] does not exist"
}
],
"type": "repository_exception",
"reason": "[my_backup] repository type [gcs] does not exist"
},
"status": 500
}
Any lead would be helpful, thanks!
I think the problem is that I can't install the plugin on the nodes, so I installed it on the pods instead, and that installation does not persist across pod restarts. To make the installation persist on K8s I needed to build a custom image that installs the plugin (see the sketch below). A bit tricky, and the plugin seems to be intended for GCE anyway, so I decided to move from K8s to a managed instance group on GCE instead.
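For the custom-image route, a minimal sketch of a Dockerfile that bakes the plugin in at build time; the base image tag here is an assumption and should match the Elasticsearch version you actually run:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.2
RUN bin/elasticsearch-plugin install --batch repository-gcs
The --batch flag auto-confirms the extra permissions the plugin requests, which is required for non-interactive builds. The plugin version must always match the Elasticsearch version, which is why it is installed against the same base image rather than copied between images.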

Executing Maven unit tests on a Google Cloud SQL environment

I have a Jenkins pod running in GCP's Kubernetes Engine, and I'm trying to run a Maven unit test that connects to a Google Cloud SQL database. My application.yaml for the project looks like this:
spring:
  cloud:
    gcp:
      project-id: <my_project_id>
      sql:
        database-name: <my_database_name>
        instance-connection-name: <my_instance_connection_name>
  jpa:
    database-platform: org.hibernate.dialect.MySQL55Dialect
    hibernate:
      ddl-auto: create-drop
  datasource:
    continue-on-error: true
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: <my_cloud_sql_username>
    password: <my_cloud_sql_password>
The current Jenkinsfile associated with this project is:
pipeline {
    agent any
    tools {
        maven 'Maven 3.5.2'
        jdk 'jdk8'
    }
    environment {
        IMAGE = readMavenPom().getArtifactId()
        VERSION = readMavenPom().getVersion()
        DEV_DB_USER = "${env.DEV_DB_USER}"
        DEV_DB_PASSWORD = "${env.DEV_DB_PASSWORD}"
    }
    stages {
        stage('Build docker image') {
            steps {
                sh 'mvn -Dmaven.test.skip=true clean package'
                script {
                    docker.build '$IMAGE:$VERSION'
                }
            }
        }
        stage('Run unit tests') {
            steps {
                withEnv(['GCLOUD_PATH=/var/jenkins_home/google-cloud-sdk/bin']) {
                    withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
                        sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                        sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
                        sh 'mvn test'
                    }
                }
            }
        }
    }
}
My problem is that when the pipeline actually runs mvn test with the above configuration (in my application.yaml), I get this error:
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Insufficient Permission: Request had insufficient authentication scopes.",
"reason" : "insufficientPermissions"
} ],
"message" : "Insufficient Permission: Request had insufficient authentication scopes."
}
I have two Google Cloud projects:
- One that has the Kubernetes cluster where the Jenkins pod is running.
- Another whose K8s cluster contains my actual Spring Boot application and the Cloud SQL database I'm trying to access.
I also created a service account, only in my Spring Boot project, for Jenkins to use, with three roles: Cloud SQL Editor, Kubernetes Engine Cluster Admin, and Project Owner (the last added to verify that the service account itself is not at fault).
I enabled the Cloud SQL, Cloud SQL Admin, and Kubernetes APIs in both projects, and I double-checked my Cloud SQL credentials; they are OK. In addition, I authenticated the Jenkins pipeline using the JSON key file generated when I created the service account, following the recommendations discussed here:
Jenkinsfile (extract):
...
withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
    sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
    sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
    sh 'mvn test'
}
...
I don't believe the GCP Java SDK relies on gcloud CLI at all. Instead, it looks for an environment variable GOOGLE_APPLICATION_CREDENTIALS that points to your service account key file and GCLOUD_PROJECT (see https://cloud.google.com/docs/authentication/getting-started).
Try adding the following. Note that in a Jenkinsfile each sh step runs in its own shell, so export lines in separate sh steps won't affect the later mvn test step; wrapping the test invocation in withEnv sets the variables where they are actually needed:
withEnv(["GOOGLE_APPLICATION_CREDENTIALS=${GC_KEY}", "GCLOUD_PROJECT=<project_id>"]) {
    sh 'mvn test'
}
There are a couple of different things you should verify to get this working. I'm assuming you are using the Cloud SQL JDBC SocketFactory for Cloud SQL.
You should create a testing service account and give it whatever permissions are needed to execute the tests. To connect to Cloud SQL, it needs at a minimum the "Cloud SQL Client" role for the same project as the Cloud SQL instance.
The Cloud SQL SocketFactory uses the Application Default Credentials (ADC) strategy for determining what authentication to use. This means the first place it looks for credentials is the GOOGLE_APPLICATION_CREDENTIALS env var, which should be a path to the key file for the testing service account.
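For reference, a minimal sketch of what the SocketFactory setup looks like with plain Spring Boot properties instead of the spring-cloud-gcp starter; the placeholders mirror the ones in the question, and the artifact coordinates are an assumption to verify against Maven Central:
# JDBC URL routed through the Cloud SQL socket factory; no IP allowlisting needed
spring.datasource.url=jdbc:mysql:///<my_database_name>?cloudSqlInstance=<my_instance_connection_name>&socketFactory=com.google.cloud.sql.mysql.SocketFactory
spring.datasource.username=<my_cloud_sql_username>
spring.datasource.password=<my_cloud_sql_password>
This requires the com.google.cloud.sql:mysql-socket-factory-connector-j-8 dependency on the test classpath, and it picks up credentials through ADC exactly as described above.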

Getting an error while calling GET /system/ping

I am getting an error while calling GET /system/ping:
{
  "error": {
    "statusCode": 500,
    "name": "Error",
    "message": "error trying login and get user Context. Error: error trying to enroll user. Error: Enrollment failed with errors [[{\"code\":400,\"message\":\"Authorization failure\"}]]"
  }
}
I created the participant:
Blockchain Participant
{
'$class': 'org.optum.blockchainv5.Participant',
ParticipantId: 'ParticipantId:2',
Name: 'Vipul Bajaj'
}
Then issued an identity to the participant
System Identity
{
userID: 'ParticipantId:2',
userSecret: 'dPJbJBsaOLaf'
}
And then added that identity to the default wallet:
Wallet Identity
{
enrollmentID: 'ParticipantId:2',
enrollmentSecret: 'dPJbJBsaOLaf',
id: 3
}
I then set this wallet identity as the default by calling POST /wallets/1/identities/3/setDefault and got response code 204.
After that, calling GET /system/ping gave me the error above.
Just following up: if you're still getting this error, could you attach a trace log by setting export DEBUG=composer:* and then re-running the REST server? The log file is in a 'logs' directory (relative to where you start composer-rest-server). Then we can see what's going on with the POST.
I had a similar issue. I was trying to deploy a Composer hlfv1 network instance locally and was running the ./createComposerProfile.sh script. This script has the line:
cp "${DIR}"/hlfv1/composer/creds/* ~/.hfc-key-store
This copies all the credentials in your creds folder and overwrites the ones created by composer identity import in your ~/.hfc-key-store.
You could copy the credentials from ~/.hfc-key-store to the creds folder, or comment out this line.
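A sketch of the first workaround, assuming the same ${DIR} layout the script uses; run it before re-executing createComposerProfile.sh so that the script's copy back over ~/.hfc-key-store becomes a no-op:
cp ~/.hfc-key-store/* "${DIR}"/hlfv1/composer/creds/
Commenting the cp line out of the script achieves the same result with less copying.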
