Execute SQL file on every startup using Liquibase configuration - Spring

I want to create a SQL initialization file to populate test data on every Spring startup:
databaseChangeLog:
  - changeSet:
      id: 0001
      author: test
      dbms: postgres
      changes:
        - sqlFile:
            relativeToChangelogFile: true
            path: data.sql
Do you know how I can configure it to be executed on every Spring startup?

You can use the following attribute; the specific changeSet will then run every time:
- changeSet:
    runAlways: true
https://docs.liquibase.com/concepts/basic/changeset.html
runAlways — Executes the changeset on every run, even if it has been run before.
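Applied to the changelog from your question, a sketch (keeping your id, author, and sqlFile path; adjust as needed) would be:

databaseChangeLog:
  - changeSet:
      id: 0001
      author: test
      dbms: postgres
      # run this changeSet on every startup, even if it has run before
      runAlways: true
      changes:
        - sqlFile:
            relativeToChangelogFile: true
            path: data.sql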

Related

How to resolve "input ConnectedServiceName expects a service connection of type AzureRM" error?

I am learning how to create an Azure pipeline and ran into the following error:
The pipeline is not valid. Job Phase_1: Step AzureResourceGroupDeployment input ConnectedServiceName expects a service connection of type AzureRM but the provided service connection "MY-SERVICE-CONNECTION-NAME" is of type generic.
What am I missing here?
azure-pipelines.yml
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - cosmos
  batch: True
jobs:
  - job: Phase_1
    displayName: Phase 1
    cancelTimeoutInMinutes: 1
    pool:
      vmImage: ubuntu-latest
    steps:
      - checkout: self
      - task: AzureResourceGroupDeployment@2
        displayName: 'Azure Deployment: Create Or Update Resource Group action on DISPLAY-NAME'
        inputs:
          # azureSubscription: 'SUBSCRIPTION'
          ConnectedServiceName: MY-SERVICE-CONNECTION-NAME
          resourceGroupName: DISPLAY-NAME
          location: West US # TBD
          csmFile: cosmos/deploy.json
          csmParametersFile: cosmos/parameters-dev.json
          deploymentName: DEPLOYMENT-NAME
I tried values from "service connections", but I am not sure what the issue is here.
The error message is telling you the exact problem. Your service connection needs to be an Azure Resource Manager service connection. Create a service connection of the appropriate type.
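For illustration, once you have created a service connection of type Azure Resource Manager in your project settings, the task can reference it by name. A sketch, where my-azurerm-connection is a hypothetical connection name (azureSubscription is the YAML alias for the ConnectedServiceName input):

- task: AzureResourceGroupDeployment@2
  inputs:
    # must point at an Azure Resource Manager service connection, not a generic one
    azureSubscription: my-azurerm-connection
    resourceGroupName: DISPLAY-NAME
    location: West US
    csmFile: cosmos/deploy.json
    csmParametersFile: cosmos/parameters-dev.json
    deploymentName: DEPLOYMENT-NAME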
I can reproduce the issue:
As Daniel said, this is caused by the service connection type.
This document explains the parameters of the task:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceGroupDeploymentV2/README.md#parameters-of-the-task
Let me share a little trick that can help you avoid this type of problem in the future. When you type '- task: sometask@version' in the correct place in the pipeline's YAML file in DevOps, you will see a 'Settings' button in the upper left. Click it and you can set the values through the UI, which filters the appropriate options for you:

Taurus YAML Runtime Property Change for JMeter

I am trying to invoke a JMeter script with user-defined properties via YAML, which I am able to change and execute with the configuration below.
However, if the test has already started and I need to increase the users on any thread without stopping the test, how can I achieve that? Let's say the test was started with a Thread1 value of 30 and now I need to change it to 50 dynamically at runtime. I did not find a way myself. Please advise.
execution:
- #concurrency: ${__P(my_conc,3)}  # use `my_conc` prop or default=3 if property isn't found
  ramp-up: 1
  hold-for: ${__P(my_hold,1)}
  scenario: simple
modules:
  jmeter:
    properties:
      my_hold: 60
scenarios:
  simple:
    variables:
      # User DEFINED variable
      Sampler1: TestSampler1
      Sampler2: TestSampler2
    script: SampleYAMLJMeter.jmx
    properties:
      # Thread Level variable
      thread1: 30
      thread2: 45
Add the following global JMeter properties:
modules:
  jmeter:
    properties:
      beanshell.server.port: 9000
      beanshell.server.file: ../extras/startup.bsh
Create the setconc.bsh file under the .bzt/jmeter-taurus/x.x.x/lib folder and put the following code into it:
org.apache.jmeter.util.JMeterUtils.setProperty("my_conc", args[0]);
That's it; whenever you need to change the concurrency of your test, just execute the following command from the .bzt/jmeter-taurus/x.x.x/lib folder:
java -jar bshclient.jar localhost 9000 setconc.bsh 1234
replacing 1234 with the actual concurrency you want to achieve.
More information:
Beanshell server
How to Change JMeter´s Load During Runtime
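Putting this together with the YAML from the question, a combined sketch (reusing the my_conc property the question already references) might look like the following; the bshclient.jar command shown above can then update my_conc while the test is running:

execution:
- concurrency: ${__P(my_conc,3)}  # read concurrency from the my_conc property, default 3
  ramp-up: 1
  hold-for: ${__P(my_hold,1)}
  scenario: simple
modules:
  jmeter:
    properties:
      my_hold: 60
      # Beanshell server so properties such as my_conc can be changed at runtime
      beanshell.server.port: 9000
      beanshell.server.file: ../extras/startup.bsh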

How to update loggingService of container.v1.cluster with deployment-manager

I want to set the loggingService field of an existing container.v1.cluster through deployment-manager.
I have the following config
resources:
- name: px-cluster-1
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      description: "dev cluster"
      initialClusterVersion: "1.13"
      nodePools:
      - name: cluster-pool
        config:
          machineType: "n1-standard-1"
          oauthScopes:
          - https://www.googleapis.com/auth/compute
          - https://www.googleapis.com/auth/devstorage.read_only
          - https://www.googleapis.com/auth/logging.write
          - https://www.googleapis.com/auth/monitoring
        management:
          autoUpgrade: true
          autoRepair: true
        initialNodeCount: 1
        autoscaling:
          enabled: true
          minNodeCount: 3
          maxNodeCount: 10
      ipAllocationPolicy:
        useIpAliases: true
      loggingService: "logging.googleapis.com/kubernetes"
      masterAuthorizedNetworksConfig:
        enabled: false
      locations:
      - "europe-west1-b"
      - "europe-west1-c"
When I try to run gcloud deployment-manager deployments update ..., I get the following error
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1582040492957-59edb819a5f3c-7155f798-5ba37285]: errors:
- code: NO_METHOD_TO_UPDATE_FIELD
message: No method found to update field 'cluster' on resource 'px-cluster-1' of
type 'container.v1.cluster'. The resource may need to be recreated with the new
field.
The same succeeds if I remove loggingService.
Is there a way to update loggingService using deployment-manager without deleting the cluster?
The error NO_METHOD_TO_UPDATE_FIELD is due to updating "initialClusterVersion" when you issued the update call to GKE. This field is only used on creation of the cluster, and the type definition doesn't currently allow it to be updated later. It should remain static at the original value (it has no effect on the deployment moving forward), or try deleting/commenting out that line.
Even so, there is also no method to update the logging service; Deployment Manager doesn't actually have many update methods. So try using the gcloud command to update the cluster directly. Keep in mind that you have to set the monitoring service together with the logging service, so the command would look like:
gcloud container clusters update px-cluster-1 --logging-service=logging.googleapis.com/kubernetes --monitoring-service=monitoring.googleapis.com/kubernetes --zone=europe-west1-b

Google Cloud Build - How to Cache Bazel?

I recently started using Cloud Build with Bazel.
So I have a basic cloudbuild.yaml
steps:
  - id: 'run unit tests'
    name: gcr.io/cloud-builders/bazel
    args: ['test', '//...']
which runs all tests of my Bazel project.
But as you can see from this screenshot, every build takes around 4 minutes, although I haven't touched any code that would affect my tests.
Locally, running the tests for the first time takes about 1 minute. But running the tests a second time, with the help of Bazel's cache, takes only a few seconds.
So my goal is to use the Bazel cache with Google Cloud Build.
Update
As suggested by Thierry Falvo, I've looked into those recommendations, and thus I tried to add the following to my cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://cents-ideas-build-cache/bazel-bin', 'bazel-bin']
  - id: 'run unit tests'
    name: gcr.io/cloud-builders/bazel
    args: ['test', '//...']
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'bazel-bin', 'gs://cents-ideas-build-cache/bazel-bin']
Although I created the bucket and folder, I get this error:
CommandException: No URLs matched
I think that rather than cache discrete results (artifacts), you want to use GCS (cloud storage) as a bazel remote cache.
- name: gcr.io/cloud-builders/bazel
  args: ['test', '--remote_cache=https://storage.googleapis.com/<bucketname>', '--google_default_credentials', '--test_output=errors', '//...']
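For context, a sketch of what the full cloudbuild.yaml could look like with this approach, reusing the cents-ideas-build-cache bucket name from the question (adjust to your own bucket):

steps:
  - id: 'run unit tests'
    name: gcr.io/cloud-builders/bazel
    args:
      - test
      # use the GCS bucket as a Bazel remote cache instead of copying bazel-bin around
      - '--remote_cache=https://storage.googleapis.com/cents-ideas-build-cache'
      # authenticate with the Cloud Build service account credentials
      - '--google_default_credentials'
      - '--test_output=errors'
      - '//...'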

grails database migration plugin problems after upgrade to grails 3

I was using a previous version of the grails-database-migration plugin for a while and never had any big issues with it. However, I recently upgraded the whole project to Grails 3.0.9 and did some additional development; the behavior is as follows:
Imported the current prod DB structure into my local machine (that DB copy is without the latest changes and new entities)
Executed: grails -Dgrails.env=staging dbm-gorm-diff changlog.xml
What I expected at this point is a new changlog.xml file with all the changes to existing entities as well as the new ones.
What I got:
Newly defined entities automatically got added to the DB.
The changes in changlog.xml only included the changes to already existing tables, such as:
Also, if I try running grails -Dgrails.env=staging run-app:
ERROR grails.boot.GrailsApp - Application startup failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'springLiquibase_dataSource': Invocation of init method failed; nested exception is java.lang.NoSuchMethodError: liquibase.integration.spring.SpringLiquibase.createDatabase(Ljava/sql/Connection;Lliquibase/resource/ResourceAccessor;)Lliquibase/database/Database;
FAILURE: Build failed with an exception.
What went wrong: Execution failed for task ':bootRun'.
Process 'command '/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/bin/java''
finished with non-zero exit value 1
...
...
Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. | Error Failed to start server (Use --stacktrace to see the full trace)
Here is the relevant portion of my application.yml:
dataSource:
    pooled: true
    url: jdbc:mysql://127.0.0.1:3306/triz?useUnicode=yes&characterEncoding=UTF-8
    driverClassName: "com.mysql.jdbc.Driver"
    jmxExport: true
    username: root
    password: password
    dialect: org.hibernate.dialect.MySQL5InnoDBDialect
    properties:
        jmxEnabled: true
        initialSize: 5
        maxActive: 50
        minIdle: 5
        maxIdle: 25
        maxWait: 10000
        maxAge: 600000
        timeBetweenEvictionRunsMillis: 5000
        minEvictableIdleTimeMillis: 60000
        validationQuery: SELECT 1
        validationQueryTimeout: 3
        validationInterval: 15000
        testOnBorrow: true
        testWhileIdle: true
        testOnReturn: false
        jdbcInterceptors: ConnectionState
        defaultTransactionIsolation: 2
environments:
    development:
        dataSource:
            dbCreate: create
            # url: jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
    test:
        dataSource:
            dbCreate: update
            url: jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
    staging:
        dataSource:
            url: jdbc:mysql://127.0.0.1:3306/triz_staging?useUnicode=yes&characterEncoding=UTF-8
and my build.gradle:
buildscript {
    ext {
        grailsVersion = project.grailsVersion
    }
    repositories {
        mavenCentral()
        mavenLocal()
        maven { url "https://repo.grails.org/grails/core" }
    }
    dependencies {
        classpath "org.grails:grails-gradle-plugin:$grailsVersion"
        classpath 'com.bertramlabs.plugins:asset-pipeline-gradle:2.5.0'
        // classpath 'com.bertramlabs.plugins:less-asset-pipeline:2.6.7'
        classpath "org.grails.plugins:hibernate:4.3.10.5"
        classpath 'org.grails.plugins:database-migration:2.0.0.RC4'
    }
}
...
...
dependencies {
    ...
    compile 'org.liquibase:liquibase-core:3.3.2'
    runtime 'org.grails.plugins:database-migration:2.0.0.RC4'
}
UPDATE
I have another way to approach this problem:
My plan was to generate a changelog based on my current prod DB and then generate a diff for the changes I made. Sounds simple and straightforward; however, it didn't work out as expected. Here is what I did:
Dumped the prod DB
Removed the Liquibase tables
Ran: grails dbm-generate-changelog changelog-init.xml --add
At this point, I expected changelog-init.xml to contain the current state of the DB. But instead, it applied the changes based on my models first, and then tried generating the diff. Eventually, I ended up with a changelog including my entire existing DB with the changes from GORM already applied.
What am I doing wrong here?
Additional Observations
It looks like whenever I try to run ANY migration-related command, Grails applies all the changes beforehand, even though my config says:
staging:
    dataSource:
        dbCreate: ~
        url: jdbc:mysql://127.0.0.1:3306/triz_staging?useUnicode=yes&characterEncoding=UTF-8
        properties:
            jmxEnabled: true
I also tried completely removing dbCreate. It didn't change anything...
I am at a loss and have no idea where to move next!
Well, here is the deal...
I am not sure if that was the real reason, but all I did was move the datasource config from application.yml to application.groovy, and everything went back to normal.
I would be happy to hear your thoughts.
Thanks.
