How to run flyway:migrate on a cloned database - clone

We have a production database and we maintain it using Flyway. Recently we cloned our production database to create a UAT database; the UAT database has the same schema and data as production. Now we are trying to run "mvn flyway:migrate" on the UAT database to test a new Flyway script. However, we got:
+---------+-----------------------+---------------------+---------+
| Version | Description           | Installed on        | State   |
+---------+-----------------------+---------------------+---------+
| 0.0.1   | script.1              |                     | <Baseln |
| 0.0.2   | script.2              |                     | <Baseln |
| 0.0.3   | script.3              |                     | <Baseln |
| 0.1.1   | script.4              |                     | <Baseln |
| 0.1.2   | script.5              |                     | <Baseln |
| 0.2.0   | script.6              |                     | <Baseln |
| 0.5.1   | script.7              |                     | <Baseln |
| 0.5.2   | script.8              |                     | <Baseln |
| 0.6.0   | script.9              |                     | <Baseln |
| 0.7.0   | script.10             |                     | <Baseln |
| 0.8.0   | script.11             |                     | <Baseln |
| 0.9.0   | script.12             |                     | <Baseln |
| 0.10.0  | script.13             |                     | <Baseln |
| 0.11.1  | script.14             |                     | <Baseln |
| 0.12.0  | script.15             |                     | <Baseln |
| 0.13.0  | script.16             |                     | <Baseln |
| 0.14.0  | script.17             |                     | <Baseln |
| 0.15.0  | script.18             |                     | <Baseln |
| 0.16.0  | script.19             |                     | <Baseln |
| 0.16.1  | script.20             |                     | <Baseln |
| 0.17.0  | script.21             |                     | <Baseln |
| 0.17.1  | script.22             |                     | <Baseln |
| 0.18.0  | script.23             |                     | <Baseln |
| 1       | << Flyway Baseline >> | 2016-11-07 08:11:33 | Baselin |
| 1.16.0  | script.19             | 2017-02-15 10:03:18 | Future  |
| 1.16.1  | script.20             | 2017-02-15 10:03:18 | Future  |
+---------+-----------------------+---------------------+---------+
script.23 is a new script, and we expected its state to be Pending.
However, the state of all scripts became Below Baseline (<Baseln). I searched related topics for a day but could not find a scenario close to my case. Is there any Flyway (Maven) configuration I can use to run the migrate command on a cloned database? Please help. (My database is SQL Server 2014, Flyway version 4.0, Maven version 3.5, JDK version 1.7.)
Thanks a lot.
Chi-Fu

I think that all versions lower than the baseline are not executed - they are assumed to already belong to the baseline.
If script.23 is a new migration, it should have a version greater than the last applied version, typically 1.18.0 (with the standard Flyway naming convention, V1_18_0__script.23.sql rather than V0_18_0__script.23.sql).
After renaming the file, run a repair before trying to migrate again.
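A minimal sketch of that sequence, assuming the migrations live in the plugin's default src/main/resources/db/migration location (adjust the path to your flyway.locations):

# rename the migration so its version sorts above the baseline (1)
mv src/main/resources/db/migration/V0_18_0__script.23.sql \
   src/main/resources/db/migration/V1_18_0__script.23.sql

# realign Flyway's metadata table, then migrate and check the result
mvn flyway:repair
mvn flyway:migrate
mvn flyway:info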

Related

Using Spring @Value annotation results in field not initialized error in Eclipse

I am currently developing a Spring Boot application in the Eclipse IDE, with a connection class that needs to know which data source to connect to. I decided to supply this property from Spring's application.properties via the @Value annotation:
@Value("${project.datasource}")
private final DataSource DATA_SOURCE;
where DataSource is an enum representing the possible data sources. However, in the constructor below I get a "Blank final field DATA_SOURCE may not have been initialized" error:
private DBConnection() throws SQLException {
    ConnectionConfig config = new ConnectionConfig(DATA_SOURCE);
    connection = DriverManager.getConnection(config.getUrl(), config.getUSERNAME(), config.getPASSWORD());
}
Inserting a default value doesn't work either:
@Value("${project.datasource:POSTGRE_LOCAL}")
still gives the same error.
I tried installing the Spring Tools 4 plugin for Eclipse to check whether this was just Eclipse not understanding the @Value annotation's implications, but it seems that isn't the case. How do I solve this problem? Am I misunderstanding the implications myself?
application.properties:
project.datasource = POSTGRE_LOCAL
Project tree:
| .classpath
| .gitignore
| .project
| HELP.md
| mvnw
| mvnw.cmd
| pom.xml
|
+---.mvn
| \---wrapper
| maven-wrapper.jar
| maven-wrapper.properties
|
+---.settings
| org.eclipse.core.resources.prefs
| org.eclipse.jdt.core.prefs
| org.eclipse.m2e.core.prefs
| org.springframework.ide.eclipse.prefs
|
+---src
| +---main
| | +---java
| | | \---org
| | | \---ingsw21
| | | \---backend
| | | +---connection
| | | | DBConnection.java
| | | |
| | | +---controllers
| | | | UserController.java
| | | |
| | | +---DAOs
| | | | DAOUtente.java
| | | |
| | | +---DAOSQL
| | | | DAOSQLUtente.java
| | | |
| | | +---entities
| | | | Utente.java
| | | |
| | | +---enums
| | | | DataSource.java
| | | |
| | | \---exceptions
| | | BadRequestWebException.java
| | | DataAccessException.java
| | |
| | \---resources
| | application.properties
| |
| \---test
| \---java
| \---org
| \---ingsw21
| \---backend
| \---BackEnd
| BackEndApplicationTests.java
|
\---target
+---classes
| | application.properties
| |
| \---org
| \---ingsw21
| \---backend
| +---connection
| | DBConnection$ConnectionConfig.class
| | DBConnection.class
| |
| +---controllers
| | UserController.class
| |
| +---DAOs
| | DAOUtente.class
| |
| +---DAOSQL
| | DAOSQLUtente.class
| |
| +---entities
| | Utente.class
| |
| +---enums
| | DataSource.class
| |
| \---exceptions
| BadRequestWebException.class
| DataAccessException.class
|
\---test-classes
\---org
You cannot add @Value to a final field.
@Value("${project.datasource}")
private DataSource DATA_SOURCE;
should work just fine.
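Note also that @Value is only processed on Spring-managed beans; if the class is instantiated manually (for example through a private-constructor singleton), nothing will ever be injected. If you want the field to stay final, constructor injection is an option. A minimal sketch, assuming the class is registered as a Spring component and DataSource is the project's own enum (the class name here is illustrative):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class DataSourceHolder {

    // Can stay final: Spring resolves the @Value parameter before calling the constructor
    // and converts the "POSTGRE_LOCAL" string to the matching enum constant.
    private final DataSource dataSource;

    public DataSourceHolder(@Value("${project.datasource}") DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public DataSource getDataSource() {
        return dataSource;
    }
}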
Reverse the "$" and "{". The expression syntax is "${...}".

How can all tables of a database be sent to Elasticsearch?

Here's my situation.
food_database is a MySQL database.
There are 130 tables in food_database.
I would like to send all 130 tables to Elasticsearch via the Logstash JDBC input.
-> How can all the tables of the database be sent to Elasticsearch?
My conf file (attempt):
input {
  jdbc {
    clean_run => true
    jdbc_driver_library => "C:\ElasticSearch\mysql-connector-java-8.0.23\mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/food_database?useSSL=false&user=root&password=1234"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "select * from ??????"
    #use_column_value => true
    #tracking_column => "jobid"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "test_indexfile"
  }
  stdout {
    codec => rubydebug
  }
}
But I don't know how to send all 130 tables in food_database.
I found similar material through googling, but I couldn't solve my problem with it:
-> save whole database to elasticsearch using logstash
-> https://dzone.com/articles/migrating-mysql-data-to-elasticsearch-using-logsta
Please help me.
Update (tables in food_database):
+--------------------------------------+
| Tables_in_food_database |
+--------------------------------------+
| access_token |
| activity |
| address |
| answer_abuse_reason |
| answer_report_abuse |
| attribute |
| attribute_group |
| banner |
| banner_group |
| banner_image |
| banner_image_description |
| blog |
| blog_related |
| category |
| category_commission |
| category_description |
| category_path |
| contact |
| country |
| coupon |
| coupon_product_category |
| coupon_usage |
| coupon_usage_product |
| currency |
| customer |
| customer_activity |
| customer_cart |
| customer_document |
| customer_group |
| customer_ip |
| customer_transaction |
| customer_wishlist |
| delivery_allocation |
| delivery_location |
| delivery_location_to_location |
| delivery_person |
| delivery_person_to_location |
| delivery_status |
| email_template |
| geo_zone |
| jobs |
| language |
| login_log |
| manufacturer |
| migrations |
| order |
| order_cancel_reason |
| order_history |
| order_log |
| order_product |
| order_product_log |
| order_status |
| order_total |
| page |
| page_group |
| payment |
| payment_archive |
| payment_items |
| payment_items_archive |
| paypal_order |
| paypal_order_transaction |
| permission_module |
| permission_module_group |
| plugins |
| price_update_file_log |
| product |
| product_answer |
| product_answer_like_dislike |
| product_attribute |
| product_description |
| product_discount |
| product_image |
| product_price_log |
| product_question |
| product_rating |
| product_related |
| product_special |
| product_stock_alert |
| product_tag |
| product_tire_price |
| product_to_category |
| product_varient |
| product_varient_option |
| product_varient_option_details |
| product_varient_option_image |
| product_view_log |
| quotation |
| razorpay_order |
| razorpay_order_transaction |
| service |
| service_category |
| service_category_path |
| service_enquiry |
| service_image |
| service_to_category |
| sessions |
| settings |
| settlement |
| settlement_item |
| site_filter |
| site_filter_category |
| site_filter_section |
| site_filter_section_item |
| sku |
| stock_log |
| stock_status |
| stripe_order |
| stripe_order_transaction |
| tax |
| trend |
| trend_image |
| trend_recommend |
| user_group |
| users |
| varients |
| varients_value |
| vendor |
| vendor_category |
| vendor_coupon |
| vendor_coupon_product_category |
| vendor_global_setting |
| vendor_invoice |
| vendor_invoice_item |
| vendor_order_archive |
| vendor_order_archive_log |
| vendor_order_products |
| vendor_order_status |
| vendor_orders |
| vendor_orders_log |
| vendor_payment |
| vendor_payment_archive |
| vendor_product |
| widget |
| widget_item |
| zone |
| zone_to_geo_zone |
+--------------------------------------+
136 rows in set (0.00 sec)
I would like to send the contents of all 136 tables to Elasticsearch via Logstash.
If running a script next to Logstash is an option, I would go for the following approach:
Create a bash script (or a script in whatever language you prefer), put it in cron, and have it run a simple 'show tables', then use the output to create one config file per table, containing only the INPUT part for Logstash, with a naming convention like 'INPUT_tablename.conf'. The script should create a config like the one shown above for each table that exists.
Make sure it lists the INPUT_* files in the directory and deletes the ones whose tables no longer exist.
Make sure that when a file already exists, the script does not touch it.
Keep your FILTER.conf and OUTPUT.conf in the same directory.
Put your Logstash in config auto-reload mode.
Doing it this way separates out the part you are struggling with and allows the database to change: tables can be added, and old ones can be deleted or renamed.
I learned to do it this way on clusters that I knew would become very large, where I needed to know when maximum I/O was being hit so I could add new nodes to the right layer without killing the complete setup.
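A rough sketch of such a generator script, assuming a local MySQL client, the credentials from the question, and /etc/logstash/conf.d as the config directory (all of these are assumptions to adjust):

#!/bin/bash
# Generate one Logstash jdbc input config per table in food_database.
CONF_DIR=/etc/logstash/conf.d
DB=food_database

# -N -B: no column headers, batch output (one table name per line)
tables=$(mysql -uroot -p1234 -N -B -e "SHOW TABLES IN ${DB};")

for t in $tables; do
  conf="${CONF_DIR}/INPUT_${t}.conf"
  [ -f "$conf" ] && continue          # never touch a config that already exists
  cat > "$conf" <<EOF
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/${DB}?useSSL=false"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "SELECT * FROM ${t}"
    type => "${t}"
  }
}
EOF
done

# Remove configs for tables that no longer exist
for f in "${CONF_DIR}"/INPUT_*.conf; do
  name=$(basename "$f" .conf); name=${name#INPUT_}
  echo "$tables" | grep -qx "$name" || rm -f "$f"
done

FILTER.conf and OUTPUT.conf stay in the same directory, and Logstash picks up the generated files when it runs with config.reload.automatic enabled.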

Config processing error on CircleCI when I run a build

I'm facing a config processing error on CircleCI.
Tools I use:
AWS CloudFront
AWS S3
CircleCI
Situation:
I set everything up on AWS and added the values to the environment variables in CircleCI. I committed to git, the build ran on CircleCI, and an error occurred that I could not get past.
This is my repo.
Error:
bin/sh -eo pipefail
ERROR IN CONFIG FILE:
[#/jobs] 8 schema violations found
Any string key is allowed as job name.
1. [#/jobs/deploy-to-aws-cloudfront] 0 subschemas matched instead of one
| 1. [#/jobs/deploy-to-aws-cloudfront] only 1 subschema matches out of 2
| | 1. [#/jobs/deploy-to-aws-cloudfront] 3 schema violations found
| | | 1. [#/jobs/deploy-to-aws-cloudfront] required key [steps] not found
| | | 2. [#/jobs/deploy-to-aws-cloudfront/docker/0] 2 schema violations found
| | | | 1. [#/jobs/deploy-to-aws-cloudfront/docker/0] extraneous key [steps] is not permitted
| | | | | Permitted keys:
| | | | | - image
| | | | | - name
| | | | | - entrypoint
| | | | | - command
| | | | | - user
| | | | | - environment
| | | | | - aws_auth
| | | | | - auth
| | | | | Passed keys:
| | | | | - image
| | | | | - working_directory
| | | | | - steps
| | | | 2. [#/jobs/deploy-to-aws-cloudfront/docker/0] extraneous key [working_directory] is not permitted
| | | | | Permitted keys:
| | | | | - image
| | | | | - name
| | | | | - entrypoint
| | | | | - command
| | | | | - user
| | | | | - environment
| | | | | - aws_auth
| | | | | - auth
| | | | | Passed keys:
| | | | | - image
| | | | | - working_directory
| | | | | - steps
| 2. [#/jobs/deploy-to-aws-cloudfront] expected type: String, found: Mapping
| | Job may be a string reference to another job
2. [#/jobs/deploy-to-aws-s3] 0 subschemas matched instead of one
| 1. [#/jobs/deploy-to-aws-s3] only 1 subschema matches out of 2
| | 1. [#/jobs/deploy-to-aws-s3] 3 schema violations found
| | | 1. [#/jobs/deploy-to-aws-s3] required key [steps] not found
| | | 2. [#/jobs/deploy-to-aws-s3/docker/0] 2 schema violations found
| | | | 1. [#/jobs/deploy-to-aws-s3/docker/0] extraneous key [steps] is not permitted
| | | | | Permitted keys:
| | | | | - image
| | | | | - name
| | | | | - entrypoint
| | | | | - command
| | | | | - user
| | | | | - environment
| | | | | - aws_auth
| | | | | - auth
| | | | | Passed keys:
| | | | | - image
| | | | | - working_directory
| | | | | - steps
| | | | 2. [#/jobs/deploy-to-aws-s3/docker/0] extraneous key [working_directory] is not permitted
| | | | | Permitted keys:
| | | | | - image
| | | | | - name
| | | | | - entrypoint
| | | | | - command
| | | | | - user
| | | | | - environment
| | | | | - aws_auth
| | | | | - auth
| | | | | Passed keys:
| | | | | - image
| | | | | - working_directory
| | | | | - steps
| 2. [#/jobs/deploy-to-aws-s3] expected type: String, found: Mapping
| | Job may be a string reference to another job
-------
Warning: This configuration was auto-generated to show you the message above.
Don't rerun this job. Rerunning will have no effect.
false
The reason for the config processing error was that the config file violated CircleCI's schema.
In my case, it was an indentation error.
https://github.com/CircleCI-Public/circleci-cli/issues/326
This post was helpful for solving my error.
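Judging from the error output above, steps and working_directory ended up nested under the docker image entry instead of at the job level. A correctly indented job looks roughly like this (image and steps are illustrative, not taken from the repo):

version: 2
jobs:
  deploy-to-aws-s3:
    docker:
      - image: circleci/node:10      # only image-level keys belong here
    working_directory: ~/project     # job-level key
    steps:                           # job-level key
      - checkout
      - run: echo "deploy to S3 here"

The same applies to the deploy-to-aws-cloudfront job.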

How to launch an LXD container on another node and exchange ssh keys with the container?

How do I launch an LXD container on another node and exchange SSH keys with the container?
That is, how do I give Ansible direct SSH access to the LXD container?
I am aware of the authorized_key module; however, that would only exchange keys between Ansible and the host, not between Ansible and the LXD container.
Please see the below diagram which describes the machine layout:
+----------------------------+ +----------------------------+
| | | |
| Baremetal Machine <------------------+ Ansible Machine |
| + | | |
| | | | |
| | | | |
| | | | |
| +--------------------+ | | |
| | | | | | |
| | v | | | |
| | LXD Container | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| +--------------------+ | | |
| | | |
+----------------------------+ +----------------------------+
Start containers from images that support some sort of provisioning system.
The most common is cloud-init; it is already included in many official cloud images.
When you create such a container, just add the required configuration settings via the user.user-data config option, and they will be applied automatically when the container starts.
The lxd_container module supports a config parameter for setting container configuration options.
You can find useful cloud config examples here.
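A minimal sketch of an Ansible task that does this, assuming the lxd_container module (community.general.lxd_container in newer Ansible), an Ubuntu cloud image with cloud-init, and a local public key at ~/.ssh/id_rsa.pub (names, remote, and key path are illustrative):

- name: Launch an LXD container and inject an SSH key via cloud-init
  hosts: baremetal
  tasks:
    - name: Create and start the container
      lxd_container:
        name: web01
        state: started
        source:
          type: image
          mode: pull
          server: https://cloud-images.ubuntu.com/releases
          protocol: simplestreams
          alias: "20.04"
        config:
          # cloud-init user-data: authorize the controller's key for the default user
          user.user-data: |
            #cloud-config
            ssh_authorized_keys:
              - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        wait_for_ipv4_addresses: true

Once the container has an address, Ansible can reach it over SSH directly, for example by adding that IP to the inventory.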

Chef solo can't find cookbook during kitchen test run

I'm trying to add Test Kitchen to a bunch of cookbooks that we use to provision a Jenkins CI instance.
We use Berkshelf to manage dependencies. The file structure is as follows:
| .gitignore
| .kitchen.yml
| Berksfile
| Berksfile.lock
| bootstrap.sh
| chefignore
| metadata.rb
| provision.sh
| readme.md
| solo.json
| solo.rb
| tree.txt
| VERSION
|
+---.kitchen
| | default-ubuntu-1404.yml
| |
| +---kitchen-vagrant
| | \---kitchen-chef-jenkins-default-ubuntu-1404
| | | Vagrantfile
| | |
| | \---.vagrant
| | \---machines
| | \---default
| | \---virtualbox
| | action_set_name
| | id
| | index_uuid
| | private_key
| | synced_folders
| |
| \---logs
| default-centos-71.log
| default-ubuntu-1404.log
| kitchen.log
|
+---site-cookbooks
| +---ant
| | | .gitignore
| | | .kitchen.yml
| | | Berksfile
| | | CHANGELOG.md
| | | chefignore
| | | CONTRIBUTING.md
| | | LICENSE
| | | metadata.rb
| | | README.md
| | | TESTING.md
| | | Thorfile
| | |
| | +---attributes
| | | default.rb
| | |
| | +---providers
| | | library.rb
| | |
| | +---recipes
| | | default.rb
| | | install_package.rb
| | | install_source.rb
| | |
| | +---resources
| | | library.rb
| | |
| | \---templates
| | \---default
| | ant_home.sh.erb
| |
| +---haxe_cookbook
| | | CHANGELOG.md
| | | metadata.rb
| | | README.md
| | |
| | \---recipes
| | default.rb
| |
| \---mbp-jenkins
| | Berksfile
| | Berksfile.lock
| | CHANGELOG.md
| | chefignore
| | metadata.rb
| | README.md
| |
| +---recipes
| | default.rb
| |
| +---resources
| | | commons-net-3.3.jar
| | |
| | +---css
| | | style.css
| | |
| | +---images
| | | logo-mbp.png
| | | web.png
| | |
| | \---js
| | scripts.js
| |
| \---templates
| +---default
| | | config.xml
| | |
| | \---secrets
| | hudson.console.AnnotatedLargeText.consoleAnnotator
| | hudson.model.Job.serverCookie
| | hudson.util.Secret
| | jenkins.security.ApiTokenProperty.seed
| | jenkins.slaves.JnlpSlaveAgentProtocol.secret
| | master.key
| | org.acegisecurity.ui.rememberme.TokenBasedRememberMeServices.mac
| | org.jenkinsci.main.modules.instance_identity.InstanceIdentity.KEY
| |
| \---emails
| build-complete.html.groovy
| build-started.html.groovy
|
\---test
\---integration
\---default
Executing:
kitchen converge default-ubuntu-1404
Results in the following error:
[2015-08-24T09:13:24+00:00] ERROR: Cookbook mbp-jenkins not found. If you're loading mbp-jenkins from another cookbook, make sure you configure the dependency in your metadata
[2015-08-24T09:13:24+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
This suggests that chef-solo can't find the mbp-jenkins cookbook. I would expect it to find it, though, as we define the cookbook paths in solo.rb as follows:
root = File.absolute_path(File.dirname(__FILE__))
file_cache_path 'cache'
cookbook_path [root + "/cookbooks", root + "/site-cookbooks",root + "/berks-cookbooks"]
I'm not really sure what is going wrong here, so any suggestions would be appreciated.
Update:
I have tried using the chef_zero provisioner; however, that gives me the following output:
================================================================================
Error Resolving Cookbooks for Run List:
================================================================================
Missing Cookbooks:
------------------
No such cookbook: mbp-jenkins
Expanded Run List:
------------------
* mbp-jenkins::default
Have you tried using the chef_zero provisioner instead? I suspect your problem is that chef-solo does not run Berkshelf, which would explain the missing cookbooks.
For example see:
How to customise a tomcat recipe in Chef
Update
The issue appears to be that the cookbooks in the site-cookbooks directory are not being copied over to the target machine.
Seems to me the simplest and best fix is to include the local cookbooks in your Berksfile as follows:
source 'https://supermarket.chef.io'
cookbook 'ant', path: 'site-cookbooks/ant'
cookbook 'haxe_cookbook', path: 'site-cookbooks/haxe_cookbook'
cookbook 'mbp-jenkins', path: 'site-cookbooks/mbp-jenkins'
metadata
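If you also switch to the chef_zero provisioner as suggested above, a minimal .kitchen.yml sketch could look like the following (driver, platform, and run list mirror the setup shown in the question; adjust as needed). With the Berksfile above, Berkshelf can vendor the path-based cookbooks onto the test machine:

---
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-14.04

suites:
  - name: default
    run_list:
      - recipe[mbp-jenkins::default]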
