I'm facing a Config Processing error on CircleCI.
Tools I use
AWS CloudFront
AWS S3
CircleCI
Situation
I set things up on AWS and added the values to the Environment Variables on CircleCI. I committed to git and ran a build on CircleCI, but an error occurs and I cannot get past it.
This is my repo
Error
#!/bin/sh -eo pipefail
ERROR IN CONFIG FILE:
[#/jobs] 8 schema violations found
Any string key is allowed as job name.
1. [#/jobs/deploy-to-aws-cloudfront] 0 subschemas matched instead of one
| 1. [#/jobs/deploy-to-aws-cloudfront] only 1 subschema matches out of 2
| | 1. [#/jobs/deploy-to-aws-cloudfront] 3 schema violations found
| | | 1. [#/jobs/deploy-to-aws-cloudfront] required key [steps] not found
| | | 2. [#/jobs/deploy-to-aws-cloudfront/docker/0] 2 schema violations found
| | | | 1. [#/jobs/deploy-to-aws-cloudfront/docker/0] extraneous key [steps] is not permitted
| | | | | Permitted keys:
| | | | | - image
| | | | | - name
| | | | | - entrypoint
| | | | | - command
| | | | | - user
| | | | | - environment
| | | | | - aws_auth
| | | | | - auth
| | | | | Passed keys:
| | | | | - image
| | | | | - working_directory
| | | | | - steps
| | | | 2. [#/jobs/deploy-to-aws-cloudfront/docker/0] extraneous key [working_directory] is not permitted
| | | | | Permitted keys:
| | | | | - image
| | | | | - name
| | | | | - entrypoint
| | | | | - command
| | | | | - user
| | | | | - environment
| | | | | - aws_auth
| | | | | - auth
| | | | | Passed keys:
| | | | | - image
| | | | | - working_directory
| | | | | - steps
| 2. [#/jobs/deploy-to-aws-cloudfront] expected type: String, found: Mapping
| | Job may be a string reference to another job
2. [#/jobs/deploy-to-aws-s3] 0 subschemas matched instead of one
| 1. [#/jobs/deploy-to-aws-s3] only 1 subschema matches out of 2
| | 1. [#/jobs/deploy-to-aws-s3] 3 schema violations found
| | | 1. [#/jobs/deploy-to-aws-s3] required key [steps] not found
| | | 2. [#/jobs/deploy-to-aws-s3/docker/0] 2 schema violations found
| | | | 1. [#/jobs/deploy-to-aws-s3/docker/0] extraneous key [steps] is not permitted
| | | | | Permitted keys:
| | | | | - image
| | | | | - name
| | | | | - entrypoint
| | | | | - command
| | | | | - user
| | | | | - environment
| | | | | - aws_auth
| | | | | - auth
| | | | | Passed keys:
| | | | | - image
| | | | | - working_directory
| | | | | - steps
| | | | 2. [#/jobs/deploy-to-aws-s3/docker/0] extraneous key [working_directory] is not permitted
| | | | | Permitted keys:
| | | | | - image
| | | | | - name
| | | | | - entrypoint
| | | | | - command
| | | | | - user
| | | | | - environment
| | | | | - aws_auth
| | | | | - auth
| | | | | Passed keys:
| | | | | - image
| | | | | - working_directory
| | | | | - steps
| 2. [#/jobs/deploy-to-aws-s3] expected type: String, found: Mapping
| | Job may be a string reference to another job
-------
Warning: This configuration was auto-generated to show you the message above.
Don't rerun this job. Rerunning will have no effect.
false
The reason for the Config Processing error was that my config did not match CircleCI's schema.
In my case, it was an indentation error.
https://github.com/CircleCI-Public/circleci-cli/issues/326
This post was helpful for solving my error.
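As a rough illustration, a sketch of what the corrected layout could look like: steps and working_directory belong at the job level, not nested under the docker image entry. The image and commands below are placeholders, since the original config.yml is not shown.

version: 2.1
jobs:
  deploy-to-aws-s3:                 # job name taken from the error message
    docker:
      - image: cimg/base:stable     # placeholder image
    working_directory: ~/repo
    steps:                          # steps must sit at this level, not under docker
      - checkout
      - run: echo "deploy to S3 here"   # placeholder command
workflows:
  deploy:
    jobs:
      - deploy-to-aws-s3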
Related
I am currently developing a Spring Boot application in the Eclipse IDE with a Connection class which needs to know which data source to connect to. I decided to supply this property from Spring's application.properties, through the @Value annotation:
@Value("${project.datasource}")
private final DataSource DATA_SOURCE;
where DataSource is an enum representing the possible data sources. However, in the following constructor I get a "Blank final field DATA_SOURCE may not have been initialized" error:
private DBConnection() throws SQLException {
    ConnectionConfig config = new ConnectionConfig(DATA_SOURCE);
    connection = DriverManager.getConnection(config.getUrl(), config.getUSERNAME(), config.getPASSWORD());
}
Inserting a default value doesn't work, either:
@Value("${project.datasource:POSTGRE_LOCAL}")
still gives the same error.
I tried to install the Spring Tools 4 plugin for Eclipse to check if this was just Eclipse not understanding the @Value annotation's implications, but it seems like this isn't the case. How do I solve this problem? Am I misunderstanding the implications myself?
application.properties:
project.datasource = POSTGRE_LOCAL
Project tree:
| .classpath
| .gitignore
| .project
| HELP.md
| mvnw
| mvnw.cmd
| pom.xml
|
+---.mvn
| \---wrapper
| maven-wrapper.jar
| maven-wrapper.properties
|
+---.settings
| org.eclipse.core.resources.prefs
| org.eclipse.jdt.core.prefs
| org.eclipse.m2e.core.prefs
| org.springframework.ide.eclipse.prefs
|
+---src
| +---main
| | +---java
| | | \---org
| | | \---ingsw21
| | | \---backend
| | | +---connection
| | | | DBConnection.java
| | | |
| | | +---controllers
| | | | UserController.java
| | | |
| | | +---DAOs
| | | | DAOUtente.java
| | | |
| | | +---DAOSQL
| | | | DAOSQLUtente.java
| | | |
| | | +---entities
| | | | Utente.java
| | | |
| | | +---enums
| | | | DataSource.java
| | | |
| | | \---exceptions
| | | BadRequestWebException.java
| | | DataAccessException.java
| | |
| | \---resources
| | application.properties
| |
| \---test
| \---java
| \---org
| \---ingsw21
| \---backend
| \---BackEnd
| BackEndApplicationTests.java
|
\---target
+---classes
| | application.properties
| |
| \---org
| \---ingsw21
| \---backend
| +---connection
| | DBConnection$ConnectionConfig.class
| | DBConnection.class
| |
| +---controllers
| | UserController.class
| |
| +---DAOs
| | DAOUtente.class
| |
| +---DAOSQL
| | DAOSQLUtente.class
| |
| +---entities
| | Utente.class
| |
| +---enums
| | DataSource.class
| |
| \---exceptions
| BadRequestWebException.class
| DataAccessException.class
|
\---test-classes
\---org
You cannot add @Value to a final field.
@Value("${project.datasource}")
private DataSource DATA_SOURCE;
should work just fine.
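If the field really needs to stay final, a sketch of constructor injection instead, assuming DBConnection is registered as a Spring bean and that DataSource is the question's own enum from org.ingsw21.backend.enums; everything else here is assumed:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import org.ingsw21.backend.enums.DataSource;

@Component
public class DBConnection {

    // final is fine here because Spring supplies the value through the constructor
    private final DataSource dataSource;

    public DBConnection(@Value("${project.datasource}") DataSource dataSource) {
        this.dataSource = dataSource;
    }
}

Spring converts the property string (e.g. POSTGRE_LOCAL) to the matching enum constant automatically.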
Reverse the "$" and "{". The expression syntax is "${...}".
Here's my situation.
food_database is in MySQL.
There are 130 tables in food_database.
I would like to send the 130 tables to Elasticsearch via the Logstash JDBC input.
-> How can all the tables of a database be sent to Elasticsearch?
My conf file (attempt)
input {
  jdbc {
    clean_run => true
    jdbc_driver_library => "C:\ElasticSearch\mysql-connector-java-8.0.23\mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/food_database?useSSL=false&user=root&password=1234"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "select * from ??????"
    #use_column_value => true
    #tracking_column => "jobid"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "test_indexfile"
  }
  stdout {
    codec => rubydebug
  }
}
But I don't know how to send all 130 tables in food_database.
I found a similar question through googling, but I couldn't solve it.
-> save whole database to elasticsearch using logstash
-> https://dzone.com/articles/migrating-mysql-data-to-elasticsearch-using-logsta
Please help me.
Update (tables in food_database)
+--------------------------------------+
| Tables_in_food_database |
+--------------------------------------+
| access_token |
| activity |
| address |
| answer_abuse_reason |
| answer_report_abuse |
| attribute |
| attribute_group |
| banner |
| banner_group |
| banner_image |
| banner_image_description |
| blog |
| blog_related |
| category |
| category_commission |
| category_description |
| category_path |
| contact |
| country |
| coupon |
| coupon_product_category |
| coupon_usage |
| coupon_usage_product |
| currency |
| customer |
| customer_activity |
| customer_cart |
| customer_document |
| customer_group |
| customer_ip |
| customer_transaction |
| customer_wishlist |
| delivery_allocation |
| delivery_location |
| delivery_location_to_location |
| delivery_person |
| delivery_person_to_location |
| delivery_status |
| email_template |
| geo_zone |
| jobs |
| language |
| login_log |
| manufacturer |
| migrations |
| order |
| order_cancel_reason |
| order_history |
| order_log |
| order_product |
| order_product_log |
| order_status |
| order_total |
| page |
| page_group |
| payment |
| payment_archive |
| payment_items |
| payment_items_archive |
| paypal_order |
| paypal_order_transaction |
| permission_module |
| permission_module_group |
| plugins |
| price_update_file_log |
| product |
| product_answer |
| product_answer_like_dislike |
| product_attribute |
| product_description |
| product_discount |
| product_image |
| product_price_log |
| product_question |
| product_rating |
| product_related |
| product_special |
| product_stock_alert |
| product_tag |
| product_tire_price |
| product_to_category |
| product_varient |
| product_varient_option |
| product_varient_option_details |
| product_varient_option_image |
| product_view_log |
| quotation |
| razorpay_order |
| razorpay_order_transaction |
| service |
| service_category |
| service_category_path |
| service_enquiry |
| service_image |
| service_to_category |
| sessions |
| settings |
| settlement |
| settlement_item |
| site_filter |
| site_filter_category |
| site_filter_section |
| site_filter_section_item |
| sku |
| stock_log |
| stock_status |
| stripe_order |
| stripe_order_transaction |
| tax |
| trend |
| trend_image |
| trend_recommend |
| user_group |
| users |
| varients |
| varients_value |
| vendor |
| vendor_category |
| vendor_coupon |
| vendor_coupon_product_category |
| vendor_global_setting |
| vendor_invoice |
| vendor_invoice_item |
| vendor_order_archive |
| vendor_order_archive_log |
| vendor_order_products |
| vendor_order_status |
| vendor_orders |
| vendor_orders_log |
| vendor_payment |
| vendor_payment_archive |
| vendor_product |
| widget |
| widget_item |
| zone |
| zone_to_geo_zone |
+--------------------------------------+
136 rows in set (0.00 sec)
I would like to send the values of all 136 tables to Elasticsearch via Logstash.
If running a script next to Logstash is an option, I would go for the following approach:
Create a bash script (or whatever language you prefer) and put it in cron to do a simple 'show tables', then use the output to create 130 config files containing only the INPUT part for Logstash, with a naming convention like 'INPUT_tablename.conf'. This script should create a config like the one shown above for each table that exists (see the sketch after this list).
Make sure it lists the INPUT_* files in the directory and deletes the ones that no longer exist.
Make sure that when a file already exists, the script does not touch it.
Have your FILTER.conf and OUTPUT.conf in the same directory.
Put your Logstash in config auto-reload mode.
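A minimal sketch of such a generator script, assuming the mysql client is available and reusing the credentials from the question; the config directory and the driver path are placeholders:

#!/bin/bash
# Regenerate one INPUT_<table>.conf per table in food_database (run from cron).
CONF_DIR=/etc/logstash/conf.d                    # placeholder config directory
TABLES=$(mysql -uroot -p1234 -N -e 'SHOW TABLES' food_database)

for TABLE in $TABLES; do
  FILE="$CONF_DIR/INPUT_${TABLE}.conf"
  [ -f "$FILE" ] && continue                     # do not touch existing files
  cat > "$FILE" <<EOF
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/food_database?useSSL=false"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "SELECT * FROM ${TABLE}"
  }
}
EOF
done

# Delete INPUT_ files for tables that no longer exist
for FILE in "$CONF_DIR"/INPUT_*.conf; do
  TABLE=$(basename "$FILE" .conf); TABLE=${TABLE#INPUT_}
  echo "$TABLES" | grep -qx "$TABLE" || rm -f "$FILE"
done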
By doing it this way you separate out the part you are struggling with, and you allow the database to change over time: tables can be added, and old ones can be deleted or renamed.
I've learned to do it this way on clusters that I know will become very large, where I need to learn when the maximum I/O is being hit so I know when to add new nodes to which layer without killing the complete setup.
I wish to get the returned value of status in the shell, so that I can proceed to the next command.
# glance image-show 0227a985-cb1e-4f0c-81cb-003411988ea5
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2021-03-15T02:54:15Z |
| disk_format | raw |
| hw_disk_bus | scsi |
| hw_qemu_guest_agent | yes |
| hw_scsi_model | virtio-scsi |
| id | 0227a985-cb1e-4f0c-81cb-003411988ea5 |
| locations | [] |
| min_disk | 0 |
| min_ram | 0 |
| name | not_inuse |
| os_hash_algo | None |
| os_hash_value | None |
| os_hidden | False |
| os_require_quiesce | yes |
| owner | 4d97a99e53bd4b51aa58601985776d5c |
| protected | False |
| size | None |
| status | active |
| tags | [] |
| updated_at | 2021-03-15T02:54:30Z |
| virtual_size | Not available |
| visibility | private |
+---------------------+--------------------------------------+
How do I get just the printed value, i.e. active?
# glance image-show 0227a985-cb1e-4f0c-81cb-003411988ea5 | grep status
| status | active |
Please help, Thank you
# glance image-show 0227a985-cb1e-4f0c-81cb-003411988ea5 | grep status | awk '{print $4}'
active
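To use the value in the next command, one option is to capture it in a shell variable (a sketch; the image ID is the one from the question):

status=$(glance image-show 0227a985-cb1e-4f0c-81cb-003411988ea5 | awk '/^\| status / {print $4}')
if [ "$status" = "active" ]; then
  echo "image is active, run the next command here"
fi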
I have the following data in a single table. I need to split this table into multiple tables based on the Year_Month column. Is there a way to automate this task?
+------------+-----------+
| Year_Month | Part# |
+------------+-----------+
| 2014-03 | CCH057169 |
| 2014-03 | CCH057276 |
| 2014-03 | CCH057303 |
| 2014-03 | CCH057430 |
| 2014-04 | CCH057409 |
| 2014-04 | CCH057497 |
| 2014-04 | CCH057570 |
| 2014-04 | CCH057583 |
| 2014-04 | CCH057650 |
| 2014-04 | CCH057696 |
| 2014-04 | CCH057707 |
| 2014-04 | CCH057798 |
| 2014-05 | CCH057701 |
| 2014-06 | CCH057235 |
| 2014-06 | CCH057280 |
| 2014-06 | CCH057693 |
| 2014-06 | CCH057707 |
| 2014-06 | CCH057721 |
| 2014-07 | CCH057235 |
| 2014-07 | CCH057427 |
| 2014-08 | CCH057650 |
| 2014-08 | CCH057696 |
| 2014-08 | CCH057798 |
| 2014-09 | CCH057303 |
| 2014-09 | CCH057482 |
| 2014-09 | CCH057668 |
| 2014-09 | CCH057744 |
| 2014-09 | CCH057776 |
| 2014-10 | CCH057668 |
| 2014-10 | CCH057696 |
| 2014-11 | CCH057390 |
| 2014-11 | CCH057409 |
| 2014-11 | CCH057679 |
| 2014-11 | CCH057700 |
| 2014-11 | CCH057721 |
| 2014-11 | CCH057749 |
| 2014-11 | CCH057896 |
| 2014-12 | CCH057169 |
| 2014-12 | CCH057693 |
| 2014-12 | CCH057696 |
| 2014-12 | CCH057708 |
| 2014-12 | CCH057876 |
| 2014-12 | CCH057896 |
| 2015-01 | CCH057630 |
| 2015-01 | CCH057679 |
| 2015-01 | CCH057700 |
| 2015-01 | CCH057776 |
| 2015-02 | CCH057409 |
| 2015-02 | CCH057482 |
+------------+-----------+
More information:
I am getting the data from an Oracle database. The purpose of this data is to compare between two given dates and provide the new records. Is there a way that I can select two dates on the form (slicer) and have the query fetch the data based on that date selection?
Might be splitting hairs, but from a database (RDBMS) perspective, slicers change what is summarized and displayed, not what is queried. So, you might want to take some steps so you don't do something like getting the introduction date of every product Amazon has ever offered.
It sounds like you want to use a slicer with a range slider, which only works on numbers and dates. So, first add a column computed from the Year_Month column for a date, say the end of the month, and call it Month.
Month = EOMONTH(DATEVALUE([Year_Month] & "-01"), 0)
Then all you need to do is create a grid with the part# field and create a slicer for the Month field, which is initially configured with a range slider.
The project runs as expected with spring-boot:run. However, the executable JAR fails to run because it cannot find db/changelog.xml.
The following steps can be used to reproduce the problem:
run mvn package from project root
go to target folder
run java -jar executable-jar-with-liquibase-1.0.0-SNAPSHOT.jar
The log will now show an error because the table domain has not been created.
Note that the application.yml is found, since if liquibase.enabled is set to false, Liquibase refuses to run entirely (as it should).
application.yml
server:
  context-path: /api
spring:
  datasource:
    platform: h2
    url: jdbc:h2:mem:testdb;MODE=PostgreSQL;DB_CLOSE_ON_EXIT=FALSE
  jackson:
    date-format: yyyy-MM-dd
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: none
liquibase:
  enabled: false
  change-log: classpath:db/changelog.xml
The generated JAR has the following contents:
.
|____BOOT-INF
| |____classes
| | |____application.yml
| | |____db
| | | |____changelog.xml
| | | |____changelogs
| | | | |____changelog_000.xml
| | |____nl
| | | |_____42
| | | | |____app
| | | | | |____ApplicationConfig.class
| | | | | |____domain
| | | | | | |____Domain.class
| | | | | | |____DomainController.class
| | | | | | |____DomainRepository.class
| | | | | | |____DomainService.class
| | | | | |____shared
| | | | | | |____AbstractEntity.class
| | | | | |____WebAppConfig.class
| | | | | |____WebApplication.class
| |____lib
| | |____accessors-smart-1.1.jar
| | |____antlr-2.7.7.jar
| | |____asm-5.0.3.jar
| | |____aspectjweaver-1.8.9.jar
| | |____assertj-core-2.5.0.jar
| | |____classmate-1.3.1.jar
| | |____dom4j-1.6.1.jar
| | |____h2-1.4.192.jar
| | |____hamcrest-core-1.3.jar
| | |____hamcrest-library-1.3.jar
| | |____hibernate-commons-annotations-5.0.1.Final.jar
| | |____hibernate-core-5.0.11.Final.jar
| | |____hibernate-entitymanager-5.0.11.Final.jar
| | |____hibernate-jpa-2.1-api-1.0.0.Final.jar
| | |____hibernate-validator-5.2.4.Final.jar
| | |____jackson-annotations-2.8.3.jar
| | |____jackson-core-2.8.3.jar
| | |____jackson-databind-2.8.3.jar
| | |____jackson-datatype-jsr310-2.8.3.jar
| | |____jandex-2.0.0.Final.jar
| | |____javassist-3.20.0-GA.jar
| | |____javax.transaction-api-1.2.jar
| | |____jboss-logging-3.3.0.Final.jar
| | |____jcl-over-slf4j-1.7.21.jar
| | |____json-20140107.jar
| | |____json-path-2.2.0.jar
| | |____json-smart-2.2.1.jar
| | |____jsonassert-1.3.0.jar
| | |____jul-to-slf4j-1.7.21.jar
| | |____liquibase-core-3.5.1.jar
| | |____log4j-over-slf4j-1.7.21.jar
| | |____logback-classic-1.1.7.jar
| | |____logback-core-1.1.7.jar
| | |____mockito-core-1.10.19.jar
| | |____objenesis-2.1.jar
| | |____slf4j-api-1.7.21.jar
| | |____snakeyaml-1.17.jar
| | |____spring-aop-4.3.3.RELEASE.jar
| | |____spring-aspects-4.3.3.RELEASE.jar
| | |____spring-beans-4.3.3.RELEASE.jar
| | |____spring-boot-1.4.1.RELEASE.jar
| | |____spring-boot-autoconfigure-1.4.1.RELEASE.jar
| | |____spring-boot-configuration-processor-1.4.1.RELEASE.jar
| | |____spring-boot-devtools-1.4.1.RELEASE.jar
| | |____spring-boot-starter-1.4.1.RELEASE.jar
| | |____spring-boot-starter-aop-1.4.1.RELEASE.jar
| | |____spring-boot-starter-data-jpa-1.4.1.RELEASE.jar
| | |____spring-boot-starter-jdbc-1.4.1.RELEASE.jar
| | |____spring-boot-starter-logging-1.4.1.RELEASE.jar
| | |____spring-boot-starter-test-1.4.1.RELEASE.jar
| | |____spring-boot-starter-tomcat-1.4.1.RELEASE.jar
| | |____spring-boot-starter-web-1.4.1.RELEASE.jar
| | |____spring-boot-test-1.4.1.RELEASE.jar
| | |____spring-boot-test-autoconfigure-1.4.1.RELEASE.jar
| | |____spring-context-4.3.3.RELEASE.jar
| | |____spring-core-4.3.3.RELEASE.jar
| | |____spring-data-commons-1.12.3.RELEASE.jar
| | |____spring-data-jpa-1.10.3.RELEASE.jar
| | |____spring-expression-4.3.3.RELEASE.jar
| | |____spring-jdbc-4.3.3.RELEASE.jar
| | |____spring-orm-4.3.3.RELEASE.jar
| | |____spring-tx-4.3.3.RELEASE.jar
| | |____spring-web-4.3.3.RELEASE.jar
| | |____spring-webmvc-4.3.3.RELEASE.jar
| | |____tomcat-embed-core-8.5.5.jar
| | |____tomcat-embed-el-8.5.5.jar
| | |____tomcat-embed-websocket-8.5.5.jar
| | |____tomcat-jdbc-8.5.5.jar
| | |____tomcat-juli-8.5.5.jar
| | |____validation-api-1.1.0.Final.jar
| | |____xml-apis-1.4.01.jar
|____META-INF
| |____MANIFEST.MF
| |____maven
| | |____nl.mad
| | | |____executable-jar-with-liquibase
| | | | |____pom.properties
| | | | |____pom.xml
|____org
| |____springframework
| | |____boot
| | | |____loader
| | | | |____archive
| | | | | |____Archive$Entry.class
| | | | | |____Archive$EntryFilter.class
| | | | | |____Archive.class
| | | | | |____ExplodedArchive$1.class
| | | | | |____ExplodedArchive$FileEntry.class
| | | | | |____ExplodedArchive$FileEntryIterator$EntryComparator.class
| | | | | |____ExplodedArchive$FileEntryIterator.class
| | | | | |____ExplodedArchive.class
| | | | | |____JarFileArchive$EntryIterator.class
| | | | | |____JarFileArchive$JarFileEntry.class
| | | | | |____JarFileArchive.class
| | | | |____data
| | | | | |____ByteArrayRandomAccessData.class
| | | | | |____RandomAccessData$ResourceAccess.class
| | | | | |____RandomAccessData.class
| | | | | |____RandomAccessDataFile$DataInputStream.class
| | | | | |____RandomAccessDataFile$FilePool.class
| | | | | |____RandomAccessDataFile.class
| | | | |____ExecutableArchiveLauncher$1.class
| | | | |____ExecutableArchiveLauncher.class
| | | | |____jar
| | | | | |____AsciiBytes.class
| | | | | |____Bytes.class
| | | | | |____CentralDirectoryEndRecord.class
| | | | | |____CentralDirectoryFileHeader.class
| | | | | |____CentralDirectoryParser.class
| | | | | |____CentralDirectoryVisitor.class
| | | | | |____FileHeader.class
| | | | | |____Handler.class
| | | | | |____JarEntry.class
| | | | | |____JarEntryFilter.class
| | | | | |____JarFile$1.class
| | | | | |____JarFile$2.class
| | | | | |____JarFile$3.class
| | | | | |____JarFile$JarFileType.class
| | | | | |____JarFile.class
| | | | | |____JarFileEntries$1.class
| | | | | |____JarFileEntries$EntryIterator.class
| | | | | |____JarFileEntries.class
| | | | | |____JarURLConnection$1.class
| | | | | |____JarURLConnection$JarEntryName.class
| | | | | |____JarURLConnection.class
| | | | | |____ZipInflaterInputStream.class
| | | | |____JarLauncher.class
| | | | |____LaunchedURLClassLoader$1.class
| | | | |____LaunchedURLClassLoader.class
| | | | |____Launcher.class
| | | | |____MainMethodRunner.class
| | | | |____PropertiesLauncher$1.class
| | | | |____PropertiesLauncher$ArchiveEntryFilter.class
| | | | |____PropertiesLauncher$FilteredArchive$1.class
| | | | |____PropertiesLauncher$FilteredArchive.class
| | | | |____PropertiesLauncher$PrefixMatchingArchiveFilter.class
| | | | |____PropertiesLauncher.class
| | | | |____util
| | | | | |____SystemPropertyUtils.class
| | | | |____WarLauncher.class
The entire project can be found here: https://github.com/robert-bor/executable-jar-with-liquibase
What am I doing wrong here?
There used to be a problem with the includeAll tag in Liquibase; see this issue. It should be fixed by now, but at the moment I could not make it run with the includeAll tag.
As a solution for your problem, use:
<include file="classpath:db/changelogs/changelog_000.xml" relativeToChangelogFile="false"/>
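For context, a sketch of what db/changelog.xml could look like with that include in place of includeAll; the changelog file name comes from the JAR listing above, and the XSD version matches the liquibase-core-3.5.1 dependency, while the rest of the file is assumed:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">

    <!-- Reference the changelog explicitly instead of using includeAll -->
    <include file="classpath:db/changelogs/changelog_000.xml" relativeToChangelogFile="false"/>

</databaseChangeLog>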