Spring Batch DB2 update DB2GSE.ST_POLYGON fails - spring

I am trying to insert a polygon into a DB2 table hosted on z/OS.
This is my database item writer:
<bean id="databaseItemWriter"
      class="org.springframework.batch.item.database.JdbcBatchItemWriter">
    <property name="dataSource" ref="dataSource" />
    <property name="sql">
        <value>
            <![CDATA[
            INSERT INTO SAMPLE_GEOMETRIES
            (GEO_NAME, GEOMETRY)
            VALUES
            ( ?, DB2GSE.ST_POLYGON(?, 1))
            ]]>
        </value>
    </property>
    <property name="itemPreparedStatementSetter">
        <bean class="com.amex.elbs.DAO.GeometriesItemPreparedStatementSetter" />
    </property>
</bean>
This is my custom prepared statement setter:
public class GeometriesItemPreparedStatementSetter implements ItemPreparedStatementSetter<Geometries> {
    @Override
    public void setValues(Geometries item, PreparedStatement ps) throws SQLException {
        ps.setString(1, item.Id);
        ps.setString(2, item.Polygon);
    }
}
This is my sample input file. It is pipe-delimited and contains the ID and the polygon co-ordinates:
pm251|'POLYGON((-159.335174733889 21.9483433404175,-159.327130348878 22.0446395507162,-159.295025589769 22.1248124949548,-159.343195828355 22.1970166285359,-159.391366885913 22.2291198667724,-159.576012589057 22.2131796383001,-159.712505933171 22.1490592515515,-159.800814224332 22.0366665967853,-159.736592652746 21.9644203111023,-159.640246973766 21.9483657695954,-159.576021285803 21.8841361312636,-159.439545188912 21.8680716835921,-159.335174733889 21.9483433404175))', 1
The statement below, when executed directly on z/OS, is successful.
INSERT INTO SAMPLE_GEOMETRIES
(GEO_NAME, GEOMETRY)
VALUES
( 'PM',
DB2GSE.ST_POLYGON('POLYGON((
-159.335174733889 21.9483433404175,
-159.327130348878 22.0446395507162,
-159.295025589769 22.1248124949548,
-159.343195828355 22.1970166285359,
-159.391366885913 22.2291198667724,
-159.576012589057 22.2131796383001,
-159.712505933171 22.1490592515515,
-159.800814224332 22.0366665967853,
-159.736592652746 21.9644203111023,
-159.640246973766 21.9483657695954,
-159.576021285803 21.8841361312636,
-159.439545188912 21.8680716835921,
-159.335174733889 21.9483433404175))',1))
---------+---------+---------+---------+---------
DSNE615I NUMBER OF ROWS AFFECTED IS 1
This is what I get when I execute the job:
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-245, SQLSTATE=428F5, SQLERRMC=DB2GSE.ST_POLYGON, DRIVER=4.12.55
at com.ibm.db2.jcc.am.hd.a(hd.java:676)
at com.ibm.db2.jcc.am.hd.a(hd.java:60)
at com.ibm.db2.jcc.am.hd.a(hd.java:127)
at com.ibm.db2.jcc.am.mn.c(mn.java:2621)
at com.ibm.db2.jcc.am.mn.d(mn.java:2609)
at com.ibm.db2.jcc.am.mn.a(mn.java:2085)
at com.ibm.db2.jcc.am.nn.a(nn.java:7054)
at com.ibm.db2.jcc.am.mn.a(mn.java:2062)
at com.ibm.db2.jcc.t4.cb.g(cb.java:136)
at com.ibm.db2.jcc.t4.cb.a(cb.java:41)
at com.ibm.db2.jcc.t4.q.a(q.java:32)
at com.ibm.db2.jcc.t4.rb.i(rb.java:135)
at com.ibm.db2.jcc.am.mn.ib(mn.java:2055)
at com.ibm.db2.jcc.am.nn.rc(nn.java:3219)
at com.ibm.db2.jcc.am.nn.s(nn.java:3370)
at com.ibm.db2.jcc.am.nn.l(nn.java:2499)
at com.ibm.db2.jcc.am.nn.addBatch(nn.java:2438)
at org.springframework.batch.item.database.JdbcBatchItemWriter$1.doInPreparedStatement(JdbcBatchItemWriter.java:190)
at org.springframework.batch.item.database.JdbcBatchItemWriter$1.doInPreparedStatement(JdbcBatchItemWriter.java:185)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:644)
... 28 more

The error message for SQLCODE -245 reads: "THE INVOCATION OF FUNCTION routine-name IS AMBIGUOUS".
Apparently, there is more than one version of DB2GSE.ST_POLYGON in the database, each accepting different argument types. You are using an untyped parameter marker in DB2GSE.ST_POLYGON(?, 1), so DB2 is unable to determine which version of the function you want.
Add an explicit cast to the function invocation using the appropriate data type, for example:
DB2GSE.ST_POLYGON(CAST( ? AS VARCHAR(1000)), 1)
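Applied to the Spring Batch configuration above, the sql property would become something like the following sketch. The VARCHAR length of 32000 is an assumption here; size it to fit your largest WKT string.

```xml
<property name="sql">
    <value>
        <![CDATA[
        INSERT INTO SAMPLE_GEOMETRIES
        (GEO_NAME, GEOMETRY)
        VALUES
        ( ?, DB2GSE.ST_POLYGON(CAST(? AS VARCHAR(32000)), 1))
        ]]>
    </value>
</property>
```

No change to the ItemPreparedStatementSetter is needed; the cast only disambiguates which overload of DB2GSE.ST_POLYGON DB2 resolves.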

Related

How to execute JSONPath or XPATH expression on the Message Context property in WSO2 ESB

I'm trying to set a JSON object to a property using the property mediator and then access its values using JSONPath or XPath.
In this case, I first set the JSON object to an OM-typed property. There, using the $ctx:parent/child pattern, I can access the values. But I can't execute an XPath expression over it (e.g. $ctx:metadataOM/suppliers[0]).
I tried various scenarios and noted that this can be done by setting the JSON object to the payload and doing JSONPath / XPath operations on it.
Is there any way to do this?
Is there any way to access the Message Context properties from json-eval?
Is there any way to execute JSONPath / XPath on the get-property method or $ctx: expressions?
Note: I'm looking for an answer that doesn't use the Script mediator or Class mediator.
Payload:
{
"metadata":{
"language":"en",
"customerCountry":"GB",
"client":"coolpal",
"suppliers" : ["supplier-a","supplier-b"],
"currency":"USD"
}
}
API.xml
<inSequence>
<property expression="//jsonObject/metadata" name="metadataOM" scope="default" type="OM"/>
<property expression="//jsonObject/metadata" name="metadataSTR" scope="default" type="STRING"/>
<log level="custom">
<property expression="$ctx:metadataOM/suppliers" name="metadataOM-suppliers"/>
<property expression="$ctx:metadataOM/suppliers[0]" name="metadataOM-suppliers"/>
<property expression="json-eval($.metadataOM.suppliers)" name="eval-metadata-suppliers"/>
<property expression="json-eval($.metadataSTR.suppliers)" name="eval-metadata-suppliers"/>
<property expression="json-eval($ctx:metadataOM)" name="eval-metadata-suppliers"/>
</log>
<respond/>
</inSequence>
WSO2ESB version : 5.0.0
With Martin's help (thanks Martin!) and some self-study, I found an answer.
One reason was that when the JSON is converted to XML, it looks like below:
<metadata>
<language>en</language>
<customerCountry>GB</customerCountry>
<client>coolpal</client>
<suppliers>supplier-a</suppliers>
<suppliers>supplier-b</suppliers>
<currency>USD</currency>
</metadata>
Then I stored and loaded the first supplier as below. The second reason was that I had forgotten XPath indexes are not zero-based. :)
<property expression="/" name="metadataOM" scope="default" type="OM"/>
<log level="custom">
<property expression="$ctx:metadataOM" name="metadataOM"/>
<property expression="$ctx:metadataOM//jsonObject/metadata/suppliers[1]" name="metadataOM-supplier-1"/>
</log>
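The 1-based indexing can be verified outside the ESB with plain JAXP. This is a stand-alone sketch using the same two-supplier structure as the converted payload above:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathIndexDemo {
    public static void main(String[] args) throws Exception {
        // Same shape as the JSON-to-XML conversion shown above
        String xml = "<metadata>"
                + "<suppliers>supplier-a</suppliers>"
                + "<suppliers>supplier-b</suppliers>"
                + "</metadata>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // XPath positions start at 1, not 0: suppliers[1] is the first element
        String first = XPathFactory.newInstance().newXPath()
                .evaluate("/metadata/suppliers[1]", doc);
        System.out.println(first); // prints supplier-a
    }
}
```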

Cannot find Region in cache GemFireCache

Caused by: org.springframework.beans.factory.BeanInitializationException: Cannot find region [record] in cache GemFireCache[id = 20255495;
isClosing = false; isShutDownAll = false;
closingGatewayHubsByShutdownAll = false; created = Mon Jan 23 11:45:10 EST 2017; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]
at org.springframework.data.gemfire.RegionLookupFactoryBean.lookupFallback(RegionLookupFactoryBean.java:72)
at org.springframework.data.gemfire.RegionLookupFactoryBean.afterPropertiesSet(RegionLookupFactoryBean.java:59)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1541)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1479)
... 13 more
My XML file is:
<beans>
....
<context:component-scan base-package="spring.gemfire.repository.deptemp"/>
<gfe:client-cache id="gemfireCache" pool-name="gfPool"/>
<!--Region for being used by the Record Bean -->
<gfe:replicated-region id="record" cache-ref="gemfireCache"/>
<bean id="record" class="spring.gemfire.repository.deptemp.beans.Record"/>
<gfe:pool id="gfPool" max-connections="10" subscription-enabled="true" >
<gfe:locator host="localhost" port="10334"/>
</gfe:pool>
<gfe:lookup-region id="record" />
<gfe-data:repositories base-package="spring.gemfire.repository.deptemp.repos"/>
</beans>
Abhisekh-
Why do you have both this...
<gfe:replicated-region id="record" cache-ref="gemfireCache"/>
And this...
<gfe:lookup-region id="record" />
Also, you have defined this...
<bean id="record" class="spring.gemfire.repository.deptemp.beans.Record"/>
Which (most definitely) overrode your REPLICATE Region bean definition (also with id="record") based on the "order" of your bean definitions in your XML defined above.
While Spring first and foremost adheres to dependency order between bean definitions, it will generally follow the declared order when no dependencies (explicit or implicit) exist.
Since <bean id="record" .../> comes after <gfe:replicated-region id="record" .../>, the <bean id="record" .../> definition overrides the <gfe:replicated-region id="record"/> bean definition.
Additionally, the <gfe:lookup-region> is not needed since you are not using GemFire/Geode's native configuration (e.g. cache.xml) or Cluster Configuration Service.
Furthermore, you are declaring a ClientCache, so technically you probably want a <gfe:client-region> to match the GemFire/Geode Server Region, yes?!
While you can create REPLICATE Regions (and even PARTITION Regions) on a client, you typically do not do this since those Regions are NOT part of any distributed system, or cluster of GemFire "Server" nodes.
A client Region (which can be a PROXY, or even a CACHING_PROXY) will distribute data operations to the Server. Additionally, if you have data that only a particular client needs, then you ought to create local Regions, using <gfe:local-region>.
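A minimal sketch of that change, assuming the cluster actually defines a Region named "record" on the servers (the PROXY shortcut keeps no local state and forwards all operations to the server):

```xml
<gfe:client-cache id="gemfireCache" pool-name="gfPool"/>

<gfe:client-region id="record" cache-ref="gemfireCache" shortcut="PROXY"/>
```

This replaces both the <gfe:replicated-region> and <gfe:lookup-region> declarations for "record" in your XML.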
I would definitely read this, first...
http://gemfire.docs.pivotal.io/geode/basic_config/book_intro.html
Followed by this next...
http://gemfire.docs.pivotal.io/geode/topologies_and_comm/book_intro.html
Then this...
http://gemfire.docs.pivotal.io/geode/developing/book_intro.html
And lastly...
http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#bootstrap
-John

flyway migrations with postgresql and postgis extension

I have 2 schemas in my db:
CREATE SCHEMA my_schema;
CREATE SCHEMA my_second_schema;
So I created an extension:
CREATE EXTENSION postgis
VERSION "2.1.4";
and used it successfully with both schemas.
But Flyway 3.0 works only with the first schema; on my_second_schema it throws an error:
org.flywaydb.core.internal.dbsupport.FlywaySqlScriptException: Error executing statement at line 803: CREATE TABLE places (
id bigint DEFAULT nextval('places_sequence'::regclass) NOT NULL,
geo_location geometry,
created_at timestamp without time zone,
updated_at timestamp without time zone,
version bigint,
state boolean
)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1554)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:298)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:975)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:752)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:125)
at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:60)
at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.delegateLoading(AbstractDelegatingSmartContextLoader.java:102)
at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.loadContext(AbstractDelegatingSmartContextLoader.java:248)
at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContextInternal(CacheAwareContextLoaderDelegate.java:64)
at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:91)
... 23 more
Caused by: org.flywaydb.core.internal.dbsupport.FlywaySqlScriptException: Error executing statement at line 803: CREATE TABLE places (
id bigint DEFAULT nextval('places_sequence'::regclass) NOT NULL,
geo_location geometry,
created_at timestamp without time zone,
updated_at timestamp without time zone,
version bigint,
state boolean
)
at org.flywaydb.core.internal.dbsupport.SqlScript.execute(SqlScript.java:91)
at org.flywaydb.core.internal.resolver.sql.SqlMigrationExecutor.execute(SqlMigrationExecutor.java:73)
at org.flywaydb.core.internal.command.DbMigrate$5.doInTransaction(DbMigrate.java:287)
at org.flywaydb.core.internal.command.DbMigrate$5.doInTransaction(DbMigrate.java:285)
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:72)
at org.flywaydb.core.internal.command.DbMigrate.applyMigration(DbMigrate.java:285)
at org.flywaydb.core.internal.command.DbMigrate.access$800(DbMigrate.java:46)
at org.flywaydb.core.internal.command.DbMigrate$2.doInTransaction(DbMigrate.java:207)
at org.flywaydb.core.internal.command.DbMigrate$2.doInTransaction(DbMigrate.java:156)
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:72)
at org.flywaydb.core.internal.command.DbMigrate.migrate(DbMigrate.java:156)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:864)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:811)
at org.flywaydb.core.Flyway.execute(Flyway.java:1171)
at org.flywaydb.core.Flyway.migrate(Flyway.java:811)
at co.brandly.configuration.FlywayMigration.init(FlywayMigration.java:17)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1682)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1621)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1550)
... 40 more
Caused by: org.postgresql.util.PSQLException: ERROR: type "geometry" does not exist
Position: 276
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2102)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1835)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:374)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:366)
at org.flywaydb.core.internal.dbsupport.JdbcTemplate.executeStatement(JdbcTemplate.java:235)
at org.flywaydb.core.internal.dbsupport.SqlScript.execute(SqlScript.java:89)
... 62 more
So why is there an 'ERROR: type "geometry" does not exist'?
My spring application context:
<bean id="flyway" class="org.flywaydb.core.Flyway">
<property name="dataSource" ref="dataSource"/>
<property name="schemas" value="my_second_schema, my_schema"/>
<property name="validateOnMigrate" value="false"/>
<property name="outOfOrder" value="true"/>
<property name="placeholderPrefix" value="$flyway{"/>
<property name="placeholderSuffix" value="}"/>
<property name="placeholders">
<map>
<entry key="schema" value="${flyway.placeholders.schema}"/>
<entry key="schema_analytics" value="${flyway.placeholders.schema_analytics}"/>
</map>
</property>
</bean>
The problem was with the user's privileges.
ALTER USER myuser WITH SUPERUSER;
This helped.
Hey, maybe your problem is that you didn't add all the extensions in your schema:
-- Enable PostGIS (includes raster)
CREATE EXTENSION postgis;
-- Enable Topology
CREATE EXTENSION postgis_topology;
-- fuzzy matching needed for Tiger
CREATE EXTENSION fuzzystrmatch;
-- Enable US Tiger Geocoder
CREATE EXTENSION postgis_tiger_geocoder;
I hope it helps you.
As of Flyway 3.1 there is a baseline target; executing it will treat your database as an existing database, with the PostGIS functionality already imported into your schema when the database was created from the postgis template.
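If you take the baseline route, it can be enabled on the Flyway bean shown above. This is a sketch assuming Flyway 3.1+, where the relevant property is baselineOnMigrate:

```xml
<bean id="flyway" class="org.flywaydb.core.Flyway">
    <property name="dataSource" ref="dataSource"/>
    <property name="schemas" value="my_second_schema, my_schema"/>
    <!-- baseline an existing (non-empty) schema instead of failing -->
    <property name="baselineOnMigrate" value="true"/>
</bean>
```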

Spring Integration Splitter Map Keys to different channels

I have a transformer which returns a Map as a result. This result is then put onto the output-channel. What I want to do is route to a different channel for each KEY in the map. How can I configure this in Spring Integration?
e.g.
Transformer -- produces --> Map
Map contains {(Key1, "some data"), (Key2, "some data")}
So for Key1 --> go to channel 1
So for Key2 --> go to channel 2
etc..
Code examples would be helpful.
Thanks in advance
GM
Your processing should consist of two steps:
Partitioning message into separate parts that will be processed independently,
Routing separate messages (the result of split) into appropriate channels.
For the first task you have to use a splitter, and for the second one a router (a header value router fits best here).
Please find a sample Spring Integration configuration below. You may want to use an aggregator at the end of a chain in order to combine messages - I leave it at your discretion.
<channel id="inputChannel" />

<!-- splitting message into separate parts -->
<splitter id="messageSplitter" input-channel="inputChannel" method="split"
          output-channel="routingChannel">
    <beans:bean class="com.stackoverflow.MapSplitter"/>
</splitter>

<channel id="routingChannel" />

<!-- routing messages into appropriate channels based on header value -->
<header-value-router input-channel="routingChannel" header-name="routingHeader">
    <mapping value="someHeaderValue1" channel="someChannel1" />
    <mapping value="someHeaderValue2" channel="someChannel2" />
</header-value-router>

<channel id="someChannel1" />
<channel id="someChannel2" />
And the splitter:
public final class MapSplitter {

    public static final String ROUTING_HEADER_NAME = "routingHeader";

    public List<Message<SomeData>> split(final Message<Map<Key, SomeData>> message) {
        List<Message<SomeData>> result = new LinkedList<>();
        for (Entry<Key, SomeData> entry : message.getPayload().entrySet()) {
            final Message<SomeData> splitMessage = MessageBuilder
                    .withPayload(entry.getValue())
                    .setHeader(ROUTING_HEADER_NAME, entry.getKey())
                    .build();
            result.add(splitMessage);
        }
        return result;
    }
}

Spring Batch reading file with related records

I am using Spring Batch to read a flat file. The file has related records: there can be a parent record and any number of child records. I want to read all the records and call a web service to store them. I also want to capture the relationship and store it. One challenge is that a child record can be anywhere in the file, and a child can itself have many child records. I am unable to find a solution for this problem with Spring Batch.
Please provide your suggestions.
Update: I don't have any option to use a database as temporary storage of data.
I have solved such a problem by processing the file multiple times.
On every pass, I try to read/process every record in the file with this algorithm:
if the record has a parent - check whether the parent is already stored; if not, skip it in the processor
if the record is unchanged (or already stored, if updates are not possible) - skip it in the processor
else - store it in the db
Then declare a loop and a decider:
<batch:step id="processParentAndChilds" next="loop">
<batch:tasklet>
<batch:chunk reader="processParentAndChildsReader"
commit-interval="1000">
<batch:processor>
<bean class="processParentAndChildsProcessor"/>
</batch:processor>
<batch:writer>
<bean class="processParentAndChildsWriter"/>
</batch:writer>
</batch:chunk>
</batch:tasklet>
</batch:step>
<batch:decision decider="processParentAndChildsRetryDecider" id="loop">
<batch:next on="NEXT_LEVEL" to="processParentAndChilds"/>
<batch:next on="COMPLETED" to="goToNextSteps"/>
</batch:decision>
public class ProcessParentAndChildsRetryDecider implements JobExecutionDecider {
    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // if no record was written on this pass - no sense in trying again
        if (stepExecution.getWriteCount() > 0) {
            return new FlowExecutionStatus("NEXT_LEVEL");
        } else {
            return FlowExecutionStatus.COMPLETED;
        }
    }
}
