Cannot find Region in cache GemFireCache - gemfile

Caused by: org.springframework.beans.factory.BeanInitializationException: Cannot find region [record] in cache GemFireCache[id = 20255495;
isClosing = false; isShutDownAll = false;
closingGatewayHubsByShutdownAll = false; created = Mon Jan 23 11:45:10 EST 2017; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]
at org.springframework.data.gemfire.RegionLookupFactoryBean.lookupFallback(RegionLookupFactoryBean.java:72)
at org.springframework.data.gemfire.RegionLookupFactoryBean.afterPropertiesSet(RegionLookupFactoryBean.java:59)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1541)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1479)
... 13 more
My XML file is:
<beans>
    ....
    <context:component-scan base-package="spring.gemfire.repository.deptemp"/>

    <gfe:client-cache id="gemfireCache" pool-name="gfPool"/>

    <!-- Region to be used by the Record bean -->
    <gfe:replicated-region id="record" cache-ref="gemfireCache"/>

    <bean id="record" class="spring.gemfire.repository.deptemp.beans.Record"/>

    <gfe:pool id="gfPool" max-connections="10" subscription-enabled="true">
        <gfe:locator host="localhost" port="10334"/>
    </gfe:pool>

    <gfe:lookup-region id="record" />

    <gfe-data:repositories base-package="spring.gemfire.repository.deptemp.repos"/>
</beans>

Abhisekh-
Why do you have both this...
<gfe:replicated-region id="record" cache-ref="gemfireCache"/>
And this...
<gfe:lookup-region id="record" />
Also, you have defined this...
<bean id="record" class="spring.gemfire.repository.deptemp.beans.Record"/>
Which (most definitely) overrode your REPLICATE Region bean definition (also with id="record"), based on the "order" of the bean definitions in your XML above.
While Spring first and foremost adheres to dependency order between bean definitions, it will generally follow the declared order when no dependencies (explicit or implicit) exist.
Since <bean id="record" .../> comes after <gfe:replicated-region id="record" .../>, the <bean id="record" .../> definition overrides the <gfe:replicated-region id="record"/> bean definition.
Additionally, the <gfe:lookup-region> is not needed since you are not using GemFire/Geode's native configuration (e.g. cache.xml) or Cluster Configuration Service.
Furthermore, you are declaring a ClientCache, so you technically (probably) want a <gfe:client-region> to match the GemFire/Geode Server Region, yes?!
While you can create REPLICATE Regions (and even PARTITION Regions) on a client, you typically do not do this since those Regions are NOT part of any distributed system, or cluster of GemFire "Server" nodes.
A client Region (which can be a PROXY, or even a CACHING_PROXY) will distribute data operations to the Server. Additionally, if you have data that only a particular client needs, then you ought to create local Regions, using <gfe:local-region>.
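Putting that together, a minimal client-side configuration might look something like this (a sketch only; the PROXY shortcut assumes the "record" Region already exists on the server, and the pool and package names are simply carried over from your XML):

<beans>
    <context:component-scan base-package="spring.gemfire.repository.deptemp"/>

    <gfe:client-cache id="gemfireCache" pool-name="gfPool"/>

    <gfe:pool id="gfPool" max-connections="10" subscription-enabled="true">
        <gfe:locator host="localhost" port="10334"/>
    </gfe:pool>

    <!-- PROXY client Region; all data operations are forwarded to the matching Server Region -->
    <gfe:client-region id="record" cache-ref="gemfireCache" pool-name="gfPool" shortcut="PROXY"/>

    <gfe-data:repositories base-package="spring.gemfire.repository.deptemp.repos"/>
</beans>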
I would definitely read this, first...
http://gemfire.docs.pivotal.io/geode/basic_config/book_intro.html
Followed by this next...
http://gemfire.docs.pivotal.io/geode/topologies_and_comm/book_intro.html
Then this...
http://gemfire.docs.pivotal.io/geode/developing/book_intro.html
And lastly...
http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#bootstrap
-John

Related

How can I use a properties file in the "expand" portion of the Gradle Copy task?

Let's say I have a properties file config.properties which has
prop1=abc
prop2=xyz
and a template-config.xml that looks something like
<bean id="id1" >
    <property name="prop1" value="${prop1}" />
    <property name="prop2" value="${prop2}" />
</bean>
I have 2 Questions:
Is there a way I can use the property file in the expand() portion of the Gradle Copy task to inject the properties into the config from build.gradle.kts?
Is there a way I can use expand to fill in only one of the properties without throwing an error?
So far I have
tasks.register<Copy>("create-config-from-template") {
    from("$buildDir/resources/main/template-config.xml")
    into("$buildDir/dist")
    expand(Pair("prop1", "abc"))
}
However, this throws an error
Missing property (prop2) for Groovy template expansion. Defined keys [prop1].
I know that I can also specify the value for prop2 inside expand(), but for my purposes it would help if I could inject only some of the properties and not others. Is there a simple way to tell Gradle not to worry about the other "${}" placeholders in the file?
If not, is there a way I can use the actual property file as the set of properties to expand? I can't seem to find the Kotlin DSL syntax for this anywhere.
Thank you very much in advance.
For 1, you can configure the existing processResources task to apply additional configuration to a specific file, in this case template-config.xml. For example:
// These imports go at the top of build.gradle.kts
import java.io.FileInputStream
import java.util.Properties

tasks.processResources {
    filesMatching("**/template-config.xml") {
        val examplePropertiesTextResource = resources.text.fromFile(layout.projectDirectory.file("example.properties"))
        val exampleProperties = Properties().apply {
            FileInputStream(examplePropertiesTextResource.asFile()).use { load(it) }
        }
        val examplePropertiesMap = mutableMapOf<String, Any>().apply {
            exampleProperties.forEach { k, v -> put(k.toString(), v) }
        }
        expand(examplePropertiesMap)
    }
}
filesMatching will match on the file you want. The following lines then load the properties file, in this case a file named example.properties in the project's root directory. It is then transformed into a Map<String, Any> and passed to the expand method.
However, for 2, the above approach will fail if any properties are missing, because the underlying expansion engine (SimpleTemplateEngine) does not allow such configuration.
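If you control the template, one possible workaround (an assumption worth verifying against your Gradle version, since it relies on Groovy's template escaping rather than a documented Gradle feature) is to backslash-escape the placeholders you want left alone; the template engine then emits them as literal text:

<bean id="id1" >
    <property name="prop1" value="${prop1}" />
    <!-- backslash-escaped: expand() leaves a literal ${prop2} in the output -->
    <property name="prop2" value="\${prop2}" />
</bean>

With that template, expand(Pair("prop1", "abc")) resolves prop1 and no longer complains about prop2.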
As an alternative, you can use ReplaceTokens from Apache Ant to achieve the replacement when some properties are missing:
// Additional import at the top of build.gradle.kts, alongside the two above
import org.apache.tools.ant.filters.ReplaceTokens

tasks.processResources {
    filesMatching("**/template-config.xml") {
        val examplePropertiesTextResource = resources.text.fromFile(layout.projectDirectory.file("example.properties"))
        val exampleProperties = Properties().apply {
            FileInputStream(examplePropertiesTextResource.asFile()).use { load(it) }
        }
        // ReplaceTokens expects its token map under the "tokens" key
        val examplePropertiesMap = mapOf<String, Any>("tokens" to exampleProperties.toMap())
        println(examplePropertiesMap) // debug: show the token map
        filter(examplePropertiesMap, ReplaceTokens::class.java)
    }
}
Note that for ReplaceTokens, you must change the placeholders from ${example} to the Ant token syntax, which is @example@ by default:
<bean id="id1" >
    <property name="prop1" value="@prop1@" />
    <property name="prop2" value="@prop2@" />
</bean>
This is because Gradle only accepts a Class and ReplaceTokens is final, so you can't extend the class to use ${} as the placeholders.

Element 'springService' is not defined in the Mule Registry [Mule 4]

I have a problem with beans in Mule 4.
There is an error when trying to invoke Java:
Element 'springService' is not defined in the Mule Registry
The error occurs at run time.
I have created configuration.xml and declared the Spring config:
<spring:config name="springConfig" doc:name="springConfig" files="beans.xml" />
Then created beans.xml in /src/main/resources:
<beans ...>
<context:component-scan base-package="com.services" />
</beans>
com.services is an external service that has a class:
@Service
public class SpringService {
...
}
EDIT1:
The way I am trying to invoke the Java method:
<java:invoke doc:name="Get data" doc:id="id" class="com.services.SpringService" method="#[getData(String)]" instance="springService">
<java:args><![CDATA[#[{name: vars.name}]]]></java:args>
</java:invoke>
Stack trace:
Message : Element 'springService' is not defined in the
Mule Registry
Element : test-flow/processors/2 #
test-service:test.xml:14 (Get Data)
Element DSL : <java:invoke doc:name="Get data" doc:id="id"
class="com.services.SpringService" method="#[getData(String)]"
instance="springService">
<java:args>#[{name: vars.name}]</java:args>
</java:invoke> Error type : MULE:UNKNOWN
FlowStack : at test-flow(test-flow/processors/2 #
test-service:test.xml:14 (Get Data))
at get:\getData(name):test-api-config(get:\getData(name):test-api-config/processors/0
# test-service:test-api.xml:125 (Get Data))
at test-api-main(test-api-main/processors/0 # test-service:test-api.xml:15)
The documentation of the Java Module doesn't say it can invoke methods on a Spring bean instance. The examples always mention using the new operation of the module to create instances. At the least, if it is possible, it is not documented how. It is probably unsupported by the current version of the Java module.
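For reference, this is roughly what the documented pattern looks like: create the instance with the module's new operation first, then invoke it (a sketch; vars.springService is an illustrative variable name, and since the object is built by the Java module rather than Spring, this only works if SpringService can be constructed directly and needs no injected dependencies):

<java:new doc:name="New SpringService" class="com.services.SpringService"
          constructor="SpringService()" target="springService"/>
<java:invoke doc:name="Get data" class="com.services.SpringService"
             method="getData(String)" instance="#[vars.springService]">
    <java:args><![CDATA[#[{name: vars.name}]]]></java:args>
</java:invoke>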

Spring Batch DB2 update DB2GSE.ST_POLYGON fails

I am trying to insert a polygon into a DB2 table hosted on z/OS.
This is my database Item Writer
<bean id="databaseItemWriter"
class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>
<![CDATA[
INSERT INTO SAMPLE_GEOMETRIES
(GEO_NAME, GEOMETRY)
VALUES
( ?, DB2GSE.ST_POLYGON(?, 1))
]]>
</value>
</property>
<property name="itemPreparedStatementSetter">
<bean class="com.amex.elbs.DAO.GeometriesItemPreparedStatementSetter" />
</property>
</bean>
This is my custom prepared statement setter
public class GeometriesItemPreparedStatementSetter implements ItemPreparedStatementSetter<Geometries> {
    @Override
    public void setValues(Geometries item, PreparedStatement ps) throws SQLException {
        ps.setString(1, item.Id);
        ps.setString(2, item.Polygon);
    }
}
This is my sample input file. It is pipe delimited and it has the ID and the Polygon Co-ordinates.
pm251|'POLYGON((-159.335174733889 21.9483433404175,-159.327130348878 22.0446395507162,-159.295025589769 22.1248124949548,-159.343195828355 22.1970166285359,-159.391366885913 22.2291198667724,-159.576012589057 22.2131796383001,-159.712505933171 22.1490592515515,-159.800814224332 22.0366665967853,-159.736592652746 21.9644203111023,-159.640246973766 21.9483657695954,-159.576021285803 21.8841361312636,-159.439545188912 21.8680716835921,-159.335174733889 21.9483433404175))', 1
The below statement when executed on z/OS is successful.
INSERT INTO SAMPLE_GEOMETRIES
(GEO_NAME, GEOMETRY)
VALUES
( 'PM',
DB2GSE.ST_POLYGON('POLYGON((
-159.335174733889 21.9483433404175,
-159.327130348878 22.0446395507162,
-159.295025589769 22.1248124949548,
-159.343195828355 22.1970166285359,
-159.391366885913 22.2291198667724,
-159.576012589057 22.2131796383001,
-159.712505933171 22.1490592515515,
-159.800814224332 22.0366665967853,
-159.736592652746 21.9644203111023,
-159.640246973766 21.9483657695954,
-159.576021285803 21.8841361312636,
-159.439545188912 21.8680716835921,
-159.335174733889 21.9483433404175))',1))
---------+---------+---------+---------+---------
DSNE615I NUMBER OF ROWS AFFECTED IS 1
This is what I get when I execute it:
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-245, SQLSTATE=428F5, SQLERRMC=DB2GSE.ST_POLYGON, DRIVER=4.12.55
at com.ibm.db2.jcc.am.hd.a(hd.java:676)
at com.ibm.db2.jcc.am.hd.a(hd.java:60)
at com.ibm.db2.jcc.am.hd.a(hd.java:127)
at com.ibm.db2.jcc.am.mn.c(mn.java:2621)
at com.ibm.db2.jcc.am.mn.d(mn.java:2609)
at com.ibm.db2.jcc.am.mn.a(mn.java:2085)
at com.ibm.db2.jcc.am.nn.a(nn.java:7054)
at com.ibm.db2.jcc.am.mn.a(mn.java:2062)
at com.ibm.db2.jcc.t4.cb.g(cb.java:136)
at com.ibm.db2.jcc.t4.cb.a(cb.java:41)
at com.ibm.db2.jcc.t4.q.a(q.java:32)
at com.ibm.db2.jcc.t4.rb.i(rb.java:135)
at com.ibm.db2.jcc.am.mn.ib(mn.java:2055)
at com.ibm.db2.jcc.am.nn.rc(nn.java:3219)
at com.ibm.db2.jcc.am.nn.s(nn.java:3370)
at com.ibm.db2.jcc.am.nn.l(nn.java:2499)
at com.ibm.db2.jcc.am.nn.addBatch(nn.java:2438)
at org.springframework.batch.item.database.JdbcBatchItemWriter$1.doInPreparedStatement(JdbcBatchItemWriter.java:190)
at org.springframework.batch.item.database.JdbcBatchItemWriter$1.doInPreparedStatement(JdbcBatchItemWriter.java:185)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:644)
... 28 more
The error message for SQLCODE -245 reads: "THE INVOCATION OF FUNCTION routine-name IS AMBIGUOUS".
Apparently, there is more than one version of DB2GSE.ST_POLYGON in the database, accepting different types of arguments. You are using an untyped parameter marker: DB2GSE.ST_POLYGON(?, 1), so DB2 is unable to determine which version of DB2GSE.ST_POLYGON you want.
Add an explicit cast to the function invocation using the appropriate data type, for example:
DB2GSE.ST_POLYGON(CAST( ? AS VARCHAR(1000)), 1)
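Applied to the item writer above, only the SQL property needs to change (a sketch; VARCHAR(1000) is an assumption, so size it to fit your longest polygon string):

<property name="sql">
    <value>
        <![CDATA[
        INSERT INTO SAMPLE_GEOMETRIES
        (GEO_NAME, GEOMETRY)
        VALUES
        ( ?, DB2GSE.ST_POLYGON(CAST(? AS VARCHAR(1000)), 1))
        ]]>
    </value>
</property>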

Spring integration aggregator time expire - issue

The code below accepts 2 messages before proceeding to the outbound channel.
<bean id="timeout"
class="org.springframework.integration.aggregator.TimeoutCountSequenceSizeReleaseStrategy">
<constructor-arg name="threshold" value="2" />
<constructor-arg name="timeout" value="7000" />
</bean>
<int:aggregator ref="updateCreate" input-channel="filteredAIPOutput"
method="handleMessage" release-strategy="releaseStrategyBean" release-strategy-method="timeout">
</int:aggregator>
My use case is to collate all the messages for 10 minutes and then send them to the outbound channel, not to release based on the count of messages as shown above.
To implement this time-based functionality, I used the code below:
<int:aggregator ref="updateCreate" input-channel="filteredAIPOutput"
method="handleMessage"
output-channel="outputappendFilenameinHeader" >
</int:aggregator>
<bean id="updateCreate" class="helper.UpdateCreateHelper"/>
I passed 10 messages; the PojoDateStrategyHelper canRelease method was invoked 10 times.
I tried to implement PojoDateStrategyHelper with time-difference logic, and it works as expected: after 10 minutes the UpdateCreateHelper class is called, but it received only 1 message (the last message). The remaining 9 messages are not seen anywhere. Am I doing anything wrong here? The messages are not collating.
I suspect there should be something built into SI which can achieve this: if I pass 10 minutes as a parameter, once the 10-minute window expires it should pass all the messages to the outbound channel.
This is my UpdateCreateHelper.java code :
public Message<?> handleMessage(List<Message<?>> flights) {
    LOGGER.debug("orderItems list ::" + flights.size()); // this is always printing 1
    MessageBuilder<?> messageWithHeader = MessageBuilder.withPayload(flights.get(0).getPayload().toString());
    messageWithHeader.setHeader("ftp_filename", "");
    return messageWithHeader.build();
}
@CorrelationStrategy
public String correlateBy(@Header("id") String id) {
    return id;
}

@ReleaseStrategy
public boolean canRelease(List<Message<?>> flights) {
    LOGGER.debug("inside canRelease ::" + flights.size()); // This is called for each and every message
    return compareTime(date.getTime(), new Date().getTime());
}
I am new to SI (v3.x); I searched a lot for a time-bound aggregator but couldn't find any useful source. Please suggest.
Thanks!
Turn on DEBUG logging to see why you only see one message.
I suspect there should be something built into SI which can achieve this ...
Prior to version 4.0 (and, by default, after), the aggregator is a completely passive component; the release strategy is only consulted when a new message arrives.
4.0 added group timeout capabilities whereby partial groups can be released (or discarded) after a timeout.
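With 4.0 or later, that can be as simple as a group timeout on the aggregator itself, for example (a sketch reusing your bean and channel names; 600000 ms = 10 minutes):

<int:aggregator ref="updateCreate" input-channel="filteredAIPOutput"
    method="handleMessage" output-channel="outputappendFilenameinHeader"
    group-timeout="600000" send-partial-result-on-expiry="true"/>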
However, with any version, you can configure a MessageGroupStoreReaper to release partially complete groups after some timeout. See the documentation.
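For 3.x, a reaper configuration along the lines of the documentation would look roughly like this (a sketch; the bean names and the 10-second reaper interval are illustrative, and the aggregator must reference the same message store):

<bean id="messageStore" class="org.springframework.integration.store.SimpleMessageStore"/>

<bean id="reaper" class="org.springframework.integration.store.MessageGroupStoreReaper">
    <property name="messageGroupStore" ref="messageStore"/>
    <!-- release groups older than 10 minutes -->
    <property name="timeout" value="600000"/>
</bean>

<task:scheduler id="scheduler" pool-size="1"/>

<!-- run the reaper every 10 seconds -->
<task:scheduled-tasks scheduler="scheduler">
    <task:scheduled ref="reaper" method="run" fixed-rate="10000"/>
</task:scheduled-tasks>

<int:aggregator ref="updateCreate" input-channel="filteredAIPOutput"
    method="handleMessage" output-channel="outputappendFilenameinHeader"
    message-store="messageStore" send-partial-result-on-expiry="true"/>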
private String correlationId = date.toString();

@CorrelationStrategy
public String correlateBy(Message<?> message) {
    // Return the correlation ID which is the timestamp the current window started (all messages should have the same correlation id)
    return "same";
}
Earlier I was returning the header id, which is different from message to message. I hope this solution helps someone. I wasted almost 2 days by ignoring such a small concept.

Spring Integration Splitter Map Keys to different channels

I have a transformer which returns a Map as a result. This result is then put on to the output-channel. What I want to do is go to a different channel for each KEY in the map. How can I configure this in Spring Integration?
e.g.
Transformer -- produces --> Map
Map contains {(Key1, "some data"), (Key2, "some data")}
So for Key1 --> go to channel 1
So for Key2 --> go to channel 2
etc..
Code examples would be helpful.
Thanks in advance
GM
Your processing should consist of two steps:
Partitioning the message into separate parts that will be processed independently,
Routing the separate messages (the result of the split) into the appropriate channels.
For the first task you have to use a splitter, and for the second one a router (a header value router fits best here).
Please find a sample Spring Integration configuration below. You may want to use an aggregator at the end of a chain in order to combine messages - I leave it at your discretion.
<channel id="inputChannel" />

<!-- splitting the message into separate parts -->
<splitter id="messageSplitter" input-channel="inputChannel" method="split"
          output-channel="routingChannel">
    <beans:bean class="com.stackoverflow.MapSplitter"/>
</splitter>

<channel id="routingChannel" />

<!-- routing messages into the appropriate channels based on a header value -->
<header-value-router input-channel="routingChannel" header-name="routingHeader">
    <mapping value="someHeaderValue1" channel="someChannel1" />
    <mapping value="someHeaderValue2" channel="someChannel2" />
</header-value-router>

<channel id="someChannel1" />
<channel id="someChannel2" />
And the splitter:
public final class MapSplitter {

    public static final String ROUTING_HEADER_NAME = "routingHeader";

    public List<Message<SomeData>> split(final Message<Map<Key, SomeData>> message) {
        List<Message<SomeData>> result = new LinkedList<>();
        for (Map.Entry<Key, SomeData> entry : message.getPayload().entrySet()) {
            final Message<SomeData> splitMessage = MessageBuilder
                    .withPayload(entry.getValue())
                    .setHeader(ROUTING_HEADER_NAME, entry.getKey())
                    .build();
            result.add(splitMessage);
        }
        return result;
    }
}
