Spring Data queryForStream: how can it run out of Heap Space? - spring-boot

I have a Spring Boot application that reads from a database table with potentially millions of rows and thus uses the queryForStream method from Spring Data. This is the code:
Stream<MyResultDto> result = jdbcTemplate.queryForStream("select * from table", myRowMapper);
This runs well for smaller tables, but once the table size reaches about 500 MB the application dies with a stack trace like this:
Exception in thread "http-nio-8080-Acceptor" java.lang.OutOfMemoryError: Java heap space
at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:64)
at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:363)
at org.apache.tomcat.util.net.SocketBufferHandler.<init>(SocketBufferHandler.java:58)
at org.apache.tomcat.util.net.NioEndpoint.setSocketOptions(NioEndpoint.java:486)
at org.apache.tomcat.util.net.NioEndpoint.setSocketOptions(NioEndpoint.java:79)
at org.apache.tomcat.util.net.Acceptor.run(Acceptor.java:149)
at java.base/java.lang.Thread.run(Thread.java:833)
2023-01-28 00:37:23.862 ERROR 1 --- [nio-8080-exec-3] o.a.c.h.Http11NioProtocol : Failed to complete processing of a request
java.lang.OutOfMemoryError: Java heap space
2023-01-28 00:37:30.548 ERROR 1 --- [nio-8080-exec-6] o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: Java heap space] with root cause
java.lang.OutOfMemoryError: Java heap space
Exception in thread "http-nio-8080-Poller" java.lang.OutOfMemoryError: Java heap space
As you can probably guess from the stack trace, I am streaming the database results out via an HTTP REST interface. The stack is PostgreSQL 15, the standard PostgreSQL JDBC driver 42.3.8, and spring-boot-starter-data-jpa 2.6.14, which pulls in spring-jdbc 5.3.24.
It's worth noting that the table has no primary key, which I suppose should be no problem for the above query. I have not posted the RowMapper because it never gets to run: the heap is exhausted right after the query is sent to the database, before any result set comes back for the RowMapper to work on.
I have tried jdbcTemplate.setFetchSize(1000) and also leaving the fetch size unspecified, which I believe falls back to the default (100, I think). In both cases the same thing happens: large result sets are not streamed but somehow exhaust the Java heap before streaming even starts. What could be the reason for this? Isn't the queryForStream method meant to avoid exactly such situations?

I was on the right track setting the fetch size; that is exactly what prevents the JDBC driver from loading the entire result set into memory. In my case the setting was silently ignored, and that is a behavior of the PostgreSQL JDBC driver: it ignores the fetch size when autocommit is set to true, which is the default in Spring JDBC.
Therefore the solution was to define a data source in Spring JDBC that sets autocommit to false and to use that data source for the streaming query. The fetch size was then applied; I ended up setting it to 10000, which in my case yielded the best performance/memory ratio.
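A minimal sketch of such a data source definition, assuming Spring Boot 2.x's default HikariCP pool (the bean names, the JDBC URL, and the 10000 fetch size are illustrative, not the asker's exact setup):

```java
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class StreamingDataSourceConfig {

    @Bean
    public DataSource streamingDataSource() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        // The PostgreSQL driver only honors fetchSize when autocommit is off.
        cfg.setAutoCommit(false);
        return new HikariDataSource(cfg);
    }

    @Bean
    public JdbcTemplate streamingJdbcTemplate(DataSource streamingDataSource) {
        JdbcTemplate template = new JdbcTemplate(streamingDataSource);
        template.setFetchSize(10000); // rows per round trip, not the whole result set
        return template;
    }
}
```

With a template built this way, queryForStream pulls rows from the server in fetch-size chunks instead of materializing the entire result set on the client.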

Related

H2O Exception: water.AutoBuffer.AutoBufferException

We are productionizing a model that was created using h2o. However, during one of our load tests, we are getting a
ERRR: water.AutoBuffer$AutoBufferException
What exactly is the AutoBufferException and what causes it to be thrown?
Previously, when we deployed the application, we did not provide any Java heap memory parameters, and during load testing that always caused the instance to terminate due to an OOM error. However, since adding the Java heap memory parameter, load tests have been producing the AutoBufferException under heavy load instead.
When we look at the Debug logs, we see that there is plenty of memory still available when this Exception is thrown. CPU usage seems to be pretty normal as well. Below is a snippet of our code and the stacktrace that the error produces.
According to the logs, h2o makes a series of web requests in order to complete the prediction. The exception only gets thrown during one of those calls: (Request Type: POST, Request Path: /99/Models.bin). Once the exception gets thrown, the request fails and the service picks up the next request.
h2o Version: 3.18.0.11
Python Version: 3.5.4
OS: RHEL 7.x
import h2o  # import assumed; not shown in the original snippet

h2o.connect(ip = HOST, port = PORT)
loaded_model = h2o.load_model(MODEL_PATH)

def predict(to_be_scored):
    to_be_scored_hex = h2o.H2OFrame(to_be_scored)
    prediction = loaded_model.predict(to_be_scored_hex)
    return prediction
We would expect the code to return a prediction, however the following stacktrace is the output.
Stacktrace: [water.AutoBuffer$AutoBufferException,
water.AutoBuffer.getImpl(AutoBuffer.java:634),
water.AutoBuffer.getSp(AutoBuffer.java:610),
water.AutoBuffer.get1(AutoBuffer.java:749),
water.AutoBuffer.get1U(AutoBuffer.java:750),
water.AutoBuffer.getInt(AutoBuffer.java:829),
water.AutoBuffer.get(AutoBuffer.java:793),
water.AutoBuffer.getKey(AutoBuffer.java:811),
water.AutoBuffer.getKey(AutoBuffer.java:808),
hex.Model.readAll_impl(Model.java:1635),
hex.tree.SharedTreeModel.readAll_impl(SharedTreeModel.java:411),
water.AutoBuffer.getKey(AutoBuffer.java:814),
water.Keyed.readAll(Keyed.java:50),
hex.Model.importBinaryModel(Model.java:2256),
water.api.ModelsHandler.importModel(ModelsHandler.java:209),
sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source),
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43),
java.lang.reflect.Method.invoke(Method.java:498),
water.api.Handler.handle(Handler.java:63),
water.api.RequestServer.serve(RequestServer.java:451),
water.api.RequestServer.doGeneric(RequestServer.java:296),
water.api.RequestServer.doPost(RequestServer.java:222),
javax.servlet.http.HttpServlet.service(HttpServlet.java:755),
javax.servlet.http.HttpServlet.service(HttpServlet.java:848),
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684),
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:503),
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086),
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429),
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020),
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135),
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154),
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116),
water.JettyHTTPD$LoginHandler.handle(JettyHTTPD.java:197),
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154),
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116),
org.eclipse.jetty.server.Server.handle(Server.java:370),
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494),
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53),
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:982),
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1043),
org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865),
org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240),
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72),
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264),
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608),
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543),
java.lang.Thread.run(Thread.java:748)];parms={dir=/opt/h2o/data/model, model_id=}

Getting Slow operation detected: com.hazelcast.map.impl.operation.DeleteOperation while calling ITopic.publish()

[SpringBoot Application] I am trying to delete a key from a Hazelcast map from an async method. In the delete method of the MapStore class, I put the key onto a topic and call publish(). However, I sometimes get this message:
Slow operation detected: com.hazelcast.map.impl.operation.DeleteOperation. I am adding the stack trace below.
2019-03-27 11:52:08.041 [31m WARN[0;39m [] 24586 --- [trace=,span=] [35m[hz._hzInstance_1_gaian.SlowOperationDetectorThread][0;39m [33mc.h.s.i.o.s.SlowOperationDetector [0;39m: [localhost]:5701 [gaian] [3.10] Slow operation detected: com.hazelcast.map.impl.operation.DeleteOperation
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:160)
com.hazelcast.client.spi.ClientProxy.invokeOnPartition(ClientProxy.java:225)
com.hazelcast.client.proxy.PartitionSpecificClientProxy.invokeOnPartition(PartitionSpecificClientProxy.java:49)
com.hazelcast.client.proxy.ClientTopicProxy.publish(ClientTopicProxy.java:52)
com.gaian.adwize.cache.mapstore.CampaignMapStore.delete(CampaignMapStore.java:95)
com.gaian.adwize.cache.mapstore.CampaignMapStore.delete(CampaignMapStore.java:36)
com.hazelcast.map.impl.MapStoreWrapper.delete(MapStoreWrapper.java:115)
com.hazelcast.map.impl.mapstore.writethrough.WriteThroughStore.remove(WriteThroughStore.java:56)
com.hazelcast.map.impl.mapstore.writethrough.WriteThroughStore.remove(WriteThroughStore.java:28)
com.hazelcast.map.impl.recordstore.DefaultRecordStore.delete(DefaultRecordStore.java:565)
com.hazelcast.map.impl.operation.DeleteOperation.run(DeleteOperation.java:38)
com.hazelcast.spi.Operation.call(Operation.java:148)
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:202)
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:191)
com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:406)
com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:433)
com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:581)
com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:566)
com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:525)
com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:215)
com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:60)
com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processMessage(AbstractPartitionMessageTask.java:67)
com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:130)
com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:110)
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:155)
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:125)
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:100)
Any help from the community will be really helpful. Thanks.
A slow operation is not necessarily an error. Newer versions of Hazelcast can calculate what a normal response time is and detect when something is out of the ordinary. It's usually network latency or garbage collection introducing the delay.
This is not an error. This is visible in Hazelcast Version 4.0.x as well.
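If the warnings are too noisy, the detector's threshold can be tuned via Hazelcast system properties. A sketch in the Spring hz: XML style (the 30-second value is illustrative; the property names are from Hazelcast 3.x, so verify them against your version's reference manual):

```xml
<hz:properties>
    <!-- keep the detector on, but only report operations slower than 30 s -->
    <hz:property name="hazelcast.slow.operation.detector.enabled">true</hz:property>
    <hz:property name="hazelcast.slow.operation.detector.threshold.millis">30000</hz:property>
</hz:properties>
```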

Can MAX_UTILIZATION for PROCESSES reached cause "Unable to get managed connection" Exception?

A JBoss 5.2 application server log was filled with thousands of the following exception:
Caused by: javax.resource.ResourceException: Unable to get managed connection for jdbc_TestDB
at org.jboss.resource.connectionmanager.BaseConnectionManager2.getManagedConnection(BaseConnectionManager2.java:441)
at org.jboss.resource.connectionmanager.TxConnectionManager.getManagedConnection(TxConnectionManager.java:424)
at org.jboss.resource.connectionmanager.BaseConnectionManager2.allocateConnection(BaseConnectionManager2.java:496)
at org.jboss.resource.connectionmanager.BaseConnectionManager2$ConnectionManagerProxy.allocateConnection(BaseConnectionManager2.java:941)
at org.jboss.resource.adapter.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:96)
... 9 more
Caused by: javax.resource.ResourceException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] )
at org.jboss.resource.connectionmanager.InternalManagedConnectionPool.getConnection(InternalManagedConnectionPool.java:311)
at org.jboss.resource.connectionmanager.JBossManagedConnectionPool$BasePool.getConnection(JBossManagedConnectionPool.java:689)
at org.jboss.resource.connectionmanager.BaseConnectionManager2.getManagedConnection(BaseConnectionManager2.java:404)
... 13 more
I've stripped off the first part of the exception, which is basically our internal JDBC wrapper code which tries to get a DB connection from the pool.
Looking at the Oracle DB side I ran the query:
select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('sessions', 'processes');
This produced the output:
RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION LIMIT_VALUE
processes 1387 1500 1500
sessions 1434 1586 2272
Given that the PROCESSES limit of 1500 was reached, would this cause the JBoss exceptions we experienced? I've also been investigating the possibility of connection leaks, but haven't found any evidence of that so far.
What is the recommended course of action here? Is simply increasing the limit a valid solution?
Usually when max_utilization reaches the processes limit, the listener will refuse new connections to the database. You can see the related errors in the alert log. To solve this on the database side, you should increase the processes parameter.
Hmm, strange. Is it possible that exception wrapping in JBoss hides the original error? You should get an SQLException whose text starts with ORA-. Maybe your JDBC wrapper does not handle errors properly.
The recommended actions are to:
check the configured size of the connection pool against the processes and sessions Oracle startup parameters.
check Oracle's view v$session, especially the columns STATUS, LAST_CALL_ET, SQL_ID, PREV_SQL_ID.
translate sql_id (prev_sql_id) into sql_text via v$sql.
if your application has a connection leak, sql_id and prev_sql_id might point you to the place in your source code where a connection was last used (i.e., where it was leaked).
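If raising the limit turns out to be the right fix, it is done on the database side. A sketch (the value 3000 is illustrative; a change made in the SPFILE only takes effect after an instance restart, and in recent Oracle versions sessions is derived from processes unless set explicitly):

```sql
-- Illustrative value; requires an instance restart to take effect.
ALTER SYSTEM SET processes = 3000 SCOPE = SPFILE;
```

Before doing so, rule out a connection leak: raising the cap on a leaking pool only postpones the same exhaustion.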

Cypher query removing properties results in out-of-memory error in neo4j-shell

I have a large network of over 15 million nodes. I want to remove the property "CONTROL" from all of them using a Cypher query in the neo4j-shell.
If I try and execute any of the following:
MATCH (n) WHERE has(n.`CONTROL`) REMOVE n.`CONTROL` RETURN COUNT(n);
MATCH (n) WHERE has(n.`CONTROL`) REMOVE n.`CONTROL`;
MATCH (n) REMOVE n.`CONTROL`;
the system returns:
Error occurred in server thread; nested exception is:
java.lang.OutOfMemoryError: Java heap space
Even the following query gives the OutOfMemoryError:
MATCH (n) REMOVE n.`CONTROL` RETURN n.`ID` LIMIT 10;
As a test, the following does execute properly:
MATCH (n) WHERE has(n.`CONTROL`) RETURN COUNT(n);
returning 16636351.
Some details:
The memory limit depends on the following settings:
wrapper.java.maxmemory (conf/neo4j-wrapper.conf)
neostore..._memory (conf/neo4j.properties)
Setting these values to a total of 28 GB across both files results in a java_pidXXX.hprof file of about 45 GB (wrapper.java.additional=-XX:+HeapDumpOnOutOfMemoryError).
The only clue I could google was:
...you use the Neo4j-Shell which is just an ops tool and just collects the data in memory before sending back, it was never meant to handle huge result sets.
Is it really not possible to remove properties in large networks using the neo4j-shell and Cypher? Or what am I doing wrong?
PS
Additional information:
Neo4j version: 2.1.3
Java versions: Java(TM) SE Runtime Environment (build 1.7.0_76-b13) and OpenJDK Runtime Environment (IcedTea 2.5.4) (7u75-2.5.4-1~trusty1)
The database is 7.4 GB (16636351 nodes, 14724489 relations)
The property "CONTROL" is empty, i.e., it has just been defined for all the nodes without actually assigning a property value.
An example of the exception from data/console.log:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid20541.hprof ...
Dump file is incomplete: file size limit
Exception in thread "GC-Monitor" Exception in thread "pool-2-thread-2" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.lang.StringCoding.safeTrim(StringCoding.java:79)
at java.lang.StringCoding.access$300(StringCoding.java:50)
at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:305)
at java.lang.StringCoding.encode(StringCoding.java:344)
at java.lang.StringCoding.encode(StringCoding.java:387)
at java.lang.String.getBytes(String.java:956)
at ch.qos.logback.core.encoder.LayoutWrappingEncoder.convertToBytes(LayoutWrappingEncoder.java:122)
at ch.qos.logback.core.encoder.LayoutWrappingEncoder.doEncode(LayoutWrappingEncoder.java:135)
at ch.qos.logback.core.OutputStreamAppender.writeOut(OutputStreamAppender.java:194)
at ch.qos.logback.core.FileAppender.writeOut(FileAppender.java:209)
at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:219)
at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103)
at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:396)
at ch.qos.logback.classic.Logger.warn(Logger.java:709)
at org.neo4j.kernel.logging.LogbackService$Slf4jToStringLoggerAdapter.warn(LogbackService.java:243)
at org.neo4j.kernel.impl.cache.MeasureDoNothing.run(MeasureDoNothing.java:84)
java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1857)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1079)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "Statistics Gatherer[primitives]" java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI RenewClean-[10.65.4.212:42299]" java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI RenewClean-[10.65.4.212:43614]" java.lang.OutOfMemoryError: Java heap space
see here: http://jexp.de/blog/2013/05/on-importing-data-in-neo4j-blog-series/
To update data with Cypher it is also necessary to take transaction size into account. For the embedded case, batching transactions is discussed in the next installment of this series. For the remote execution via the Neo4j REST API there are a few important things to remember. Especially with large index lookups and match results, it might happen that the query updates hundreds of thousands of elements. Then a paging mechanism using WITH and SKIP/LIMIT can be put in front of the updating operation.
MATCH (m:Movie)<-[:ACTED_IN]-(a:Actor)
WITH a, count(*) AS cnt
SKIP {offset} LIMIT {pagesize}
SET a.movie_count = cnt
RETURN count(*)
Run with pagesize=20000 and increasing offset=0,20000,40000,… until the query returns a count < pagesize
So in your case, repeat this until it returns 0 rows. You can also increase the limit to 1M.
MATCH (n) WHERE has(n.`CONTROL`)
WITH n
LIMIT 100000
REMOVE n.`CONTROL`
RETURN COUNT(n);

HazelCast IMap.values() giving OutofMemory on Tomcat

I'm still trying to get to know Hazelcast and have to decide whether to use it or not.
I wrote a simple application in which I start up the cache on (single-node) server startup and load the map at the same time with about 400 entries. The object itself has two String fields. I have a service class that looks up the cache and tries to get all the values from the map.
However, I'm getting an OutOfMemoryError on Java heap space while trying to get the values out of the Hazelcast map. Eventually we plan to move to a 5-node cluster to start with.
Following is the cache spring config:
<hz:hazelcast id="instance">
<hz:config>
<hz:group name="dev" password=""/>
<hz:properties>
<hz:property name="hazelcast.merge.first.run.delay.seconds">5</hz:property>
<hz:property name="hazelcast.merge.next.run.delay.seconds">5</hz:property>
</hz:properties>
<hz:network port="5701" port-auto-increment="false">
<hz:join>
<hz:multicast enabled="true" />
</hz:join>
</hz:network>
</hz:config>
</hz:hazelcast>
<hz:map instance-ref="instance" id="statusMap" name="statuses" />
Following is where the error occurs:
map = instance.getMap("statuses");
Set<Status> statuses = (Set<Status>) map.values();
return statuses;
Any other method of IMap works fine. I tried getting the keySet and the size, and both worked fine. It is only when I try to get the values that the OutOfMemoryError shows up.
java.lang.OutOfMemoryError: Java heap space
I've tried the above with a standalone Java application and it works fine. I've also monitored with VisualVM and don't see any spike in used heap memory when the error occurs, which is all the more confusing. The available heap is 1 GB and the used heap was about 70 MB when the error occurred.
However, when I take the cache implementation out of the application, it works fine going to the database and getting the data.
I've also tried playing around with the Tomcat VM args, without success. It is the same OutOfMemoryError when I access IMap.values(), with or without an SqlPredicate. Any help or direction in this matter will be greatly appreciated.
Thanks.
As the exception mentions, you're running out of heap space because the values() method tries to return all deserialized values at once. If they don't fit into memory, you'll likely get an OOME.
You can use paging to prevent this from happening: http://hazelcast.org/docs/latest/manual/html-single/hazelcast-documentation.html#paging-predicate-order-limit-
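A hedged sketch of that paging approach using Hazelcast 3.x's PagingPredicate (the page size and class names are illustrative, and Status is the asker's own type; in Hazelcast 4.x the predicate is created via Predicates.pagingPredicate instead):

```java
import java.util.Collection;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.PagingPredicate;

public class StatusPager {

    // Pulls values one page at a time instead of deserializing the whole map at once.
    public static void printAllStatuses(HazelcastInstance instance) {
        IMap<String, Status> map = instance.getMap("statuses");
        PagingPredicate<String, Status> page = new PagingPredicate<>(50); // 50 entries per page

        Collection<Status> values = map.values(page);
        while (!values.isEmpty()) {
            values.forEach(System.out::println);
            page.nextPage();            // advance the window
            values = map.values(page);  // fetch the next chunk
        }
    }
}
```

Each map.values(page) call only materializes one page's worth of entries on the client, which keeps the heap footprint bounded regardless of map size.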
How big are your 400 entries?
And like Chris said, the whole data set is being pulled into memory.
In the future we'll replace this with an iteration-based approach where we'll only pull small chunks into memory instead of the whole thing.
I figured out the issue. The Status object implements "com.hazelcast.nio.serialization.Portable" for serialization, and I had not configured the corresponding serialization factory. After I configured the factory as follows, it worked fine:
<hz:serialization>
<hz:portable-factories>
<hz:portable-factory factory-id="1" class-name="ApplicationPortableFactory" />
</hz:portable-factories>
</hz:serialization>
I apologize for not giving the complete background initially, as I only noticed it later myself. Thanks for replying though. I wasn't aware of the PagingPredicate, and now I'm using it for sorting and paging results. Thanks again.