I'm running sample code I wrote to test HBase's lockRow() and unlockRow() methods. The sample code is below:
HTable table = new HTable(config, "test");
RowLock rowLock = table.lockRow(Bytes.toBytes(row));
System.out.println("Obtained rowlock on " + row + "\nRowLock: " + rowLock);
Put p = new Put(Bytes.toBytes(row));
p.add(Bytes.toBytes("colFamily"), Bytes.toBytes(colFamily), Bytes.toBytes(value));
table.put(p);
System.out.println("put row");
table.unlockRow(rowLock);
System.out.println("Unlocked row!");
When I execute my code, I get an UnknownRowLockException. The documentation says that this error is thrown when an unknown row lock is passed to the region servers. I'm not sure how this is happening and how to resolve it.
The output and stack trace are below:
Obtained rowlock on row2
RowLock: org.apache.hadoop.hbase.client.RowLock@15af33d6
put row
Exception in thread "main" org.apache.hadoop.hbase.UnknownRowLockException: org.apache.hadoop.hbase.UnknownRowLockException: 5763272717012243790
at org.apache.hadoop.hbase.regionserver.HRegionServer.unlockRow(HRegionServer.java:2099)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:604)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1055)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:96)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.translateException(HConnectionManager.java:1268)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1014)
at org.apache.hadoop.hbase.client.HTable.unlockRow(HTable.java:870)
at HelloWorld.Hello.HelloWorld.main(HelloWorld.java:41)
EDIT:
I just realized that I should be printing rowLock.getLockId() instead of rowLock. I did this and compared it to the lock ID in the stack trace, and they are the same, so I'm not sure why the UnknownRowLockException occurs.
Please change the 'file descriptor limit' on the underlying system.
On Linux you can do this with ulimit; see the example below.
Note that HBase prints the ulimit it is seeing as the first line in its logs.
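For example (an illustrative sketch assuming a Linux shell; 65536 is a placeholder value, not a recommendation):
ulimit -n        # show the current open-file limit
ulimit -n 65536  # raise it for the current shell before starting HBase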
I was able to resolve this error in this way:
The rowLock being obtained needs to be passed as a parameter to the Put constructor.
HTable table = new HTable(config, "test");
RowLock rowLock = table.lockRow(Bytes.toBytes(row));
System.out.println("Obtained rowlock on " + row + "\nRowLock: " + rowLock);
Put p = new Put(Bytes.toBytes(row), rowLock);
p.add(Bytes.toBytes("colFamily"), Bytes.toBytes(colFamily), Bytes.toBytes(value));
table.put(p);
System.out.println("put row");
table.unlockRow(rowLock);
System.out.println("Unlocked row!");
In my earlier approach, a rowLock was obtained on a row of the table. However, since the rowLock was never used (it was not passed to the Put constructor), when I called the unlockRow method, the method waited for 60 seconds (the lock timeout) to check whether the lock had been used. After 60 seconds the lock expired, and I ended up with an UnknownRowLockException.
When I use JDBC to query data from an Oracle database, with Java code like this:
ParsedSql parsedSql = NamedParameterUtils.parseSqlStatement(apiSql);
MapSqlParameterSource paramSource = new MapSqlParameterSource(param);
String sqlToUse = NamedParameterUtils.substituteNamedParameters(parsedSql, paramSource);
List<SqlParameter> declaredParameters = NamedParameterUtils.buildSqlParameterList(parsedSql, paramSource);
PreparedStatementCreatorFactory creatorFactory = new PreparedStatementCreatorFactory(sqlToUse, declaredParameters);
Object[] params = NamedParameterUtils.buildValueArray(parsedSql, paramSource, null);
PreparedStatementCreator creator = creatorFactory.newPreparedStatementCreator(params);
PreparedStatement preparedStatement = creator.createPreparedStatement(conn);
if(batchCount > maxBatchCount || batchCount == 0){
batchCount = (int)maxBatchCount;
}
preparedStatement.setFetchSize(batchCount);
ResultSet resultSet = preparedStatement.executeQuery();
I set the fetchSize here. When the fetchSize is 10, the execution is normal. When the fetchSize is 100,000, an error occurs. Here is the error message:
java.sql.SQLException: Error
at com.alibaba.druid.pool.DruidDataSource.handleConnectionException(DruidDataSource.java:1770)
at com.alibaba.druid.pool.DruidPooledConnection.handleException(DruidPooledConnection.java:133)
at com.alibaba.druid.pool.DruidPooledStatement.checkException(DruidPooledStatement.java:82)
at com.alibaba.druid.pool.DruidPooledPreparedStatement.executeQuery(DruidPooledPreparedStatement.java:240)
at com.eternalinfo.alioth.openapi.service.DsApiInfoService.getApiData(DsApiInfoService.java:444)
at com.eternalinfo.alioth.openapi.service.DsApiInfoService.queryForList(DsApiInfoService.java:398)
at com.eternalinfo.alioth.openapi.service.DsApiInfoService.queryForList(DsApiInfoService.java:519)
at com.eternalinfo.alioth.openapi.service.DsApiInfoService$$FastClassBySpringCGLIB$$7b418e4c.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:769)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:95)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
...
Caused by: java.lang.ArrayIndexOutOfBoundsException: 802200000
at oracle.jdbc.driver.T4CNumberAccessor.unmarshalOneRow(T4CNumberAccessor.java:201)
at oracle.jdbc.driver.T4CTTIrxd.unmarshal(T4CTTIrxd.java:945)
at oracle.jdbc.driver.T4CTTIrxd.unmarshal(T4CTTIrxd.java:865)
at oracle.jdbc.driver.T4C8Oall.readRXD(T4C8Oall.java:790)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:403)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:208)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1046)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1207)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3613)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3657)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1495)
at com.alibaba.druid.pool.DruidPooledPreparedStatement.executeQuery(DruidPooledPreparedStatement.java:227)
... 117 more
Environment:
JDK 8
ojdbc 11.2.0.4
Oracle 10, 11, 12
I execute a simple full query SQL statement with 89 fields. I know that it's possible to get an ArrayIndexOutOfBoundsException when carrying too many arguments in bulk inserts, but I have never seen it in queries.
There's nothing on this so far; does anyone know?
When you set the fetch size to 100,000 you're telling the driver to fetch that many rows in one single roundtrip. That means the driver has to allocate enough space in heap to store all these rows before it can start processing them. And in the end it looks like the buffers are being corrupted anyway, hence this error.
This fetch size is much higher than any reasonable number typically used. The default is 10, which may be too small. It's hard to tell what a reasonable number should be in this case without knowing the shape of the rows, but the rule of thumb is "don't go any higher than 1,000"; past that you'll be in the red zone for sure.
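A minimal sketch of that advice (sqlToUse and conn come from the code in the question; 1,000 is the rule-of-thumb ceiling, not a tuned value):
// the fetch size bounds rows per network roundtrip, not the result size
try (PreparedStatement ps = conn.prepareStatement(sqlToUse)) {
    ps.setFetchSize(1000);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // process one row; the driver transparently refills its buffer
            // from the server every 1,000 rows
        }
    }
}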
I'm trying to start Oracle tracing by invoking direct JDBC calls. I'm obtaining my connection from Spring (Boot/JDBC). Then I run the TKProf commands through statements, execute the query, and print to the log.
The three statements below all return false. If I run these same statements through IntelliJ's console, I get the intended results and my *.trc file is properly generated.
try (final Connection connection = DataSourceUtils.getConnection(dataSource)) {
log.debug(query);
final Long maxCount = findMaxCount();
boolean traceIdSet = connection.createStatement().execute("ALTER SESSION SET TRACEFILE_IDENTIFIER = '" + traceId + "'");
boolean traceEnabled = connection.createStatement().execute("ALTER SESSION SET EVENTS '10046 trace name context forever, level 8'");
final PreparedStatement stmt = connection.prepareStatement(query);
map(consumer, stmt.executeQuery());
boolean traceIdOff = connection.createStatement().execute("ALTER SESSION SET EVENTS '10046 trace name context off'");
log.debug("|" + traceIdSet + "|" + traceEnabled + "|" + traceIdOff + "| ____________________ DONE __________________________");
} catch (SQLException e) {
log.error("Error Performing the Query", e);
}
It has to be something in my configuration... I mean, the Java thin driver can do it, because I can do it over the IDE, so I have to be missing some other stuff, maybe a Spring Boot convention that I should change.
Could you please help? Any input is valuable.
Thanks!
My bad, the real issue was that I wasn't getting a proper response from...
SELECT value FROM v$diag_info
where I couldn't find my trace file, only some others.
Nevertheless the .trc files are in place, so there is no problem with Spring Boot/JDBC for enabling TKProf.
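For anyone hitting the same confusion, this is the kind of lookup I mean (a sketch, assuming an Oracle 11g+ session with read access to v$diag_info; 'Default Trace File' is the row that names the current session's trace file):
try (Statement st = connection.createStatement();
     ResultSet rs = st.executeQuery(
             "SELECT value FROM v$diag_info WHERE name = 'Default Trace File'")) {
    if (rs.next()) {
        log.debug("trace file: " + rs.getString(1)); // full path of this session's .trc file
    }
}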
I have been trying to find the total number of records parsed by all the mappers, using the MAP_INPUT_RECORDS counter.
Here is the code I am using:
Counters counters = job.getCounters();
for (CounterGroup group : counters) {
System.out.println("* Counter Group: " + group.getDisplayName() + " (" + group.getName() + ")");
System.out.println(" number of counters in this group: " + group.size());
for (Counter counter : group) {
System.out.println(" - " + counter.getDisplayName() + ": " + counter.getName() + ": "+counter.getValue());
}
}
Also, I tried using the following code snippet:
{
    Counters counters = job.getCounters();
    // counters.getGroup("org.apache.hadoop.mapred.Task$Counter").findCounter("MAP_INPUT_RECORDS").getValue();
    int recordCountData = (int) counters.findCounter(
            "org.apache.hadoop.mapred.Task$Counter", "MAP_INPUT_RECORDS")
            .getValue();
}
But every time it throws the following error:
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.CounterGroup, but class was expected
at com.ssga.common.riskmeasures.validation.mr.RiskMeasuresValidationDriver.run(RiskMeasuresValidationDriver.java:169)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.ssga.common.riskmeasures.validation.mr.RiskMeasuresValidationDriver.main(RiskMeasuresValidationDriver.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
PS: I am trying to use the above-mentioned approaches after job.waitForCompletion(true) in the Driver class.
Any ideas on how I can resolve this issue?
Thanks in advance.
Akhilesh
"The new API favors abstract classes over interfaces, since these are easier to evolve. For example, you can add a method (with a default implementation) to an abstract class without breaking old implementations of the class2. For example, the Mapper and Reducer interfaces in the old API are abstract classes in the new API."
"The new API is in the org.apache.hadoop.mapreduce package (and subpackages).
The old API can still be found in org.apache.hadoop.mapred."
- Hadoop: The Definitive Guide by Tom White, 3rd Edition, page 28
Check your Mapper and Reducer: make sure they extend the new-API classes in org.apache.hadoop.mapreduce rather than implementing the old interfaces, and don't mix the two APIs in the same job. The sketch below shows the counter lookup written entirely against the new API.
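A minimal sketch of the fix, assuming Hadoop 2.x and the new API throughout (TaskCounter is the new-API enum that replaces the "org.apache.hadoop.mapred.Task$Counter" string group):
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

// in the driver, after job.waitForCompletion(true):
Counters counters = job.getCounters();
long mapInputRecords = counters.findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue();
System.out.println("MAP_INPUT_RECORDS = " + mapInputRecords);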
Whether I run a scan command or a count, this error pops up, and the error message doesn't make sense to me.
What does it mean, and how do I solve it?
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
request=scanner_id: 788 number_of_rows: 100 close_scanner: false
next_call_seq: 0
Commands:
count 'table', 5000
scan 'table', {COLUMN => ['cf:cq'], FILTER => "ValueFilter( =, 'binaryprefix:somevalue')"}
EDIT:
I have added the following settings to hbase-site.xml:
<property>
<name>hbase.rpc.timeout</name>
<value>1200000</value>
</property>
<property>
<name>hbase.client.scanner.caching</name>
<value>100</value>
</property>
NO IMPACT
EDIT2: Added sleep
Result[] results = scanner.next(100);
for (int i = 0; i < results.length; i++) {
result = results[i];
try {
...
count++;
...
Thread.sleep(10); // ADDED SLEEP
} catch (Throwable exception) {
System.out.println(exception.getMessage());
System.out.println("sleeping");
}
}
New Error after Edit2:
org.apache.hadoop.hbase.client.ScannerTimeoutException: 101761ms passed since the last invocation, timeout is currently set to 60000
...
Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 31, already closed?
...
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 31, already closed?
...
FINALLY BLOCK: 9900
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hbase.client.ScannerTimeoutException: 101766ms passed since the last invocation, timeout is currently set to 60000
...
Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 101766ms passed since the last invocation, timeout is currently set to 60000
...
Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 31, already closed?
...
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 31, already closed?
...
EDIT: By using the same client version shipped with the downloaded HBase (not the 0.99 Maven artifact), I was able to solve this issue.
The server version is 0.98.6.1.
The download contains the client jars inside its ./lib folder.
Don't forget to attach the ZooKeeper library as well.
OLD:
Right now I have done two things. First, I changed to the new table connection API (0.99):
Configuration conf = HBaseConfiguration.create();
TableName name = TableName.valueOf("TABLENAME");
Connection conn = ConnectionFactory.createConnection(conf);
Table table = conn.getTable(name);
Then, when the error pops up, I try to recreate the connection:
scanner.close();
conn.close();
conf.clear();
conf = HBaseConfiguration.create();
conn = ConnectionFactory.createConnection(conf);
table = conn.getTable(name);
scanner = table.getScanner(scan);
This works, but it gets slow after the first error it receives; it is very slow to scan through all the rows.
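A lighter-weight thing to try first is lowering the caching on the Scan itself, which is the per-scan equivalent of the hbase.client.scanner.caching setting (a sketch; 50 is an arbitrary example value):
Scan scan = new Scan();
scan.setCaching(50); // rows fetched per RPC; keep small if per-row processing is slow
ResultScanner scanner = table.getScanner(scan);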
This sometimes occurs when you have done huge deletes; you need to merge the empty regions and try to rebalance your regions, for example as sketched below.
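For example, from the HBase shell (the region names are placeholders for your own; merge_region takes the encoded region name, the hash at the end of a full region name):
merge_region 'ENCODED_REGION_A', 'ENCODED_REGION_B'
balancer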
This can be caused by a broken disk as well. In my case the disk was not so broken that Ambari, HDFS, or our monitoring services noticed it, but broken enough that it couldn't serve one region.
After stopping the regionserver using that disk, the scan worked.
I found the regionserver by running hbase shell in debug mode:
hbase shell -d
Then some regionservers appeared in the output and one of them stood out.
Then I ran dmesg on the host to find the failing disk.
How do I call an Oracle procedure with parameters in Grails? The procedure looks like this:
var returntype number; begin IZMST_PKG_SOLO_GENERATE_SQL.pro_get_row_cnt(1,181,:returntype); end;
I have tried to call this procedure, but the program encounters an error; you can check the error message below.
org.apache.commons.dbcp.DelegatingCallableStatement with address: "oracle.jdbc.driver.OracleCallableStatementWrapper@52d0d407" is closed. Stacktrace follows:
java.sql.SQLException: org.apache.commons.dbcp.DelegatingCallableStatement with address: "oracle.jdbc.driver.OracleCallableStatementWrapper@52d0d407" is closed.
at org.apache.commons.dbcp.DelegatingStatement.checkOpen(DelegatingStatement.java:137)
at org.apache.commons.dbcp.DelegatingCallableStatement.getObject(DelegatingCallableStatement.java:144)
at com.pg.izoom.de.ExtractManagementController$$EO5PWcUR.addExtract(ExtractManagementController.groovy:74)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Here is a piece of the code:
Connection conn = dataSource.getConnection()
Sql sql = new Sql(conn)
//int test = sql.call("",)
sql.call 'BEGIN IZMST_PKG_SOLO_GENERATE_SQL.pro_get_row_cnt(?,?,?); END;', [1L, qryId, Sql.resultSet(OracleTypes.NUMBER)], { dwells ->
    println dwells
}
PS: this problem has been solved; I updated the description to make this question easier to understand.
If I understood correctly, your return type is NUMBER, so you can use Sql.NUMERIC directly:
sql.call '{call IZMST_PKG_SOLO_GENERATE_SQL.pro_get_row_cnt(?,?,?)}', [1L, qryId, Sql.NUMERIC], { dwells ->
    println dwells
}
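For comparison, this is roughly what Groovy's Sql.call does under the hood in plain JDBC (a sketch, assuming an open java.sql.Connection named conn and the qryId value from the question):
import java.sql.CallableStatement;
import java.sql.Types;

CallableStatement cs = conn.prepareCall(
        "{call IZMST_PKG_SOLO_GENERATE_SQL.pro_get_row_cnt(?,?,?)}");
cs.setLong(1, 1L);
cs.setLong(2, qryId);
cs.registerOutParameter(3, Types.NUMERIC); // the OUT slot that Sql.NUMERIC maps to
cs.execute();
long rowCount = cs.getLong(3);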