BDB JE exception - berkeley-db-je

com.sleepycat.je.DatabaseException: (JE 3.2.76) fetchTarget of 0x45/0x27aa63 parent IN=3316846 lastFullVersion=0x45/0x47b147 parent.getDirty()=false state=0 com.sleepycat.je.log.DbChecksumException: (JE 3.2.76) Read invalid log entry type: 97
at com.sleepycat.je.tree.IN.fetchTarget(IN.java:989)
at com.sleepycat.je.tree.Tree.getNextBinInternal(Tree.java:1441)
at com.sleepycat.je.tree.Tree.getNextBin(Tree.java:1306)
at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1498)
at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1368)
at com.sleepycat.je.Cursor.retrieveNextAllowPhantoms(Cursor.java:1587)
at com.sleepycat.je.Cursor.retrieveNext(Cursor.java:1397)
at com.sleepycat.je.Cursor.getNext(Cursor.java:456)
at browse.bdb.dao.JENodeDao.load(JENodeDao.java:76)
.................my code..................
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: com.sleepycat.je.log.DbChecksumException: (JE 3.2.76) Read invalid log entry type: 97
at com.sleepycat.je.log.LogEntryHeader.<init>(LogEntryHeader.java:69)
at com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:631)
at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:597)
at com.sleepycat.je.tree.IN.fetchTarget(IN.java:958)
... 16 more
What is wrong?

It is likely that some log file is corrupted; try running recovery when opening the environment.

This exception appeared because I tried to read a record that had not yet been flushed to the file system.
For JE it's: environment.sync();
For the JNI edition it's: environment.syncCache(null);
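For reference, here is a minimal JE sketch of that fix (the environment path is a placeholder and the transactional setting is an assumption). Recovery runs automatically when the Environment is opened, and sync() flushes dirty data to stable storage before another process reads it:

import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import java.io.File;

public class JeSyncExample {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig config = new EnvironmentConfig();
        config.setAllowCreate(true);
        config.setTransactional(true); // assumption: a transactional environment
        // Opening the environment runs recovery automatically,
        // replaying the log and discarding any incomplete tail.
        Environment env = new Environment(new File("/path/to/env"), config);
        try {
            // ... write records here ...
            // Flush all dirty data to the file system so that readers
            // opening the environment afterwards see a consistent log.
            env.sync();
        } finally {
            env.close();
        }
    }
}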

Related

Some errors on our Spark: our Spark-on-YARN mode cannot work at all

The system reports the error log below. It says that we have some problems in our Hive system, but I have reviewed my Hive conf file many times and cannot find any problems:
Traceback (most recent call last):
  File "/data/spark/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/data/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o22.sessionState.
: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:978)
... 13 more
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:169)
at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:101)
at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:100)
at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:157)
at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:32)
... 18 more
Caused by: java.lang.NoSuchMethodException: org.apache.spark.sql.hive.HiveExternalCatalog.<init>(org.apache.spark.SparkConf, org.apache.hadoop.conf.Configuration)
at java.lang.Class.getConstructor0(Class.java:2892)
at java.lang.Class.getDeclaredConstructor(Class.java:2058)
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:164)
... 26 more

Error while saving as parquet {Could not read footer}

I am writing a Spark application.
After processing logs, I save the output in parquet format (using the DataFrame.saveAsParquetFile() API).
But I sometimes get an error while saving as parquet; the error goes away if I rerun the save.
Please let me know the root cause for this:
java.io.IOException: Could not read footer: java.io.IOException: Could not read footer for file org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus@6233d82f
at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:238)
at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache.refresh(newParquet.scala:369)
at org.apache.spark.sql.parquet.ParquetRelation2.org$apache$spark$sql$parquet$ParquetRelation2$$metadataCache$lzycompute(newParquet.scala:154)
at org.apache.spark.sql.parquet.ParquetRelation2.org$apache$spark$sql$parquet$ParquetRelation2$$metadataCache(newParquet.scala:152)
at org.apache.spark.sql.parquet.ParquetRelation2.refresh(newParquet.scala:197)
at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.insert(commands.scala:134)
at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.run(commands.scala:114)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:939)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:939)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:332)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:144)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:135)
at org.apache.spark.sql.DataFrame.saveAsParquetFile(DataFrame.scala:1494)
at ParquetWriter$.main(ParquetWriter.scala:182)
at ParquetWriter.main(ParquetWriter.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Could not read footer for file org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus@6233d82f
at parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:230)
at parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:224)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.fs.ChecksumException: Checksum error: file:/{directory}/output.parquet2/part-r-01418.gz.parquet at 17920
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.readChunk(ChecksumFileSystem.java:248)
at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:273)
at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:211)
at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:229)
at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193)
at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:431)
at org.apache.hadoop.fs.FSInputChecker.seek(FSInputChecker.java:412)
at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:48)
at org.apache.hadoop.fs.ChecksumFileSystem$FSDataBoundedInputStream.seek(ChecksumFileSystem.java:318)
at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:413)
at parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:228)
... 5 more
Thanks

Getting error: org.kitesdk.data.DatasetIOException: Cannot decode Avro value

I am trying to load data from Redshift onto HDFS (parquet format) using Sqoop (--as-parquetfile).
Has anyone else encountered this error (see below)? If so, how did you go about fixing it?
Error: org.kitesdk.data.DatasetIOException: Cannot decode Avro value
at org.kitesdk.data.spi.SchemaUtil.fromString(SchemaUtil.java:419)
at org.kitesdk.data.spi.predicates.In.fromString(In.java:47)
at org.kitesdk.data.spi.predicates.Predicates.fromString(Predicates.java:85)
at org.kitesdk.data.spi.Constraints.fromQueryMap(Constraints.java:468)
at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.loadOrCreateTaskAttemptView(DatasetKeyOutputFormat.java:577)
at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.getRecordWriter(DatasetKeyOutputFormat.java:426)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:644)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.EOFException
at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:153)
at org.apache.avro.io.BinaryDecoder.readIndex(BinaryDecoder.java:423)
at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:229)
at org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:206)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:139)
at org.kitesdk.data.spi.SchemaUtil.fromString(SchemaUtil.java:417)
... 13 more
Thanks for any suggestions you may have.

Hadoop error while processing files with brackets

I have a lot of different files (*.doc, *.pdf, and so on) that I wanted to process with MapReduce.
I put them in HDFS and then started a Java MapReduce program using Hue.
If the files are well formatted and don't have brackets "(){}[]" in their names, all goes fine.
But if there is a file named OPN_last_[age.PDF,
I get these errors:
Failing Oozie Launcher, Main class [distr.fors.ru.Index], main() threw exception, Illegal file pattern: Unclosed character class near index 17
OPN_last_[age.PDF
^
java.io.IOException: Illegal file pattern: Unclosed character class near index 17
OPN_last_[age.PDF
^
at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:70)
at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:49)
at org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1670)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1627)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:211)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1063)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1080)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:992)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
at distr.fors.ru.Index.run(Index.java:78)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at distr.fors.ru.Index.main(Index.java:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:495)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.util.regex.PatternSyntaxException: Unclosed character class near index 17
OPN_last_[age.PDF
^
at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:167)
at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:151)
at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:66)
... 32 more
If there is a file like this: {2011-01-27} (3769330).pdf
I get this error:
Input Pattern hdfs://fd-bigdata.distr.fors.ru:8020/{2011-01-27} (3769330).pdf matches 0 files
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1063)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1080)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:992)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
at distr.fors.ru.Index.run(Index.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at distr.fors.ru.Index.main(Index.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:495)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
I really need to process such files. What can I do to solve these problems?
P.S. I am using the latest CDH, 4.4.0.
To deal with special characters in Java you should escape them with a double backslash '\\':
'[' => '\\['
'}' => '\\}'
This works for me in Java, in Pig, and in Oozie. I hope it also solves your problem.
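For example, here is a minimal Java sketch of that escaping (the escapeGlob helper and the /input/ path are hypothetical, for illustration only). It prefixes each glob metacharacter with a backslash so the file name is treated as a literal path rather than an invalid glob pattern:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class GlobEscape {

    // Hypothetical helper: escape the characters that Hadoop's glob
    // syntax treats specially ([, ], {, }, (, ), *, ?, and \ itself).
    static String escapeGlob(String name) {
        return name.replaceAll("([\\[\\]{}()*?\\\\])", "\\\\$1");
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        String raw = "OPN_last_[age.PDF"; // the problematic file name
        // After escaping, globStatus() matches the name literally
        // instead of failing on the unclosed character class.
        FileInputFormat.addInputPath(job, new Path("/input/" + escapeGlob(raw)));
    }
}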

How to extract exceptions from a Java log file?

I have a huge log file with a lot of exceptions, and I need to extract each full stack trace plus a few lines before and after it. Ideally the tool for this would be a bash script.
example:
16.02.2012 16:04:34 *INFO * [main] InitialContextInitializer: Reference bound: rmirepository (InitialContextInitializer.java, line 203)
16.02.2012 16:04:34 *ERROR* [main] StandaloneContainerInitializedListener: Error of StandaloneContainer initialization (StandaloneContainerInitializedListener.java, line 109)
java.lang.RuntimeException: Cannot instantiate component key=org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader type=org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader found at file:/home/roman/reports/backup/1.14.7-5636/rdbms/single/exo-tomcat_1.14.7-5636/exo-configuration.xml
at org.exoplatform.container.jmx.MX4JComponentAdapter.getComponentInstance(MX4JComponentAdapter.java:134)
at org.exoplatform.container.management.ManageableComponentAdapter.getComponentInstance(ManageableComponentAdapter.java:68)
at org.exoplatform.container.ConcurrentPicoContainer.getInstance(ConcurrentPicoContainer.java:468)
at org.exoplatform.container.ConcurrentPicoContainer.getComponentInstancesOfType(ConcurrentPicoContainer.java:366)
at org.exoplatform.container.CachingContainer.getComponentInstancesOfType(CachingContainer.java:111)
at org.exoplatform.container.LifecycleVisitor.visitContainer(LifecycleVisitor.java:151)
at org.exoplatform.container.ConcurrentPicoContainer.accept(ConcurrentPicoContainer.java:615)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.picocontainer.defaults.AbstractPicoVisitor.traverse(AbstractPicoVisitor.java:32)
at org.exoplatform.container.LifecycleVisitor.traverse(LifecycleVisitor.java:90)
at org.exoplatform.container.LifecycleVisitor.start(LifecycleVisitor.java:170)
at org.exoplatform.container.ConcurrentPicoContainer.start(ConcurrentPicoContainer.java:554)
at org.exoplatform.container.ExoContainer.start(ExoContainer.java:266)
at org.exoplatform.container.StandaloneContainer$3.run(StandaloneContainer.java:178)
at org.exoplatform.container.StandaloneContainer$3.run(StandaloneContainer.java:175)
at org.exoplatform.commons.utils.SecurityHelper.doPrivilegedAction(SecurityHelper.java:291)
at org.exoplatform.container.StandaloneContainer.getInstance(StandaloneContainer.java:174)
at org.exoplatform.container.StandaloneContainer.getInstance(StandaloneContainer.java:129)
at org.exoplatform.ws.frameworks.servlet.StandaloneContainerInitializedListener.contextInitialized(StandaloneContainerInitializedListener.java:104)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4205)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4704)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:799)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:779)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:601)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:675)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:601)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:502)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1315)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:324)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:840)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:754)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: java.lang.RuntimeException: Cannot instantiate component key=org.exoplatform.services.jcr.RepositoryService type=org.exoplatform.services.jcr.impl.RepositoryServiceImpl found at file:/home/roman/reports/backup/1.14.7-5636/rdbms/single/exo-tomcat_1.14.7-5636/exo-configuration.xml
at org.exoplatform.container.jmx.MX4JComponentAdapter.getComponentInstance(MX4JComponentAdapter.java:134)
at org.exoplatform.container.management.ManageableComponentAdapter.getComponentInstance(ManageableComponentAdapter.java:68)
at org.exoplatform.container.ConcurrentPicoContainer.getInstance(ConcurrentPicoContainer.java:468)
at org.exoplatform.container.ConcurrentPicoContainer.getComponentInstanceOfType(ConcurrentPicoContainer.java:422)
at org.exoplatform.container.CachingContainer.getComponentInstanceOfType(CachingContainer.java:139)
at org.exoplatform.container.ExoContainer.createComponent(ExoContainer.java:407)
at org.exoplatform.container.jmx.MX4JComponentAdapter.getComponentInstance(MX4JComponentAdapter.java:96)
... 45 more
Use awk:
BEGIN { previous = ""; }
# A stack-trace line starts with a tab followed by "at ".
/^\tat / {
    # Print the line preceding the trace (the exception message) once.
    if (previous != "") {
        print previous;
        previous = "";
    }
    print;
    next;
}
# Remember the most recent non-trace line as candidate context.
{ previous = $0; }
This should do the trick. In a nutshell, it looks for the pattern \tat (tab, "at", blank), which almost always indicates a stack-trace line.
If you have many exceptions, you can use maps (associative arrays, in awk's lingo) to save part of each exception message and then compute statistics, such as which exception happens most often.
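The same counting idea, sketched in Java rather than awk (the regex heuristic for spotting exception class names is an assumption):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExceptionHistogram {
    public static void main(String[] args) throws Exception {
        // Heuristic: a fully-qualified class name ending in
        // "Exception" or "Error", followed by a colon.
        Pattern p = Pattern.compile("\\b([\\w.]+(?:Exception|Error)):");
        Map<String, Integer> counts = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = p.matcher(line);
                while (m.find()) {
                    counts.merge(m.group(1), 1, Integer::sum);
                }
            }
        }
        // Print the exception types sorted by frequency, most common first.
        counts.entrySet().stream()
              .sorted((a, b) -> b.getValue() - a.getValue())
              .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
    }
}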
Most lines in a Java exception block start with a TAB character,
so to get all exceptions from a huge log file you can grep for a leading TAB.
Use the following command to grep all exceptions from a file:
$ grep -P -A 15 -B 15 "^\t" inputLogFile
You can use the command below:
grep -B 10 -A 100 "Caused by" app.log
This will show the Java exceptions logged in the application log file (note that Java prints "Caused by", not "Caused By"). You can change the search string to find specific exceptions.
https://www.techiedelight.com/java-program-search-exceptions-huge-log-file-on-server/
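In the same spirit, here is a minimal Java sketch of such a program (the CONTEXT size and the trace-detection heuristics are assumptions). It buffers a few lines of context, flushes them when a stack-trace line appears, and keeps printing a few lines after the trace ends:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayDeque;
import java.util.Deque;

// Prints every stack trace in a log file together with a few lines of
// surrounding context, separating the blocks with "--" markers.
public class StackTraceExtractor {
    private static final int CONTEXT = 3; // assumed context size, before and after

    public static void main(String[] args) throws Exception {
        Deque<String> before = new ArrayDeque<>();
        boolean inTrace = false;
        int afterRemaining = 0;
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Heuristic: "\tat ...", "Caused by: ..." and "... N more"
                // lines belong to a stack trace.
                boolean traceLine = line.startsWith("\tat ")
                        || line.startsWith("Caused by:")
                        || line.matches("\\s*\\.\\.\\. \\d+ more");
                if (traceLine) {
                    if (!inTrace) {
                        // Flush the buffered context (including the
                        // exception message line) before the trace.
                        for (String b : before) System.out.println(b);
                        before.clear();
                        inTrace = true;
                    }
                    System.out.println(line);
                    afterRemaining = CONTEXT;
                } else if (inTrace) {
                    // Print a few lines after the trace, then close the block.
                    System.out.println(line);
                    if (--afterRemaining == 0) {
                        System.out.println("--");
                        inTrace = false;
                    }
                } else {
                    // Keep a sliding window of potential "before" context.
                    before.addLast(line);
                    if (before.size() > CONTEXT) before.removeFirst();
                }
            }
        }
    }
}

Run it as, e.g., java StackTraceExtractor server.log.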
You can try something like this:
grep -B20 -A20 '^16\.02\.2012 16:04:34' /path/to/log/file
where
-B20 will display 20 lines before the line where the match was found, and
-A20 will display 20 lines after the line where the match was found.
