How to enlarge the data column in SonarQube?

I'm trying to check my source code with cppcheck and SonarQube.
When I run sonar-runner, I get the error below:
SonarQube Runner 2.4
Java 1.6.0_33 Sun Microsystems Inc. (64-bit)
Linux 3.11.0-26-generic amd64
INFO: Error stacktraces are turned on.
INFO: Runner configuration file: /var/lib/jenkins/sonarqube/sonar-runner-2.4/conf/sonar-runner.properties
INFO: Project configuration file: /var/lib/jenkins/jobs/MIP35.KT.Centrex.Branch/workspace/hudson_mvmw/sonar-project.properties
INFO: Default locale: "ko_KR", source code encoding: "UTF-8"
INFO: Work directory: /data/jenkins/jobs/MIP35.KT.Centrex.Branch/workspace/hudson_mvmw/./.sonar
INFO: SonarQube Server 4.5
16:23:56.070 INFO - Load global referentials...
16:23:56.152 INFO - Load global referentials done: 84 ms
16:23:56.158 INFO - User cache: /var/lib/jenkins/.sonar/cache
16:23:56.164 INFO - Install plugins
16:23:56.273 INFO - Install JDBC driver
16:23:56.278 INFO - Create JDBC datasource for jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
16:23:57.156 INFO - Initializing Hibernate
16:23:57.990 INFO - Load project referentials...
16:23:58.522 INFO - Load project referentials done: 532 ms
16:23:58.522 INFO - Load project settings
16:23:58.788 INFO - Loading technical debt model...
16:23:58.809 INFO - Loading technical debt model done: 21 ms
16:23:58.811 INFO - Apply project exclusions
16:23:58.962 INFO - ------------- Scan mvmw for KT centrex at branch
16:23:58.968 INFO - Load module settings
16:23:59.939 INFO - Language is forced to c++
16:23:59.940 INFO - Loading rules...
16:24:00.558 INFO - Loading rules done: 618 ms
16:24:00.576 INFO - Configure Maven plugins
16:24:00.660 INFO - No quality gate is configured.
16:24:00.759 INFO - Base dir: /data/jenkins/jobs/MIP35.KT.Centrex.Branch/workspace/hudson_mvmw/.
16:24:00.759 INFO - Working dir: /data/jenkins/jobs/MIP35.KT.Centrex.Branch/workspace/hudson_mvmw/./.sonar
16:24:00.760 INFO - Source paths: moimstone
16:24:00.760 INFO - Source encoding: UTF-8, default locale: ko_KR
16:24:00.760 INFO - Index files
16:24:20.825 INFO - 13185 files indexed
16:26:35.895 WARN - SQL Error: 1406, SQLState: 22001
16:26:35.895 ERROR - Data truncation: Data too long for column 'data' at row 1
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
Total time: 2:40.236s
Final Memory: 27M/1765M
INFO: ------------------------------------------------------------------------
ERROR: Error during Sonar runner execution
org.sonar.runner.impl.RunnerException: Unable to execute Sonar
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:91)
at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69)
at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50)
at org.sonar.runner.api.EmbeddedRunner.doExecute(EmbeddedRunner.java:102)
at org.sonar.runner.api.Runner.execute(Runner.java:100)
at org.sonar.runner.Main.executeTask(Main.java:70)
at org.sonar.runner.Main.execute(Main.java:59)
at org.sonar.runner.Main.main(Main.java:53)
Caused by: org.sonar.api.utils.SonarException: Unable to read and import the source file : '/data/jenkins/jobs/MIP35.KT.Centrex.Branch/workspace/hudson_mvmw/moimstone/mgrs/mUIMgr/gui/resource/wideBasicStyle/320Wx240H/imageMerged.c' with the charset : 'UTF-8'.
at org.sonar.batch.scan.filesystem.ComponentIndexer.importSources(ComponentIndexer.java:96)
at org.sonar.batch.scan.filesystem.ComponentIndexer.execute(ComponentIndexer.java:79)
at org.sonar.batch.scan.filesystem.DefaultModuleFileSystem.index(DefaultModuleFileSystem.java:245)
at org.sonar.batch.phases.PhaseExecutor.execute(PhaseExecutor.java:111)
at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:194)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:233)
at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:228)
at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:221)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.scan.ScanTask.scan(ScanTask.java:64)
at org.sonar.batch.scan.ScanTask.execute(ScanTask.java:51)
at org.sonar.batch.bootstrap.TaskContainer.doAfterStart(TaskContainer.java:125)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.bootstrap.BootstrapContainer.executeTask(BootstrapContainer.java:173)
at org.sonar.batch.bootstrapper.Batch.executeTask(Batch.java:95)
at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:67)
at org.sonar.runner.batch.IsolatedLauncher.execute(IsolatedLauncher.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:87)
... 9 more
Caused by: javax.persistence.PersistenceException: Unable to persist : SnapshotSource[snapshot_id=53035,data=#if defined(__cplusplus)
#pragma hdrstop
#endif
#include "Prj_pcx2_resource.h"
#if defined(__cplusplus)
#pragma package(smart_init)
#endif
const rgb24_type Prj_Bg_call_ColorTable[59] PCX2_SEGMENT =
{
{0xFF,0xFF,0xFF}, {0xFE,0xFE,0xFE}, {0xE7,0xE7,0xE7}, {0xC7,0xC7,0xC7}, {0x9B,0x9B,0x9B}, {0xFD,0xFD,0xFD}, {0xCF,0xCF,0xCF}, {0xA8,0xA8,0xA8}, {0xBC,0xBC,0xBC}, {0xD6,0xD6,0xD6},
{0xDC,0xDC,0xDC}, {0xCE,0xCE,0xCE}, {0xB5,0xB5,0xB5}, {0xD0,0xD0,0xD0}, {0xE1,0xE1,0xE1}, {0xA7,0xA7,0xA7}, {0xFA,0xFA,0xFA}, {0xBE,0xBE,0xBE}, {0xBB,0xBB,0xBB}, {0xF3,0xF3,0xF3},
{0x9A,0x9A,0x9A}, {0xEC,0xEC,0xEC}, {0xE9,0xE9,0xE9}, {0x99,0x99,0x99}, {0x98,0x98,0x98}, {0x97,0x97,0x97}, {0x96,0x96,0x96}, {0x95,0x95,0x95}, {0x94,0x94,0x94}, {0x93,0x93,0x93},
{0x92,0x92,0x92}, {0x91,0x91,0x91}, {0x90,0x90,0x90}, {0x8F,0x8F,0x8F}, {0x8E,0x8E,0x8E}, {0x8D,0x8D,0x8D}, {0x8C,0x8C,0x8C}, {0x8B,0x8B,0x8B}, {0x8A,0x8A,0x8A}, {0x89,0x89,0x89},
{0x88,0x88,0x88}, {0x87,0x87,0x87...]
at org.sonar.jpa.session.JpaDatabaseSession.internalSave(JpaDatabaseSession.java:136)
at org.sonar.jpa.session.JpaDatabaseSession.save(JpaDatabaseSession.java:103)
at org.sonar.batch.index.SourcePersister.saveSource(SourcePersister.java:47)
at org.sonar.batch.index.DefaultPersistenceManager.setSource(DefaultPersistenceManager.java:68)
at org.sonar.batch.index.DefaultIndex.setSource(DefaultIndex.java:467)
at org.sonar.batch.scan.filesystem.ComponentIndexer.importSources(ComponentIndexer.java:93)
... 34 more
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.DataException: could not insert: [org.sonar.api.database.model.SnapshotSource]
at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:614)
at org.hibernate.ejb.AbstractEntityManagerImpl.persist(AbstractEntityManagerImpl.java:226)
at org.sonar.jpa.session.JpaDatabaseSession.internalSave(JpaDatabaseSession.java:130)
... 39 more
Caused by: org.hibernate.exception.DataException: could not insert: [org.sonar.api.database.model.SnapshotSource]
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:100)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:64)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2176)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2656)
at org.hibernate.action.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:71)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
at org.hibernate.event.def.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:321)
at org.hibernate.event.def.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:204)
at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:130)
at org.hibernate.ejb.event.EJB3PersistEventListener.saveWithGeneratedId(EJB3PersistEventListener.java:49)
at org.hibernate.event.def.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:154)
at org.hibernate.event.def.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:110)
at org.hibernate.event.def.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:61)
at org.hibernate.impl.SessionImpl.firePersist(SessionImpl.java:646)
at org.hibernate.impl.SessionImpl.persist(SessionImpl.java:620)
at org.hibernate.impl.SessionImpl.persist(SessionImpl.java:624)
at org.hibernate.ejb.AbstractEntityManagerImpl.persist(AbstractEntityManagerImpl.java:220)
... 40 more
Caused by: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'data' at row 1
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4235)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2617)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2825)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2156)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2459)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2376)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2360)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.hibernate.id.IdentityGenerator$GetGeneratedKeysDelegate.executeAndExtract(IdentityGenerator.java:94)
at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:57)
... 55 more
ERROR:
ERROR: Re-run SonarQube Runner using the -X switch to enable full debug logging.
I have a huge source file that contains image data; it's over 100 megabytes.
How can I enlarge the data column? Is there a setting for it?

There's no point in analyzing such a file; SonarQube won't give you useful information about it. The same is true for any other file like this one.
The solution is to exclude those image data files using the standard exclusion mechanism provided by SonarQube.
For instance, I would do something like:
sonar.exclusions=**/*imageMerged*
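For context, here is a minimal sketch of where such an exclusion would live. The surrounding keys are the usual sonar-project.properties settings and the pattern itself is only an example, assuming all the generated image-data sources contain "imageMerged" in their file names:

# sonar-project.properties, next to the other analysis settings
sonar.sources=moimstone
# never index or persist the generated image-data sources
sonar.exclusions=**/*imageMerged*.c

The same pattern can also be set server-side in the project's exclusion settings, so it survives changes to the checked-in properties file.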

Related

Issues in creating J2C Java beans using the Batch Import Utility in WAS 8.5

I'm facing issues creating J2C Java beans using the Batch Import Utility.
In my project I have a custom Ant build file that invokes the ImportBatch.bat file of the WSAD 5.1 plugin. In WAS 5.1 it works fine, but in WAS 8.5 with Rational Application Developer 9.5 the same utility throws a NullPointerException.
From my analysis, WAS 5.1 has the "com.ibm.etools.ctc.import.batch_5.1.1" plugin, which performs this task. I searched for it in WAS 8.5 and found that it has been replaced by the "com.ibm.adapter.j2c.command_6.2.700.v20150320_0049" plugin, which contains the same importBatch.bat file.
I also had to change the importBatch file to use the current RAD 9.5 Equinox launcher JAR, because RAD 9.5 has no startup.jar.
The original entry in the RAD 9.5 importBatch.bat file:
"%eclipse_root%\jre\bin\java" -Xmx256M -verify -Dimport.batch.cp="%currentCodepage%" -cp "%eclipse_root%\startup.jar"
org.eclipse.core.launcher.Main -clean -data "%workspace%" -application com.ibm.adapter.j2c.command.BatchImport -file=%import_file% -style=%generation_style%
The changes I've made:
"%eclipse_root%\jdk\jre\bin\java" -Xmx256M -verify -Dimport.batch.cp="%currentCodepage%" -cp "%eclipse_root%\plugins\org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar" org.eclipse.equinox.launcher.Main -clean -data "%workspace%" -application
com.ibm.adapter.j2c.command.BatchImport -file=%import_file% -style=%generation_style%
I've gone through the IBM Knowledge Center article on this topic but still have no success:
http://www.ibm.com/support/knowledgecenter/SS4JCV_7.5.5/com.ibm.etools.j2c.doc/topics/tusingbatchimporter.html
Please have a look at the exception that I'm getting in the workspace logs.
!SESSION 2016-08-20 14:07:55.714 -----------------------------------------------
eclipse.buildId=unknown
java.fullversion=JRE 1.8.0 IBM J9 2.8 Windows 7 amd64-64 Compressed References 20150630_255633 (JIT enabled, AOT enabled)
J9VM - R28_jvm.28_20150630_1742_B255633
JIT - tr.r14.java_20150625_95081.01
GC - R28_jvm.28_20150630_1742_B255633_CMPRSS
J9CL - 20150630_255633
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_GB
!ENTRY org.eclipse.osgi 4 0 2016-08-20 14:07:57.403
!MESSAGE The -clean (osgi.clean) option was not successful. Unable to clean the storage area: C:\Program Files\IBM\SDP
\configuration\org.eclipse.osgi
!ENTRY org.eclipse.equinox.registry 2 0 2016-08-20 14:10:05.082
!MESSAGE The extensions and extension-points from the bundle "org.eclipse.emf.commonj.sdo" are ignored. The bundle is not
marked as singleton.
!ENTRY org.eclipse.core.resources 2 10035 2016-08-20 14:10:54.180
!MESSAGE The workspace exited with unsaved changes in the previous session; refreshing workspace to recover changes.
!ENTRY org.eclipse.osgi 4 0 2016-08-20 14:10:55.789
!MESSAGE Application error
!STACK 1
java.lang.NullPointerException
at com.ibm.adapter.j2c.command.internal.batchimport.BatchImportApplication.copyTestBucket(BatchImportApplication.java:140)
at com.ibm.adapter.j2c.command.internal.batchimport.BatchImportApplication.run(BatchImportApplication.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at org.eclipse.equinox.internal.app.EclipseAppContainer.callMethodWithException(EclipseAppContainer.java:587)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:198)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:648)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:603)
at org.eclipse.equinox.launcher.Main.run(Main.java:1465)
at org.eclipse.equinox.launcher.Main.main(Main.java:1438)
!ENTRY com.ibm.etools.references 4 0 2016-08-20 14:10:56.147
!MESSAGE Framework stop [EXCEPTION] Exception during shutdown. Indexer will be rebuilt on next startup.
!STACK 0
java.lang.NullPointerException
at com.ibm.etools.references.internal.management.InternalReferenceManager.hasStar(InternalReferenceManager.java:1394)
at com.ibm.etools.references.internal.management.InternalReferenceManager$ShutdownCode.call
(InternalReferenceManager.java:175)
at com.ibm.etools.references.internal.management.InternalReferenceManager$ShutdownCode.call
(InternalReferenceManager.java:1)
at java.util.concurrent.FutureTask.run(FutureTask.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:618)
at java.lang.Thread.run(Thread.java:785)
at com.ibm.etools.references.internal.ReferenceThreadFactrory$ReferencesThread.run(ReferenceThreadFactrory.java:37)
!ENTRY com.ibm.etools.references 4 0 2016-08-20 14:10:56.153
!MESSAGE [SCR] Error while attempting to deactivate instance of component Component[
name = Reference Manager
activate = activate
deactivate = deactivate
modified =
configuration-policy = optional
factory = null
autoenable = true
immediate = true
implementation = com.ibm.etools.references.management.ReferenceManager
state = Disabled
properties =
serviceFactory = false
serviceInterface = [com.ibm.etools.references.management.ReferenceManager]
references = {
Reference[name = IWorkspace, interface = org.eclipse.core.resources.IWorkspace, policy = static, cardinality =
1..1, target = null, bind = null, unbind = null]
Reference[name = IPreferencesService, interface = org.eclipse.core.runtime.preferences.IPreferencesService, policy
= static, cardinality = 1..1, target = null, bind = null, unbind = null]
Reference[name = ThreadSupport, interface = com.ibm.etools.references.internal.ThreadSupport, policy = static,
cardinality = 1..1, target = null, bind = addThreadSupport, unbind = removeThreadSupport]
Reference[name = EventAdmin, interface = org.osgi.service.event.EventAdmin, policy = static, cardinality = 1..1,
target = null, bind = addEventAdmin, unbind = removeEventAdmin]
}
located in bundle = com.ibm.etools.references_1.2.400.v20150320_0049 [732]
]
!STACK 0
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at org.eclipse.equinox.internal.ds.model.ServiceComponent.deactivate(ServiceComponent.java:363)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.deactivate(ServiceComponentProp.java:161)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:387)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:102)
at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:366)
at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:306)
at org.eclipse.equinox.internal.ds.Resolver.disposeComponentConfigs(Resolver.java:724)
at org.eclipse.equinox.internal.ds.Resolver.disableComponents(Resolver.java:700)
at org.eclipse.equinox.internal.ds.SCRManager.stoppingBundle(SCRManager.java:554)
at org.eclipse.equinox.internal.ds.SCRManager.bundleChanged(SCRManager.java:233)
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:902)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEventPrivileged(EquinoxEventPublisher.java:165)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:75)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:67)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor.publishModuleEvent(EquinoxContainerAdaptor.java:102)
at org.eclipse.osgi.container.Module.publishEvent(Module.java:466)
at org.eclipse.osgi.container.Module.doStop(Module.java:624)
at org.eclipse.osgi.container.Module.stop(Module.java:488)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.decStartLevel(ModuleContainer.java:1623)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1542)
at org.eclipse.osgi.container.SystemModule.stopWorker(SystemModule.java:248)
at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule.stopWorker(EquinoxBundle.java:145)
at org.eclipse.osgi.container.Module.doStop(Module.java:626)
at org.eclipse.osgi.container.Module.stop(Module.java:488)
at org.eclipse.osgi.container.SystemModule.stop(SystemModule.java:186)
at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule$1.run(EquinoxBundle.java:160)
at java.lang.Thread.run(Thread.java:785)
Caused by: java.lang.NullPointerException
at com.ibm.etools.references.internal.management.InternalReferenceManager.deactivate(InternalReferenceManager.java:1153)
at com.ibm.etools.references.management.ReferenceManager.deactivate(ReferenceManager.java:321)
... 33 more
Root exception:
java.lang.NullPointerException
at com.ibm.etools.references.internal.management.InternalReferenceManager.deactivate(InternalReferenceManager.java:1153)
at com.ibm.etools.references.management.ReferenceManager.deactivate(ReferenceManager.java:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at org.eclipse.equinox.internal.ds.model.ServiceComponent.deactivate(ServiceComponent.java:363)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.deactivate(ServiceComponentProp.java:161)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:387)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:102)
at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:366)
at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:306)
at org.eclipse.equinox.internal.ds.Resolver.disposeComponentConfigs(Resolver.java:724)
at org.eclipse.equinox.internal.ds.Resolver.disableComponents(Resolver.java:700)
at org.eclipse.equinox.internal.ds.SCRManager.stoppingBundle(SCRManager.java:554)
at org.eclipse.equinox.internal.ds.SCRManager.bundleChanged(SCRManager.java:233)
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:902)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEventPrivileged(EquinoxEventPublisher.java:165)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:75)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:67)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor.publishModuleEvent(EquinoxContainerAdaptor.java:102)
at org.eclipse.osgi.container.Module.publishEvent(Module.java:466)
at org.eclipse.osgi.container.Module.doStop(Module.java:624)
at org.eclipse.osgi.container.Module.stop(Module.java:488)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.decStartLevel(ModuleContainer.java:1623)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1542)
at org.eclipse.osgi.container.SystemModule.stopWorker(SystemModule.java:248)
at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule.stopWorker(EquinoxBundle.java:145)
at org.eclipse.osgi.container.Module.doStop(Module.java:626)
at org.eclipse.osgi.container.Module.stop(Module.java:488)
at org.eclipse.osgi.container.SystemModule.stop(SystemModule.java:186)
at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule$1.run(EquinoxBundle.java:160)
at java.lang.Thread.run(Thread.java:785)
This looks like a problem for IBM support; I suggest you raise a PMR via the IBM support process to get a solution to this issue.

DSE Cassandra startup error - org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager

I am trying to start DSE Search for the first time after running "tar xvf dse-4.7.8-bin.tar.gz". Below is the error:
[spark@osboxes dse-4.7.8]$ bin/dse cassandra -s
Tomcat: Logging to /home/spark/tomcat
[spark@osboxes dse-4.7.8]$ INFO 21:59:57 Loading DSE module
INFO 21:59:57 Loading settings from file:/usr/mware/dse-4.7.8/resources/dse/conf/dse.yaml
INFO 21:59:58 Load of settings is done.
INFO 21:59:58 CQL slow log is not enabled
INFO 21:59:58 CQL system info tables are not enabled
INFO 21:59:58 Resource level latency tracking is not enabled
INFO 21:59:58 Database summary stats are not enabled
INFO 21:59:58 Cluster summary stats are not enabled
INFO 21:59:58 Histogram data tables are not enabled
INFO 21:59:58 User level latency tracking is not enabled
INFO 21:59:58 Spark cluster info tables are not enabled
INFO 21:59:58 Loading settings from file:/usr/mware/dse-4.7.8/resources/cassandra/conf/cassandra.yaml
INFO 21:59:58 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=64; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/var/lib/cassandra/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/var/lib/cassandra/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=com.datastax.bdp.snitch.DseSimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=dc; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/var/lib/cassandra/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO 21:59:58 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 21:59:58 Global memtable on-heap threshold is enabled at 545MB
INFO 21:59:58 Global memtable off-heap threshold is enabled at 545MB
INFO 21:59:58 Detected search service is enabled, setting my workload to Search
INFO 21:59:58 Detected search service is enabled, setting my DC to Solr
INFO 21:59:58 Initialized DseDelegateSnitch with workload Search, delegating to com.datastax.bdp.snitch.DseSimpleSnitch
INFO 21:59:58 Loading settings from file:/usr/mware/dse-4.7.8/resources/cassandra/conf/cassandra.yaml
INFO 21:59:58 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=64; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/var/lib/cassandra/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/var/lib/cassandra/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=com.datastax.bdp.snitch.DseSimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=dc; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/var/lib/cassandra/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at com.datastax.bdp.DseCoreModule.bindSecretManager(DseCoreModule.java:74)
at com.datastax.bdp.DseCoreModule.configure(DseCoreModule.java:60)
at com.google.inject.AbstractModule.configure(AbstractModule.java:59)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:223)
at com.google.inject.AbstractModule.install(AbstractModule.java:118)
at com.datastax.bdp.server.AbstractDseModule.configure(AbstractDseModule.java:28)
at com.datastax.bdp.DseModule.configure(DseModule.java:35)
at com.google.inject.AbstractModule.configure(AbstractModule.java:59)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:223)
at com.google.inject.spi.Elements.getElements(Elements.java:101)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:133)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:103)
at com.google.inject.Guice.createInjector(Guice.java:95)
at com.google.inject.Guice.createInjector(Guice.java:72)
at com.google.inject.Guice.createInjector(Guice.java:62)
at com.datastax.bdp.DseModule.main(DseModule.java:71)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 28 more
There has been no change to any configuration. I have only set "HADOOP_CONF_DIR=/usr/mware/dse-4.7.8/resources/hadoop/conf" in the .bashrc file (and of course I ran . ~/.bashrc afterwards). Am I missing something here? "bin/dse cassandra" fails with the same error. The OS is CentOS 7.2.1511, 64-bit.

An Unexpected Server error has occurred in JDeveloper

I use Oracle WebLogic Server and JDeveloper 12c (installed with Oracle Fusion SOA Suite, which includes JDeveloper) to create some web services needed in my project.
JDeveloper worked for some days, but now when I try to test a simple "hello world" web service it shows a popup:
*An Unexpected Server error has occurred in JDeveloper*
and this is the message returned in the console:
Nov 11, 2015 11:36:06 AM oracle.bali.xml.addin.XMLSourceNode getXmlContext
SEVERE: Error creating XmlContext for oracle.ide.Context[{Context.EVENT=java.awt.event.MouseEvent[MOUSE_PRESSED,(195,196),absolute(247,326),button=3,modifiers=Meta+Button3,extModifiers=Button3,clickCount=1] on nav-AppServerNavigatorManager, Context.VIEW=ApplicationServerNavigatorViewType.ApplicationServerNavigatorName, ExplorerContext.TNODES=[Loracle.ide.explorer.TNode;#1893566a, NavigatorWindow=ApplicationServerNavigatorViewType.ApplicationServerNavigatorName, Context.SELECTION=[Loracle.ide.model.Element;#710e55c0, Context.WORKSPACE=HelloWorld.jws, Context.NODE=bpelprocess1_client_ep (ws), Context.PROJECT=ProjectHelloWorld.jpr}]
java.lang.IllegalStateException: XmlContext: Error acquiring text buffer for file: https://iteslab-OptiPlex-7020:7102/soa-infra/services/default/ProjectHelloWorld!1.0/bpelprocess1_client_ep?WSDL
at oracle.bali.xml.addin.XMLSourceNode._getXmlContext(XMLSourceNode.java:1695)
at oracle.bali.xml.addin.XMLSourceNode.getXmlContext(XMLSourceNode.java:192)
at oracle.bali.xml.gui.jdev.JDevXmlContext.getXmlContext(JDevXmlContext.java:224)
at oracle.bali.xml.addin.CheckSyntaxController._getDocumentElement(CheckSyntaxController.java:637)
at oracle.bali.xml.addin.CheckSyntaxController.update(CheckSyntaxController.java:276)
at oracle.ide.controller.IdeAction$ControllerDelegatingController.update(IdeAction.java:1487)
at oracle.ide.controller.IdeAction.updateAction(IdeAction.java:783)
at oracle.ide.controller.MenuManager.updateMenuItemAction(MenuManager.java:1257)
at oracle.ide.controller.MenuManager.updatePopupMenuItems(MenuManager.java:1228)
at oracle.ide.controller.ContextMenu.prepareShow(ContextMenu.java:329)
at oracle.ide.controller.ContextMenu.show(ContextMenu.java:290)
at oracle.ideimpl.explorer.BaseTreeExplorer.tryPopup(BaseTreeExplorer.java:2302)
at oracle.ideimpl.explorer.BaseTreeExplorer.mousePressed(BaseTreeExplorer.java:2234)
at oracle.ideimpl.explorer.CustomTree.processMouseEvent(CustomTree.java:232)
at java.awt.Component.processEvent(Component.java:6281)
at java.awt.Container.processEvent(Container.java:2229)
at java.awt.Component.dispatchEventImpl(Component.java:4872)
at java.awt.Container.dispatchEventImpl(Container.java:2287)
at java.awt.Component.dispatchEvent(Component.java:4698)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4832)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4489)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4422)
at java.awt.Container.dispatchEventImpl(Container.java:2273)
at java.awt.Window.dispatchEventImpl(Window.java:2719)
at java.awt.Component.dispatchEvent(Component.java:4698)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:747)
at java.awt.EventQueue.access$300(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:706)
at java.awt.EventQueue$3.run(EventQueue.java:704)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)
at java.awt.EventQueue$4.run(EventQueue.java:720)
at java.awt.EventQueue$4.run(EventQueue.java:718)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:717)
at oracle.javatools.internal.ui.EventQueueWrapper._dispatchEvent(EventQueueWrapper.java:169)
at oracle.javatools.internal.ui.EventQueueWrapper.dispatchEvent(EventQueueWrapper.java:151)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:91)
Caused by: java.io.IOException
at oracle.ide.net.HttpURLFileSystemHelper.openInputStream(HttpURLFileSystemHelper.java:476)
at oracle.ide.net.URLFileSystemHelperDecorator.openInputStream(URLFileSystemHelperDecorator.java:291)
at oracle.ide.net.URLFileSystemHelperDecorator.openInputStream(URLFileSystemHelperDecorator.java:291)
at oracle.jdevimpl.webservices.ide.InterruptableHttpURLFileSystemHelperDecorator.openInputStream(InterruptableHttpURLFileSystemHelperDecorator.java:134)
at oracle.ideimpl.net.LazyURLFileSystemHelperDecorator.openInputStream(LazyURLFileSystemHelperDecorator.java:364)
at oracle.ide.net.URLFileSystem.openInputStream(URLFileSystem.java:1368)
at oracle.ide.net.URLFileSystemHelper.createReader(URLFileSystemHelper.java:1536)
at oracle.ide.net.URLFileSystemHelperDecorator.createReader(URLFileSystemHelperDecorator.java:343)
at oracle.ide.net.URLFileSystemHelperDecorator.createReader(URLFileSystemHelperDecorator.java:343)
at oracle.ideimpl.net.LazyURLFileSystemHelperDecorator.createReader(LazyURLFileSystemHelperDecorator.java:426)
at oracle.ide.net.URLFileSystem.createReader(URLFileSystem.java:1707)
at oracle.bali.xml.addin.XMLSourceNode.createReader(XMLSourceNode.java:1304)
at oracle.ide.model.TextNode.loadTextBuffer(TextNode.java:302)
at oracle.ide.model.TextNode.openImpl(TextNode.java:537)
at oracle.ide.model.Node.open(Node.java:1045)
at oracle.ide.model.Node.open(Node.java:992)
at oracle.ide.model.TextNode.acquireTextBufferOrThrow(TextNode.java:812)
at oracle.bali.xml.addin.XMLSourceNode._getXmlContext(XMLSourceNode.java:1687)
... 44 more
How do I resolve this? Thanks.

Apache Thrift transport TTransportException

Hive version: 0.13.1
Pig version: 0.13.0
I was trying to read the Hive tables using Pig with the command below:
grunt> DATA = LOAD 'dev.profile' USING org.apache.hcatalog.pig.HCatLoader();
I get the following log output:
2014-07-16 22:44:58,986 [main] WARN org.apache.hadoop.hive.conf.HiveConf - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2014-07-16 22:44:59,037 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://localhost:10000
2014-07-16 22:44:59,057 [main] INFO hive.metastore - Connected to metastore.
2014-07-16 22:45:02,019 [main] WARN org.apache.hadoop.hive.conf.HiveConf - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2014-07-16 22:45:02,166 [main] WARN org.apache.hadoop.hive.conf.HiveConf - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
When I describe the relation, the result comes back as expected:
grunt> describe DATA
2014-07-16 22:46:42,189 [main] WARN org.apache.hadoop.hive.conf.HiveConf - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
DATA: {name: chararray,age: int,salary: int}
But when I dump the data, I get a SocketTimeoutException:
2014-07-16 22:47:25,146 [main] ERROR hive.log - Got exception: org.apache.thrift.transport.TTransportException java.net.SocketTimeoutException: Read timed out
org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_databases(ThriftHiveMetastore.java:600)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_databases(ThriftHiveMetastore.java:587)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:826)
at org.apache.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:276)
at org.apache.hcatalog.common.HiveClientCache.get(HiveClientCache.java:146)
at org.apache.hcatalog.common.HCatUtil.getHiveClient(HCatUtil.java:548)
at org.apache.hcatalog.pig.PigHCatUtil.getHiveMetaClient(PigHCatUtil.java:158)
at org.apache.hcatalog.pig.PigHCatUtil.getTable(PigHCatUtil.java:200)
at org.apache.hcatalog.pig.HCatLoader.getSchema(HCatLoader.java:195)
at org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:175)
at org.apache.pig.newplan.logical.relational.LOLoad.<init>(LOLoad.java:89)
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:885)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3568)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1625)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:188)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1712)
at org.apache.pig.PigServer$Graph.access$000(PigServer.java:1420)
at org.apache.pig.PigServer.storeEx(PigServer.java:1004)
at org.apache.pig.PigServer.store(PigServer.java:974)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:542)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 40 more
2014-07-16 22:47:25,148 [main] ERROR hive.log - Converting exception to MetaException
2014-07-16 22:47:25,151 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://localhost:10000
2014-07-16 22:47:25,152 [main] INFO hive.metastore - Connected to metastore.
2014-07-16 22:47:45,173 [main] ERROR org.apache.pig.PigServer - exception during parsing: Error during parsing. Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
Failed to parse: Can not retrieve schema from loader org.apache.hcatalog.pig.HCatLoader#1342464f
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:198)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1712)
at org.apache.pig.PigServer$Graph.access$000(PigServer.java:1420)
at org.apache.pig.PigServer.storeEx(PigServer.java:1004)
at org.apache.pig.PigServer.store(PigServer.java:974)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:542)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.RuntimeException: Can not retrieve schema from loader org.apache.hcatalog.pig.HCatLoader#1342464f
at org.apache.pig.newplan.logical.relational.LOLoad.<init>(LOLoad.java:91)
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:885)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3568)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1625)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:188)
... 17 more
Caused by: org.apache.pig.impl.logicalLayer.FrontendException: ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
at org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:179)
at org.apache.pig.newplan.logical.relational.LOLoad.<init>(LOLoad.java:89)
... 24 more
Caused by: java.io.IOException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.hcatalog.pig.PigHCatUtil.getTable(PigHCatUtil.java:205)
at org.apache.hcatalog.pig.HCatLoader.getSchema(HCatLoader.java:195)
at org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:175)
... 25 more
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table(ThriftHiveMetastore.java:1036)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table(ThriftHiveMetastore.java:1022)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:997)
at org.apache.hcatalog.common.HCatUtil.getTable(HCatUtil.java:194)
at org.apache.hcatalog.pig.PigHCatUtil.getTable(PigHCatUtil.java:201)
... 27 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 37 more
2014-07-16 22:47:45,176 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
Even though I am able to connect to the metastore, I am not able to retrieve the data. What could be the reason for the read failure?
At times the process also fails with java.lang.OutOfMemoryError: Java heap space.
Any help would be greatly appreciated.
Edit hive-site.xml and replace hive.metastore.ds.retry with hive.hmshandler.retry:
vim /usr/local/Cellar/hive/0.13.1/libexec/conf/hive-site.xml
:%s/hive.metastore.ds.retry/hive.hmshandler.retry/g
:wq
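After the substitution, the renamed entries in hive-site.xml would look roughly like the following. The property names come from the deprecation warning quoted above; the values shown are only illustrative and are not taken from the original file:

<property>
  <name>hive.hmshandler.retry.attempts</name>
  <value>5</value>
</property>
<property>
  <name>hive.hmshandler.retry.interval</name>
  <value>1000</value>
</property>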

Pig: error while unioning the result of a mapreduce job

Pig script
base = load 'u.base' as (uid:long, gid:long, pref:double);
sim1 = mapreduce 'mahout-core-0.7-job.jar'
store base into 'input'
load 'output' as (gid1:long, gid2:long, sim:double)
`org.apache.mahout.cf.taste.hadoop.similarity.item.ItemSimilarityJob -i input -o output -s SIMILARITY_EUCLIDEAN_DISTANCE`;
sim2 = foreach sim1 generate gid2 as gid1, gid1 as gid2, sim;
sim3 = union sim1,sim2;
dump sim3;
Pig output
2013-03-28 09:21:32,564 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNION,NATIVE
2013-03-28 09:21:32,676 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2013-03-28 09:21:32,699 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 4
2013-03-28 09:21:32,702 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2127: Internal Error: Cloning of plan failed for optimization.
Details at logfile: /home/chenwl/logs/pig_1364433685680.log
Pig log
Pig Stack Trace
---------------
ERROR 2127: Internal Error: Cloning of plan failed for optimization.
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias sim3
at org.apache.pig.PigServer.openIterator(PigServer.java:836)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:696)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:320)
at org.apache.pig.tools.grunt.GruntParser.loadScript(GruntParser.java:531)
at org.apache.pig.tools.grunt.GruntParser.processScript(GruntParser.java:480)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.Script(PigScriptParser.java:804)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:449)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:194)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:538)
at org.apache.pig.Main.main(Main.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias sim3
at org.apache.pig.PigServer.storeEx(PigServer.java:935)
at org.apache.pig.PigServer.store(PigServer.java:898)
at org.apache.pig.PigServer.openIterator(PigServer.java:811)
... 16 more
Caused by: org.apache.pig.impl.plan.optimizer.OptimizerException: ERROR 2127: Internal Error: Cloning of plan failed for optimization.
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer.mergeDiamondMROper(MultiQueryOptimizer.java:304)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer.visitMROp(MultiQueryOptimizer.java:219)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceOper.visit(MapReduceOper.java:273)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceOper.visit(MapReduceOper.java:46)
at org.apache.pig.impl.plan.ReverseDependencyOrderWalker.walk(ReverseDependencyOrderWalker.java:71)
at org.apache.pig.impl.plan.PlanVisitor.visit(PlanVisitor.java:46)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer.visit(MultiQueryOptimizer.java:94)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.compile(MapReduceLauncher.java:617)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:146)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1264)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1249)
at org.apache.pig.PigServer.storeEx(PigServer.java:931)
... 18 more
Caused by: java.lang.CloneNotSupportedException: Unable to find clone for op 1-36: Native('hadoop jar mahout-core-0.7-job.jar org.apache.mahout.cf.taste.hadoop.similarity.item.ItemSimilarityJob -i input -o output -s SIMILARITY_EUCLIDEAN_DISTANCE ') - scope-12
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.plans.PhysicalPlan.clone(PhysicalPlan.java:273)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer.mergeDiamondMROper(MultiQueryOptimizer.java:298)
... 29 more
================================================================================
Environment
OS: ubuntu 12.04
Hadoop: 1.0.4 Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290
Pig: 0.11.0 (r1446324)
P.S.:
It works if sim1 is loaded from HDFS, e.g. sim1 = load 'sim' as (gid1:long, gid2:long, sim:double).
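Based on that observation, one possible workaround is to materialise the mapreduce result on HDFS and reload it before the foreach/union, so the multi-query optimizer never has to clone the native-job plan. This is only a sketch of what the P.S. describes, with 'sim' as an assumed intermediate path:

-- run the native Mahout job and write its output to HDFS first
store sim1 into 'sim';
-- make sure the store has completed (e.g. run it as a separate step in grunt) before loading
sim1a = load 'sim' as (gid1:long, gid2:long, sim:double);
sim2 = foreach sim1a generate gid2 as gid1, gid1 as gid2, sim;
sim3 = union sim1a, sim2;
dump sim3;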
