Getting resource string not found error while running JMeter script - jmeter

I get the error below while running my script, and execution stops automatically after some threads have run. For example, I ran 1000 threads simultaneously; 926 tests passed and then execution stopped. I have tried both JMeter 5.0 and 5.2.
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [#_samples]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [median]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [90%_line]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [95%_line]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [99%_line]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [min]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [error_%]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [throughput]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [received_kb/sec]
2020-02-22 15:08:25,341 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [sent_kb/sec]
2020-02-22 15:08:25,890 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [received_kb/sec]
2020-02-22 15:08:25,905 WARN o.a.j.u.JMeterUtils: ERROR! Resource string not found: [received_kb/sec]

It is not an error: the message has WARN severity and means that JMeter tried to look up a default value for these metrics in its properties and failed. It has no negative impact on the results. If you don't want to see these messages, you can reduce log verbosity for the JMeterUtils class by adding the following line to the log4j2.xml file (which lives in the "bin" folder of your JMeter installation):
<Logger name="org.apache.jmeter.util.JMeterUtils" level="error" />
This is actually a well-known JMeter bug, #62770, so if you upgrade to the latest JMeter 5.2.1 you should stop seeing these warnings altogether.
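For context, the Logger entry belongs inside the <Loggers> section of log4j2.xml, alongside the loggers JMeter already defines there. A minimal sketch of where the line fits (the surrounding Root logger and appender name are illustrative, not copied from your installation):

```xml
<Loggers>
  <!-- Suppress the "Resource string not found" messages (WARN and below) -->
  <Logger name="org.apache.jmeter.util.JMeterUtils" level="error" />
  <!-- Existing loggers and the Root logger stay unchanged -->
  <Root level="info">
    <AppenderRef ref="jmeter_log" />
  </Root>
</Loggers>
```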

Related

Cassandra 4.0.1 can't be started using 'cassandra -f' command on MAC OSX

After my Mac upgraded to Monterey, I had to reinstall Cassandra, going from 3.x.x to 4.0.1.
Now I can't start Cassandra 4.0.1 using the 'cassandra -f' command. I see the following warnings/errors:
WARN [main] 2022-01-19 17:11:58,324 StartupChecks.java:143 - jemalloc shared library could not be preloaded to speed up memory allocations
WARN [main] 2022-01-19 17:11:58,325 StartupChecks.java:187 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
Errors:
ERROR [SSTableBatchOpen:12] 2022-01-19 17:12:02,616 DebuggableThreadPoolExecutor.java:263 - Error in ThreadPoolExecutor
java.lang.AssertionError: Stats component is missing for sstable /usr/local/var/lib/cassandra/data/system/prepared_statements-18a9c2576a0c3841ba718cd529849fef/nb-346-big
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:461)
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:372)
at org.apache.cassandra.io.sstable.format.SSTableReader$2.run(SSTableReader.java:540)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
WARN [main] 2022-01-19 17:12:07,595 CommitLogReplayer.java:305 - Origin of 6 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,598 CommitLogReplayer.java:305 - Origin of 4 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,599 CommitLogReplayer.java:305 - Origin of 4 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,600 CommitLogReplayer.java:305 - Origin of 6 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,601 CommitLogReplayer.java:305 - Origin of 3 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,602 CommitLogReplayer.java:305 - Origin of 2 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,602 CommitLogReplayer.java:305 - Origin of 1 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,603 CommitLogReplayer.java:305 - Origin of 3 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,604 CommitLogReplayer.java:305 - Origin of 1 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,604 CommitLogReplayer.java:305 - Origin of 1 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,604 CommitLogReplayer.java:305 - Origin of 3 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,606 CommitLogReplayer.java:305 - Origin of 677 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,607 CommitLogReplayer.java:305 - Origin of 2 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,607 CommitLogReplayer.java:305 - Origin of 6 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,608 CommitLogReplayer.java:305 - Origin of 1 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [main] 2022-01-19 17:12:07,608 CommitLogReplayer.java:305 - Origin of 2 sstables is unknown or doesn't match the local node; commitLogIntervals for them were ignored
WARN [MemtableFlushWriter:1] 2022-01-19 17:12:08,879 NativeLibrary.java:318 - open(/usr/local/var/lib/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f, O_RDONLY) failed, errno (24).
WARN [MemtableFlushWriter:2] 2022-01-19 17:12:08,879 NativeLibrary.java:318 - open(/usr/local/var/lib/cassandra/data/system_schema/types-5a8b1ca866023f77a0459273d308917a, O_RDONLY) failed, errno (24).
ERROR [MemtableFlushWriter:1] 2022-01-19 17:12:08,885 LogReplica.java:108 - Failed to sync file /usr/local/var/lib/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/nb_txn_flush_fd43a060-798d-11ec-901b-455a45092af6.log
org.apache.cassandra.io.FSWriteError: java.nio.file.FileSystemException: /usr/local/var/lib/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/nb_txn_flush_fd43a060-798d-11ec-901b-455a45092af6.log: Too many open files
at org.apache.cassandra.io.util.FileUtils.write(FileUtils.java:867)
at org.apache.cassandra.io.util.FileUtils.appendAndSync(FileUtils.java:810)
at org.apache.cassandra.db.lifecycle.LogReplica.append(LogReplica.java:104)
at org.apache.cassandra.db.lifecycle.LogReplicaSet.lambda$null$5(LogReplicaSet.java:225)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:138)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:128)
at org.apache.cassandra.db.lifecycle.LogReplicaSet.append(LogReplicaSet.java:225)
at org.apache.cassandra.db.lifecycle.LogFile.addRecord(LogFile.java:363)
at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:282)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:138)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:128)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:123)
at org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:466)
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:141)
at org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:250)
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:141)
at org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1140)
at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1075)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.nio.file.FileSystemException: /usr/local/var/lib/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/nb_txn_flush_fd43a060-798d-11ec-901b-455a45092af6.log: Too many open files
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182)
at org.apache.cassandra.io.util.FileUtils.write(FileUtils.java:844)
... 21 common frames omitted
ERROR [MemtableFlushWriter:1] 2022-01-19 17:12:08,887 DefaultFSErrorHandler.java:104 - Exiting forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.FSWriteError: java.nio.file.FileSystemException: /usr/local/var/lib/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/nb_txn_flush_fd43a060-798d-11ec-901b-455a45092af6.log: Too many open files
at org.apache.cassandra.io.util.FileUtils.write(FileUtils.java:867)
at org.apache.cassandra.io.util.FileUtils.appendAndSync(FileUtils.java:810)
at org.apache.cassandra.db.lifecycle.LogReplica.append(LogReplica.java:104)
at org.apache.cassandra.db.lifecycle.LogReplicaSet.lambda$null$5(LogReplicaSet.java:225)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:138)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:128)
at org.apache.cassandra.db.lifecycle.LogReplicaSet.append(LogReplicaSet.java:225)
at org.apache.cassandra.db.lifecycle.LogFile.addRecord(LogFile.java:363)
at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:282)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:138)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:128)
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:123)
at org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:466)
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:141)
at org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:250)
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:141)
at org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1140)
at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1075)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.nio.file.FileSystemException: /usr/local/var/lib/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/nb_txn_flush_fd43a060-798d-11ec-901b-455a45092af6.log: Too many open files
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182)
at org.apache.cassandra.io.util.FileUtils.write(FileUtils.java:844)
... 21 common frames omitted
ERROR [MemtableFlushWriter:2] 2022-01-19 17:12:08,888 LogTransaction.java:304 - [nb_txn_flush_fd43a061-798d-11ec-901b-455a45092af6.log in /usr/local/var/lib/cassandra/data/system_schema/types-5a8b1ca866023f77a0459273d308917a] was not completed, trying to abort it now
Cassandra can be started using 'brew services start cassandra', or at least I see a success message after starting it that way.
But when connecting to it with cqlsh, I see the following messages:
bin % cqlsh
/usr/local/Cellar/cassandra/4.0.1/libexec/bin/cqlsh.py:460: DeprecationWarning: Legacy execution parameters will be removed in 4.0. Consider using execution profiles.
Connection error: ('Unable to connect to any servers', {'127.0.0.1:9042': ConnectionRefusedError(61, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
I spent almost two days trying Java 8 and then Java 11, and tried cassandra#3, but nothing works.
Any ideas?
The error is right here: Too many open files. You need to increase the limit on the number of open file descriptors. This can be done with the ulimit command, and you can make it permanent as described in this answer.
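As a quick, session-only sketch (the 65536 value is an arbitrary example — pick a limit appropriate for your node, and note that on macOS the hard limit may also need raising before ulimit will accept a larger value):

```shell
# Check the current soft limit on open file descriptors
ulimit -n

# Raise the soft limit for the current shell session only;
# Cassandra must be started from this same shell to inherit it
ulimit -n 65536

# Verify the new limit took effect
ulimit -n
```

This change is lost when the shell exits, which is why a permanent fix (limits.conf on Linux, launchctl on macOS) is needed for a service that starts at boot.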

GPU resource for hadoop 3.0 / yarn

I am trying to use the Hadoop 3.0 GA release with GPUs, but when I executed the shell command below, there was an error and GPU allocation did not work. Please check the log and shell command below. I suspect I have misconfigured something.
2018-01-09 15:04:49,256 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:main(355)) - Initializing ApplicationMaster
2018-01-09 15:04:49,391 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:init(514)) - Application master for app, appId=1, clustertimestamp=1515477741976, attemptId=1
2018-01-09 15:04:49,418 WARN [main] distributedshell.ApplicationMaster (ApplicationMaster.java:init(626)) - Timeline service is not enabled
2018-01-09 15:04:49,418 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(649)) - Starting ApplicationMaster
2018-01-09 15:04:49,542 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(60)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-01-09 15:04:49,623 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(659)) - Executing with tokens:
2018-01-09 15:04:49,744 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(662)) - Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1515477741976 } attemptId: 1 } keyId: 1619387150)
2018-01-09 15:04:49,801 INFO [main] client.RMProxy (RMProxy.java:newProxyInstance(133)) - Connecting to ResourceManager at /0.0.0.0:8030
2018-01-09 15:04:49,886 INFO [main] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:serviceInit(138)) - Upper bound of the thread pool size is 500
2018-01-09 15:04:49,889 WARN [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(786)) - Timeline service is not enabled
2018-01-09 15:04:50,170 INFO [main] conf.Configuration (Configuration.java:getConfResourceAsInputStream(2656)) - resource-types.xml not found
2018-01-09 15:04:50,170 INFO [main] resource.ResourceUtils (ResourceUtils.java:addResourcesFileToConf(395)) - Unable to find 'resource-types.xml'.
2018-01-09 15:04:50,183 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,185 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,185 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,185 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,187 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,187 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,188 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,188 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,188 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,188 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(717)) - Max mem capability of resources in this cluster 8192
2018-01-09 15:04:50,188 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(720)) - Max vcores capability of resources in this cluster 4
2018-01-09 15:04:50,189 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(739)) - appattempt_1515477741976_0001_000001 received 0 previous attempts' running containers on AM registration.
2018-01-09 15:04:50,202 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:setupContainerAskForRM(1311)) - Requested container ask: Capability[<memory:-1, vCores:-1>]Priority[0]AllocationRequestId[0]ExecutionTypeRequest[{Execution Type: GUARANTEED, Enforce Execution Type: false}]Resource Profile[gpu-1]
2018-01-09 15:04:50,246 WARN [AMRM Heartbeater thread] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:51,255 WARN [AMRM Heartbeater thread] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:52,273 WARN [AMRM Heartbeater thread] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:52,278 INFO [AMRM Callback Handler Thread] distributedshell.ApplicationMaster (ApplicationMaster.java:onContainersAllocated(957)) - Got response from RM for container ask, allocatedCnt=1
2018-01-09 15:04:52,278 WARN [AMRM Callback Handler Thread] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
The shell command that I executed, following the YARN-7223 ticket, is:
yarn jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
  -jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
  -shell_command /usr/local/nvidia/bin/nvidia-smi \
  -container_resource_profile gpu-1
Thanks in advance.

Could not access hive cli

I have installed Hadoop and Hive on a CentOS 7 machine, but I am unable to access the Hive CLI.
[centos#ip-10-103-2-173 hive]$ hive
OpenJDK 64-Bit Server VM warning: Using the ParNew young collector with the Serial old collector is deprecated and will likely be removed in a future release
WARNING: Use "yarn jar" to launch YARN applications.
OpenJDK 64-Bit Server VM warning: Using the ParNew young collector with the Serial old collector is deprecated and will likely be removed in a future release
17/02/27 14:17:24 WARN conf.HiveConf: HiveConf of name hive.metastore.pre-event.listeners does not exist
17/02/27 14:17:24 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
17/02/27 14:17:24 WARN conf.HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
17/02/27 14:17:24 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
17/02/27 14:17:24 WARN conf.HiveConf: HiveConf of name hive.semantic.analyzer.factory.impl does not exist
Logging initialized using configuration in file:/etc/hive/conf/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Please help. I am using PostgreSQL.

Cannot write to log - pig error in local mode

Folks,
I am new to the Hadoop ecosystem and was trying to install Pig.
I got stuck with the error below while trying to execute the tutorial script.
While installing Pig on the Hadoop cluster I did not change any permissions;
please let me know whether I need to, or whether the error below relates to that.
Any information is highly appreciated.
2016-01-08 07:19:20,655 [main] WARN org.apache.pig.Main - Cannot write to
log file: /usr/lib/pig-0.6.0/tutorial/scripts/pig_1452266360654.log
2016-01-08 07:19:20,723 [main] WARN org.apache.pig.Main - Cannot write to log file: /usr/lib/pig-0.6.0/tutorial/scripts//script1-hadoop.pig1452266360723.log
2016-01-08 07:19:20,740 [main] ERROR org.apache.pig.Main - ERROR 2999: Unexpected internal error. null
2016-01-08 07:19:20,747 [main] WARN org.apache.pig.Main - There is no log file to write to.
2016-01-08 07:19:20,747 [main] ERROR org.apache.pig.Main - java.lang.NullPointerException
at java.util.Hashtable.put(Hashtable.java:394)
at java.util.Properties.setProperty(Properties.java:143)
at org.apache.pig.Main.main(Main.java:373)

Hive error when running from hortonworks sandbox

I am following this document to test sentiment analysis. Can someone please help me out? Thanks!
[root#sandbox ~]# hive -f hiveddl.sql
15/04/12 15:43:23 WARN conf.HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
15/04/12 15:43:23 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
15/04/12 15:43:23 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
15/04/12 15:43:23 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
Logging initialized using configuration in file:/etc/hive/conf/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hive/lib/hive-jdbc-0.14.0.2.2.0.0-2041-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Added [json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar] to class path
Added resources: [json-serde-1.1.6-SNAPSHOT-jar-with-dependencies.jar]
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.serde2.objectinspector.primitive.AbstractPrimitiveJavaObjectInspector.<init>(Lorg/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils$PrimitiveTypeEntry;)V
This issue has already been reported and answered on GitHub:
GitHub issue link
