I installed ActiveMQ on my local machine and want to test it using Gatling. I created a simulation class to test my JMS setup, but I am getting the error shown below.
Here is my code:
import io.gatling.core.Predef._
import io.gatling.jms.Predef._
import scala.concurrent.duration._
import javax.jms._
import org.apache.activemq.ActiveMQConnectionFactory
class JmsTest extends Simulation{
val jmsConfig = jms
.connectionFactoryName("connectionFactory")
.url("tcp://localhost:61616")
.credentials("admin", "admin")
.contextFactory(classOf[org.apache.activemq.jndi.ActiveMQInitialContextFactory].getName)
.listenerCount(1)
val scn = scenario("JMS DSL test").repeat(1) {
exec(jms("req reply testing")
.reqreply
.queue("MyQueue")
.replyQueue("MyTopic")
.textMessage("Hello this is Naveen")
)
}
setUp(scn.inject(atOnceUsers(1)))
.protocols(jmsConfig)
}
I created a jndi.properties file with the following content.
java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url = vm://localhost
connectionFactoryNames = connectionFactory
queue.MyQueue = TestJms1
topic.MyTopic = TestJms1
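For illustration only, here is a minimal plain-JMS/JNDI sketch (the class name is hypothetical) of how those jndi.properties entries resolve through the ActiveMQ initial context factory, which is roughly what the Gatling jms protocol does when it looks up connectionFactoryName("connectionFactory"); the logical names connectionFactory and MyQueue map to the broker connection and the physical destination TestJms1:
import java.util.Properties;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JndiLookupCheck {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        // Same settings as the jndi.properties above, but pointing at the TCP broker
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        env.put(Context.PROVIDER_URL, "tcp://localhost:61616");
        env.put("connectionFactoryNames", "connectionFactory");
        env.put("queue.MyQueue", "TestJms1");

        Context ctx = new InitialContext(env);
        // "connectionFactory" and "MyQueue" are the logical JNDI names
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("connectionFactory");
        Queue queue = (Queue) ctx.lookup("MyQueue"); // physical destination TestJms1
        System.out.println("Looked up " + cf + " and " + queue);
    }
}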
The error I am getting is:
Exception in thread "main" java.lang.NoSuchMethodError: io.gatling.jms.Predef$.jms()Lio/gatling/jms/protocol/JmsProtocolBuilderBase$;
at computerdatabase.JmsTest.<init>(JmsTest.scala:14)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at io.gatling.app.Gatling$.io$gatling$app$Gatling$$$anonfun$1(Gatling.scala:41)
at io.gatling.app.Gatling.run(Gatling.scala:92)
at io.gatling.app.Gatling.runIfNecessary(Gatling.scala:75)
at io.gatling.app.Gatling.start(Gatling.scala:65)
at io.gatling.app.Gatling$.start(Gatling.scala:57)
at io.gatling.app.Gatling$.fromArgs(Gatling.scala:49)
at io.gatling.app.Gatling$.main(Gatling.scala:43)
at io.gatling.app.Gatling.main(Gatling.scala)
I am using dropwizard 1.2.4 with log4j 1.2.17. I have followed the instructions mentioned below:
https://github.com/arteam/dropwizard-nologback/
It throws the exception below during unit testing.
java.lang.NoClassDefFoundError: ch/qos/logback/core/filter/Filter
at io.dropwizard.testing.junit.ResourceTestRule.<clinit>(ResourceTestRule.java:34)
at com.vnera.restapilayer.api.resources.ApiInfoControllerTest.<clinit>(ApiInfoControllerTest.java:25)
at sun.misc.Unsafe.ensureClassInitialized(Native Method)
at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:156)
at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1088)
at java.lang.reflect.Field.getFieldAccessor(Field.java:1069)
at java.lang.reflect.Field.get(Field.java:393)
at org.junit.runners.model.FrameworkField.get(FrameworkField.java:73)
at org.junit.runners.model.TestClass.getAnnotatedFieldValues(TestClass.java:230)
at org.junit.runners.ParentRunner.classRules(ParentRunner.java:255)
at org.junit.runners.ParentRunner.withClassRules(ParentRunner.java:244)
at org.junit.runners.ParentRunner.classBlock(ParentRunner.java:194)
at org.junit.runners.ParentRunner.run(ParentRunner.java:362)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.lang.ClassNotFoundException: ch.qos.logback.core.filter.Filter
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 19 more
My test code looks like this:
import io.dropwizard.testing.junit.ResourceTestRule;
import org.junit.Assert;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.experimental.categories.Category;
import javax.ws.rs.core.Response;
import static org.mockito.Mockito.mock;
@Category(value = UnitTest.class)
public class ApiInfoControllerTest {
private static ApiNonFunctionalHandler nonFunctionalHandler = mock(ApiNonFunctionalHandler.class);
private static ApiFilter apiFilter = new ApiFilter(nonFunctionalHandler);
private static final String authToken = "NetworkInsight xTyAGJmZ8nU8yJDP7LnA8Q==";
@ClassRule
public static final ResourceTestRule resources = ResourceTestRule.builder()
.addResource(new ApiInfoController())
.addProvider(apiFilter).build();
@Test
public void testApiVersion() throws Exception {
Response response = resources.client()
.target(ApiConstants.INFO_BASE_URL + "/version")
.request()
.header("Authorization", authToken)
.buildGet().invoke();
Assert.assertNotNull(response);
Assert.assertEquals(response.toString(), Response.Status.OK.getStatusCode(), response.getStatus());
final VersionResponse actualError = response.readEntity(VersionResponse.class);
Assert.assertEquals(actualError.getApiVersion(), ApiConstants.API_VERSION);
}
}
My main application is working fine. The configuration.yaml for the main application looks like this:
# Change default server ports
server:
  applicationConnectors:
    - type: http
      port: 8123
  adminConnectors:
    - type: http
      port: 8124
  requestLog:
    type: external
logging:
  type: external
Can someone let me know what could be going wrong and how can I get around this?
EDIT
Output of mvn dependency:tree is placed here as I am hitting the character limit here.
This is a bug in dropwizard 1.2.4 as discussed below
https://github.com/dropwizard/dropwizard/pull/2338
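Until a release containing that fix is available, one possible workaround (my assumption, not something taken from the linked PR) is to keep logback on the test classpath only, so that ResourceTestRule can load ch.qos.logback.core.filter.Filter; the version below is just a placeholder:
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>test</scope>
</dependency>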
I am trying to connect to an Aster server with JDBC drivers from Java. I have already added the JAR files to the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.*;
public class TeradataJDBCConnection {
public static void main(String[] args) throws Exception {
Class.forName("com.asterdata.ncluster.Driver");
String url="jdbc:ncluster://<ip_address>:2406/test";
Connection conn=DriverManager.getConnection(url, "user123", "test");
}
}
But I am getting the error below.
Exception in thread "main" java.sql.SQLException: [AsterData][ASTERJDBCDSII](34) : Failed to connect to 10.99.186.92. Please check the host address. ()
at com.asterdata.ncluster.jdbc.core.NClusterConnection.connect(Unknown Source)
at com.simba.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
at com.simba.jdbc.common.AbstractDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at Tera.TeradataJDBCConnection.main(TeradataJDBCConnection.java:17)
Caused by: com.asterdata.ncluster.jdbc.core.MuleException: [AsterData][ASTERJDBCDSII](34) : Failed to connect to 10.99.186.92. Please check the host address. ()
... 6 more
No bug in the code.
Everything is running fine.
Download the JDBC driver from here:
https://aster-community.teradata.com/docs/DOC-2254
You could also download the driver from here: http://downloads.teradata.com/download/aster/aster-client-tools-for-windows
You can use the noarch-aster-jdbc-driver JAR from the AsterJDBC__indep_indep.06.10.00.02.zip file. This JAR works for my connection.
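Once the connection succeeds, usage is plain JDBC. A minimal sketch, assuming a working connection; the table name is made up and the URL placeholder is the same as in the question:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AsterQueryExample {
    public static void main(String[] args) throws Exception {
        Class.forName("com.asterdata.ncluster.Driver");
        String url = "jdbc:ncluster://<ip_address>:2406/test";
        try (Connection conn = DriverManager.getConnection(url, "user123", "test");
             Statement stmt = conn.createStatement();
             // "my_table" is a placeholder; substitute a table that exists in your schema
             ResultSet rs = stmt.executeQuery("SELECT * FROM my_table LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}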
I'm trying to get a simple example of Spring Boot and Camel working, but have come undone. Not sure what I'm doing wrong. In the Gradle build I've included this so far:
dependencies {
compile 'org.apache.camel:camel-spring-boot-starter:2.18.4'
compile 'org.apache.camel:camel-groovy:2.18.4'
compile 'org.apache.camel:camel-stream:2.18.4'
compile 'org.codehaus.groovy:groovy-all:2.4.11'
testCompile group: 'junit', name: 'junit', version: '4.11'
testCompile group: 'junit', name: 'junit', version: '4.12'
}
I've created a DirectRoute component like this:
@Component
class DirectRoute extends RouteBuilder{
@Override
void configure () throws Exception {
from ("direct:in") //tried stream:in also
.to ("stream:out")
}
}
I then have a driver bean that tries to invoke the route:
@Component
public class HelloImpl implements Hello {
@Produce(uri = "direct:in")
private ProducerTemplate template;
@Override
public String say(String value) throws ExecutionException, InterruptedException {
assert template
println "def endpoint is : " + template.getDefaultEndpoint()
return template.sendBody (template.getDefaultEndpoint(), value)
}
}
Lastly, in the Spring Boot application class I added a command line runner like this, which gets my bean from the Spring context and invokes the say method. I'm using Groovy, so I just passed a closure to the command line runner.
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
//return closure to run on startup - just list the beans enabled
{args ->
println("Let's inspect the beans provided by Spring Boot:")
String[] beanNames = ctx.getBeanDefinitionNames()
Arrays.sort(beanNames)
for (String beanName : beanNames) {
println(beanName)
}
println("call the direct:start route via the service")
Hello service = ctx.getBean("helloService")
def result = service.say("William")
println "service returned : $result "
}
}
When I run my application I get all the bean names printed out (that's OK). However, when I invoke direct:in via the producer template I get the error below (org.apache.camel.component.direct.DirectConsumerNotAvailableException).
I was expecting the route to be triggered and the name I sent to arrive in the output stream, but this is what I get:
Caused by: org.apache.camel.CamelExecutionException: Exception occurred during execution on the exchange: Exchange[ID-MONSTER-PC2-58911-1496920205300-0-2]
at org.apache.camel.util.ObjectHelper.wrapCamelExecutionException(ObjectHelper.java:1795) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.util.ExchangeHelper.extractResultBody(ExchangeHelper.java:677) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.DefaultProducerTemplate.extractResultBody(DefaultProducerTemplate.java:515) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.DefaultProducerTemplate.extractResultBody(DefaultProducerTemplate.java:511) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:163) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.ProducerTemplate$sendBody$0.call(Unknown Source) ~[na:na]
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) [groovy-all-2.4.11.jar:2.4.11]
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) [groovy-all-2.4.11.jar:2.4.11]
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133) [groovy-all-2.4.11.jar:2.4.11]
at services.HelloImpl.say(HelloImpl.groovy:29) ~[main/:na]
at services.Hello$say.call(Unknown Source) ~[na:na]
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) [groovy-all-2.4.11.jar:2.4.11]
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) [groovy-all-2.4.11.jar:2.4.11]
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) [groovy-all-2.4.11.jar:2.4.11]
at application.Application$_commandLineRunner_closure1.doCall(Application.groovy:47) ~[main/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_121]
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) ~[groovy-all-2.4.11.jar:2.4.11]
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) ~[groovy-all-2.4.11.jar:2.4.11]
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294) ~[groovy-all-2.4.11.jar:2.4.11]
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) ~[groovy-all-2.4.11.jar:2.4.11]
at groovy.lang.Closure.call(Closure.java:414) ~[groovy-all-2.4.11.jar:2.4.11]
at org.codehaus.groovy.runtime.ConvertedClosure.invokeCustom(ConvertedClosure.java:54) ~[groovy-all-2.4.11.jar:2.4.11]
at org.codehaus.groovy.runtime.ConversionHandler.invoke(ConversionHandler.java:124) ~[groovy-all-2.4.11.jar:2.4.11]
at com.sun.proxy.$Proxy44.run(Unknown Source) ~[na:na]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:776) [spring-boot-1.5.2.RELEASE.jar:1.5.2.RELEASE]
... 10 common frames omitted
Caused by: org.apache.camel.component.direct.DirectConsumerNotAvailableException: No consumers available on endpoint: direct://in. Exchange[ID-MONSTER-PC2-58911-1496920205300-0-2]
at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:55) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:197) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.ProducerCache$1.doInProducer(ProducerCache.java:529) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.ProducerCache$1.doInProducer(ProducerCache.java:497) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.ProducerCache.doInProducer(ProducerCache.java:365) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.ProducerCache.sendExchange(ProducerCache.java:497) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.ProducerCache.send(ProducerCache.java:225) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.DefaultProducerTemplate.send(DefaultProducerTemplate.java:144) ~[camel-core-2.18.4.jar:2.18.4]
at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:161) ~[camel-core-2.18.4.jar:2.18.4]
What have I done wrong, and why does the producer template invocation on 'direct:in' not work (I also tried stream:in with the same problem)? I thought that .to("stream:out") would be a consumer.
Any pointers or advice gratefully received at this point.
I have an update on my problems:
I had a subpackage with the application class annotated with @SpringBootApplication. So yes, unadorned it only scans subpackages.
You can add a scanBasePackages= or scanBaseClasses= parameter; however, when I tried doing a scan for a single class, it seemed to scan the whole directory anyway and grabbed the others as well.
I refactored the app to have a single root package with subpackages and elected to set scanBasePackages to the new root package, but left the Application class in its own subpackage (personal preference only; the documentation suggests leaving the Application in the root package).
You can now add other classes annotated with @Configuration to generate beans, or use the basic @Component.
If you create Camel routes annotated with @Component they will be auto-configured in the camelContext for you.
It appears that by default Spring is not starting the camelContext for you. When I checked the status of the context it showed as starting and not started, so in my commandLineRunner I had to get the Spring-injected camelContext and start it myself, and stop it when I finished. I was slightly surprised, as I thought the Spring Boot starter would auto-start the camelContext, but it appears not.
Once you have Spring component scanning etc. working and you start the camelContext, the problems with the org.apache.camel.component.direct.DirectConsumerNotAvailableException exception went away and things started to work, at least for the baby examples I'm trying.
So the revised structure now looks like this:
The revised Application class now looks like this, with some simple println output to show the state of the camel context and the beans in the Spring context. The helloService bean is still the proxy I use to set up the producer template to call the DirectRoute.
package com.softwood.application
import groovy.util.logging.Slf4j
import org.apache.camel.CamelContext
import org.springframework.beans.factory.annotation.Autowired
import com.softwood.services.Hello
/**
* Created by willw on 07/06/2017.
*/
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean
@Slf4j //inject logger
@SpringBootApplication (scanBasePackages = ["com.softwood"]) //forces scan at parent
// same as @Configuration @EnableAutoConfiguration @ComponentScan with 'defaults' e.g. sub packages
public class Application {
@Autowired
ApplicationContext ctx
@Autowired
CamelContext camelContext
public static void main(String[] args) {
SpringApplication.run(Application.class, args)
}
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
//return closure to run on startup - just list the beans enabled
{args ->
println("Let's inspect the beans provided by Spring Boot:")
String[] beanNames = ctx.getBeanDefinitionNames()
Arrays.sort(beanNames)
for (String beanName : beanNames) {
println(beanName)
}
/* when component scan is working - bean routes are added
automatically to camel context via springBoot, however you do have to start
the camel context, yourself
*/
println "camelCtx has following components : " + camelContext.componentNames
println "camelCtx state is : " + camelContext.status
println "starting camel context"
camelContext.start()
println "camelCtx state now is : " + camelContext.status
//log.debug "wills logging call "
println("call the direct:start route via the service")
Hello service = ctx.getBean("helloService")
def result = service.say("William")
println "service returned : $result "
println "sleep 5 seconds "
sleep (5000)
println "stop camel context"
camelContext.stop()
println "camelCtx state now is : " + camelContext.status
}
}
}
That proxy is just registered as a simple bean like this in the Spring context:
package com.softwood.services
/**
* Created by willw on 07/06/2017.
*/
import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate
import org.springframework.stereotype.Component;
import java.util.concurrent.ExecutionException
@Component
public class HelloImpl implements Hello {
@Produce(uri = "direct:in") /* ?block=true */
private ProducerTemplate template
@Override
public String say(String value) throws ExecutionException, InterruptedException {
assert template
println "def endpoint is : " + template.getDefaultEndpoint()
//Future future = template.asyncSendBody(template.getDefaultEndpoint(), value)
//return future.get()
return template.sendBody (template.getDefaultEndpoint(), value)
}
}
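For completeness (it isn't repeated in the listings above), the route that consumes direct:in is essentially the original DirectRoute, now living under the scanned root package. A sketch in Java, with the package name assumed from the structure above; without a started consumer on direct:in, the producer template fails with exactly the DirectConsumerNotAvailableException shown earlier:
package com.softwood.camelRoutes; // assumed location, matching the other routes

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class DirectRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Consume from the direct:in endpoint the HelloImpl proxy produces to,
        // and write the body to standard out via camel-stream.
        from("direct:in")
            .to("stream:out");
    }
}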
The TimedRoute just sorts itself out, with no template required to invoke it:
package com.softwood.camelRoutes
/**
* Created by willw on 07/06/2017.
*/
import org.apache.camel.builder.RouteBuilder
import org.springframework.stereotype.Component
@Component
class TimedRoute extends RouteBuilder {
@Override
void configure () throws Exception {
from ("timer:foo")
.to ("log:com.softwood.application.Application?level=WARN")
}
}
My simple no-op file route isn't working (yet) and I'm not sure why. I suspect I haven't got the file config right somehow; some playing is required.
package com.softwood.camelRoutes
import org.apache.camel.builder.RouteBuilder
import org.springframework.stereotype.Component
/**
* Created by willw on 08/06/2017.
*/
@Component
class FileNoOpRoute extends RouteBuilder{
@Override
void configure () throws Exception {
from ("file:../com.softwood.file-inbox?recursive=true&noop=true&idempotent=true")
.to ("file:../com.softwood.file-outbox")
}
}
However the basics are now working and at least Camel is doing something, whereas before I just had the exception and nothing.
I have also found another question on Spring configuration highlighting some of the above.
I am trying to run the connector example on my local machine but keep getting an UnknownHostException. How do I configure access to BigQuery using the Hadoop connector?
package com.mycompany.dataproc;
import com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration;
import com.google.cloud.hadoop.io.bigquery.GsonBigQueryInputFormat;
import com.google.gson.JsonObject;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import scala.Tuple2;
public class BigQueryAccessExample {
JavaSparkContext jsc ;
public BigQueryAccessExample(JavaSparkContext jsc){
}
public static void main(String[] args) throws Exception{
SparkConf conf = new SparkConf()
.setAppName("BigQuery Reader").setMaster("local[5]");
conf.set("spark.serializer", org.apache.spark.serializer.KryoSerializer.class.getName());
JavaSparkContext jsc = new JavaSparkContext(conf);
String projectId = "mycompany-data";
String fullyQualifiedInputTableId = "mylogs.display20151030";
Configuration hadoopConfiguration = jsc.hadoopConfiguration();
//BigQueryConfiguration.
// Set the job-level projectId.
hadoopConfiguration.set(BigQueryConfiguration.PROJECT_ID_KEY, projectId);
// Use the systemBucket for temporary BigQuery export data used by the InputFormat.
String bucket = "my-spark-test";
hadoopConfiguration.set(BigQueryConfiguration.GCS_BUCKET_KEY, bucket);
// Configure input and output for BigQuery access.
BigQueryConfiguration.configureBigQueryInput(hadoopConfiguration, fullyQualifiedInputTableId);
//BigQueryConfiguration.configureBigQueryOutput(conf, fullyQualifiedOutputTableId, outputTableSchema);
JavaPairRDD<LongWritable, JsonObject> tableData = jsc.newAPIHadoopRDD(hadoopConfiguration, GsonBigQueryInputFormat.class, LongWritable.class, JsonObject.class);
//tableData.count();
JavaRDD<JsonObject> myRdd = tableData.map(new Function<Tuple2<LongWritable, JsonObject>, JsonObject>() {
public JsonObject call(Tuple2<LongWritable, JsonObject> v1) throws Exception {
System.out.println(String.format("idx: %s val: %s", v1._1(), v1._2().toString()));
return v1._2();
}
});
myRdd.take(10);
}
}
But I get an UnknownHostException:
java.net.UnknownHostException: metadata
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1168)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1104)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:932)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:142)
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:189)
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:71)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.createBigQueryCredential(BigQueryFactory.java:81)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.getBigQuery(BigQueryFactory.java:101)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.getBigQueryHelper(BigQueryFactory.java:89)
at com.google.cloud.hadoop.io.bigquery.AbstractBigQueryInputFormat.getBigQueryHelper(AbstractBigQueryInputFormat.java:363)
at com.google.cloud.hadoop.io.bigquery.AbstractBigQueryInputFormat.getSplits(AbstractBigQueryInputFormat.java:102)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:115)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1277)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.take(RDD.scala:1272)
at org.apache.spark.api.java.JavaRDDLike$class.take(JavaRDDLike.scala:494)
at org.apache.spark.api.java.AbstractJavaRDDLike.take(JavaRDDLike.scala:47)
at com.mycompany.dataproc.BigQueryAccessExample.main(BigQueryAccessExample.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
It appears that I need to set up access credentials or permissions, but I don't see any docs regarding that.
I downloaded credentials from https://console.developers.google.com/project//apiui/credential
and set up GOOGLE_APPLICATION_CREDENTIALS but that didn't seem to work.
Any help?
The simplest way is to create a new service account and download the .p12 file (the Hadoop connectors do not currently support Application Default Credentials or JSON keyfiles):
String serviceAccount = "foo@bar.gserviceaccount.com";
String localKeyfile = "/path/to/local/keyfile.p12";
hadoopConfiguration.set("google.cloud.auth.service.account.enable", true);
hadoopConfiguration.set("google.cloud.auth.service.account.email", serviceAccount);
hadoopConfiguration.set("google.cloud.auth.service.account.keyfile", localKeyfile);
I'm trying a simple program from the "Hadoop in Action" book to merge a series of files from the local file system into one file in HDFS. The code snippet is the same as the one provided in the book.
import java.lang.*;
import java.util.*;
import java.io.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
public class PutMerge {
public static void main(String[] args) throws IOException{
Configuration conf = new Configuration();
FileSystem hdfs = FileSystem.get(conf);
FileSystem local = FileSystem.getLocal(conf);
Path inputDir = new Path(args[0]); // First argument has the input directory
Path hdfsFile = new Path(args[1]); // Concatenated hdfs file name
try {
FileStatus[] inputFiles = local.listStatus(inputDir); // list of Local Files
FSDataOutputStream out = hdfs.create(hdfsFile); // target file creation
for (int i = 0; i < inputFiles.length; i++) {
FSDataInputStream in = local.open(inputFiles[i].getPath());
int bytesRead = 0;
byte[] buff = new byte[256];
while ((bytesRead = in.read(buff)) > 0) {
out.write(buff,0,bytesRead);
}
in.close();
}
out.close();
}
catch(Exception e) {
e.printStackTrace();
}
}
}
The program compiled successfully, but while trying to run it I'm getting the following exception:
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/commons/configuration/Configuration
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.(DefaultMetricsSystem.java:37)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.(DefaultMetricsSystem.java:34)
at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:217)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:185)
at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:237)
at org.apache.hadoop.security.KerberosName.(KerberosName.java:79)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:210)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:185)
at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:237)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:482)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:468)
at org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:1519)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1420)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at PutMerge.main(PutMerge.java:16) Caused by: java.lang.ClassNotFoundException:
org.apache.commons.configuration.Configuration
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 17 more
Based on inputs from some of the posts, I added the commons-logging package. My classpath definition is:
/usr/java/jdk1.7.0_21:/data/commons-logging-1.1.2/commons-logging-1.1.2.jar:/data/hadoop-1.1.2/hadoop-core-1.1.2.jar:/data/commons-logging-1.1.2/commons-logging-adapters-1.1.2.jar:/data/commons-logging-1.1.2/commons-logging-api-1.1.2.jar:.
Any clue on why this is not working?
You didn't include Apache Commons Configuration in your classpath.
Really though, you shouldn't need to include much besides Hadoop itself. Make sure you are running your jar with hadoop itself:
> hadoop jar myJar.jar
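If you do run it with plain java instead, the Commons Configuration JAR would have to be appended to the classpath shown in the question; a hedged example, where the path and version are only placeholders:
> export CLASSPATH=$CLASSPATH:/data/commons-configuration-1.6/commons-configuration-1.6.jar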