Dependency issues with app while deploying in Tomcat server - Maven

I am using HBase 0.94.7, Hadoop 1.0.4, and Tomcat 7.
I wrote a small REST-based application which performs CRUD operations on HBase.
Earlier I used to run the app with the Maven Tomcat plugin.
Now I am trying to deploy the WAR in a Tomcat server.
Since the Hadoop and HBase jars already pull in older versions of the org.mortbay.jetty, jsp-api, and servlet-api jars,
I am getting AbstractMethodErrors.
Here's the exception log:
So I added an exclusion for org.mortbay.jetty to both the Hadoop and HBase dependencies in pom.xml, but then more and more issues of the same kind showed up, for example with Jasper.
Then I added the provided scope to the Hadoop and HBase dependencies.
Now Tomcat is unable to find the Hadoop and HBase jars.
Can someone help me fix these dependency issues?
Thanks.
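For reference, an exclusion of the kind described in the question would look roughly like this in pom.xml. This is only a sketch: the groupId/artifactId pairs shown are assumptions for Hadoop 1.0.4 and should be checked against the output of mvn dependency:tree.

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.0.4</version>
    <exclusions>
        <!-- older Jetty/JSP/servlet jars that clash with Tomcat 7 (assumed coordinates) -->
        <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jetty</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>servlet-api-2.5</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jsp-api-2.1</artifactId>
        </exclusion>
        <exclusion>
            <groupId>tomcat</groupId>
            <artifactId>jasper-runtime</artifactId>
        </exclusion>
        <exclusion>
            <groupId>tomcat</groupId>
            <artifactId>jasper-compiler</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Repeating the same exclusions on the HBase dependency, while leaving the scope as compile, keeps the Hadoop/HBase classes inside the WAR but lets Tomcat supply its own servlet and JSP API.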

Try this:
- Right-click on the project
- Go to Properties
- Search for "Java build path"
- Go to the third tab, Libraries
- Remove the conflicting lib and Maven dependencies
- Clean and rebuild your project
This might solve your problem.

Related

How to change a Flink fat jar into a thin jar

Can I move the dependency jars to HDFS so I can run a thin jar without bundling the dependency jars?
The operations and maintenance engineers do not allow me to move jars into the Flink lib folder.
Not sure what problem you are trying to solve, but you might want to consider an application mode deployment if you are using YARN:
./bin/flink run-application -t yarn-application \
-Dyarn.provided.lib.dirs="hdfs://myhdfs/remote-flink-dist-dir" \
"hdfs://myhdfs/jars/MyApplication.jar"
In this example, MyApplication.jar isn't a thin jar, but the job submission is very lightweight as the needed Flink jars and the application jar are picked up from HDFS rather than being shipped to the cluster by the client. Moreover, the application’s main() method is executed on the JobManager.
Application mode was introduced in Flink 1.11, and is described in detail in this blog post: Application Deployment in Flink: Current State and the new Application Mode.

Should I use spark-submit if I'm using Spring Boot?

What is the purpose of spark-submit? From what I can see, it just adds properties and jars to the classpath.
If I am using Spring Boot, can I avoid spark-submit and just package a fat jar with all the properties I want (spark.master etc.)?
Can people see any downside to doing this?
I recently hit the same case and also tried to stick with the Spring Boot executable jar, which unfortunately failed in the end, although I was close. The state when I gave up was: a Spring Boot jar built without the Spark/Hadoop libs included, which I ran on the cluster with -Dloader.path='<list of Spark/Hadoop libs taken from SPARK_HOME and HADOOP_HOME on the cluster>'. I ended up using the second option: build a fat jar with the shade plugin and run it as a usual jar via spark-submit. It feels like a slightly strange solution, but it still works fine.
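A rough sketch of that second option, for what it's worth; the plugin version, the main class, and the spring.factories transformer are assumptions and will need adjusting to your own build:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- set the entry point of the fat jar (class name is a placeholder) -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.example.MySpringBootSparkApp</mainClass>
                    </transformer>
                    <!-- merge Spring's spring.factories files instead of overwriting them -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>META-INF/spring.factories</resource>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

With the Spark and Hadoop dependencies marked as provided, the resulting jar is then submitted with spark-submit as usual.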

Understanding the Spark Maven dependency

I am trying to understand how Spark works with Maven.
I have the following question: do I need to have Spark installed on my machine to build a Spark application (in Scala) with Maven?
Or should I just add the Spark dependency to the pom.xml of my Maven project?
Best regards
The short answer is no. At build time all your dependencies will be collected by Maven or sbt; there is no need for an additional Spark installation.
Also at runtime (and this may include the execution of unit tests during the build) you do not necessarily need a Spark installation. If SPARK_HOME is not set to a valid Spark installation, default values will be used for Spark's runtime configuration.
However, as soon as you want to start Spark jobs on a remote cluster (by using spark-submit) you will need a Spark installation.
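As a sketch, a typical declaration in the pom.xml looks like the following; the Scala suffix and version are just examples and should match your cluster:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.4.8</version>
    <!-- provided: spark-submit / the cluster supplies these jars at runtime -->
    <scope>provided</scope>
</dependency>

The provided scope keeps the Spark classes available at compile time without packaging them into your own artifact.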

Spark can't find Guava Classes

I'm running Spark's example called JavaPageRank, but it's a copy that I compiled separately with Maven into a new jar. I keep getting this error:
ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.NoClassDefFoundError: com/google/common/collect/Iterables
This is despite the fact that Guava is listed as one of Spark's dependencies. I'm running Spark 1.6, which I downloaded pre-compiled from the Apache website.
Thanks!
The error means that the jar containing the com.google.common.collect.Iterables class is not on the classpath, so your application cannot find the required class at runtime.
If you are using Maven/Gradle, try cleaning, building, and refreshing the project. Then check your build output and make sure the Guava jar is in the lib folder.
Hope this helps.
Good luck!
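If the jar really is missing, one workaround is to declare Guava explicitly so it is packaged with your application; the version below is an assumption and should match whatever your Spark build pulls in (check mvn dependency:tree):

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <!-- assumed version; align it with the Guava version your Spark distribution uses -->
    <version>14.0.1</version>
</dependency>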

What is the difference between spring-jdbc.jar and org.springframework.jdbc.jar? Why do we need both jars in the project?

The project uses both jars, spring-jdbc.jar and org.springframework.jdbc.jar. I want to remove one of them because of a version mismatch: spring-jdbc is at 4.1.4 and the other one is at 3.2.5, which are the latest versions for each jar.
Due to the version mismatch I am getting an error at runtime. Could anyone tell me the correct latest version for both files?
Open the MANIFEST.MF of org.springframework.jdbc.jar and you will find Import-Package/Export-Package statements in it.
Those jars were available on the Spring EBR repository until it was closed last year. If you're not using OSGi, you can drop the old jar version; otherwise look for the Apache ServiceMix bundles.
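If you drop the OSGi bundle, the single standard Maven artifact is enough; a sketch using the 4.1.4 version mentioned in the question:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>4.1.4.RELEASE</version>
</dependency>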
