How to exclude all vulnerabilities of hive-exec, which shades its dependencies into itself? - maven

I must remove all "High Severity" vulnerabilities reported by Dependency-Check, which runs as a Maven plugin. The vulnerabilities in "hive-exec" are proving difficult to remove.
An example from the resulting HTML report looks like this:
hive-exec-3.1.0.jar (shaded: org.apache.parquet:parquet-hadoop:1.10.0)
File Path: C:\Users\MYNAME\.m2\repository\org\apache\hive\hive-exec\3.1.0\
hive-exec-3.1.0.jar\META-INF/maven/org.apache.parquet/parquet-hadoop/pom.xml
this "...\hive-exec-3.1.0.jar\META-INF/maven/org.apache.parquet/parquet-hadoop/pom.xml" should be removed!
I could remove most of Vulnerabilities using < exclude > -tag in each < dependency > or changing the version.
I also tried to exclude this "parquet-hadoop" in my pom file...
<dependencies>
  ...
  <dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>3.1.0</version>
    <exclusions>
      <exclusion>
        <groupId>org.apache.parquet</groupId>
        <artifactId>parquet-hadoop-bundle</artifactId>
      </exclusion>
      ...
    </exclusions>
  </dependency>
</dependencies>
But this couldn't remove "parquet-hadoop-bundle", because it is shaded into "hive-exec" itself.
The hive-exec-3.1.0.pom file inside hive-exec-3.1.0.jar is what shades this "parquet-hadoop" in. It has the following contents...
<plugin>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <id>build-exec-bundle</id>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <artifactSet>
          <includes>
            <include>org.apache.hive:hive-common</include>
            ...
            <include>org.apache.parquet:parquet-hadoop-bundle</include>
I also tried to remove them with the shade plugin in my own pom file, but that didn't work either.
I'll be very happy if someone has experience with this kind of problem.

A colleague found the solution.
...
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-exec</artifactId>
<version>3.1.0</version>
<classifier>core</classifier>
<exclusions>
...
We can select a specific jar of hive-exec with the <classifier> tag and the value "core". This "core" artifact contains only the essential part of hive-exec, without the shaded third-party classes. This way I could remove all high and medium severity vulnerabilities.
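Put together, the fix looks roughly like this (a sketch; the exclusion shown is illustrative, keep whichever exclusions your own report still calls for):
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <version>3.1.0</version>
  <!-- "core" selects the thin jar without the shaded third-party classes -->
  <classifier>core</classifier>
  <exclusions>
    <!-- illustrative: exclude whatever Dependency-Check still flags -->
    <exclusion>
      <groupId>org.apache.parquet</groupId>
      <artifactId>parquet-hadoop-bundle</artifactId>
    </exclusion>
  </exclusions>
</dependency>
Note that with the "core" classifier, the transitive dependencies you now pull in directly may differ from the fat jar, so re-run the Dependency-Check report afterwards.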

Related

What is meant by "Unresolved requirement: Import-Package: com.google.common.collect_ [Sanitized]" in Liferay 7.2

I am creating a hook in Liferay 7.2, but unfortunately when I deploy it I come across this error. I have tried increasing the version of the "com.google.collections" dependency and also tried adding Guava as a dependency, but nothing seems to resolve this error.
My dependencies in pom.xml are as follows:
<dependencies>
  <dependency>
    <groupId>com.liferay.portal</groupId>
    <artifactId>com.liferay.portal.kernel</artifactId>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.osgi</groupId>
    <artifactId>org.osgi.service.component.annotations</artifactId>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>com.google.collections</groupId>
    <artifactId>google-collections</artifactId>
    <version>1.0-rc2</version>
  </dependency>
  <dependency>
    <groupId>org.osgi</groupId>
    <artifactId>osgi.cmpn</artifactId>
    <version>6.0.0</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.1</version>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <version>3.1.2</version>
      <configuration>
        <archive>
          <manifestFile>${project.build.outputDirectory}/META-INF/MANIFEST.MF</manifestFile>
        </archive>
      </configuration>
    </plugin>
    <plugin>
      <groupId>biz.aQute.bnd</groupId>
      <artifactId>bnd-maven-plugin</artifactId>
      <version>4.3.0</version>
      <executions>
        <execution>
          <goals>
            <goal>bnd-process</goal>
          </goals>
        </execution>
      </executions>
      <dependencies>
        <dependency>
          <groupId>biz.aQute.bnd</groupId>
          <artifactId>biz.aQute.bndlib</artifactId>
          <version>4.3.0</version>
        </dependency>
        <dependency>
          <groupId>com.liferay</groupId>
          <artifactId>com.liferay.ant.bnd</artifactId>
          <version>3.2.6</version>
        </dependency>
      </dependencies>
    </plugin>
  </plugins>
</build>
Error:
org.osgi.framework.BundleException: Could not resolve module: com.allen.portal.hook [1272]_ Unresolved requirement: Import-Package: com.google.common.collect_ [Sanitized]
at org.eclipse.osgi.container.Module.start(Module.java:444)
at org.eclipse.osgi.internal.framework.EquinoxBundle.start(EquinoxBundle.java:428)
at com.liferay.portal.file.install.internal.DirectoryWatcher._startBundle(DirectoryWatcher.java:1106)
at com.liferay.portal.file.install.internal.DirectoryWatcher._startBundles(DirectoryWatcher.java:1139)
at com.liferay.portal.file.install.internal.DirectoryWatcher._process(DirectoryWatcher.java:1001)
at com.liferay.portal.file.install.internal.DirectoryWatcher.run(DirectoryWatcher.java:313)
If you have any way to resolve this error, please help me out.
Unrelated: You're using an rc2 version released in October 2009, when a release was made in December 2009? Seriously?
It looks like you're building an OSGi module, which compiles fine (because you provide the dependency). However, that does not mean that the Google Collections code ends up in your jar as well. The runtime expects to find it though - and as Google Collections is not an OSGi bundle itself, you have several choices:
repackage it as an OSGi bundle (and deploy it to the runtime) (or find someone who already did)
repackage it within your own bundle
use a different implementation. Chances are that collections utility code from 2009 has found its way into more current implementations and is no longer necessary.
In short: In one way or another, you'll need to make your dependencies available at runtime. Either by fattening your own bundle (but be careful: You can't pass those collections around to other bundles if they bring their own implementation) or by relying on the implementation being available to the runtime.
The third alternative is to switch to an implementation where it's easier to make it available at runtime, preferably as OSGi bundle.
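To make the second option concrete: if you built with the Apache Felix maven-bundle-plugin instead of the bnd-maven-plugin shown above (so treat this as a sketch of a different setup, not a drop-in change), its Embed-Dependency instruction inlines a non-OSGi jar into your bundle:
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <version>5.1.9</version>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- copy the non-OSGi google-collections jar into this bundle
           and add it to the Bundle-ClassPath -->
      <Embed-Dependency>google-collections;scope=compile</Embed-Dependency>
    </instructions>
  </configuration>
</plugin>
For the third option, note that current Guava jars already ship OSGi metadata, so deploying Guava to the runtime and importing com.google.common.collect from it avoids embedding anything at all.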

java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.reloadExistingConfigurations()V

It looks like I am again stuck on running a packaged Spark app jar using spark-submit. Following is my pom file:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <parent>
    <artifactId>oneview-forecaster</artifactId>
    <groupId>com.dataxu.oneview.forecast</groupId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>
  <modelVersion>4.0.0</modelVersion>
  <artifactId>forecaster</artifactId>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.module</groupId>
      <artifactId>jackson-module-scala_${scala.binary.version}</artifactId>
    </dependency>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <!--<scope>provided</scope>-->
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-hive -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.11</artifactId>
      <version>2.2.0</version>
      <!--<scope>provided</scope>-->
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-aws</artifactId>
      <version>2.8.3</version>
      <!--<scope>provided</scope>-->
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk</artifactId>
      <version>1.10.60</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/joda-time/joda-time -->
    <dependency>
      <groupId>joda-time</groupId>
      <artifactId>joda-time</artifactId>
      <version>2.9.9</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.8.0</version>
      <!--<scope>provided</scope>-->
    </dependency>
  </dependencies>
  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>${scala-maven-plugin.version}</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
          <archive>
            <manifest>
              <mainClass>com.dataxu.oneview.forecaster.App</mainClass>
            </manifest>
          </archive>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
Following is a simple snippet of code which fetches data from an S3 location and prints it:
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

def getS3Data(path: String): Map[String, Any] = {
  println("spark session start.........")
  val spark = getSparkSession()
  // read the file from S3 and concatenate its partitions into one string
  val configTxt = spark.sparkContext.textFile(path)
    .collect().reduce(_ + _)
  val mapper = new ObjectMapper
  mapper.registerModule(DefaultScalaModule)
  mapper.readValue(configTxt, classOf[Map[String, String]])
}
When I run it from IntelliJ, everything works fine. The log is clear and looks good. However, when I package it using mvn package and try to run it using spark-submit, I end up getting the following error at the .collect().reduce(_ + _). This is the error I encounter:
"main" java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.reloadExistingConfigurations()V
at org.apache.hadoop.fs.s3a.S3AFileSystem.addDeprecatedKeys(S3AFileSystem.java:181)
at org.apache.hadoop.fs.s3a.S3AFileSystem.<clinit>(S3AFileSystem.java:185)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
...
I don't understand which dependency was not packaged or what the issue might be, as I thought I had set the versions correctly, expecting hadoop-aws to bring in everything it needs.
Any help will be appreciated.
The dependencies between Hadoop and the AWS SDK are very sensitive, and you should stick to the exact versions that your Hadoop dependency was built with.
The first problem you need to solve is to pick one version of Hadoop. I see you're mixing versions 2.8.3 and 2.8.0.
When I look at the dependency tree for org.apache.hadoop:hadoop-aws:2.8.0, I see that it is built against version 1.10.6 of the AWS SDK (the same goes for hadoop-aws:2.8.3).
This is probably what's causing mismatches (you're mixing incompatible versions). So:
Choose the version of hadoop you want to use
Include hadoop-aws with the version compatible with your hadoop
Remove other dependencies, or only include them with versions matching the one compatible with your hadoop version.
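For example, assuming you settle on Hadoop 2.8.3, an aligned set could look like this (from Hadoop 2.8 onwards, hadoop-aws depends on the smaller aws-java-sdk-s3 artifact, so it's best not to declare a full aws-java-sdk version yourself):
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.8.3</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>2.8.3</version>
  <!-- pulls in the matching aws-java-sdk-s3 (1.10.6) transitively -->
</dependency>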
In case anybody else is still stumbling on this error... it took me a while to find out, but check if your project has a dependency (direct or transitive) on the artifact org.apache.avro:avro-tools.
It was brought into my code by a transitive dependency.
Its problem is that it ships with a copy of org.apache.hadoop.conf.Configuration that is much older than all current versions of Hadoop, so it may end up being the one picked up on the classpath.
In my Scala project (built with sbt), I just had to exclude it with
ExclusionRule("org.apache.avro", "avro-tools")
and the error (finally!) disappeared.
I am sure the avro-tools developers had some good reason to include a copy of a file that belongs to another artifact (hadoop-common); I was really surprised to find it there, and it made me waste an entire day.
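For Maven users, the equivalent exclusion goes on whichever dependency pulls avro-tools in transitively; the outer dependency below is a hypothetical placeholder (find the real one with mvn dependency:tree):
<dependency>
  <!-- hypothetical: replace with the dependency that drags in avro-tools -->
  <groupId>some.group</groupId>
  <artifactId>some-artifact</artifactId>
  <version>1.0.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro-tools</artifactId>
    </exclusion>
  </exclusions>
</dependency>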
In my case, I was running a local Spark installation on a Cloudera edge node and was hitting this conflict (even though I made sure to download Spark with the correct hadoop binaries precompiled). I just went into my Spark home and moved the hadoop-common jar so it wouldn't be loaded:
mv ~/spark-2.4.4-bin-hadoop2.6/jars/hadoop-common-2.6.5.jar ~/spark-2.4.4-bin-hadoop2.6/jars/hadoop-common-2.6.5.jar.XXXXXX
After that, it ran... in local mode anyway.

Add a new line between dependencies via google sort pom plugin

Is there a way to add a new line between each dependency via the Google sortpom plugin, or any other plugin?
Current POM, sorted via the Google sortpom plugin:
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>1.8.5</version>
</dependency>
<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-core</artifactId>
  <version>1.4.10</version>
</dependency>
What I'm trying to achieve:
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>1.8.5</version>
</dependency>

<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-core</artifactId>
  <version>1.4.10</version>
</dependency>
My current plugin configuration:
<plugin>
  <groupId>com.google.code.sortpom</groupId>
  <artifactId>maven-sortpom-plugin</artifactId>
  <version>${com.google.code.sortpom}</version>
  <configuration>
    <predefinedSortOrder>custom_1</predefinedSortOrder>
    <sortDependencies>groupId,artifactId</sortDependencies>
    <sortPlugins>groupId,artifactId</sortPlugins>
    <sortProperties>false</sortProperties>
    <createBackupFile>false</createBackupFile>
    <lineSeparator>\r\n</lineSeparator>
    <expandEmptyElements>false</expandEmptyElements>
    <keepBlankLines>true</keepBlankLines>
    <nrOfIndentSpace>-1</nrOfIndentSpace>
    <verifyFail>Warn</verifyFail>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>sort</goal>
      </goals>
      <phase>compile</phase>
    </execution>
  </executions>
</plugin>
At the moment the maven-sortpom-plugin does not have an automatic way of inserting blank lines between dependencies.
However, if you add an empty line between two existing dependencies yourself, the plugin will not remove it during sorting, since you use the configuration option <keepBlankLines>true</keepBlankLines>.
It's just an XML file.
Just use your IDE's built-in XML formatter to format the pom.xml file however you want. Both Eclipse and IntelliJ IDEA have formatters for XML files.

Maven shade plugin warning: we have a duplicate - how to fix?

This is my project POM (link to the paste, so you can right click > save as pom.xml)
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.zybnet</groupId>
  <artifactId>excel-reporter</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>mvn1</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
    <dependency>
      <groupId>org.apache.poi</groupId>
      <artifactId>poi</artifactId>
      <version>3.8</version>
    </dependency>
    <dependency>
      <groupId>net.sf.jasperreports</groupId>
      <artifactId>jasperreports</artifactId>
      <version>4.6.0</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.groovy</groupId>
      <artifactId>groovy-all</artifactId>
      <version>2.0.0</version>
    </dependency>
  </dependencies>
  <build>
    <resources>
      <resource>
        <directory>${project.build.sourceDirectory}</directory>
      </resource>
    </resources>
    <plugins>
      <plugin>
        <artifactId>maven-jar-plugin</artifactId>
        <configuration>
          <finalName>${artifactId}-${version}-tmp</finalName>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>1.7.1</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <filters>
                <filter>
                  <artifact>*:*</artifact>
                  <excludes>
                    <exclude>META-INF/*.SF</exclude>
                    <exclude>META-INF/*.DSA</exclude>
                    <exclude>META-INF/*.RSA</exclude>
                  </excludes>
                </filter>
              </filters>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>com.zybnet.Main</mainClass>
                </transformer>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
I followed the advice of configuring the default jar plugin as given in the FAQ, but when I run mvn package about 20K warnings are still issued. Running mvn clean does not help either.
According to this answer, I could manually exclude some dependencies. However, I don't know if it's the right way, and the dependency tree is rather complex so it's difficult to argue where to start.
I know that these issues are not harmful, but I'm used to treating warnings as something that must be fixed. Moreover, I'm a beginner with Maven, so I'd like to understand what's wrong with my understanding and how to troubleshoot such problems.
(Using maven assembly plugin is not an option here)
Sometimes it happens that the same class definition is found in two or more JAR files (usually these are dependency JARs). In such a case, there is nothing the developer can do except try to figure out what to manually exclude from the dependencies or from the final artifact (maybe with the help of mvn dependency:tree -Ddetail=true). I opened an issue and submitted a patch to give the developer a slightly prettier output like
[WARNING] xml-apis-1.3.02.jar, xmlbeans-2.3.0.jar define 4 overlapping classes:
[WARNING] - org.w3c.dom.TypeInfo
[WARNING] - org.w3c.dom.DOMConfiguration
[WARNING] - org.w3c.dom.DOMStringList
[WARNING] - org.w3c.dom.UserDataHandler
[WARNING] maven-shade-plugin has detected that some .class files
[WARNING] are present in two or more JARs. When this happens, only
[WARNING] one single version of the class is copied in the uberjar.
[WARNING] Usually this is not harmful and you can skip these
[WARNING] warnings, otherwise try to manually exclude artifacts
[WARNING] based on mvn dependency:tree -Ddetail=true and the above
[WARNING] output
[WARNING] See http://docs.codehaus.org/display/MAVENUSER/Shade+Plugin
Using this output and that from mvn dependency:tree I added sections like
<dependency>
  <groupId>net.sf.jasperreports</groupId>
  <artifactId>jasperreports</artifactId>
  <version>4.7.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.poi</groupId>
      <artifactId>poi-ooxml</artifactId>
    </exclusion>
    <exclusion>
      <groupId>bouncycastle</groupId>
      <artifactId>bcmail-jdk14</artifactId>
    </exclusion>
    <exclusion>
      <groupId>bouncycastle</groupId>
      <artifactId>bcprov-jdk14</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bctsp-jdk14</artifactId>
    </exclusion>
  </exclusions>
</dependency>
and managed to reduce the number of warnings from several thousand to a couple of dozen. Still, this is not perfect: libraries continue to copy-paste classes that lead to name clashes (I can't understand why), and this solution is specific to this particular project and can't easily be ported to anything else.
A newer fix for this problem is updating to a current version of the plugin.
The question is using
<version>1.7.1</version>
which will only tell you:
We have a duplicate org/eclipse/persistence/internal/libraries/asm/AnnotationVisitor.class in /Users/.../repository/org/eclipse/persistence/eclipselink/2.7.0/eclipselink-2.7.0.jar
leaving you to guess, or find by trial and error, where the other copy of the overlapping classes comes from. Not very useful. Fortunately,
<version>3.1.0</version>
will give you the more useful output that includes the jars the duplicates came from:
e.g.
annotations-3.0.1.jar, jcip-annotations-1.0.jar define 4 overlapping classes:
- net.jcip.annotations.GuardedBy
- net.jcip.annotations.NotThreadSafe
...
You can then decide to exclude one or the other, or use shade to rename the classes as appropriate to your project.
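Renaming is done with the shade plugin's <relocations> configuration; a minimal sketch (the shaded package prefix is just an example):
<configuration>
  <relocations>
    <relocation>
      <!-- copy the duplicated classes under a project-private package name -->
      <pattern>net.jcip.annotations</pattern>
      <shadedPattern>com.zybnet.shaded.net.jcip.annotations</shadedPattern>
    </relocation>
  </relocations>
</configuration>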
I have updated to version 3.4.1, and it works now.
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.4.1</version>

How to exclude GWT dependency code from an OSGi bundle generated by Maven+BND?

I have several Maven modules with the Vaadin library dependency in the root pom.xml file.
I'm trying to build a set of OSGi bundles (one per Maven module) using Maven+BND.
I added this to my "root" pom.xml file:
<dependencies>
  <dependency>
    <groupId>com.vaadin</groupId>
    <artifactId>vaadin</artifactId>
    <version>6.6.6</version>
  </dependency>
  <dependency>
    <groupId>com.google.gwt</groupId>
    <artifactId>gwt-user</artifactId>
    <version>2.3.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.osgi.core</artifactId>
    <version>1.0.0</version>
  </dependency>
</dependencies>
Unfortunately, the resulting JAR files (bundles) include GWT (com.google.gwt) classes. This
1) makes the bundles huge, with lots of duplicated dependencies, and
2) generates thousands of build warnings about "split packages".
QUESTION: how do I prevent GWT classes from being added to my jar files?
I tried setting the "scope" of GWT to "provided", setting "type" to "bundle", and even optional=true - it didn't help.
Here's the part of my root pom.xml which is responsible for the Vaadin/GWT stuff:
<plugins>
  <plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <version>2.3.5</version>
    <extensions>true</extensions>
    <configuration>
      <instructions>
        <Export-Package>mycompany.*</Export-Package>
        <Private-Package>*.impl.*</Private-Package>
        <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
        <!-- <Bundle-Activator>com.alskor.publicpackage.MyActivator</Bundle-Activator>-->
      </instructions>
    </configuration>
  </plugin>
  <!-- Compiles your custom GWT components with the GWT compiler -->
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>gwt-maven-plugin</artifactId>
    <!-- Version 2.1.0-1 works at least with Vaadin 6.5 -->
    <version>2.3.0-1</version>
    <configuration>
      <!-- if you don't specify any modules, the plugin will find them -->
      <!--modules>
        ..
      </modules-->
      <webappDirectory>${project.build.directory}/${project.build.finalName}/VAADIN/widgetsets</webappDirectory>
      <extraJvmArgs>-Xmx512M -Xss1024k</extraJvmArgs>
      <runTarget>clean</runTarget>
      <hostedWebapp>${project.build.directory}/${project.build.finalName}</hostedWebapp>
      <noServer>true</noServer>
      <port>8080</port>
    </configuration>
    <executions>
      <execution>
        <goals>
          <goal>resources</goal>
          <goal>compile</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
  <!-- Updates Vaadin 6.2+ widgetset definitions based on project dependencies -->
  <plugin>
    <groupId>com.vaadin</groupId>
    <artifactId>vaadin-maven-plugin</artifactId>
    <version>1.0.1</version>
    <executions>
      <execution>
        <configuration>
          <!-- if you don't specify any modules, the plugin will find them -->
          <!--
          <modules>
            <module>${package}.gwt.MyWidgetSet</module>
          </modules>
          -->
        </configuration>
        <goals>
          <goal>update-widgetset</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
</plugins>
The wildcards in your Export-Package and Private-Package statements strike me as exceedingly dangerous. It's possible that the GWT packages are being dragged in because of the *.impl.* pattern in Private-Package.
Also, you should never use wildcards in Export-Package: exports should be tightly controlled and versioned.
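A sketch of tightened instructions for the maven-bundle-plugin configuration above (the package names are examples; the Import-Package negation additionally keeps GWT out of the calculated imports):
<instructions>
  <!-- export only your versioned public API, never by wildcard -->
  <Export-Package>mycompany.api;version="1.0.0"</Export-Package>
  <!-- embed only your own implementation packages, not *.impl.* from the whole classpath -->
  <Private-Package>mycompany.impl.*</Private-Package>
  <!-- refuse to import GWT packages; everything else is calculated as usual -->
  <Import-Package>!com.google.gwt.*,*</Import-Package>
  <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
</instructions>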
Use mvn dependency:tree to see where the GWT dependency comes from.
Then add an <exclusions> element with an appropriate <exclusion> to the dependency in question to suppress it.
I had a similar problem, as the final war file reached almost 90 MB!
One of the culprits was the aforementioned jar, so I did this:
<dependencies>
  <dependency>
    <groupId>${project.groupId}</groupId>
    <artifactId>widgetset</artifactId>
    <version>3.2</version>
    <exclusions>
      <exclusion>
        <groupId>com.vaadin.external.gwt</groupId>
        <artifactId>gwt-user</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  ...
</dependencies>
