Reverse-engineer a pom.xml given a lib folder full of jars? - maven

Title says it all.
Is it possible to reverse-engineer pom.xml dependencies given a lib folder full of jars?
i.e. Discover the publicly available Maven artifacts given a jar name and/or SHA1 checksum.
Here's a naive attempt which didn't work very well, because the search often returns too many matches:
#!/usr/bin/python
# Mavenize a lib folder, converting jars into dependencies
import os
import sys
import urllib
import urllib2
import json

for file in os.listdir(sys.argv[1]):
    # URL-encode the file name so special characters survive the query string
    url = "http://search.maven.org/solrsearch/select?rows=1&wt=json&q=" + urllib.quote(file)
    response = urllib2.urlopen(url)
    jsonObj = json.loads(response.read())
    if jsonObj["response"]["numFound"] > 0:
        doc = jsonObj["response"]["docs"][0]
        print "<dependency><!-- for " + file + " -->"
        print "  <groupId>" + doc["g"] + "</groupId>"
        print "  <artifactId>" + doc["a"] + "</artifactId>"
        print "  <version>" + doc["latestVersion"] + "</version>"
        print "</dependency>"

If those jar files were produced by Maven, then yes, it is possible: a Maven-built jar carries a pom.properties file under META-INF/maven/<groupId>/<artifactId>/ that records its groupId, artifactId and version. Otherwise it is not possible without an extra step, such as looking the group id and version up on mvnrepository.com, or searching Maven Central by SHA1 checksum.
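Here is a sketch combining both routes, in the spirit of the script above. It is untested and makes two assumptions: that each jar is a readable zip archive, and that the 1: (SHA1) field of the search.maven.org query API is available for exact checksum matches:
#!/usr/bin/python
# For each jar: read the pom.properties that Maven embeds under META-INF/maven/,
# and fall back to a SHA1 checksum search when the jar was not built by Maven.
import hashlib
import json
import os
import sys
import urllib
import urllib2
import zipfile

def from_pom_properties(path):
    # Maven-built jars carry META-INF/maven/<g>/<a>/pom.properties.
    jar = zipfile.ZipFile(path)
    for name in jar.namelist():
        if name.startswith("META-INF/maven/") and name.endswith("pom.properties"):
            props = dict(line.strip().split("=", 1)
                         for line in jar.read(name).splitlines()
                         if "=" in line)
            g, a, v = props.get("groupId"), props.get("artifactId"), props.get("version")
            if g and a and v:
                return g, a, v
    return None

def from_checksum(path):
    # The 1: field matches exact SHA1 checksums, so it only finds artifacts
    # byte-identical to something deployed on Maven Central.
    sha1 = hashlib.sha1(open(path, "rb").read()).hexdigest()
    url = ("http://search.maven.org/solrsearch/select?rows=1&wt=json&q=" +
           urllib.quote('1:"%s"' % sha1))
    docs = json.loads(urllib2.urlopen(url).read())["response"]["docs"]
    if docs:
        return docs[0]["g"], docs[0]["a"], docs[0]["v"]
    return None

for f in os.listdir(sys.argv[1]):
    path = os.path.join(sys.argv[1], f)
    gav = from_pom_properties(path) or from_checksum(path)
    if gav:
        print "<dependency><!-- for " + f + " -->"
        print "  <groupId>" + gav[0] + "</groupId>"
        print "  <artifactId>" + gav[1] + "</artifactId>"
        print "  <version>" + gav[2] + "</version>"
        print "</dependency>"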

Related

WebSphere - Map Module to Target through Jython

We have deployed an EAR file to our WebSphere instance. By default, all *.jar and *.war modules are mapped to the JVM.
We would like to map all the *.war modules to the web server as well.
I have created the Jython script below to map the additional modules to the web server:
modules = AdminApp.listModules('${p:appName}', '-server')
splitted = modules.splitlines()
for moduleLine in splitted:
    print "Mapping module: " + moduleLine
    appName, moduleUri, target = moduleLine.split("#")
    print appName
    print moduleUri
    print target
    if moduleUri.find('.war') >= 0:
        print "It's a war: " + moduleUri
        module, webXml = moduleUri.split("+")
        print module
        print webXml
        AdminApp.edit('${p:appName}', ['-MapModulesToServers', [[module, module + ',' + webXml, target]]])
The above script works when the module name is the same as the name referenced in the URI. However, in some cases the web.xml contains a different name as its 'display name'. When we do a -MapModulesToServers, it seems to look at the display name of the module, not the URI.
For example:
In the WebSphere console, we would have the following line:
Module    URI                                             Module type
Demo      be.fictive.company.demo.war,WEB-INF/web.xml     Web Module
The 'AdminApp.listModules' method returns the URI name (be.fictive.company.demo.war), whereas I need the name of the module (Demo).
Am I missing something or is there another way to retrieve the module name, so I can use the AdminApp.edit('${p:appName}', ['-MapModulesToServers', [[module, module + ',' + webXml, target]]]) to update the targets?
The administrative scripting console is not available, so I don't see a way to retrieve the commands that have been issued.
You can use a wildcard to match the name of the module. Replace
AdminApp.edit('${p:appName}', ['-MapModulesToServers', [[module, module + ',' + webXml, target]]])
with
AdminApp.edit('${p:appName}', ['-MapModulesToServers', [['.*', module + ',' + webXml, target]]])
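For reference, here is the whole loop with the wildcard in place; a minimal sketch under the same assumptions as the script in the question (the ${p:appName} placeholder is resolved by your deployment tooling, and the final save may be redundant if your pipeline already saves the configuration):
modules = AdminApp.listModules('${p:appName}', '-server')
for moduleLine in modules.splitlines():
    appName, moduleUri, target = moduleLine.split("#")
    if moduleUri.find('.war') >= 0:
        module, webXml = moduleUri.split("+")
        # '.*' matches any module display name; the uri,web.xml pair still
        # pins the edit to this specific module.
        AdminApp.edit('${p:appName}',
                      ['-MapModulesToServers', [['.*', module + ',' + webXml, target]]])
# Persist the edits (omit if your tooling saves for you).
AdminConfig.save()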

Unsupported tarball from ... File located at "result" is a symbolic link to absolute path

I use cachix for Nix, if that has any bearing, and I reinstalled stack, too. The repo in question is at: https://github.com/NorfairKing/tickler
[jk@jk tickler]$ stack build
Cloning 85ee8a577fb576e2dd7643bf248ff8fbbe9598ec from git@github.com:NorfairKing/pretty-relative-time.git
Enter passphrase for key '/home/jk/.ssh/id_rsa':
Unsupported tarball from /tmp/with-repo-archive30826/foo.tar: File located at "result" is a symbolic link to absolute path /nix/store/j0l1f389pmxdazxr0h0ax8v6c8ln578h-pretty-relative-time-0.0.0.0
Running nix build:
[jk@jk tickler]$ nix build
error: Please be informed that this pseudo-package is not the only part of
Nixpkgs that fails to evaluate. You should not evaluate entire Nixpkgs
without some special measures to handle failing packages, like those taken
by Hydra.
let
  pkgsv = import (import ./nix/nixpkgs.nix);
  pkgs = pkgsv {};
  intray-version = import ./nix/intray-version.nix;
  intray-repo = pkgs.fetchFromGitHub intray-version;
  intray-overlay = import (intray-repo + "/nix/overlay.nix");
  validity-version = import (intray-repo + "/nix/validity-version.nix");
  validity-overlay = import (pkgs.fetchFromGitHub validity-version + "/nix/overlay.nix");
in pkgsv {
  overlays = [ validity-overlay intray-overlay (import ./nix/overlay.nix) ];
  config.allowUnfree = true;
}

How to delete files on Windows with Rapture IO

I'm writing Scala code. What is the correct URI scheme for operating on files on Windows with Rapture? I have added the following dependency:
libraryDependencies += "com.propensive" %% "rapture" % "2.0.0-M3" exclude("com.propensive","rapture-json-lift_2.11")
Here is part of my code:
import rapture.uri._
import rapture.io._
val file = uri"file:///C:/opt/eric/spark-demo"
file.delete()
but I got the message:
Error:(17, 16) value file is not a member of object rapture.uri.UriContext
val file = uri"file:///C:/opt/eric/spark-demo"
or I tried this one:
val file = uri"file://opt/eric/spark-demo"
The same error:
Error:(17, 16) value file is not a member of object rapture.uri.UriContext
val file = uri"file://opt/eric/spark-demo"
And my local path is:
C:\opt\eric\spark-demo
How can I avoid the error?
You're missing an import:
import rapture.io._
import rapture.uri._
import rapture.fs._
val file = uri"file:///C:/opt/eric/spark-demo"
file.delete()
rapture.fs is the package that defines the EnrichedFileUriContext implicit class, which is what the uri macro tries to build when it is given a file URI scheme.

Error in Apache Spark called input path does not exist

Are there any algorithms in Apache Spark to find frequent patterns in a text file? I tried the following example but always end up with this error:
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:
/D:/spark-1.3.1-bin-hadoop2.6/bin/data/mllib/sample_fpgrowth.txt
Can anyone help me solve this problem?
import org.apache.spark.mllib.fpm.FPGrowth

val transactions = sc.textFile("...").map(_.split(" ")).cache()

val fpg = new FPGrowth()
fpg.setMinSupport(0.5)
fpg.setNumPartitions(10)

// run() returns the fitted FPGrowthModel, which holds the frequent itemsets
val model = fpg.run(transactions)
model.freqItemsets.collect().foreach { itemset =>
  println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)
}
Try this:
file://D:/spark-1.3.1-bin-hadoop2.6/bin/data/mllib/sample_fpgrowth.txt
or
D:/spark-1.3.1-bin-hadoop2.6/bin/data/mllib/sample_fpgrowth.txt
If that does not work, replace / with //.
I assume you are running Spark on Windows. Use a file path like
D:\spark-1.3.1-bin-hadoop2.6\bin\data\mllib\sample_fpgrowth.txt
Note: escape "\" where necessary.
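If the Python API is an option, here is a minimal PySpark sketch of the same FP-Growth run with an explicit file: URI (note the three slashes: file:// plus the path itself). This assumes Spark 1.4 or later, since that is when the Python FP-Growth bindings appeared:
# Minimal PySpark FP-Growth run against a Windows path given as a file: URI.
from pyspark import SparkContext
from pyspark.mllib.fpm import FPGrowth

sc = SparkContext(appName="fpgrowth-demo")  # hypothetical app name

# file:/// + drive letter: the third slash introduces the path itself.
data = sc.textFile("file:///D:/spark-1.3.1-bin-hadoop2.6/bin/data/mllib/sample_fpgrowth.txt")
transactions = data.map(lambda line: line.strip().split(" ")).cache()

model = FPGrowth.train(transactions, minSupport=0.5, numPartitions=10)
for itemset in model.freqItemsets().collect():
    print itemset.items, itemset.freq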

Issue with Gradle task to write file

I am developing an Android application where I have a directory of JSON files, and I want to create a Gradle task that combines all these files into a single JSON file.
This is the Gradle task I have so far, but it does not create the file:
// Task that combines all JSON files in ../libraries into src/main/res/raw/libraries.json
task combineJSonFiles {
    String content = ""
    FileTree tree = fileTree(dir: '../libraries', include: '**/*.json')
    tree.each { File file ->
        content += file.getText()
    }
    println "[" + content.substring(0, content.length()-1) + "]" // prints out the correct contents
    File libraries = file("../app/src/main/res/raw/libraries.json")
    println libraries.getProperties()
}
I print out the properties and I am not sure why these are the property values:
{directory=false, canonicalFile=/Users/michaelcarrano/AndroidStudioProjects/detective_droid/app/src/main/res/raw/libraries.json, file=false, freeSpace=0, canonicalPath=/Users/michaelcarrano/AndroidStudioProjects/detective_droid/app/src/main/res/raw/libraries.json, usableSpace=0, hidden=false, totalSpace=0, path=/Users/michaelcarrano/AndroidStudioProjects/detective_droid/app/src/main/res/raw/libraries.json, name=libraries.json, prefixLength=1, absolute=true, class=class java.io.File, parentFile=/Users/michaelcarrano/AndroidStudioProjects/detective_droid/app/src/main/res/raw, absolutePath=/Users/michaelcarrano/AndroidStudioProjects/detective_droid/app/src/main/res/raw/libraries.json, absoluteFile=/Users/michaelcarrano/AndroidStudioProjects/detective_droid/app/src/main/res/raw/libraries.json, parent=/Users/michaelcarrano/AndroidStudioProjects/detective_droid/app/src/main/res/raw}
Any help is appreciated, as I have not managed to figure this out even after reading the documentation: http://www.gradle.org/docs/current/userguide/working_with_files.html
I am just posting the code for the task that now works:
task combineJSonFiles {
    String content = ""
    FileTree tree = fileTree(dir: '../libraries', include: '**/*.json')
    tree.each { File file ->
        content += file.getText()
    }
    def libraries = new File("app/src/main/res/raw/libraries.json")
    libraries.text = "[" + content.substring(0, content.length()-1) + "]"
}
My issue was trying to use java.io.File directly and having the wrong directory path set for my file.
Creating an instance of java.io.File in Groovy/Java does not create the file on disk. You will need to write something to it. Check out this tutorial for working with files in Groovy.
Also, you have put your task implementation in the task configuration block rather than in a task action. This means your code will not run when you expect it to: it runs every time you invoke Gradle, during configuration, rather than when you run this task. You need to put your code in a doLast block, i.e. task combineJSonFiles { doLast { ... } }.
