Hello, I am new to Altova MapForce. I wanted to know the procedure for adding/importing JAR files in MapForce.
Regards
RBRK.
If you mean "how do I add jar files that contain user-defined functions for use in my mappings", that is covered in the documentation.
If you mean "how do I add jar files at runtime when running a map", that is what the java classpath is for.
If it's the first one, then read the documentation, make your attempts, and post the results here if it doesn't work. As posed, your question doesn't give us enough information about what you are trying to do. You will get a much faster resolution, and a much more realistic answer, if you show your work when you describe what you are trying to do.
I want to write a program that parses yum config files. These files look like this:
[google-chrome]
name=google-chrome - 64-bit
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
This format looks like it is very easy to parse, but I do not want to reinvent the wheel. If there is an existing library that can generically parse this format, I want to use it.
But how do you find a library for something you cannot name?
The file extension is no help here. Searching for ".repo" does not yield any general results besides yum itself.
So, please teach me how to fish:
How do I effectively find the name of a file format that is unknown to me?
Identifying an unknown file format can be a pain.
But you have some options. I will start with a very obvious one.
Ask
Showing the format to other people is perhaps the best way to find out its name.
Someone will likely recognize it. And if no one does, chances are good that
you have a proprietary file format in front of you.
In the case of your yum repository file, I would say it is a plain old INI file.
But let's do some more research on this.
Reverse Engineering
Reverse engineering may be your best bet if nobody recognizes your format.
Take the reference implementation and find out what they are using to parse the format.
Luckily, yum is open source, so it is easy to look up.
Let's see what the yum authors use to parse their repo file:
try:
    ini = INIConfig(open(repo.repofile))
except:
    return None
https://github.com/rpm-software-management/yum/blob/master/yum/config.py#L1304
Now the corresponding import can be found here:
from iniparse import INIConfig
https://github.com/rpm-software-management/yum/blob/master/yum/config.py#L32
This leads us to a library called iniparse (https://pypi.org/project/iniparse/).
So yum uses an INI parser for its config files.
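The same conclusion holds whatever language you are working in: any generic INI parser should cope with .repo files. As a minimal sketch in Java, using the ini4j library (my arbitrary choice here, not something yum itself uses):

import java.io.File;
import org.ini4j.Ini;

public class RepoReader {
    public static void main(String[] args) throws Exception {
        // a .repo file parses as a plain INI file
        Ini ini = new Ini(new File("google-chrome.repo"));
        // section name in brackets first, then the key left of '='
        System.out.println(ini.get("google-chrome", "baseurl"));
    }
}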
Since navigating somewhat large projects can be intimidating,
I will show you how to quickly find these kinds of code passages.
I use a tool called ripgrep (https://github.com/BurntSushi/ripgrep).
My initial anchors are usually well-known file paths. In the case of yum, I took /etc/yum.repos.d as my initial search term:
# assuming you are in the root directory of yum's source code
rg /etc/yum.repos.d yum
yum/config.py
769: reposdir = ListOption(['/etc/yum/repos.d', '/etc/yum.repos.d'])
yum/__init__.py
556: # (typically /etc/yum/repos.d)
This narrows it down to two files. If you continue searching with terms like read or parse,
you will quickly find the results you want.
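For example, one hypothetical follow-up search over the file found above:

rg -n 'parse|read' yum/config.py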
What if you do not have the reference source?
Well, sometimes you have no access to the source code of a reference implementation, e.g. because the reference implementation is closed source.
Try to break the format. Insert some garbage and observe the log files afterwards. If you are lucky, you may find
a helpful error message which might give you hints about the format.
If you feel very brave, you can try to use an actual decompiler as well. This may or may not be illegal and may or may not be a waste of time.
I personally would only do this as a last resort.
I used the sentence
He died in the day before yesterday.
as input to CoreNLP NER.
On the server, I got one result. Locally, using the same sentence, I got:
He(O) died(O) in(O) the(O) day(TIME) before(O) yesterday(O) .(O)
So, how can I get the same result as the server?
In order to increase the likelihood of getting a relevant answer, you may want to rephrase your question and provide a bit more information. And as a bonus, in the process of doing so, you may even find out the answer yourself ;)
For example, what url are you using to get your server result? When I check here: http://nlp.stanford.edu:8080/ner/process , I can select multiple models for English. Not sure which version their API is based on (would say the most recent stable version, but I don't know). Then the title of your post suggests you are using 3.8 locally, but it wouldn't hurt to specify the relevant piece in your pom.xml file, or the models you downloaded yourself.
What model are you using in your code? How are you calling it? (i.e. any other annotators in your pipeline that could be relevant for NER output)
Are you even calling it from code (if so, Java? Python?), or using it from the command line?
A lot of this is summarised in https://stackoverflow.com/help/how-to-ask and it's not that long to read through ;)
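That said, here is a minimal Java sketch of a local pipeline with the NER model pinned explicitly. The model path below is the standard 3-class English model from the CoreNLP models jar; whether the server uses the same model (or, say, the 7-class one, which includes DATE/TIME tags) is exactly the kind of detail worth checking:

import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class NerCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner");
        // pin the NER model so local and server runs are comparable
        props.setProperty("ner.model",
            "edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation("He died in the day before yesterday.");
        pipeline.annotate(doc);
        for (CoreLabel token : doc.get(CoreAnnotations.TokensAnnotation.class)) {
            System.out.println(token.word() + "("
                + token.get(CoreAnnotations.NamedEntityTagAnnotation.class) + ")");
        }
    }
}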
There must be some application that does the following, but I am not even sure how to Google for it.
The dilemma is that we have to trace defects back to their source, which requires seeing how certain fields in the output XML were generated by the XSLT. The hard part is spending hours in the XSLT and XML trying to figure out where a field was even generated. Even debugging is difficult when you are working with multiple XSLT transformations and edits, since you still need to find the primary keys involved in the specific scenario for each transform.
Is there some software program that could take an XSLT stylesheet and do one of two things:
1. Feed it an output field name, and it generates a list of all the possible criteria that could produce that field, so you can figure out which one of a dozen places in the XSLT matches your scenario, or
2. Somehow convert the XSLT into some more readable if/then-type format (kind of like how you can use Javadoc to produce readable documentation)?
You don't say what tools you are currently using. Tools like oXygen and Stylus Studio have quite sophisticated XSLT debugging capabilities. oXygen's output mapping tool (see http://www.oxygenxml.com/xml_editor/working_with_xslt_debugger.html#xsltOutputMapping) sounds very much like the thing you are asking for.
Using schema-aware stylesheets can greatly ease debugging. At least in the Saxon implementation, if you declare in your stylesheet that you want the output to be valid against a particular schema, then if it isn't, Saxon will tell you what instruction in the stylesheet caused invalid output to be generated. Sometimes it will show you the error at stylesheet compile time, before you even supply a source document. This capability is greatly under-used, in my view. More details here: http://www.stylusstudio.com/schema_aware.html
It's an interesting question. Your suggestions are also interesting but would be quite challenging to develop; I know of no COTS or FOSS solution to either, but here are some thoughts:
1. Your first possibility is essentially data-flow analysis from compiler design. I know of no tools that expose this to the user, but you might ask XSLT processor developers whether they have ever considered externalizing such an analysis in a manner that would be useful to XSLT developers.
2. Your second possibility is essentially a documentation generator against XSLT source. I have actually helped to complete one for a client in financial services in the past (see Document XSLT Automatically), but the solution was the property of the client and was never released publicly as far as I know. It would be possible to recreate such a meta-transformation between XSLT input and HTML or DocBook output, but it is not simple to do in the most general case.
There's another approach that you might consider:
Tighten up your interface definition. In your comment, you mention uncertainty as to whether a problem's source is bad data from the sender or a bug in the XSLT. You would be well served by a stricter interface definition. You could implement this via better typing in XSD, the addition of xsd:assert statements if XSD 1.1 is an option, or a Schematron-based interface-checking layer, which would give you the full power of XPath-based assertions over the input. Having such an improved and more specific interface definition would help both you and your clients know what should and should not be sent into your systems.
I am writing a complex application (a compiler analysis). To debug it I need to examine the application's execution trace to determine how its values and data structures evolve during its execution. It is quite common for me to generate megabytes of text output for a single run and sifting my way through all that is very labor-intensive. To help me manage these logs I've written my own library that formats them in HTML and makes it easy to color text from different code regions and indent code in called functions. An example of the output is here.
My question is: is there any better solution than my own home-spun library? I need some way to emit debug logs that may include arbitrary text and images and visually structure them and, if possible, index them so that I can easily find the region of the output I'm most interested in. Is there anything like this out there?
Although you didn't mention which language you're using, I'd like to propose the Apache Log4XXX family: http://logging.apache.org/
It offers customizable detail levels as well as tag-driven loggers. The GUI tool (Chainsaw) can be combined with the good old grep approach (so you see only what you're interested in at the moment).
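For example, a minimal log4j 1.x sketch with one logger per code region (the logger names are illustrative); the logger names are what Chainsaw filters and grep patterns can key on:

import org.apache.log4j.Logger;

public class AnalysisRun {
    // one logger per code region; filter on the logger name later
    private static final Logger PARSE = Logger.getLogger("analysis.parse");
    private static final Logger SOLVE = Logger.getLogger("analysis.solve");

    public void run() {
        PARSE.debug("entering parse phase");
        SOLVE.debug("iteration 12: fixpoint reached");
    }
}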
Colorizing, search and filtering using an expression syntax is available in the latest developer snapshot of Chainsaw. The expression syntax also supports regular expressions (using the 'like' keyword).
Chainsaw can parse any regular text log file, not just log files generated by log4j.
The latest developer snapshot of Chainsaw is available here:
http://people.apache.org/~sdeboy
The "File, Load Chainsaw configuration" menu item is where you define the format and location of the log file you want to process, and the expression syntax can be found in the tutorial, available from the Help menu.
Feel free to email the log4j users list if you have additional questions.
I created a framework that might help you: https://github.com/pablito900/VisualLogs
After the reduce phase in Hadoop, I wanted the output file names to be something meaningful, depending on the input key value. However, I have not been successful in following the example in "Hadoop: The Definitive Guide", which uses MultipleTextOutputFormat to do this. Is the reason that it is based on the old API and does not work with the new API?
Can anybody hint at the solution or point me to the relevant documentation?
You are probably right: things that worked in the old API don't always work in the new one.
There is a "new way" of doing this now, called MultipleOutputs.