How do multiple languages interact in one project? - interop

I heard some people program in multiple languages in one project. I can't imagine how the languages interact with each other.
I mean, there is no Java method like
myProgram.callCfunction(parameters);
That never happens, right? Or am I wrong?

Having multiple languages in one project is actually quite common, but the principles behind it are not always simple.
In the simple case, different languages are compiled to the same target. For example, C and C++ code is typically compiled to native machine code, while C# and VB.NET are compiled to IL (the intermediate language understood by the .NET runtime).
It gets more difficult if the languages/compilers use a different type system. There are many different ways basic data types such as integers, floats, and doubles can be represented internally, and there are even more ways to represent strings. When passing values between the different languages, you must make sure that both sides interpret the type the same way or, if not, that the types are correctly mapped. This sort of type mapping is also known as marshalling.
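As a tiny illustration of why marshalling matters, here is a hedged Java sketch (the class name and values are mine, not from the answer) that lays out an int and a double in a byte buffer with an explicit byte order, which is the kind of agreement you need before handing raw data to code written in another language:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MarshalDemo {
    public static void main(String[] args) {
        // Hypothetical example: pack an int and a double the way a
        // little-endian C consumer would expect to read them.
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.order(ByteOrder.LITTLE_ENDIAN);   // be explicit: Java buffers default to big-endian
        buf.putInt(42);                       // 4 bytes
        buf.putDouble(3.14);                  // 8 bytes

        // These raw bytes could now go to native code, a pipe, or a socket;
        // the other side must agree on exactly this layout.
        byte[] raw = buf.array();
        System.out.println("packed " + raw.length + " bytes");
    }
}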
Classic examples of interoperability between different programming languages are (mostly from the Windows world):
The various languages available for the .NET platform. This includes C#, VB.NET, J#, IronRuby, F#, XSLT, and many other less popular languages.
Native COM components written in C++ or VB can be used from a huge variety of languages: VBScript, VB, all .NET languages, Java.
Win32 API functions can be called from .NET or VB.
IPC (inter-process communication).
CORBA, probably the most comprehensive (and most complex) approach.
Web services and other service-oriented architectures, probably the most modern approach.
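To make that last point concrete, here is a minimal, hedged Java sketch of calling an HTTP service using only the standard library; the URL is a placeholder, and the service on the other end could be written in any language:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebServiceClient {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; in a real project this would be your service URL.
        URL url = new URL("http://localhost:8080/api/status");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // Read the response body as text; the server could be PHP, Python,
        // C#, anything: the wire format (HTTP plus text) is the contract.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}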

Generally, any decently sized web project will use about five languages: HTML, CSS, JavaScript, some kind of server-side "getting things done" language (ASP, JSP, CGI scripts with Perl, PHP, etc.), and some variant of SQL for database connectivity.
(This is, of course, hand-waving away the argument about whether or not HTML and CSS count as programming languages – I’m in the “they are, but just not Turing-complete languages” camp, but that’s a whole other thread.)
Some examples of how all those work together:
If you’re going the best-practices route, the structure of a web page is in HTML, and the instructions for how to display it are in CSS – which could be in the same file, but don’t have to be. The CSS contains a bunch of classes, which the HTML refers to, and it’s up to the browser to figure out how to click them together.
Taking all that a step further, any JavaScript on that page can alter any of the HTML/CSS that is present (change the contents of HTML entities, swap out one CSS class for another, change the behavior of the CSS, and so on). It does this via something called the Document Object Model, which is essentially a language- and platform-independent API for manipulating HTML pages in an object-like manner (at which point I’ll back away slowly and just provide a link to the relevant wiki article.)
But then, where does all the HTML / CSS / JavaScript come from? That’s what the server-side language does. In the simplest form, the server-side language is a program that returns a giant string holding an HTML page as its output. This, obviously, can get much more complex: HTML forms and query string parameters can be used as input for our server-side program, and then you have the whole AJAX thing where the JavaScript sends data directly to the server language as well. You can also get fancy, where the server language customizes the HTML, CSS, and JavaScript that gets spit out – essentially, you have a program in one language writing a program in another language.
The server-side-language-to-SQL connection works much the same way. There are a lot of ways to make it both more complex and safer, but the simplest way is for your server language to dynamically build a string with a SQL command in it, hand that to the database via some kind of connector, and get back a result set. (This is a case where you really do have a function that boils down to someValue = database.executeThisSQLCommand( SQLString ).)
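In Java, that handoff looks roughly like the hedged sketch below using plain JDBC; the connection URL, table, and column names are made up, and real code would use parameterized queries instead of building the string by hand:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL and schema; adjust for your database.
        try (Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost/shop", "user", "secret");
             Statement stmt = db.createStatement()) {

            // The "program writing a program" part: build a SQL string...
            String sql = "SELECT name, price FROM products WHERE price < 10";

            // ...hand it to the database through the connector, get a result set back.
            try (ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + ": " + rs.getDouble("price"));
                }
            }
        }
    }
}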
So to wrap this up, different languages in this case communicate either by actually writing programs in each other or by handing data around in very simple, easy-to-parse formats that everybody can understand (strings, mainly).

Using multiple languages together is called "interoperability", or "interop" for short.
Your example is wrong. Java can call C functions.
The language provides a mechanism for interoperability.
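In Java's case that mechanism is JNI. A hedged sketch of what the Java side looks like (the class, library, and function names here are invented for illustration): declare a native method, load the shared library that implements it in C, and then call it like any other Java method.
public class NativeBridge {
    // Implemented in C; the JNI header for it can be generated with "javac -h".
    public static native int add(int a, int b);

    static {
        // Loads libnativebridge.so / nativebridge.dll, which contains the C implementation.
        System.loadLibrary("nativebridge");
    }

    public static void main(String[] args) {
        // From the caller's point of view, this is effectively myProgram.callCfunction(...).
        System.out.println(NativeBridge.add(2, 3));
    }
}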
In the case of .NET, languages are compiled into IL as part of the CLI. Thus any .NET language can interoperate with (call methods defined in) modules written in any other .NET language.
As an example:
I can define a method in C#
static void Hello(){ Console.WriteLine("Hello World");}
And I can call it from Python (IronPython)
Hello()
And get the expected output.
Generally speaking, some languages interop better than others, especially if the language authors specifically made interop a feature of the language.

Multiple languages can interact via:
Piped input/output (ANY language can do this, because input and output must by necessity be implemented in every non-toy language). See the sketch after this list.
Having code in one language compile to a native library while the other supports calling native code.
Communicating over a loopback network connection. You can run into difficulties with firewall interference this way.
Databases. These can be thought of as a "universal" data storage format, and hence can be accessed by most languages with database extensions. This generally requires one program to finish operating before the next program can access the database. In addition, all 'communications' are generally written to disk.
If the languages involved run on the same runtime (e.g. .NET, the JVM), then you can generally pass object data from one language directly to the other with little impedance.
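Here is the sketch promised above for the piped input/output case: a hedged Java example (the script name and arguments are placeholders) that launches a program written in another language and reads its standard output as plain text.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PipeDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical helper script written in another language (Python here).
        ProcessBuilder pb = new ProcessBuilder("python", "analyze.py", "input.csv");
        pb.redirectErrorStream(true);          // merge stderr into stdout
        Process proc = pb.start();

        // Everything crossing the language boundary is just text on a pipe.
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(proc.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println("from Python: " + line);
            }
        }
        int exitCode = proc.waitFor();
        System.out.println("exited with " + exitCode);
    }
}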
In almost every case, you have to convert any communication to a common format before it can be exchanged (the exception being languages on the same runtime). This is why multiple languages are rarely used in one project.

I work on a large enterprise project which is composed of (at the last count) about 8 languages. The majority of the communication is via an enterprise-level message bus which contains bindings for multiple languages to tap into and pass data back and forth. It's called TIBCO.

There are many different ways you can use different languages in one project.
There are two main categories that come to mind:
Using different languages together to build one application. For example, using Java to build the GUI and using JNI to access a C API (so, answering your question, you can call C functions from Java ;))
Using different languages together in one project when they are not part of the same application. For example, I am currently working on an iPhone app that uses a large amount of text. I am currently using three languages: Python (to work with the original sources of the text), SQL (to put the results of the Python app into a format easily accessible from the iPhone sqlite3 API), and Objective-C to build the actual app. Even though the final product will only be Objective-C, I've used two other languages to get to the final product.

You could have an app where the bulk of the work is done in Java, but there might be some portion of it, like maybe a data parser, written in Python or what have you. They're almost two separate apps, really; perhaps the parser just does some work on files and then your main app in Java uses them for something. If someone were to ask me what I used in this project, I'd say "Java and Python."

There are various ways that multiple languages can be used in one project. Some examples:
You can write a DLL in, say, C, and then use that library from, say, a VB program.
You could write a server program in, say, C++, and have lots of different language implementations of the client.
A web project often uses lots of languages; for example a server program, written in, say, Java (a programming language), which fetches data from a database using SQL (a query language), sends the result to the browser in HTML (a markup language), which the user can interact with using Javascript (a scripting language)...

Badly. If there is no urgent need, stick to a single language. You're increasing dependencies and complexity. But when you have existing code providing interesting functionality, it can be easier to glue it together than to recreate it.

It depends on the type of project. If you want to experiment, you can set up a web project, in .NET, and change the language on a page by page basis. It does not work as you show in your pseudocode, but it is multiple languages. Of course, the actual code directory must be a single language.

There are a couple of ways in which code in different languages can interact directly. As long as the data being passed between the code is in the right format, at the bits-and-bytes level, there is no reason why different languages can't interoperate. This approach is used in traditional Windows DLL development. Even on different platforms, if you can get the format correct (look at big/little endianness if interested), it will work, as long as your linker (not compiler) knows how to join the code together.
Beyond that, there are numerous other ways in which languages can talk to each other. In the .NET world, code is compiled down to IL, which is the same for every language; in this way C# and VB.NET are the same under the hood and can call and work with each other seamlessly.

Just to add to the list of examples, it's fairly common to optimize Python code by rewriting the hot spots in C or C++, or to write a C library that binds another library to Python.

Related

How do you typically make your CPLEX Studio models more user friendly?

Non-optimization question about CPLEX Studio....
So you make your awesome OPL model in CPLEX Studio and it brilliantly solves your amazeballs problem.
Suppose you wanted to allow other users to access this model in a nice, user-friendly way: basically, specify some simple parameters in a simple user interface (without having to edit code, etc.), then output the solution in some arbitrary way you coded up, like an Excel file, HTML report, or whatever.
1) What are the options for a user interface, without adding in too much other technology?
(eg. I currently have a Java program doing exactly this, but I'd rather not rely on Java code / programmers / compiling / hosting source code etc)
2) What are the options for triggering some user friendly output, eg. in a standard format like Excel, some HTML report you coded up, or maybe just triggering a Python script, etc?
(eg. I currently render them in a Java FX application on grids, charts and HTML windows, I would prefer something more lightweight and accessible, like Python etc, HTML5 output)
3) In industry, what is the typical role of CPLEX in a production environment: Is it just called by an external application (Java/.NET etc), or is CPLEX Studio used more actively?
Embed the optimisation model in wider business applications using Java, C#, Python, C++, whatever. Make it just part of the normal business systems that people use. It is just software. Make it so that the users really appreciate that the new software actually benefits them each time they use it. Make it easier to use the model than to not use it. Hide the model inside other software. Probably never even mention optimisation to your end users.
The best model in the world that could deliver amazing benefits will actually achieve nothing of practical value if it doesn't actually get used.
If your target audience or users have to do extra stuff or perform extra steps to use your model, then it will likely not get used very much and may wither and die. If they have to learn new applications etc to use it, it probably won't get used by most people.
By making your model part of their normal day-to-day processes, it will get used, and the practical benefits will come.
I have implemented and support a number of live optimisation applications in several large companies, making decisions that directly affect billions of pounds/dollars of products/revenues per year. Almost all of them have the real optimisation models totally hidden from the users, most of whom have no idea of optimisation or CPLEX; the software in their business systems just works.
There are many options. You may write the model with an algebraic modeling language (AML) like OPL, or with a general-purpose language (GPL).
If you use OPL, then you may call your model from many GPLs, such as C++, Java, Python ...
Or you could plug that model into an existing application.
You could call OPL from Excel or DSX Python Notebook as can be read at https://www.ibm.com/developerworks/community/forums/html/topic?id=306f3ded-33b8-4d9a-8568-b4288aa64265&ps=25
See the survey I mentioned in 1.
Some users use the CPLEX OPL IDE in order to make decisions and run simulations.
Others use Decision Optimization Center: https://www.ibm.com/us-en/marketplace/ibm-decision-optimization-center
Finally, some write new applications from scratch or plug the model into an existing application.
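For the "plug the model into an existing application" route, CPLEX can also be driven straight from Java through the Concert API. The toy two-variable LP below is only an assumption-laden sketch (it assumes cplex.jar and the native CPLEX library are on the path), not anything from the answer above; the point is that the surrounding application decides how results are presented to users:
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

public class TinyLp {
    public static void main(String[] args) throws Exception {
        IloCplex cplex = new IloCplex();

        // Toy model: maximize 3x + 5y subject to x + y <= 100.
        IloNumVar x = cplex.numVar(0, 100, "x");
        IloNumVar y = cplex.numVar(0, 100, "y");
        cplex.addMaximize(cplex.sum(cplex.prod(3, x), cplex.prod(5, y)));
        cplex.addLe(cplex.sum(x, y), 100);

        if (cplex.solve()) {
            // The calling application decides how to show this: write an Excel
            // file, render an HTML report, push it to a web UI, and so on.
            System.out.println("objective = " + cplex.getObjValue());
            System.out.println("x = " + cplex.getValue(x) + ", y = " + cplex.getValue(y));
        }
        cplex.end();
    }
}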

<table>-like UI element for wxHaskell/wxWidgets

I am working on writing a GUI in Haskell (and Ur/Web, but that is another story), and have several development branches using different libraries and approaches that I am working on contemporaneously. In trying to migrate some of the code I had from browser-backed UI libraries with HTML elements (threepenny-gui, to be precise) to native GUI applets using a WX graphical backend (wxHaskell, reactive-banana), I encountered some trouble figuring out how to migrate some code I had that was based on constructing a <table> element to an equivalent wxWidgets construct. It seemed to me that there was no easy way to implement such a thing on my own, and no native equivalent. I am looking for implementation suggestions, pointers to extant implementations, suitable alternatives, and so forth. I can provide more in-depth specifics of the design I am looking for, should that be required.
The HTML table is merely used for aligning and displaying data, where one cell in every row is a reactive control, and the number of rows displayed at any given time can also vary reactively.
An HTML table can contain just about anything in its cells, so it's too rich to be represented by any native control. It's really hard to make a recommendation without knowing exactly what you have in the table, but the different possibilities are:
wxHtmlWindow: this can be used to reuse your HTML provided it's simple enough (HTML4 basically) and you can embed native controls into it if needed.
wxGrid: this is the most flexible widget, but it's not native.
wxDataViewCtrl: this is a native control under GTK and OS X (but not MSW, where you'd need to use wxListCtrl for a 100% native approach), but it's pretty limited compared to either of the above solutions.

Most suitable language for cheque/check printing on Windows Platform

I need to create a simple module/executable that can print checks (fill out the details). The details need to be retrieved from an existing Oracle 9i DB on Windows (XP or later).
Obviously, I shall need to define the pixel positions where the details (name, amount, etc.) are to be filled in.
The major constraint is that the client needs / strongly prefers an executable, not code that is either interpreted or uses a VM. This is so that installation is extremely simple. This requirement really cannot be changed.
Now, the question is, how do I do it.
(.NET, java and python are out of the question, unless there is a way around the VMs)
I have never worked with MFC or other native windows APIs. I am also unfamiliar with GDI.
Do I have any other option? Any language that can abstract the complexities and can be packed into a x86 binary?
Also, if not then any code help with GDI would be appreciated.
The most obvious possibilities would probably be C, C++, and Delphi. There are a few others such as Ada (e.g., Gnat), but offhand I don't see a lot of reason to favor them (especially for a job this small).
At least the way I'd write this, the language would be almost irrelevant. I'd have it run almost entirely by an external configuration file that gave the name of each field, and the location where it should be printed. I'd probably use something like MM_LOMETRIC mapping mode, so Windows will handle most of the translation to real-world coordinates (and use tenths of a millimeter in the configuration file, so you can use the coordinates without any translation).
Probably the more difficult part of this will be the database connectivity. There are various libraries around to help out with that, so this won't be terribly difficult, but it's still not (quite) as trivial as the drawing part.

Why Play! framework chose Groovy for template engine

From their website http://www.playframework.org/documentation/1.0/faq
"
The biggest CPU consumer in the Play stack at this time is the template engine based on Groovy. But as Play applications are easily scalable, it is not really a problem if you need to serve a very high traffic: you can balance the load between several servers. And we hope for a performance gain at this level with the new JDK7 and its better support of dynamic languages.
"
So there are no better choices? What about JSP?
JSP is not feasible, as every JSP compiles to a Servlet, and the servlet API provides things like the server-side session, which is not compatible with the RESTful paradigm. We don't want to go back to the dark ages of badly scalable server-side sessions, back-button problems in the browser, reposts, etc.
Japid templates are interesting, but they are not backed by a great community and perhaps didn't even exist at the time play was created (I don't know for sure though). I tried Japid as a replacement for the Groovy templates in my own application and found out in a JMeter test that the benefit would be only marginal, 10% to max. 25% improvement.
I guess in the end it all depends on your scalability requirements and the structure of your pages. I picked the 90% use case of my application and did the test. To me, the little improvement did not justify for the additional costs of the extra dependency (I like to keep dependencies to a minimum for maintainability).
Groovy templates are not bad or slow in general. Use typed variables wherever possible (instead of "def"), even in closures! Keep values of accessed properties in local variables. Do reasonable results paging. Then keep your fingers crossed that GSP might be able to run on groovy++ in the future and you're done ;)
To me, the question would not be why they used Groovy in the views; rather, I miss it so much in the controller layer. Groovy would make coding the controller behaviour a lot easier, IMHO.
First off, JSP was not a valid option for Play since it chose not to go down the Java EE route (which JSP is part of). Instead, Play chose to use Groovy as an intuitive, simple but powerful templating engine.
However, one of Play's greatest features is that it is a pluggable system, meaning that many parts of the core system can simply be replaced. This includes the template engine, and there are a couple that are already available.
The most popular is Japid. It claims to be 2-20x faster than the standard templating engine, and has been around for a while. For more info, see here http://www.playframework.org/modules/japid.
A second option is Cambridge, although this has only been out for a little while, but is reasonably active in the message boards (see https://groups.google.com/forum/?hl=en#!searchin/play-framework/cambridge/play-framework/IxSei-9BGq8/X-3pF5tWAKAJ).
I tend to stick to Groovy, as I like the way it works, and have not found it to be too bad in terms of performance, but every application is individual, so your own performance tests should lead you down your own particular path.
Yes, there is Japid, which is much, much faster.
http://www.playframework.org/modules/japid
I totally agree with the choice of ease over speed the Play Framework designers made here. My guess is that if the templating starts getting in the way in terms of performance, you can (and should!) measure the slow bits, and refactor them into fast tags where possible. With that, you're likely to save 80% of CPU by moving 20% into fast tags, leaving you with flexibility and adequate speed.
Having said that, I'm looking forward to an experiment I'm planning to see how well the new Scala templates (loosely "borrowed" from Razor.NET - awesome clean syntax) work with Java controllers/models. The Scala backend isn't there yet in terms of comparative ease, but the templating language certainly rocks.
I may be late to the party in 2016. Play 2 is out, and the JDK (not to mention the hardware) drastically improved. I am not using Play or Spring Boot, since my platform doesn't need them - just pure runtime text/HTML generation from templates.
First, when talking about Groovy templates, there is more than one. I use the original Groovy SimpleTemplateEngine for anything from emails to rich web pages, whereas most people nowadays favor the "advanced" MarkupTemplateEngine with its non-HTML builder syntax. I did not go that route because of the IDE (e.g. IntelliJ) support for JSP-ish HTML files with JavaScript: catching unclosed tags, braces, etc. Besides, how would you include JavaScript in a curly-brace-based "builder"-style template?
Whichever you choose, both SimpleTemplateEngine and MarkupTemplateEngine statically compile their templates, even though the Groovy doc only mentions it for the latter. Why wouldn't it generate a class for the former? I didn't compare them against each other, but I'd expect SimpleTemplateEngine to be faster, since it is... well, simpler: it doesn't translate any syntax into String concatenations with ifs and loops in between.
And it is indeed very fast. I was concerned about invoking it in a loop. Doesn't make any difference. There is no overhead, as the template is compiled once.
I use multiple small templates responsible for generating individual form control markup (HTML + JS) to generate a composite form, included in a higher-level container, included in another container, and so on, until the entire page is formed. Decomposing your view like that makes it, as you already guessed, modular, encapsulated, and "object-oriented" - composed of many individual MVC components building upon each other. Sort of like good old custom JSP tags, only evaluated at runtime and compatible with technologies like Spring Boot, if you cannot resist trendy resume-boosting stuff like that.
A test form with 100 fields (all complex "smart" controls with encapsulated state management and behavior) renders in 150ms the first time, and then in 10-14ms thereafter, in an IDE debugger on my memory-starved four-year-old notebook. I also verified it is thread-safe, since the Groovy docs never mention it explicitly. Why wouldn't it be, if it is compiled into a stateless Groovy class like any other? Call createTemplate() once, store the Template somewhere, and use it (call Template.make()) in your servlet or another concurrent context. Obviously I'll never have a 100-field form in a real application. Anyone who does needs to reconsider his/her UX.
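To make the createTemplate()/make() flow concrete, here is a minimal hedged sketch of driving SimpleTemplateEngine, written in plain Java for illustration (Groovy on the classpath is assumed; the template text and binding keys are invented): the template is compiled once, and make() is then cheap to call repeatedly.
import groovy.text.SimpleTemplateEngine;
import groovy.text.Template;
import java.util.HashMap;
import java.util.Map;

public class TemplateDemo {
    public static void main(String[] args) throws Exception {
        // Compile the template once (this is the expensive step)...
        Template tmpl = new SimpleTemplateEngine().createTemplate(
                "<label>${label}</label><input name=\"${name}\" value=\"${value}\"/>");

        // ...then render it as many times as needed with different bindings.
        for (int i = 0; i < 3; i++) {
            Map<String, Object> binding = new HashMap<>();
            binding.put("label", "Field " + i);
            binding.put("name", "field" + i);
            binding.put("value", i);
            System.out.println(tmpl.make(binding).toString());
        }
    }
}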
The performance is quite adequate. I'd even accept one second to render a 100-field page. Think about it: you don't need ultimate nanotrading or nuclear-missile-tracking performance in a web app. If you do, pick Jamon: http://www.jamon.org/Overview.html, which generates the Java class you'd normally write to concatenate Strings. I didn't test it, as I don't like extra compilation steps (automatically executed by Maven, but still). Compiled Groovy bytecode is good enough for me compared to compiled, strongly-typed Java. The difference would be marginal unless you are doing something complex, which you shouldn't do inside a template (see below). Playing with typed Groovy variables vs. def, as suggested in this thread, only saved me a couple of milliseconds on that 100-template run.
Templates should not have much procedural logic (internal variables, ifs and loops) anyway - that's the controller's, not view's responsibility. That said, ifs and loops are a must for a template engine. Why would one use Handlebars/Mustache, if he/she can simply call String.replace()?
The rest of the template engines are also irrelevant. No String-concatenation technology (e.g. Velocity or Freemarker) or interpreted JS-based one (e.g. Jade) would ever beat Jamon's most direct approach performance-wise. And being a Java programmer, you want to use your favorite language/syntax: either directly (Jamon) or something 90% close to Java, which Groovy is (being a scripting-centric, concise, interpreted Java). I wouldn't comment on Scala; that's a matter of preference. Its allegedly "concise" syntax (less and less relevant with Java 8+) comes at a price, and only matters for complex loops. You do not want to write your entire app inside one template, like I already said. A couple of loops and up to ten if statements, max.
And, like everyone mentioned, intuitive syntax and ease of use are the key. They drastically reduce the number of bugs. A good (additional) server costs $1000, while developer salaries (to fix all of the bugs stemming from the complexity of marginal performance optimization) are 100 times higher.

Are there any good reference implementations available for command line implementations for embedded systems?

I am aware that this is nothing new and has been done several times. But I am looking for some reference implementation (or even just reference design) as a "best practices guide". We have a real-time embedded environment and the idea is to be able to use a "debug shell" in order to invoke some commands. Example: "SomeDevice print reg xyz" will request the SomeDevice sub-system to print the value of the register named xyz.
I have a small set of routines that is essentially made up of 3 functions and a lookup table:
a function that gathers a command line - it's simple; there's no command line history or anything, just the ability to backspace or press escape to discard the whole thing. But if I thought fancier editing capabilities were needed, it wouldn't be too hard to add them here.
a function that parses a line of text argc/argv style (see Parse string into argv/argc for some ideas on this)
a function that takes the first arg on the parsed command line and looks it up in a table of commands & function pointers to determine which function to call for the command, so the command handlers just need to match the prototype:
int command_handler( int argc, char* argv[]);
Then that function is called with the appropriate argc/argv parameters.
Actually, the lookup table also has pointers to basic help text for each command, and if the command is followed by '-?' or '/?', that bit of help text is displayed. Also, if 'help' is used as a command, the command table is dumped (possibly only a subset, if a parameter is passed to the 'help' command).
Sorry, I can't post the actual source - but it's pretty simple and straightforward to implement, and functional enough for pretty much all the command-line handling needs I've had for embedded systems development.
You might bristle at this response, but many years ago we did something like this for a large-scale embedded telecom system using lex/yacc (nowadays I guess it would be flex/bison, this was literally 20 years ago).
Define your grammar, define ranges for parameters, etc... and then let lex/yacc generate the code.
There is a bit of a learning curve, as opposed to rolling a 1-off custom implementation, but then you can extend the grammar, add new commands & parameters, change ranges, etc... extremely quickly.
You could check out libcli. It emulates Cisco's CLI and apparently also includes a telnet server. That might be more than you are looking for, but it might still be useful as a reference.
If your needs are quite basic, a debug menu which accepts simple keystrokes, rather than a command shell, is one way of doing this.
For registers and RAM, you could have a sub-menu which just does a memory dump on demand.
Likewise, to enable or disable individual features, you can control them via keystrokes from the main menu or sub-menus.
One way of implementing this is via a simple state machine. Each screen has a corresponding state which waits for a keystroke, and then changes state and/or updates the screen as required.
vxWorks includes a command shell that embeds the symbol table and implements a C expression evaluator, so that you can call functions, evaluate expressions, and access global symbols at runtime. The expression evaluator supports integer and string constants.
When I worked on a project that migrated from vxWorks to embOS, I implemented the same functionality. Embedding the symbol table required a bit of gymnastics, since it does not exist until after linking. I used a post-build step to parse the output of the GNU nm tool to create a symbol table as a separate load module. In an earlier version I did not embed the symbol table at all, but rather created a host-shell program that ran on the development host where the symbol table resided, and communicated with a debug stub on the target that could perform function calls to arbitrary addresses and read/write arbitrary memory. This approach is better suited to memory-constrained devices, but you have to be careful that the symbol table you are using and the code on the target are from the same build. Again, that was an idea I borrowed from vxWorks, which supports both the target- and host-based shells with the same functionality. For the host shell, vxWorks checksums the code to ensure the symbol table matches; in my case it was a manual (and error-prone) process, which is why I implemented the embedded symbol table.
Although initially I only implemented memory read/write and function-call capability, I later added an expression evaluator based on the algorithm (but not the code) described here. Then after that I added simple scripting capabilities in the form of if-else, while, and procedure-call constructs (using a very simple non-C syntax). So if you wanted new functionality or a new test, you could either write a new function or create a script (if performance was not an issue), so the functions were rather like 'built-ins' to the scripting language.
To perform the arbitrary function calls, I used a function-pointer typedef that took an arbitrarily large number (24) of arguments; then, using the symbol table, you find the function address, cast it to the function-pointer type, and pass it the real arguments, plus enough dummy arguments to make up the expected number and thus create a suitable (if wasteful) stack frame.
On other systems I have implemented a Forth threaded interpreter, which is a very simple language to implement, but has a less than user friendly syntax perhaps. You could equally embed an existing solution such as Lua or Ch.
For a small, lightweight thing you could use Forth. It's easy to get going (Forth kernels are SMALL).
Look at figForth, LINa, and GnuForth.
Disclaimer: I don't Forth, but OpenBoot and the PCI bus do, and I've used them and they work really well.
Alternative UIs
Deploy a web server on your embedded device instead. Even serial will work with SLIP, and the UI can be reasonably complex (or even serve up a JAR and get really, really complex).
If you really need a CLI, then you can point at a link and get a telnet.
One alternative is to use a very simple binary protocol to transfer the data you need, and then make a user interface on the PC, using e.g. Python or whatever is your favourite development tool.
The advantage is that it minimises the code in the embedded device, and shifts as much of it as possible to the PC side. That's good because:
It uses up less embedded code space—much of the code is on the PC instead.
In many cases it's easier to develop a given functionality on the PC, with the PC's greater tools and resources.
It gives you more interface options. You can use just a command line interface if you want. Or, you could go for a GUI, with graphs, data logging, whatever fancy stuff you might want.
It gives you flexibility. Embedded code is harder to upgrade than PC code. You can change and improve your PC-based tool whenever you want, without having to make any changes to the embedded device.
If you want to look at variables: if your PC tool is able to read the ELF file generated by the linker, then it can find out a variable's location from the symbol table. Even better, read the DWARF debug data and know the variable's type as well. Then all you need is a "read-memory" protocol message on the embedded device to get the data, and the PC does the decoding and displaying.
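As a hedged sketch of what that "read-memory" message might look like on the PC side (in Java here, though any language would do; the opcode, framing, address, and port are all invented and a real protocol would be whatever you define for your device):
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class ReadMemoryClient {
    // Invented opcode for a "read memory" request.
    private static final byte CMD_READ_MEM = 0x01;

    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("192.168.1.50", 5000);
             DataOutputStream out = new DataOutputStream(sock.getOutputStream());
             DataInputStream in = new DataInputStream(sock.getInputStream())) {

            // Request: 1-byte opcode, 4-byte address, 2-byte length (big-endian).
            out.writeByte(CMD_READ_MEM);
            out.writeInt(0x20000100);     // address found via the ELF/DWARF symbol info
            out.writeShort(4);            // read 4 bytes, e.g. one 32-bit variable
            out.flush();

            // Response: the raw bytes; the PC side knows the type and does the decoding.
            byte[] data = new byte[4];
            in.readFully(data);
            int value = ((data[0] & 0xFF) << 24) | ((data[1] & 0xFF) << 16)
                      | ((data[2] & 0xFF) << 8) | (data[3] & 0xFF);
            System.out.println("variable value = " + value);
        }
    }
}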
