I have a Ruby server-side program that uses a specific library to send requests to an OLAP server and receive the result dataset from that same library.
For reasons of my own, I don't want to call the library methods for receiving the result dataset; I want to receive the text XMLA result directly. But I don't know how to do this in Ruby (or JRuby?). I want to send a query and receive the 'text' XMLA from my REST service (which is in Ruby).
Hey, I see this is a bit old, but still. As far as I can tell you have two choices:
Use https://github.com/rsim/mondrian-olap - all kinds of cool stuff, but requires JRuby since it uses Java libraries for connecting to and manipulating the cube itself
Use https://github.com/drKreso/cube - as barebones as it gets, but you can connect to the Mondrian XMLA servlet via Savon SOAP messages and get the data back. Good for educational purposes since it has the message needed (if you wanted to port it to Python it would be a breeze)
p.s. I made choice 2) so I might be somewhat biased :)
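In case it helps, here is roughly what choice 2) boils down to without any library magic: POST an XMLA Execute SOAP envelope straight to the Mondrian servlet and read back the raw XML. A minimal sketch in plain Ruby (the endpoint path, catalog, cube, and MDX below are placeholders you would adjust for your setup; the cube gem above does the same thing through Savon):

```ruby
require 'net/http'
require 'uri'

# The XMLA endpoint of your Mondrian servlet -- adjust to your deployment.
uri = URI('http://localhost:8080/mondrian/xmla')

# A bare XMLA Execute request: an MDX statement wrapped in a SOAP envelope.
# Catalog and cube names and the MDX statement are placeholders.
envelope = <<~XML
  <?xml version="1.0" encoding="UTF-8"?>
  <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
    <SOAP-ENV:Body>
      <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
        <Command>
          <Statement>SELECT {[Measures].[Sales]} ON COLUMNS FROM [SalesCube]</Statement>
        </Command>
        <Properties>
          <PropertyList>
            <Catalog>SalesCatalog</Catalog>
            <Format>Multidimensional</Format>
          </PropertyList>
        </Properties>
      </Execute>
    </SOAP-ENV:Body>
  </SOAP-ENV:Envelope>
XML

response = Net::HTTP.post(uri, envelope,
                          'Content-Type' => 'text/xml; charset=utf-8',
                          'SOAPAction'   => '"urn:schemas-microsoft-com:xml-analysis:Execute"')
puts response.body  # the raw 'text' XMLA result, no result-set objects involved
```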
Consider a ClojureScript web application using Reagent where the Reagent components are subscribed to a single db atom containing a vector of maps. The contents of this vector are different for each user and have to be queried from a Mongo database (which is updated at regular intervals). The database might be hosted by a third party. Considering that CongoMongo, Karras and Monger are Clojure (not ClojureScript) libraries, what would be the best way to connect to MongoDB from a single-page ClojureScript/React.js app using Ajax?
This "answer" is more of a comment, but here goes.
If you don't absolutely need a Clojure backend, I'd recommend having a ClojureScript-only single-page app without any Clojure wrapper to Mongo (so no need for Sente either). As Timothy Baldridge (of Cognitect, so he knows a thing or two about this) pointed out, your ClojureScript app can just make HTTP REST requests to the database.
cljs-http is a ClojureScript project that uses Clojure's core.async library to make HTTP requests and is perfect for interacting with REST APIs if you know or can learn core.async.
A more conventional (i.e., callbacks) approach, but still very ClojureScript-friendly, is to use Google Closure's goog.net.XhrIo library. I have an example here of connecting to a public REST API using XhrIo and re-frame (built on top of reagent, and highly recommended) that may help show how to get started.
Using either of these ClojureScript/JS libs, you can make requests directly from the ClojureScript browser app to the database, get replies, parse the JSON with (js->clj (js/JSON.parse json-string)) or with transit-cljs, and do something with the result.
Since Mongo has a fairly simple REST interface (https://docs.mongodb.org/ecosystem/tools/http-interfaces/#simple-rest-api), I'd be tempted to just write my own CLJS code that calls the Mongo server. Depends on your security requirements. But writing the CLJS code would be no different than any other remote request. Just a bit of string concatenation and parameter serialization.
You could use sente to get communication going between the Reagent application and your web server. This SO answer references an example client/server application that consists of a web server with browser access, giving you some buttons to press that return information from the server. It is not Reagent, but you can substitute what they use. It is a starting-point example that works out of the box.
Then build up the example's web server so that it communicates with the three Clojure libraries rather than just returning static text as it does.
Thrift sounds awesome, but I can't find some basic stuff I'm used to in RPC frameworks (such as HttpServlet). Examples of things I can't find: session management, filtering, upload/download progress.
I understand that the missing stuff might be a management layer on top of Thrift. If so, is there any example of such a layer? Perhaps AOP (aspect-oriented programming)?
I can't imagine such a layer that compiles to all languages, and that's what I'm missing. Taking session management as an example, there might be several clients that all need to do some authentication and pass the session_id with each RPC. I would expect a similar API in all languages for doing so.
Does anyone know of a management layer for Thrift?
So Thrift itself is not going to help you out a lot here.
I have had similar desires, and have a few suggestions:
1. Put your management objects into the IDL
Simply add an API token or a common transfer-data struct as a parameter to all of your service methods. Set it as parameter id 15 so that it will always be the last parameter, even if you add others in the middle.
As the first step in your handler you can validate/store/do whatever with the extra data.
This has the advantage that it is valid on any platform that Thrift supports.
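For example, a hypothetical IDL along these lines (all names invented for illustration):

```thrift
// A common management struct carried by every call.
struct RequestContext {
  1: string sessionId,
  2: optional string authToken
}

service InventoryService {
  // Keeping the context at field id 15 means it stays the last
  // parameter even if ids 2..14 get filled in later.
  list<string> listItems(1: string filter, 15: RequestContext ctx),
  void addItem(1: string name, 15: RequestContext ctx)
}
```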
2. Use Thrift over HTTP
If you use HTTP as your transport, you can include whatever data you want as HTTP headers, and the Thrift content as the body.
This will often require a custom HTTP client for every platform you use to inject the data, and a custom handler on the server to use the data, but neither of those is prohibitively difficult.
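As a sketch of what that can look like in Ruby (the service name, header name, and URL are made up; the generated client class comes out of your IDL):

```ruby
require 'thrift'

session_id = 'abc123'  # placeholder: obtained from your login call

# Ride the session id on an HTTP header instead of in the IDL.
transport = Thrift::HTTPClientTransport.new('http://api.example.com/thrift')
transport.add_headers('X-Session-Id' => session_id)

protocol = Thrift::BinaryProtocol.new(transport)
client   = InventoryService::Client.new(protocol)  # generated by the Thrift compiler
items    = client.listItems('widgets')             # the header now travels with every call
```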
3. Hack the protocol
It is possible to create your own custom protocol that wraps another protocol and injects custom data. Take a look at how the multiplexed protocol works in the Thrift library for most languages:
C# here. It sends the method name across the wire as service:method. The multiplexed processor unwraps this encoding and passes it on to the appropriate processor.
I have used a similar method to encode arbitrary key/value pairs (like HTTP headers) inside the method name.
The downside to this is that you need to write a more complicated extension for each platform you will be using. Once. It varies a bit from language to language how this works, but it is generally simple enough once you figure it out.
These are just a few ideas I have had, and I am sure there are others. The nice thing about Thrift is how the individual components are decoupled from each other. If you have special needs, you can swap any of them out as needed to add specific functionality.
I am using gSOAP to implement a simple program that fulfills the ONVIF discovery functionality.
(The NVT/NVR part, not the Device Manager, i.e. the client part.)
The program needs to
1) Send "ProbeMatch" messages in response to "Probe" messages of the ONVIF DM.
2) Send "Hello" messages occasionally.
I downloaded and launched the gSOAP tool without any problem. I generated the .h and .c files, and created a project in Eclipse.
When I generate the C files in "Client" mode, I can build the Eclipse project. There are 3 functions defined in soapClient.cpp, but I do not know how to use them in the main function (what are the parameters ns2_HelloType and ns2_ResolveType?). And when do I call these functions?
When I generate the C files in "Server" mode, I cannot build the Eclipse project, because those functions have signatures in the .h files but are not defined. I have to define them according to the gSOAP tutorial (the Calculator example):
http://www.cs.fsu.edu/~engelen/soapdoc2.html
Actually, I couldn't manage to understand the concepts "Server" and "Client". Which part of the ONVIF specification is the client, and which is the server? Hello, Bye, etc. are functions of the "device" itself, so is the device the SOAP server? Can anybody clarify those concepts?
Best Regards,
Firat
What kind of device are you trying to implement? A Network Video Transmitter (NVT)? In this case you need to implement the server.
The client is the part of the VMS that connects to the devices.
When you generate the server part, you need to implement the functions that do something when the corresponding function is invoked from a VMS. This is why you get a build failure.
You asked several questions. This will address only those concerning gsoap and client/server, specifically: "I couldn't manage to understand the concepts 'Server' and 'Client'. Can anybody clarify those concepts?" So, in the most general terms:
If you are using gsoap, it is because you want to use C bindings to stitch together some component of a web service, either on the server side, or on the client side.
A simple web service Server/Client scenario:
The server, in simplified terms, listens for a request from a client and, based on some information provided by the requester (i.e., the client), queries its data source, usually a database, packages up the requested data, and returns it to the client. Think of getting the weather on your phone. Your phone, the client, sends some small piece of information, such as a postal code, to a known WSDL endpoint. The weather data is returned and displayed by your phone app.
Using gsoap it would look like this: the request from your phone is simply entered as human-readable text (98873-1234) and read by an application, using the gsoap C bindings, into a C data structure. The C binding (a C function) converts the struct data to XML SOAP format using the functionality in the gsoap libraries and sends the XML data via TCP/IP to the WSDL endpoint of the server. The server-side gsoap libraries within an application receive this data and convert it from the XML SOAP format into C data, most likely as members of a struct. That data is then used to build a query string and run a query against the database. The database response is converted back to C data and, using the C bindings (C functions) provided by gsoap, serialized to XML SOAP and sent back to the requesting client.
Again, in very simple terms it looks like this:
ServerSide database<->SQL<->gsoapApp<->tcp/ip<->gsoapApp<->userInterfaceDisplay ClientSide
There is a client application example here. Although this example is targeted toward client side applications, the concept for server side gsoap code generation is very similar.
I was trying to create my website using CGI and ERB, but when I search the web, I see people saying I should always avoid using CGI and always use Rack.
I understand CGI will fork a lot of Ruby processes, but if I use FastCGI, only one persistent process will be created, and that approach is used by PHP websites too. Plus, the FastCGI interface creates only one object per request and has very good performance, as opposed to Rack, which creates 7 objects at once.
Is there any specific reason I should not use CGI? Or is it just a false assumption, and is it entirely OK to use CGI/FastCGI?
CGI, by which I mean both the interface and the common programming libraries and practices around it, was written in a different time. It has a view of request handlers as distinct processes connected to the webserver via environment variables and standard I/O streams.
This was state-of-the-art in its day, when there were not really "web frameworks" and "embedded server modules" as we think of them today. Thus...
CGI tends to be slow
Again, the CGI model spawns one new process per connection. While spawning processes per se is cheap these days, heavy web app initialization (reading and parsing scores of modules, making database connections, etc.) makes this quite expensive.
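For concreteness, the entire contract looks like this from the handler's side (a minimal Ruby CGI script; every request pays for a fresh interpreter plus whatever require/setup your real app does):

```ruby
#!/usr/bin/env ruby
# The web server sets environment variables (QUERY_STRING, REQUEST_METHOD, ...),
# hands us the request body on stdin, and reads our response from stdout.
# A brand-new Ruby process runs this for every single request.
require 'cgi'

cgi  = CGI.new
name = cgi['name'].to_s                        # parsed from the query string or form body
print cgi.header('text/html')                  # "Content-Type: text/html" plus a blank line
print "<p>Hello, #{CGI.escapeHTML(name)}</p>"
```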
CGI tends toward too-low-level (IMHO) design
Again, the CGI model explicitly mentions environment variables and standard input as the interface between request and handler. But ... who cares? That's much lower level than the app designer should generally be thinking about. If you look at libraries and code based on CGI, you'll see that the bulk of it encourages "business logic" right alongside form parsing and HTML generation, which is now widely seen as a dangerous mixing of concerns.
Contrast with something like Rack::Builder, where right away the coder is thinking of mapping a namespace to an action, and what that means for the broader web application. (Suddenly we are free to argue about the semantic web and the virtues of REST and this and that, because we're not thinking about generating radio buttons based off user-supplied input.)
Yes, something like Rack::Builder could be implemented on top of CGI, but that's the point: it'd have to be a layer of abstraction built on top of CGI.
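To make that concrete, here is a minimal config.ru in the Rack::Builder style (paths and responses invented; run it with rackup):

```ruby
# config.ru -- rackup evaluates this file in a Rack::Builder context,
# so `map` and `run` are available at the top level.
map '/status' do
  run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['OK']] }
end

map '/' do
  run ->(env) { [200, { 'Content-Type' => 'text/html' }, ['<p>Home</p>']] }
end
```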
CGI tends to be sneeringly dismissed
Despite CGI working perfectly well within its limitations, despite it being simple and widely understood, CGI is often dismissed out of hand. You, too, might be dismissed out of hand if CGI is all you know.
Don't use CGI. Please. It's not worth it. Back in the 1990s when nobody knew better it seemed like a good idea, but that was when scripts were infrequent, used for special cases like handling form submissions, not driving entire sites.
FastCGI is an attempt at a "better CGI" but it's still deficient in a large number of ways, especially because you have to manage your FastCGI worker processes.
Rack is a much better system, and it works very well. If you use Rack, you have a wide variety of hosting systems to choose from, even Passenger, which is really simple and reliable.
I don't know what you mean when you say Rack creates "7 objects at once", unless you mean there are 7 different Rack processes running somehow or you've made a mistake in your implementation.
I can't think of a single instance where CGI would be better than a Rack equivalent.
There is a lot of confusion about what CGI, Rack, etc. really are. As I describe here, Rack is an API and FastCGI is a protocol. CGI is also a protocol, though in its narrow sense it is also an implementation, and what you're speaking of is not at all the same thing as FastCGI. So let's start with the background.
Back in the early 90s, web servers simply read files (HTML, images, whatever) off the disk and sent them to the client. People started to want to do some processing at the time of the request, and the early solution that came out was to run a program that would produce the result sent back to the client, rather than just reading the file. The "protocol" for this was for the web server to be given a URL that it was configured to execute as a program (e.g., /cgi-bin/my-script), where the web server would then set up a set of environment variables with various information about the request and run the program with the body of the request on the standard input. This was referred to as the "Common Gateway Interface."
Given that this forks off a new process for every request, it's clearly inefficient, and you almost certainly don't want to use this style of dynamic request handling on high-volume web sites. (Starting a whole new process is relatively expensive in computational resources.)
One solution to making this more efficient is to, rather than starting a new process, send the request information to an existing process that's already running. This is what FastCGI is all about; it maintains a very similar interface to CGI (you have a set of variables with most of the request information, and a stream of data for the body of the request). But instead of setting actual Unix environment variables and starting a new process with the body on stdin, it sends a request similar to an HTTP request to an FCGI server already running on the machine where it specifies the values of these variables and the request body contents.
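In Ruby terms, the shape of a FastCGI handler looks something like this (a sketch using the fcgi gem; the gem must be installed and the web server configured to speak FCGI to the process):

```ruby
require 'fcgi'  # the fcgi gem: one persistent process serves many requests

# Unlike CGI, this loop keeps running; the server hands each request to it
# over the FastCGI protocol instead of spawning a fresh process.
FCGI.each do |request|
  request.out.print "Content-Type: text/plain\r\n\r\n"
  request.out.print "Handled by long-lived PID #{Process.pid}"
  request.finish
end
```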
If the web server can have the program code embedded in it somehow, this becomes even more efficient because it just runs the code itself. Two classic examples of how you might do this would be:
Have PHP embedded in Apache, so that the "Apache server code" just calls the "PHP server code" that's part of the same process; and
Not run Apache at all, but have the web server be written in Ruby (or Python, or whatever) and load and run more Ruby code that's been custom-written to handle the request.
So where does Rack come into this? Rack is an API that lets code that handles web requests receive them in a common way, regardless of the web server. So given some Ruby code to process a request that uses the Rack API, the web server might:
Be a Ruby web server that simply makes function calls in its own process to the Rack-compliant code that it loaded;
Be a web server (written in any language) that uses the FastCGI protocol to talk to another process with FastCGI server code that, again, makes function calls to the Rack-compliant code that handles the request; or
Be a server that starts a brand new process that interprets the CGI environment variables and standard input passed to it and then calls the Rack-compliant code.
So whether you're using CGI, FastCGI, another inter-process protocol, or an intra-process protocol makes no difference; you can do any of those using Rack, so long as the server knows about it or is talking to a process that can understand CGI, FastCGI or whatever and call Rack-compliant code based on that request.
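Concretely (a hedged sketch; which handlers are available depends on your Rack version, and the FastCGI one needs the fcgi gem):

```ruby
require 'rack'

# One Rack-compliant app: any object responding to #call(env).
app = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' },
   ["Hello from #{env['REQUEST_METHOD']} #{env['PATH_INFO']}"]]
end

# The same app over three different transports -- pick one:
Rack::Handler::WEBrick.run(app, Port: 8080)   # in-process Ruby web server
# Rack::Handler::FastCGI.run(app)             # persistent FastCGI worker
# Rack::Handler::CGI.run(app)                 # classic one-process-per-request CGI
```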
So:
For performance scaling, you definitely don't want to be using CGI; you want to be using FastCGI, a similar protocol (such as the Tomcat one), or direct in-process calling of the code.
If you use the Rack API, you don't need to worry in the early stages about which protocol you're using between your web server and your program, because the whole point of APIs like Rack is that you can change it later.
I need a simple way to use XMLHttpRequest as a way for a web client to access applications in an embedded device. I'm getting confused trying to figure out how to make something thin and light that handles the XMLHttpRequests coming to the web server and can translate those to application calls.
The situation:
The web client, using Ajax (ExtJS specifically), needs to send to and receive from an existing embedded application asynchronously. This isn't just to have a thick client/thin server; the client needs to run background checks on the application status.
The application can expose a socket interface, with a known set of commands, events, and configuration values. Configuration could probably be transmitted as XML since it comes from a SQLite database.
In between the client and app is a lighttpd web server running something that somehow handles the translation. This something is the problem.
What I think I want:
Lighttpd can use FastCGI to route all XMLHttpRequests to an external process. This process will understand HTML/XML and translate between that and the application's language. It will have custom logic to simulate pushing notifications to the client (receive an XMLHttpRequest, don't respond until the next notification is available).
C/C++. I'd really like to avoid installing Java/PHP/Perl on an embedded device, so I'll need a more low-level understanding.
How do I do this?
Are there good C++ libraries for interpreting the CGI headers and HTML, so that I don't have to do any syntax processing and can just deal with the request/response contents?
Are there any good references on exactly what goes on server-side when handling the XMLHttpRequest and CGI interfaces?
Is there any package that does most of this job already, or will I have to build the non-HTTP/CGI stuff from scratch?
If I understand correctly, the way I would approach this problem is as a 3-tier design (don't get hung up on the 3-tier buzzwords that we have all heard about):
JavaScript (ExtJs) on the browser talks HTTP to the web server (Lighttpd, Apache, ...) via Ajax using XmlHttpRequest; raw (naked) or wrapped doesn't really matter.
Since the app on the embedded device can talk over a socket, the web server would use a socket to talk to the embedded device.
You can decide to put more business logic in the JavaScript, and keep the Apache/Lighttpd code really thin so it won't time out.
In this way, you can leverage all the technologies that you're already familiar with: Ajax between tiers 1 and 2 is nothing new, and sockets between tiers 2 and 3.
I did not mean that you did not know sockets. I just proposed a way to take the description of a problem where I hear lots of words (XML/HTML/Ajax/XmlHttpRequest/Java/PHP/Perl/C++/CGI and more) and simplify it into smaller, better-understood problems. Let me clarify:
If you want to ultimately retrieve data from the embedded device and render it in the browser, then have the browser make a request to the web server, and have the web server use a socket to talk to the embedded device. How the data is passed between browser and server is normal HTTP, no more, no less. The same goes between the web server and the embedded device, except over a socket instead of HTTP.
So take a simple problem, like adding 2 numbers, except these 2 input numbers get passed to the web server, and the web server passes them to the embedded device, where the addition is carried out. The result gets passed back to the web server, and then back to the browser for rendering. If you can do that much, you can already make data flow everywhere you want.
How you parse the data depends on how you design the structure of the data, which might include a container that wraps a payload.
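To make the two-numbers example concrete, here is a rough sketch of the middle tier in Ruby (any server-side language works the same way; the device address, port, and the line-based "a b" -> "sum" wire format are all made up):

```ruby
require 'socket'
require 'webrick'

DEVICE_HOST = '192.168.1.50'   # placeholder address of the embedded device
DEVICE_PORT = 9000             # placeholder port of its socket interface

# Tier 2: the web server relays the browser's request to the device.
server = WEBrick::HTTPServer.new(Port: 8080)
server.mount_proc('/add') do |req, res|
  a, b = req.query['a'], req.query['b']   # tier 1 called /add?a=2&b=3 via XmlHttpRequest
  device = TCPSocket.new(DEVICE_HOST, DEVICE_PORT)
  device.puts("#{a} #{b}")                # tier 3 performs the addition
  res.body = device.gets.to_s.strip       # relay the device's answer back to the browser
  device.close
end
server.start
```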
"... whatever HTTP is coming to the server into usable bits of information, and generate the proper HTTP response"
...but that's not any different than how you handle the HTTP request on the server using your server-side language.
...how to implement a backend process in C/C++, instead of installing a package like PHP
If the embedded device is programmed in C/C++, you are required to know how to do socket programming in C/C++. On your web server, you also have to know how to do socket programming, except that it will be in your server-side language.
Hope this helps.