Ways of communication between a Chromium container and a VB application - vb6

We have a traditional VB application which is used for organization operations. We are now building a hybrid application, developed with HTML5, CSS and JavaScript, which targets the Google Chromium desktop container. We are planning to provide a way to exchange large data sets, such as employee records, between these two applications. My specific question is:
What are the different ways to achieve communication between the Chromium desktop container and the VB application in order to exchange large chunks of data?

Sounds a bit painful no matter what.
Chrome Apps Architecture
All external processes are isolated from the app.
This would seem to suggest the obvious course is to use cloud data services, whether on public or private clouds.
I suspect that for political as well as practical reasons no cloud vendor goes to the trouble to provide VB/VBA-friendly APIs for their services. Mainly nobody wants to deal with support issues from the teeming hordes of casual coders the VB community is saddled with.
The VB6 community hasn't stepped up and taken care of this themselves either.
If you can limp along with the burdens of ".Net Inter Clop" (the usual MS answer) that might be a way to exploit existing API implementations.
Otherwise you might roll your own cloud. I see a few obvious services you'd want to implement in your cloud with lightweight APIs easily implemented in both of your development ecosystems:
Bulk Storage. I suggest WebDAV, which IIS supports. If you eschew the locking features then WebDAV API implementations are pretty easy in both JS and VB (a minimal JS sketch follows this list). Or buy (or scrounge an open source) implementation of a more complete WebDAV client library.
DBMS. Pick any, implement a simple REST-like XML over HTTP API. Relatively easy to implement.
Push Notifications. I'd write a custom service accepting long-duration TCP connections from all clients, and with protocols and workflow à la Amazon SNS or Google Cloud Messaging. Such a service would be generally light in resource consumption but you'd probably want a dedicated box with OS tweaks to support a large number of active TCP connections.
Maybe optionally a message queue service?
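For illustration, here is a minimal sketch of what the JS side of the bulk-storage idea could look like; the URL, file name, credentials, and JSON payload are placeholders, not a prescription. The VB6 side can hit the same URL with MSXML2.ServerXMLHTTP or WinHTTP, so both ecosystems only need plain HTTP verbs.

```javascript
// Upload a batch of employee records to a WebDAV share with a plain HTTP PUT.
// If you skip WebDAV's locking features, this is most of the "client library" you need.
async function uploadEmployeeBatch(records) {
  const url = "https://intranet.example.com/dav/exchange/employees-batch-001.json";

  const response = await fetch(url, {
    method: "PUT", // creating a file over WebDAV is just an HTTP PUT
    headers: {
      "Content-Type": "application/json",
      // swap Basic auth for whatever authentication model you settle on
      "Authorization": "Basic " + btoa("svc_hybrid:secret")
    },
    body: JSON.stringify(records) // or XML, if that is easier for the VB side to parse
  });

  if (!response.ok) {
    throw new Error("WebDAV upload failed: " + response.status);
  }
}
```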
Nothing novel here, these are all well established patterns.
All of the tools to do that are pretty off-the-shelf whether you want your cloud servers to be based on Windows, Linux, or generically Java anywhere.
Most of the effort will probably go into developing a consistent authentication model, access control model, and of course an integrated administration interface, monitoring, and logging to help keep operating overhead low and uptime high. Well, that and developer docs and training.
Ok, still a lot of work. Too bad there isn't a "cloud in the box" with the API libraries you'd need that you can buy off the shelf today.
Or perhaps I'm missing something obvious?

Related

Passively Logging React App Performance in Production

I'm wondering if there are any utilities/patterns/paradigms/standards for monitoring React applications in production.
I've seen a lot of documentation about React performance debugging that recommends the Chrome Dev Tools (which are great, but aren't a passive way to monitor end user performance)
How could I log data to know how long users are waiting for components to mount or render?
The only thing I've thought of so far is creating a Loggable[Pure]Component that extends React.[Pure]Component whose constructor, componentWillMount/Update, and componentDidMount/Update methods log render/mount times to a server. Then, components I want to monitor can extend these components and, if need be, call super() in the lifecycle methods before doing their own work. To specifically know which components these metrics go to, I'd have to expose a method in the Loggable[Pure]Component class that does something silly like setUniqueId and then each derived class would have to call it in the constructor.
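Roughly, what I have in mind is a sketch like this (the class names and the /metrics endpoint are just placeholders):

```javascript
import React from "react";

// Rough base class: derived components call super() in the lifecycle methods
// and their mount/update durations get beaconed to a logging endpoint.
class LoggableComponent extends React.Component {
  constructor(props) {
    super(props);
    this._constructedAt = performance.now();
    this._metricId = "unknown"; // derived classes set this via setUniqueId()
  }

  setUniqueId(id) {
    this._metricId = id;
  }

  componentDidMount() {
    this._report("mount", performance.now() - this._constructedAt);
  }

  componentWillUpdate() {
    this._updateStartedAt = performance.now();
  }

  componentDidUpdate() {
    this._report("update", performance.now() - this._updateStartedAt);
  }

  _report(phase, durationMs) {
    // fire-and-forget; sendBeacon doesn't block the UI thread
    navigator.sendBeacon("/metrics", JSON.stringify({
      component: this._metricId,
      phase: phase,
      durationMs: durationMs
    }));
  }
}

// A monitored component extends the base class, tags itself, and remembers
// to call super() before doing its own lifecycle work.
class EmployeeList extends LoggableComponent {
  constructor(props) {
    super(props);
    this.setUniqueId("EmployeeList");
  }

  render() {
    return React.createElement("ul", null /* ...items... */);
  }
}
```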
This all seems terrible and I'm very much hoping there are some things people out there have implemented, but I haven't found anything thus far.
I would have a look at some APM tools; they handle frontend monitoring as well as backend monitoring. They all support React, and folks use them all the time for this use case. It really depends on your goals for the monitoring: are you doing this for fun? Do you have a startup? Are you working for a large enterprise? There are three major players in this market.
AppDynamics - Enterprise APM, handles the most complex apps. Unified product offering delivered SaaS or on-premises. Has deep database, server, and other monitoring.
Dynatrace - Enterprise APM, handles complex apps well. Fragmented portfolio, but the SaaS product is good. The SaaS product has limited depth in some ways. Handles server and cloud infrastructure monitoring well.
New Relic - Easy and cheap(er than others), not as in-depth as some other options. Tends to be popular with small companies. Does a good job monitoring cloud infrastructure services.
These products all do what you are looking for, but it depends on your goals with the data and how you plan to analyze it.
If you want something free and less functional there are ways to do this with open source, but you'll have to stand up and manage a pretty complex stack. Here is one option.
Check out boomerang, which can log/extract the metrics you are looking for. It doesn't "understand" React, but it should work. This data can be posted to many different systems; the best suited is likely the ELK stack (open source log analytics, and more). Here is one of several examples which marries these two together to provide analysis of browser performance: https://github.com/naukri-engineering/NewMonk
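As a rough sketch of how the boomerang side might be wired up (the beacon URL is a placeholder for whatever HTTP collector you put in front of your ELK stack, e.g. a Logstash HTTP input):

```javascript
// Assumes boomerang.js has already been loaded on the page (it exposes a global BOOMR).
// Point its beacon at the collector that feeds your log pipeline; the URL is made up.
BOOMR.init({
  beacon_url: "https://metrics.example.com/boomerang-beacon"
});

// Optionally attach your own context to every beacon, e.g. which view/component was active.
BOOMR.addVar("view", "EmployeeDashboard");
```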

Must microservices-based systems all be in the same network?

I have a web application that is separated into several components. For some reasons (pricing), I'm considering deploying future components in different clouds.
Does anybody have references or experience on this to tell me whether it is definitely a bad idea? I know that components being in different networks will decrease performance. At the same time, I do not like the idea of losing the freedom to choose where the new components will go.
Must microservices-based systems all be in the same network? How do you handle this problem?
Having worked with multiple services in the past, I can tell you that services are made to work across separate networks. This is why there are security protocols like CAS, SAML, OAuth, HTTPS, and HMAC, to name a few.
So as long as you are able to deal with the management of the networks, and you have good security around your services (and I assume you do), then I would not be worried about breaking some unspoken microservices rule. Remember that microservices, if written well and are useful, are expected to be used across the Internet, especially for the Internet of Things, so they are expected to be used across multiple networks.
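To make the "good security around your services" part concrete, here is a hedged sketch of HMAC request signing between two services on different networks, using Node.js's built-in crypto module (the header names and the way the shared secret is provisioned are assumptions, not a standard):

```javascript
const crypto = require("crypto");

const SHARED_SECRET = process.env.SERVICE_SHARED_SECRET;

// Caller: sign the request so the receiving service can verify who sent it.
function signRequest(method, path, body) {
  const timestamp = Date.now().toString();
  const payload = [method, path, timestamp, body].join("\n");
  const signature = crypto
    .createHmac("sha256", SHARED_SECRET)
    .update(payload)
    .digest("hex");
  return { "X-Timestamp": timestamp, "X-Signature": signature };
}

// Receiver: recompute the signature and compare in constant time.
// (In production, also validate header presence/length and reject stale timestamps.)
function verifyRequest(method, path, body, headers) {
  const payload = [method, path, headers["x-timestamp"], body].join("\n");
  const expected = crypto
    .createHmac("sha256", SHARED_SECRET)
    .update(payload)
    .digest("hex");
  return crypto.timingSafeEqual(
    Buffer.from(expected, "hex"),
    Buffer.from(headers["x-signature"], "hex")
  );
}
```

You would still run this over HTTPS (or layer OAuth/SAML on top for user-level identity) and reject requests whose timestamp is too old, so a captured request cannot simply be replayed.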
When you start trying this, I would pay very close attention to the bandwidth charges. Take AWS as an example: you are fine if your services are in the same region, where bandwidth between them costs little if anything. But say you use both AWS and Google Cloud; now you will be paying for the bandwidth between the two providers.
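As a purely illustrative calculation (the numbers are assumptions; check current price sheets): at an egress rate of roughly $0.09/GB, moving 500 GB a month between the two providers is on the order of $45/month in one direction alone, before any cross-region charges. For chatty services, that line item grows quickly.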
As a suggestion, I would look at Docker as a possible solution to your concern about vendor lock-in.
You would be restricted to providers that support Docker, but in theory you could migrate between providers easily, since your application would be abstracted from each cloud provider's architecture.
Performance will take a hit for anything leaving the provider's data center. With some investigation you might research providers that use a common Internet exchange; that would help minimize at least a few hops.

Where would I go to learn to write code that has to be very, very secure but DOES expose external services (running on a standard Windows or Linux OS)?

Where would I go to learn to write code that has to be very, very secure and that DOES expose external services (running on a standard Windows or Linux OS)? Knowing what services can and cannot be safely exposed would be part of the issue. Note that I am not looking for a favorite choice between Linux and Windows, as the choice is not likely to be mine to make in any given case. However, the level of security needs to be military grade.
I almost feel embarrassed giving this as a for-instance, but how would I know whether or not I could use, say, WCF, in such a setting?
High security is a difficult concept as it generally involves way more than just the code you wrote.
Basically every layer of the OSI model has to be taken into consideration: things like preventing capture or rerouting of the data stream between the endpoints (quantum cryptography).
At the higher levels, you have things like:
Physical security of the devices (all endpoints if possible).
Hardening the OS (e.g. closing ports, turning off unused services, using Kerberos, VPN tunnels, and whitelists of machines allowed to connect, etc.);
Encrypting the data at rest (file encryption), in transmission (SSL), and in memory (column/table encryption).
Ensuring and enforcing proper authentication and authorization at every level (in app, in sql, etc).
Log EVERYTHING. At a minimum it should answer "who/what/when/where/how" (a sketch follows this list).
Along with the logging, actively monitor it, a.k.a. intrusion detection.
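A minimal sketch of what such a log entry could look like (the field names are illustrative; the point is that every entry is structured and machine-searchable so the monitoring layer can actually use it):

```javascript
// Emit one structured, append-only audit record per sensitive action.
function auditLog(event) {
  const entry = {
    when: new Date().toISOString(),
    who: event.userId,                  // authenticated principal, not just a display name
    what: event.action,                 // e.g. "employee.record.read"
    where: { host: event.host, sourceIp: event.sourceIp },
    how: { channel: event.channel, requestId: event.requestId }
  };
  // stdout here as a stand-in for a central collector or write-once store
  console.log(JSON.stringify(entry));
}
```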
Then we can move on to other attack vectors like SQL injection, XSS, internal/disgruntled employees, etc.
And once you've done all of that be prepared when a hacker gets away with everything they want simply by social engineering.
In short, the best tack to take in order to secure any computer-related application is to listen to the ethos of Fox Mulder, and Trust No One. Another favorite of mine that applies: it's only paranoia if they aren't after you.
You could use formal methods to (sort of) prove the critical parts of your software. A tool like Frama-C (free, LGPL license, targeting embedded systems) could be relevant (at least if your software is critical, embedded, and written in C).
But "military grade" doesn't mean much. Your client will (and should) define exactly the standards to respect. For instance, critical [civilian] aircraft software needs to follow something like DO-178C (or its predecessor, DO-178B). Different industries have similar standards of their own (both the railway and medical industries have theirs, which may differ between North America and Europe).
If your system (and client) is less demanding (i.e. no billions of dollars or hundreds of lives threatened by bugs), you could consider customizing your compiler or using some other tool. For example, GCC is customizable through plugins or through MELT extensions.
Don't forget that software reliability has a big price (that means a big cost for you, hence for your client).
Well, the question of where can be answered simply: not in school. I suggest creating a learning path for yourself. Pick a technology that you like and learn it inside out. A basic book to get you started should suffice; the rest you learn as you go, or via the documentation of that technology.
For instance, learning under .NET (Microsoft) involves a basic Apress textbook (I suggest Pro C# and the .NET 4.0 Platform). Thereafter, searching through the .NET Framework Reference on MSDN will give you the rest.
If you are looking for a WCF reference, I suggest MCTS Exam 70-503 (Microsoft .NET Framework 3.5 Windows Communication Foundation) and MSDN.
Just keep in mind that not a single technology will achieve what you are looking for. For example: WCF co-mingles with WF (Windows Workflow Foundation), as well as SQL Data Services and Entity Framework. Being exposed to multiple technologies will definitely broaden your vision.
===============================================================================
WCF is a beast in this regard. Here are the advantages over some other means of communication:
Messages (data) passed between end points can be secured via message-level security (encryption). The transport channel chosen can also be secured at protocol level via transport layer security (encryption).
End points themselves can authorize and impersonate clients (client-level security). You can implement end-to-end service tracing, health monitoring and performance counters, message logging, as well as forward and backward compatibility with newer/older clients (via graceful degradation of the message format, provided in WCF). If you choose to do so, you can even implement routing as a fail-safe for your communications channel. WCF also supports transactions (ACID), concurrency, and per-instance throttling, giving you the most flexibility in writing secure/robust, military-grade code.
In retrospect, the security and flexibility of WCF are astonishing. A similar technology (if not the same) is the WS-Security spec. It is part of the WS-* specifications for web services and deals with XML Signature and XML Encryption to provide a secure communications channel between two end points.
The disadvantage of WS-*, however, is that it is a one-way means of communication, whereas WCF can facilitate two-way communication: a client can send a request to a server, but a server can also send requests to the client. WS-* dictates that a client can only send requests to, and receive responses from, the server, but not vice versa.
I am not a WCF developer, so I thought these highlights might provoke you into doing your own research. "There are hundreds of ways to skin an animal; none of them is wrong..."

Why bother with multi-layer RIA if the Internet is now fast enough to do "traditional" fat-client C/S?

Why bother with multi-layer RIA if the Internet is now fast enough to do "traditional" fat-client C/S?
Why not just use a plain C++ / Delphi / Oracle Forms / Java Swing application talking directly to the RDBMS through the Internet?
A very complex compiled EXE program in Delphi is about 10 MB; that amount of code downloads in a couple of minutes over a decent 1 Mb ADSL connection.
After all, isn't that what we are doing with AJAX / BlazeDS / JSON / etc., pushing through the HTTP/HTTPS protocol, only with a lot more layers and a lot more points of failure?
Comments please...
First, a bit about terminology: what you refer to as "traditional fat clients" is probably desktop software. Web applications are often written as thin clients, but they can also be written as fat clients. A fat-client rich Internet application is client-centric, which means that a lot of the work is done in the client (browser). Fat-client RIAs can be written with the help of technologies such as AJAX or Adobe Flash.
To compare the advantages of web based applications over desktop software:
Maintainability: One of the advantages of web-based applications is their maintainability. You only have to make one installation of the application and then it is directly available to all users. The same goes for updating the software: you only need to update it on the server, and then you can be sure that every single user is using the latest version. This eliminates the need to update individual installations of the application on users' computers.
Security: There are two positive security implications in using web-based applications. As said previously, you only need to update the software in one place. This means that users always have the most up-to-date version of the software in use, thus eliminating the problem of people using outdated, vulnerable versions of the application.
What is more important is that fat-client applications are insecure. They expose application logic and possibly sensitive data such as database credentials. Fat clients can be reverse engineered and attacks can be crafted based on the information gained. For an application to be truly secure, the application logic should stay on the server and the client should be thin and only serve as a presentation layer for the information handled in the application. Do remember that the exposure of application logic can also affect rich Internet applications; it is easy to write an RIA in a way that exposes application logic. Hence it is important to remember that the application's state should always stay on the server; the browser is, as said, only a means for presenting the data. In other words, both web-based applications and desktop applications can be (in)secure. I'd just say that there is a greater risk of pushing application logic to the client when writing desktop software.
Platform independence: Web-based applications are platform independent (with the exception of applications that use platform-specific functionality, such as ActiveX). This means that your users can use the application from a Mac, a Windows or a Linux computer; it doesn't matter. Of course, it is unfortunately easy to create web applications that only work in specific browsers, such as Internet Explorer. Still, it is much easier to make a web application cross-browser compatible than to write desktop software that is truly cross-platform.
Accessibility: If you are connected to the Internet/intranet, you have access to the application. It doesn't matter if you have borrowed your friend's laptop or if you are sitting at your desktop computer; you still have access to the application, since it doesn't require you to install anything on the computer. Just browse to the application URL.

Compatibility of Comet with current technology

I hear that I can use Comet as a server push technology along with my Ajax code to increase the performance of my web applications.
How mature is this Comet technology?
Is it supported by all web servers, programming languages and browsers?
What are the disadvantages of using Comet?
It is mature, though I think you should consider it more of a technique than a technology.
All web servers support it as far as I know, though you will need to research and configure your particular web server if you are building a comet application as the demands on the resources are a bit different. Specifically, there will be far more simultaneous open connections to your server. In terms of programming language support, if your server language of choice has any sort of blocking or waiting mechanism, you can support server-push. All browsers support it as well, as from the perspective of a browser, this is simply an http(s) connection that takes a long time to return.
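To make the mechanism concrete, here is a minimal long-polling client sketch in modern browser JavaScript (the /events endpoint and its query parameter are placeholders; the same idea works with XMLHttpRequest in older browsers):

```javascript
// Minimal long-polling ("Comet") client: the request simply doesn't return until
// the server has something to push, then we immediately reconnect.
async function listenForEvents(onMessage) {
  while (true) {
    try {
      // the server holds this request open (blocks/waits) until data is ready
      const response = await fetch("/events?since=" + Date.now(), { cache: "no-store" });
      if (response.ok) {
        onMessage(await response.json());
      }
    } catch (err) {
      // network hiccup: back off briefly before reopening the connection
      await new Promise(resolve => setTimeout(resolve, 2000));
    }
  }
}
```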
There are a couple of disadvantages. In the browser world, the biggest is probably the fact that some browsers limit the number of open connections to a given host to two. So if you have a server-blocking connection open, waiting for pushed data, you are down to only one connection available for the browser to get data from the server. This can be mitigated by spreading your resources over a few second-level domains to allow the browser to open more connections.
"Supported by all web servers" is a bit of an odd statement. Most implementations are a server in and of themselves, and you'll need to find a server that integrates with the language you want to use.
That said, I work at a company that built one to integrate with a server, specifically IIS.
If you don't want to bother dealing with the server integration (dealing with different languages, handling scaling, etc), check out websync - the service lets you integrate any language easily, since it's hosted, but supports proxying requests through your own server so you can add your own business logic, logging, permissioning, etc.
Comet was actually in use before all the hype about AJAX started: it's just a new name for an old idea. People have been using hidden iframes to emulate server push for a long time without problems.
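For reference, the hidden-iframe ("forever frame") trick in its simplest form looks roughly like this (the endpoint and callback names are made up):

```javascript
// The server keeps the iframe's response open and periodically emits
// <script>parent.onServerPush({...})</script> chunks, flushing after each one.
window.onServerPush = function (data) {
  console.log("pushed from server:", data);
};

const frame = document.createElement("iframe");
frame.style.display = "none";
frame.src = "/push-stream"; // placeholder for the server's streaming endpoint
document.body.appendChild(frame);
```

Either way, the browser just sees an ordinary, long-lived HTTP response.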
