Are there any client-server frameworks similar to SETI available?
I have a client-server model in mind where volunteers sign up as clients (agents or nodes, call them whatever) and donate their idle computing resources.
So I will need to write a framework to distribute and track the work-units (or jobs) given to agents.
Is there any such framework available that I could use? That would save me time and let me focus on writing the job-processing logic, etc.
Further, I hope the framework will also handle OS compatibility issues, agent binary updates, etc.
Please also give any other general suggestions on such a distributed computing project that you think I should investigate.
Look at BOINC, which is a general framework for handling SETI-style workloads.
Edit to expand: in fact, IIRC BOINC is a spin-off of SETI. It'll probably handle all of your requirements.
I want to provide a solution for rebuilding our large distributed control system. The current implementation is written in C++, and I need to rewrite it.
I have several questions:
The system should have a hot-plugin feature; I don't know whether there are any OSGi implementations that support the C++ programming model.
Which ESB would be best for real-time, flexible routing, since large volumes of messages will be transferred quickly between nodes?
Since integration is very important in our system, which MOM can be used to build my ESB under the real-time and flexible-routing constraints?
Which open-source SCA implementation is suitable for the C++ programming model?
I await your answers eagerly!
Thanks very much!
If you require a C++ runtime, I would look at Trentino (http://trentino.sourceforge.net/), which is sponsored by Siemens.
There are a number of Java-based runtimes. One that supports dynamic deployment of contributions is Fabric3 (www.fabric3.org).
I'm one of those classic native programmers who has spent most of his past with .exe's and .jar's. Over the past year I've thrown myself into the world of web frameworks and technologies, which never cease to impress me. Over the past month and a half I have fallen in love with Go because of its strictness, and also how 'stand-alone' it seems to be. So now to the real question...
A Go App Engine application: why do we need this?
What is the difference, and what is the reasoning for choosing a wrapped application (a framework)?
I assume its purpose is to offload some of the communication from the application to the wrapper, but sadly I can't seem to figure out (through documentation and discussion) what the specific purpose behind this modularization is.
Best regards and cyber high fives in your direction!
These really are two different questions.
1. Why GAE?
It's up to you. GAE provides cloud-based hosting that you pay rental to use. It's a bit similar to Amazon Web Services. Your Go app would be uploaded to GAE, where it provides your web service and your users can do lots of wonderful things. Meanwhile you never need to know which actual server is doing the serving at any given time - the app can migrate across their servers dynamically. GAE provides a high uptime and a low effort for you in keeping the server secure, backed up etc. It will also be elastic to cope with surges in load.
You may instead prefer to rent a private server (e.g. at Rackspace) or just a virtual machine. You'd preferably need to be a Linux expert (get lots of help at Serverfault) and you'll have to do the backups, firewall, etc. all yourself. It may cost (much) less. Or more.
2. Choosing a framework?
The net/http API allows you to write HTTP server code to do pretty well anything you want. But you have to do quite a lot of hard work. At the opposite extreme, frameworks like Revel make rapid server development possible, as long as it does the things you want of it. If you stray into functionality beyond what it offers, you might have to do quite a lot of digging to find out how to extend the framework.
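To make that concrete, here is a minimal sketch of a bare net/http server (the route, port and handler are made up for illustration; a real app layers its own routing, middleware and error handling on top):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// With plain net/http you wire up routing, method checks and
	// error responses yourself for every endpoint.
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodGet {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		fmt.Fprintln(w, "hello from plain net/http")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```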
Other interesting toolkits include Gorilla, Gocraft Web and Goji. In terms of complexity, these sit about halfway between Revel and basic net/http.
To answer your second question, here are some pros and cons of using a framework (e.g., Revel) vs. something simpler like a toolkit (e.g., Gorilla).
In general, the pros of using a framework are:
it provides a lot of sub-packages to handle important web-related sub-tasks like templating, generating data in specified formats like json or xml, query escaping, etc.
it handles boilerplate http handling
it (hopefully) enforces best practices like escaping strings (see the sketch after this list)
it helps you manage complexity by enforcing a consistent design pattern in the way you handle requests
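As an illustration of the escaping point (a sketch of mine, not tied to any particular framework): Go's html/template in the standard library already auto-escapes values, which is exactly the kind of best practice a framework would otherwise have to enforce for you.

```go
package main

import (
	"html/template"
	"os"
)

func main() {
	// html/template escapes values automatically, so user-supplied
	// input is rendered as text rather than injected as markup.
	t := template.Must(template.New("page").Parse("<p>Hello, {{.}}</p>\n"))
	_ = t.Execute(os.Stdout, "<script>alert('xss')</script>")
	// The <script> tag comes out escaped (&lt;script&gt;...), not executable.
}
```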
Cons of using a framework:
frameworks tend to be "opinionated," meaning you have to buy into their general philosophy and understand their core concepts before you can make use of them; for a lot of frameworks this can be quite a bit of mental overhead
extra layer of abstraction, meaning you're another step removed from what's really going on, and there will be more stuff to understand and debug if something goes wrong
it can be brittle and hard to do something that isn't a standard use case in the framework
future maintainability: most frameworks don't tend to have a super long lifespan. Django and Rails have been around for a long time, but there's a massive graveyard of frameworks that came before them. Hindsight is 20/20, but it's hard to pick the right horse from the outset.
Recommendation
It's hard to make the call up front. So much depends on the specifics of your problem, but I'd say in the case of Go, opt for the simpler option. Much of the value-add of frameworks in other languages is that they contain useful sub-packages that handle important tasks, but Go already includes a lot of these in its standard library (e.g., encoding/json, net/http, net/url, text/template). I've built a fairly sophisticated web app using just the Gorilla toolkit and the Go standard library, and it's been amazingly good. The best part is that it's incredibly easy to understand what the code does, and I can explain it to someone else without requiring them to first read through the massive About page of some third-party framework.
If you want to get a sense of how other people use Gorilla, you might try looking at real-world usage examples. Compare that to how people use more sophisticated frameworks and pick whichever you like better.
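For a feel of what that looks like, here's a hedged sketch (the route, types and placeholder data are mine, not from a real project) combining gorilla/mux with encoding/json from the standard library:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

type user struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

func main() {
	r := mux.NewRouter()
	// Gorilla adds routing niceties (URL variables, method matching)
	// while the standard library still does the heavy lifting.
	r.HandleFunc("/users/{id}", func(w http.ResponseWriter, req *http.Request) {
		id := mux.Vars(req)["id"]
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(user{ID: id, Name: "example"}) // stand-in data
	}).Methods("GET")
	log.Fatal(http.ListenAndServe(":8080", r))
}
```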
Many people have online startups in their heads that may potentially attract millions, but most of the time you will only have a minimal budget (time and resources) to start with, so you want to have it delivered within a year. Shortly after launch, you are bound to perform one or a series of upgrades that may include: refactoring the code onto a newer foundation, adding hierarchies to the software architecture, or restructuring the database(s). This cycle of upgrades/refactoring continues as:
New features become available in the latest version of the language(s)/framework(s) you use.
Availability of new components/frameworks/plugins that may potentially improve the product.
Requirements change direction, and the existing product wasn't designed to cope with the new needs.
With the above as a prerequisite, I want to take this discussion seriously and identify the essence of an upgradable solution for a web application. In the discussion you may talk about any stage of development (initial, early upgrades, incremental upgrades) and cover one or more of the following:
Choice of language(s) for a web application.
Decision to use a framework or not (consider the overhead)
Choice of DBMS and its design
Choice of hardware and setup
Strategy for constant changes in requirements (which are natural for a web application)
Strategy/decision toward total redesign
Our company's web solution is on its fourth major generation, having evolved considerably over the past 8 years. The most recent generation introduced a broad variety of constructs to help with exactly this task as it was becoming unwieldy to update the previous generation based on new customer demands. Thus, I spent quite a bit of time in 2009 thinking about exactly this problem.
The single most valuable thing you can do is to employ an Agile approach to building software. In particular, you should maintain an environment in which a new build can be (and is) created daily. While daily builds are only one aspect of Agile, this is the practice that is most important in addressing your question. While this isn't the same thing as upgradeability, per se, it nonetheless introduces a discipline into the process that helps reduce the chance that your code base will become unwieldy (or that you'll become an Architect Astronaut).
As far as frameworks and languages go, there are two primary requirements: that the framework be long-lived and stable and that the environment support a Separation of Concerns. ASP.NET has worked well for me in this regard: it has evolved in a rational manner and without discontinuities that invalidate older code. I use a separate Business Logic Layer to manage SoC but ASP.NET does now support MVC development as well. In contrast, I came to dislike PHP after a few months working with it because it just seemed to encourage messy practices that would endanger future upgrades.
With respect to DBMS selection, any modern RDBMS (SQL Server, MySQL, Oracle) would serve you well. Here is the key, though: you will need to maintain DDL scripts for managing upgrades. It is just a fact of life. So, how do you make this a tractable process? The single most valuable tool from any third-party developer is my copy of SQL Compare from Red Gate. This process used to be a complete nightmare and a significant drag on my ability to evolve my code until I found this tool. So, the generic recommendation is to use a database for which a tool exists to compare database structures. SQL Server is just very fortunate in this regard.
Hardware is almost a don't-care. You can always move to new hardware as long as your development process includes a reasonable release build process.
Strategy for constant changes in requirements. Again, see Agile. I'd encourage you not to even think of them as "requirements" any more - in the traditional sense of a large document filled with specifications. Agile changes that in important ways. I don't keep a requirements document either except when working on contract for an external, paying customer so that I can be assured of appropriate billing and prevent feature creep. At this point, our internal process is so rapid and fluid that the reports from our feature request/bug management software (FogBugz if you want to know) serves as our documentation when documenting a new release for marketing.
The strategy/decision for total redesign is: don't. If you put a reasonable degree of thought into the process you'll be using, choose mainstream tools, and enforce a Separation of Concerns then nothing short of a complete abandonment of HTTP and RDBMSs should cause a total redesign.
If you are Agile enough that anything can change, you are unlikely to ever be in a position where everything must change.
To get the ball rolling, I'd have thought a language/framework that supports the concept of dependency injection (or Inversion of Control, as it seems to be called these days) would be high on the list.
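To illustrate (a sketch in Go with made-up names, since the answer doesn't prescribe a language): with constructor injection the concrete implementation is passed in from outside, so it can be swapped without touching the consumer.

```go
package main

import "fmt"

// Notifier is the dependency the service needs; the concrete
// implementation is injected rather than constructed internally.
type Notifier interface {
	Notify(msg string) error
}

// emailNotifier is a hypothetical implementation.
type emailNotifier struct{}

func (emailNotifier) Notify(msg string) error {
	fmt.Println("email:", msg)
	return nil
}

type SignupService struct {
	notifier Notifier
}

// NewSignupService receives its dependency from the caller,
// so tests can pass in a fake Notifier instead.
func NewSignupService(n Notifier) *SignupService {
	return &SignupService{notifier: n}
}

func (s *SignupService) Register(name string) error {
	return s.notifier.Notify("welcome, " + name)
}

func main() {
	svc := NewSignupService(emailNotifier{})
	_ = svc.Register("alice")
}
```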
You will find out that RDBMS technology is not easily scalable. All vendors will tell you otherwise, yet when you try multiple servers and load balancing, the inherent limitations show up. Everything else can be beefed up with "bigger iron" or more efficient code, but databases cannot be split and distributed easily.
Web applications will hopefully drive the innovation in database technologies and help us break out of the archaic Relational Model mind-set. It is long overdue.
I recommend paying a lot of attention to this weak link right from the start.
On the software development projects that you have worked on, what has been the approximate cost (expressed as a percentage of total system cost) of system integration? System integration includes integrating with other software, databases, etc.
33.3%, because system integration is usually associated with a fair amount of risk that is not as prevalent in other phases of the project (coding, documentation, etc.).
This is a very difficult value to estimate, especially when you are facing integration with a system that you are not familiar with. The best you can do is track your or your team's past performance on similar projects and use those values to try to estimate how you will perform on new projects.
Generally, system integration will take longer if:
It uses a protocol, database engine, operating system, etc. that you or your team have not yet worked with.
Vendor or community support is lacking or unresponsive.
Official system documentation is not detailed enough or is out of date.
The system does not have large global market share. Such a system will not have a wide user base and a big footprint in online programming Q&A sites such as this one. This may include new, less popular, or highly domain-bound systems.
Between 0 and 99%. I have built systems with no integration at all and systems that were basically just integration of other systems. The nice thing about integration can be that it is easy to estimate. But only when the interface is fully understood. Then it is just a duplication of functionality.
There are some complicating factors, though. They can make it very expensive to impossible:
is the system you have to integrate with well understood (do the programmers who developed it still work there?)
is the system you have to integrate with well-refactored (and has automated unit and acceptance tests)?
single or multiple platform?
are domain experts available?
It depends on the integrated system's importance and other factors.
I've worked in systems with integration in a bunch of web services that were the application's core. If the web services were down, our system was simply useless.
I would list the following variables when trying to evaluate the cost:
How many systems do you integrate and how frequently are they changed?
Do you have documentation to these systems?
Is it a third party component/service that you have no control of?
If you have control over the integrated system, does it use a lot of "legacy" code, like COBOL? (Just an example; at least where I work, COBOL programmers are expensive.)
Are your employees experienced with the integrated system and with the application itself?
In case of failure of the integrated service, what is the impact on your application?
How much is an employee's hourly rate in these scenarios? How many hours would they need to work on these integrated systems? How much money do you have for your project? I can't say it's going to cost X% in your case without knowing these details, especially the last one.
I work in a very small shop (2 people), and since I started a few months back we have been relying on Windows Scheduled Tasks. Finally, I've decided I've had enough grief with some of its shortcomings, such as:
No logs that I can find except at the domain level (inaccessible to machine admins who aren't domain admins)
No alerting mechanism (e-mail, for one) when the job fails.
Once again, we are a small shop. I'm looking to make the same kind of upgrade to our scheduling system that I'm making with source control (VSS --> Subversion). I'm looking for suggestions for systems that:
Are able to do the two things outlined above
Have been community-tested. I'd love to be a guinea pig for exciting software, but job scheduling is not my day job.
Ability to remotely manage jobs a plus
Free a plus. Cheap is okay, but I have very little interest in going through a full-blown sales pitch with 7 PowerPoint presentations.
Built-in ability to run common tasks besides .EXEs a (minor) plus (run an assembly by name, run an Excel macro by name, run a database stored procedure, etc.).
I think you can look at:
http://www.visualcron.com/
Consider Cygwin and its version of "cron". It meets requirements #1 through #4 (though without a nice UI for #3).
Apologies for kicking up the dust here on a very old thread, but I couldn't disagree more with what's been presented here.
Scheduled tasks in Windows are AWESOME (a %^#% load better than writing services, I might add). Yes, they're not without limitations, but they're still extremely powerful. I rely on them in earnest for a variety of different things.
If you have even a slight grasp of C#, you can write a custom "task" (essentially a console application) to do, well, virtually anything. If persistent/accessible logging is what you're after, why not something like Serilog or NLog? Even at the time of writing, these had very robust feature sets. Such a tool in and of itself, in conjunction with some C#, could've solved both your problems very easily.
Perhaps I'm missing the point, but it seems to me that this isn't really a problem. At least not anymore...
If you're looking for a free tool, there are plenty of implementations of the popular cron tool for Windows, for example CRONw. It's pretty easy to configure and maintain. You could easily add custom WSH scripts to send your emails and add log entries.
If you're going the commercial way, BMC Control-M is arguably one of the best, but I don't believe it is particularly cheap.
You may also consider some upcoming packages like JobScheduler
Pretty old question, but we use Jenkins. Yes, its main purpose is CI/CD, but it's also a really nice UI for cron-style scheduling, with a ton of plugins and integrations.