Run a Mathematica program in a cluster - wolfram-mathematica

Suppose I want to run a Mathematica program that I wrote, in a cluster or using a cloud computing solution. Can Mathematica programs run on their own without a license? And in case they don't, do cloud-computing solutions come packaged with the resources to do it?

I think you may be interested in gridMathematica: http://www.wolfram.com/products/gridmathematica/
David

It's also worth pointing out http://www.nimbisservices.com/catalog/cloud-services-mathematica, which offers cloud computing services using Mathematica V7.

Related

Machine Learning for NPCs in games: Windows ML or DirectML?

Sometimes I read that using Windows ML and/or DirectML can improve the behavior of NPCs in games. It seems that both APIs are suitable, but which one fits better? Or is parallel use for different tasks the best way? If one of them is better for Machine Learning based NPCs, why is it better for this than the other API?
I don't think this question can be answered for all cases, since it largely depends on what your goals are and what your machine learning model is doing. I'd suggest reading docs such as "Extending the Reach of Windows ML and DirectML" and "Is DirectML appropriate for my project?". Also note that Windows ML is built on top of DirectML: they work together, and you can choose which one to use based on how much control over the pipeline you need.
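Neither API is Python-native, but as one concrete way to experiment with DirectML from Python, the sketch below runs an ONNX model through ONNX Runtime's DirectML execution provider (the onnxruntime-directml package). The model file, tensor shape, and action set are hypothetical; this is an illustration of the technique, not something from the discussion above.

```python
# Minimal sketch: running an NPC-policy ONNX model on DirectML through
# ONNX Runtime (requires the onnxruntime-directml package on Windows).
# The model file, tensor shape, and action set are hypothetical.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "npc_policy.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # DirectML first, CPU fallback
)

# Fake game-state features for one NPC; the shape must match the model's input.
state = np.random.rand(1, 16).astype(np.float32)

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: state})

# Assuming the model's single output is a vector of action scores.
action_scores = outputs[0]
print("chosen action:", int(np.argmax(action_scores)))
```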

What are recommended methods to install WAS (WebSphere 9) on more than one server?

I am aware of the process to install WAS 8.5.5.x and 9.0.x using IM response file(s), but I would like to know best practices and recommendations for performing WAS installation and upgrades on more than one server, to avoid manual errors and reduce time.
I am open to using Ansible, Puppet, or other orchestration tools as well, but I would also like to know the possible options if we are not allowed to use these tools.
The ultimate goal is to automate most of the setup/upgrade steps, if not all of them, since we are dealing with a bunch of servers.
Thanks
Assuming you are referring to WebSphere Application Server traditional, take a look at the approaches described here, https://www.ibm.com/support/knowledgecenter/SSEQTP_9.0.0/com.ibm.websphere.installation.base.doc/ae/tins_enterprise_install.html, especially if you are working with larger scale deployments.
Consider creating master images and distributing them in a swinging profile-type setup. This makes installs and updates easier and faster, since you create an image once and distribute it many times, and it gives you consistency across systems.
You can then automate with your preferred automation technology.
We use Ansible; it's simple and effective.
True, you must of course develop a playbook that can do all of this.
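For the case where Ansible and Puppet are off the table, a plain script around Installation Manager's command line (imcl) and a response file can cover similar ground. Below is a minimal, hypothetical sketch: the hostnames and paths are made up, and it assumes Installation Manager and the response file are already present on each host.

```python
# Minimal sketch: silent WAS 9 install on several hosts without an
# orchestration tool, driving IBM Installation Manager's imcl over ssh.
# Hostnames and paths are made up; it assumes Installation Manager and
# the response file are already present on each host.
import subprocess

HOSTS = ["was-node1", "was-node2", "was-node3"]
IMCL = "/opt/IBM/InstallationManager/eclipse/tools/imcl"
RESPONSE_FILE = "/tmp/was9_install.xml"

def install_on(host: str) -> int:
    # Silent install: imcl input <responsefile> -log <logfile> -acceptLicense
    cmd = [
        "ssh", host,
        IMCL, "input", RESPONSE_FILE,
        "-log", f"/tmp/was9_install_{host}.log",
        "-acceptLicense",
    ]
    print(f"installing on {host} ...")
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    failed = [h for h in HOSTS if install_on(h) != 0]
    if failed:
        raise SystemExit("install failed on: " + ", ".join(failed))
    print("all installs completed")
```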

Which one is the official command line package of Pacemaker: crmsh or pcs?

I am working on a Linux-HA cluster with pacemaker-1.1.10-1.el6_4.4. As you know, in this Pacemaker version the cluster command line functionality is not packaged with the pacemaker package itself. I found two packages, crmsh and pcs. My question is: which one is the official command line interface? Which one is recommended? And what is the relation between them?
thanks,
Emre
There is no One-True-CLI for Pacemaker.
The best suggestion is to use whatever your distribution provides support for (pcs on RHEL and its clones, crmsh for SLES).
The biggest difference is that pcs can configure the entire cluster (including corosync), not just the pacemaker portion. It also doesn't try to have a 1-1 mapping between the underlying XML constructs and its command-line, which provides a certain degree of freedom to simplify things.
While there is no official relationship between the two projects, they continue to share ideas for improvements in a usability arms race :-)
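To make that concrete, here is a hedged sketch of the same resource being created with either CLI; both end up as the same XML in the cluster information base (CIB). The commands follow the standard IPaddr2 examples from the Pacemaker documentation, but the resource name and address are made up.

```python
# Minimal sketch: the same virtual-IP resource via either CLI; both end
# up as identical XML in the CIB. Resource name and address are made up;
# run as root on a cluster node with pcs installed.
import subprocess

# pcs version (RHEL and clones):
subprocess.run(
    [
        "pcs", "resource", "create", "VirtualIP",
        "ocf:heartbeat:IPaddr2",
        "ip=192.168.0.99", "cidr_netmask=24",
        "op", "monitor", "interval=30s",
    ],
    check=True,
)

# The crmsh equivalent (SLES) would be roughly:
#   crm configure primitive VirtualIP ocf:heartbeat:IPaddr2 \
#       params ip=192.168.0.99 cidr_netmask=24 \
#       op monitor interval=30s

# Either way, the resulting configuration can be inspected as raw XML:
subprocess.run(["pcs", "cluster", "cib"], check=True)
```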

Starfish or Splunk

Hi all,
My goal is to analyze Hadoop log files, and there are two tools: Starfish (open source) and Splunk (a commercial product). Does anyone know the pros and cons of each, to help decide which one to choose?
I'd really appreciate your answers.
Thanks
Well, the pros and cons are the same as in any open source vs. commercial tool choice; the main guideline should be your prerequisites.
Splunk itself is closed source, but its free license lets you index 500 MB/day. Probably its main advantage is providing a BI tool more cheaply than other commercial ones. It also has an impressive number of plugins, including for Hadoop, and, like Hadoop, it has relied on a (different) MapReduce implementation since Splunk 4.x. It has both Python and Java SDKs, which may come in handy (a short Python SDK example follows below). Its approach is: install it, and after a (minimal) setup, start playing with your data.
I don't know Starfish, though it does look promising; it only seems to require JavaFX, while Splunk ships with its own bundled Python.
But in the end, it all boils down to what your most important prerequisites are.
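Since the Python SDK came up above, here is a minimal sketch of what querying indexed Hadoop logs with it can look like. It uses the real splunk-sdk package (splunklib), but the host, credentials, index name, and search string are all placeholders, not anything from this discussion.

```python
# Minimal sketch using Splunk's Python SDK (pip install splunk-sdk) to
# run a one-shot search over indexed Hadoop logs. Host, credentials,
# index name, and the search itself are placeholders.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="localhost",
    port=8089,
    username="admin",
    password="changeme",
)

# A one-shot search blocks until the search finishes and returns results.
stream = service.jobs.oneshot('search index=hadoop_logs "ERROR" | head 10')

for item in results.ResultsReader(stream):
    if isinstance(item, dict):          # actual events
        print(item.get("_raw"))
    else:                               # diagnostic messages from Splunk
        print(f"[{item.type}] {item.message}")
```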
Barriers to entry are low for both. The best approach is to try both out for a while and see what works for you.
Each tool has different strengths depending on your use case. What is your use case?
Generally speaking, Splunk is easy and modern, with great community support; answers are usually a few searches away.

Parallel programming service on internet

Here is my question:
Is there any service or technology for running parallel algorithms on many computers without knowing them in advance?
For example: I write a parallel algorithm, my friends install a simple client app, and whenever they have an internet connection they can help my calculation with their free processor capacity. I would like to see them as additional cores in my CPU.
If there is no technology like that, are there any unsolvable problems with developing one? (I know there must be a lot of problems with code transfer, operating systems, and compatibility.)
I believe you can use BOINC to set up your own volunteer computing project, but I have no experience of this to report.
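BOINC's native application API is C++, so as a language-neutral illustration of the idea in the question, here is a toy sketch of the volunteer client: it polls a coordinator for a work unit, computes, and posts the result back. The URL and the work-unit format are entirely hypothetical, not part of BOINC or any real service.

```python
# Toy sketch of the idea in the question (not BOINC itself, whose native
# application API is C++): a volunteer client that polls a coordinator
# for a work unit, computes, and reports the result. The URL and the
# work-unit format are entirely hypothetical.
import json
import time
import urllib.request

COORDINATOR = "http://example.com/api"  # hypothetical project server

def fetch_work() -> dict:
    # e.g. {"id": 7, "numbers": [1, 2, 3]}
    with urllib.request.urlopen(f"{COORDINATOR}/work") as resp:
        return json.load(resp)

def submit_result(work_id: int, result: int) -> None:
    payload = json.dumps({"id": work_id, "result": result}).encode()
    req = urllib.request.Request(
        f"{COORDINATOR}/result",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()

if __name__ == "__main__":
    while True:
        work = fetch_work()
        # Stand-in for the real computation the volunteer performs.
        result = sum(n * n for n in work["numbers"])
        submit_result(work["id"], result)
        time.sleep(1)  # be polite between work units
```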
