Apache Cassandra and Windows

What are the fine-tuning configurations for Apache Cassandra on a Windows machine? On Linux I have seen "Unable to create new native thread" errors caused by a low "max user processes" limit, and one solution is [1].
[1] http://vanjikumaran.blogspot.com/2014/01/unable-to-create-new-native-thread-and.html
So what are the best practices for Apache Cassandra configuration and OS settings on Windows?

The best practice for "Cassandra on Windows" right now is "don't". There are a bunch of edge case issues that crop up on Windows because things like file handles behave slightly differently and do not have the same guarantees they do on Linux.
It works well enough to run a dev/test instance on your Windows box for development purposes. But for anything other than that you should really use Linux, as that is what everyone else uses, and it has the most testing.
Here is a blog post with the current status of Cassandra on Windows:
http://www.datastax.com/dev/blog/cassandra-and-windows-past-present-and-future
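For reference, the Linux-side fix that the question's link [1] describes is raising the resource limits for the user that runs Cassandra. A minimal sketch of the commonly recommended values (the file path and user name are assumptions, adjust them to your install):

```
# /etc/security/limits.d/cassandra.conf  (path and user name are assumptions)
# Raise the limits for the user running Cassandra so it can create enough
# threads ("max user processes") and open enough files.
cassandra  -  memlock  unlimited
cassandra  -  nofile   100000
cassandra  -  nproc    32768
cassandra  -  as       unlimited
```

Log back in as that user and check the result with `ulimit -a`.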

Related

In GCP, what is the difference between SSH'ing into a VM and using Cloud Shell?

I'm trying to learn ML on GCP. Some of the Qwiklabs and tutorials start with Cloud Shell to set up things like environment variables and install Python packages, while others start by opening an SSH terminal into a VM to do those preliminary steps.
I can't really tell the difference between the two approaches, other than the fact that in the second case a VM needs to be provisioned first. Presumably, when you use Cloud Shell some sort of VM instance is being provisioned for you behind the scenes anyway.
So how are the two approaches different?
Cloud Shell is a product designed to give you a large number of preconfigured tools that are kept up to date, while also being quick to start, accessible from the UI, and free. Basically, it's a quick way to get an interactive shell. You can learn more about this environment from its documentation.
There are also limits to Cloud Shell -- you can only use it for 60 hours a week, if you go idle your session is terminated, and there is only 5GB of storage. It is also only an f1-micro instance, IIRC. So while it is provisioned for you (and free!), it isn't really useful for anything other than an interactive shell.
On the other hand, SSHing into a VM places you directly in a terminal on that VM, much like you would on any specific host -- you only have whatever tools that the image installed onto that VM provides (and many VMs come pretty bare bones, it depends on the image). But, you're now in a terminal on the host that is likely executing the code you want to work with, and it has as much CPU and RAM as you provisioned in that instance.
As far as guides pointing you to one or the other, that's really up to them, but I suspect they'd point client/tool type work to Cloud Shell (since it's easy and a reasonably standard environment, which can even be scripted with tutorials), while they'd probably point instructions on installing software for production use at a "real" VM.
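As a rough sketch of the two workflows (the instance name, zone, and package below are hypothetical; `gcloud compute ssh` is the usual way to reach a VM from a terminal, while Cloud Shell is normally opened from the console UI):

```
# Option 1: Cloud Shell -- opened from the Cloud Console UI ("Activate Cloud Shell").
# You land in a small, free, preconfigured VM; gcloud, python and pip are already there.
pip3 install --user pandas     # goes into Cloud Shell's 5GB persistent $HOME

# Option 2: SSH into your own VM (hypothetical instance and zone names).
gcloud compute ssh my-ml-instance --zone=us-central1-a
# You are now on the VM that will actually run your code; it may need tooling first.
sudo apt-get update && sudo apt-get install -y python3-pip
pip3 install --user pandas
```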

Running Jenkins slave on different OS than master (and host)

I'm trying to introduce continuous integration in an old project, and we've got quite a specific situation: the only place the CI server can go is our test server, which runs CentOS. The server has plenty of unused RAM and CPU capacity.
However, we need to run the Ant builds on Windows (this is also how the project used to be packaged); it turned out that the Unix versions of Java and Ant do not produce the same output (when compared at the binary level).
I drew up a diagram of how, in my mind, it could work, but I'm really wondering whether that is even possible with the tools already in place.
The black part is implemented; I'm curious whether the red part is possible. Can a Jenkins slave communicate with a master on a different OS?
It should be possible. I have a feeling you will need to play with your network settings, but before you start changing anything, see if you can start a headless slave by following these directions: https://wiki.jenkins-ci.org/display/JENKINS/Step+by+step+guide+to+set+up+master+and+slave+machine
Using VirtualBox on CentOS, it is possible to run a Windows VM on your CentOS host.
I'm not sure you need Docker to launch your Jenkins slave.
It may be better to use a standard JNLP Windows service to connect your Windows slave to the Dockerised Jenkins master.
If the master is not able to view the Windows node using this method, you may have to tweak your network configuration on the Windows VM.
But I'm not sure it's necessary.
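As a sketch, the JNLP approach on the Windows machine boils down to one command (the master URL, node name, and secret are placeholders copied from the node's configuration page in the Jenkins UI; older Jenkins releases name the jar slave.jar instead of agent.jar):

```
# Run on the Windows build machine after creating the node on the master.
java -jar agent.jar -jnlpUrl http://centos-jenkins-master:8080/computer/windows-builder/slave-agent.jnlp -secret <secret-from-node-page>
```

Once this connects interactively, it can be installed as a Windows service so the slave reconnects automatically after a reboot.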

Login on existing Putty session on main server

I have a Minecraft server running Ubuntu Server. What I've been doing so far is using PuTTY to create a new session from my PC. The problem is that eventually, like most human beings, I go to sleep, and I don't like leaving my PC on because of the noise. If I turn off my PC, obviously that session will end and the server will go down. What I want to do is create a session on the main server (not my PC) and use PuTTY to control that existing session from my PC. That way, even if I turn off my PC, the server will still be running in the other room. I appreciate any feedback. Thanks in advance.
What you are describing is referred to as a daemon in UNIX. The standard way to manage daemons is through the init system. The init system has been around for a very long time and has diverged quite a bit between BSD, Solaris, and the various Linux Distros.
All the init systems provide the same basic functionality. The init system manages long running processes which are not tied to a user login. They are frequently used for managing server oriented processes, such as web servers.
Where the init systems differ is in their usage and the features they provide. Ubuntu uses an init replacement called upstart and it is very well documented.
You could write the needed upstart scripts yourself, but a quick search provides plenty of upstart scripts other people have created for minecraft that you could use instead.
http://www.minecraftwiki.net/wiki/Tutorials/Ubuntu_startup_script
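A minimal upstart job for a Minecraft server might look like the following (file path, user, memory settings, and jar name are assumptions; the linked wiki page has more complete examples):

```
# /etc/init/minecraft.conf  (hypothetical path, user and memory settings)
description "Minecraft server"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

# setuid/chdir need a reasonably recent upstart (Ubuntu 12.04 or later)
setuid minecraft
chdir /home/minecraft/server

exec java -Xmx1024M -Xms512M -jar minecraft_server.jar nogui
```

Manage it with sudo start minecraft and sudo stop minecraft.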
You are looking for tmux or screen.
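Either tool lets the session live on the server and survive your PuTTY window closing. A minimal tmux example (screen equivalents noted in the comments; the java command line is just illustrative):

```
# On the server: start a named session and launch the Minecraft server inside it
tmux new -s minecraft
java -Xmx1024M -jar minecraft_server.jar nogui
# Detach with Ctrl-b d, close PuTTY, go to sleep; the server keeps running.

# Next day, from a fresh PuTTY login, reattach:
tmux attach -t minecraft

# screen equivalents: screen -S minecraft, detach with Ctrl-a d, reattach with screen -r minecraft
```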

In what OS should I host subversion?

I have decided to go with Subversion as the source control repository for my personal and side projects, and I'm now trying to decide what OS to use. Currently the file server for my home network runs the Windows 7 beta. I'm wondering if I should wipe it and install Windows Server 2008 instead? Basically I'd like to know if there are things I could take advantage of with a server OS that I can't with Windows 7. The first thing that comes to mind is accessing Subversion remotely over a VPN connection.
I'm a .NET developer, but I have dabbled in Linux a bit, so I'm not completely turned off by the idea of an Ubuntu or Debian server...
I imagine the installation and configuration process might go off with fewer hitches if installed on Linux, just because of the package management, but that's assuming some experience with the package system of $whatever_distro. If you're comfortable with Windows, Subversion works perfectly well on there. I've set it up on both, but prefer the Linux installation process (easier Apache integration, in my view), but I had pre-existing Linux experience.
If you're familiar with Windows, I bet you'll find the installation and configuration process easier there. As others have said, many of the tools are cross-platform.
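For the Linux route mentioned above, the package-manager install looks roughly like this on a Debian/Ubuntu-style system (package names vary slightly between releases, and the repository path is an assumption):

```
# Install Subversion plus the Apache DAV module
sudo apt-get install subversion libapache2-mod-svn   # older releases: libapache2-svn

# Create a repository and wire Apache up to it
sudo svnadmin create /var/svn/myproject
sudo a2enmod dav_svn
# then edit /etc/apache2/mods-available/dav_svn.conf to point a <Location /svn>
# block at SVNPath /var/svn/myproject, and restart Apache
sudo service apache2 restart
```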
You can run a Subversion server on Windows or Linux (or whatever) so it really doesn't matter. Pick whichever one you already have and feel most comfortable with. Since you are a Windows developer I see no real reason to toss Linux into the mix though.
If your goal is to minimize the amount of work you put into maintaining Subversion, go with the OS you are most comfortable with. Many maintenance scripts and Subversion hooks are written and available in Perl and Python, which are available for both Windows and Linux.
One advantage to the Windows server OSes over their client counterparts is that the client OSes are limited as to the number of inbound connections. If you are going to be the only person working on the repo, this may not make a difference. However, if there are multiple people, then this would be an issue. XP Pro/Vista Ultimate are limited by Microsoft to 10 inbound connections. I cannot speak for Windows 7.
To make life easy, try VisualSVN Server. For personal projects there's no reason to setup a separate server just for SVN.
Windows 7 will be able to host Subversion with no problems whatsoever.
If your file server is already set up and working under Windows 7, I'd say stick with that. Adding SVN is no reason to install a new OS.
You don't need a server at all to use subversion.
If you've already got a file server on your home network, and you're doing this only for you and your personal projects, just use a subversion client such as TortoiseSVN and create your repository (or repositories) on your file server via network share (or mapped network drive, etc).
I wouldn't recommend this for multi-user setups (unless each has their own repository), but for a single user this is the simplest option. And using this approach, to answer your question, you wouldn't gain anything by switching to a server OS such as Windows Server 2008.
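For that single-user setup the whole thing is a couple of commands, or the equivalent clicks in TortoiseSVN ("Create repository here", then a checkout); the drive letter and paths below are hypothetical:

```
# One-time: create the repository on the file server
# (Z: is a hypothetical network drive mapped to the file server share)
svnadmin create Z:/repos/myproject

# From any workstation that maps the same drive: check out a working copy
# straight over the share -- no server process involved
svn checkout "file:///Z:/repos/myproject" myproject
```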
I'd actually recommend going with a hosted Subversion provider instead of setting up Subversion on Windows or getting a second server for that purpose. I work for ProjectLocker, but if you Google "subversion hosting", you'll see there are a number of providers that offer free or reasonably priced solutions. The advantages:
It's a hosting provider's primary job to keep your code safe, secure, and accessible, so they focus on uptime, backups, and security monitoring so you don't have to
You don't have to learn how to be a system administrator or Subversion administrator; several providers have user interfaces that make it easy to manage users and permissions.
Hosting instead of DIY lets you focus on what you actually care about: writing great software
I suggest you take a look at ProjectLocker and some of the other providers and decide which one is right for you. You may decide that doing it yourself is the best option for you, but for many people in your situation, a hosted solution has met their needs.

How to Setup a Low cost cluster

At my house I have about 10 computers, all with different processors and speeds (all x86 compatible). I would like to cluster these. I have looked at openMosix, but since development on it has stopped I am deciding against using it. I would prefer to use the latest or next-to-latest version of a mainstream Linux distribution (SUSE 11, SUSE 10.3, Fedora 9, etc.).
Does anyone know any good sites (or books) that explain how to get a cluster up and running using free, open source applications that are common on most mainstream distributions?
I would like a load-balancing cluster for custom software I will be writing. I can't use something like Folding@home because I need constant contact with every part of the application. For example, if I were running a simulation, one computer might control where rain is falling while another controls what my herbivores are doing in the simulation.
I recently set up an OpenMPI cluster using Ubuntu. An existing write-up is at https://wiki.ubuntu.com/MpichCluster .
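Once the nodes have the MPI packages installed and share SSH keys, running something across them is essentially this (hostnames, slot counts, and the program name are made up):

```
# hosts file listing the machines and how many processes each should run
cat > hosts <<EOF
node1 slots=2
node2 slots=4
node3 slots=2
EOF

# Compile and launch an MPI program across all of them from the head node
mpicc rain_simulation.c -o rain_simulation
mpirun -np 8 --hostfile hosts ./rain_simulation
```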
Your question is too vague. What cluster application do you want to use?
By far the easiest way to set up a "cluster" is to install Folding@home on each of your machines. But I doubt that's really what you're asking for.
I have set up clusters for music/video transcoding using simple bash scripts and ssh shared keys before.
I manage mail server clusters at work.
You only need a cluster if you know what you want to do. Come back with an actual requirement, and someone will suggest a solution.
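For what it's worth, the bash-and-SSH approach mentioned a couple of answers up can be as simple as looping over hosts and handing each one a file to process. This sketch assumes passwordless SSH and a filesystem mounted at the same path on every node; the hostnames and the transcode command are placeholders:

```
#!/bin/bash
# Naive work distribution over SSH: hand out files round-robin, one job per node.
nodes=(node1 node2 node3)
i=0
for f in /srv/videos/*.avi; do
  host=${nodes[i % ${#nodes[@]}]}
  # Needs passwordless SSH keys and the same path mounted on every node.
  ssh "$host" "ffmpeg -i '$f' '${f%.avi}.mp4'" &
  i=$((i + 1))
done
wait
```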
Take a look at Rocks. It's a full-blown cluster "distribution" based on CentOS 5.1. It installs everything you need (libs, applications and tools) to run a cluster and is dead simple to install and use. You do all the tweaking and configuration on the master node, and it helps you kickstart all your other nodes. I recently installed a 1200+ node (over 10,000 cores!) cluster with it, and I would not hesitate to install it on a 4-node cluster, since installing the master takes hardly any effort!
You can either run applications written for cluster libraries such as MPI or PVM, or you can use the queue system (Sun Grid Engine) to distribute any type of job. Or use distcc to compile the code of your choice on all nodes!
And it's open source, GPL, free, everything that you like!
I think he's looking for something similar to openMosix: some kind of general cluster on top of which any application can run distributed among the nodes. AFAIK there's nothing like that available. MPI-based clusters are the closest thing you can get, but I think you can only run MPI applications on them.
Linux Virtual Server
http://www.linuxvirtualserver.org/
I use PVM and it works. But even with just a nice SSH setup that allows logging into the machines without entering a password, you can easily launch commands remotely on your different compute nodes.
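Setting up that passwordless login and then firing a command at every node looks roughly like this (node names are placeholders):

```
# One-time: generate a key and copy it to each compute node
ssh-keygen -t rsa
for host in node1 node2 node3; do
  ssh-copy-id "$host"
done

# After that, commands run remotely without a password prompt
for host in node1 node2 node3; do
  ssh "$host" uptime
done
```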

Resources