Log in to an existing PuTTY session on the main server

I have a Minecraft server running Ubuntu Server. What I've been doing so far is using PuTTY to open a new session from my PC. The problem is that eventually, like most human beings, I go to sleep, and I don't like leaving my PC on because of the noise. If I turn off my PC, that session obviously ends and the server goes down. What I want to do is create a session on the main server (not my PC) and use PuTTY to attach to that existing session from my PC. That way, even if I turn off my PC, the server keeps running in the other room. I appreciate any feedback. Thanks in advance.

What you are describing is referred to as a daemon in UNIX. The standard way to manage daemons is through the init system. The init system has been around for a very long time and has diverged quite a bit between BSD, Solaris, and the various Linux distros.
All init systems provide the same basic functionality: they manage long-running processes that are not tied to a user login. They are frequently used to manage server-oriented processes, such as web servers.
Where the init systems differ is in their usage and the features they provide. Ubuntu uses an init replacement called Upstart, and it is very well documented.
You could write the needed Upstart scripts yourself, but a quick search turns up plenty of Upstart scripts other people have created for Minecraft that you could use instead.
http://www.minecraftwiki.net/wiki/Tutorials/Ubuntu_startup_script
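If you'd rather roll your own, a minimal Upstart job is just a small file in /etc/init/. A rough sketch (the path, user name, and Java flags below are placeholders, not taken from the linked script) might look like:

# /etc/init/minecraft.conf: rough sketch, adjust paths, user, and memory to your setup
description "Minecraft server"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec su -s /bin/sh -c 'cd /srv/minecraft && exec java -Xmx1024M -jar minecraft_server.jar nogui' minecraft

You would then control it with sudo start minecraft and sudo stop minecraft, and it comes up automatically at boot.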

You are looking for tmux or screen.
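For example, assuming the server jar lives in ~/minecraft (adjust the names and memory flags to your setup), something like this keeps the server alive after you disconnect:

cd ~/minecraft
# start a detached screen session named "minecraft" that runs the server
screen -dmS minecraft java -Xmx1024M -jar minecraft_server.jar nogui
# reattach to it from any later PuTTY login; detach again with Ctrl-A then D
screen -r minecraft

The tmux equivalents would be tmux new -d -s minecraft 'java -Xmx1024M -jar minecraft_server.jar nogui' to start and tmux attach -t minecraft to reattach.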

Related

In GCP, what is the difference between SSH'ing into a VM and using Cloud Shell?

I'm trying to learn ML on GCP. Some of the Qwiklabs and tutorials start with Cloud Shell to set up things like environment variables and install Python packages, while others start by opening an SSH terminal into a VM to do those preliminary steps.
I can't really tell the difference between the two approaches, other than the fact that in the second case a VM needs to be provisioned first. Presumably, when you use Cloud Shell some sort of VM instance is being provisioned for you behind the scenes anyway.
So how are the two approaches different?
Cloud Shell is a product designed to give you a large set of preconfigured tools that are kept up to date, as well as being quick to start, accessible from the UI, and free. Basically, it's a quick way to get an interactive shell. You can learn more about this environment from its documentation.
There are also limits to Cloud Shell: you can only use it for 60 hours a week, your session is terminated if you go idle, and there is only 5 GB of storage. It is also only an f1-micro instance, IIRC. So while it is provisioned for you (and free!), it isn't really useful for anything other than an interactive shell.
On the other hand, SSHing into a VM places you directly in a terminal on that VM, much like on any specific host: you only have whatever tools the image installed on that VM provides (and many VMs come pretty bare-bones; it depends on the image). But you're now in a terminal on the host that is likely executing the code you want to work with, and it has as much CPU and RAM as you provisioned for that instance.
As far as guides pointing you to one or the other, that's really up to them, but I suspect they'd point client/tool-type work to Cloud Shell (since it's easy and a reasonably standard environment, which can even be scripted with tutorials), while they'd probably point instructions for installing software for production use at a "real" VM.
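To make the distinction concrete, here is roughly how each looks from the gcloud CLI (the instance name and zone below are made up, and depending on your gcloud version the Cloud Shell command may still live under an alpha/beta command group):

# drop into a Cloud Shell session from your local terminal
gcloud cloud-shell ssh
# SSH into a VM you provisioned yourself
gcloud compute ssh my-training-vm --zone=us-central1-a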

Is it possible to run programs locally from a terminal services remote app?

First, I guess I'd have to figure out if I'm running remotely, and second, I'd have to figure out whether my remote connection is a standalone remote app or an app running on a terminal server (that may be tricky).
But once I've figured out all those awful things, is there a way to run a Windows function like ShellExecute locally instead of remotely?
The reason I'd want to do this is that I launch a web browser to view rather high-bandwidth things that require JavaScript and Flash, and certain sysadmins who administer our product aren't too keen on having to make unnecessary and insecure modifications to their terminal server farm.
Yes, if the clients are running Windows and you can install software on them.
See Remote Desktop Services Virtual Channels in MSDN.
There is a free tool that does exactly what you want. I got the reference from the TechNet forums; it's named Remote Executer, from http://www.mqtechnologies.com
Good luck

How to spawn Linux process from Windows application?

My interactive 32-bit Windows app (now moving from Delphi [Ent] 2007 to 2009) uses command-line interactions to spawn child processes that do computationally intensive tasks, which in turn write text files that the GUI parent app parses and analyzes, resulting in an interactive graphical display of the results.
I have access to a multiprocessor (multi-user) Linux cluster (via ssh), and would like to off-load the heavy lifting to that cluster. My question is how to spawn the processes on Linux from my Windows app. I can envision using secure FTP to put and get files, but I'm not sure how to spawn the child processes on Linux.
Some leads for further reading would be fine - but code/pseudocode would be ideal. I can imagine that this may be more about Windows-Linux interaction than Delphi.
If you have access to ssh, one option is to issue commands through that.
For example:
ssh user@host ls -l ~
will list the files in the remote user's home directory in your terminal. I'm not sure if this is what you really want, but it would likely work.
If you do this, you almost certainly want to set up passwordless SSH logins (key-based authentication).
However, a better solution would likely be to set up a daemon on the Linux boxes whose sole job is to run specific long-running tasks in the background and let you fetch the results later.
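As a rough middle ground (the file names and job script here are placeholders), you can start a long-running job over ssh, detached from your session, and collect the output later:

# upload the input, start the job in the background, then log out
scp input.txt user@host:work/
ssh user@host 'cd work && nohup ./heavy_job input.txt > result.txt 2>&1 &'
# some time later, pull the results back down
scp user@host:work/result.txt .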
You're going to have to install something on the Linux machine to run the process. You might find some kind of clustering or batch-job-submission API you can install and access from Windows. You might have to code a custom server. You might be able to run everything over ssh if you can drive an ssh process from Windows and have sshd installed on the Linux side. But my preference would be to write a web service or simple CGI script on the Linux side designed to take your arguments and data and return the result over plain old http (or https, as the case may be).
One way or another, this is going to encompass more than just coding on the Windows side.
I would download the full PuTTY package.
As well as the excellent secure shell terminal, it includes PSCP to transfer files securely and PLINK to execute commands remotely over SSH.
Hint: you will need to set up the full public/private key configuration for PLINK to work without an annoying password prompt. There is a useful guide at http://unixwiz.net/techtips/putty-openssh.html
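For example (the host, key file, and script names here are invented), the Windows side could shell out to something like:

pscp -i mykey.ppk input.txt user@linuxhost:work/
plink -i mykey.ppk user@linuxhost "cd work && ./run_job.sh input.txt"
pscp -i mykey.ppk user@linuxhost:work/results.txt .

Your Delphi app can launch these with CreateProcess and parse the files it gets back, the same way it already handles local child processes.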

Tool to monitor network connectivity in Windows

What tool would you recommend to monitor the connectivity status of a machine, that is, whether a given machine is able to connect to some web servers over time? It should be able to log the status.
There is a long list of freeware at http://ping-monitors.qarchive.org/
I tend to use Nagios and OpenNMS to monitor large batches of servers (and in a Unix environment, not Windows). However, some pure Windows-only shops I've worked with have really liked using WhatsUp Gold. Alternatively, a combination of a quick Perl script, the LWP library from CPAN, and the scheduled task manager would probably do the trick too.
When we had to do something similar, we just mocked up some VBScript to attempt to connect to the machines we needed to log (obviously behind the firewall, on the same domain) and dumped the logs into Excel. Quick and dirty for some network diagnostics, but not a long-term solution.
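If you go the quick-script route, the core of it is just a timed loop that tries each URL and appends the result to a log. A minimal sketch (the URLs and interval are placeholders; the same logic translates directly to a Perl/LWP script run from the scheduled task manager on Windows):

while true; do
  for url in https://example.com https://intranet.example/health; do
    if curl -s --max-time 10 -o /dev/null "$url"; then
      echo "$(date) UP   $url" >> connectivity.log
    else
      echo "$(date) DOWN $url" >> connectivity.log
    fi
  done
  sleep 60
done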

Simplest way to get access to a remote server for computing tasks

I'm working on some academic research projects involving scraping large data sets from the web using Python. It's been inconvenient to work on my academic institution's Linux server because (1) I don't have superuser access, meaning I'm dependent on the IT staff to install my packages, and (2) my disk quota is somewhat limited (I would ideally want ~10 GB). What is the simplest way for me to get access to a machine that solves these problems? I don't need huge processing power; I just need access to a reasonably fast machine that runs 24/7, so that my programs can run continuously, and above all, something very simple to get running, use, and maintain, since I have a few non-CS people working on this project with me. Linux would be preferable, but I'd consider Windows too.
I'm aware of Amazon Web Services, but am wondering if there's something more appropriate to my specific needs.
By the way, it would be a huge bonus if I could get some sort of remote desktop access to this machine so I wasn't limited to using SSH and SFTP.
Suggestions?
EDIT: I can't use VirtualBox or Virtual PC because I need the program to be running around the clock, and I need to turn off my laptop often, etc.
If you do want to stick with running on your CS department's machines, use virtualenv to solve your package installation woes. And if disk space is an issue, you could use S3 (and perhaps FUSE) to store huge amounts of data extremely cheaply.
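For example (the package names are just placeholders for whatever you scrape with), everything installs into your home directory with no superuser access needed:

# create an isolated Python environment and install packages into it
virtualenv ~/scraper-env
source ~/scraper-env/bin/activate
pip install requests beautifulsoup4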
However, if that's not really what you're after, I can recommend Slicehost very highly. They give you a virtual private server - so you have complete control over what gets installed, users, admin, etc.
In principle, it's very much like EC2 (which I prefer to use for "real" servers), but has a friendly interface, great customer service and is aimed at smaller projects like yours.
Use x11vnc with SSH.
Run sudo apt-get install x11vnc on your remote server.
Once you have that, you can access your remote server via VNC, and the great thing is that you can tunnel VNC over SSH like so:
ssh -X -C -L 5900:localhost:5900 remotehost x11vnc -localhost -display :0
For more details see the x11vnc manpage.
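With the tunnel from that command still up, you would then point a VNC viewer on your local machine at the forwarded port (the exact viewer command depends on your client; this assumes a stock vncviewer):

vncviewer localhost:0   # display :0 is port 5900, i.e. the tunnelled end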
Or just set up remote desktop (which is actually VNC) on your Linux distribution. Most distributions come with a GUI to configure remote desktop access.
If you have a Linux machine you can use, then ssh -X will allow you to start GUI programs. It's not remote desktop, but it's close.
ssh -X whoever@whatever.com
firefox
Then bam. A Firefox window pops up on your desktop.
I have been pretty happy with TekTonic Virtual Private Servers. It's a virtualized environment, but you have full root access to install any packages you need. I'm not sure what your CPU and memory constraints are, but if they aren't too extensive then this should fit the bill nicely for you. I don't know if you would be able to enable a remote desktop, as I've never tried, but it may be possible to install the requisite packages.
The plans range from $15/mo to $100/mo, the $15/mo plan comes with 294MB RAM, 13GB disk space, and 2.6GHz max CPU speed. I ran on that plan for quite a while and eventually moved up to the next level up with double the disk/cpu/mem, and I've been quite happy with it. I've been with them since 2003 and have yet to find anyone who offers equivalent plans at these prices.
