I've gotten sick of how many steps it takes me to get started in the morning. Yes, it only takes a few minutes to start up my whole environment, but I'd really rather run a single command on boot-up and be ready to go immediately.
I'm writing a Rails app connected to SQL Server. To develop against it I keep a local copy of the DB on a VM. My manual process goes like this:
Run VirtualBox.
Start the VM.
When the VM is done booting:
Open terminal
Run `rails s`
When rails is done starting:
open browser
navigate to localhost:3000 and start developing
Run Sublime
I'd love to do this in one script:
VirtualBox --startvm Windows7 &
sublime &
google-chrome &
But I can't figure out how to run this command only once the VM is done booting:
gnome-terminal --working-directory=git/my_project --tab -e 'rails s' --tab -e 'git status'
Also, it'd be nice (but not necessary) to have chrome start after rails s has succeeded.
Is this even possible?
I'm not opposed to polling, but it feels like this is something VirtualBox should be able to do a bit more naturally.
EDIT
From Comment:
I'm using Host-Only network with two Bridged Interfaces (one for wireless and one for wired) available. (It allows me to use the VM whether or not I'm connected to a network, and lets me freely switch between wired and wireless without noticing the difference).
Here is how I would do it:
In the VM, create a script that finds the default gateway and keeps pinging it, and add it to the user's startup. (This needs parsing of ipconfig /all, which can be done with VBScript or Python.)
On the host, look at the network interface between host and VM. Find the default gateway on the host (parse route -n output in a bash script). Since both use the same physical interface, the gateway will be the same (assuming NAT and one physical interface). Then use tcpdump to wait for the ping packets to the gateway.
"Default gateway" was chosen because it is something the host and the VM can find out independently of each other. The alternative was to hard-code the host's address.
Once tcpdump on the host exits, it means the VM is alive and has booted up to the Windows desktop.
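A minimal sketch of the host side, assuming the interface VirtualBox uses between host and VM is vboxnet0 (adjust to whatever interface actually carries the guest's traffic):
#!/bin/bash
# Wait until the guest's startup script starts pinging the default gateway.
GATEWAY=$(route -n | awk '$1 == "0.0.0.0" {print $2; exit}')
echo "Waiting for ping traffic from the VM to $GATEWAY ..."
# -c 1 makes tcpdump exit after the first matching ICMP packet.
sudo tcpdump -i vboxnet0 -c 1 "icmp and dst host $GATEWAY" > /dev/null 2>&1
echo "VM is up; launching the rest of the environment."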
I looked into this line of inquiry before, and I think Devil's Pie is the closest you can get to setting that up:
http://burtonini.com/blog/computers/devilspie
You could try starting with this (VBoxManage startvm):
How to automatically start and shut down VirtualBox machines?
and then look at some working scripts you can add to your init.d and/or rc.local to finish the rest of the job, in order, once your VM is up:
Get To Know Linux: The /etc/init.d Directory
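For example, a minimal rc.local-style sketch might look like this ("Windows7" is the VM name from the question; "devuser" is an assumed login that owns the VM, since VBoxManage must run as that user):
#!/bin/sh
# Start the VM headless at boot, as the user who registered it with VirtualBox.
su - devuser -c 'VBoxManage startvm "Windows7" --type headless' &
exit 0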
I needed to orchestrate something similar. I'm using a Windows VM (guest) as a proxy (it runs a Windows-only corporate VPN client) for my Linux laptop (host). The approach is to fully automate the guest and wait until it's ready:
The host must have no funky routes (yet)
The VM starts and runs a powershell script (via Windows Task Scheduler, run-on-startup) that connects the VPN client and sets up ICS (Internet Connection Sharing, basically routing).
The host now adds funky routes that send some traffic via the VM's host-only interface. If it added these routes too soon, step 2 would not work.
The VM also runs squid (http proxy) and its port is NAT port forwarded from the host, so localhost:3128 actually goes to the guest. So a curl using this proxy goes to the corporate network and indicates whether the guest is fully up and connected.
(Squid is also useful as a backup to this complicated but very convenient mechanism; I can still ssh via corkscrew, etc.)
So, I run this script on the host (simplified version shown):
#!/bin/bash
VM=vm #Name of the Virtual Machine
SCRIPT_DIR=/some/dir
PROXY_ADDRESS=localhost:3128
REMOTE_CURL_HOST=any.corporate.hostname
function waitloop() {
echo -n "Waiting to hear from $REMOTE_CURL_HOST "
while ! curl -s -m 5 --proxy $PROXY_ADDRESS $REMOTE_CURL_HOST > /dev/null ; do
echo -n .
sleep 10
done
echo "!"
}
# a separate script that takes down my routes, you may not need this.
bash $SCRIPT_DIR/network-config-vboxnet0.sh down
# error is OK if it's already running
vboxmanage startvm $VM
waitloop && bash $SCRIPT_DIR/network-config-vboxnet0.sh up && echo "Completed"
Essentially, the script waits until curl works through the VM.
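Applied to the original question, the same wait-then-launch pattern might look roughly like this (a sketch only: the host-only IP 192.168.56.101, the SQL Server port 1433, and the editor launcher name are assumptions):
#!/bin/bash
VBoxManage startvm "Windows7"
sublime &                                  # or subl/sublime-text, depending on install
until nc -z -w 2 192.168.56.101 1433; do sleep 5; done   # wait for SQL Server in the VM
gnome-terminal --working-directory=git/my_project --tab -e 'rails s' --tab -e 'git status'
until curl -s -o /dev/null http://localhost:3000; do sleep 2; done  # wait for rails
google-chrome http://localhost:3000 &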
Related
I want to run MS SQL Server (docker image: microsoft/mssql-server-windows-developer) in a Docker container. Both the host and the container run Windows. Afterwards, the database should be accessible from the host (using SQL Management Studio) by a useful name (so that the instructions can be re-used). However, Docker generates a seemingly random IP, which is not very useful, especially as it changes on every call to docker run.
So, I would like to give the container a hostname that is accessible from the host machine (e.g. by SQL Management Studio). I'd like to avoid a mere IP here, but it would suffice if no better solution presents itself.
Creating a network in docker did not work, as this functionality apparently is only supported under Linux.
--network-alias also failed.
The run command looks like this:
docker run -d -p 1433:1433 -e sa_password=1234qwerT -e ACCEPT_EULA=Y --name docker_sql microsoft/mssql-server-windows-developer
This is very similar to this question here: How to get a Docker container's IP address from the host?
I think you can achieve what you want by way of a 2 step process:
Obtain the container id for your container as part of your docker run command.
Use docker inspect to get the container's IP address.
If you really don't want to use the IP address, then you can always add the IP address to your hosts file, but simply using the IP address as a shell variable should be almost as useful.
So, for example, from a bash shell:
CID=$(docker run -d ubuntu /bin/sh -c 'while /bin/true; do sleep 10 ; done')
IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $CID)
Now you can use $IP within scripts as you see fit. (Substitute the CID=... line with whatever docker run command you are using to start your container).
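If you really do want a name rather than a raw IP, a hedged follow-up is to append the inspected address to your hosts file (on Windows that file is C:\Windows\System32\drivers\etc\hosts and needs an elevated shell; the name docker_sql simply reuses the container name):
echo "$IP docker_sql" | sudo tee -a /etc/hosts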
As per bluescores' comment, and after stumbling upon this related question, I tried it and verified that connecting to localhost is possible - so there is actually no need to configure a name or an IP for the container's SQL Server.
The general problem might persist for other applications, but for what I want to achieve currently, localhost will suffice.
I have set up an AWS EC2 environment based on Ubuntu 14.04 and configured vncserver on it. After everything is done, I am able to connect to the EC2 instance with VNC Viewer and see the desktop. However, after the viewer sits idle for a while, the connection is dropped and I get the error
"Too many authentication failures"
After I restart vncserver (via ssh into the EC2 instance), I am able to use VNC Viewer to connect again. Is there a way to avoid the error so that the connection is not dropped?
I faced the same scenario. For me, this happened because multiple vncserver sessions were running on my server. Do the following steps:
Step 1: See the multiple VNC sessions running on your server.
You will see multiple process IDs running. (If not, still proceed to the next steps)
$ pgrep vnc
72063
119177
This is because you have run vncserver command multiple times on the server.
Step 2: Kill all processes from step 1
$ kill 72063
$ kill 119177
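If there are many stale sessions, a shortcut (assuming you really do want to kill every vnc-related process for your user, i.e. the same ones pgrep vnc listed) is:
$ pkill vnc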
Step 3: Restart the VNC session
$ vncserver
Step 4: Verify whether it is working.
$ nc 104.197.91.140 5901
// alternatively you can use telnet
$ telnet 104.197.91.140 5901
// the response should look like this
RFB 003.008
Simply try loading the VNC viewer session again
You might try these commands:
# echo $DISPLAY
# ps -aef | grep sesman
# netstat -natp | grep vnc
If memory serves, once you get to more than ten no-longer-established vnc sessions, some VNC clients no longer allow additional connections. In that case, you need to kill the vnc processes whose connections no longer have the ESTABLISHED status.
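A hedged way to spot them is to filter out the established ones:
# netstat -natp | grep -i vnc | grep -v ESTABLISHED
Then kill each PID shown in the last column, as in the other answer.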
For a programming project I have to do some strange setup. Now, first of all, I have root rights on both servers, and I think an ssh tunnel is the best way (if you have a better idea, please feel free to tell me).
I have to write a piece of software running on an IRC server. That is not difficult, but the IRC server is only reachable on localhost. So I have to ssh to the box first and then use irssi or similar to connect to localhost:6667
Now I tried to set up an ssh tunnel from a second server (where I have irssi running all the time) to that box, so that I can reach its localhost through the tunnel, something like:
ssh -f user@server2 -L 2000:server2:6667 -N
Now this is not working as expected when I use irssi to connect to localhost:2000. I don't understand why, do you have any hint? I would be glad if you could help me.
Regards
Remember that the address you tunnel to (server2:6667 in your case) is resolved from the point of view of the destination. For example: I have a VPS running with ssh installed. If I do ssh -f user@vps -L 2000:localhost:3306 I can connect to the MySQL server running on it (which is only listening on the loopback interface).
So assuming the IRC server is running on server2 you should do:
you@server1:~$ ssh -f you@server2 -L 2000:localhost:6667 -N
You can then connect to localhost:2000 (on server1) with your IRC client and get a connection to the IRC-server running on server2.
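For completeness, pointing the IRC client at the tunnel then looks like this (using irssi's -c/-p options):
you@server1:~$ irssi -c localhost -p 2000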
I need to automatically copy files from a linux machine to a windows one every day.
I'm looking for something simple and secure like scp, rsync, or sftp. Unfortunately, I'm at a loss as to how to set this up on the Windows machine.
Does anyone know how to do this?
You can try mounting the Windows drive as a mount point on the Linux machine, using smbfs; you would then be able to use normal Linux scripting and copying tools such as cron and scp/rsync to do the copying.
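A minimal sketch of that mount (cifs on newer kernels; the share name "backup", the mount point, and the credentials are assumptions):
sudo mkdir -p /mnt/winshare
sudo mount -t cifs //WINDOWSHOST/backup /mnt/winshare -o username=winuser,password=secret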
You can find rsync for windows in cygwin, with that you can setup a rsync server on the windows box and run a cron job on your linux machine rsync'ing all the files to the windows machine. We used to do that and it worked fine.
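A hedged example of the Linux-side crontab entry, assuming the cygwin rsync daemon on the Windows box exposes a module named "backup":
# push /path/to/dir to the Windows rsync daemon at 02:00 every night
0 2 * * * rsync -az /path/to/dir/ rsync://windowshost/backup/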
"I'm at a loss of how to set this up on the Windows machine." Windows is the client or the server? At a loss means what, specifically? What can't you do?
"linux machine to a windows" can be done two ways.
Linux is client. Windows runs an FTP or SCP or SSH server. Linux has a client and pushes the file to Windows. Look at FileZilla for free windows FTP server. Also, windows often has an FTP service that's turned off. Turn it on.
Windows is client. Windows periodically pulls the file from the linux server. This is easier, since Linux already has all the necessary servers available. You do, howeveevr, need to start them on Linux.
There are scores of sftp, scp clients for Windows. Windows comes with an ftp client. Google for sftp client. You'll find WinSCP, Putty, filezilla, and list free country list of sftp clients.
I haven't used it in years now, but you could try Unison from http://www.cis.upenn.edu/~bcpierce/unison/
It could be done with 'smbclient', which acts much like an FTP client to a Windows share. Check out the manpage: man smbclient and look for ways to script it with the -c option, or man expect to drive it.
Here's how I'd probably do it though:
Pick which user you're going to be when you sync the files. Log in as this user and type 'id', and get the numeric ID. You will use this ID in step 4.
Become 'root'
mkdir /mnt/sharename
Edit your /etc/fstab file and add an entry something like this. Replace the user ID of 500 with your user ID. Replace sharename with your windows share name. Replace WINDOWSHOSTNAME with your host name or IP address. If you don't know the shares, run smbclient -L WINDOWSHOSTNAME.
//WINDOWSHOSTNAME/sharename /mnt/sharename cifs credentials=/root/smblogin,uid=500,noauto,user 0 0
Edit /root/smblogin and put the following two lines in it
username=YOUR_WINDOWS_USERNAME
password=YOUR_WINDOWS_PASSWORD
Log in as the user from step 1.
Try mounting the share: mount /mnt/sharename
If that succeeds, then write a script to do it automatically. Let's call it 'backup.sh':
#!/bin/sh
# Mount the share if it is not already mounted, then copy the files across.
df | grep -q /mnt/sharename
if test $? -ne 0 ; then
    mount /mnt/sharename
fi
cp -r /path/to/dir /mnt/sharename/destination/
Use cron to run the script.
Type crontab -e
Put the following in the file:
PATH=/bin:/usr/bin
# Backup at 2:15 A.M. every day. Run 'man 5 crontab' for help on the time format
15 2 * * * /path/to/backup.sh
You may try WinSCP and its scripting support. And Windows supports some kind of cron-like operation in its management stuff (the Task Scheduler), doesn't it?
I have a small local network. Only one of the machines is available to the outside world (this is not easily changeable). I'd like to be able to set it up such that ssh requests that don't come in on the standard port go to another machine. Is this possible? If so, how?
Oh and all of these machines are running either Ubuntu or OS X.
Another way to go would be to use ssh tunneling (which happens on the client side).
You'd do an ssh command like this:
ssh -L 8022:myinsideserver:22 paul#myoutsideserver
That connects you to the machine that's accessible from the outside (myoutsideserver) and creates a tunnel through that ssh connection to port 22 (the standard ssh port) on the server that's only accessible from the inside.
Then you'd do another ssh command like this (leaving the first one still connected):
ssh -p 8022 paul#localhost
That connection to port 8022 on your localhost will then get tunneled through the first ssh connection, taking you to myinsideserver.
There may be something you have to do on myoutsideserver to allow forwarding of the ssh port. I'm double-checking that now.
Edit
Hmmm. The ssh manpage says this: **Only the superuser can forward privileged ports.**
That sort of implies to me that the first ssh connection has to be as root. Maybe somebody else can clarify that.
It looks like superuser privileges aren't required as long as the forwarded port (in this case, 8022) isn't a privileged port (like 22). Thanks for the clarification Mike Stone.
@Mark Biek
I was going to say that, but you beat me to it! Anyways, I just wanted to add that there is also the -R option:
ssh -R 8022:myinsideserver:22 paul#myoutsideserver
The difference is which machine you are connecting to and from. My boss showed me this trick not too long ago, and it is definitely really nice to know. We were behind a firewall and needed to give external access to a machine; he got around it by using ssh -R to another machine that was accessible. Connections to that machine were then forwarded into the machine behind the firewall, so you need to use -R or -L based on which machine you are on and which one you are ssh-ing to.
Also, I'm pretty sure you are fine using a regular user as long as the port you are forwarding (in this case 8022) is not below the restricted range (which I think starts at 1024, but I could be mistaken), because those are the "reserved" ports. It doesn't matter that you are forwarding it to a "restricted" port, because that port is not being opened on your side (the remote machine just has traffic sent to it through the tunnel and has no knowledge of the tunnel); port 8022 IS being opened and so is restricted as such.
EDIT: Just remember, the tunnel is only open so long as the initial ssh remains open, so if it times out or you exit it, the tunnel will be closed.
(In this example, I am assuming port 2222 will go to your internal host. $externalip and $internalip are the ip addresses or hostnames of the visible and internal machine, respectively.)
You have a couple of options, depending on how permanent you want the proxying to be:
Some sort of TCP proxy. On Linux, the basic idea is that before the incoming packet is processed, you want to change its destination—i.e. prerouting destination NAT:
iptables -t nat -A PREROUTING -p tcp -i eth0 -d $externalip --dport 2222 --sport 1024:65535 -j DNAT --to $internalip:22
Using SSH to establish temporary port forwarding. From here, you have two options again:
Transparent proxy, where the client thinks that your visible host (on port 2222) is just a normal SSH server and doesn't realize that it is passing through. While you lose some fine-grained control, you get convenience (especially if you want to use SSH to forward VNC or X11 all the way to the inner host).
From the internal machine: ssh -g -R 2222:localhost:22 $externalip
Then from the outside world: ssh -p 2222 $externalip
Notice that the "internal" and "external" machines do not have to be on the same LAN. You can port forward all the way around the world this way.
Forcing login to the external machine first. This is true "forwarding," not "proxying"; but the basic idea is this: You force people to log in to the external machine (so you control on who can log in and when, and you get logs of the activity), and from there they can SSH through to the inside. It sounds like a chore, but if you set up simple shell scripts on the external machine with the names of your internal hosts, coupled with password-less SSH keypairs then it is very straightforward for a user to log in. So:
On the external machine, you make a simple script, /usr/local/bin/internalhost, which simply runs ssh $internalip (a sketch is shown after this list)
From the outside world, users do: ssh $externalip internalhost and once they log in to the first machine, they are immediately forwarded through to the internal one.
Another advantage to this approach is that people don't get key management problems, since running two SSH services on one IP address will make the SSH client angry.
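A hedged sketch of that wrapper script on the external machine (10.0.0.5 stands in for $internalip):
#!/bin/sh
# /usr/local/bin/internalhost -- hop from the external machine to the internal one.
# -t keeps an interactive terminal for the second hop.
exec ssh -t 10.0.0.5 "$@"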
FYI, if you want to SSH to a server and you do not want to worry about keys, do this
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
I have an alias in my shell called "nossh", so I can just do nossh somehost and it will ignore all key errors. Just understand that you are ignoring security information when you do this, so there is a theoretical risk.
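For reference, that alias could be defined like this (a sketch, e.g. in ~/.bashrc):
alias nossh='ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'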
Much of this information is from a talk I gave at Barcamp Bangkok all about fancy SSH tricks. You can see my slides, but I recommend the text version as the S5 slides are kind of buggy. Check out the section called "Forward Anything: Simple Port Forwarding" for info. There is also information on creating a SOCKS5 proxy with OpenSSH. Yes, you can do that. OpenSSH is awesome like that.
(Finally, if you are doing a lot of traversing into the internal network, consider setting up a VPN. It sounds scary, but OpenVPN is quite simple and runs on all OSes. I would say it's overkill just for SSH; but once you start port-forwarding through your port-forwards to get VNC, HTTP, or other stuff happening; or if you have lots of internal hosts to worry about, it can be simpler and more maintainable.)
You can use Port Forwarding to do this. Take a look here:
http://portforward.com/help/portforwarding.htm
There are instructions on how to set up your router to port forward request on this page:
http://www.portforward.com/english/routers/port_forwarding/routerindex.htm
In Ubuntu, you can install Firestarter and then use its Forward Service feature to forward the SSH traffic from a non-standard port on your machine with external access to port 22 on the machine inside your network.
On OS X you can edit the /etc/nat/natd.plist file to enable port fowarding.
Without messing around with firewall rules, you can set up a ~/.ssh/config file.
Assume 10.1.1.1 is the 'gateway' system and 10.1.1.2 is the 'client' system.
Host gateway
Hostname 10.1.1.1
LocalForward 8022 10.1.1.2:22
Host client
Hostname localhost
Port 8022
You can open an ssh connection to 'gateway' via:
ssh gateway
In another terminal, open a connection to the client.
ssh client