I develop code that runs against file systems on multiple computers. The data directory is synced across computers, but its location on each computer is different. To handle this, I set up a paths file where the beginning of the path depends on which computer the code is running on, and I determine which computer that is using the hostname function.
This has started causing problems when I run code on my local machine, because hostname's output depends on the particular network I'm connected to, which can be pretty random if I'm using Wi-Fi at a coffee shop.
Is there an alternative to hostname that returns something identifying my local machine and is not affected by my particular network connection?
There are a number of options for this. You could get the Mac's serial number or hardware UUID:
SerialNumber=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformSerialNumber/{print $(NF-1)}')
HardwareUUID=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformUUID/{print $(NF-1)}')
There's also the hardware address of the first network interface:
en0MAC=$(ifconfig en0 | awk '$1=="ether" {print $2}')
If you prefer to use the computer name as it's set in the Sharing pane of System Preferences (or the mDNS compatible version of it):
ComputerName=$(scutil --get ComputerName)
LocalHostName=$(scutil --get LocalHostName)
Warning: the computer name may contain spaces and other weird characters, so be extra-sure to double-quote any references to that variable to avoid parsing problems.
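If the goal is simply to pick the right data-directory prefix per machine, any of these identifiers can drive that choice directly. Here's a minimal sketch, assuming a bash entry point; the UUIDs and paths are placeholders you would replace with your own:
# Pick the data-directory prefix by hardware UUID instead of hostname,
# so the choice no longer depends on the network connection.
HardwareUUID=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformUUID/{print $(NF-1)}')
case "$HardwareUUID" in
    "AAAAAAAA-1111-2222-3333-444444444444") DATA_DIR="$HOME/synced/data" ;;     # laptop (placeholder UUID/path)
    "BBBBBBBB-5555-6666-7777-888888888888") DATA_DIR="/Volumes/shared/data" ;;  # desktop (placeholder UUID/path)
    *) echo "Unknown machine: $HardwareUUID" >&2; exit 1 ;;
esac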
I am trying to make a program that automatically lists all of the connections to my computer from outside of the router. The end goal of this script is that I would like to be able to have a clean list of the external IP addresses of every server/website I am connecting to. I am also trying to use this as a way to learn more about how networks, websites, and servers work so I am sorry for any mistakes I make with terminology and general knowledge!
My tcpdump bash script:
while :
do
    # Get my IP and assign it to a variable
    myip="$(ifconfig wlp2s0 | grep -E -o -m 1 "inet................" | grep -E -o "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)")"
    # Capture one packet going to or from my IP address; the IP addresses in the packet are appended to IPaddress.txt
    sudo tcpdump -c 1 -nn host "$myip" | grep -E -o "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)" >> IPaddress.txt
done
I thought that tcpdump would be the tool for this, although I confess that I do not know how tcpdump works. This script is a bash file that I am running on Ubuntu. How would I use tcpdump to collect the IP address of every website that I am connecting to? I read the tcpdump documentation and believe it can help me achieve my goal, but if there are better tools out there I would love to hear about them! Currently, this code only displays internal IP addresses. ;(
I'd lean more towards using ss or netstat.
ss --all --ipv4
Would show all IPv4 connections.
The same works for IPv6 of course; and you could add one of many arguments to get more detailed information if you want, such as --processes, --extended, or --info.
There's also a few more arguments to control the output format, making it more suitable for parsing:
ss --all --ipv4 --processes --no-header --oneline
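If the end goal is still a clean list of the external addresses you're talking to, here's a rough sketch; it assumes the peer address sits in the 5th column of recent iproute2 output, so adjust the field number if your ss version differs:
# List the peer IPv4 addresses of current TCP connections:
# skip the header line, strip the port, and de-duplicate.
ss --tcp --numeric --ipv4 | awk 'NR > 1 {print $5}' | cut -d: -f1 | sort -u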
I suggest using the ss command.
You can learn more about it from its man page (man ss).
Currently I am working on a local machine that does not have the finger command installed, and we do not have permission to install it either. However, there is a remote server that has it installed and can be used instead. I am using the finger command to get the first and last names of users. Here is the code in bash:
#!/usr/bin/env bash
NAMES=("ssmith" "jnicol" "ahumph" "nkidma" "bbanne")
for name in "${NAMES[@]}"; do
    theName=$(ssh -qX 123.45.67.89 finger "$name" | awk 'NR==1{if($7!="???") print $7, $8}')
    arr+=("$theName") # Append the name returned by the command to the global array
done
The above code works, but it is super slow. Is there any simpler way to ssh over to the remote server, run the command, and get the first and last names of all users in a single attempt, and then append all of those to an array as shown above? There are hundreds of users in the system, and making a separate ssh connection for every single one of them is not going to be optimal.
Any help would be appreciated.
All, I found the answer to this. I could do the following on my local machine and still get a user's first and last name.
FULLNAME=$(getent passwd $USER | cut -d : -f 5)
Thanks all.
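Building on that, here's a sketch (using the same NAMES array as above) that looks up all the users in one getent call and collects the full names into an array, with no per-user ssh:
NAMES=("ssmith" "jnicol" "ahumph" "nkidma" "bbanne")
declare -a arr
# getent accepts several keys at once; the 5th (GECOS) field holds the full name,
# possibly followed by comma-separated office/phone info, which we strip off.
while IFS=: read -r _ _ _ _ gecos _; do
    arr+=("${gecos%%,*}")
done < <(getent passwd "${NAMES[@]}")
printf '%s\n' "${arr[@]}"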
I'm currently using the ad-hoc command ansible ubuntu -a "ls -l /var/run/reboot-required" to get a list of servers that require a reboot. However, the end result is a list of all servers, each followed by either info about the indicated file or an error saying that the file does not exist.
I'm familiar enough with playbooks to create one that actually does the reboot, but I don't want that. I just want a nice (and relatively neat) list of servers that still require a reboot.
A more generic solution of getting a list of servers that meet some criteria (e.g. have a variable set) would also be quite helpful.
It's not easy, because the proper way is to check for the file with the stat module, register the result in a variable, and build the list with when: var.stat.exists.
If you want to do it in one line and you don't mind using bash scripting, do:
ansible ubuntu -m stat -a "path=/var/run/reboot-required" -o | grep -v '{"exists": false}' | awk -F\| '{ print $1 }'
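A variation on the same idea, as a sketch: it assumes jq is installed and that -o prints lines shaped like host | SUCCESS => {json}, and it filters on the actual stat result instead of grepping for a literal string:
ansible ubuntu -m stat -a "path=/var/run/reboot-required" -o |
  while IFS= read -r line; do
      host=${line%% |*}      # everything before " | STATUS"
      json=${line#*=> }      # everything after "=> "
      if [ "$(printf '%s' "$json" | jq -r '.stat.exists')" = "true" ]; then
        printf '%s\n' "$host"
      fi
  done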
Hope it helps
I'm trying to turn the instructions on this page about connecting to a SoftEther VPN on OS X into a bash script, but I'm running into some issues.
When I run each of these commands individually at the command line, I'm able to initiate the connection to the VPN just fine and set up the routing appropriately, but when I put it into a script, it doesn't work.
Here is the script in question:
#!/bin/bash
GATEWAY=$(route -n get default | grep gateway | awk '{print $2}')
VPN_IP=130.158.6.123/32
VPN_GATEWAY=192.168.0.1
vpnclient start
vpncmd localhost /CLIENT /CMD AccountConnect HomeVPN
ipconfig set tap0 DHCP
ifconfig tap0 down; ifconfig tap0 up
echo "waiting for dhcp to get us an address..."
sleep 15
route delete default
route -n add $VPN_IP $GATEWAY
route add default $VPN_GATEWAY
Upon testing, I have confirmed that GATEWAY gets the correct value and all the other variables are set correctly. The script seems to do everything correctly up until the part where it starts changing the routes. At first I thought it was because the interface hadn't had enough time to get an IP address, so I put a pretty long wait time in to make sure it had an IP before it started trying to change routes.
Any thoughts as to why this doesn't work when put into script form?
Just a guess: sudo doesn't work well in shell scripts, as it's an interactive tool and needs to prompt for a password. You might consider removing the sudo commands and running the entire script using sudo.
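One common pattern for that (just a sketch, not specific to this VPN setup) is to have the script re-execute itself under sudo when it isn't already running as root, so none of the individual commands need their own sudo:
# Near the top of the script: if not running as root, restart the whole script via sudo.
if [ "$(id -u)" -ne 0 ]; then
    exec sudo "$0" "$@"
fi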
Is it possible for one to modify files on the host machine during the vagrant up process? For example, adding an entry to the host machine's /etc/hosts file to avoid having to do this manually?
The solution is to use vagrant-hostsupdater
vagrant plugin install vagrant-hostsupdater
This plugin adds an entry to your /etc/hosts file on the host system.
On up and reload commands, it tries to add the information if it's not already present in your hosts file. If it needs to be added, you will be asked for an administrator password, since it uses sudo to edit the file.
On halt, suspend and destroy, those entries will be removed again.
OK, so now the guy sitting next to you at the coffee shop can most likely ssh to port 2222 (EDIT: changed in newer versions of Vagrant, unless you explicitly enable external access) on your computer, log in as vagrant with the insecure key, and modify your Vagrantfile, since it's mounted read-write and owned by the vagrant user. They can then insert arbitrary Ruby code to run in the host environment, and now it looks like they've got root access on the host environment as well. Brilliant.
I hope people run firewalls on their development machines.
EDIT:
So after writing the above, I bugged the author of Vagrant, and the default has been changed so that port 2222 is not open on the external interface by default. Big improvement (though still something to be careful of, since external access is often opened up for various reasons).
So, having put in effort to get the situation fixed since making this comment, I'm now getting downvotes, apparently because the comment is out of date. Damn. It was correct when written.
EDIT:
In response to Steve Buzonas, the point is that if there's any likelihood of the virtual machine being compromised, then giving the vagrant up process elevated permissions represents a serious risk to the security of the host environment, and being able to modify the host's /etc/hosts file is dangerous even without general root access. As I've pointed out, vagrant's approach to keeping the VM secure is not particularly rigorous.
I don't want to depend on some plugin for Vagrant; this should be a standard feature in Vagrant! Until then, I use a shell script to propagate the new VMs in my cluster. The key lines are:
# Obtain the host key based on the IP address and add it to the known_hosts list
ssh-keyscan -t ecdsa "${START}.${OFFSET}" >> /home/vagrant/.ssh/known_hosts
# Obtain the hostname (you might not know it yet) via the IP address
EXTERNAL_HOSTNAME=$(ssh "${START}.${OFFSET}" 'hostname')
# Obtain the key of the other new VM based on its hostname and also add it to known_hosts
ssh-keyscan -t ecdsa "${EXTERNAL_HOSTNAME}" >> /home/vagrant/.ssh/known_hosts
# Now you have the IP address and the corresponding hostname:
# add them to /etc/hosts without being asked "yes/no"
echo "${START}.${OFFSET} ${EXTERNAL_HOSTNAME}" >> /etc/hosts
Here IPADDRESS is the IP address of the master VM in the cluster, and the slave-node VMs have succeeding IP addresses (IPADDRESS = IPADDRESS + 1 until no successful ping):
IPADDRESS=`ip addr show eth1 | grep 'inet ' | cut -d ' ' -f 6 | cut -d '/' -f1`
START=`echo ${IPADDRESS} | cut -d '.' -f1,2,3`
OFFSET=`echo ${IPADDRESS} | cut -d '.' -f4`
And then I loop through the next IP addresses until there are no more successful pings, as sketched below.
I do not want to hardcode anything (IP address or hostname); the script should figure it out by itself.
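The loop itself could look roughly like this (a sketch pieced together from the snippets above; the exact ping flags and the exit condition are assumptions):
# Walk the IP addresses after the master's, registering each reachable VM,
# until a ping goes unanswered.
NEXT=$((OFFSET + 1))
while ping -c 1 -W 1 "${START}.${NEXT}" > /dev/null 2>&1; do
    ssh-keyscan -t ecdsa "${START}.${NEXT}" >> /home/vagrant/.ssh/known_hosts
    EXTERNAL_HOSTNAME=$(ssh "${START}.${NEXT}" 'hostname')
    ssh-keyscan -t ecdsa "${EXTERNAL_HOSTNAME}" >> /home/vagrant/.ssh/known_hosts
    echo "${START}.${NEXT} ${EXTERNAL_HOSTNAME}" >> /etc/hosts
    NEXT=$((NEXT + 1))
done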
The resulting /etc/hosts file, after de-duplicating it with
sort /etc/hosts | uniq > /tmp/hosts.uniq && sudo sh -c 'mv /tmp/hosts.uniq /etc/hosts'
looks like this:
[vagrant@master ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 master.RHEL70.local master
192.168.1.50 master.RHEL70.local
192.168.1.51 node01.RHEL70.local
192.168.1.52 node02.RHEL70.local
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Previously, Vagrant edited my /etc/hosts file and I didn't know how. But when I reinstalled Windows and Vagrant, that behavior disappeared.