How to use PipeViewer (pv) on Mac OS with dd

I'm trying to copy a .img of Ubuntu 14.04.1 to my bootable USB drive using the command sudo dd if=~/Documents/targetUbuntu.img of=/dev/rdisk1 bs=1m, but it's taking a long time and I can't see the progress. So I'm trying to use pv with this command: sudo dd if=~/Documents/targetUbuntu.img | pv | dd of=/dev/rdisk1 bs=1m, but I'm getting this error: dd: /dev/rdisk1: Permission denied. If I press Ctrl-C in the first scenario, where it's taking too long, it tells me it copied over X amount of bytes in X secs and that's it. When I try to boot from the USB drive, it says "isolinux.iso is missing or corrupt". So I want to make sure that file is copying over properly, and I want to do that by using pv to check the progress, but I keep getting that error. Any solutions?

Add sudo to the second "dd" command instead.
dd if=~/Documents/targetUbuntu.img | pv | sudo dd of=/dev/rdisk1 bs=1m
You'll also want to give pv the size of the image (replace 123456 with the file size in bytes):
dd if=~/Documents/targetUbuntu.img | pv -s 123456 | sudo dd of=/dev/rdisk1 bs=1m
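If you don't want to look up the size by hand, you can let the shell fill it in; a minimal sketch using the BSD stat that ships with macOS (same image path as in the question):
IMG=~/Documents/targetUbuntu.img
dd if="$IMG" | pv -s "$(stat -f%z "$IMG")" | sudo dd of=/dev/rdisk1 bs=1m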

I suspect it's because sudo only applies to the first command in your pipe, where you are using dd to read the image file.
See this answer over on the Unix SE.
Or you can instead try something like:
sudo sh -c "pv -tpreb myubuntu.img | dd of=/dev/sdc"
(wrapping the pipeline in sh -c so the whole thing runs as root), per a post by Greg Kroah-Hartman on Google+.

Related

Adjust ulimit [open file handle limit] on Mac OS X Sierra to run GATK tools

I'm trying to run the VariantsToBinaryPed tool from GATK3, but it seems that my system's 'open file handle limit' is too small for it to successfully run.
I've tried increasing the limit using ulimit, as shown below, but the command still fails.
The GATK command:
> java -jar GenomeAnalysisTK.jar \
-T VariantsToBinaryPed \
-R Homo_sapiens_assembly38.fasta \
-V ~/vcf/snp.indel.recal.splitMA_norm.vcf.bgz \
-m ~/03_IdentityCheck/KING/targeted_seq_ped_clean.fam \
-bed output.bed \
-bim output.bim \
-fam output.fam \
--minGenotypeQuality 0
Returns this error:
ERROR MESSAGE: An error occurred because there were too many files
open concurrently; your system's open file handle limit is probably too small.
See the unix ulimit command to adjust this limit or
ask your system administrator for help.
Following the advice given here, I ran:
echo kern.maxfiles=65536 | sudo tee -a /etc/sysctl.conf
echo kern.maxfilesperproc=65536 | sudo tee -a /etc/sysctl.conf
sudo sysctl -w kern.maxfiles=65536
sudo sysctl -w kern.maxfilesperproc=65536
sudo ulimit -n 65536 65536
and added this line to my .bash_profile and sourced it:
ulimit -n 65536 65536
So that now, when I run ulimit -n, I get:
65536
However, I still get the same error from GATK:
ERROR MESSAGE: An error occurred because there were too many files
open concurrently; your system's open file handle limit is probably too small.
See the unix ulimit command to adjust this limit or
ask your system administrator for help.
Is there anything else I can do to avoid this error?
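A quick way to sanity-check which limits are actually in effect for the shell that launches java (ulimit settings are per-shell, and on macOS launchd imposes its own ceiling):
ulimit -n                 # soft open-file limit for this shell
ulimit -Hn                # hard open-file limit for this shell
launchctl limit maxfiles  # open-file limits enforced by launchd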

Convert Amazon EC2 AMI to VirtualBox or Vagrant box

I'd like to copy the disk image of a running EC2 instance (grab the AMI) and import it into VirtualBox, or eventually have it run using Vagrant. I saw that Packer (http://www.packer.io/) allows you to create AMIs and corresponding Vagrant boxes that work together; however, the instance I currently have has been running for over two years and would be difficult to replicate.
I imagine this issue is common in the devops community, but I have not found a solution in my research online. Are there any tools out there that let you accomplish this task?
I just wanted to note that @Drewness answered this question in the first comment on the original question. I'm just adding this answer to make it clearer, since the answer is only linked to there. The link points to the following page: How to convert EC2 AMI to VMDK for Vagrant.
So basically you need to enable root SSH access, e.g.
$ sudo perl -i -pe 's/#PermitRootLogin .*/PermitRootLogin without-password/' /etc/ssh/sshd_config
$ sudo perl -i -pe 's/.*(ssh-rsa .*)/\1/' /root/.ssh/authorized_keys
$ sudo /etc/init.d/sshd reload # optional command
Then copy the running system to a local disk image:
$ ssh -i ~/.ec2/your_key root@ec2-XX-XX-XX-X.compute-1.amazonaws.com 'dd if=/dev/xvda1 bs=1M | gzip' | gunzip | dd of=./ec2-image.raw
After that prepare a filesystem on a new image file:
$ dd if=/dev/zero of=vmdk-image.raw bs=1M count=10240 # create a 10gb image file
$ losetup -fv vmdk-image.raw # mount as loopback device
$ cfdisk /dev/loop0 # create a bootable partition, write, and quit
$ losetup -fv -o 32256 vmdk-image.raw # mount the partition with an offset
$ fdisk -l -u /dev/loop0 # get the size of the partition
$ mkfs.ext4 -b 4096 /dev/loop1 $(((20971519 - 63)*512/4096)) # format using the END number
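The block count passed to mkfs.ext4 is just the partition's sector span converted to 4 KiB blocks; with the example END sector above it works out to:
$ echo $(((20971519 - 63) * 512 / 4096))   # (END - start) sectors * 512 B / 4096 B = 2621432 blocks (~10 GiB)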
Now you need to copy everything from the EC2 image to the empty image:
$ losetup -fv ec2-image.raw
$ mkdir -p /mnt/loop/1 /mnt/loop/2 # create mount points
$ mount -t ext4 /dev/loop1 /mnt/loop/1 # mount vmdk-image
$ mount -t ext4 /dev/loop2 /mnt/loop/2 # mount ami-image
$ cp -a /mnt/loop/2/* /mnt/loop/1/
and install Grub:
$ cp /usr/lib/grub/x86_64-pc/stage* /mnt/loop/1/boot/grub/
and unmount the device (umount /dev/loop1), then convert the raw disk image to a VMDK image:
$ qemu-img convert -f raw -O vmdk vmdk-image.raw final.vmdk
Now just create a VirtualBox VM with the vmdk image mounted as the primary boot device.
Unfortunately at this point I could not get the Amazon Linux kernel to boot inside VirtualBox.
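For the VM-creation step just above, a rough sketch of the VBoxManage commands involved (the VM name, OS type, and memory size here are made up for illustration):
$ VBoxManage createvm --name ec2-clone --ostype Linux_64 --register
$ VBoxManage modifyvm ec2-clone --memory 1024
$ VBoxManage storagectl ec2-clone --name SATA --add sata --controller IntelAhci
$ VBoxManage storageattach ec2-clone --storagectl SATA --port 0 --device 0 --type hdd --medium final.vmdk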
You should export the instance.
For more details, check: How to export a VM from Amazon EC2 to VMware On-Premise.
Personally, I've done this on a Windows box by installing VMware Converter on the instance and converting the local system to a VMDK. Then I uploaded the VMDK to S3.

nohup'ing a sudo command - doesn't seem to work

I've read this and several other articles but I can't figure this out. This is what I've put together:
1. Run sudo nohup or nohup sudo (I've tried both) with the command, without the trailing & (so you can enter your password before detaching it)
2. Enter the password
3. Press ^Z, then bg
4. disown the PID (doesn't work for some reason)
5. Log out
Here's the output:
[user@localhost ~]$ nohup sudo dd if=/dev/zero of=/dev/sda bs=1M
nohup: ignoring input and appending output to ‘nohup.out’
[sudo] password for user:
^Z
[1]+ Stopped nohup sudo dd if=/dev/zero of=/dev/sda bs=1M
[user@localhost ~]$ bg
[1]+ nohup sudo dd if=/dev/zero of=/dev/sda bs=1M &
[user@localhost ~]$ sudo ps | grep dd
2458 pts/32 00:00:00 dd
[user@localhost ~]$ disown 2458
-bash: disown: 2458: no such job
[user@localhost ~]$ logout
Connection to [server] closed.
$ remote
Last login: Mon Feb 3 11:32:59 2014 from [...]
[user@localhost ~]$ sudo ps | grep dd
[sudo] password for user:
[user@localhost ~]$
So the process is gone. I've tried a few other combinations with no success. Any ideas to make this work?
Use the -b option to sudo to instruct it to run the given command in the background. Job control doesn't work because the nohup process is not a child of the current shell, but of the sudo process.
sudo -b nohup dd if=/dev/zero of=/dev/sda bs=1M
You are not using disown correctly; the argument should be the jobspec ('%' followed by the job number).
In your example you should have used disown %1 instead of disown 2458.
To list your current shell's jobs, you can use the bash builtin jobs.
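Put together, the cleanup looks something like this (job number 1 assumed, as in the transcript above):
jobs         # list this shell's jobs and note the job number, e.g. [1]
disown %1    # detach job 1 from the shell so it survives logout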

Use input from a file for a command

I want to use the MAC addresses stored in a file with the aireplay-ng command. I want the command to be executed once for each MAC address in the file. Can you please tell me how to do it?
sudo aireplay-ng -1 0 -e VMC_AP -a D4:4C:24:2B:EE:80 -h CC:AF:78:B3:E5:0F mon0 --ignore-negative-one
I want -h CC:AF:78:B3:E5:0F to be replaced by each of the MAC addresses stored in a file.
Thank you!
Assuming mac.txt is the file containing the MAC addresses (one per line), you can use the following:
while read -r mac
do
    sudo aireplay-ng -1 0 -e VMC_AP -a D4:4C:24:2B:EE:80 -h "$mac" mon0 --ignore-negative-one &
done < mac.txt
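A possible alternative under the same assumption (mac.txt holds one address per line), using xargs so the command runs once per address, one at a time:
xargs -I{} sudo aireplay-ng -1 0 -e VMC_AP -a D4:4C:24:2B:EE:80 -h {} mon0 --ignore-negative-one < mac.txt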

How to get dd to print transfer stats in MacOS?

On macOS (Mavericks), I am writing a shell script to gather transfer stats over time for the dd command.
The manual page says:
If dd receives a SIGINFO (see the status argument for stty(1)) signal,
the current input and output block counts will be written to the
standard error output in the same format as the standard completion
message.
Therefore, just like in Linux, I tried:
kill -INFO <pid_of_dd>
The kill command completes successfully with status 0; however, in the terminal that the dd process is connected to, no stats information appears on standard output or standard error.
So what is the correct way to get dd to print stats in its output?
You can also press Ctrl+T in the Terminal tab to get the same behavior:
MacBook-Pro:~ $ dd if=~/source_image.dmg of=/dev/disk1
load: 0.87 cmd: dd 7229 uninterruptible 0.21u 3.91s
265809+0 records in
265808+0 records out
136093696 bytes transferred in 131.170628 secs (1037532 bytes/sec)
load: 0.99 cmd: dd 7229 uninterruptible 0.32u 5.89s
415769+0 records in
415768+0 records out
212873216 bytes transferred in 203.357068 secs (1046795 bytes/sec)
It seems to work for me:
$ dd if=/dev/zero of=/dev/null bs=1k &
[1] 33990
$ kill -INFO 33990
4787784+0 records in
4787784+0 records out
4902690816 bytes transferred in 4.260769 secs (1150658706 bytes/sec)
$ kill -INFO 33990
8357846+0 records in
8357846+0 records out
8558434304 bytes transferred in 7.428820 secs (1152058392 bytes/sec)
$ kill 33990
$ ps
PID TTY TIME CMD
1342 ttys000 0:00.02 -bash
2290 ttys001 0:00.17 -bash
[1]+ Terminated: 15 dd if=/dev/zero of=/dev/null bs=1k
$
I also found via commandlinefu that you can do:
killall -INFO dd
If you had to run sudo dd to start dd, you might try:
sudo killall -INFO dd
Also, I had started dd in the background and with nohup, so when I ran sudo killall -INFO dd and got nothing back for output, I had to remember to look in the nohup.out file, because that is where the response was logged.
Worked great on OS X Mavericks.
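If you want a running progress log rather than a one-off snapshot, a small loop along these lines (a sketch; the 10-second interval is arbitrary) keeps sending SIGINFO for as long as a dd process exists:
while pgrep -x dd > /dev/null; do
    sudo killall -INFO dd
    sleep 10
done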
You can press Ctrl+T while the dd command is running or, to get a nice progress bar, you can install pv (pipe viewer) with Homebrew:
brew install pv
and then place pv in between the two dd commands:
dd if=diskimage.img | pv | dd of=/dev/disk2
Example output 1:
18MB 0:00:11 [1.70MiB/s] [ <=> ]
(with the amount of data transferred, elapsed time, and speed)
Progress bar and ETA
You can also pass in the size of the image (16 GB in this example) to get a progress bar and estimated time:
dd if=diskimage.img | pv -s 16G | dd of=/dev/disk2
Example output 2:
1.61GiB 0:12:19 [2.82MiB/s] [===> ] 10% ETA 1:50:25
