Creating multiple mount points using Ansible

I am trying to mount disks based on the raw device size.
For example, I have 12 devices such as /dev/sdb, /dev/sdc, and so on. Some of these slots have different sizes on different servers, and the disks need to be mounted based on the size of the disk. Any help is really appreciated.
Right now I am mapping the device names and mount points as vars and using them in the playbook. But since disk sizes vary on some servers, mount points end up mounted on the wrong devices.
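For reference, here is a minimal sketch of the static vars approach described above; the device names, mount points, and xfs filesystem type are hypothetical, and the module names assume the usual filesystem and mount modules are available (community.general / ansible.posix on newer Ansible). Ansible does expose per-device sizes under the ansible_devices facts, which is what a size-aware mapping would have to consult instead of a fixed list like this.

- hosts: storage
  vars:
    # Hypothetical device-to-mount-point map; this is the static mapping
    # that breaks when device sizes differ between servers.
    disk_mounts:
      - { device: /dev/sdb, mountpoint: /data01 }
      - { device: /dev/sdc, mountpoint: /data02 }
  tasks:
    - name: Create a filesystem on each device
      filesystem:
        fstype: xfs
        dev: "{{ item.device }}"
      loop: "{{ disk_mounts }}"

    - name: Mount each device on its mapped path
      mount:
        path: "{{ item.mountpoint }}"
        src: "{{ item.device }}"
        fstype: xfs
        state: mounted
      loop: "{{ disk_mounts }}"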

Related

Datadog monitoring Disk usage

I want to use Datadog to monitor my EC2 instance's disk utilization and create alerts for it. I am using the system.disk.in_use metric, but I am not getting my root mount point in the "from" section of the query avg:system.disk.in_use{device:/dev/loop0} by {host}, and my root mount point is /dev/root. I can see every loop mount point in the list but can't see the root. Because of this, the data I am getting in the monitor differs from the actual server; for example, df -hT shows root at 99% on the server, but Datadog monitoring shows 60%.
I am not too familiar with how to use Datadog; can someone please help?
I tried to research it but was not able to resolve the issue.
You can also try using the device label to read only the root volume, such as:
avg:system.disk.in_use{device_label:/} by {host}
I personally found the metric system.disk.in_use to equal the total, so instead I added a formula that calculates utilization from system.disk.total and system.disk.free, which was more accurate.
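One hedged way to express that formula, reusing the device_label filter from the query above (in a Datadog dashboard or monitor this would be two named queries combined with a formula):

a: avg:system.disk.total{device_label:/} by {host}
b: avg:system.disk.free{device_label:/} by {host}
formula: (a - b) / a * 100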

Expanding root partition on AWS EC2

I created a public VPC and then added a bunch of nodes to it so that I can use it for a Spark cluster. Unfortunately, all of them have a partition setup that looks like the following:
ec2-user@sparkslave1: lsblk
/dev/xvda 100G
/dev/xvda1 5.7G /
I set up a cloud manager on top of these machines, and all of the nodes have only 1G left for HDFS. How do I extend the partition so that it takes up all of the 100G?
I tried creating /dev/xvda2, then created a volume group and added all of /dev/xvda* to it, but /dev/xvda1 doesn't get added since it's mounted. I cannot boot from a live CD in this case because it's on AWS. I also tried resize2fs, but it says that the root partition already takes up all of the available blocks, so it cannot be resized. How do I solve this problem, and how do I avoid it in the future?
Thanks!
I don't think you can just resize the running root volume. This is how you'd go about increasing the root size (a CLI sketch of these steps follows the list):
create a snapshot of your current root volume
create a new volume from that snapshot with the size you want (100G?)
stop the instance
detach the old, small volume
attach the new, bigger volume
start the instance
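The same steps expressed as a rough AWS CLI sketch; every resource ID, the availability zone, and the device name are placeholders to adapt to your account:

aws ec2 create-snapshot --volume-id vol-OLDROOT        # snapshot the current root volume
aws ec2 create-volume --snapshot-id snap-FROMABOVE --size 100 --availability-zone us-east-1a
aws ec2 stop-instances --instance-ids i-SPARKSLAVE1
aws ec2 detach-volume --volume-id vol-OLDROOT
aws ec2 attach-volume --volume-id vol-NEWROOT --instance-id i-SPARKSLAVE1 --device /dev/xvda
aws ec2 start-instances --instance-ids i-SPARKSLAVE1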
I had the same problem before but can't remember the solution. Did you try running
e2resize /dev/xvda1
This applies when you're using ext3, which is usually the default. The e2resize command will "grow" the ext3 filesystem to use the remaining free space.
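On more recent distributions the same grow step is done with resize2fs (the tool already mentioned in the question); note that either command can only grow the filesystem into space the underlying partition or volume already provides, so the volume has to be enlarged first:

sudo resize2fs /dev/xvda1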

How to copy files and directories to EBS from a S3 bucket mount with s3fs

I have an S3 bucket mounted as a volume on one EC2 instance with S3FS. I've created a directory structure there with PHP that is now more than 20GB.
At the moment, S3FS is exhausting the instance's memory and uploading is really slow, so I want to move all the files to an EBS volume attached to the same instance.
I've tried S3CMD, but there are some incompatibilities, since S3FS creates zero-sized objects in the bucket with the same names as directories.
I also tried writing a script to recursively copy the structure, skipping those zero-sized objects.
Neither worked.
Has anyone tried to do this? Thanks in advance for your help.
@hernangarcia Don't make things complicated: use recursive wget, that is, wget -r followed by the URL of the bucket endpoint. You can download all the contents to an EBS volume. My suggestion is also not to store all those files (around 20 GB) on the root volume of the instance; instead, attach another volume and store the files there, ideally a high-IOPS volume so that operations will be faster.
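A rough sketch of that suggestion; the bucket endpoint and the EBS mount point are placeholders, and recursive wget only works if the bucket contents are publicly reachable over HTTP:

cd /mnt/ebs                                   # hypothetical mount point of the attached EBS volume
wget -r http://my-bucket.s3.amazonaws.com/    # bucket endpoint URL is a placeholder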

Mac: How to get a BSD block device name for a mount path

I have a mount point path like "/Volumes/Something" which I already know is the root directory of a mounted local volume. I need to figure out the BSD block device node name for the volume mounted at that directory, for example "disk1s1". Any advice on how I can dig this up? I also wouldn't mind some additional information like the device's total size, but I already have a way to get that if I know the block device name.
Thank you.
Use the statfs syscall. See http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man2/statfs.2.html
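A minimal C sketch of that approach, using the mount path from the question; f_mntfromname holds the BSD device node and f_blocks * f_bsize gives the total size:

#include <sys/param.h>
#include <sys/mount.h>
#include <stdio.h>

int main(void)
{
    struct statfs fs;
    /* "/Volumes/Something" is the mount point from the question. */
    if (statfs("/Volumes/Something", &fs) == 0) {
        /* BSD device node, e.g. "/dev/disk1s1"; strip "/dev/" if you only want "disk1s1". */
        printf("device: %s\n", fs.f_mntfromname);
        /* Total size of the volume in bytes. */
        printf("size:   %llu bytes\n", (unsigned long long)fs.f_blocks * fs.f_bsize);
    }
    return 0;
}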

Obtaining information about the physical device from a given file path

Suppose you have a full path to an accessible file or folder on the system. How can I get some kind of unique identifier for the physical device that the file (or folder) actually resides on?
My first attempt was to use System.IO.DriveInfo which depends on having a drive letter. But UNC paths and multiple network drives mapped to the same physical device on a server add some complications. For example these 3 paths all point to the same folder on the same device.
\\myserver\users\brian\public\music\
s:\users\brian\public\music\ (here s:\ is mapped to \\myserver\)
u:\public\users\music\ (here u:\ is mapped to \\myserver\users\brian\)
Ultimately my goal is to take these multiple paths and report the amount of used and free disk space on each device. I want to combine these 3 paths into a single item in the report and not 3 separate items.
Is there any Windows API that can help find this information given any arbitrary full path?
This Win32 API call should get you what you need regarding disk space:
GetDiskFreeSpaceEx
http://msdn.microsoft.com/en-us/library/aa364937(VS.85).aspx
Also, to determine whether the three mappings are all from the same physical disk, call
GetVolumeInformation
and compare the returned volume serial numbers:
http://msdn.microsoft.com/en-us/library/aa364993(VS.85).aspx
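A minimal C sketch combining the two calls; the UNC path and the S: drive letter are taken from the question's examples, and error handling is kept to a bare minimum:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Any directory on the volume works for the free-space query. */
    ULARGE_INTEGER freeToCaller, total, totalFree;
    if (GetDiskFreeSpaceExA("\\\\myserver\\users\\brian\\public\\music\\",
                            &freeToCaller, &total, &totalFree)) {
        printf("total: %llu bytes, free: %llu bytes\n",
               (unsigned long long)total.QuadPart,
               (unsigned long long)totalFree.QuadPart);
    }

    /* GetVolumeInformation wants the volume root, e.g. "S:\" for the mapped drive. */
    char volName[MAX_PATH + 1], fsName[MAX_PATH + 1];
    DWORD serial = 0, maxCompLen = 0, fsFlags = 0;
    if (GetVolumeInformationA("S:\\", volName, sizeof volName,
                              &serial, &maxCompLen, &fsFlags,
                              fsName, sizeof fsName)) {
        /* Paths whose roots return the same serial number are on the same volume. */
        printf("volume serial: %08lX\n", (unsigned long)serial);
    }
    return 0;
}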
