adding ephemeral mount partition - amazon-ec2

When I launch an m1.small instance I expect 160 GB of instance storage, but I do not see it once I log in.
[root@ip-10-98-182-214 ec2-user]# df -HP
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/xvde1  6.4G  2.9G  3.5G    45%  /
none        868M     0  868M     0%  /dev/shm
Last time I checked, I used to get a /mnt/ partition with plenty of storage. The ephemeral disk is good for testing purposes, and I do not want to attach an EBS volume.

Generally you will need to enable this at launch, via a block device mapping (sketched below). Some AMIs enable it by default, but not all of them do.
There is no way to add ephemeral (instance store) storage after the instance has launched.
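For reference, a minimal sketch of requesting the instance store volume at launch with the AWS CLI (the AMI ID is a placeholder, and inside the instance the device may appear as /dev/xvdb rather than /dev/sdb):

aws ec2 run-instances --image-id ami-12345678 --instance-type m1.small \
  --block-device-mappings '[{"DeviceName":"/dev/sdb","VirtualName":"ephemeral0"}]'

# then, on the instance, if your AMI doesn't mount it for you:
mkfs -t ext4 /dev/xvdb   # format the ephemeral volume (wipes it!)
mount /dev/xvdb /mnt     # the traditional /mnt location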

Related

How to reduce opentelemetry-collector container's memory usage

I have deployed opentelemetry-collector as a container by pulling the image from https://hub.docker.com/r/otel/opentelemetry-collector/tags. I checked the container's memory usage with the docker stats command and it reported MEM USAGE / LIMIT -> 15.3MiB / 7.667GiB
Is there any way to reduce the memory usage of this default image to below 10 MiB? That is my target for the opentelemetry-collector container.
You can add --memory=10m to your docker run command.
https://docs.docker.com/config/containers/resource_constraints/
Now - this isn't magic. If the process needs more than that to run, then it will just crash.
If that is the case, then you will need to look at changing the configuration of the service and/or possibly its source code.
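A quick sketch of what that looks like end to end (the image tag is just whatever you pulled):

docker run -d --memory=10m --name otelcol otel/opentelemetry-collector:latest
docker stats otelcol --no-stream                          # MEM LIMIT should now read 10MiB
docker inspect otelcol --format '{{.State.OOMKilled}}'    # true if the kernel killed it for exceeding the cap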

OpenZFS on Windows: less available space than capacity in single disk pool

Creating a new pool using the instructions from the readme, as follows:
zpool create -O casesensitivity=insensitive -O compression=lz4 -O atime=off -o ashift=12 tank PHYSICALDRIVE1
I get less available space showing up in File Explorer and zfs list than the disk capacity itself: 1.76 TiB vs 1.81 TiB.
zpool list and zfs list -r poolname show the difference:
zpool list
NAME   SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
tank   1,81T  360K   1,81T  -        -         0%    0%   1.00x  ONLINE  -
zfs list -r tank
NAME  USED  AVAIL  REFER  MOUNTPOINT
tank  300K  1,76T  96K    /tank
I'm not sure of the reason. Is there something that ZFS uses the space for?
Does it ever become available for use, or is it reserved, e.g. for root like on ext4?
Because it is copy on write, even deleting stuff requires using a tiny bit of extra storage in ZFS: until the old data has been marked as free (which requires writing newly-created metadata), you can’t start allocating the space it was using to store new data. A small amount of storage is reserved so that if you completely fill your pool it’s still possible to delete stuff to free up space. If it didn’t do this, you could get wedged, which wouldn’t be fixable unless you added more disks to your pool / made one of the disks larger if you are using virtualized storage.
There are also other small overheads (metadata storage, etc.) but I think most of the holdback you’re seeing is related to the above since it doesn’t look like you’ve written anything into the pool yet.
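If you want to put a number on that reservation: on OpenZFS the reserved "slop" space is pool size divided by 2^spa_slop_shift, and the default shift of 5 holds back 1/32 of the pool. The path below is where Linux exposes the tunable; I'd assume the Windows port has an equivalent, but check its docs:

cat /sys/module/zfs/parameters/spa_slop_shift   # prints 5 by default, i.e. 1/32 of the pool held back
# 1.81 TiB / 32 ≈ 0.057 TiB, which lines up with the 1,81T vs 1,76T gap above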

Need to check disk space on /dev/disk2

So, I have a Raspberry Pi 3 B. I wanted to know how much space is used, so I plugged the micro SD card into my Mac and opened the terminal.
When I ran the command:
df -h /dev/disk2
I got:
df: /dev/disk2: Raw devices not supported
What should I do now?
PS: I don't want to plug the RPi in.
You'd indeed need to mount the disk first, like Ferrybig said.
Note: It could be, since it's for the RPi, that your SD card is formatted with one of the ext filesystem variants. To access those under macOS, you'll need something like FUSE (free, but I have never used it) or Paragon's ExtFS (commercial, I've used it quite often). Some SD cards for the RPi are FAT32-formatted; those should work just fine under macOS.
The easiest way to mount a volume, if you don't want to mess too much with command-line parameters, is by opening Disk Utility. Find the disk/partition you'd like to mount, right-click it and select "Mount". This works for "known" filesystem types.
Now after the disk has been mounted, type "mount" in terminal to see where it's mounted. It will show several lines, one of them could be (as an example):
/dev/disk2s1 on /Volumes/Untitled (ntfs, local, nodev, nosuid, read-only, noowners)
(there may be more that start with "/dev/disk2sX")
df -h /Volumes/Untitled will now show the disk space info, for example:
Filesystem    Size  Used  Avail  Capacity  iused  ifree     %iused  Mounted on
/dev/disk2s1  15Gi  57Mi  15Gi   1%        19     15239289  0%      /Volumes/Untitled
If your disk2 has multiple partitions, then you'd need to repeat the steps for all disks that start with "/dev/disk2sX" (where X is a number).
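If you'd rather not open Disk Utility at all, diskutil does the same from the terminal (the identifiers below are examples; substitute whatever diskutil list reports for your card):

diskutil list            # lists disk2 and its partitions (disk2s1, disk2s2, ...)
diskutil mount disk2s1   # mounts one partition and prints where it landed
df -h /Volumes/Untitled  # then run df against that mount point as above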

ES / JVM Memory Locking in Unpriv. Linux Container (LXD/LXC)

I've seen a good bit about Docker setups and the like using unprivileged containers running ES. Basically, I want to set up a simple "prod cluster" with a total of two nodes: one physical (for data), and one for Ingest/Master (LXD container).
The issue that I've run into is using bootstrap.memory_lock: true as a config option to lock memory (avoid swapping) on my containerized master/ingest node.
[2018-02-07T23:28:51,623][WARN ][o.e.b.JNANatives ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2018-02-07T23:28:51,624][WARN ][o.e.b.JNANatives ] This can result in part of the JVM being swapped out.
[2018-02-07T23:28:51,625][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-02-07T23:28:51,625][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
...
[1]: memory locking requested for elasticsearch process but memory is not locked
Now, this makes sense given that the ES user can't adjust ulimits on the host. Given that I know enough about this to be dangerous: how do I ensure that my unprivileged container can lock the memory it needs, given that there is no ES user on the host?
I'll just call this resolved: I set swapoff on the parent host and left that setting at its default in the container. Not what I would call "the right way" as asked in my question, but good/close enough.
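Spelling that workaround out, plus the LXD-native route I'd try for the "right way" (the limits.kernel.memlock key is my assumption from the LXD docs, and the container name is a placeholder; verify against your LXD version):

# On the physical host: stop swapping entirely, so locking becomes unnecessary
sudo swapoff -a          # also comment out swap entries in /etc/fstab to persist

# In the container's elasticsearch.yml: leave memory locking at its default
# (i.e. remove bootstrap.memory_lock: true)

# Possible "right way": raise RLIMIT_MEMLOCK for the unprivileged container
lxc config set es-master limits.kernel.memlock unlimited
lxc restart es-master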

ext4 commit= mount option and dirty_writeback_centisecs

I'm trying to understand the way bytes travel from write() to the physical disk platter, to tune my picture server's performance.
What I don't understand is the difference between these two: the commit= mount option and dirty_writeback_centisecs. They look like they cover the same process of writing changes to the storage device, yet they are different.
It is not clear to me which one fires first on my bytes' way to the disk.
Yeah, I just ran into this investigating mount options for an SD card Ubuntu install on an ARM Chromebook. Here's what I can tell you...
Here's how to see the dirty and writeback amounts:
user@chrubuntu:~$ cat /proc/meminfo | grep "Dirty" -A1
Dirty:     14232 kB
Writeback:  4608 kB
(edit: These Dirty and Writeback figures are rather high; I had a compile running at the time.)
So data waiting to be written out is dirty. Dirty data can still be eliminated (if, say, a temporary file is created, used, and deleted before it goes to writeback, it'll never have to be written out at all). As dirty data is moved into writeback, the kernel tries to combine smaller dirty requests into single larger I/O requests; this is one reason why dirty_expire_centisecs is usually not set too low. Dirty data is usually put into writeback when a) enough data is cached to reach vm.dirty_background_ratio, or b) the data gets to be vm.dirty_expire_centisecs centiseconds old (the default of 3000 is 30 seconds). Per vm.dirty_writeback_centisecs, a writeback daemon runs by default every 500 centiseconds (5 seconds) to actually flush out anything in writeback.
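You can inspect (and temporarily override) all of these knobs with sysctl; the values shown are common defaults, but distros vary:

sysctl vm.dirty_background_ratio     # e.g. 10: % of RAM dirtied before background writeback starts
sysctl vm.dirty_expire_centisecs     # e.g. 3000: dirty data older than 30 s becomes eligible
sysctl vm.dirty_writeback_centisecs  # e.g. 500: the flusher daemon wakes every 5 s
sudo sysctl -w vm.dirty_writeback_centisecs=1500   # example override: wake the flusher every 15 s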
fsync will flush out an individual file (force it from dirty into writeback and wait until it's flushed out of writeback), and sync does that with everything. As far as I know, it does this ASAP, bypassing any attempt to balance disk reads and writes; it stalls the device doing 100% writes until the sync completes.
The commit=5 default ext4 mount option forces a journal commit (effectively a sync of that filesystem) every 5 seconds. This is intended to ensure that writes are not unduly delayed if there's heavy read activity (so you ideally lose at most 5 seconds of data if power is cut or whatever). What I found with an Ubuntu install on an SD card (in a Chromebook) is that this actually just leads to massive filesystem stalls every 5 seconds or so if you're writing much to the card. ChromeOS uses commit=600, and I applied that Ubuntu-side to good effect.
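If you want to try the same change, it's just a mount option; the device and mount point in the fstab line are placeholders for your own:

sudo mount -o remount,commit=600 /   # test it live on the root filesystem
# then persist it in /etc/fstab:
# /dev/mmcblk1p2  /  ext4  defaults,commit=600  0  1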
dirty_writeback_centisecs configures the Linux kernel daemons related to virtual memory (hence the vm. prefix), which are in charge of writing data back from RAM to all the storage devices. So if you configure dirty_writeback_centisecs and have 25 different storage devices mounted on your system, the same writeback interval applies to all 25 of them.
commit, on the other hand, is set per storage device (strictly, per filesystem) and is tied to the journal-commit/sync machinery rather than to the virtual memory daemons.
So you can see it as:
dirty_writeback_centisecs: the kernel writing from RAM to all filesystems
commit: each filesystem flushing its own data from RAM
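To see both ends of this on a live system: the vm knob is one global value, while commit shows up in each filesystem's mount options (the default commit=5 may not be listed explicitly):

cat /proc/sys/vm/dirty_writeback_centisecs   # single global flusher interval
grep ext4 /proc/mounts                       # per-filesystem options; commit=N appears here when set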
