Adding disks to Minio object server

I have used the minio service binary (https://dl.minio.io/server/minio/release/linux-amd64/minio) and my /etc/default/minio options are as follows:
MINIO_VOLUMES="/sdc1/minio/"
MINIO_OPTS="-C /etc/minio --address localhost:9000"
Could someone tell me how I can modify the above options to add /sdb1/minio as an additional volume?
I tried appending the second volume to the first with a semicolon and with spaces, but neither worked: the semicolon was ignored, while spaces caused a startup failure for the service.

Apparently this is by design: Minio does not support dynamically expanding an existing deployment.
https://github.com/minio/minio/issues/4364
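For completeness: the space-separated form is the right syntax, but it only works for a fresh deployment, where Minio formats the drives as a new erasure-coded set; it cannot reinterpret an existing single-disk layout, which is why the service failed to start. A hypothetical fresh setup on empty drives (the paths are placeholders; older releases required at least four drives for erasure coding) would look like:
MINIO_VOLUMES="/sdb1/minio/ /sdc1/minio/ /sdd1/minio/ /sde1/minio/"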

Metricbeat not showing volume mount under /dev filesystem

Let me explain the problem and its context. This is a server for a database solution. The database was created with Docker, and a volume was added to the server. The whole Docker installation path was then moved to that volume (for security and to keep backups manageable). For monitoring, I added a Metricbeat agent to capture data such as disk usage, and that is where the problem occurs.
I am looking for a specific mount (it is a volume mount). When I type df -aTh | grep "/dev" in a terminal to list the filesystems, the device of interest, /dev/sda, shows up in the output (screenshot omitted).
Then in metricbeat.yaml I have this configuration for the system module:
- module: system
  period: 30s
  metricsets: ["filesystem"]
  processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|etc|host|hostfs)($|/)'
Notice that in the last line I omitted "dev", because I want to capture the volume mount "/dev/sda" highlighted in the screenshot. But when I look in Discover in Kibana, that device is not shown, and I don't know why; it should be there.
Thanks for reading and for any help :). All of this is for monitoring and displaying the data in Grafana, but I can't find the filesystem "/dev/sda" for the disk dashboard...
From the documentation about the setting filesystem.ignore_types:
A list of filesystem types to ignore. Metrics will not be collected from filesystems matching these types. This setting also affects the fsstats metricset. If this option is not set, metricbeat ignores all types for virtual devices in systems where this information is available (e.g. all types marked as nodev in /proc/filesystems in Linux systems). This can be set to an empty list ([]) to make filebeat report all filesystems, regardless of type.
If you check the file /proc/filesystems you can see which filesystem types are marked as "nodev". Is it possible that ext4 is marked as nodev there?
Can you try setting filesystem.ignore_types: [] to see if the filesystem is now picked up?
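For reference, this is the question's module block with the suggested override added (a sketch; the processors section stays as it was):
- module: system
  period: 30s
  metricsets: ["filesystem"]
  # Empty list: report all filesystem types, including those marked nodev.
  filesystem.ignore_types: []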

Get DNS info for local machine interfaces

I need the DNS suffix of all my local interfaces on my PC.
Is there a way to achieve this with Go?
Ideally it would work on any OS; working on Windows is a must.
I have tried net.Interfaces() and the rest of the net package, but I haven't found anything regarding the DNS configuration.
EDIT
I have found the solution for the Windows-specific version but it would be interesting if there is anything that works for Linux and macOS too.
I don't think there is a solution that works for any OS. On Linux the DNS suffix is not interface-specific but system-wide; it is configured in /etc/resolv.conf. Here is an excerpt from the man page:
search Search list for host-name lookup.
By default, the search list contains one entry, the local domain name. It is determined from the local hostname returned by gethostname(2); the local domain name is taken to be everything after the first '.'. Finally, if the hostname does not contain a '.', the root domain is assumed as the local domain name.
This may be changed by listing the desired domain search path following the search keyword with spaces or tabs separating the names. Resolver queries having fewer than ndots dots (default is 1) in them will be attempted using each component of the search path in turn until a match is found. For environments with multiple subdomains please read options ndots:n below to avoid man-in-the-middle attacks and unnecessary traffic for the root-dns-servers. Note that this process may be slow and will generate a lot of network traffic if the servers for the listed domains are not local, and that queries will time out if no server is available for one of the domains.
If there are multiple search directives, only the search list from the last instance is used.
The standard library's net package parses this file to get the DNS config, so the DNS resolver behaves as expected; however, the parsing functionality is not exposed.
The libnetwork.GetSearchDomains func in the libnetwork library should be able to help you out. If there are no search entries in /etc/resolv.conf, you should fall back to the hostname, which can be obtained with the os.Hostname func.
I believe this also works for FreeBSD and macOS, since they are both UNIX-like, but I am not 100% sure.
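A minimal, dependency-free sketch of that approach (Linux-only; the last-instance-wins rule and the hostname fallback come from the man page and answer above, everything else is an assumption):
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// searchDomains returns the search domains from a resolv.conf-style file.
// Later "search" directives override earlier ones, per resolv.conf(5).
func searchDomains(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var domains []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 1 && fields[0] == "search" {
			domains = fields[1:] // last instance wins
		}
	}
	return domains, sc.Err()
}

func main() {
	domains, err := searchDomains("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(domains) == 0 {
		// Fallback: derive the local domain from the hostname, i.e.
		// everything after the first '.'.
		if host, err := os.Hostname(); err == nil {
			if i := strings.IndexByte(host, '.'); i >= 0 {
				domains = []string{host[i+1:]}
			}
		}
	}
	fmt.Println(domains)
}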

AppArmor: How to block pid=host container with CAP_SYS_ADMIN/CAP_SYS_CHROOT from reading (some) host files?

Given is a container that has pid=host (so it is in the initial PID namespace and has a full view on all processes). This container (rather, its process) additionally has the capabilities CAP_SYS_ADMIN and CAP_SYS_CHROOT, so it can change mount namespaces using setns(2).
Is it possible using AppArmor to block this container from accessing arbitrary files in the host (the initial mount namespace), except for some files, such as /var/run/foo?
How does AppArmor evaluate filesystem path names with respect to mount namespaces? Does it "ignore" mount namespaces and just take the specified path, or does it translate a path, for instance when dealing with bind-mounted subtrees, etc?
An ingrained restriction of AppArmor's architecture is that it mediates access to filesystem resources (files, directories) by access path. While AppArmor uses labeling, as SELinux does, it derives only implicit filesystem labels from the access path. SELinux, in contrast, uses explicit labels stored in the extended attributes of files on filesystems that support POSIX extended attributes.
Now, the access path is always the path as seen in the caller's current mount namespace. Optionally, AppArmor can take chroot into account. So the answer to the second question is: AppArmor "ignores" mount namespaces and just takes the (access) path. As far as I understand, it does not translate bind-mounted subtrees (there is no indication anywhere that it would).
As for the first question: in general "no", because AppArmor mediates access paths, not file resource labels. A limited restriction is possible if you accept that there will be no access-path differentiation between what is inside a container and what is on the host outside it (and the same goes for other containers). This is basically what Docker's default container AppArmor profile does: denying all access to a few highly sensitive /proc/ entries and restricting many other /proc/ entries to read-only access.
But blocking access to certain host file paths always carries the danger of blocking the same path for a perfectly valid use inside a container (a different mount namespace), so this requires great care and a lot of research and testing, and brings the constant risk of breaking with the next update of a container. AppArmor does not seem to be designed for such use cases.
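To illustrate the path-based, default-deny model, here is a minimal sketch of a profile (the profile name, flags, and paths are placeholders, and a real container profile needs many more rules to keep a workload functional):
profile restricted-container flags=(attach_disconnected,mediate_deleted) {
  # Inside a profile, anything not explicitly allowed is denied, so the
  # "everything except /var/run/foo" policy is expressed by allowing only
  # that path. An explicit deny rule would override any allow rule, so
  # deny-plus-allow cannot express an exception.
  /var/run/foo r,

  # Hypothetical minimal execution right for the container's entrypoint.
  /usr/bin/myapp ix,
}
Remember that the same rules apply in every mount namespace the process can reach, since paths are matched as the caller sees them.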

How do I prevent access to a mounted secret file?

I have a Spring Boot app which loads a YAML file at startup containing an encryption key that it needs to decrypt properties it receives from Spring config.
Said YAML file is mounted as a k8s secret file at /etc/config/springconfig.yaml.
While my Spring Boot app is running, I can still get a shell with "docker exec -it 123456 sh" and view the YAML file. How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete the file once your process has fully started, provided your app does not need to read it again.
OR,
You can set those properties via --env-file, and your app should then read them from the environment. But if someone can still log in to the container, they can read the environment variables too.
OR,
Set those properties as JVM system properties using -D rather than in the system environment; Spring can read JVM system properties too.
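For instance, a hypothetical launch (the property name is a placeholder; Spring picks it up like any other property source):
java -Dmyapp.encryption-key=s3cr3t -jar myapp.jar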
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to the worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which includes the ability to create Deployments, StatefulSets, DaemonSets, Jobs, CronJobs and so on), they can easily create a pod that mounts the secret and simply read it. Even someone with only the ability to patch pods/deployments can potentially read all secrets in the namespace. There is no way to escape that.
For me that's the biggest security flaw in Kubernetes, and it is why you must be very careful about granting the ability to create and patch pods/deployments and so on. Always limit access per namespace, always exclude secrets from RBAC rules, and always try to avoid giving out pod-creation capability; a role along those lines is sketched below.
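As an illustration of that advice, a namespaced Role that deliberately leaves out secrets and pod creation might look like this (a sketch; the role name, namespace, and resource list are assumptions):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-operator
  namespace: myapp
rules:
# Read-only on workloads; no "secrets" resource and no create/patch verbs
# on pods, so a subject bound to this role cannot mount a secret into a
# fresh pod or read one through the API.
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch"]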
A possibility is to use sysdig falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to kill the container immediately, so the secret cannot be read; Kubernetes will then restart the container to avoid service interruption.
Note that you must still forbid access to the node itself to prevent direct Docker daemon access.
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can unset that variable, rendering the secret inaccessible from then on.
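A sketch of the pod spec side of that approach (the secret and key names are placeholders):
env:
- name: ENCRYPTION_KEY
  valueFrom:
    secretKeyRef:
      name: springconfig
      key: encryption-key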

spring-data-mongodb/k8s "Database name must not contain slashes, dots, spaces, quotes, or dollar signs"

I'm at a real loss on this one. I've been attempting to get my application running with a replica set in Kubernetes for a while. In application.properties I'm setting:
spring.data.mongodb.uri=${MYAPP_MONGODB}:mongodb://localhost:27017/myapp
and using Spring Data to access my objects.
Locally, using a local MongoDB container, it works fine; even if I point the env var at my remote databases, I can connect to them and work just fine. But when I put the value of MYAPP_MONGODB into a k8s secret, I get the quoted error from the title when the container boots. The value is like this:
mongodb://myuser:mypasswd@1.1.1.1:27017,2.2.2.2:27017,3.3.3.3:27017,4.4.4.4:27017,5.5.5.5:27017/myapp
I reviewed the source and am still baffled as to why this is happening. Pulling the secret from the k8s environment, it looks correct.
Any help is much appreciated!
It sounds like your secret in k8s might be set up incorrectly. I would try uploading your secrets again and decoding them to make sure they are correct. Watch out for stray line breaks :)
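For example, you can decode the stored value and make invisible characters explicit (the secret name and key are placeholders; od -c will reveal a trailing \n if one sneaked in):
kubectl get secret myapp-secrets -o jsonpath='{.data.MYAPP_MONGODB}' | base64 -d | od -c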
