Grafana Loki: move instance by copying bucket (MinIO)

I installed Loki in multi-tenant mode, with MinIO for persistent storage. It works fine, but I need to move it to different storage, so I created a new bucket, copied all the files from the old bucket, and created a new Loki instance pointing at the new bucket. After connecting both Loki instances to Grafana, the new one was empty.
Does Loki have a procedure for importing data from a bucket?
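
For what it's worth, a bucket-level copy normally does carry the data over, provided everything is copied (boltdb-shipper keeps its index in the bucket too, under its own prefix, and in multi-tenant mode chunks sit under per-tenant prefixes) and the new instance uses the same schema_config as the old one. A minimal sketch with the MinIO client, where the aliases, endpoints, and bucket names are hypothetical:

# Register both MinIO deployments with the client (aliases/endpoints are examples)
mc alias set oldminio http://old-minio:9000 ACCESS_KEY SECRET_KEY
mc alias set newminio http://new-minio:9000 ACCESS_KEY SECRET_KEY

# Mirror the entire bucket, not just the chunk objects; copying chunks
# without the index prefix leaves the new Loki with nothing to query
mc mirror oldminio/loki-data newminio/loki-data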

Related

Loki storage migration and data rotation

My current set-up:
I have an AWS EC2 instance for monitoring services which runs Dockerized Grafana (grafana:8.3.4) and Loki (loki:2.5.0). Logs from multiple other services running on other EC2 instances are sent to this Loki instance by Dockerized Promtail running on those instances. Right now I'm using boltdb and filesystem as storage, so the data is stored inside the container, and I'm persisting the container's /loki/data folder to the local filesystem as a volume so that I don't lose any data on container restart.
What I'm looking for:
Is it possible to rotate the data when I hit the disk usage limit on the EC2 instance? For example, move the old Loki data to remote storage like AWS S3 while Loki continues to use the filesystem as storage, and whenever I want to browse the older logs, copy the old Loki data from S3 back onto the Loki instance's filesystem. If this is not possible, is there another way to rotate the Loki data for safe consumption later?
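
Nothing prevents copying the filesystem chunks out and back by hand; a rough sketch, under the assumption that Loki is stopped while files move, with a hypothetical bucket name. The caveat is that the boltdb index under /loki/data must still cover whatever chunks are restored, or Loki cannot query them:

# Archive chunk files older than 30 days to S3, then delete them locally
find /loki/data/chunks -type f -mtime +30 -print0 \
  | while IFS= read -r -d '' f; do
      aws s3 cp "$f" "s3://my-loki-archive/${f#/loki/data/}" && rm -- "$f"
    done

# Later, to browse the old logs, copy the archived files back into place
aws s3 sync s3://my-loki-archive/chunks /loki/data/chunks
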
Is it also possible to push old logs to Loki? For example, I started the Grafana-Loki service today, but my services have been running and generating logs for a month. Is it possible to push those older logs, with their original timestamps, to Loki?
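
On the second question: Loki's push API takes explicit timestamps (nanoseconds since the Unix epoch), so backfilling is possible in principle. A minimal curl sketch against a hypothetical endpoint and label set; note that reject_old_samples in limits_config may need relaxing for month-old entries, and Loki versions before 2.4 reject out-of-order writes within a stream:

# Push one backdated line to Loki's HTTP API (loki:3100 is an example endpoint);
# the first element of each value pair is the timestamp in nanoseconds
curl -s -X POST http://loki:3100/loki/api/v1/push \
  -H 'Content-Type: application/json' \
  -d '{
        "streams": [{
          "stream": {"job": "backfill", "host": "app-01"},
          "values": [["1643673600000000000", "old log line from January"]]
        }]
      }'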

Minio data transfer

I have a broken MinIO cluster and no access to the control plane. I still have access to the filesystem directory with the data and buckets, including .minio.sys, where the broken MinIO config and other cluster data are located. How can I migrate all my data/buckets, with all the files in them, to a new MinIO cluster?
If you were running a single MinIO instance, this is simple - just copy the directory containing .minio.sys onto another system and start MinIO again with the new directory.
If you were running multiple instances (i.e. distributed MinIO), copy each directory containing .minio.sys into new disks (each such directory is a "disk" for MinIO) and start MinIO on the new disks.
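
In practice the copy can be done with rsync; a minimal sketch for the single-instance case, with hypothetical paths and hostname:

# Copy the whole data directory, including .minio.sys, to the new location
rsync -a /old/minio-data/ newhost:/srv/minio-data/

# On the new host, start MinIO against the copied directory
minio server /srv/minio-data

For the distributed case, repeat the copy for each directory containing .minio.sys and pass all of the new disks on the minio server command line, as described above.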

How to copy objects from one S3 bucket to another

I have found a solution for syncing S3 bucket data in Copy objects between S3 buckets:
aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE s3://DOC-EXAMPLE-BUCKET-TARGET
But I want to copy specific files from the production S3 bucket to the staging S3 bucket, using the file paths that are stored in the production database. Any suggestions on how I can achieve that using the Laravel aws-sdk-php, a Lambda function, or the AWS CLI?
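
With the AWS CLI alone, one option is to export the stored paths from the production database into a file and loop over it; aws s3 cp between two buckets is a server-side copy, so the objects don't transit the machine running the loop. keys.txt and the loop itself are illustrative:

# keys.txt holds one object key per line, exported from the production DB
while IFS= read -r key; do
  aws s3 cp "s3://DOC-EXAMPLE-BUCKET-SOURCE/${key}" "s3://DOC-EXAMPLE-BUCKET-TARGET/${key}"
done < keys.txt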

How to increase or add storage (up to 500 GB) on a new AWS instance?

I want to create a new instance in the AWS cloud. The standard space is 8 GB, which is not enough for my purpose.
I have my application in the /var/www directory, where files from users get uploaded. I need the additional space for these files. After a user uploads a file, I move it to my S3 storage.
How can I increase the storage limit of a new instance, or add a new EBS volume attached as /dev/sdb and mounted at /var?
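
One way to do the second option with the AWS CLI and standard Linux tools; the IDs, availability zone, and device names below are placeholders, and copying a live /var is best done with services stopped:

# Create a 500 GB volume in the instance's availability zone and attach it
aws ec2 create-volume --size 500 --volume-type gp3 --availability-zone eu-central-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdb

# On the instance (/dev/sdb typically appears as /dev/xvdb): format, copy, remount
sudo mkfs -t ext4 /dev/xvdb
sudo mount /dev/xvdb /mnt
sudo rsync -a /var/ /mnt/
sudo umount /mnt
sudo mount /dev/xvdb /var
echo '/dev/xvdb /var ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab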

How to restore Elasticsearch indices from S3 to a blank cluster using Curator?

I have an S3 bucket of Elasticsearch snapshots created by a Curator job. I want to be able to restore these indices to a fresh cluster using the S3 bucket. The target Elasticsearch cluster does not have access to the source Elasticsearch cluster, by design.
I've installed the cloud-aws plugin on the Elasticsearch client for the target cluster, and I set permissions for the S3 bucket using environment variables. I have the Curator config and action files in place. I've verified the AWS permissions on the S3 bucket, but I'm not sure how to verify the permissions from the Elasticsearch cluster's perspective. When I try running the Curator job I get the following:
get_repository:662 Repository my-elk-snapshots not found.
I know that if I were using Elasticsearch directly I would need to create a reference to the S3 bucket so that the cluster knows about it. Is this the case for a fresh restore? I think Curator uses the Elasticsearch cluster under the hood, but I'm confused about this scenario since the cluster is essentially blank.
How did you add the repository to the original (source) cluster? You need to use exactly the same steps to add the repository to the new (target) cluster. Only then will the repository be readable by the new cluster. That's why you're getting the "repository not found" message. The repository has to be added to the new cluster so that its snapshots are visible and can therefore be restored.
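
Concretely, registering the repository on the target cluster is a single API call; the repository name comes from the error message above, while the bucket and region are placeholders. The _verify endpoint then checks the bucket permissions from the cluster's own perspective, which was the other open question:

# Register the S3 repository under the name Curator expects
curl -X PUT 'http://localhost:9200/_snapshot/my-elk-snapshots' \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "my-snapshot-bucket", "region": "us-east-1"}}'

# Verify that the cluster itself can read and write the bucket
curl -X POST 'http://localhost:9200/_snapshot/my-elk-snapshots/_verify'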
