Restore loki compactor data - grafana-loki

I have configured the compactor for Loki, and I archive the files it produces. I have files like compactor.gz and loki-stack.gz.
However, I cannot read these files directly, and I cannot find any documentation on how to restore them into Loki.
How can I import them into Loki so I can make use of the archived logs?

Related

Docker - Unsupported redo log format. The redo log was created with MariaDB x.x.x

While trying to spin up a server using docker-compose, I have an issue when I try to downgrade or upgrade the mysql image. As I am just trying to identify the right mysql/mariadb version, I'm not concerned about the data at the moment.
I've been getting the following error:
"Unsupported redo log format. The redo log was created with MariaDB 10.6.5."
I am unable to delete the log files ib_logfile0 and ib_logfile1. How do I successfully upgrade/downgrade mysql when it gives such an error?
When the upgraded/downgraded version of mysql/mariadb is spun up, you can't delete the ib_logfile0 and ib_logfile1 log files, because the new version won't start and therefore you can't even docker exec into it.
Since data retention is not a priority, the solution here is to remove the specific container (or any stopped containers) along with all unused images, not just dangling images, by adding the -a flag to the command:
docker system prune -a
This issue may also happen if you are moving between two different project folders. In that case, try to identify the volumes that may have used the same image and delete them where necessary:
docker volume rm volume_name volume_name
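Note that docker system prune does not remove volumes unless you also pass --volumes, so stale InnoDB files living in a named volume can survive a prune. A minimal sketch for tracking down the offending volume (the volume name below is a placeholder):
# List all volumes and look for the one backing the database service
docker volume ls
# Inspect a candidate; its Mountpoint is where ib_logfile0/ib_logfile1 live
docker volume inspect myproject_mysql_data
# Remove it so the new image can initialize a fresh data directory
docker volume rm myproject_mysql_data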
More on how to remove stopped containers, images, and volumes:
https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes

aws s3 glacier restore from vault

I have a vault and need to restore one of the folders from it. I initiated the job using the AWS CLI and got the inventory as a JSON file, but I am unable to retrieve the complete folder from the inventory. Can anyone help me restore the folder?
I am able to get the inventory in CSV format to see the archive IDs of the files, but is it possible to retrieve the complete folder as-is, given that it shows a separate archive ID for every file in the folder?
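Note that a Glacier vault stores archives as a flat list; any folder structure exists only in the archive descriptions, so there is no single call that restores a folder - each file has to be retrieved by its own archive ID. A minimal sketch that loops over the inventory with the AWS CLI and jq (the vault name is a placeholder; the inventory layout is the standard inventory JSON):
# Request retrieval of every archive listed in the inventory JSON
for id in $(jq -r '.ArchiveList[].ArchiveId' inventory.json); do
  aws glacier initiate-job --account-id - --vault-name myvault \
    --job-parameters "{\"Type\": \"archive-retrieval\", \"ArchiveId\": \"$id\"}"
done
# Once a retrieval job completes (typically hours later), download its output
aws glacier get-job-output --account-id - --vault-name myvault \
  --job-id <job-id> retrieved-file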

Opening High Sierra system log files (tracev3)

How do I view High Sierra system log files? Specifically I want to do so from recovery, or remotely.
In recovery, log collect gives:
“log: failed to collect LiveData: No such file or directory (2)”
log show doesn't seem to work according to its documentation, even on a normally booted system, e.g.
log show --file Persist/00000000000000eb.tracev3 gives:
log: Could not open tracev3 log file: The specified URL did not refer to a valid log archive
Even though the documentation states:
--file file Display events stored in the given .tracev3 file. In order to be decoded, the file must be contained within a valid .logarchive bundle, or part of the system logs directory.
which would seem to imply that a .tracev3 file should not need to be in a log archive if it's in the system log directory.
In the case of recovery, first starting the logd daemon allows you to collect logs, which is a start, e.g.
launchctl load /System/Library/LaunchDaemons/com.apple.logd.plist
log collect --last 3h
I'm still unclear how one would do this remotely, i.e. I still can't work out how to read an arbitrary tracev3 file in the system log directory with log.
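One workaround, assuming you can run log collect on the target machine (e.g. over ssh): wrap the logs into a .logarchive bundle and copy that off, since log show accepts an archive even though it refuses a bare tracev3 file. A sketch (paths are placeholders):
# On the target machine: collect recent logs into a portable archive bundle
log collect --last 3h --output /tmp/system.logarchive
# Copy the bundle to your own machine, then read it locally
log show --archive /tmp/system.logarchive --last 3h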

How do I force a rebuild of log data in Filebeat 5

I have Filebeat 5.x shipping logs to Logstash.
How do I reset the “file pointer” in Filebeat?
This is a similar problem to:
How to force Logstash to reparse a file?
https://discuss.elastic.co/t/how-do-i-reset-the-file-pointer-in-filebeats/49440
I cleaned out all of Elasticsearch's data and deleted /var/lib/filebeat/registry, but Filebeat only ships the new lines.
Changing registry_file doesn't help; the file offsets are just saved to the new registry file (and deleting that file leads to the same problem):
filebeat.registry_file: registry
Stop the Filebeat service.
Rename the registry file - usually found in /var/lib/filebeat/registry.
Start the Filebeat service.
sudo service filebeat stop
mv /var/lib/filebeat/registry /var/lib/filebeat/registry.old
sudo service filebeat start
The Filebeat agent stores all of its state in the registry file. The location of the registry file should be set inside of your configuration file using the filebeat.registry_file configuration option.
I recommend specifying an absolute path in this option so that you know exactly where the file will be located. If you use a relative path then the value is interpreted relative to the ${path.data} directory. On Linux installations, when started as a service or started using the filebeat.sh wrapper, path.data is set to /var/lib/filebeat.
After deleting this registry file, Filebeat will begin reading all files from the beginning (unless you have configured a prospector with tail_files: true).
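For reference, a minimal sketch of how those two options look in a Filebeat 5.x filebeat.yml (the log path is a placeholder):
# Absolute path so there is no ambiguity about where state is kept
filebeat.registry_file: /var/lib/filebeat/registry
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/app/*.log
  # Uncomment to skip existing content and only ship newly appended lines
  #tail_files: true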
If you continue to have problems, I recommend looking at the Filebeat log file which will contain a line stating where the registry file is located. For example:
2017/01/18 18:51:31.418587 registrar.go:85: INFO Registry file set to: /var/lib/filebeat/registry
As already mentioned here, stopping the Filebeat service, deleting the registry file(s), and restarting the service is correct.
I just wanted to add, for Windows users: if you haven't specified a unique location for filebeat.registry_file, it will likely default to ${path.data}/registry, which somewhat confusingly resolves to the hidden C:\ProgramData\filebeat directory, as mentioned by the folks at Elastic.
In my case I had to show hidden files before it was displayed.

How do I connect mongodb to an existing set of database files

I downloaded a MongoDB database, meaning I got a set of XXX.0 [xxx.1, ...] and XXX.ns files. I installed MongoDB (on Mac OS X using Homebrew) and ran mongod with the dbpath parameter pointing to a directory containing these files. However, when I use the mongo shell and ask to see the available databases or collections, I get nothing but the 'local' database and no collections. What am I doing wrong? How can I get MongoDB to 'see' the database in the directory?
Based on the naming of the database files you have downloaded, they were generated using MongoDB's MMAPv1 storage engine. Had they been generated with the WiredTiger storage engine, you would have collection.*.wt files instead.
Now, if you have just installed the latest stable version of MongoDB through Homebrew (currently v3.2), it will have come with the new default storage engine, WiredTiger.
If you also have the local.<n> and local.ns files in the directory, MongoDB will give you an error about storage-engine incompatibility, like the one below:
[initandlisten] exception in initAndListen: 28662 Cannot start server. Detected data files in /data/target created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating
Without these files, mongod will run with the WiredTiger storage engine and will ignore the existing (copied) MMAPv1 files.
If this is your case, you can run mongod with the --storageEngine option to make it use MMAPv1. Example:
mongod --dbpath <your db files dir> --storageEngine=mmapv1
Also, for the record, check out mongodump and mongorestore to export/import the database contents in binary format instead of copying the database files.
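A minimal sketch of that dump/restore flow (database name and paths are placeholders):
# Dump the database from the mongod instance running against the MMAPv1 files
mongodump --db mydb --out /tmp/dump
# Restore it into another instance (e.g. one running WiredTiger)
mongorestore --db mydb /tmp/dump/mydb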
