certbot - Incomplete or failed recovery for /var/lib/letsencrypt/temp_checkpoint - lets-encrypt

If you have immutable nginx conf files, you will get the following error when running certbot:
certbot
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Unable to recover files from /var/lib/letsencrypt/temp_checkpoint
Incomplete or failed recovery for /var/lib/letsencrypt/temp_checkpoint
Unable to revert temporary config

This is caused by files in the /var/lib/letsencrypt/temp_checkpoint directory getting out of sync. No big deal; just delete the temp files.
Delete all files in the /var/lib/letsencrypt/temp_checkpoint directory:
rm /var/lib/letsencrypt/temp_checkpoint/*
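The cleanup above can be sketched as a small shell helper (`clean_checkpoint` is a hypothetical name; the default path matches the error message, adjust if your distro differs):

```shell
#!/usr/bin/env bash
# Clear certbot's stale temporary checkpoint so the next run starts clean.
# clean_checkpoint is a hypothetical helper; the default path is certbot's.
clean_checkpoint() {
  local ckpt="${1:-/var/lib/letsencrypt/temp_checkpoint}"
  [ -d "$ckpt" ] || return 0        # nothing to recover from
  rm -rf -- "${ckpt:?}"/*           # delete the out-of-sync temp files
}
```

Run it as root. If certbot failed in the first place because the nginx conf files are immutable, also check them with `lsattr` and clear the flag with `chattr -i` before re-running certbot.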

Related

FileAlreadyExistsException: Failed to rename temp file <temp_file> to <another_name> because file exists

I have built a streaming pipeline with Spark Auto Loader.
The source folder is an Azure Blob Storage container.
We encountered a rare issue (we could not replicate it). Below is the exception message:
org.apache.hadoop.fs.FileAlreadyExistsException: Failed to rename temp file dbfs:/mnt/delta_checkpoints/sources/0/rocksdb/__tmp_path_dir/.2.zip.52d0723f-b803-4a8a-9533-9d6e67813641.tmp to dbfs:/mnt/delta_checkpoints/sources/0/rocksdb/2.zip because file exists
Please help with a resolution, as this looks like it could be a known platform issue.
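The thread ends without a confirmed fix. One hedged workaround sometimes applied to rename clashes like this is to stop the stream and remove the orphaned `.tmp` files left under the RocksDB `__tmp_path_dir` before restarting. This is an assumption, not a documented platform fix; the path below is illustrative (on Databricks the DBFS mount is visible under `/dbfs`), and `clean_rocksdb_tmp` is a hypothetical name:

```shell
#!/usr/bin/env bash
# Hypothetical cleanup: list and remove orphaned RocksDB temp files left
# behind by an interrupted checkpoint upload. Stop the stream first.
clean_rocksdb_tmp() {
  local dir="${1:-/dbfs/mnt/delta_checkpoints/sources/0/rocksdb/__tmp_path_dir}"
  [ -d "$dir" ] || return 0
  find "$dir" -type f -name '*.tmp' -print -delete
}
```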

How to solve laravel file_input_error

When I log in, this error shows up. I have already given permissions on src/storage:
file_put_contents(/var/www/html/catalog/src/storage/framework/cache/55/f9/55f98a9ae16c0c5f7c41c2f6d5435d3f37274a71): failed to open stream: No such file or directory
Create a cache directory in the storage directory and give it read and write permissions. The error is saying that your cache directory does not exist.
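A minimal sketch of that fix, assuming a standard setup where the web server runs as `www-data` (adjust the user and app path to your installation; `ensure_cache_dir` is a hypothetical name):

```shell
#!/usr/bin/env bash
# Create Laravel's framework cache directory and make it writable by the
# web server. ensure_cache_dir is a hypothetical helper name.
ensure_cache_dir() {
  local app="${1:-/var/www/html/catalog/src}"
  mkdir -p "$app/storage/framework/cache"
  chmod -R 775 "$app/storage"
  # Assumption: the web server runs as www-data; ignore failure when not root.
  chown -R www-data:www-data "$app/storage" 2>/dev/null || true
}
```

Laravel's file cache driver should then create the two-level hash subdirectories (the `55/f9/...` in the error) by itself once the parent `cache` directory exists and is writable.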

aptly can't open database leveldb

Probably after a reboot, the MANIFEST file has been cleared and is now empty:
# aptly repo show -with-packages unstable
ERROR: can't open database: leveldb: manifest corrupted (field 'comparer'): missing [file=MANIFEST-010975]
I'm looking for a way to rebuild the MANIFEST, or to back up the db, reinstall it properly, and restore my backup.
Have you tried "aptly db recover"? http://www.aptly.info/doc/aptly/db/recover/
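A cautious way to try that is to back up the database directory first, so `aptly db recover` can't make things worse. A sketch, assuming aptly's default `~/.aptly/db` location (adjust if you changed `rootDir`; `recover_aptly_db` is a hypothetical name):

```shell
#!/usr/bin/env bash
# Back up aptly's leveldb database, then attempt the built-in recovery.
recover_aptly_db() {
  local db="${1:-$HOME/.aptly/db}"
  [ -d "$db" ] || { echo "no db at $db" >&2; return 1; }
  cp -a -- "$db" "${db}.bak.$(date +%Y%m%d%H%M%S)"  # restore point first
  aptly db recover                                  # attempts leveldb recovery
}
```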

Manually Purging Nginx Cache Causes Errors in Log File

I am attempting to clear the nginx cache when the CMS (ExpressionEngine) publishes new content. I have been just purging the entire folder and letting the cache rebuild itself. It seems to be working fine, but it is filling up the error logs with these entries:
2014/12/15 12:35:09 [crit] 21686#0: unlink() "/var/nginx/cache/default/6197dda0a6cadcec5563533cb6027580" failed (2: No such file or directory)
2014/12/15 12:35:10 [crit] 21686#0: unlink() "/var/nginx/cache/default/bb8eca6b51c655989bd717a9708b244e" failed (2: No such file or directory)
2014/12/15 12:35:10 [crit] 21686#0: unlink() "/var/nginx/cache/default/6f9b9aea38c5761a87cffd365e51e7a4" failed (2: No such file or directory)
It seems that nginx keeps track of the cache files and gets confused when it goes to purge them after I already did.
Is there a better way to be purging the cache that doesn't cause these errors?
Off the top of my head, one way of doing this is specifying secret headers in nginx which will bypass the cache, thus theoretically refreshing the existing files.
But also, there is nothing wrong with your way of doing it. The only ugliness is these logs, which invariably show up as [crit], which they are not in the case of a manual purge. :)
"It appears that these errors occur when NGINX itself tries to delete cache entries after the time specified by the inactive parameter of the fastcgi_cache_path directive. The default for this is only 10 minutes, but you can set it to whatever value you want. I’ve set it to 7 days myself, which seems to work well as I haven’t seen this error at all after changing it."
Source: https://www.miklix.com/nginx/deleting-nginx-cache-puts-critical-unlink-errors-in-error-log/
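Based on the quoted advice, the change is a one-line edit to the cache zone declaration. A sketch of where `inactive` goes; the zone name and sizes here are placeholders, though the path matches the log entries above:

```nginx
# In the http {} block; keys_zone name and sizes are placeholders.
# inactive defaults to 10m; raising it to 7d stopped the [crit] unlink()
# noise for the quoted poster.
fastcgi_cache_path /var/nginx/cache/default levels=1:2 keys_zone=default:10m inactive=7d max_size=1g;
```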

SVN Committing is broken since today

When committing changes to already existing files, I have been receiving the following error messages since today, although as far as I know no one changed anything on the server or client side.
The server is running SUSE Linux Enterprise Server 10 (i586). We're using mod_dav_svn 1.6.4 in Apache 2.2.13. The SVN server is running behind a reverse proxy whose settings are also said not to have changed.
The other people who have the problem and I are all using TortoiseSVN on Windows as the client.
Updating and creating new files also works without problems.
mod_dav_svn close_stream: error closing write stream [500, #2]
Can't open file '/var/lib/svn/repos/project/db/transactions/1744-1gq.txn/next-ids': No such file or directory [500, #2]
mod_dav_svn close_stream: error closing write stream [500, #2]
Can't open file '/var/lib/svn/repos/project/db/transactions/1744-1gr.txn/node.c-293.0-1732': No such file or directory [500, #2]
Could not MERGE resource "/repos/project/!svn/act/48c175a7-c2dc-624d-a16d-c50c9a4f1679" into "/repos/project/folder/branches/CR008/folder/folder/WebContent/custom/webtop/admin2". [409, #0]
An error occurred while committing the transaction. [409, #2]
Can't open file '/var/lib/svn/repos/project/db/transactions/1744-1gs.txn/props': No such file or directory [409, #2]
I also checked the disk space, restarted the SVN server, and ran svnadmin recover. What else could I try?
The problem occurred due to an HTTP proxy server sitting between the reverse proxy and the client. As soon as it was deactivated, SVN worked again :-).
I'd check whether those "no such file or directory" messages are accurate; such error log entries often are.
In my case, the problem was that not all directories had been copied. Recreate the missing ones and fix permissions:
sudo mkdir repodir/db/transactions
sudo mkdir repodir/db/txn-protorevs
sudo chmod 775 repodir/db -R
sudo chgrp www-data repodir/db -R
If this was a reverse proxy issue, adding 'BrowserMatch "SVN" redirect-carefully' to your Apache configuration should solve the problem.
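For context, that directive sits in the Apache config alongside the mod_dav_svn location. A minimal sketch; the location and repository path are placeholders modeled on the error messages above:

```apache
# Work around proxies that mangle SVN's WebDAV redirects.
BrowserMatch "SVN" redirect-carefully

<Location /repos>
    DAV svn
    SVNPath /var/lib/svn/repos/project
</Location>
```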

Resources