Error trying to use duplicity for backups - rackspace-cloudfiles

I was asked to help with a broken backup script on a legacy system.
There is a script on the server that is supposed to run hourly and push a DB backup to Rackspace Cloud Files. Here is the result:
START cf-postgresql-dump ...
Wed 04 Nov 2015 06:49:09 AM EST
Synchronizing remote metadata to local cache...
Copying duplicity-full-signatures.20130622T180407Z.sigtar.gpg to local cache.
Download of 'duplicity-full-signatures.20130622T180407Z.sigtar.gpg' failed (attempt 1): CloudFiles returned: 404 Not Found
Download of 'duplicity-full-signatures.20130622T180407Z.sigtar.gpg' failed (attempt 2): CloudFiles returned: 404 Not Found
Download of 'duplicity-full-signatures.20130622T180407Z.sigtar.gpg' failed (attempt 3): CloudFiles returned: 404 Not Found
Download of 'duplicity-full-signatures.20130622T180407Z.sigtar.gpg' failed (attempt 4): CloudFiles returned: 404 Not Found
Download of 'duplicity-full-signatures.20130622T180407Z.sigtar.gpg' failed (attempt 5): CloudFiles returned: 404 Not Found
Giving up downloading 'duplicity-full-signatures.20130622T180407Z.sigtar.gpg' after 5 attempts
What is the source of the problem and how can I fix it?
As far as I can see, a signature file is missing from our backup storage, so we cannot produce an incremental backup. Am I right?
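For reference, a way to double-check what duplicity can still see on the remote side; the credentials and container name below are placeholders, not the real values from the script:
# Placeholders for the Rackspace credentials the real script exports.
export CLOUDFILES_USERNAME=example_user
export CLOUDFILES_APIKEY=example_api_key
# Ask duplicity which backup chains and signature files it can see in the container.
duplicity collection-status cf+http://db_backup_container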

Smells like the cf backend is broken, probably due to some changes in the CF API.
Try to update to the latest duplicity 0.6.x or 0.7.x and retry (rough sketch below).
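A sketch of what that could look like on an apt-based system; the PPA, the paths and the container name are assumptions, so adjust them to your distro and your script:
# Check which duplicity version is currently installed.
duplicity --version
# The duplicity team PPA carries newer 0.6.x/0.7.x builds (Ubuntu/Debian assumption).
sudo add-apt-repository ppa:duplicity-team/ppa
sudo apt-get update && sudo apt-get install duplicity
# Since the old signature file is gone anyway, start a fresh chain with a full backup (placeholder paths).
duplicity full /var/backups/db cf+http://db_backup_container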
..ede/duply.net

Related

Error when I started to create a symfony project

I'm new to Symfony and I have a little problem. When I create the project with Composer, I get this error: "https://repo.packagist.org could not be fully loaded (curl error 7 while downloading https://repo.packagiste.org/package.json: Failed to connect to 127.0.0.1 port 57481 after 2028 ms: Connection refused), package information was loaded from the local cache and may be out of date".
If someone can help me, how can I resolve this so that I can move on?
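One way to start narrowing this down, since "Connection refused" on 127.0.0.1 usually means Composer is being pointed at a local proxy or mirror that isn't running; whether that applies here is an assumption:
# Let Composer report problems it can detect in its own configuration and connectivity.
composer diagnose
# Check whether a proxy environment variable is forcing traffic through 127.0.0.1.
env | grep -i proxy
# Look through the global Composer configuration for repositories or proxies pointing at localhost.
composer config --global --list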

Why is the VestaCP installer not working on an EC2 instance

I'm following the official installation instructions.
When I run the last command, bash vst-install.sh, I get the following error:
Retrieving http://rpms.remirepo.net/enterprise/remi-release-.rpm
curl: (22) The requested URL returned error: 404
error: skipping http://rpms.remirepo.net/enterprise/remi-release-.rpm - transfer failed
Error: Can't install REMI repository
I see that the URL the script is trying to curl is missing a release number at the end, after remi-release-.
Why is this happening, and what can I do about it?
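For what it's worth, a way to check the piece that seems to be missing; the assumption here is that the installer derives the Remi release number from the OS release string, and 7 is only an example value:
# See how the OS identifies itself; an unexpected string here would leave the version empty.
cat /etc/redhat-release
# Confirm that a fully specified Remi URL exists for your major release (7 used as an example).
curl -I http://rpms.remirepo.net/enterprise/remi-release-7.rpm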

Error during installation of JasperReports Server

I'm trying to install JasperReports Server but I keep getting the same error: during the post-install actions I get the following
Problem running post-install step. Installation may not complete correctly
Error running C:\Jaspersoft\jasperreports-server-cp-8.0.0/buildomatic/js-ant.bat
load-sugarcrm-db:
[create-ks] Failed to create the keystore C:\Users\Paolo\.jrsks
BUILD FAILED
C:\Jaspersoft\jasperreports-server-cp-8.0.0\buildomatic\build.xml:377: The following error occurred while executing this line:
C:\Jaspersoft\jasperreports-server-cp-8.0.0\buildomatic\bin\setup.xml:377: Keystore may have been tampered with.
Total time: 1 second
I'm using the default installation on Windows 11. I also tried running the installer in compatibility mode for Windows 7, but nothing changed. Can someone help me?
My problem was that js-install.sh created the keystore and placed it in a user directory where it then couldn't access it again.
You could check the access rights on the folder the keystore is placed in and verify that your user has the right ones.
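On Windows, one way to check that is icacls; the .jrsks path is taken from your log, while the companion .jrsksp file is an assumption about how recent JasperReports Server versions name the keystore properties file:
icacls C:\Users\Paolo\.jrsks
icacls C:\Users\Paolo\.jrsksp
If the files exist but were created by a different (for example elevated) account, deleting them and re-running the installer as the user that will run the build is worth a try.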

EC2 yum 404 error

I'm using an Amazon EC2 instance running Red Hat.
Since I last updated yum, it doesn't work any more; I always get 404 Not Found errors, for example:
failure: repodata/repomd.xml from rhui-REGION-rhel-server-releases: [Errno 256] No more mirrors to try.
https://rhui2-cds01.us-east-2.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/%24releasever/x86_64/os/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
https://rhui2-cds02.us-east-2.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/%24releasever/x86_64/os/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
Does anyone know the solution?
According to Red Hat (https://access.redhat.com/articles/1320623):
Typical cause: this issue generally occurs when the client system is able to communicate with the given server but cannot find or access the requested package on the server.
Resolution: this issue can occur due to corruption of the local client cache, so try to clear the cache on the client system:
Try
rm -fr /var/cache/yum/*
yum clean all
then
yum update
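If the 404s come back even after the cache is cleared, it may be worth confirming which repositories yum ends up resolving; this is only a sketch, and the exact repo IDs on RHUI images vary:
yum repolist all
yum -v repolist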

Cert already in hash table exception

I am using ChefDK with Chef 12. I have done the basic setup and uploaded many cookbooks; currently I am using remote_directory in my default.rb.
What I have observed is that whenever there are too many files or too deep a hierarchy in the directory, the upload fails with the exception below:
ERROR: SSL Validation failure connecting to host: xyz.com - SSL_write: cert already in hash table
ERROR: Could not establish a secure connection to the server.
Use `knife ssl check` to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
`knife ssl fetch` to make knife trust the server's certificates.
Original Exception: OpenSSL::SSL::SSLError: SSL_write: cert already in hash table
As mentioned earlier, connecting to the server isn't the problem; it happens only when there are too many files or the hierarchy is deeper.
Can you please suggest what I can do? I have searched online but failed to find a solution.
I have checked the question here, but it doesn't solve my problem.
(For people not working with Chef: Chef uses an embedded Ruby and OpenSSL.)
Some updates based on Tensibai's suggestion: the exceptions have changed since adding the --concurrency 1 option.
Initially I had received:
INFO: HTTP Request Returned 403 Forbidden:ERROR: Failed to upload filepath\file (7a81e65b51f0d514ec645da49de6417d) to example.com:443/bookshelf/… 3088476d373416dfbaf187590b5d5687210a75&Expires=1435139052&Signature=SP/70MZP4C2U‌​dUd9%2B5Ct1jEV1EQ%3D : 403 "Forbidden" <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message>
Then yesterday it changed to:
INFO: HTTP Request Returned 413 Request Entity Too Large: error
ERROR: Request Entity Too Large
Response: JSON must be no more than 1000000 bytes.
Should I decrease the number of files, or is there any other option?
knife --version reports Chef: 12.3.0
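For completeness, the upload is being run roughly like this; the cookbook name is a placeholder:
# Upload one file at a time, per the suggestion above.
knife cookbook upload my_cookbook --concurrency 1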
Should I decrease the number of files or is there any other option?
Usually the files inside a cookbook are not meant to be too large or too numerous; if you have a lot of files to distribute, that's a sign you should change the way you distribute those files.
One option could be to make a tarball, but that makes it harder to manage deleted files.
Another option, if you're on an internal Chef server, is to follow the advice here and raise the client_max_body_size 2M; value for nginx, but I can't guarantee it will work.
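On an Omnibus Chef server that would roughly look like the following; nginx['client_max_body_size'] is the documented tunable for that limit, but the 250m value is only an example and you should sanity-check the setting against your server version:
# Raise the nginx upload limit on the Chef server (example value).
echo "nginx['client_max_body_size'] = '250m'" | sudo tee -a /etc/opscode/chef-server.rb
# Apply the new configuration.
sudo chef-server-ctl reconfigure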
I had the same error. I ran chef-server-ctl reconfigure on the Chef server and then tried uploading the cookbook again, and everything started working fine.
