I'm trying to back up an Ubuntu 18.04.1 server using duplicity to an FTPS (FTP over SSL) server. The password is stored in the FTP_PASSWORD environment variable, as suggested. The duplicity command is:
duplicity /path/to/backup ftps://user@hostname/some/dir
The problem is that this translates into the following when duplicity turns around and calls lftp:
open -u 'user,pass' ftps://hostname
This will not work until you change the open command to the following (without the ftps:// prefix on the hostname):
open -u 'user,pass' hostname
What I cannot figure out is either:
How to tell duplicity not to build up the open command with the ftps:// prefix.
How to get lftp to work with the prefix.
Note: The FTPS server works fine with other FTP clients, and even works properly with lftp as long as I build up the open command correctly.
I had the same problem: lftp worked fine with ftps when I supplied just the hostname, whereas duplicity failed with TLS "unexpected packet" errors.
The solution was: instead of writing ftps://, write ftpes://
duplicity /path/to/backup ftpes://user@hostname/some/dir
This changes how and when the connection (and thus the credentials) is encrypted by lftp: ftpes:// requests explicit TLS (an AUTH TLS upgrade on the standard FTP port), whereas ftps:// expects implicit TLS from the first byte.
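For completeness, here is the full invocation with that change, reusing the placeholder user, host, and paths from the question:
export FTP_PASSWORD='your-ftp-password'
duplicity /path/to/backup ftpes://user@hostname/some/dir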
That seems wrong; https://lftp.yar.ru/lftp-man.html clearly states that URLs are viable:
open [OPTS] site
Select a server by host name, URL or bookmark. When an URL or bookmark
is given, automatically change the current working directory to the
directory of the URL. Options:
...
--user user use the user for authentication
--password pass use the password for authentication
--env-password take password from LFTP_PASSWORD environment variable
site host name, URL or bookmark name
Also:
cmd:default-protocol (string)
The value is used when `open' is used with just host name without
protocol. Default is `ftp'.
So removing ftps:// simply makes lftp connect via plain ftp, which is probably not what you want.
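(As an aside, if you really did want bare host names to default to ftps, the setting quoted above can be overridden; a sketch, e.g. in ~/.lftprc:
set cmd:default-protocol ftps
But that only works around the symptom rather than the cause.)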
I'd suggest enabling duplicity's maximum verbosity (-v9) and finding out why lftp fails to connect via ftps://.
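For example, reusing the command from the question:
duplicity -v9 /path/to/backup ftps://user@hostname/some/dir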
..ede/duply.net
I'm developing a web application for use inside our internal Windows domain. I have 3 servers: apps.mycompany.com (primary), api.mycompany.com, and files.mycompany.com. Right now, everything works fine over HTTP. But I need to have these accessible over SSL/https to Windows desktop clients on the network (Chrome/Firefox/Edge) and iOS (Safari/Chrome).
I've set up self-signed certs using OpenSSL and have configured nginx so that the servers respond correctly and serve data. But I'm constantly running into "not secure" / "invalid certificate" errors and "mixed content" (http/s) warnings that stymie my development. The errors on api and files are especially pernicious, as they "break" things in ways not obvious to the user.
I need a solution where everyone can simply hit https://apps.mycompany.com... and everything "just works", without user intervention (allowing insecure connections, manually adding certs, adding certificates to Trust stores, etc.)
Advice?
EDIT: I see this question was closed. Isn't setting up SSL/https an integral part of modern web development? (and yes, I had already asked my question on Server Fault).
You need to create a root certificate that is trusted by all your clients. Then you can sign your server certificates with that "root" key so that the server certificates are also trusted.
Here is an example of how you can issue such certs (a rough sketch follows after the next paragraph).
The more challenging task is to install this root cert on all your clients. You can ask your domain administrator to help you with that. Otherwise you will have to ask all your users to install the root cert themselves (and they will probably also need to be local administrators...).
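A minimal sketch with OpenSSL, assuming bash (for the process substitution) and treating the file names, subjects, key sizes, and validity periods as placeholders; note the subjectAltName, since modern browsers ignore the CN:
# 1. Create the internal root CA (keep rootCA.key safe, offline if possible)
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 \
  -subj "/CN=MyCompany Internal Root CA" -out rootCA.crt
# 2. Create a key and signing request for one server
openssl genrsa -out apps.mycompany.com.key 2048
openssl req -new -key apps.mycompany.com.key \
  -subj "/CN=apps.mycompany.com" -out apps.mycompany.com.csr
# 3. Sign it with the root CA, adding the SAN browsers require
openssl x509 -req -in apps.mycompany.com.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -days 825 -sha256 \
  -extfile <(printf "subjectAltName=DNS:apps.mycompany.com") \
  -out apps.mycompany.com.crt
# Repeat steps 2-3 for api.mycompany.com and files.mycompany.com.
Point nginx's ssl_certificate / ssl_certificate_key directives at each signed cert and key; only rootCA.crt needs to be distributed to the clients.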
I am trying to connect to my Amazon EC2 instance using Bash on Windows 10. I have already downloaded MyKey.pem. It shows me this message:
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0555 for 'MyKey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: MyKey.pem
Permission denied (publickey).
While searching I found this question: Trying to SSH into an Amazon Ec2 instance - permission error, but it didn't help me. I followed almost every step in the Amazon documentation, again with no results.
I tried to change the mode of the key with the following command, but that didn't work either:
chmod 400 MyKey.pem
I also tried to connect using PuTTY, but it tells me "server refused our key".
How do I fix this?
It seems you're storing the key file on your usual Windows filesystem. By default, Bash on Windows 10 doesn't accept setting 400 permissions under /mnt/driveletter/blablabla; if you try, the mode automatically falls back to 555. If you want 400 permissions, move the key file to the emulated Linux filesystem, for example to /home/username, and run chmod 400 key.pem there. After that, ssh to AWS should work as usual.
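A minimal sketch of those steps, where the Windows path and the EC2 host are placeholders (and the remote user name depends on your AMI):
mv /mnt/c/Users/yourname/Downloads/MyKey.pem ~/
chmod 400 ~/MyKey.pem
ssh -i ~/MyKey.pem ubuntu@your-ec2-public-dns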
I have a passwordless ssh setup on Ubuntu. It works fine; I can issue ssh commands from the command line. But if I have a script that contains an ssh command, it still asks for my password.
Example of command in script:
ssh ubuntu@localhost 'mkdir -p mydir'
Any ideas how to solve this problem?
Thanks,
Serban
I think what you mean is:
I can ssh to my server and run this script without a password as my own public key is in the server's authorized_keys file, but when I run the script and it ssh's to itself, it asks for a password. Why?
If so, the answer is that the server does not have your private key, so the entry in the authorized_keys file is insufficient.
You can test this by checking whether, when logged into the server, ssh ubuntu@localhost asks for a password.
Either:
copy your private key to the server (in general a bad idea); or
generate a new private/public keypair on the server (with ssh-keygen), and put the public element of that keypair into authorized_keys.
On the server:
cd /home/ubuntu/.ssh/
cat id_rsa.pub >> authorized_keys
where id_rsa.pub is the public half of the keypair used by the connecting client (here the server itself, since the script ssh's to localhost).
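Putting option 2 together, a minimal sketch run entirely on the server (assuming RSA defaults and no existing keypair you would overwrite):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh ubuntu@localhost 'mkdir -p mydir'   # should now run without a password prompt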
I'm running:
WAMP on Windows 8
Apache Version: 2.4.2
PHP Version: 5.4.3
MySQL Version: 5.5.24
I have all the modules needed for Magento active. If I install a fresh copy of Magento, it works without problems (except a login issue that I fixed).
I did the following steps:
copied all files from the remote server via FTP to www/mysite (mysite is a subfolder of WAMP's www folder)
dumped the remote db (adding DROP TABLE instructions) and imported it into the local db (and checked the data in local.xml)
replaced all occurrences in the db of http://www.mysite.it with http://localhost/mysite
the secure_url and unsecure_url values both have the trailing /
Now I want to run "magento-cleanup.php" to set permissions, but I receive the following error:
Not Found
The requested URL /mysite/ was not found on this server.
The same thing for every page (home, admin).
Can anyone help?
Not Found
The requested URL /mysite/ was not found on this server.
FYI - you are getting this error because you are using the wrong address.
Here is the clean way of migrating Magento from the remote server to localhost.
As you mentioned, you have copied all the files from the remote server to www/mysite.
So when accessing the local site you should use this link: localhost/mysite
And before accessing it, instead of changing the secure and unsecure URLs manually, you should let Magento make all the necessary changes by itself.
For that, first take a backup and delete the file app/etc/local.xml, then access localhost/mysite; the Magento installation will be triggered. Good luck.
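If you would rather fix the URLs in the database directly, Magento stores them in the core_config_data table; a minimal sketch, assuming the default table name without a prefix and placeholder MySQL credentials and db name:
mysql -u root -p magento_local -e "UPDATE core_config_data SET value='http://localhost/mysite/' WHERE path IN ('web/unsecure/base_url','web/secure/base_url');"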
I have a headless server running Windows Server 2003, and administer it via VNC. It is set to auto login to a specific user account.
I want to change to using Remote Desktop/Terminal Services. However, when I log in remotely, a new user session is created (in addition to the auto-logged-in session). Essentially, I want a remote desktop connection to take over the default session on the computer (the way XP does it). Does anyone know how?
P.S. I am NOT after a single remote session, instead a single GLOBAL session :)
To log on as the 'console' user (the one used when logging in locally), you pass a parameter to mstsc.exe. From a command prompt, type mstsc /h to see the help.
MSTSC /ADMIN /V:YOURSERVERNAME
or
MSTSC /CONSOLE /V:YOURSERVERNAME
(depending on the version that you have)
Please excuse the self answer, but for those using OS X and Remote Desktop Connection, all you need to do is append " /console" to the IP address of the computer you wish to connect to.
Here's how you can switch over:
Start Task Manager
Switch to the Users tab
There should be two users listed: the one you logged on with and the original session you are trying to connect to.
Right-click the one you want to connect to and select "Switch" or "Connect" (I can't remember the exact one).
On the server: Settings > Control Panel > Administrative Tools > Terminal Services Configuration > Server Settings > Restrict each user to one session
Alternately, you can log in to the console (the session that would display on the monitor, if present). From XP-era clients, that's (command-line) "mstsc /console /v:host.to.connect.to". For Vista-era clients, it's "mstsc /admin /v:host.to.connect.to". That option is probably present somewhere in the RDP client settings screen, and tools like Terminals also expose it.