passwordless ssh ubuntu in script [closed] - shell

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 7 years ago.
I have passwordless ssh set up for Ubuntu. It works fine: I can run ssh commands from the command line. But if I have a script that contains an ssh command, it still asks for my password.
Example of command in script:
ssh ubuntu@localhost 'mkdir -p mydir'
Any ideas how to solve this problem?
Thanks,
Serban

I think what you mean is:
I can ssh to my server and run this script without a password as my own public key is in the server's authorized_keys file, but when I run the script and it ssh's to itself, it asks for a password. Why?
If so, the answer is that the server does not have your private key, so the entry in the authorized_keys file is insufficient.
You can test this by checking whether, when logged in to the server, ssh ubuntu@localhost asks for a password.
Either:
copy your private key to the server (in general a bad idea); or
generate a new private/public keypair on the server (with ssh-keygen), and put the public element of that keypair into authorized_keys.
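A rough sketch of the second option, run on the server (shown here against a scratch directory so it is safe to try anywhere; on the real server you would use ~/.ssh):

```shell
# Scratch directory standing in for ~/.ssh on the server
SSH_DIR=$(mktemp -d)

# Generate a passphrase-less RSA keypair non-interactively
ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"

# Authorize the new public key for logins to this same host
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```

With the real ~/.ssh populated this way, ssh ubuntu@localhost 'mkdir -p mydir' from inside the script should no longer prompt for a password.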

In the server:
cd /home/ubuntu/.ssh/
cat id_rsa.pub >> authorized_keys
where id_rsa.pub is the public key of the client from which you want to access the server.

Related

How to setup trusted SSL for web servers in internal Windows network? [closed]

Closed 2 years ago.
I'm developing a web application for use inside our internal Windows domain. I have 3 servers: apps.mycompany.com (primary), api.mycompany.com, and files.mycompany.com. Right now, everything works fine over HTTP. But I need to have these accessible over SSL/https to Windows desktop clients on the network (Chrome/Firefox/Edge) and iOS (Safari/Chrome).
I've set up self-signed certs using OpenSSL and configured nginx so that the servers respond correctly and serve data. But I'm constantly running into "not secure" / "invalid certificate" errors and mixed-content (http/https) warnings that stymie my development. The errors on api and files are especially pernicious, as they silently "break" things in ways that aren't obvious to the user.
I need a solution where everyone can simply hit https://apps.mycompany.com... and everything "just works", without user intervention (allowing insecure connections, manually adding certs, adding certificates to Trust stores, etc.)
Advice?
EDIT: I see this question was closed. Isn't setting up SSL/https an integral part of modern web development? (and yes, I had already asked my question on Server Fault).
You need to create a root certificate that would be trusted by all your clients. Then you can sign server certificates with that "root" key so that server certificates would also be trusted.
Such certs can be issued with standard tooling (OpenSSL, for example).
The more challenging task is installing this root cert on all your clients. You can ask your domain administrator to help you with that. Otherwise you will have to ask every user to install the root cert themselves (and they will probably also need to be local administrators).
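A hedged sketch of issuing such certs with OpenSSL (file names, lifetimes, and the apps.mycompany.com host come from the question or are placeholders; a production setup would protect the CA key and pick sensible validity periods):

```shell
# 1. Root CA: private key plus self-signed certificate
openssl req -x509 -newkey rsa:4096 -nodes -sha256 -days 3650 \
  -keyout rootCA.key -out rootCA.crt \
  -subj "/CN=MyCompany Internal Root CA"

# 2. Server key and certificate signing request
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout apps.key -out apps.csr \
  -subj "/CN=apps.mycompany.com"

# 3. Sign the CSR with the root CA, including the SAN that
#    modern browsers require
printf "subjectAltName=DNS:apps.mycompany.com\n" > san.ext
openssl x509 -req -in apps.csr -sha256 -days 825 \
  -CA rootCA.crt -CAkey rootCA.key -CAcreateserial \
  -out apps.crt -extfile san.ext
```

Clients that trust rootCA.crt will then accept apps.crt; repeat steps 2-3 for api and files. The root cert itself is what has to be pushed to every machine (on a Windows domain, Group Policy can distribute it).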

Duplicity does not work with lftp+ftps backend [closed]

Closed 2 years ago.
Trying to backup Ubuntu 18.04.1 server using duplicity to a FTPS (FTP over SSL) server. The password is stored in the FTP_PASSWORD environment variable as suggested. The duplicity command is:
duplicity /path/to/backup ftps://user@hostname/some/dir
The problem is that this translates into the following when duplicity turns around and calls lftp:
open -u 'user,pass' ftps://hostname
This will not work unless the open command is changed to the following (without the ftps:// prefix on the hostname):
open -u 'user,pass' hostname
What I cannot figure out is either:
How to tell duplicity not to build up the open command with the ftps:// prefix.
How to get lftp to work with the prefix
Note: The FTPS server works fine with other FTP clients, and even works properly with lftp as long as I build up the open command correctly.
I had the same problem: lftp worked fine over ftps when I gave it just the hostname, whereas duplicity failed with TLS "unexpected packet" errors.
The solution was: instead of writing ftps://, write ftpes://
duplicity /path/to/backup ftpes://user@hostname/some/dir
This changes how and when the credentials are encrypted by lftp (ftpes negotiates TLS explicitly after connecting, rather than using implicit TLS).
That seems wrong; https://lftp.yar.ru/lftp-man.html clearly states that URLs are viable:
open [OPTS] site
Select a server by host name, URL or bookmark. When a URL or bookmark
is given, automatically change the current working directory to the
directory of the URL. Options:
...
--user user use the user for authentication
--password pass use the password for authentication
--env-password take password from LFTP_PASSWORD environment variable
site host name, URL or bookmark name
also
cmd:default-protocol (string)
The value is used when `open' is used with just host name without
protocol. Default is `ftp'.
so removing ftps:// simply makes lftp connect via plain ftp, which is probably not what you want.
I'd suggest you enable duplicity's maximum verbosity (-v9) and find out why lftp fails to connect via ftps://
..ede/duply.net
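As a sketch, the two debug invocations suggested above would look roughly like this (user, pass, hostname, and paths are placeholders; the commands are shown via echo so the sketch is safe to run anywhere — drop the echo to run them for real):

```shell
# Duplicity at maximum verbosity, to see how it drives lftp
echo duplicity -v9 /path/to/backup ftps://user@hostname/some/dir

# Driving lftp directly with its debug switch (-d)
echo lftp -d -u user,pass -e '"ls; bye"' ftps://hostname
```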

Connecting to Amazon EC2 Instance on Windows 10 bash - Permission denied (publickey) [closed]

Closed 6 years ago.
I am trying to connect to my Amazon EC2 instance using bash on Windows 10. I have already downloaded MyKey.pem. It shows me this message:
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0555 for 'MyKey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: MyKey.pem
Permission denied (publickey).
While searching I found this issue, Trying to SSH into an Amazon EC2 instance - permission error, but it didn't help me. I followed almost every single step in the Amazon documentation, again with no results.
I tried to change the mode of the key, but that didn't work for me either:
chmod 400 MyKey.pem
I also tried to connect using PuTTY, but it tells me "server refused our key".
How do I fix this?
It seems you're storing the key file on your usual Windows filesystem. By default, Windows 10 doesn't allow setting 400 permissions under /mnt/<driveletter>/...; if you try, it automatically falls back to 555. To get 400 permissions, move the key file to the emulated Linux filesystem, for example /home/<username>, and run chmod 400 key.pem there. After that, ssh to AWS should work as usual.
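A sketch of that fix, using a stand-in file so it can be tried anywhere (on a real WSL setup the source would be the downloaded MyKey.pem on the Windows side):

```shell
# Stand-in for copying MyKey.pem from Windows into the Linux home, e.g.:
#   cp /mnt/c/Users/<you>/Downloads/MyKey.pem ~/MyKey.pem
KEY=$(mktemp)            # placeholder key file on the Linux filesystem
chmod 400 "$KEY"

# On the Linux filesystem the mode really sticks
stat -c %a "$KEY"        # prints 400
```

After that, ssh -i ~/MyKey.pem ubuntu@<ec2-public-dns> should get past the permissions check (the default login name varies by AMI: ubuntu, ec2-user, etc.).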

What will i have to do after registration for domain [closed]

Closed 8 years ago.
This morning I purchased my new domain at BigRock. I don't know what to do next: I want to upload my files to my registered domain, but how can I do this?
I'm trying to connect via FileZilla to upload the content for the registered domain, but I'm getting this error:
Status: Resolving address of domainname.com
Status: Connection attempt failed with "EAI_NODATA - No address associated with nodename".
I also tried ftp.domainname.com:
Status: Resolving address of ftp.domainame.com
Status: Connection attempt failed with "EAI_NONAME - Neither nodename nor servname provided, or not known".
What do I have to do now?
Question 1: What is the first step?
Question 2: What username and password do I have to give FileZilla to upload content? Is it the site password or the account password?
Before you go any further, ask yourself this: did you set up an FTP user and/or hosting? It sounds like you haven't.
If not, that is step 1. You need to get hosting set up first; during that process you should be given (or create) the username and password for the FTP login.

How to log in as "postgres" user with PostgreSQL 8.4.7 on Windows 7 [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I installed PostgreSQL on my Windows 7 desktop.
Usually PostgreSQL creates a new account on the machine, but when I go to switch users there is no user named postgres. I then checked the Users folder on the C: drive, and there is already a user folder named postgres.
So why can't I log in to my desktop with the postgres account?
The postgres account is a service account. It doesn't have the login right, and cannot be logged into. You can use runas.exe to run commands as the PostgreSQL user account, or shift-right-click on a program and use "Run as...".
In PostgreSQL 9.2 and above the installer runs PostgreSQL under the built-in NETWORK SERVICE account by default, so no postgres user account needs to be created.
In general, there is no need to run programs as the postgres user on Windows. Just specify the user to connect to the PostgreSQL server as, eg:
psql -U postgres -h localhost dbname
