Synology default SSH login directory - bash

When I connect to my Synology server via SSH, by default I land in the root directory. I want to change this to my home user folder. I am kind of able to do this via ssh name@server 'cd /volume1/<user>; bash', but then I get a different bash interface compared to when I normally 1) log in via ssh name@server and 2) then do cd /volume1/<user>. I would like the result of the latter, but in a single command. What is the best method for doing this?
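One approach that usually restores the normal interactive behavior is to force a pseudo-terminal and replace the command shell with a login shell. A minimal sketch (the -t flag and exec bash -l are the assumptions here; adjust the path to your user folder):
ssh -t name@server 'cd /volume1/<user> && exec bash -l'
The -t allocates a terminal, and exec bash -l starts a login shell so your usual profile files are read, just as in a normal login.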

Related

SCP file from ssh session to localhost

I have a headless file server on which I store and manage downloads and media, but occasionally I have to transfer small files back to my computer (Mac, using bash shell). The problem is that some files have more user-friendly names and commonly have spaces in them, and they are buried in the file directory hierarchy I have set up on my server.
When I'm using scp from my local machine, I don't have tab completion, so I have to manually type out the entire path and name with spaces escaped. When I ssh into the server first, the command:
scp /home/me/files/file\ name\ with\ spaces.png Me@localhost:/Users/Me/MyDirectory
fails with the error "Permission denied, please try again" even though I'm entering my local machine user password properly.
I've learned a little bit of sftp since I've been told that may be a better tool for file transfer. However, the utility seems outdated and I still don't have tab completion after establishing a connection to the server (on my Terminal when pressing Tab I just get a tab character).
My question is this: what can I do to allow tab completion while using scp from my Mac? Or am I using incorrect syntax for scp while in an ssh session, and is there something in that command I should fix? Or, is there a (better? newer?) tool other than sftp that would offer tab completion on a server?
Finally, if none of these problems have simple solutions, is there some package I could install (e.g. a completion package from Homebrew or the like) that would facilitate better tab-completion with any of these commands?
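On that last point, one commonly suggested route is Homebrew's bash-completion package, which ships completion for scp (it can even complete remote paths when key-based ssh login is set up, so the lookup does not prompt for a password). A sketch, assuming a default Homebrew prefix:
brew install bash-completion
# then add this to ~/.bash_profile and open a new shell:
[ -f "$(brew --prefix)/etc/bash_completion" ] && . "$(brew --prefix)/etc/bash_completion"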
Looks to me like this is some incorrect scp usage.
This is the format of the command:
scp ./localFile.txt remoteUser@remoteHost:/remoteFile.txt
You were so close, but you have localhost set where you should have your remoteHost.
localhost is the name that resolves to the machine that you are currently on - so in your workflow, you are sshing into a machine, and then trying to scp that file to the same machine you are already ssh'd into.
What you need to do, is figure out the IP address, or the physical host name of the computer that you are trying to connect to, and use that instead.
scp ./localFile.txt remoteUser@192.168.1.100:/remoteFile.txt
# where 192.168.1.100 would be the IP of your Mac
I am assuming the reason you were getting permission denied is that you were using the login credentials for your Mac, but unknowingly trying to log in again to your headless machine.
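As an aside on the spaces problem: when you run scp from the Mac instead, the remote path is expanded by the remote shell as well, so it needs a second layer of quoting. A sketch (host and paths are illustrative):
scp 'me@fileserver:/home/me/files/file\ name\ with\ spaces.png' ~/MyDirectory/
# single quotes protect the backslashes from the local shell;
# the remote shell then consumes them, yielding the real file name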

Bash script to ssh to particular server

I'm wondering how I would go about creating my own bash script to ssh to a server. I know it's lazy, but I would ideally want not to have to type out:
ssh username@server
And just have my own two letter command instead (i.e. no file extension, and executable from any directory).
Any help would be much appreciated. If it helps with specifying file paths etc, I am using Mac OS X.
You can set configs for ssh in the file ~/.ssh/config:
Host dev
HostName mydom.example.com
User myname
Then, just type
$> ssh dev
And you're done. Also, you can add your public key to the ~/.ssh/authorized_keys file on the server so you won't get prompted for your password every time you want to connect via ssh.
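If ssh-copy-id is available (it ships with OpenSSH on most systems), it does the key installation in one step. A sketch, reusing the host from the example above:
ssh-keygen -t ed25519                 # only if you have no key pair yet
ssh-copy-id myname@mydom.example.com  # appends your public key to the server's ~/.ssh/authorized_keys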
Use an alias.
For example: alias sv='ssh user@hostname', then you can simply type sv.
Be sure to put a copy of the aliases in your profile, otherwise they will disappear at the end of your session.
You could create an alias like this:
alias ss="ssh username@server" and write it into your .bash_profile. ".bash_profile" is a hidden file located in your home directory. If .bash_profile doesn't exist yet (check by typing ls -a in your home directory), you can create it yourself.
The .bash_profile file is read and executed every time you open a new login shell.
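A minimal sketch of the whole setup (alias name and host are placeholders):
echo 'alias ss="ssh username@server"' >> ~/.bash_profile
source ~/.bash_profile  # reload so the alias works in the current session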
You can use ssh-argv0 to avoid typing ssh.
To do this, you create a link to ssh-argv0 whose name is the host you want to connect to, including the user if needed. When you execute that link, ssh connects you to the host named by the link.
Example
Setup the link:
ln -s /usr/bin/ssh-argv0 ~/bin/my-server
/usr/bin/ssh-argv0 is the path of ssh-argv0 on my system, yours could be different, check with which ssh-argv0
I have put it in ~/bin/ to be able to execute it from any directory (in OS X you may need to add ~/bin/ manually to your PATH in .bash_profile)
my-server is the name of my server; if you need to set the user, the link name would be user@my-server instead
Execute it:
my-server
Even more
You can also combine this with mogeb's answer to configure your server connection, so that you can call it with a shorter name and avoid including the user even if it is different from the one on the local system.
Host serv
HostName my-server
User my-user
Port 22
then set a link to ssh-argv0 with the name serv, and connect to it with
serv
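For completeness, a sketch of that last step (assuming ~/bin is on your PATH as above):
ln -s /usr/bin/ssh-argv0 ~/bin/serv  # link name matches the Host entry above
serv                                 # connects as my-user@my-server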

Injecting bash prompt to remote host via ssh

I have a fancy prompt working well on my local machine. However, I'm logging in to multiple machines, on different accounts, via ssh. I would love to have my prompt synchronized everywhere by the ssh command itself.
Any idea how to get that? In many cases I'm accessing machines using the root account and I can't permanently change any settings there. I just want the prompt synchronized.
In principle this is just setting the variable PS1.
Try this:
ssh -l root host -t "bash --rcfile /path/to/special/bashrc"
where /path/to/special/bashrc could be, for example, /tmp/myrc
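To actually carry your local prompt over, one sketch is to push a file with your prompt settings first and then point --rcfile at it (~/.myprompt is a placeholder for wherever your PS1 definition lives):
scp ~/.myprompt root@host:/tmp/myrc         # copy the prompt definition over
ssh -t root@host 'bash --rcfile /tmp/myrc'  # start bash with it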

How do I connect to my ec2 instance using Cyberduck with privileges?

I try to log in using the ec2-user but for some reason the login fails.
Using the username ubuntu I am able to log in just fine; however, I don't have any privileges and I can't sudo su to get the privileges to write to my files. I tried using the Cyberduck terminal and send-command options, but sudo su doesn't work with them. Cyberduck just spins.
I don't think the ec2-user account works on recent Ubuntu AMIs, which may explain the failed login.
You can approach this in a few ways. The first is to create a new user account specifically for FTP and give it permissions only to the necessary folders. First create the user, then create a public/private key pair for non-interactive login. This will allow you to operate your FTP client like normal.
My preferred solution is to upload the files to the ubuntu home directory and then run a script as root that moves the files to the correct location. You won't have to modify the system configuration this way, but you will have to do the file transfer in two steps.
Create a staging folder in /home/ubuntu and copy the files there. Create a /home/ubuntu/copy.sh script on the server like this:
#!/bin/bash
# run the copy as root; this only works if sudo doesn't prompt for a password
sudo cp -r /home/ubuntu/stage/* /var/www/html/
Then from your dev machine, call the script:
$ ssh -i ~/path/to/key.pem ubuntu@ec2.hostname.com /home/ubuntu/copy.sh
If you want to get really fancy, you could set up a git repository and use a post-receive hook to handle this all for you when you push. No need for an FTP client at all.
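A sketch of that git route (paths and branch are assumptions, and the ubuntu user needs write access to /var/www/html): create a bare repo on the server, e.g. /home/ubuntu/site.git, push to it, and let this hook deploy:
#!/bin/bash
# /home/ubuntu/site.git/hooks/post-receive (make it executable with chmod +x)
GIT_WORK_TREE=/var/www/html git checkout -f master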

How to upload files and folders to AWS EC2 instance?

I use SSH to connect to my Ubuntu instance. With SSH I can administer files and folders on the instance, but how do I upload files and folders from my local machine to the instance?
Is it possible to do this right from an SSH session, without using SFTP clients?
Just to add a bit more detail to the scp command (included in OS X and most Linux/Unix systems):
scp -i myssh.pem local_file username@200.200.200.200:/home/username
Replace the pem file with the one used for ssh access, and replace "username" and "200.200.200.200" with valid values for your setup.
You can try the kitten utility, which is a wrapper around boto3. You can easily upload/download files and run commands on an EC2 server, or on multiple servers at once for that matter.
kitten put -i ~/.ssh/key.pem cat.jpg /tmp [SERVER NAME] [SERVER IP]
where the server name is e.g. ubuntu or ec2-user.
This will upload the cat.jpg file to the /tmp directory of the server.
As mentioned already, I've used WinSCP, which logs me in as "ec2-user" - then make sure to adjust that user's permissions via SSH. Example:
chown -R ec2-user /path/to/files
(Authenticate as the root user first.)
Whatever folders or files you need to edit via WinSCP, grant the user permissions on them (otherwise you will get a permission-denied error when trying to upload/edit files in WinSCP).
You cannot copy files using ssh alone; use scp or sftp instead:
scp if you are on Linux, or WinSCP if you are on Windows.
You can use this:
scp -i yourkeypair.pem source destination
This works fine:
scp -r -i myssh.pem /local/directory remote_username@10.10.0.2:/remote/directory
-r for recursive
You could also install and set up an FTP server, which will allow you to set up users and directories for them to upload to. That being said, I've upvoted the above because scp/sftp is the ideal method.
The easiest way is to install Webmin and use the file manager (Java plugin) from your browser.
# Go to home folder
cd ~
# Download the latest version
wget http://prdownloads.sourceforge.net/webadmin/webmin-1.660-1.noarch.rpm
# Install
sudo rpm -U webmin-1.660-1.noarch.rpm
# Change the default password of the root user
passwd
Finally, open port 10000 in the security groups
Then log in to
https://server_name:10000
with user root and the password you set before.
