I'm a newbie with Rclone, so here is my question:
I have a Raspberry Pi with Seafile working fine. I would like to install Rclone there to back up the contents of Seafile to Azure Blob Storage. I saw in the documentation that Rclone supports both Seafile and Azure Blob, but I have no idea whether it is possible to configure both remotes and use them together in this scenario.
Can you advise?
Thanks
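In case it helps: Rclone remotes are configured independently, so it should be possible to define a Seafile remote and an Azure Blob remote side by side and sync between them. A minimal sketch, assuming remotes named seafile: and azureblob: have already been created with rclone config, and using hypothetical library and container names:
rclone sync seafile:MyLibrary azureblob:seafile-backup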
Related
My application is hosted on Amazon Elastic Compute Cloud by the developer. I need to retrieve the source code for my web application. I am a new user, so I need to know how I can download the source code to my local machine.
You need to log into the instance using SSH. If you're familiar with SSH, then you can use SCP from your local machine.
If you're not familiar, you can use Systems Manager and transfer the data to S3, then download it from there:
https://aws.amazon.com/premiumsupport/knowledge-center/systems-manager-ssh-vpc-resources/
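For the SCP route, a minimal sketch (the key file, login user, and paths are placeholders; the actual user depends on your AMI):
scp -i my-key.pem -r ec2-user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:/var/www/my-app ./my-app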
We are using container instances deployed with an ARM template (docs: https://learn.microsoft.com/en-us/azure/templates/microsoft.containerinstance/containergroups?tabs=json) and want to mount an on-premises volume into a container, as our environment now spans both on-prem and Azure. The on-prem environment is Windows. How can we do this?
Suggestions so far that we have been looking into:
Mount a volume through the ARM template. (Is this even possible with on-prem volumes?)
Run container instances with privileges so we can mount later with commands. (This seems to be possible through Docker Desktop, but is it possible through container instances?)
Use SMB-protocol to reach files on-prem
Which of these suggestions is the best and actually possible? And is there another option that would work better?
First of all, you need to consider a couple of limitations when mounting a volume to an Azure Container Instance:
You can only mount Azure Files shares to Linux containers. You can read more about the differences in feature support for Linux and Windows container groups in the overview.
An Azure file share volume mount requires the Linux container to run as root.
Azure file share volume mounts are limited to CIFS support.
Unfortunately, there is no way to mount on-premises storage to an Azure Container Instance.
It is only possible to mount the following types of volumes into Azure Container Instances:
Azure Files
emptyDir
GitRepo
secret
You may try syncing your files from on-premises to an Azure storage account file share using Azure File Sync, and then mount that file share into your container instances.
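As a rough sketch of the mount step using the Azure CLI (all resource names and the key are placeholders; the same volume can also be declared in the volumes section of your ARM template):
az container create \
  --resource-group myResourceGroup \
  --name mycontainergroup \
  --image myregistry.azurecr.io/myapp:latest \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key <storage-account-key> \
  --azure-file-volume-share-name myfileshare \
  --azure-file-volume-mount-path /mnt/data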
I have an Azure CI pipeline that deploys a .NET Core API to a Linux Docker image and pushes it to our Azure Container Registry. The files are deployed to /var/lib/mycompany/app using docker-compose and a Dockerfile. This image is then used for an App Service which provides our API. The app starts fine and works, but if I go to Advanced Tools in the App Service and run a bash session, I can see all the log files generated by Docker, yet I can't see any of the files I deployed in the locations I deployed them to. Why is this, and where can I find them? Is it an additional volume somewhere, a symbolic link, a Docker layer I need to access by some mechanism, a host of some sort, or black magic?
Apologies for my ignorance.
All the best,
Stu.
Opening a bash session using the Advanced Tools opens the session on the underlying VM running your container, not inside the container itself. If you want to reach your container, you need to install an SSH server in it and use the SSH tab in the Advanced Tools, or the Azure CLI:
az webapp create-remote-connection --subscription <subscription-id> --resource-group <resource-group-name> -n <app-name> &
How to configure your container
How to open an SSH session
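As a rough sketch, assuming your container already runs an SSH server as described in the first link: the command above opens a tunnel and prints a local port, which you can then connect to with:
ssh root@127.0.0.1 -p <local-port>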
Today is my second day of trying to use Amazon, and I have started pulling my hair out. I want to set up FTP with Amazon. I have signed up with them and created an instance on Amazon EC2. I have downloaded the key and am able to log in with SSH through Terminal on my Mac. I can create files on the instance through the terminal.
The instance is something like following:
Public DNS: ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com
I have created an index.html file at this location via the terminal, but I am not able to view it in the browser using the following URL:
ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com/index.html
I just want to create web services here which I will be using from an iPhone app.
Also, I am not sure how to go forward. How will I get to my local files and upload them to the server? With other FTP servers I could do it using lcd, get, put, etc., but those commands are not working here. Can someone please tell me how I should go ahead? At this moment I am just banging my head against the wall.
Thanks
Pankaj
Use scp to copy files over ssh:
scp -i key-pair-file file-to-upload ec2-user@instance-public-DNS:
Notice the colon at the end!
With plain EC2 instances, you also need to install some sort of Web server software to power your Web service, and open the HTTP port in the firewall.
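As a rough sketch on an Amazon Linux instance (the security group ID is a placeholder for the one attached to your instance):
# install and start the Apache web server
sudo yum install -y httpd
sudo service httpd start
# open port 80 in the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0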
Just in case you plan to write your Web service in Java, I have put together a series of articles (Part I, Part II, Part III) guiding through the basics of installing Apache Tomcat on an Amazon Linux EC2 instance.
EDIT 2014/11/20
Dmitry Leskov's answer is actually the better one. You should use his approach.
Answer from 2012
You first have to set up a LAMP (Linux, Apache, MySQL, PHP) stack on your EC2 instance to run any kind of web service.
This means you have to go through the following steps:
Create an EC2 instance
Set up EBS storage for MySQL data
Install MySQL
Configure MySQL
Install Apache
Configure Apache
Install PHP
Configure PHP
If you need detailed instructions, I'd recommend taking a look at this: Building EC2 with LAMP.
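As a very rough sketch of the install steps on Amazon Linux (package names differ on other distributions, and the configuration steps are omitted):
sudo yum install -y httpd mysql-server php php-mysql
sudo service mysqld start
sudo service httpd start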
To transfer files to your EC2 instance you can use any FTP client that supports SFTP and key pairs (you can also enable PasswordAuthentication for SSH to log in with credentials). I'm using Transmit with no problems.
On a related note, I encountered a strange problem where I could not FTP from a PHP script running under Apache, but I could if I ran the PHP script as root from the command line. After a day of googling, I found this, which solved the problem:
Disable SELinux (Security-Enhanced Linux).
The temporary solution is:
echo 0 >/selinux/enforce
...which will prove the concept, but will not survive a reboot. There are plenty of resources out there that describe how to permanently disable SELinux; the usual approach is sketched below.
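For reference, the usual permanent approach is to change the mode in /etc/selinux/config so the setting survives a reboot:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config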
I have a question about accessing a file on EBS directly.
If I already have an EC2 instance with an EBS volume attached, and the volume holds a file 'a.pdf', can I access 'a.pdf' through a URI, even from outside EC2?
For example, my friend Mike wants to get the 'a.pdf' file at his house using just a web browser or a terminal program. Please tell me what Mike has to do!
Thanks!
If you have an EC2 instance running, you can set up a web server on it, mount the EBS volume, and access the file via the web server.
AFAIK there is no way to access files on an EBS volume directly; if that is what you need, you should be using S3 instead.
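A minimal sketch of the S3 route with the AWS CLI (the bucket name is a placeholder):
# upload the file to a bucket
aws s3 cp a.pdf s3://my-example-bucket/a.pdf
# generate a time-limited URL that Mike can open in any browser
aws s3 presign s3://my-example-bucket/a.pdf --expires-in 3600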