NFS issue on AWS Amazon Linux 2

I'm running multiple websites on Amazon AWS. I mounted an EBS volume on the master server; the mount directory holds the websites' files.
I also configured an Application Load Balancer, which launches small instances when there is load on the master. The clone servers run NFS clients to connect to the master server and mount the websites' files.
Everything works, but many times the clone servers cannot mount the NFS export, even when I try to mount manually. I have to run exportfs -f to flush the NFS export table on the master instance.
I do not know why this happens. If you need any further information, just tell me the command to run and I'll post the output.
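For reference, the setup looks roughly like this (the path, CIDR range, and hostname below are placeholders, not my real values):
# /etc/exports on the master
/var/www/sites 10.0.0.0/16(rw,sync,no_subtree_check)
# manual mount attempt from a clone
sudo mount -t nfs master.internal:/var/www/sites /var/www/sites
# what I have to run on the master before mounts work again
sudo exportfs -f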

As I understand it, you are trying to mount the same EBS volume from multiple EC2 instances.
This can be done using the Multi-Attach capability of EBS. However, there are some limitations to this capability (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html). In short, if you have more than 16 instances trying to attach to the EBS volume, you hit the limit.
My suggestion to solve that: use EFS instead. EFS is an elastic file system managed service from AWS. It is really simple to use, can be mounted from multiple Linux instances, and is elastic (so you pay as you grow). Check it here: https://docs.aws.amazon.com/efs/latest/ug/mount-multiple-ec2-instances.html
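Once the EFS file system exists, every instance (including the clones your scaling launches) can mount it the same way. A minimal sketch, assuming amazon-efs-utils is available and using fs-12345678 and /var/www/sites as placeholder values:
sudo yum install -y amazon-efs-utils
sudo mkdir -p /var/www/sites
sudo mount -t efs fs-12345678:/ /var/www/sites
# or add an fstab entry so clones mount it automatically on boot
echo 'fs-12345678:/ /var/www/sites efs _netdev,tls 0 0' | sudo tee -a /etc/fstab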

Related

Mount on-prem volumes into an Azure Container Instance

We are using container instances deployed with an ARM template (docs: https://learn.microsoft.com/en-us/azure/templates/microsoft.containerinstance/containergroups?tabs=json) and want to mount an on-premises volume into this container, as our environment is now both on-prem and in Azure. The on-prem environment is Windows. How can we do this?
Suggestions so far that we have been looking into:
Mount a volume through the ARM template. (Is this even possible with on-prem volumes?)
Run container instances with privileges so we can mount later with commands. (This seems to be possible through Docker Desktop, but is it possible through Container Instances?)
Use SMB-protocol to reach files on-prem
Which of these suggestions is the best one, and which are even possible? Is there another option that is better?
First of all, you need to consider a couple of limitations when mounting a volume to an Azure Container Instance:
You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the overview.
Azure file share volume mounts require the Linux container to run as root.
Azure file share volume mounts are limited to CIFS support.
Unfortunately, there is no way to mount on-premises storage to an Azure Container Instance.
It is possible to mount only the following types of volumes into Azure Container Instances:
Azure Files
emptyDir
GitRepo
secret
You may try to sync your files from on-premises to an Azure Files share in a storage account using Azure File Sync, and then mount that file share to your container instances.
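A file share synced that way can then be mounted at container creation time with the Azure CLI. A sketch; the resource group, storage account, share name, key variable, and mount path below are placeholders:
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image nginx \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name myshare \
  --azure-file-volume-mount-path /mnt/onprem-sync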

jelastic Tomcat 8 access to storage

I am evaluating Jelastic for use with Tomcat 8 and Postgres 9.5.
Does a user have SSH access to the instance that is running these services?
Does Tomcat have access to local storage, or can you attach storage in which Tomcat can create and read files?
Does a user have SSH access to the instance that is running these services?
Yes, a user has SSH access to any instance. The authentication procedure in the Jelastic SSH Gateway is divided into two independent parts:
connection from end user to Gateway (external authentication)
connection from Gateway to users’ container (internal authentication)
Both parts of the authentication procedure are based on a standard SSH protocol, using public/private keypairs.
With Jelastic SSH Gateway, you can easily access:
the whole account where you can navigate across your environments and containers using an interactive menu without extra authentication
separate containers directly while working with them remotely via additional tools (e.g. Capistrano) or using SFTP and FISH protocols.
While accessing containers via SSH, a user receives all required permissions and additionally can manage the main services with sudo commands of the following kind (and others):
sudo /etc/init.d/jetty start
sudo /etc/init.d/mysql stop
sudo /etc/init.d/tomcat restart
sudo /etc/init.d/memcached status
sudo /etc/init.d/mongod reload
sudo /etc/init.d/nginx upgrade
sudo /etc/init.d/httpd help
Using our documentation you’ll find out how to:
generate SSH key
add SSH key
access environments and containers
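As a rough sketch of that flow (the user/node login and the gateway host are placeholders; your dashboard shows the exact connection string for your account and provider):
# generate a keypair locally
ssh-keygen -t rsa -C "you@example.com"
# after adding the public key in the dashboard, connect through the gateway
ssh 12345-678@gate.jelastic.com -p 3022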
Does Tomcat have access to local storage, or can you attach storage in which Tomcat can create and read files?
Jelastic supports both local storage and a dedicated storage container.
The Jelastic Dedicated Storage Container is a special type of node based on the Docker centos7 image. Developed specifically for data storage, it provides a number of benefits:
it is delivered with the corresponding software (i.e. NFS & RPC) pre-installed, so the container can be used as storage immediately after creation, without any additional configuration
compared to other general-purpose Jelastic nodes, the Dedicated Storage Container provides a larger amount of disk space, which allows it to persist comparatively larger data volumes (the particular value depends on your service provider's settings and can vary according to your account type).
Tips on using this container type, and examples of where it can be leveraged best, are covered in the corresponding use case description.
Below we'll consider how to set up such a storage server inside your cloud, with some tips on its management:
Storage container creation
Storage container management
If you don't have root permissions, please contact your hosting provider.
Whether applications you run on Tomcat have access to storage on the running system depends on several things; there are layers of security. Fundamentally, Tomcat has access to whatever the user it runs under has access to. That's true in both Windows and Linux environments.
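A quick way to check this on Linux (a sketch; the tomcat service user and the /mnt/data path are assumptions for illustration):
# which user does the Tomcat process run as?
ps -o user= -p "$(pgrep -f org.apache.catalina | head -n 1)"
# can that user write to the storage location?
sudo -u tomcat touch /mnt/data/write-test && echo writable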

Creating a RethinkDB cluster on Amazon ECS

I am using the official Docker image for RethinkDB. I am trying to use the Amazon EC2 Container Service to create a RethinkDB cluster. I can easily get standalone instances to run, but have had no luck creating a cluster.
I have tried various security group settings. I even made everything wide open, but no luck. When I launch the Docker image, I pass in --bind all and --join [ip]:29015, but nothing happens.
Has anyone got this to work?
The default networking for Docker on Amazon ECS is the docker0 bridge. This means multiple containers on the same EC2 instance can talk to each other through the bridge, but not to containers on other EC2 instances across the ECS cluster.
You could set the networkMode in your task definition to 'host', which should let your containers use the EC2 instance's network directly and use the security groups you have defined. See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#network_mode.
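A minimal sketch of registering such a task definition via the AWS CLI; the image tag, join address, and memory value are placeholders, not tested values:
aws ecs register-task-definition \
  --family rethinkdb \
  --network-mode host \
  --container-definitions '[{
    "name": "rethinkdb",
    "image": "rethinkdb:2.3",
    "command": ["rethinkdb", "--bind", "all", "--join", "10.0.1.5:29015"],
    "memory": 512,
    "essential": true
  }]'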
The alternative is to set up an overlay network using something like flannel, weave, or openvswitch. See https://aws.amazon.com/blogs/apn/architecting-microservices-using-weave-net-and-amazon-ec2-container-service/ for an example using Weave.

Using Virtual Servers for Backups

I have three server machines in-house, and I have hosted the following services on them:
Active Directory
DHCP
DNS
File Server
Web Server
I have access to a virtual server too. I want to ask how wise it would be to use that virtual server for backups, from a disaster recovery point of view.
Yes, it is definitely a good approach to keep your backup data on VMs; however, you have to consider the size of the storage where you keep your backups.
More importantly, you can go for snapshot options for the VMs, and bare-metal recovery for the physical Windows machines.
I hope the answer will be useful. Thanks.

How can I access any files in Amazon EBS through URI?

I have a question about the direct way to access a file in EBS.
If I already have an EC2 instance and an EBS volume holding a file 'a.pdf', can I access 'a.pdf' through a URI, even from outside EC2?
For example, my friend Mike wants to get the 'a.pdf' file at his house using just a web browser or a terminal program, etc. Please tell me what Mike has to do!
Thanks!
If you have an EC2 instance running, you can set up a web server on it, mount the EBS volume, and serve the file via the web server.
AFAIK there is no way to access files on EBS directly; if that is what you need, you should be using S3.
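If you go the S3 route, a time-limited link is one common pattern. A sketch; the bucket name is a placeholder:
# copy the file from the instance to S3, then generate a URL Mike can open in a browser
aws s3 cp a.pdf s3://my-example-bucket/a.pdf
aws s3 presign s3://my-example-bucket/a.pdf --expires-in 86400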
