We are using container instances deployed with an ARM template (docs: https://learn.microsoft.com/en-us/azure/templates/microsoft.containerinstance/containergroups?tabs=json) and want to mount an on-premises volume into this container, as our environment is now both on-prem and in Azure. The on-prem environment is Windows. How can we do this?
Suggestions so far that we have been looking into:
Mount a volume through the ARM template. (Is this even possible with on-prem volumes?)
Run container instances with privileges so we can mount later with commands. (This seems to be possible through Docker Desktop, but is it possible through Container Instances?)
Use the SMB protocol to reach files on-prem.
Which of these suggestions is possible, and which is the best? And is there another option that is better?
First of all, you need to consider a couple of limitations when mounting a volume to an Azure Container Instance:
You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the overview.
Azure file share volume mounts require the Linux container to run as root.
Azure File share volume mounts are limited to CIFS support.
Unfortunately, there is no way to mount on-premises storage to an Azure Container Instance.
Only the following types of volumes can be mounted into Azure Container Instances:
Azure Files
emptyDir
GitRepo
secret
You may try to sync your files from on-premises to an Azure Storage Account file share using Azure File Sync and mount that file share to Container Instances.
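For illustration, here is a minimal sketch of mounting an Azure Files share into a Linux container instance with the Azure CLI; the resource group, storage account, share, and image names are placeholders, and the same volume can equivalently be declared in the ARM template's volumes and volumeMounts sections:

# Look up a storage account key for the existing file share
STORAGE_KEY=$(az storage account keys list --resource-group myResourceGroup \
  --account-name mystorageaccount --query "[0].value" --output tsv)

# Create a Linux container instance with the share mounted at /mnt/onprem-sync
az container create --resource-group myResourceGroup --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name myshare \
  --azure-file-volume-mount-path /mnt/onprem-sync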
I have an Azure CI pipeline that deploys a .NET Core API to a Linux Docker image and pushes it to our Azure Container Registry. The files are deployed to /var/lib/mycompany/app using docker-compose and a Dockerfile. This is then used as an image for an App Service which provides our API. The app starts fine and works, but if I go to Advanced Tools in the App Service and run a bash session, I can see all the log files generated by Docker, but I can't see any of the files I deployed in the locations I deployed them to. Why is this, and where can I find them? Is it an additional volume somewhere, a symbolic link, a layer in Docker I need to access by some mechanism, a host of some sort, or black magic?
Apologies for my ignorance.
All the best,
Stu.
Opening a bash session using the Advanced Tools opens the session in the underlying VM running your container. If you want to reach the container itself, you need to install an SSH server in it and use the SSH tab in the Advanced Tools, or the Azure CLI (see the usage sketch after the links below):
az webapp create-remote-connection --subscription <subscription-id> --resource-group <resource-group-name> -n <app-name> &
How to configure your container
How to open an SSH session
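As a rough usage sketch, assuming the container already runs an SSH server configured as described in the linked documentation (listening on port 2222 with the documented root credentials): the CLI command opens a local tunnel and prints a local port that you can then connect to.

# Open the tunnel in the background and note the local port printed in the output
az webapp create-remote-connection --subscription <subscription-id> \
  --resource-group <resource-group-name> -n <app-name> &

# Connect through the tunnel, replacing <local-port> with the printed port
ssh root@127.0.0.1 -p <local-port>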
We have a use case where we want the ability to create a shareable drive that links our EC2 Windows instance with a storage service (S3 or any other service), such that our users upload their PDF files to that storage and the files are accessible by our Windows EC2 instance, on which a program processes the PDF files. Is there a way we can achieve this in AWS?
Since your Windows software requires a 'local drive' to detect input files, you could mount an Amazon S3 bucket using utilities such as:
Cloudberry Drive
TntDrive
Mountain Duck
ExpanDrive
Your web application would still be responsible for authenticating users and uploading objects directly to Amazon S3 using presigned URLs (see Uploading objects using presigned URLs - Amazon Simple Storage Service). Your app would also need to determine how to handle the 'output' files so that users can access their converted files.
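As an illustrative sketch (the bucket and object names are placeholders), the AWS CLI can generate a presigned download URL so a user can fetch a processed file; presigned upload (PUT) URLs are normally generated server-side with one of the AWS SDKs:

# Generate a presigned URL, valid for one hour, for downloading a processed file
aws s3 presign s3://my-pdf-bucket/output/converted-file.pdf --expires-in 3600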
I have a local Kubernetes Cluster running under Docker Desktop on Mac. I am running another docker-related process locally on my machine (a local insecure registry). I am interested in getting a process inside the local cluster to push/pull images from the local docker registry.
How can I expose the local registry to be reachable from a pod inside the local Kubernetes cluster?
A way to do this would be to have both the Docker Desktop Cluster and the docker registry use the same docker network. Adding the registry to an existing network is easy.
How does one add the Docker Desktop Cluster to the network?
As I mentioned in the comments,
I think what you're looking for is mentioned in the documentation here. You would have to add your local insecure registry as an insecure-registries value in Docker Desktop. Then, after a restart, you should be able to use it.
Deploy a plain HTTP registry
This procedure configures Docker to entirely disregard security for your registry. This is very insecure and is not recommended. It exposes your registry to trivial man-in-the-middle (MITM) attacks. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment.
Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows Server. If you use Docker Desktop for Mac or Docker Desktop for Windows, click the Docker icon, choose Preferences (Mac) or Settings (Windows), and choose Docker Engine.
If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:
{
"insecure-registries" : ["myregistrydomain.com:5000"]
}
I also found a tutorial for this on Medium using macOS. Take a look here.
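As a usage sketch (the registry address matches the placeholder in the daemon.json example above, and the image name is arbitrary), after restarting Docker you can verify the setting and push an image to the insecure registry:

# Verify the setting was picked up after the restart
docker info | grep -A 2 "Insecure Registries"

# Tag a local image for the insecure registry and push it
docker tag my-app:latest myregistrydomain.com:5000/my-app:latest
docker push myregistrydomain.com:5000/my-app:latest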
Is running the registry inside the Kubernetes cluster an option?
That way you can use a NodePort service and push images to an address like "localhost:9000/myrepo".
This is significant because Docker allows insecure (non-SSL) connections to localhost.
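A minimal sketch of that approach, assuming a plain registry:2 container with no persistence or authentication; note that NodePorts are assigned from the 30000-32767 range by default, so the 9000 in the address above is just an example:

# Run the registry inside the cluster and expose it via a NodePort service
kubectl create deployment registry --image=registry:2
kubectl expose deployment registry --type=NodePort --port=5000

# Find the assigned node port; with Docker Desktop the service is reachable on localhost
kubectl get service registry

# Push over plain HTTP, which Docker allows because the address is localhost
docker tag my-app:latest localhost:<node-port>/my-app:latest
docker push localhost:<node-port>/my-app:latest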
I'm running multiple websites on Amazon AWS. I mounted an EBS volume on the master server; the mount directory holds the websites' files.
I also configured an Application Load Balancer, which launches small instances when there is load on the master. The clone servers run NFS clients to connect to the master server and mount the websites' files.
Everything works perfectly, but the issue is that many times the clone servers cannot mount the NFS share, even when I try to mount manually. I have to run exportfs -f to flush the NFS table on the master instance.
I do not know why this happens. If you need any further information, just tell me which command to run.
As I understand it, you are trying to mount the EBS volume from multiple EC2 instances.
This can be done using the Multi-Attach capability of EBS. However, there are some limitations to this capability (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html). So, in short, if you have more than 16 instances trying to attach this EBS volume, you hit the limit.
My suggestion to solve this: use EFS instead. EFS is an elastic file system managed as a service by AWS. It is really simple to use, can be mounted from multiple Linux instances, and is elastic (so you pay as you grow). Check it here: https://docs.aws.amazon.com/efs/latest/ug/mount-multiple-ec2-instances.html
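For illustration, a rough sketch of mounting the same EFS file system on each instance; the file system ID, region, and mount point are placeholders, and the instances' security groups must allow NFS traffic on port 2049:

# Create a mount point and mount the EFS file system over NFSv4.1
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs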
I am evaluating Jelastic for use with Tomcat 8 and Postgres 9.5.
Does a user have ssh access to the instance that is running these services?
Does Tomcat have access to local storage, or can you attach storage to which Tomcat can write and read files?
Does a user have ssh access to the instance that is running these services?
Yes, a user has SSH access to any instance. The authentication procedure in the Jelastic SSH Gateway is divided into two independent parts:
connection from the end user to the Gateway (external authentication)
connection from the Gateway to the user's container (internal authentication)
Both parts of the authentication procedure are based on a standard SSH protocol, using public/private keypairs.
With Jelastic SSH Gateway, you can easily access:
the whole account where you can navigate across your environments and containers using an interactive menu without extra authentication
separate containers directly while working with them remotely via additional tools (e.g. Capistrano) or using SFTP and FISH protocols.
While accessing containers via SSH, a user receives all required permissions and additionally can manage the main services with sudo commands of the following kind (and others):
sudo /etc/init.d/jetty start
sudo /etc/init.d/mysql stop
sudo /etc/init.d/tomcat restart
sudo /etc/init.d/memcached status
sudo /etc/init.d/mongod reload
sudo /etc/init.d/nginx upgrade
sudo /etc/init.d/httpd help
Using our documentation (and the short connection sketch after this list), you'll find out how to:
generate SSH key
add SSH key
access environments and containers
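A rough sketch of the connection flow, assuming the key has already been added through the dashboard; the gateway address follows the pattern used in the Jelastic docs, and the actual host depends on your hosting provider:

# Generate a keypair locally; the public key is then added in the Jelastic dashboard
ssh-keygen -t rsa -b 4096 -C "you@example.com"

# Connect to the SSH Gateway with your Jelastic user ID; an interactive menu
# lets you choose the environment and container to enter
ssh <your-uid>@gate.<hoster-domain> -p 3022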
Does Tomcat have access to local storage, or can you attach storage to which Tomcat can write and read files?
Jelastic supports both local storage and the Dedicated Storage Container.
The Jelastic Dedicated Storage Container is a special type of node, based on the Docker centos7 image. Being developed specifically for data storage, it provides a number of benefits:
it is delivered with the corresponding software (i.e. NFS & RPC) already pre-installed, so such a container can be used as storage immediately after creation, without any additional configuration required
compared to other common-purpose Jelastic nodes, the Dedicated Storage Container provides a larger amount of disk space, which allows you to persist comparatively bigger data volumes (the particular value depends on your service provider's settings and can vary according to your account type).
Some tips on using this container type, and examples of where it can be leveraged best, are covered in the corresponding use case description.
Below we'll consider how to set up such a storage server inside your cloud, along with some tips on its management:
Storage container creation
Storage container management
If you don't have root permissions, please contact your hosting provider.
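Since the storage container ships with NFS pre-installed, a rough sketch of mounting it manually from another Linux node is shown below; the container IP, the /data export path, and the mount point are assumptions, so check the storage node's /etc/exports for the real export:

# On the client node (requires root), mount the storage container's NFS export
sudo mkdir -p /mnt/data
sudo mount -t nfs <storage-container-ip>:/data /mnt/data   # /data is an assumed export path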
Whether applications you run on Tomcat have access to storage on the running system depends on several things; there are layers of security. Tomcat has access to exactly what the user you run it under has access to. That's true in both Windows and Linux environments. A running service keeps the permissions of the account it was started under, regardless of who logs in.
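As a quick illustrative check on Linux (the 'tomcat' service account name and the path are assumptions for your setup), you can see which account Tomcat runs under and test whether that account can write to a directory:

# Show the user the Tomcat process runs as (assumes the standard Bootstrap main class)
ps -o user= -p "$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n 1)"

# Test whether that user (here assumed to be 'tomcat') can create a file in a target directory
sudo -u tomcat touch /var/lib/myapp/test-write && echo writable || echo "not writable"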