I'm using Shiny Server on an Amazon EC2 VM (4 CPUs, 8 GB of RAM, running Ubuntu 16.04 LTS).
The problem I have is the following:
When I run my Shiny app locally everything is fine, and I don't get any errors loading my 250 MB .rda file. However, when I run it on the Amazon instance, the app just disconnects after I hit the button that triggers a prediction using the model stored in the .rda file.
I tried a dummy model stored in a small (25 MB) .rda file and the app worked correctly. So my question is: does the free version of Shiny Server have a limitation on file size? And if not, is there any workaround for this problem?
EDIT: I tried the app on a VM with the same specs but running Ubuntu 14.04, and I still have the same problem.
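To narrow it down further, this is the kind of check I can run on the instance while reproducing the disconnect (a minimal sketch, assuming the default Shiny Server log locations and that the problem is memory-related, since a 250 MB .rda usually expands to considerably more once loaded into R):

# Watch memory while the prediction runs:
free -h

# Check the logs Shiny Server writes when an R process dies
# (paths assume the default install; yours may differ):
sudo tail -n 50 /var/log/shiny-server/*.log
sudo tail -n 50 /var/log/shiny-server.log

# If the kernel OOM killer terminated the R process, it shows up here:
dmesg | grep -i "out of memory"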
I am completely new to OpenStack and I am having a problem that Googling didn't help me solve. I am doing a project on implementing a private cloud using OpenStack.
I have deployed multi-node OpenStack (Xena) on CentOS 8 Stream (one controller and two compute nodes).
Windows VMs work normally, but when I installed an Ubuntu 18.04 VM it runs slowly, and the graphics in particular are very slow. Any solutions? Thank you.
I have been developing and maintaining some Windows-on-Windows Docker containers that run ASP.NET Core applications on Windows Server 2016 (using Docker EE) for some time now. I was planning on turning all ongoing updates/maintenance over to the server administrators, but I have hit a problem. When I started I believe I was using SAC builds, but now none of the SAC (or LTS, for that matter) builds pull on Windows Server 2016, and though I have spent a good deal of time googling, this whole thing seems to be a big cluster. With Docker on Linux, I would just use any LTS distro and apply updates when building the container. Does Microsoft have a clear plan for doing the same? It seems like they are missing the point of Docker. I want to run a Windows-on-Windows container on Windows Server 2016, and I want to make sure that when I recreate it I am getting the latest security updates.
https://devblogs.microsoft.com/dotnet/net-core-container-images-now-published-to-microsoft-container-registry/
This page talks about the big changes recently made to the Docker images and specifically says the following:
.NET Core images for Nano Server 2016 are still available on Docker Hub and MCR and will not be deleted. You can continue to use them but they are not supported and will not get new updates. If you need to do this and previously used manifest tags, like 1.1-sdk, you can now use the following MCR tags (Docker Hub variants are similar)
Does this mean the new tags listed do get updates? I would have assumed they would tag them with LTS instead of SAC2016 to better convey that they are continuing to update them.
This page seems to be really helpful, but none of the images listed pull on Windows Server 2016:
https://andrewlock.net/exploring-the-net-core-mcr-docker-files-runtime-vs-aspnet-vs-sdk/
This is what I get when I attempt to pull any of the images:
1709: Pulling from windows/nanoserver
no matching manifest for unknown in the manifest list entries
To clarify, I can currently run all my applications using images such as these:
mcr.microsoft.com/dotnet/core/runtime 2.2-nanoserver-sac2016 4a3bbafea836 3 months ago 1.27GB
mcr.microsoft.com/dotnet/core/sdk 2.2-nanoserver-sac2016 9773d80bdd64 3 months ago 2.62GB
I am looking for clarity on the support status of these images, or clearer direction on how to migrate.
Right now, for LTS, the image you want to pull is mcr.microsoft.com/dotnet/core/aspnet:2.1, since 2.1 is the LTS release of ASP.NET Core. The underlying server reference doesn't matter, honestly, and all the .NET Core images are multi-arch, so the right underlying image is pulled automatically (Linux for a Linux host, Windows for a Windows host, and AMD64, x86, ARM, etc.).
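For example (a minimal sketch; docker manifest inspect may require enabling experimental CLI features on older Docker versions):

# Pull the multi-arch LTS tag; Docker resolves the right OS/architecture variant for the host:
docker pull mcr.microsoft.com/dotnet/core/aspnet:2.1

# Optionally list the per-OS/per-architecture entries behind that tag:
docker manifest inspect mcr.microsoft.com/dotnet/core/aspnet:2.1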
The OS of the image (aside from being the right architecture and platform) is really kind of meaningless. It's mostly a translation layer. Images aren't VMs; the OS is on the host, and that's where your security patches and such apply. As long as your host is patched up, you're good.
UPDATE
This has apparently led to some pedantic arguments in the comments, so let me be a little clearer. What I'm talking about here is best described by this graphic from the Docker site:
Whereas a VM has a copy of the OS on each instance, containers utilize a shared host OS. The OS base image is basically a proxy. It provides the API, but everything at an OS-level happens on the host OS, not in the container.
As such, yes, the OS base image matters to a certain extent. You can't target a Linux base image and deploy to Windows Server. You'd have issues targeting Windows Server 2019 and deploying to 2016, as well. However, assuming that the OS base image is remotely compatible with the host OS, then everything above and beyond that is meaningless.
Specifically to the discussion of patches and LTS versions, you don't need to care, because again, what's actually running is components of the host OS, not anything from the image itself. You can actually see this if you open Task Manager on the host OS. You'll see duplicate system-level processes tied to each running container. Even though the container shows running processes as well, it is these host-level processes that are actually doing the work, and therefore, it is only important that they are patched and supported. If everything is good on your host, you need not worry about the containers, at least for the OS part of things.
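If you want to verify how a particular container is actually being run, a minimal sketch (the container name is a placeholder; isolation defaults and --isolation support vary by Docker version and host OS):

# Show whether an existing container uses process or Hyper-V isolation:
docker inspect --format "{{.HostConfig.Isolation}}" mycontainer

# Or be explicit when starting one (process isolation requires an image build
# compatible with the host, e.g. ltsc2016 on a Server 2016 host):
docker run -d --isolation=process mcr.microsoft.com/windows/servercore:ltsc2016 ping -t localhost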
https://github.com/docker/for-win/issues/3761
I was working on this around Mar 12, when all the Docker pulls stopped working because of the changes MS made. So I am sure I saw this page before, but on rereading the entire thing, I see this comment:
docker pull mcr.microsoft.com/windows/servercore:ltsc2016
That seems like a reasonable tag name for long-term support. Lo and behold, it works. I am currently theorizing that nanoserver is only for the latest and greatest, and am thinking of opening an issue on GitHub to see if someone will answer that definitively.
I think one of the comments on that page from the GitHub maintainer settles the debate in Chris Pratt's answer. Misinformation floating around about security is dangerous, so I am reposting it here to help future souls who stumble on this question:
Yes, when running with process-isolation, the version must match the Windows kernel version you're running on. Unlike Linux, the Windows kernel does not have a stable API, so container images running on Windows must have libraries that match the kernel on which they will be running to make it work (which is also why those images are a lot bigger than Linux images).
Vulnerable libraries in a Docker container DO matter. You cannot rely on the host OS being up to date to protect you.
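To see that version matching in practice, a minimal sketch (build 14393 corresponds to Server 2016; docker manifest inspect may need experimental CLI features on older Docker versions):

# On the Windows Server 2016 host, check the OS build number (14393 for Server 2016):
cmd /c ver

# Inspect the os.version each entry of a tag's manifest list was built against:
docker manifest inspect mcr.microsoft.com/windows/servercore:ltsc2016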
Further Research
Still researching this, so adding my updates for your benefit as I find them:
Article about Migrating
https://www.altaro.com/hyper-v/nano-server-no-longer-supported-for-infrastructure/
TLDR - Move to servercore
Server 2016 14393 Tags on Docker Hub
The main Docker Hub Nano Server page does not list any 14393 tags, but visiting the full tags list linked at the bottom of the page shows many. I was able to pull mcr.microsoft.com/windows/nanoserver:10.0.14393.1066, and it is only 1 GB instead of the 14 GB for Server Core.
What is the difference between a Docker OS image with a web server installed on it and a Docker web server image?
For example, an Ubuntu 16.04 Docker image running as a container with NGINX installed, versus another container running NGINX from the official NGINX Docker image?
Which will perform better and be more stable?
Usually the official nginx container runs on Alpine, a very lightweight OS, while on the other hand you would have Ubuntu plus NGINX.
So, the difference? The OS.
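You can see the practical difference just by comparing image sizes (a minimal sketch; exact sizes vary by tag and over time):

# Pull the official Alpine-based nginx image and a plain Ubuntu base:
docker pull nginx:alpine
docker pull ubuntu:16.04

# Compare sizes: nginx:alpine is typically tens of MB, while ubuntu:16.04 alone
# is larger before you even install NGINX on top of it:
docker images nginx
docker images ubuntu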
If you have good Docker/Unix/shell-scripting skills, a continuous-integration (CI) system, and the willingness to do ongoing maintenance, you might prefer building your own images. You will be in control of the exact version of the software used, and any build options or extensions required, and you will control when it gets security patches. But, this is a harder path to get started with, and if you don't periodically update your custom images they'll never get any sort of bug fixes or security patches at all.
If you're new to this space, you might prefer standard Docker Hub images. They're pre-packaged, usually have "enough" customization options, and are generally fairly good quality. But, if you need some extra customization, you might wind up needing to build a custom image anyways. I've also run into a situation where I've pinned an image to a specific upstream version image:1.2.3, and noticed several months later that image:1.2.7 is out, and the six-month-old Docker Hub image hasn't gotten a critical security fix because it's not getting built any more.
If none of this especially concerns you (and if you don't have a DevOps team at your disposal), I'd suggest just using the prebuilt nginx image and focusing on building and deploying your actual application.
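As a sketch of what the prebuilt route can look like in practice, you start from a pinned upstream nginx tag and layer only your own config and content on top (the tag, config file, and paths here are placeholders):

# Write a minimal Dockerfile based on the official image:
cat > Dockerfile <<'EOF'
FROM nginx:1.17
COPY nginx.conf /etc/nginx/nginx.conf
COPY dist/ /usr/share/nginx/html/
EOF

# Build and run it:
docker build -t my-nginx .
docker run -d -p 80:80 my-nginx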
I am trying to set up YouTrack and TeamCity on a Windows VM with less than 1 GB of RAM. There will be very low usage (both users and requests). This is a POC environment; if it works, I may push it onto an extra-small or small Azure or Amazon VM instance.
Has anyone gotten this to work?
PS: I understand that this is way below JetBrains' recommended settings.
I have a running YouTrack instance with only 256 MB of RAM allocated (never tried a smaller value), on an old server with only 1 GB of RAM, under Debian. It feels pretty responsive, but I'm the only user so far :D
If you're using Windows XP, it might work OK, provided TeamCity can run with only 256 MB of RAM.
Is there a specific need for TeamCity, or do you need it only for integrating YouTrack with Git/Mercurial/SVN?
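One way to cap the heap on a small VM, as a sketch only, assuming the old standalone JAR distribution of YouTrack (the file name, port, and JVM options are placeholders and differ across versions and the Windows service installer):

# Start YouTrack with a constrained JVM heap:
java -Xmx256m -XX:MaxPermSize=128m -jar youtrack.jar 8080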
I tried installing the WARs under Tomcat and could not get TeamCity to play nice with Tomcat 7. I ended up using the out-of-the-box installers provided by JetBrains and everything worked fine.
I resolved the same problem in the following way:
1. Installed the application on a VM with more than 1 GB of memory.
2. Configured my application.
3. Reduced the size (memory) of the VM to the 700 MB available.
The application in question was JetBrains YouTrack 6.0 with 250 issues and 3 users. It failed to install from the MSI package on a VM with 700 MB of memory, but after following the steps above it works fine.
Amazon has released EC2 Cluster GPU Instances, and I wonder what your experience with them is. Are they stable, and does it take a lot of time to install new drivers, the SDK, etc. before you can deploy your CUDA code?
I haven't yet deployed a GPU instance, but I can tell you that the OS image already has the drivers set up for you.
Now, in terms of installing CUDA and getting your code running, that's another story. If you haven't tried EC2 at all, I can tell you that on a normal instance I can install gcc/g++ and svn, set up a repository, and have my code running in 5-10 minutes.
EDIT: I was looking through the documentation and found this: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Cluster_GPUs_Install_Driver.html#d0e18924 which talks about reinstalling or updating the NVIDIA drivers.
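Once the instance is up, a quick sanity check that the driver and toolkit are usable, as a minimal sketch (the SDK sample paths vary by CUDA version and where you installed it):

# Confirm the NVIDIA driver sees the GPU(s):
nvidia-smi

# Confirm the CUDA compiler is on the PATH:
nvcc --version

# Build and run the deviceQuery sample as a smoke test:
cd ~/NVIDIA_GPU_Computing_SDK/C/src/deviceQuery
make
../../bin/linux/release/deviceQuery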