Prevent App Service on Linux from recycling if the Azure Storage mount fails - azure-blob-storage

We have a small Node.js site hosted on App Service on Linux that serves static content (plus some middleware actions) from an Azure Storage blob mount, which lets us push new content onto the blob without recycling the whole site.
For some reason, probably network-related, the Azure Storage mount is sometimes detected as faulty, and when the connection is restored it triggers the whole App Service to recycle, which blocks serving any content in the meantime and prevents us from showing a friendlier waiting message.
2019-11-30 15:33:16.726 ERROR - Azure storage volume kiteblob for site kite faulted.
2019-11-30 15:34:01.615 INFO - Azure storage volume kiteblob for site kite became healthy. Recycling site.
Is there any way to prevent this behavior without hardening the whole service? It's a small website serving a small number of pages/users for internal purposes.

Related

Need a way to mount or share S3 or any other AWS storage service in a Windows EC2 instance

We have a use case where we want the ability to create a shareable drive linking our EC2 Windows instance with a storage service (S3 or any other), so that our users can upload their PDF files to that storage and the files will be accessible to our Windows EC2 instance, where we have a program that processes the PDF files. So is there a way we can achieve this in AWS?
Since your Windows software requires a 'local drive' to detect input files, you could mount an Amazon S3 bucket using utilities such as:
Cloudberry Drive
TntDrive
Mountain Duck
ExpanDrive
Your web application would still be responsible for authenticating users and for uploading objects directly to Amazon S3 using presigned URLs (see "Uploading objects using presigned URLs" in the Amazon Simple Storage Service documentation). Your app would also need to determine how to handle the 'output' files so that users can access their converted files.
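As a rough sketch of the presigned-URL step (the bucket and key names below are made up for illustration, and boto3 is assumed), the web app could generate an upload URL and hand it to the user's browser:

# Sketch only: bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

# URL the browser can use to PUT the PDF directly to S3, valid for one hour.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-pdf-inbox", "Key": "uploads/user-123/report.pdf"},
    ExpiresIn=3600,
)
print(upload_url)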

Streamlit: How to control the second instance and access locally hosted services?

I have a Streamlit app that is fully cached. In fact, I am using a third-party caching system running on a local port, and I can see that the cache is being used.
However, Streamlit runs two distinct instances when it is started up:
# Windows Subsystem for Linux version 2.0, Running Ubuntu
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://172.21.141.16:8501
From my print statements, it appears as if only the local instance is accessing my cache hosted on a local port.
Simultaneously, the Network URL instance does not leverage the cache hosted on the local port at all and recalculates all of the long running processes that are happily caching themselves on the local side.
I am not exposing this system outside of a virtual machine, so I don't care about security.
My question amounts to: how do I force or eliminate Streamlit's double-instance runtime situation such that all running instances (if there must be two) are able to access the locally running cache?
At the same time, though, I need to preserve streamlit's network topology, as I can only access the external URL (http://172.21.141.16:8501) from the Windows host operating system.
Note that I am using a locally running cache because I want to access and accumulate cached values from multiple processes -- some front and some back.
Streamlit isn't running two instances; the two URLs point to the same server, one via localhost and the other via your computer's public/network IP address.
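To illustrate the practical consequence (a minimal sketch that assumes the locally hosted cache is Redis on localhost:6379, which the question doesn't actually specify): because both URLs are served by the same Streamlit process, a single connection to the local cache is shared regardless of which URL the browser uses.

# Sketch only: assumes the local cache is Redis on localhost:6379.
import redis
import streamlit as st

@st.cache_resource  # one client per Streamlit server process
def get_cache():
    return redis.Redis(host="localhost", port=6379, decode_responses=True)

cache = get_cache()

key = "expensive:result"
value = cache.get(key)
if value is None:
    # Stand-in for the long-running computation being cached.
    value = str(sum(i * i for i in range(10_000_000)))
    cache.set(key, value)

st.write("Cached value:", value)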

How to deploy a Hugo website from a Google Cloud VM?

I've started up a Google Cloud VM with the external IP address 35.225.45.169:
Just to check that I can serve a website from there, I've cloned a Hugo starter project and run hugo server --bind=0.0.0.0 --baseURL=http://0.0.0.0:1313:
kurt_peek@mdm:~/synamdm$ hugo server --bind=0.0.0.0 --baseURL=http://0.0.0.0:1313
Building sites … WARN 2020/01/02 04:36:44 .File.Dir on zero object. Wrap it in if or with: {{ with .File }}{{ .Dir }}{{ end }}
                   | EN
+------------------+----+
  Pages            | 16
  Paginator pages  |  0
  Non-page files   |  0
  Static files     | 20
  Processed images |  0
  Aliases          |  0
  Sitemaps         |  1
  Cleaned          |  0
Built in 112 ms
Watching for changes in /home/kurt_peek/synamdm/{content,layouts,static,themes}
Watching for config changes in /home/kurt_peek/synamdm/config.toml
Environment: "development"
Serving pages from memory
Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
Web Server is available at http://0.0.0.0:1313/ (bind address 0.0.0.0)
Press Ctrl+C to stop
Now I would expect to be able to go to http://35.225.45.169:1313/ in my browser and for the website to be visible, but I find that it is not; instead the operation times out (as shown below with a curl command):
> curl http://35.225.45.169:1313
curl: (7) Failed to connect to 35.225.45.169 port 1313: Operation timed out
Am I missing something here? How should I deploy this static website from the Google Cloud Compute instance to the internet?
Update
Following Ahmet's comment, I edited the VM to allow HTTP and HTTPS traffic. This appears to have created several Firewall Rules in the VPC Network tab (see below).
However, I'm still not able to access http://35.225.45.169:1313/ after this; are there specific rules that I must define?
You have to create a new firewall rule that allows port tcp:1313.
But why do you want to host a Hugo website on a GCP VM?
Have you checked out hosting a Hugo website on GCS or using Firebase?
https://gohugo.io/hosting-and-deployment/hosting-on-firebase/
As pradeep mentioned, you will need to create a new firewall rule that allows ingress and egress traffic on port tcp:1313.
Here you will find more details on how to create firewall rules in Google Cloud Platform.
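As a hedged sketch of what that rule looks like (using the google-cloud-compute Python client; the project ID and rule name below are placeholders, and the same thing can be done in the Cloud Console or with gcloud), it only needs to allow ingress on tcp:1313:

# Sketch only: project ID and rule name are placeholders.
from google.cloud import compute_v1

firewall = compute_v1.Firewall()
firewall.name = "allow-hugo-dev-server"
firewall.direction = "INGRESS"
firewall.network = "global/networks/default"
firewall.source_ranges = ["0.0.0.0/0"]
firewall.allowed = [compute_v1.Allowed(I_p_protocol="tcp", ports=["1313"])]

client = compute_v1.FirewallsClient()
operation = client.insert(project="my-project-id", firewall_resource=firewall)
operation.result()  # wait for the rule to be created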
Nonetheless, I think there are better approaches depending on the website that you would like to serve. Here you will find the different options available for serving websites in Google Cloud Platform, but mainly there are three:
Using Google Cloud Storage.
Using Google App Engine.
Firebase Hosting.
Google Cloud Storage
If you are serving a static website, I highly recommend going with Google Cloud Storage or Firebase Hosting. It is true that they do not have Load Balancing capabilities or Logging, but they are an easy option if you are new to Google Cloud Platform.
As shown here, if you would like to host a static site, you can do so with Cloud Storage, but you will need to create a Cloud Storage bucket and upload the content to it.
Here you will find more information and a tutorial on how to host static websites within Google Cloud Platform using Google Cloud Storage.
Google App Engine
Another option would be to use App Engine: not only is it fully managed by Google's infrastructure, but it is also simpler than spinning up a VM and making sure the right ports are open; Google does that for you.
I have attached a tutorial on how to host Hugo on Google App Engine.
Firebase Hosting
Finally, you could also use Firebase Hosting to serve your Hugo website. I have attached some documentation with more detailed information about Firebase Hosting here.
I hope it helps.

Azure Web App PaaS (App Service Plan) with Azure SQL connection inside vNet performance vs outside vNet

I have the following setup:
|--------------------- Internet -------------------------|
WebApp <---- non-vNet traffic ----> Azure SQL Db
WebApp and Azure SQL Db are in the same data centre.
There is currently no vNet.
There's a lot of unavoidable "chatter" going back and forth between Azure SQL Db and WebApp.
The connection string in WebApp is a DNS name for Azure SQL Db (e.g. mydatabase.database.windows.net), so it's resolving to an external IP.
I'm trying to squeeze as much performance out of my app as possible by reducing any network overhead incurred with the "chatter".
I can't seem to find any docs specifically talking about network performance inside vs outside a vNet on Azure.
1. Is it possible to place a Web App and an Azure SQL Db inside a vNet, and if so, what caveats are there?
2. Will I get better network performance by doing this?
|----------------------- vNet -----------------------------|
WebApp <---- vNet traffic ----> Azure SQL Server
If anything, you'll probably get worse latency, given that the Web App has to use SSTP (point-to-site VPN) to reach that VNET. You'd need to benchmark both setups, but I wouldn't bother.
What I would bother with is adding a caching layer if you don't already have one, either in-process or distributed (Redis). That is going to make a dramatic difference to your fetch latency.
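For example (a hedged sketch that assumes Redis and treats the Azure SQL query as a placeholder function), a simple cache-aside pattern keeps repeat reads off the network entirely:

# Sketch only: the SQL query is a stand-in; Redis could be local or Azure Cache for Redis.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_from_sql(customer_id):
    # Placeholder for the real Azure SQL query (e.g. via pyodbc).
    return {"id": customer_id, "name": "example"}

def get_customer(customer_id):
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: no round trip to SQL
    row = fetch_from_sql(customer_id)        # cache miss: query the database once
    cache.set(key, json.dumps(row), ex=300)  # keep it for five minutes
    return row

print(get_customer(42))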
We can definitely deploy the web app inside a VNET by using the Azure App Service Environment instead of hosting it in an App Service plan.
This is appropriate for application workloads that require:
Very high scale.
Isolation and secure network access.
High memory utilization.
More info here - https://learn.microsoft.com/en-us/azure/app-service/environment/

Windows 2008 R2 Failover Cluster FTP with IIS 7.5 (0x80070490 Element Not Found)

I set up a failover FTP using a script service/application on our 3-node cluster. I followed this guide, which seems to be fairly complete: http://support.microsoft.com/kb/974603
However, the FTP site I've added, which is linked to the storage for that service, will not start. I get the following error: 0x80070490 Element Not Found. I think it may be related to this KB article, but I'm not sure: http://support.microsoft.com/kb/2720218
Failing over/moving the service around the 3 nodes seems to work fine (except the FTP doesn't start, and starting it manually fails). The IP, computer name, and 2 mount points for storage get moved successfully. The only way I can get it to start is to go into IIS on the owning node, remove the FTP site and set it up again. As soon as I fail it over to another node however, I'm back to the error.
I believe it has something to do with IIS not seeing the storage despite it being available. I've made the storage a prerequisite for the script so the storage must be online before the script tries to start the FTP site. Nevertheless, it doesn't work.
Summary: The Windows 2008 R2 clustered FTP server is set to broadcast on the service IP. Its root directory is the root drive of the storage assigned to the cluster service. The other storage is a mount point mounted underneath this drive. The FTP site works fine on initial setup but fails when failing over, with the Element Not Found error. It seems to be related to the disk not being available despite it existing: if you go to one of the other nodes without the disks, the FTP site in IIS has the red 'X' on it, and attempting to start it gives the same error.
This was my fault for not setting up Offline Files. Once I completed that, it worked. Offline Files requires two server restarts, and I didn't want to go through that process without testing how the clustered FTP would work (this cluster is in production use). Unfortunately, once the share hosting the IIS shared configuration goes offline, it will NOT come back online until you recycle the Microsoft FTP Service (which is why Offline Files is required). I could have modified the script to perform a recycle in the StartFTPSVC function (instead of just checking whether it was started and, if not, starting it).
