GKE "Enable auto-upgrade" setting cannot be toggled off - settings

I use GKE to host and manage a Kubernetes cluster. GKE will update its nodes automatically by default. Per this documentation, the setting can be toggled on or off: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades
However, I am unable to disable this feature. The option is grayed out. Does anyone know why? I have been unable to find documentation or answers online.
Screenshot of option

Sure enough, as soon as I humble myself and publicly ask a question, I find the answer on my own. :-)
First, the cluster's "Release channel" needs to be set to a specific static version rather than a release channel. Once this is set, you will be able to change the auto-upgrade behavior of the node pools.

The previous answer is right. Below is the response from Google support to my request:
Node auto-upgrades are always enabled if your cluster is enrolled in a release channel. You will find this information in this part of the document. You will need to set that configuration to a static version before trying to disable node auto-upgrades. You can edit this configuration here, in your cluster information panel.
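For reference, a minimal sketch of the equivalent change from the command line, wrapped in Python purely for illustration. It assumes the gcloud SDK is installed and authenticated, the cluster, node-pool, and zone names are placeholders, and your gcloud version supports opting out of a release channel with --release-channel None:

    # Sketch only: take the cluster off its release channel (static version),
    # then disable auto-upgrade per node pool. All names below are hypothetical.
    import subprocess

    CLUSTER = "my-cluster"      # hypothetical cluster name
    NODE_POOL = "default-pool"  # hypothetical node pool name
    ZONE = "us-central1-a"      # hypothetical zone

    def run(cmd):
        """Echo and run a gcloud command, failing loudly on errors."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Unenroll the cluster from its release channel (switch to a static version).
    run(["gcloud", "container", "clusters", "update", CLUSTER,
         "--zone", ZONE, "--release-channel", "None"])

    # 2. With the cluster on a static version, auto-upgrade can be disabled per pool.
    run(["gcloud", "container", "node-pools", "update", NODE_POOL,
         "--cluster", CLUSTER, "--zone", ZONE, "--no-enable-autoupgrade"])

Once the cluster is on a static version, the "Enable auto-upgrade" checkbox in the console should no longer be grayed out.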

Related

Can I join a Kubernetes Windows Node to an Active Directory?

Is anyone aware of challenges or restrictions when joining Kubernetes Windows nodes to an Active Directory? To be clear, my question is not about integrating Active Directory with the Kubernetes RBAC, but rather about the lifecycle-management perspective: patching and so on.
Thank you
In short, we did join the Windows nodes to our AD. So far it seems there is no impact on Kubernetes. We'll continue to monitor the behaviour of those nodes and report back if we hit any hiccups.

Dashboard for WebSphere instance or JVM status with HTML and PHP

I am not getting the perfect answer for my requirement. Please find the detailed requirement below, in layman's terms.
I have an application installed on WebSphere Application Server version 8.5.
I have a requirement to create a dashboard where we can see the server status: whether the JVM is up or down, the EAR deployment date, etc.
The dashboard needs to be accessible from Internet Explorer on a Windows desktop.
Could you let me know how to achieve this?
Note: WebSphere is installed on Linux and IE is on Windows.
Thanks,
Nithin
This is quite a broad question, so I'll just give you options that you will have to explore further on your own, choosing the one that suits you best.
Starting with the easiest:
Use the built-in admin console - WebSphere provides an admin GUI; if you don't want to allow users to change anything, just give them the monitor role. They will be able to check server status, application status, etc.
Use a monitoring tool that is already available, like IBM Health Center, JConsole, or a third-party tool - I know, not a browser solution, but it may fit your need.
Install and use PerfServlet - it will give you WebSphere statistics in XML format. You can write your app to query that servlet for the required parameters, then parse and present the output (see the sketch after this list).
Finally, use the MBean API and write your custom monitoring app - the most difficult but also the most flexible.
Looking at your question, I'd suggest you stay with option 1.
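To make option 3 concrete, here is a minimal sketch of polling PerfServlet and reducing its XML to a status summary, shown in Python for brevity (the same idea applies to PHP). The URL path and element/attribute names are assumptions; check the XML your PerfServlet instance actually returns and adjust the parsing:

    # Sketch only: fetch the PerfServlet XML and report whether the JVM answered.
    # The host, port, and servlet path below are placeholders for your environment.
    import urllib.request
    import xml.etree.ElementTree as ET

    PERFSERVLET_URL = "http://was-host:9080/wasPerfTool/servlet/perfservlet"  # hypothetical

    def fetch_status(url=PERFSERVLET_URL, timeout=10):
        """Return a small dict describing reachability and the server names found."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                root = ET.fromstring(resp.read())
        except (OSError, ET.ParseError) as exc:
            # If the servlet cannot be reached or parsed, treat the JVM as down.
            return {"reachable": False, "error": str(exc)}
        servers = [el.get("name", "unknown") for el in root.iter() if el.tag.endswith("Server")]
        return {"reachable": True, "servers": servers}

    if __name__ == "__main__":
        # A dashboard page for IE could be as simple as rendering this dict as HTML.
        print(fetch_status())

Rendering that dictionary into a static HTML page keeps the dashboard viewable from Internet Explorer on the Windows side without installing anything there.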

How to make mail notification/alert

I'm using Spring Insight and Pivotal tc Server to deploy and monitor two Spring applications, but I have to create some alerts in case of threshold violations. Do you know whether it is possible to build this alerting without creating a custom plugin? I just can't find anything in the documentation.
In the end, I didn't find any solution to my problem, but I think it is possible with a custom plugin. I personally chose another option: monitoring my application with Dynatrace, even though Dynatrace isn't free.

Restrict any new installation of already published google-marketplace-app

We have a google-marketplace-app which is already published and actively used by consumers. But there is a new requirement where we need to block any new installations of the app without impacting existing consumers.
Is there a straightforward option to achieve this? Or do we have to unpublish the existing app and republish it with some specific options (i.e., "visibility options")?
Ideally, we don't want existing app consumers/domain admins to have to do anything in this regard. Instead, existing domains would be whitelisted from our end (by the app developers) so that admins of those domains can still install the app, whereas any other domain shouldn't have install access (even with a direct app installation link).
Appreciate any recommendations on this.
In the Chrome Developer Dashboard there is an option to add trusted testers to the app. The accounts in that list will have visibility of the application.
You can also create a group and add that group to the list, and the people in that group will also have visibility of the app.
Here you can find the documentation related to this. Hope this helps.

Windows Azure Caching (Preview) ErrorCode<ERRCA0017>:SubStatus<ES0006>:

I'm using the role-based caching feature for a Windows Azure web role, configured as co-located. I've followed the steps given in the Windows Azure docs for Caching (Preview), but I get the following error:
ErrorCode <ERRCA0017>:SubStatus<ES0006>:There is a temporary failure.
Please retry later. (One or more specified cache servers are
unavailable, which could be caused by busy network or servers. For
on-premises cache clusters, also verify the following conditions.
Ensure that security permission has been granted for this client
account, and check that the AppFabric Caching Service is allowed
through the firewall on all cache hosts. Also the MaxBufferSize on the
server must be greater than or equal to the serialized object size
sent from the client.). Additional Information : The client was trying
to communicate with the server: net.tcp://127.255.0.4:20010/.
I'm running everything on localhost, using local development storage, and my cache client is in the same role as the server. I've changed many configuration attributes, but I always get that exception or a similar one like "cannot connect to tcp....".
I'd appreciate some help. Thanks.
There are a couple of things which could be going wrong in your application.
The very first thing is to make sure that you have SDK 1.7 on your machine, along with Windows Azure Caching Services, and then verify that your references come from Windows Azure Cache (not from the Windows Server AppFabric SDK). I have seen such misconfiguration in the past lead to exactly these errors.
Next, have you changed your dataCacheClient identifier to your role name, as described in the documentation linked here? If you follow the documentation as described you should not hit any error, so, to check what could be wrong, you can create the exact same application as described in that link and see whether it works.
To get a more detailed error, be sure to increase the DataCacheFactoryConfiguration.ChannelOpenTimeout value to something longer, e.g. 2 minutes instead of the default 20 seconds, as described here. This will help you get details about the inner exception, which may lead to the actual root cause of your problem.
We use Azure co-located caching (not in preview anymore) as our session backer and have fairly regular outages, about once a month.
We tried using the Enterprise Library Transient Fault Handling block, but our instances still hang when caching experiences problems. I think the transient fault code would work for data caching, but for session backing there is some activity closer to the metal that we can't seem to code against.
The error codes have become more informative over the last year and go something like...
ErrorCode:SubStatus:The request timed out..
Additional Information : The client was trying to communicate with the
server: net.tcp://10.xx.xxx.xx:xxxxx/.
Our best guess so far, from experimenting and from MS support, is that each (or at least one) co-located cache role/instance needs to know all the other instances' IPs; since Azure can destroy and re-create instances whenever it wants, this sometimes fails to update the dependent instances. This is secret sauce for Azure, but it is not a secret when our site goes down. I'm looking for any more information on this and to see how others are working around this issue.
One possible work-around: one of our talented platform administrators found that resetting IIS on the instances and scaling up by two more instances seems to help. This makes sense to me because it gives caching another chance to gather all the required information about the other instances. This is NOT CONFIRMED to solve the problem, but if we repeat it during the next outage it could be a valid work-around.
