I have a requirement to set up Kubernetes on-prem with Windows worker nodes that run .NET 4.5 containers. Now, while I found this link, I don't particularly like the idea of upgrading the control plane and rotating the needed certificates manually.
Has anyone tried using kubespray to bootstrap a Kubernetes cluster and then manually adding a Windows worker? Or can anyone share any insight into setting this up?
Thanks for sharing.
This is an opinion question so I'll answer in an opinionated way.
So kubespray will give you more automation; it actually uses kubeadm under the hood to create the control plane and the cluster components, including your network overlay.
It also provides you with capabilities for upgrades.
Certificate rotation is an option on the kubelet, and kubespray supports it as well.
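For reference, on the kubelet itself this is just a flag (or the equivalent setting in the kubelet config file); kubespray exposes it through a group_vars toggle, which I recall being named kubelet_rotate_certificates, though the exact variable name may differ between kubespray releases, so treat that as an assumption:

    # enable client certificate rotation on the kubelet
    # (recent releases turn this on by default)
    kubelet --rotate-certificates=true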
The downside of using kubespray is that you may not learn how all the Kubernetes components work, but if you want something more fully automated and you like Ansible, it's a great choice.
Also, the latest kubeadm supports certificate rotation on all your Kubernetes components, as per this PR.
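For example, recent kubeadm releases can report on and renew the control-plane certificates directly (in older releases these commands lived under kubeadm alpha certs):

    # show when each control-plane certificate expires
    kubeadm certs check-expiration

    # renew all control-plane certificates in one go
    kubeadm certs renew all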
When reading articles on the Internet and textbooks, we see the terms "container image" and "Docker image" used, which is confusing. When we try to understand emerging technologies, I believe the vocabulary behind them is a key point.
So, please help clarify the following confusion about these terms.
Is the usage of "Container image" and "Docker image" similar and interchangeable?
Or do they differ in usage?
What is the purpose of the usage of "Container image"?
Docker, Inc. originally created the Docker product, which is a specific implementation of containerization technology: Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers.
There are many alternatives to Docker, like Podman, containerd, etc.
Nowadays, Docker as a product has become so popular that people often refer to containerization technology as Docker, much like we say "Google" when we talk about search engines.
So, I would use "container image" when talking about containerization technology in general, and "Docker image" when talking about the Docker product specifically.
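To illustrate why the generic term makes sense, the exact same image can be pulled by different container engines; nginx is just an arbitrary public image used as an example here:

    # the same OCI/container image consumed by two different tools
    docker pull docker.io/library/nginx:latest
    podman pull docker.io/library/nginx:latest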
We are planning to install Kubernetes directly from kubernetes.io instead of getting it through a vendor, for example OpenShift, Rancher, etc.
How should we go about support if we have a problem with our Kubernetes cluster?
Of course, vendors also get their Kubernetes source code from kubernetes.io and don't change it.
Thank you.
Using OSS software directly means that whenever you face a problem you need to solve it yourself.
Having said that, there is a very wide array of communities filled with friendly people who would probably be happy to lend a hand; at least, that's what I've learned from my experience.
A few places you should try are the issues section of the Kubernetes project on GitHub, the Kubernetes Slack workspace, and r/kubernetes.
I need to install and set up Varnish Cache step by step for an application in OpenShift, but I do not know where to start or what the steps are. Can anyone help?
If you mean OpenShift Online in its current production version, it's the older version 2.x, based on Red Hat's own container technology.
If you have OpenShift Origin, OpenShift Enterprise, or the OpenShift Online Developer Preview, it's probably version 3.x, which is based on Docker.
You can tell which version you are using by looking at the CLI tool's name: if it's rhc, it's the older version 2.x; if it's oc, it's the newer Docker-based one.
On the newer Docker-based version, you should be able to deploy any Docker image, so Varnish should be no problem at all.
You just have to build your own Docker image and follow the OpenShift tutorial to deploy it on your platform.
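As a very rough, untested sketch (the registry name and VCL are placeholders; there is an official varnish image on Docker Hub these days, but it may need tweaks to run under OpenShift's random-UID security model):

    # build a tiny Varnish image around your own VCL
    cat > Dockerfile <<'EOF'
    FROM varnish:stable
    COPY default.vcl /etc/varnish/default.vcl
    EOF
    docker build -t yourregistry/my-varnish .
    docker push yourregistry/my-varnish

    # deploy it on OpenShift 3.x
    oc new-app yourregistry/my-varnish --name=varnish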
I started to play with it, but I don't have enough know-how to provide you with a step-by-step tutorial right now; maybe in a week or so.
However, if you are using the older public version of OpenShift Online, I have bad news for you.
I've tried to compile a static version of varnishd, with no luck so far. And I'm not going to try anymore, because a fully static version of varnishd is not possible at all: it relies on dynamically loading compiled VCLs, so it has to be dynamically linked against the OS libraries.
And this could be a bit hard to achieve. You have to match the correct versions of the OS libraries, and it's still fragile, as it could break after an upgrade of the underlying OS.
I wouldn't try this in production.
I suggest you try another cloud provider: either an IaaS solution with a full OS and varnishd installed from packages, or any Docker hosting provider.
Or, if you can afford it and it's worth it for you, you may try http://fastly.com/, a CDN provider.
Their technology is based on a customized older version of varnishd, with an easy-to-use GUI, a lot of fancy built-in stats, etc. But the most important feature is that you can deploy your own VCLs upon request: if they enable it for you, you can upload a new VCL in a few seconds.
Good luck.
Currently, I am working on an automation testing framework that combines Selenium Grid with the Sikuli API.
I have already implemented a library that includes both Selenium and Sikuli functionality, and it works well when I set up the hub and node on the same machine. However, this is effectively the same as running Selenium RC on a single machine.
So, in order to achieve parallel testing, my next step is to launch nodes from other machines and register them with the hub machine. The ideal environment is Amazon EC2 instances.
Hub: Linux box
Nodes: Windows Server 2008
It works fine if I just run the tests using the library that only contains Selenium functions. However, I got the error message:
"No X11 DISPLAY variable was set, but this program performed an operation which requires it."
Should I export the DISPLAY variable pointing to the node's IP address? Do I need to set up the node machine as an X server? And what if there are many instances registered to the hub machine?
Sorry for the vague question, but any ideas on how to implement this framework are appreciated. I am using Selenium Grid since there are many actions performing graph verification, and it would be much more efficient if I could run the tests in parallel.
Thanks a lot for any help and advice.
There is a project aiming at providing Sikuli capabilities on Selenium Grid.
https://github.com/sterodium/selenium-grid-extensions
It works by adding extensions to both the Selenium Grid hub and the nodes.
See my blog post on the topic of integrating tools like Sikuli and AutoIt with Selenium Grid. It provides a theoretical approach to implementing such automation, though to my knowledge no one has yet implemented a working solution to demonstrate it.
http://autumnator.wordpress.com/2011/12/22/autoit-sikuli-and-other-tools-with-selenium-grid/
On a side note, I'm not sure how your X11 issue came into play, but it would be best to get the framework working on a local network of machines with Selenium Grid before you move to an Amazon EC2 deployment. It helps in the design and debugging process, as EC2 may present its own issues, so you want the simplest basic Grid setup working first (non-EC2).
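For reference, a bare-bones Grid 2 setup looks roughly like this (the jar version and addresses are placeholders). The last two lines show one common way to avoid the "No X11 DISPLAY" error when Java/AWT code such as Sikuli runs on a headless Linux box; note that Sikuli will then only "see" that virtual screen, so it only helps if the things you want it to match are actually rendered there:

    # hub, on the Linux box
    java -jar selenium-server-standalone-2.33.0.jar -role hub

    # node, on the Windows Server 2008 machine, registering with the hub
    java -jar selenium-server-standalone-2.33.0.jar -role node -hub http://<hub-ip>:4444/grid/register

    # headless Linux only: give the JVM a virtual display so AWT/Sikuli can start
    Xvfb :99 -screen 0 1280x1024x24 &
    export DISPLAY=:99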
Hi,
We are upgrading WebSphere Application Server (WAS) from v6.x to v7.x.
Currently, WAS is installed at the default location:
/usr/IBM/WebSphere/AppServer
What is the best way to upgrade to 7.x and recreate the profiles with the least downtime?
If you are talking about just WAS, with no extra IBM products (like Portal, Quickr, Connections, etc.) on top of it, you are best off setting up a fresh installation next to your old one and manually recreating the configuration (data sources, etc.). It takes only a couple of hours to install and update one, and if your applications have decent documentation about their requirements, it shouldn't take more than a few hours to set up the rest. Then you can simply test it and redirect the traffic.
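As a rough sketch of that side-by-side approach (the paths and profile name are just examples): keep a backup of the old cell's configuration to refer to while you recreate the resources, then create a fresh profile on the new WAS 7 install:

    # old WAS 6.x cell: snapshot the configuration for reference
    /usr/IBM/WebSphere/AppServer/bin/backupConfig.sh /tmp/was6-config-backup.zip -nostop

    # new WAS 7 installation (example path): create a fresh application server profile
    /opt/IBM/WebSphere/AppServer7/bin/manageprofiles.sh -create \
        -profileName AppSrv01 \
        -templatePath /opt/IBM/WebSphere/AppServer7/profileTemplates/default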