Is anyone aware of challenges or restrictions around joining Kubernetes Windows nodes to an Active Directory domain? My question is not about integrating Active Directory with Kubernetes RBAC, but rather about the lifecycle management perspective: patching and so on.
Thank you
In short, we did join the Windows nodes to our AD. So far there appear to be no impacts on Kubernetes. We'll continue to monitor the behaviour of those nodes and report back if we hit any hiccups.
I'm currently using the trial of Elastic Cloud for my project.
I would like to be able to monitor two infrastructures at the same time. I have created a space for each infrastructure, as well as two agent policies, each linked to the agents of its own infrastructure.
I was wondering whether there is a way to separate the agents by agent policy, for example with a filter, so that I only get the agents belonging to the space of the chosen infrastructure, or whether there is another way to achieve this.
Thanks in advance for your help
It's definitely possible to create filtered aliases; then, in each Kibana space, you can create an index pattern over the corresponding alias so that it only shows the data relevant to the agents belonging to that space.
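As a minimal sketch with curl (the endpoint, the metrics-* index pattern, the alias name, and the filter field are all placeholders; use whatever field distinguishes the agents of each infrastructure, such as a custom tag set in the agent policy):
curl -X POST "https://your-es-endpoint:9200/_aliases" -H "Content-Type: application/json" -d '{ "actions": [ { "add": { "index": "metrics-*", "alias": "infra-a-metrics", "filter": { "term": { "tags": "infra-a" } } } } ] }'
In the Kibana space for that infrastructure you would then create the index pattern (data view) over infra-a-metrics rather than over the raw indices.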
I am deploying a packaged Liberty server that contains my application into Bluemix.
I want to update my application, but before I do so I'm wondering: what's the best way to back up what I currently have up and running? If my update is bad, I would like to restore the previous version of my app.
In other words, what is the best practice or recommended way to update a web application running on a Liberty server in Bluemix? Do I simply keep a backup of the zip I pushed to Bluemix and restore it if something goes wrong? Or is there management capability provided by Bluemix for backup and restore?
It's understood that manually backing up the pushed zip is an acceptable strategy. Additionally, I found the Bluemix documentation on blue-green deployments to be a reasonable solution, as it's a deployment technique that utilizes continuous delivery and allows clients to roll back their app in the case of any issues.
The Cloud Foundry article Using Blue-Green Deployment to Reduce Downtime and Risk succinctly explains the deployment steps (since Bluemix is based on Cloud Foundry, the steps are similar to the Example: Using the cf map-route command steps in the previously cited Bluemix documentation).
I agree with Ryan's recommendation to use the blue/green approach, though the term may be unfamiliar to those new to cloud server deployments. Martin Fowler summarizes the problem it addresses in BlueGreenDeployment:
One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production. You usually need to do this quickly in order to minimize downtime. The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.
Solving this problem is one of the main benefits of PaaS.
That said, for historical context, it's worth noting this blue/green strategy isn't new to cloud computing. Allow me to elaborate on one of the "old" ways of handling this problem:
Let's assume I have a website hosted on a dedicated server, myexample.com. My public-facing server's IP address ("blue") would be represented in the DNS "@" entry or as a CNAME alias; another server ("green") would host the newer version of the application. To test the new application in a public-facing manner without impacting the live production environment, I simply update /etc/hosts to map the domain name to the green server's IP address. For example:
129.42.208.183 www.myexample.com myexample.com
Once I flush the local DNS entries and close all browsers, all requests will be directed to the green pre-production environment. Once I've confirmed all works as expected, I update the DNS entry for the live environment (myexample.com in this case). Assuming the DNS has a reasonably short TTL value, such as 300 seconds, I update the A record value (if addressing by IP) or the CNAME record value (if by alias), and the change will be propagated to DNS servers within minutes. To confirm the propagation of the new DNS values, I comment out the aforementioned /etc/hosts change, flush the local DNS entries, then run traceroute. Assuming it resolves correctly locally, I perform a final double-check that all is well in the rest of the world with a free online DNS checker (e.g., whatsmydns.net).
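For example, two quick command-line checks (dig is assumed to be installed; the hostname is the placeholder domain from above):
dig +noall +answer www.myexample.com A      # shows the current A record and the remaining TTL
dig @8.8.8.8 +short www.myexample.com A     # asks a public resolver to confirm propagation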
The above assumes an update to the public-facing content server (e.g., an Apache server connecting to a database or application server); the switch over from pre-production to production is more involved if the update applies to a central database or similar transactional data server. If it's not too disruptive for site visitors, I disable login and drop all active sessions, effectively rendering the site read-only. Then I go about updating the backend server in much the same manner as previously described, i.e., switching a pre-production green front end to reference a replication in the pre-production green backend, test, then when everything checks out, switch the green front end to blue and re-enable login. Voila.
The good news is that with Bluemix, the same strategy above applies, but is simplified since there's no need to fuss with DNS entries or separate servers.
Instead, you create two applications, one that is live ("blue") and one that is pre-production ("green"). Instead of changing your site's DNS entries and waiting for the update to propagate around the world, you can update your pre-production application (cf push Green pushes the new code to your pre-production application), test it with its own URL (Green.ng.mybluemix.net), and once you're confident it's production-ready, add the application to the routing table (cf map-route Green ng.mybluemix.net -n Blue), at which point both applications "blue" and "green" will receive incoming requests. You can then take the previous application version offline by unmapping it (cf unmap-route Blue ng.mybluemix.net -n Blue).
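Put together as a command sequence (a sketch only; Blue, Green, and ng.mybluemix.net are the placeholder names used above):
cf push Green                                 # deploy the new version as its own app
curl -I https://Green.ng.mybluemix.net        # smoke-test the new version on its own route
cf map-route Green ng.mybluemix.net -n Blue   # Green now also receives production traffic
cf unmap-route Blue ng.mybluemix.net -n Blue  # retire the old version once Green looks healthy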
Site visitors will experience no service disruption and unlike the "old" way I outlined previously, the deployment team (a) won't have to bite their nails waiting for DNS entries to propagate around the world before knowing if something doesn't work and (b) can immediately revert to the previous known working production version if a serious problem is discovered post-deployment.
You should be using some sort of source control, such as Git or SVN. Bluemix is nicely integrated with IBM DevOps Services (IDS), which can leverage Git or an external GitHub repo to manage your project. When you open your app's dashboard, you should see a link in the upper right-hand corner that says "ADD GIT". That will automatically create a Git repo for your project in IDS.
Using an SCM tool, you can manage versions of your code with relative ease. IDS also gives you the ability to deploy directly to Bluemix as part of your build pipeline.
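As a rough sketch of getting existing code into that repo (the remote URL is a placeholder; use the one IDS displays for your project):
git init
git add .
git commit -m "Initial version of the Liberty app"
git remote add origin https://hub.jazz.net/git/youruser/yourproject   # placeholder URL from IDS
git push -u origin master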
After you have your code managed as above, then you can think about green/blue deployments, etc. as recommended above.
Apart from technology support, what are the business benefits of Oracle WebLogic Server? For example, in areas such as security, support, etc.
What new features does WebLogic support?
TL;DR:
Support is great when you open a ticket with Oracle Support (strictly for WebLogic).
Great admin/read-only user implementation. We authenticate against Windows Active Directory. Developers get read-only accounts, which reduces the churn of waiting for ops to transfer logs and validate settings.
The dashboard is useful out of the box for real-time monitoring, without additional tools or installs. It's easily accessed by anyone who is authorized to log in. We could give it to our CIO, if he wanted it, in about 3 minutes by adding him to the right authorized group in AD.
Easier to clone environments.
I haven't worked with OC4J, but I believe Oracle's roadmap picks WebLogic as their preferred Java application server. You can see it is the base technology for some of their other products, such as Oracle Service Bus, Oracle Enterprise Manager (OEM), and Oracle Line Planning.
I have opened 3 Oracle tickets in the past month. I was surprised at how fast they answered. For a Severity 3 ticket (medium), they have usually responded in 2-3 days. I can't say the same for their other services (over 2 weeks for a ticket on OEM).
Security is a pretty broad scope... so you'd have to be a little more specific on some of the topics of security.
One thing that is pretty awesome is the Dashboard: http://docs.oracle.com/cd/E14571_01/web.1111/e13714/dashboard.htm You can obviously add read-only monitor accounts so other users can get insight into performance. We add developers to this so that they can validate settings or see performance whenever there is a production issue.
We use Microsoft Active Directory authentication in our WebLogic domains. People are not using the default weblogic administrator user, so configuration changes are audited. When someone's account gets disabled on leaving the company, their access to WebLogic is disabled as well; you don't have to change the password.
Another useful setting I like is the ability to automatically archive config changes. Each time someone makes a config change, a backup is automatically created. This allows me to go fix things when developers break their environment without having to seriously reverse-engineer what they did.
I also like the fact that you can pack and unpack the domains. I've used it to move entire domains from staging to production with some minor changes... i.e. change all stg to prod variables. This should likewise make it easier to 'clone' environments when you want to build out a new one.
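For reference, a rough sketch of that pack/unpack flow (domain paths, template names, and the WL_HOME location are placeholders for your own installation):
$WL_HOME/common/bin/pack.sh -domain=/u01/domains/stg_domain -template=/tmp/stg_domain.jar -template_name="stg_domain" -managed=false
# copy /tmp/stg_domain.jar to the target host, then:
$WL_HOME/common/bin/unpack.sh -domain=/u01/domains/prod_domain -template=/tmp/stg_domain.jar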
Although not directly related, I should mention Oracle Enterprise Manager. We are an Oracle shop because they seem to have given us a good deal on licensing, so we get to run Oracle Enterprise Manager, a tool that is slowly becoming more and more useful. The agent also reports how our RedHat Linux hosts are behaving: network input/output, CPU utilization, memory utilization, Java heap usage. We are going to move to defining groups within it that contain all the targets related to an application stack. This will give our operations team the insight to see where the bottleneck might be... the Oracle WebLogic web layer, the network, Oracle Service Bus, or Oracle Database performance.
Supposedly, you can also add JBoss and other JMX monitoring to OEM. It's on our to-do list for non-WebLogic instances. We're slowly rolling OEM out.
In the past month I've seen my CloudBees app go down due to CloudBees issues with its providers. Yesterday AWS East had problems, and last summer this happened: http://blog.cloudbees.com/2012/07/cloudbees-postmortem-on-two-recent.html
In order to achieve higher availability, I am wondering whether it would be a viable solution, supported by CloudBees, to always have two instances running in different regions. Ideally one of them would be in the EU.
Thanks.
You're right that for a cloud application to be highly available, it must be multi-region on AWS. This has some serious impact on app architecture, with master/backup, data replication, and similar issues to address.
We (CloudBees) don't provide an out-of-the-box solution to this complex issue; it really depends on your requirements, data volume, update frequency, etc.
Deploying in the EU region is only available on CloudBees for "dedicated servers" (contact sales@cloudbees.com for details and pricing), but it could be an option for getting such a multi-region HA application.
I have thought a lot recently about the different hosting types that are available out there. We can get pretty decent latency (on average) from an EC2 instance in Europe (we're situated in Sweden) and the cost is pretty good. Obviously, the possibility of scaling instances up and down is amazing for us, as we're in a real expansion phase right now.
From a logical perspective, I also believe that Amazon can probably provide better availability and stability than most hosting companies on the market. That probably also outweighs not having a phone number to dial whenever we wonder about something, which forces us to google things ourselves :)
So, what should we be concerned about if we were to run our web server on EC2? What are the pros and cons?
To clarify, we will run a pretty standard LAMP configuration, probably with memcached added.
Thanks
So, what should we be concerned about if we were to run our web server on EC2? What are the pros and cons?
The pros and cons of EC2 are somewhat dependent on your business. Below is a list of issues that I believe affect large organizations:
Separation of duties: Your existing company probably has separate networking and server operations teams. With EC2 it may be difficult to separate these concerns, i.e. the person defining your Security Groups (firewall) is probably the same person who can spin up servers.
Home access to your servers: Corporate environments are usually administered on-premise or through a Virtual Private Network (VPN) with two-factor authentication. Administrators with access to your EC2 control panel can likely make changes to your environment from home. Note further that your EC2 access keys/accounts may remain available to people who leave or get fired from your company, making home access an even bigger problem...
Difficulty in validating security: Some security controls may inadvertently become weak. Within your premises you can be 99% certain that all servers are behind a firewall that restricts any admin access from outside your premises. When you're in the cloud it's a lot more difficult to ensure such controls are in place for all your systems.
Appliances and specialized tools do not go in the cloud: This may impact your security posture. For example, you may have network intrusion detection appliances sitting in front of on-premise servers, and you will not be able to move these into the cloud.
Legislation and regulations: I am not sure about regulations in your country, but you should be aware of cross-border issues. For example, running European systems on American EC2 soil may open you up to Patriot Act regulations. If you're dealing with credit card numbers or personally identifiable information, then you may also have various issues to deal with if infrastructure is outside of your organization.
Organizational processes: Who has access to EC2 and what can they do? Can someone spin up an Extra Large machine and install their own software? (Side note: Our company http://LabSlice.com actually adds policies to stop this from happening.) How do you back up and restore data? Will you start replicating processes within your company simply because you've got a separate cloud infrastructure?
Auditing challenges: Any auditing activities that you normally undertake may be complicated if data is in the cloud. A good example is PCI -- can you actually always prove data is within your control if it's hosted outside of your environment somewhere in the ether?
Public/private connectivity is a challenge: Do you ever need to mix data between your public and private environments? It can become a challenge to send data between these two environments, and to do so securely.
Monitoring and logging: You will likely have central systems monitoring your internal environment and collecting logs from your servers. Will you be able to achieve the same monitoring and log collection activities if you run servers off-premise?
Penetration testing: Some companies run periodic penetration testing activities directly on public infrastructure. I may be mistaken, but I think that running pen testing against Amazon infrastructure is against their contract (which makes sense, as to them it would just look like hacking activity against infrastructure they own).
I believe that EC2 is definitely a good idea for small/medium businesses. They are rarely encumbered by the above issues, and usually Amazon can offer better services than an SMB could achieve themselves. For large organizations EC2 can obviously raise some concerns and issues that are not easily dealt with.
Simon @ http://blog.LabSlice.com
The main negative is that you are fully responsible for ALL server administration, such as security patches, firewall, backup, server configuration and optimization.
Amazon will not provide you with any OS or higher level support.
If you would be FULLY comfortable running your own hardware then it can be a great cost savings.
I work at a company and we are hosting with Amazon EC2; we are running one High-CPU instance and two small instances.
I won't say Amazon EC2 is good or bad, but I'll just give you a list of our experiences over time:
Reliability: bad. They have a lot of outages. Mostly only affecting segments, but still...
Cost: expensive. It's cloud computing, not server hosting! A friend works at a company that does complex calculations which have to be finished by a certain time sharp every day, and the calculation time depends on the amount of data they get... They run some servers themselves and, if capacity gets scarce, they spin up a bunch of EC2 instances.
That's the perfect use case, but if you run a server 24/7 anyway, you are better off with a dedicated root server.
A dedicated root server will also give you better performance, e.g. disk reads will be faster since it has a local disk!
Traffic is expensive too.
Support: good, fast and flexible; that's definitely very OK.
We had a big product launch with a lot of press going on, and there were problems with the reverse DNS for email sending. The Amazon guys got it all set up RIPE-conformant and nice in no time.
The Amazon S3 hosting service is nice too, if you need it.
In Europe I would suggest going with a German hosting provider; they have very good connectivity as well.
For example:
http://www.hetzner.de/de/hosting/produkte_rootserver/eq4/
http://www.ovh.de/produkte/superplan_mini.xml
http://www.server4you.de/root-server/server-details.php?products=0
http://www.hosteurope.de/produkt/Dedicated-Server-Linux-L
http://www.klein-edv.de/rootserver.php
I have hosted with all of them and had good experiences. The best was definitely Host Europe, but they are a bit more expensive.
I ran a CDN with about 40 servers there for two years and never experienced ANY outage on ANY of them.
Amazon had 3 outages in the last two months on our segments.
One minus that forced me to move away from Amazon EC2:
spamhaus.org lists the whole Amazon EC2 block on its Policy Block List (PBL).
This means that when you send email, mail servers using spamhaus.org will block it, and you will see "blocked using zen.dnsbl" in your /var/log/mail.info.
The server I run uses email to register and reset passwords for users; this does not work any more.
Read more about it at Spamhaus: http://www.spamhaus.org/pbl/query/PBL361340
Summary: Need to send email? Do not use Amazon EC2.
The other con no one has mentioned:
With a stock EC2 server, if an instance goes down, it "goes away." Any information on the local disk is gone, and gone forever. You have the added responsibility of ensuring that any information you want to survive a server restart is persisted off of the EC2 instance (into S3, RDS, EBS, or some other off-server service).
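As an illustration only, persisting data off the instance might look something like this with the AWS CLI (the bucket name, paths, and volume ID are placeholders):
# push application data to S3 on a schedule (e.g. from cron)
aws s3 sync /var/www/uploads s3://my-backup-bucket/uploads
# or keep the data on an attached EBS volume and snapshot it
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "nightly backup"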
I haven't tried Amazon EC2 in production, but I understand the appeal of it. My main issue with EC2 is that while it does provide a great and affordable way to move all the blinking lights in your server room to the cloud, they don't provide you with a higher level architecture to scale your application as demand increases. That is all left to you to figure out on your own.
This is not an issue for more experienced shops that can maintain all the needed infrastructure by themselves, but I think smaller shops are better served by something more along the lines of Microsoft's Azure or Google's AppEngine: Platforms that enforce constraints on your architecture in return for one-click scalability when you need it.
And I think the importance of quality support cannot be overstated. Look at the BitBucket blog. It seems that for a while there every other post was about the downtime they had and the long hours it took for Amazon to get back to them with a resolution to their issues.
Compare that to Github, which uses the Rackspace cloud hosting service. I don't use Github, but I understand that they also have their share of downtime. Yet it doesn't seem that any of that downtime is attributed to Rackspace's slow customer support.
Two big pluses come to mind:
1) Cost - With Amazon EC2 you only pay for what you use and the prices are hard to beat. Being able to scale up quickly to meet demands and then later scale down and "return" the unneeded capacity is a huge win depending on your needs / use case.
2) Integration with other Amazon web services - this advantage is often overlooked. Having integration with Amazon SimpleDB or the Amazon Relational Database Service (RDS) means that your data can live separately from the computing power that EC2 provides. This is a huge win that sets EC2 apart from others.
Amazon's cloud monitoring service and support are charged extra. The first is quite useful and you should consider it; the second, too, if your app is mission critical.