Can a non-owner of a ClearCase activity deliver the activity? - clearcase-ucm

We are planning to set up ClearCase UCM like below:
Int (for the UAT/prod environments)
- test (for the test environment)
- dev (for the dev environment)
Developers will deliver their activities to the dev stream. Then we, as administrators, deliver the activities from the dev stream to the test stream and do a build and deploy in the test environment. If everything goes fine, we deliver the activities to the int stream, do a build there, test in the UAT environment, and then deploy to production.
My question is: should we as administrators be able to deliver developers' activities from the dev stream to the test stream and then to the int stream, or is only the owner of an activity allowed to deliver?
Thanks in advance.

or is only the owner of an activity allowed to deliver?
No, any ClearCase user can deliver activities or baselines from one Stream to another.
You just need the CLEARCASE_PRIMARY_GROUP environment variable correctly set to the group (primary or secondary) of the VOB and PVOB involved in your UCM project.
Beware, though, of partial deliveries: if you deliver only some activities (rather than all of them each time) from dev to test, you risk introducing artificial dependencies between the delivered activities and the not-yet-delivered ones on the dev stream.
See "ClearCase : Making new baseline with old baseline activities", and the notion of "timeline". That was true for ClearCase 6.x and 7.x, it may have changed for ClearCase 8.x.

What is there to do on a Heroku staging app?

I understand how to create and deploy code to a staging environment on Heroku before deploying it to production. I understand that it is good to see whether the code will "blow up" in an environment very similar to the one in production.
However, I can't figure out any tangible ways or mechanisms to determine if the application is broken on the staging environment.
In other words, I don't understand how having a running application on the staging environment is supposed to give me the confidence to deploy my application to production.
Hence my questions:
What additional steps are there to do on the staging environment on Heroku?
Are the integration tests supposed to be run on the staging environment?
If I note that the application is running on the staging environment, is it good enough?
A staging environment is intended just for that - to stage, or to "rehearse" an app version, before you promote it to production.
Heroku pipelines enable you to deploy a staging app in an environment as identical as possible to your production app, to reduce "but it worked in my dev environment" problems.
What you do with your app on the staging environment is up to you. You can consider using various simulation tools to simulate live users who would be accessing your production app. You might have migrations that you need to perform on your production app's data, so you could run them first in your staging app. You can have testers working with your staging app to verify everything works as intended before promoting to production.
Bottom line: Whatever you can do with your production app, you should be able to do with your staging app. Therefore, you should strive to achieve a deployment flow whereby BEFORE promoting a version to production, whatever scenarios may occur on production should first be tested on staging. How you achieve that is up to you.
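As a sketch of that pipeline flow with the Heroku CLI's pipelines commands (the pipeline and app names here are made up):

heroku pipelines:create my-pipeline -a my-app-staging --stage staging
heroku pipelines:add my-pipeline -a my-app-prod --stage production
# after exercising the staging app, promote the exact same slug to production
heroku pipelines:promote -a my-app-staging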
What additional steps are there to do on the staging environment on Heroku?
Visually catch any broken CSS + HTML (which is a very difficult task with current testing tools)
You might have other members in your team (SEO specialist, marketing manager, etc.) who want to inspect text, meta tags, etc. Having a staging environment makes their lives much easier, as they can navigate and probe, propose improvements, etc.
On your development machine, you and other team members may have a different environment than the one running in production (different DB or OS versions, and third-party services such as Elasticsearch or Redis, to name a few).
You want to test the deployment itself (you might have a series of tasks that are triggered during deployment).
The list of benefits of having a staging environment as close as possible to your production environment goes on...
Are the integration tests supposed to be run on the staging environment?
Normally you run your automated tests on a CI server (not on staging). Staging is there for visual testing and perhaps to catch errors not yet covered by your test scenarios.
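That said, a cheap automated smoke check against staging can complement the CI suite; for example (the app URL and health endpoint are hypothetical):

# abort the release script if the staging app doesn't respond with 2xx
curl --fail --silent https://my-app-staging.herokuapp.com/health || exit 1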
If I note that the application is running on the staging environment, is it good enough?
Whether it is "good enough" is difficult to say (your staging environment may not be a perfect copy of your production environment), but if the app is well tested there, it should be very close to what you expect to see running in production.

Visual Studio Team Services: Release Management for layered infrastructure in an environment

I am trying to configure release management using VSTS (the Release hub). My applications are distributed across several zones per environment. I have two different zones, and each zone is considered a different network.
UI applications will be on zone 1 (for this purpose I created the build definition "UI Build Definition" for the UI projects)
Service APIs will be on zone 2 (with a separate build definition, "ServiceAPI Build Definition")
Now, when new builds exist for each of them, I need to release each one to the corresponding zone (taking into consideration that the zones are on different networks).
I am thinking that one agent will exist per zone to download the releases.
How do I do the release management part? Do I configure and specify zones or servers per environment, and link each release to a server in a different zone?
I'm afraid there isn't any way to do this for now, since you can only select one agent queue per environment and there isn't any way to specify which agent in the queue to use based on the network.
The alternative would be what you suggested in the comments: use two pipelines, Dev(Zone1)->QA(Zone1)->Prod(Zone1) and Dev(Zone2)->QA(Zone2)->Prod(Zone2). With this, you can specify different agents for the different zones.
A specific release in VSTS corresponds to a specific deployment pipeline, i.e. the route your application needs to take to become live (something like DEV > QA > PROD). A release consists of environments, and in each environment you need to deploy the components of your application that must work together, such that if you missed deploying a component your application would break. Think unit of work.
The specifics will depend on how you are doing your deployments. If you are copying artifacts to target nodes and then running PowerShell on that node to do the deployment then the agent needs to be able to see the node - typically using WinRM. If an agent can see all nodes in the different zones you only need one agent.
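For example, one quick way to verify that an agent machine can actually reach a target node over WinRM before wiring up the release (the hostname here is hypothetical):

powershell -Command "Test-WSMan -ComputerName node01.zone2.example.com"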

Is there a way to backup a Liberty server deployment in Bluemix?

I am deploying a packaged Liberty server into Bluemix that contains my application.
I want to update my application but before I do so, I'm wondering what's the best way to backup what I have currently up and running? If my update is bad, I would like to restore the previous version of my app.
In other words, what is the best practice or recommended way to update a web application running on a Liberty server in Bluemix. Do I simply keep a backup of the zip I pushed to Bluemix and restore it if something goes wrong? Or is there management capability provided by Bluemix for backup and restore?
It's understood that manual backup of the pushed zip is an acceptable strategy. Additionally, I found the Bluemix documentation Blue-green deployments to be a reasonable solution, as it's a deployment technique that utilizes continuous delivery and allows clients to roll back their app in case of any issues.
The Cloud Foundry article Using Blue-Green Deployment to Reduce Downtime and Risk succinctly explains the deployment steps (since Bluemix is based on Cloud Foundry, the steps are similar to the Example: Using the cf map-route command steps in the previously cited Bluemix documentation).
I agree with Ryan's recommendation to use the blue/green approach, though the term may be unfamiliar to those new to cloud server deployments. Martin Fowler summarizes the problem it addresses in BlueGreenDeployment:
One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production. You usually need to do this quickly in order to minimize downtime. The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.
Solving this problem is one of the main benefits of PaaS.
That said, for historical context, it's worth noting this blue/green strategy isn't new to cloud computing. Allow me to elaborate on one of the "old" ways of handling this problem:
Let's assume I have a website hosted on a dedicated server, myexample.com. My public-facing server's IP address ("blue") would be represented in the DNS "@" (apex) entry or as a CNAME alias; another server ("green") would host the newer version of the application. To test the new application in a public-facing manner without impacting the live production environment, I simply update /etc/hosts to map the domain name to the green server's IP address. For example:
129.42.208.183 www.myexample.com myexample.com
Once I flush the local DNS entries and close all browsers, all requests will be directed to the green pre-production environment. Once I've confirmed all works as expected, I update the DNS entry for the live environment (myexample.com in this case). Assuming the DNS has a reasonably short TTL value, like 300 seconds, I update the A record value (if by IP) or the CNAME record value (if by alias), and the change will propagate to DNS servers within minutes. To confirm the propagation of the new DNS values, I comment out the aforementioned /etc/hosts change, flush the local DNS entries, then run traceroute. Assuming it resolves correctly locally, I perform a final double-check that all is well in the rest of the world with a free online DNS checker (e.g., whatsmydns.net).
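A command-line spot check of propagation is also handy, for example:

# confirm the apex and www records now resolve to the green server's IP
dig +short myexample.com
dig +short www.myexample.com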
The above assumes an update to the public-facing content server (e.g., an Apache server connecting to a database or application server); the switch-over from pre-production to production is more involved if the update applies to a central database or similar transactional data server. If it's not too disruptive for site visitors, I disable login and drop all active sessions, effectively rendering the site read-only. Then I go about updating the backend server in much the same manner as previously described, i.e., switching the pre-production green front end to reference a replica in the pre-production green backend, testing, and then, when everything checks out, switching the green front end to blue and re-enabling login. Voila.
The good news is that with Bluemix, the same strategy above applies, but is simplified since there's no need to fuss with DNS entries or separate servers.
Instead, you create two applications, one that is live ("blue") and one that is pre-production ("green"). Instead of changing your site's DNS entries and waiting for the update to propagate around the world, you can update your pre-production application (cf push Green pushes the new code to your pre-production application), test it with its own URL (Green.ng.mybluemix.net), and once you're confident it's production-ready, add the application to the routing table (cf map-route Green ng.mybluemix.net -n Blue), at which point both applications "blue" and "green" will receive incoming requests. You can then take the previous application version offline by unmapping it (cf unmap-route Blue ng.mybluemix.net -n Blue).
Site visitors will experience no service disruption and unlike the "old" way I outlined previously, the deployment team (a) won't have to bite their nails waiting for DNS entries to propagate around the world before knowing if something doesn't work and (b) can immediately revert to the previous known working production version if a serious problem is discovered post-deployment.
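Pulled together, the sequence described above (with the same app names and routes as in the example) is:

cf push Green                                  # deploy the new version as its own app
# test it at Green.ng.mybluemix.net
cf map-route Green ng.mybluemix.net -n Blue    # Green now also receives production traffic
cf unmap-route Blue ng.mybluemix.net -n Blue   # take the old version out of the rotation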
You should be using some sort of source control, such as Git or SVN. Bluemix is nicely integrated with IBM DevOps Services (IDS), which can leverage its own Git hosting or an external GitHub repo to manage your project. When you open your app's dashboard, you should see a link in the upper right-hand corner that says "ADD GIT"; it will automatically create a Git repo for your project in IDS.
Using an SCM tool, you can manage versions of your code with relative ease, and IDS provides the ability to deploy directly to Bluemix as part of your build pipeline.
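A minimal sketch of that day-to-day flow, assuming the IDS-created repo is your origin remote:

git add .
git commit -m "Describe the change"
git push origin master   # the IDS build pipeline can pick this up and deploy to Bluemix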
After you have your code managed as above, you can think about blue/green deployments, etc., as recommended above.

What is the business benefit of Oracle WebLogic Server over OC4J?

Apart from technology support, what are the business benefits of Oracle WebLogic Server? For example, in the areas of security, support, etc.
What are the new features supported by WebLogic?
TL;DR:
Support is great when you open a ticket with Oracle Support (strictly for WebLogic).
Great admin/read-only user implementation. We authenticate against Windows Active Directory; developers get read-only accounts, which reduces the churn of waiting for ops to transfer logs and validate settings.
The dashboard is useful out of the box for real-time monitoring, without additional tools or installs, and is easily accessed by anyone who can authenticate. We could give it to our CIO in about 3 minutes by adding him to the right authorized group in AD.
Easier to clone environments.
I haven't worked with OC4J, but I believe Oracle's roadmap picks WebLogic as their preferred Java application server. You can see it is the base technology for some of their other products, such as Oracle Service Bus, Oracle Enterprise Manager (OEM), and Oracle Line Planning.
I have opened 3 Oracle tickets in the past month and was surprised at how fast they answered. For a Severity 3 (medium) ticket, they usually responded in 2-3 days. I can't say the same for their other services (over 2 weeks for a ticket on OEM).
Security is a pretty broad scope, so you'd have to be a little more specific about which aspects of security you mean.
One thing that is pretty awesome is the dashboard: http://docs.oracle.com/cd/E14571_01/web.1111/e13714/dashboard.htm. You can add read-only monitor accounts so other users can get insight into performance. We add developers so that they can validate settings or check performance whenever there is a production issue.
We use Microsoft Active Directory authentication in our WebLogic domains. People do not use the default weblogic administrator user, so configuration changes are audited. When someone's account is disabled on leaving the company, their access to WebLogic is disabled as well; you don't have to change a shared password.
Another useful feature is the ability to automatically archive config changes: each time someone makes a configuration change, a backup is created automatically. This allows me to fix things when developers break their environment without having to majorly reverse-engineer what they did.
I also like the fact that you can pack and unpack domains. I've used it to move entire domains from staging to production with some minor changes, i.e., changing all "stg" variables to "prod". This likewise makes it easier to clone environments when you want to build out a new one.
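A sketch of that pack/unpack flow (the paths and names below are hypothetical):

# on the source machine: capture an existing domain as a template jar
$WL_HOME/common/bin/pack.sh -domain=/u01/domains/stg_domain -template=/tmp/stg_domain.jar -template_name="Staging Domain" -managed=false
# on the target machine: recreate the domain from the template
$WL_HOME/common/bin/unpack.sh -domain=/u01/domains/prod_domain -template=/tmp/stg_domain.jar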
Although not directly related, I should mention Oracle Enterprise Manager (OEM). We are an Oracle shop because they seem to have given us a good deal on licensing, so we get to run OEM, a tool slowly becoming more and more useful. The agent also reports how our Red Hat Linux hosts are behaving: network input/output, CPU utilization, memory utilization, Java heap stacks. We are going to move to defining groups that contain all the targets related to an application stack. This will give our operations team the insight to see where a bottleneck might be: the WebLogic web layer, the network, Oracle Service Bus, or Oracle Database performance.
Supposedly, you can add JBoss and other JMX monitoring to OEM as well. It's on our to-do list for non-WebLogic instances; we're slowly rolling OEM out.

What is the definition of a Heroku production app?

I have an app running happily on Heroku, but it's registered as a 'development' app, and I can't for the life of me find any formal definition of what a 'production' app is, despite the apps dashboard and the status page making a very clear distinction between the two.
I have come across this explanation of the status of the two, which suggests that the difference is implicit (based on usage) rather than explicit (based on some configuration / setup):
Production issues are those that affect running, stable, production applications that have at least two web dynos and use a production-grade database (or no database at all). Includes dynos, database, HTTP caching, other platform components (DelayedJob workers, scheduler, etc.), and routing.
Development issues are those that affect the health of deployment workflow and tools. Includes deployment (git push, gem installation, slug compilation, etc.), general git activity, command line gem/API (scaling up/down, changing configuration, etc.), and related services (rake, console, db push/pull with TAPS, etc.). Development also includes issues specific to the operation of non-production applications such as unidling free 1-dyno apps and the operation of development databases.
Even these explanations reference a mysterious difference between development and production databases, although there is no corresponding explanation of the difference anywhere. Is the $9/month 'Basic' Postgres plan a 'production' database?
[UPDATE]
There is now a 'Run Production Check' link on the app dashboard within your Heroku account that shows the steps used to determine your app's status.
I've been trying to find the same answer to your question.
So far, from what I can glean, the Shared, Dev, and Starter Heroku Postgres database plans are not considered production; only Crane and higher are considered production-grade plans.
The Heroku Postgres production tier starts with Crane and extends through to the Mecha plan. Shared, Dev and Starter plans are not production databases.
https://devcenter.heroku.com/articles/maximizing-availability
In addition, it seems that Heroku has a plugin that checks whether your app meets their guidelines:
https://github.com/heroku/heroku-production-check
I'm not sure whether scaling web dynos between 2 (during the day) and 1 (at night) with a Crane database would be considered "production" in Heroku's eyes.
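For what it's worth, you can check both criteria from the Heroku CLI (the app name here is hypothetical):

heroku ps -a my-app        # dyno types and counts (production implies at least 2 web dynos)
heroku pg:info -a my-app   # the Postgres plan/tier in use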
Hope that helps!
