A report that deployed fine before is now not working (the usual story).
The weird part is that /server/Reports/ showed the new deployment timestamp, whereas /server/ReportServer/ still showed the previous timestamp.
Both the /server/Reports/ and /server/ReportServer/ paths run the old report.
There are no clues in the report server log below from the time of deployment:
library!ReportServer_0-1!1cec!03/11/2015-08:37:46:: Call to GetSystemPropertiesAction().
library!ReportServer_0-1!1ec!03/11/2015-08:37:46:: Call to GetItemTypeAction(/test/Equipment Management/Equipment Maintenance Backlog).
library!ReportServer_0-1!1cec!03/11/2015-08:37:46:: Call to GetItemTypeAction(/Data Sources/Viewpoint).
library!ReportServer_0-1!1cec!03/11/2015-08:37:46:: Call to GetItemTypeAction(/Datasets/CompanyLookup).
library!ReportServer_0-1!1ec!03/11/2015-08:37:46:: Call to GetItemTypeAction(/test/Equipment Management/Equipment Maintenance Backlog/Equipment Maintenance Backlog).
library!ReportServer_0-1!1ec!03/11/2015-08:37:46:: Call to GetItemDataSourcesAction(/test/Equipment Management/Equipment Maintenance Backlog/Equipment Maintenance Backlog).
library!ReportServer_0-1!1cec!03/11/2015-08:37:46:: Call to GetItemTypeAction(/test/Equipment Management/Equipment Maintenance Backlog/Equipment Maintenance Backlog).
library!ReportServer_0-1!1cec!03/11/2015-08:37:46:: Call to GetReportItemReferencesAction(/test/Equipment Management/Equipment Maintenance Backlog/Equipment Maintenance Backlog).
library!ReportServer_0-1!1cec!03/11/2015-08:37:46:: Call to CreateReportAction(Equipment Maintenance Backlog, /test/Equipment Management/Equipment Maintenance Backlog, True).
Resolved by deleting the old deployed report via Report Manager (/server/Reports/) before deploying again.
Q: Is this normal? Having to delete a deployed instance of a report before deployment seems flaky and a waste of time.
Q: Could it be caching from Google Chrome (+ IE Tab)?
Q: Could it be something with my dev directory setup?
Q: Could it be that I am using the free version of VS2010?
Q: Could it be that I am fishing for answers when this is simply how deployment is done in the VS and Microsoft environment?
Sometimes it is preferred and/or required to host dozens of applications on a single server. Not saying this is "right" or "wrong," I'm only saying that it happens.
A downside to this configuration is that the error message "Waiting for the script in task [TASK ID] to finish as this script requires that no other Octopus scripts are executing on this target at the same time" appears whenever more than one deployment to the same machine is running. It seems like Octopus Deploy is fighting itself.
How can I configure Octopus Deploy to wait for one deployment to completely finish before the next one is started?
Before diving into the answer, it is important to understand why that message is appearing in the first place. Each time a step is run on a deployment target, the Tentacle will create a "mutex" to prevent other projects from interfering with it. An early use case for this was updating the IIS metabase during a deployment. In certain cases, concurrent updates would cause random errors.
Option 1: Disable the Mutex
We've seen cases where the mutex is the cause of the delay. The mutex is applied per step, not per deployment. It is common to see a situation where it looks like Octopus is "jumping" between deployments. Depending on the number of concurrent deployments, that can slow down the deployment. The natural thought is to disable the mutex altogether.
It is possible to disable the mutex by adding the variable OctopusBypassDeploymentMutex and setting it to True. That variable can exist in either a specific project or in a variable set.
More details on what that variable does can be found in this document. If you do disable the mutex, please test it and monitor for any failures. For the most part, we don't see issues when the mutex is disabled, but they have come up from time to time. It depends on a host of other factors such as application type and Windows version.
Option 2: Leverage Deploy a Release Step
Another option is to coordinate the projects using the Deploy a Release step. Typically this works best when the projects being deployed are part of the same application suite. As an example, I have five "deployment" projects:
Azure Worker IaC
Database Worker IaC
Kubernetes Worker IaC
Script Worker IaC
OctoStudy
The project Unleash the Kraken coordinates deployments for those projects.
It does this by using the Deploy a Release step. First it spins up all the infrastructure, then it deploys the application.
This won't work as well if the server is hosting 50 disparate applications.
Option 3: Leverage the API to check for running deployments
The final option is to include a step at the start of each project which hits the API to check for active deployments to the same deployment targets. If an active deployment is found, wait until it is done.
You can do this by hitting the endpoint https://[YOUR URL]/api/[SPACE ID]/machines/[Machine Id]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[SPACE ID]&includeSystem=false. That will tell you all the active tasks being run for a specific machine.
You can get Machine Id by pulling the value from Octopus.Deployment.Machines. You can get Space Id by pulling the value from Octopus.Space.Id.
The pseudo code for this approach could look like this (I'm not including the actual code as your requirements could be very different).
activeDeployments = true
while (activeDeployments)
{
    activeDeployments = false
    foreach (machineId in Octopus.Deployment.Machines)
    {
        activeTasks = https://[YOUR URL]/api/[Octopus.Space.Id]/machines/[machineId]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[Octopus.Space.Id]&includeSystem=false
        if (activeTasks.Count > 0)
        {
            activeDeployments = true
        }
    }
    if (activeDeployments == true)
    {
        Sleep for 5 seconds
    }
}
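For reference, here is a minimal standalone Python sketch of that loop. It assumes the requests library and uses placeholder values for the server URL, space ID, API key, and machine IDs (in a real step you would read the machine IDs from Octopus.Deployment.Machines, and you may also need to filter out the current deployment's own task):

# Minimal sketch: poll the Octopus API until no other deployments are
# executing on the given machines. The URL, space ID, API key and machine
# IDs below are placeholders -- substitute your own values.
import time
import requests

OCTOPUS_URL = "https://your-octopus-server"      # placeholder
SPACE_ID = "Spaces-1"                            # placeholder
API_KEY = "API-XXXXXXXXXXXXXXXX"                 # placeholder
MACHINE_IDS = ["Machines-123", "Machines-456"]   # e.g. Octopus.Deployment.Machines split on ","

HEADERS = {"X-Octopus-ApiKey": API_KEY}

def active_task_count(machine_id):
    # Ask the server for Deploy tasks that are currently Executing or
    # Cancelling and that target this machine.
    url = (f"{OCTOPUS_URL}/api/{SPACE_ID}/machines/{machine_id}/tasks"
           "?skip=0&name=Deploy&states=Executing%2CCancelling"
           f"&spaces={SPACE_ID}&includeSystem=false")
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    return len(response.json().get("Items", []))

# Wait until none of the target machines has an active deployment task.
while any(active_task_count(machine_id) > 0 for machine_id in MACHINE_IDS):
    time.sleep(5)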
I had this message hit me because I had reached the Task Cap on the Octopus Server.
In Octopus, under Configuration > Nodes, change the task cap to 1 to allow only one deployment at a time, even with agents on different servers; the message will then display constantly.
Or simply increase this value to prevent the message from occurring at all.
Over the weekend someone renamed one of our cube databases, leading to massive headaches and SQL job failures. I would like to know whether a cube database rename action is logged anywhere, and any related details. I tried replicating the rename in the development environment and searching in Event Viewer, without much luck. Any leads will be appreciated!
The key mechanisms for maintaining error logs for Analysis Services are:
1. Keep track of the data stored in msmdsrv.log. It will be necessary to copy the log off before it gets overwritten (a sketch of archiving it follows below).
2. If you are using Analysis Services 2005, 2008, or 2008 R2, you can generate your own trace events as noted in the System-wide Trace file section of the post Analysis Services Processing Best Practices at: http://technet.microsoft.com/en-us/library/cc966525.aspx#EBAA
3. If you are using SQL Server 2012, you can use the XEvents feature as noted in the SSAS documentation Use SQL Server Extended Events (XEvents) to Monitor Analysis Services at: http://msdn.microsoft.com/en-us/library/gg492139.aspx
By using the above mechanisms, you can keep track of the log going forward.
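For the first option, here is a minimal Python sketch of copying msmdsrv.log to a timestamped archive before it gets overwritten; the install path below is an assumption for a default SQL Server 2012 Analysis Services instance, so adjust it (and the archive location) for your environment:

# Minimal sketch: archive msmdsrv.log with a timestamp so its entries survive
# the log being overwritten. Both paths are assumptions -- adjust for your instance.
import shutil
from datetime import datetime
from pathlib import Path

LOG_FILE = Path(r"C:\Program Files\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Log\msmdsrv.log")
ARCHIVE_DIR = Path(r"D:\SSASLogArchive")  # placeholder archive location

ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
shutil.copy2(LOG_FILE, ARCHIVE_DIR / f"msmdsrv_{stamp}.log")

Run on a schedule (for example a SQL Agent job or Task Scheduler task), this keeps a rolling history of the log.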
I'm new to Oracle BIEE. My development environment is now installed, and the project is fairly large. We are currently using multi-user development. The problem happens when one developer publishes the RPD to the network and wants to test the data: the server takes so long to reload the RPD file that I can hardly wait. When multiple users want to test the RPD file, it is unbearable... Is there any other way to solve this problem, or a way to make the BIEE server reload the RPD file more quickly?
It's hard to say specifically without knowing a bit more about your setup, but here are a few general advice pointers:
When stopping the service, OBI will wait for any running queries to complete before stopping, so make sure there's nothing running before you try to do this.
Make sure you're only restarting the BI Server component, you don't need to wait for the other services to restart if you're just changing the RPD (if you're on 11g then deploying through EM should mean this happens anyway so you don't need to worry).
If you're using 11g, you could try incremental updates by creating patches.
Check whether the hardware you're running on is adequate, most importantly that you've enough RAM so it's not having to page out to disk when it loads the RPD.
Remove anything unused from the RPD to make it smaller.
I have a CLR Project that I'm trying to publish using Visual Studio. I had to change the project to a SQL Data Tools project, and now it's not publishing. Each time I try, I get a timeout error. When I take it step-by-step, I find this line of code hangs on my server.
IF EXISTS (
SELECT 1
FROM [master].[dbo].[sysdatabases]
WHERE [name] = N'fwDrawings')
BEGIN
ALTER DATABASE [fwDrawings]
SET READ_COMMITTED_SNAPSHOT OFF;
END
Basically, I know it's trying to force the server into single-user mode when I try to publish this up. It's just to my staging server and not to a production server, but this is still a problem. I can't keep kicking everyone off the server and switching it to single-user mode every time I want to update the CLR while I'm testing its functionality. And I don't want to wait for a maintenance cycle or down-time to promote it up to production. Is there a way around this?
Presumably you have READ_COMMITTED_SNAPSHOT turned on for your database.
If this is the case, you need to change your Database project settings to match. Check "Read committed snapshot" transaction isolation, within the Operational tab in Database Settings for the project.
For me, this prevented the publish timing out, i.e. I can now publish successfully.
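If you want to confirm the mismatch before changing the project settings, here is a minimal sketch (assuming Python with pyodbc and a trusted connection; the server name and ODBC driver are placeholders, and the database name is taken from the question) that checks whether read committed snapshot is enabled on the target database:

# Minimal sketch: check whether READ_COMMITTED_SNAPSHOT is enabled on the
# target database so the SSDT project setting can be made to match.
# The server name and driver are assumptions -- adjust for your environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YourStagingServer;DATABASE=master;Trusted_Connection=yes;"
)
row = conn.cursor().execute(
    "SELECT is_read_committed_snapshot_on FROM sys.databases WHERE name = ?",
    "fwDrawings",
).fetchone()
print("READ_COMMITTED_SNAPSHOT is", "ON" if row and row[0] else "OFF")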
For a safer way to deploy to a server that's in use, try using a schema comparison instead.
We have CC.NET as our continuous integration environment. CC gets every commit from Git, builds it, and publishes it to the server.
This is the config:
<buildpublisher>
<sourceDir>Path_to_dir_with_source</sourceDir>
<publishDir>path_to_deploy</publishDir>
<cleanPublishDirPriorToCopy>true</cleanPublishDirPriorToCopy>
<useLabelSubDirectory>false</useLabelSubDirectory>
<alwaysPublish>true</alwaysPublish>
</buildpublisher>
But our QA engineer wants to get a "fresh" build every morning, not 20 times a day :)
Does anybody know how to do this with CC.NET?
[UPDATE]
We still need to build every commit, but push the build to the web server only once a day.
Use the ScheduleTrigger block:
<scheduleTrigger time="23:30" buildCondition="ForceBuild" name="Scheduled">
<weekDays>
<weekDay>Monday</weekDay>
</weekDays>
</scheduleTrigger>
You can also use a cronTrigger, or an intervalTrigger, but for your case, the scheduleTrigger seems simplest.
http://www.cruisecontrolnet.org/projects/ccnet/wiki/Trigger_Blocks
Added to address the comment:
You can have multiple triggers in one Project block.
We have an Interval Trigger on our end as well as a Schedule Trigger. You can have as many triggers as you need.
If you want to keep the builds separate, you can also have a completely separate Project block - one that operates on a Schedule trigger, and one that operates on an interval - but there are risks, and the configuration needs to be planned. The following potential considerations come to mind:
If you do this, be sure you check out the source code to different directories, or you can run into conflicts if both are running at the same time.
Even if the source code checkout is to different directories, you can still run into conflicts if both projects are publishing to the same output location.
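To address the update (build every commit, but publish only once a day), here is a rough sketch of what the two project blocks could look like in ccnet.config. The project names, interval, schedule time, and the elided sourcecontrol/tasks sections are placeholders to adapt to your setup:

<!-- CI project: builds on every commit, has no buildpublisher -->
<project name="MyApp-CI">
  <triggers>
    <intervalTrigger seconds="60" buildCondition="IfModificationExists" />
  </triggers>
  <!-- sourcecontrol and tasks as in your existing project -->
</project>

<!-- Nightly project: forced build once a day, publishes to the web server -->
<project name="MyApp-Nightly">
  <triggers>
    <scheduleTrigger time="05:00" buildCondition="ForceBuild" name="Nightly" />
  </triggers>
  <!-- sourcecontrol and tasks as above, checked out to a different workingDirectory -->
  <publishers>
    <buildpublisher>
      <sourceDir>Path_to_dir_with_source</sourceDir>
      <publishDir>path_to_deploy</publishDir>
      <cleanPublishDirPriorToCopy>true</cleanPublishDirPriorToCopy>
      <useLabelSubDirectory>false</useLabelSubDirectory>
      <alwaysPublish>true</alwaysPublish>
    </buildpublisher>
  </publishers>
</project>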
You can add a schedule trigger that forces a build of your project at a specific time, i.e. the following would run a build every work-day at 5 am in the morning:
<triggers>
<scheduleTrigger time="05:00" buildCondition="ForceBuild">
<weekDays>
<weekDay>Monday</weekDay>
<weekDay>Tuesday</weekDay>
<weekDay>Wednesday</weekDay>
<weekDay>Thursday</weekDay>
<weekDay>Friday</weekDay>
</weekDays>
</scheduleTrigger>
</triggers>