I'm working on deploying a SQLProj (database project) via VS2013 and I'm encountering an issue I could use some feedback on.
When I publish changes, it executes a CCRD (Create, Copy, Rename, Delete) operation for a table: the Create and Copy in the pre-deployment script, and the Rename and Delete in the post-deployment script. I have these statements encapsulated in a TRY...CATCH block that prints out error messages. When the pre- and post-deployment scripts encounter issues, the publish still completes and reports as successful. I need different behavior: any error encountered within these scripts should cause the publish to fail. I've tried the THROW command, and I've tried RAISERROR with severity 20, which according to BOL should terminate the connection, but the publish still completes with a status of "Successful".
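For illustration, a simplified version of the kind of pre-deployment block I'm describing (table and column names are placeholders, not my real schema):

BEGIN TRY
    -- create the replacement table
    CREATE TABLE dbo.MyTable_New (Id INT NOT NULL, Name NVARCHAR(100) NULL);

    -- copy the existing data across
    INSERT INTO dbo.MyTable_New (Id, Name)
    SELECT Id, Name FROM dbo.MyTable;
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();
    THROW;  -- re-raised here, yet the publish still reports success
END CATCH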
Something we have done is split the process: we use SQLPackage.exe with the Script action, then use SQLCMD.exe to execute the generated script. That way you have ONE script that contains your pre/main/post-deployment steps, and SQLCMD will stop execution when errors occur.
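A rough sketch of the two steps (server, database, and file names below are placeholders):

REM 1) generate one combined pre/main/post script without executing it
SQLPackage.exe /Action:Script /SourceFile:MyDb.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDb /OutputPath:Deploy.sql

REM 2) run it; -b makes sqlcmd abort and return a non-zero exit code on error
SQLCMD.exe -S MyServer -d MyDb -b -i Deploy.sql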
It sounds like you're doing the right thing, and it should work.
If I had to guess, I'd say:
you may be ignoring errors (an :on error ignore somewhere in your project? Or SQL Server or your environment is configured to treat errors as warnings and ignore them?)
you may not actually be throwing the errors - can you post a sample script somewhere for us to see?
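For example, a sqlcmd-mode script can control this behaviour explicitly near the top; exit stops the whole script on the first error, while ignore carries on:

:on error exit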
Related
I send SQL scripts to a company where the operators run my scripts in TOAD. They have Oracle 10g. I am not allowed to create stored procedures, and I can send one script every day. Because I work on several projects, which have independent scripts, I send the main script like this:
#a.sql;
#b.sql;
#c.sql;
The three external scripts (say projects scripts) may be short or long, may call other external scripts, may contain PL/SQL blocks.
Normally, all my project scripts run, and I can get their result tables. But sometimes, some of my scripts fail. If a.sql raises any error, my script stops and doesn't run b.sql and c.sql.
In the case of an error, I want to stop processing the current project's script only, and I want my main script to continue with the next project script. If b.sql fails, too, then I want to continue with c.sql.
I tried WHENEVER SQLERROR, but neither EXIT nor CONTINUE does this. EXIT stops my whole script; CONTINUE doesn't skip the remaining part of the erroneous script (wasting a lot of time). I tried an EXCEPTION block in PL/SQL, but I could not call an external project script from there. Is there any solution for this?
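For illustration, this is the structure I tried (using the same notation as above):

WHENEVER SQLERROR CONTINUE
#a.sql;
#b.sql;
#c.sql;
-- CONTINUE still executes the remaining lines of a failing project script;
-- WHENEVER SQLERROR EXIT aborts the main script entirely.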
We have to use the command-line configuration for vstest.console (to be able to specify the runsettings file I want to use at the time of running the job). But now it does not create the results within Bamboo like the vstest.console Bamboo task does.
First question: does the mstest parser create that kind of results? Is there a way to do this after running the command prompt? Also, since they both create trx files, can I use the mstest parser for the trx created by vstest.console?
Second question: I don't see in my log that it kicked off that step. Is there anything special I need to do to have it kick off? Especially if the previous step fails?
Also, the .trx logger setting is not renaming the trx to my TestResult.trx file name.
I looked in the list of apps to see if there is a vstest.console version of the parser; there is not. We are using version 6.6.3, so we are a bit behind, but I'm not sure if this means anything for the parser.
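For reference, the command we run looks roughly like this (paths and names are placeholders):

vstest.console.exe MyTests.dll /Settings:MySettings.runsettings /Logger:"trx;LogFileName=TestResult.trx"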
It must have just needed a reboot. I don't know why, but I came back in after the weekend, the servers had been rebooted, and with no change to what I said above, the results are showing up as expected.
I have created a database project in Visual Studio 2013 from an existing database. Then I made a lot of changes in the database project: modified stored procedures, the post-deployment script, table structures, etc. Now the database project is ready to deploy. But I am worried that if any script fails, how can I retain the original state, even though the project builds properly?
Please suggest how, if any query fails, I can ROLLBACK all the changes that I have made in the database project.
Firstly you need to trust your tools and either believe they will work or find other tools.
While you are building that trust, I would add a backup to the pre-deployment script, or run a backup before you deploy; then if anything goes wrong you can restore and figure out what went wrong.
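A minimal sketch of such a pre-deployment backup (database name and path are placeholders):

-- safety net before the deployment changes run
BACKUP DATABASE MyDb
    TO DISK = N'D:\Backups\MyDb_PreDeploy.bak'
    WITH INIT, COPY_ONLY;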
As David said, to roll back you would take the previously deployed dacpac and generate a new deployment script from it, but fixing forward is almost always the correct thing to do rather than rolling back to a previous version.
Ed
Have you been checking your changes into version control? If so, all you need to do is revert to the last known good version.
Or... simply work out why it's failing now and fix the root cause?
I used DB projects some time ago, and as far as I remember the deploy script was wrapped in a transaction. It is possible to generate the SQL script without executing it; that setting is somewhere in the DB project settings. You can take a look inside the generated script and make sure that it will roll back on error.
Doing a backup would still be a recommended practice, especially when you deploy to production.
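If I remember correctly, the transaction option can also be passed to SqlPackage directly when publishing (property name from memory, so verify it against your version; other values are placeholders):

SQLPackage.exe /Action:Publish /SourceFile:MyDb.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDb /p:IncludeTransactionalScripts=True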
When working on important scripts I developed a habit of always starting a transaction and commenting out the COMMIT.
If you accidentally run the script, it won't take effect. The commented-out COMMIT would only be uncommented when the work was done and verified.
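The pattern looks like this (the table and values are just examples):

BEGIN TRANSACTION;

UPDATE dbo.MyTable SET Name = 'fixed' WHERE Id = 1;  -- the actual work

-- COMMIT;    -- uncomment only once the results have been checked
-- ROLLBACK;  -- run this instead if something looks wrong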
While this answer indicates that you CAN revert in source control (assuming SSDT at this point), it would be nice to get a pointer to the exact process for doing so. On a file-by-file basis the history works the same, but how to revert the entire database at once isn't immediately obvious.
I have a Jenkins master/slave set up which has been working quite happily, running Oracle imports on some Linux boxes.
I have just added a new slave node and tried to run our existing database import job on this new node. This job consists of three subprojects: the first runs some execute shells (copying files and changing permissions) and currently completes successfully; the second runs an execute shell which ends with an Oracle impdp. The impdp completes (the DB exists and ps -ef no longer shows impdp running), but the Jenkins subproject never finishes. The UI just sits there with the clock whirring around.
I've tried adding an echo after the impdp, and this also executes correctly, but the subproject still never finishes.
If I add a Post-Build email notification, it is not sent.
The third subproject is never reached.
What could be the cause of this and how do I debug what is happening?
In our case, the jobs would declare "Finished: SUCCESS", but then continue with some unknown Jenkins business for another 10 or 20 minutes. After putting on more detailed logging, we found it was related to the ill-named LogRotator.
We have thousands of old builds and are deleting the artifacts for those older than a certain number of days. Because of the way old builds are handled, Jenkins searches the entire list of old builds even though they have already had their artifacts removed.
There is an issue related to this that has since been fixed: https://issues.jenkins-ci.org/browse/JENKINS-22607
As of right now I do not see it in a release, but if you have this issue, the temporary workaround is to turn off the deletion.
This turned out to be something horrible :-)
After finishing the work, Jenkins tries to kill all processes it spawned. To identify them, it goes through all processes in the OS, reads /proc/<pid>/environ (this is a Linux box), which contains each process's environment variables, and compares them with the environment variables it sets on Jenkins-spawned processes.
The problem was that there was one particular Oracle process running on our DB server where, if you tried to read /proc/<pid>/environ for it, the read would just hang forever - which is where the Jenkins code got stuck.
I have no idea why it was getting stuck like this, and neither did our DBA. We restarted the process and now it works.
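If you hit something similar, a rough way to find the offending process is to read every environ with a timeout (a sketch, assuming GNU timeout is available on the box):

# flag any process whose environ cannot be read within 5 seconds
for pid in /proc/[0-9]*; do
  timeout 5 cat "$pid/environ" > /dev/null 2>&1 || echo "$pid/environ hung or failed"
done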
You can add set -x to the top of shell scripts to make the shell echo each command as it executes it (set +x turns this off). That way you should be able to easily see from the output which command is blocking.
I have an issue with Jenkins running a PowerShell script. Long story short: the script takes 8x longer to execute through Jenkins than when I run it manually on the server (slave), where it takes just a few minutes.
I'm wondering why.
The script contains functions which invoke commands like & msbuild.exe or & svn commit. I found out that the script hangs on the lines where the aforementioned commands are executed. The result is that Jenkins times out because the script takes that long. I could raise the timeout threshold in the Jenkins job configuration, but I don't think that is the solution to the problem.
There is no error output or any other information about why it takes that long, and I have no further ideas about the reason. Maybe one of you could tell me how Jenkins internally invokes those commands.
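Simplified, the invocations inside the script look like this (the arguments are placeholders, not my real ones):

& msbuild.exe MySolution.sln /t:Build
& svn commit -m "automated commit" .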
This is what Jenkins does (Windows batch plugin):
powershell -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1
I created my own PowerShell CI service before I found that Jenkins has its own plugin for this. But in my implementation, and in my current job configs, we follow the segregation principle: more separation is better. I found that my CI service works better when it is separated into different steps (in case of error it's also much easier to do root cause analysis). The single responsibility principle is helpful here too. So in Jenkins we have the pre-, post-, build, and email steps as separate scripts.
About msbuild.exe: as far as I remember, in my case there were issues related to operations on file system paths. When the script was divided into separate functions we had better performance (with additional checks of parameters).
Use "divide and conquer" technique. You have two choices: modify your script so that will display what is doing and how much it takes for every step. Second option is to make smaller scripts to perform actions like:
get the source code,
compile/build the application,
run the test,
create a package,
send the package,
archive the logs
send notification.
The most problematic step is usually the first one: getting the source code from Git, SVN, Mercurial, or whatever you have as a version control system. Make sure this step is not embedded in your script.
During the job run, Jenkins captures the output and uses AJAX to display the result in your browser. In the script, make sure you flush standard output after every step or every few steps. Some languages buffer standard output, so you can see the results only at the end.
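In PowerShell, for example, something like this after each step pushes any buffered output out (a sketch, not specific to any plugin):

Write-Output "step finished"
[Console]::Out.Flush()  # force buffered stdout into the Jenkins console log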
You can also create log files, which are helpful for archiving and verifying the activity status of older runs. From my experience, using Jenkins with more than 10 steps requires you to create a specialized application that can run multiple steps, like Robot Framework.