Right now I'm running Jenkins as a Windows service. Most likely because of that, I'm unable to spawn my test application's window; it just appears as a background process. I already tried the Log On option to allow the service to interact with the desktop, but that doesn't seem to change anything in this case.
I'd like to try running Jenkins through jenkins.war, but the first thing I noticed is that it starts the whole configuration process, prompting about plugins etc. The application is already configured and has all the builds in place.
Could someone explain whether it's necessary to reconfigure everything, or does Jenkins just keep all the settings in a different location when running as a service? There was no dedicated user; the service ran under the Local System account.
So the solution was fairly simple. The main problem was that when Jenkins ran as a service, the entire configuration was saved in its install directory, whereas running jenkins.war looks for configuration in the current user's home directory, so it behaved like a fresh installation.
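For what it's worth, where Jenkins keeps its configuration is controlled by the JENKINS_HOME environment variable, so one way to point the war at the existing data is to set it before launching (the paths here are examples; adjust them to your install):

set JENKINS_HOME=C:\Program Files (x86)\Jenkins
java -jar jenkins.war --httpPort=8080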
The thing that helped was the ThinBackup plugin. I ran a backup while the service instance was running, then restored it into the new instance, which brought back all plugins, configuration, and builds. After that, the passwords for source control also had to be updated (the restored ones didn't work, probably because of a per-instance encryption key).
I'm testing communication among multiple instances of my UWP app by running it simultaneously on both my local machine and a remote machine. The Debug properties in Visual Studio are limiting in that I must choose between either local or remote, which forces me to perform several manual steps whenever I modify the app's code and start a new debugging session:
Open the Debug properties, where I change from remote to local
Build and deploy the solution (locally)
Undo that change and start remote debugging
Launch the app locally and attach to its process (a sketch for this step follows below)
What a pain! How can I automate these steps so that I get the same effect just by clicking "Start"?
I tried adding the executable to the solution directly, as suggested here, but various errors prevent it from running. In any case, that wouldn't help with local deployment.
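For the "launch the app locally" part (step 4), one building block is launching a UWP app from a script by its package family name. A minimal sketch, assuming the installed package name contains "MyApp" (all names here are placeholders):

# Find the installed package; the name filter is a placeholder.
$pkg = Get-AppxPackage -Name '*MyApp*'

# UWP apps launch via an application user model ID; '!App' is the default
# application id declared in Package.appxmanifest.
Start-Process "shell:AppsFolder\$($pkg.PackageFamilyName)!App"

Attaching the debugger to the launched process would still have to happen from Visual Studio.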
I'm currently developing a custom workflow with many custom behaviors and scripts, using the Alfresco Maven SDK to build and test the project. This means I have to restart the repository-tier Tomcat server every time I change or update my workflow files. Each restart takes a long time, so I waste a lot of time waiting, especially when the change was just a small typo in one of my files.
I'm looking for a way (if it's possible) to update my files (in particular the BPMN process file) and apply the changes to my Alfresco instance without having to restart the Tomcat servers each time. I've set the redeploy option to true in my service-context.xml, and I have also tried to redeploy the workflow from the admin workflow console, but my changes do not take effect unless I manually restart the server.
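For context, that flag normally sits on the workflow deployer bean in service-context.xml; a sketch of the usual shape (the bean id and file path are examples, not taken from the question):

<bean id="myModule.workflowBootstrap" parent="workflowDeployer">
  <property name="workflowDefinitions">
    <list>
      <props>
        <prop key="engineId">activiti</prop>
        <prop key="location">alfresco/workflow/my-process.bpmn20.xml</prop>
        <prop key="mimetype">text/xml</prop>
        <prop key="redeploy">true</prop>
      </props>
    </list>
  </property>
</bean>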
I am using: Alfresco Community 5.2, Maven SDK 2.2
Any tips or suggestions would be very welcome!
Yes, you can do it through the workflow admin console.

URL:

http://<server>:<port>/alfresco/s/admin/admin-workflowconsole

Example:

deploy alfresco/workflow/<workflow-definition>.xml

where the argument is the path to your workflow definition file.

Refer to this doc for more information:

https://community.alfresco.com/docs/DOC-5079-workflow-console
I need to run a console app after deployment, but the tool should only be run once per environment. I have two roles defined per environment, DatabaseServer and AppServer. The tool should be run on the AppServer machine.
I deploy the console app using a NuGet package and a custom PowerShell script that copies it to the correct location on the AppServer machine.
Everything is fine when there is only one machine in the AppServer role, but I cannot think of an elegant way of ensuring the tool is run exactly once when there is more than one machine in the AppServer role.
The only way I can think of is to specify a variable per environment containing the name of the machine the tool should run on. In the PowerShell script I could check this value and, if it's not equal to the current machine name, just exit the script successfully (sketched below).
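A minimal sketch of that guard, assuming a hypothetical per-environment Octopus variable named ToolRunnerMachine (the variable name and tool path are illustrative, not from the question):

# Hypothetical Octopus variable naming the machine that should run the tool.
$target = $OctopusParameters['ToolRunnerMachine']

if ($env:COMPUTERNAME -ne $target) {
    Write-Host "Skipping tool run: $env:COMPUTERNAME is not $target."
    exit 0   # exit successfully without doing anything
}

# Example path; substitute wherever your script copies the tool to.
& 'C:\Tools\MyConsoleApp.exe'
exit $LASTEXITCODE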
This doesn't feel right, though: it needs a variable per environment, and if the machines in the AppServer role change (and the one the tool should run on is removed), the deployment will be classed as successful even though the tool never ran.
Is there anything simple that I've overlooked? Or will I have to make do with this approach? (which makes me sad)
You could give the AppServer machine that will run this tool a second role, such as ToolRunner. You could then create a deployment process step that targets the ToolRunner role and executes the console application on the one and only machine that happens to have that role.
This requires no change to your PowerShell scripts or your Octopus variables. However, should you remove the ToolRunner-tagged machine from the environment, you would have to remember to assign the role to a new machine for your process to keep working as you intend.
I have a Jenkins/Hudson CI server, hosted on a dedicated server (kindly hosted by someone else). We have run into a problem we cannot solve and need help from people who may know solutions:
When we try to run a build, we get a Build Failed with the following error:
java.io.IOException: Unable to delete C:\Program Files (x86)\Jenkins\jobs\JumpPorts-2\workspace
Jenkins was able to create the files, so surely it can delete them? It is running as a service, and it is cloning the source (Maven/Java) from GitHub. This is on a Windows server. I tested it on my VPS (CentOS 5) and it worked correctly, but because it is a VPS, Java does not run well alongside my other services, so I am unable to host it there.
Full Error: http://pastebin.com/0tWVVdiH
Thanks in advance
Most likely you are using the Maven project type.
The Maven project type can parse the POM on disk before building and while the GUI is being accessed. As a result, when building on Windows, there is a chance that Windows' strict file locking gets in the way, marking a file as in use until absolutely every file handle is released.
One way to reduce this issue is to have the Windows builds run on a slave node rather than the master (note that the slave node can be the same physical machine; because the remoting channel is required to see the slave's filesystem, the file handles may not be as big an issue).
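For reference, a JNLP slave on the same box can be launched with something like the following (the node name and URL are examples; the exact command is shown on the node's page in Jenkins):

java -jar slave.jar -jnlpUrl http://localhost:8080/computer/windows-builder/slave-agent.jnlp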
Another way to reduce this issue is to switch to a FreeStyle project with a Maven build step. Note that my personal preference is to avoid the Maven project type on pain of death. ;-)
I work for a fairly new web development company, and we are currently testing Subversion installations to implement a versioning system. One of the features we need the versioning system to perform is updating the development server with an edited file once it has been committed.
We would like to maintain one server for all of our SVN repositories even though, due to system requirements, we need to maintain several separate development servers. I understand that the updates are fairly simple when the development server resides in the same location as SVN, but that is just not possible for us. So we need to map separate network drives to the SVN server for each development server.
However, this errors on commit. Here is my working copy test directory, as referenced in the post-commit.bat file:
SET WORKING_COPY=Z:\testweb
This, however, results in an error...
post-commit hook failed (exit code 1) with output: svn: Error resolving case of 'Z:\testweb'
I'm sure this is because the server is not running as the same user as me and therefore does not have the share I need mapped to Z:. I just have no idea how to work around this. Can anyone help?
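One common workaround, since drive mappings are per-user and invisible to the account the SVN server runs under, is to reference the share by its UNC path in the hook instead (the server and share names below are examples):

REM Use the UNC path rather than a per-user drive letter.
SET WORKING_COPY=\\devserver\testweb
svn update "%WORKING_COPY%"

The account the SVN server runs as still needs permission to access that share.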
UPDATE: The more I look into these issues, the more it appears that the real solution is to use a CI server to accomplish what I am attempting. I am currently looking into TeamCity and what it might do for us.
Don't do this through a post-commit hook. If you ever manage to get the hook to succeed, you'll be making the person who did the commit wait until the update is complete. Instead, I recommend that you use Jenkins, which is a continuous build engine.
It is possible that you don't have anything to build. After all, if you're using PHP or JavaScript, there's nothing to compile. However, you can still use Jenkins to do the update for you.
I can't get into the nitty-gritty details here, but one of the things you can do with Jenkins is redefine its working directory. You can do this by clicking the Advanced button when you define a job; it will ask you where you want the working directory. In this case, you can specify your server's working directory.
Another thing you can do with Jenkins is have it automatically run tests, or do a smoother update. For example, you might have to restart your web server when you change a few files, or you might need to make sure that if you're changing 100 files, they all get changed at once so your server is never left in an unstable state. You could use Jenkins for this too. And if there are any problems, you can have Jenkins email the person responsible for the server to say that the update failed.
Jenkins is easy to set up and use. You can download it and start it up in 10 minutes. Setting up a job in Jenkins might take another 15 minutes, even if you had never seen Jenkins before and had no idea how it works.