Ansible debugger - Can you redo previous tasks?

I'm using the Ansible debugger, with its commands:
p (print variables)
r (redo)
c (continue)
q (quit)
Once the playbook reaches the task it fails on, I often wish I could go back and re-run earlier tasks, before the one that failed. Is this possible?

Regarding your question:
... go back and re-do previous tasks ...
this does not seem to be possible. According to the playbooks_debugger documentation, you
have access to all of the features of the debugger in the context of the task
only; that is, the debugger is scoped to the single task it stopped on.
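For reference, here is a minimal sketch of how the debugger is enabled (the play and the failing task are illustrative; on Ansible 2.5+ the `debugger` keyword does this, while older versions use `strategy: debug`):

```yaml
- hosts: all
  debugger: on_failed          # Ansible >= 2.5; on older versions use strategy: debug
  tasks:
    - name: example task that may fail
      command: /bin/false      # illustrative failing task
```

A typical session then looks like: `p task.args` to inspect the failed task, patch it (e.g. `task.args['cmd'] = '/bin/true'`), then `r` to retry. Note that `r` re-runs only the task the debugger stopped on, which is exactly the limitation described above.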

Related

Reload Add'ins in Solidworks PDM

I'm still rather new to the world of API programming in SolidWorks PDM, and I have run into a cumbersome problem I was hoping to get some insight into. For many normal add-ins in PDM, it is enough to add the .DLL file in PDM Administration as 'Debug'; from then on, whenever the solution is rebuilt in Visual Studio, PDM Administration automatically picks up the fresh DLL the next time the add-in is called from PDM. This is great for debugging; no problem here.
But as soon as the add-in has to trigger a task (to be executed on a client PC), it can only be added to PDM as a normal add-in (no debug mode), then added to the 'Task Host Configuration' on the client, and then configured as a 'New Task' in PDM Administration.
This all works fine, BUT it takes quite some time to change anything, since the only way I can get changes to take effect is to first rebuild the solution in Visual Studio, then manually overwrite the DLL file in PDM Administration, and finally reboot the client PC (to force it to update which version of the add-in it sees).
I have tried logging out and in again (in PDM), restarting Explorer, and clearing the local PDM cache; none of it helped.
Can any of you give me some advice on how you debug PDM add-ins, or at least on how to force-reload an add-in on the clients?
Suggestions specific to task add-ins would be much appreciated. Thank you.
Unfortunately, there isn't a clean way to debug task add-ins.
It is possible to attach your debugger to the PDM process itself, but it's not trivial. Here's the gist of it, as explained by Lee Young:
It depends on what portion of the task you're attempting to debug. If
you're looking to debug the task setup in the Administration Tool, all
you need to do is attach to the conisioadmin.exe process.
To debug an executing task, it gets a little trickier. Load up your
add-in as usual and select your machine to be the only machine that
will execute the task. (In the task setup.) Close the administration
tool. You'll need to create a symlink from the file in the plugins
directory to your debug dll. I personally use DirLinker. The plugins
directory is located at AppData\Local\SolidWorks\SolidWorks Enterprise
PDM\Plugins\VaultName.
In your source code, place a MessageBox.Show call in the OnCmd method and
put a breakpoint on that line. When the task loads, it will
wait for the message box to be closed. While the message box is shown,
you can attach to TaskExecutor.exe, and then you'll be able to debug.
If you're not hitting breakpoints, make sure you have the correct .NET
framework version selected when debugging.
If you're still not hitting breakpoints, EPDM has probably loaded another
instance of your DLL into the plugins directory.
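To make that concrete, here is a minimal, hedged sketch of such a gate (the class name and message text are made up; only MessageBox.Show and the Conditional attribute are standard .NET):

```csharp
using System.Windows.Forms;                // reference System.Windows.Forms.dll

static class DebugGate
{
    // Call this at the top of OnCmd. Execution blocks until the dialog is
    // dismissed, which is your window to do Debug > Attach to Process on
    // TaskExecutor.exe in Visual Studio.
    [System.Diagnostics.Conditional("DEBUG")]   // compiled out of Release builds
    public static void WaitForDebugger()
    {
        MessageBox.Show("Attach the debugger to TaskExecutor.exe, then click OK.");
    }
}
```

Call DebugGate.WaitForDebugger() at the start of OnCmd in a Debug build; in Release builds the call disappears entirely.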
For simple task add-ins, my approach is to debug via the method you described (reloading manually every time). Proper logging helps a lot here (print ex.StackTrace on every exception).
For more complex tasks, you can create a separate 'debug' project that has some hardcoded (or dynamic) inputs and calls your code. This gets you close before testing in the PDM environment. A PDM task is basically a COM process on the client machine, so it is fairly simple to mimic, aside from the actual PDM task environment, which is full of bugs.
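A minimal sketch of that harness idea, with hypothetical names (MyTaskCore, ProcessFile) standing in for your own factored-out logic:

```csharp
// Hypothetical: your add-in's logic, factored out of the PDM entry point
class MyTaskCore
{
    public void ProcessFile(string path)
    {
        System.Console.WriteLine($"processing {path}");
        // ... the real task logic under test ...
    }
}

// Console project used only for debugging, outside PDM entirely
class DebugHarness
{
    static void Main()
    {
        // hardcoded input stands in for what the PDM task would pass
        new MyTaskCore().ProcessFile(@"C:\TestVault\sample.sldprt");
    }
}
```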
Hope this helps.

When using `--start-at-task` in Ansible, is it possible to force an earlier task to always be run?

I have a large complex Ansible provisioning setup, > 100 roles, > 30 inventory groups, etc... This is organised in the recommended way: top level playbooks, with a /roles folder, a /group_vars folder, etc...
Sometimes this fails part-way through, and I want to restart it from where it failed, using the --start-at-task command line switch.
However, I have several tasks that always need to run. These dynamically add hosts to the inventory, set variables that are needed later, etc...
Is there a way to force a task to always be run - or a role to always be applied, even when using --start-at-task?
I know about the always tag, but I think this only makes a task run when filtering tasks with --tags, not when using --start-at-task; or does someone know differently?
Alternatively, is there some other way to structure things that would avoid this problem with --start-at-task?
Unfortunately, it is not possible.
Currently you either have to use tags, or bite the bullet, rely on idempotency, and let all tasks re-run.
There were PRs and discussions about adding such functionality, but they never made it into an official Ansible release.
There is also an open issue asking for this feature, with a suggestion that always_run should enable running the task when used in conjunction with --start-at-task, but that discussion also withered away more than a year ago.
The conclusion of the issue mentioned by techraf is that the proper way to do it is to implement a custom strategy: https://github.com/ansible/ansible/issues/12565#issuecomment-834517997
One has been implemented there (though I haven't been able to make it work with ansible-test).
Anyway, it's a good starting point for a working solution, and the only route available, as the Ansible devs will not patch --start-at-task.
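To give an idea of the shape of such a plugin, here is a heavily hedged skeleton; the actual re-queue logic is omitted because it depends on PlayIterator internals that vary between Ansible versions (see the linked issue for a full implementation):

```python
# strategy_plugins/start_at_with_always.py
from ansible.plugins.strategy.linear import StrategyModule as LinearStrategy
from ansible.utils.display import Display

display = Display()


class StrategyModule(LinearStrategy):
    """Linear strategy that could force 'always'-tagged tasks to run even
    when --start-at-task has fast-forwarded past them (skeleton only)."""

    def run(self, iterator, play_context):
        # Walk every task of the play and flag the ones --start-at-task
        # would normally skip; a real implementation would re-queue them.
        for block in iterator._play.compile():
            for task in block.block:
                if 'always' in (task.tags or []):
                    display.warning(
                        "would force always-tagged task '%s' to run"
                        % task.get_name())
        return super(StrategyModule, self).run(iterator, play_context)
```

You would then point Ansible at it in ansible.cfg (strategy_plugins = ./strategy_plugins under [defaults]) and select it per play with strategy: start_at_with_always.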

Jenkins Timeout because of long script execution

I have some issues with Jenkins running a PowerShell script. Long story short: the script takes about 8x longer to execute under Jenkins than when run manually on the server (slave), where it takes just a few minutes.
I'm wondering why.
The script contains functions which invoke commands like & msbuild.exe or & svn commit. I found out that the script hangs on the lines where the aforementioned commands are executed. The result is that Jenkins times out because the script takes that long. I could raise the timeout threshold in the Jenkins job configuration, but I don't think that is the solution to the problem.
There are no error outputs or any other information about why it takes that long, and I have no further ideas about the reason. Maybe one of you could tell me how Jenkins invokes those commands internally.
This is what Jenkins does (Windows batch plugin):
powershell -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1
I had created my own PowerShell CI service before I found that Jenkins has a plugin for this. In my implementation, and in my current job configs, we follow a simple segregation rule: the more (smaller) steps, the better. I found that my CI service worked better when separated into distinct steps (and in case of an error, root-cause analysis becomes a lot easier). The single-responsibility principle is also helpful here. So, as in Jenkins, we have pre-, post-, build, and email steps as separate scripts. About
msbuild.exe
As far as I remember, in my case there were issues related to operations on file-system paths. When the script was divided into separate functions (with additional checks of parameters), we got better performance.
Use "divide and conquer" technique. You have two choices: modify your script so that will display what is doing and how much it takes for every step. Second option is to make smaller scripts to perform actions like:
get the code source,
compile/build the application,
run the test,
create a package,
send the package,
archive the logs,
send notification.
The most problematic step is usually the first one: getting the source code from Git, SVN, Mercurial, or whatever version control system you have. Make sure this step is not embedded in your script.
During the job run, Jenkins captures the output and uses AJAX to display the result in your browser. In your script, make sure you flush standard output after every step or every few steps. Some languages buffer standard output, so you may only see the results at the end.
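For instance, a hedged PowerShell sketch (the solution path is illustrative) that forces external-command output to stream into the Jenkins log line by line:

```powershell
# Stream msbuild output to Jenkins as it is produced, instead of letting
# it buffer until the process exits; the .sln path is illustrative.
& msbuild.exe "$env:WORKSPACE\MySolution.sln" 2>&1 |
    ForEach-Object { Write-Host $_ }

[Console]::Out.Flush()   # explicit flush between steps, for good measure
```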
You can also create log files, which are helpful for archiving and for verifying the activity status of older runs. From my experience, using Jenkins with more than 10 steps calls for a specialized application that can run multiple steps, like Robot Framework.

how to attach debugger to remote Hadoop instance

I am not looking for the so-called "debugging" solutions that rely on println. I mean attaching a real debugger to a running Hadoop instance and debugging it from a different machine.
Is this possible? How? jdb?
A nice walkthrough is given at LINK.
To debug the TaskTracker, follow these steps.
Edit conf/hadoop-env.sh to contain the following:
export HADOOP_TASKTRACKER_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n"
Start Hadoop (bin/start-dfs.sh and bin/start-mapred.sh).
The JVM now listens on port 5000 for a debugger connection (with suspend=y it would instead block at startup until a debugger attaches).
Connect to the server using Eclipse's "Remote Java Application" entry in the Debug Configurations dialog, and add your breakpoints.
Run a MapReduce job.
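If you prefer jdb over Eclipse, as hinted in the question, the standard JDWP attach works too (the hostname and the breakpoint class are illustrative):

```sh
# Attach jdb from another machine to the JDWP port opened above
jdb -connect com.sun.jdi.SocketAttach:hostname=tasktracker-host,port=5000

# then, at the jdb prompt, e.g.:
#   stop in org.example.MyMapper.map    (hypothetical class; use your own)
#   cont
```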
I've never done it that way, as I'd rather my "real" jobs run unhindered by debug overhead (which can, under some circumstances, change the environmental conditions anyway). Instead I debug "locally" against a pseudo-distributed instance (normal debugging in Eclipse is no problem at all), copying specific files over from the live environment once I've isolated where the problem lies (by using e.g. counters).

How would I hook into rake's tasks to time how long each takes, to try to eliminate slow bits of build script?

I'm interested in knowing which parts of my rake-based build (running within TeamCity) are slow. Is there an MVC-filter-style way I can wrap rake tasks so that each one runs within a timer, and I can output a breakdown of
time spent on each task including prerequisites (I guess the time between invoke starting and execute finishing)
time spent on each task excluding prerequisites (I guess the time between execute starting and finishing)
so that I can analyse which parts of my build are taking the most time and target my optimisation efforts?
Does TeamCity have any features baked in that would do this for me? (I know I'll be able to chart the results of my performance-logging with custom-charts; I just wondered whether it could do this out of the box already.)
First, in TeamCity 6.0 there is a tree view of the build log. In this tree view you can see the time spent in the different blocks of your build.
Also, TeamCity's rake runner has a "Track invoke/execute stages" option, which can be enabled to get more information in your build log (with timing information for each record).
You can also try adding rake parameters like -t or -v in TeamCity rake settings to get more verbose output.
TeamCity also allows you to use custom service messages to provide more information to your log and to your build.
Hope this helps,
KIR
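If you want the timing data regardless of TeamCity, a small monkey-patch in the Rakefile can capture exactly the two numbers asked about; a hedged sketch (the output format is illustrative):

```ruby
# Time each task's invoke (prerequisites included) and execute (task body
# only) by prepending a module to Rake::Task. Drop this into the Rakefile.
module TaskTiming
  def invoke(*args)
    started = Time.now
    super
  ensure
    puts format('%-40s invoke:  %.2fs', name, Time.now - started)
  end

  def execute(args = nil)
    started = Time.now
    super
  ensure
    puts format('%-40s execute: %.2fs', name, Time.now - started)
  end
end

Rake::Task.prepend(TaskTiming)
```

Replacing the puts calls with TeamCity service messages would feed the same numbers into the custom charts mentioned in the question.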
