I am looking for a good way to add a summary to an existing, large build task sequence (TS).
What I am working with is SCCM 2012 R2, and what I need is a hint on how to capture all the steps I want (some of them are in various groups) and put their results into some sort of variable, so that at the end the person building the PC sees a table showing, say, 30 applications in green and 4 of them in red as failures.
Can it be done in some easy way? I just need the person building the PC to see which apps didn't install, so they can install them manually or at least provide me more information before I dive into the logs.
Thanks
I wouldn't say easy, because it requires a lot of steps and you basically have to do it manually per application, but there is a TS variable, _SMSTSLastActionSucceeded, which you can check after each installation step (you have to set the step to continue on error to make this work). So basically, after each install attempt you check whether it worked and then set a TS variable of your choice to reflect the failure.
As a final step you implement a script that checks all your TS variables and outputs the result.
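For that final step, a rough PowerShell sketch could look like the following. The AppStatus_<ApplicationName> naming convention and the table output are just examples of one way to do it, not anything built into SCCM:

    # read the task sequence environment (only available while the TS is running)
    $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

    # collect every status variable written after the install steps
    $results = foreach ($name in ($tsenv.GetVariables() | Where-Object { $_ -like 'AppStatus_*' })) {
        [pscustomobject]@{
            Application = $name -replace '^AppStatus_', ''
            Status      = $tsenv.Value($name)
        }
    }

    # print a simple table for the technician; you could just as well write it to a log file
    $results | Sort-Object Status | Format-Table -AutoSize | Out-String | Write-Output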
You could even use the add-on OSDBackground to display your errors as the background image.
A lengthy article on how to implement a form of error handling can be found here; however, you would have to do this quite a bit differently, because in that example the TS fails at the first error, whereas you want to continue and log. You should still get the basic principles from it.
A note in the README file said to ask questions here, so I am doing so.
The RIPEstat service has just shut off its plain-text port 43 service and is now forcing everyone to access its data using jq. I have zero experience with or knowledge of jq, but I am forced to give it a try. I have just built the thing successfully from sources (jq-1.5) on my crusty old FreeBSD 9.x system and the build completed OK, but one of the post-build verification tests (tests/onigtest) failed. I am looking at the test-suite.log file, but nothing in it means anything to me. (Unfortunately, I am also new to Stack Overflow, so I have no idea how to upload a copy of it here for the maintainer to peruse.)
So, my questions:
1) Should I even worry about the failure of tests/onigtest?
2) If I should, then what should I do about this failure?
3) What is the best and/or most proper way for me to get a copy of the test-suite.log file to the maintainer(s)?
Should I even worry about the failure of tests/onigtest?
If the only failures are related to onigtest, then most likely only the regex filters will be affected.
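If you want a quick way to check whether the regex support in your build is usable at all, a one-liner like this should print true when the oniguruma-backed filters work:

    jq -n '"abc" | test("b")'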
what should I do about this failure?
According to the jq download page, there is a pre-built binary for FreeBSD, so you might try that.
From your brief description it's not clear to me exactly what you did, but if you haven't already done so, you might also consider building an executable from a git clone of "master", as per the guidelines on the download page; see also https://github.com/stedolan/jq/wiki/Installation#or-build-from-source
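For reference, the usual build-from-git sequence looks roughly like the following (the wiki page above is the authoritative version; you will need autoconf, automake, libtool and a usable oniguruma for the regex support):

    git clone https://github.com/stedolan/jq.git
    cd jq
    autoreconf -fi
    ./configure
    make
    make check    # re-runs the test suite, including the oniguruma/regex tests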
What is the best and/or most proper way for me to get a copy of the test-suite.log file to the maintainer(s)?
You could create a ticket at https://github.com/stedolan/jq/issues
I am currently running Simulations in Veins and/or Artery.
Is there an easy way (that perhaps I just didn't find because I'm blind/stupid) to dump the output created in the console into a file, apart from running it slower than express mode and then using copy/paste?
Can I create these data while still running in express-mode?
The short answer: if by 'console output' you mean the event log, then yes you can, but no you shouldn't, for exactly the reason you mention: express mode disables this output.
The recommended way to collect data from your simulation is by recording it using "statistics", see also this page of the OMNeT++ tutorial.
You can log this information using the record-eventlog=true option in your omnetpp.ini (as described in more detail in the manual), but this produces huge files for Veins and Artery. This is because the event log is meant more as a debugging aid than a data-recording system: the best way to think of it is as debug output and development support, for quickly figuring out why something isn't working correctly. I tried to (ab)use this feature for logging data -- please, save yourself the immense pain and use the statistics module.
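As a rough illustration (the statistics themselves are declared by the Veins/Artery modules; these are just the generic recording switches), the relevant omnetpp.ini options look like this:

    [General]
    # record the scalars/vectors declared by the modules; this works fine in express mode
    **.scalar-recording = true
    **.vector-recording = true
    # leave the event log off for Veins/Artery-sized runs
    record-eventlog = false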
Yes. The easiest way: from the top bar, go to Run > Run Configuration > Common tab > scroll down to Output and select the name and location of the output file.
Downside: each time you run a different application it overwrites the previous file, so don't forget to back it up before you run a different simulation.
Good luck.
I hope this is a good place to ask this, otherwise please redirect me to the correct forum.
I have a large amount of data (~400GB) that I need to distribute to all nodes in a cluster (~100 nodes). Any help on how to do this will be appreciated; what follows is what I've tried.
I was thinking of doing this using torrents but I'm running into a bunch of issues. These are the steps I tried:
I downloaded ctorrent to create the torrent, seed it, and download it. I ran into a problem because I didn't have a tracker.
I found that qbittorrent-nox has an embedded tracker so I downloaded that on one of my nodes and set the tracker up.
I then created the torrent using the tracker I had set up and copied it to my nodes.
When I run the torrent with ctorrent on the node with the actual data on it to seed the data I get:
Seed for others 72 hours
- 0/0/1 [1/1/1] 0MB,0MB | 0,0K/s | 0,0K E:0,1 Connecting
When I run on one of the nodes to download the data I get:
- 0/0/1 [0/1/0] 0MB,0MB | 0,0K/s | 0,0K E:0,1
So it seems they aren't connecting to the tracker properly, but I don't know why.
I am probably doing something very wrong, but I can't figure it out.
If anyone can help me with what I am doing, or knows another way of distributing the data efficiently, even without torrents, I would be very happy to hear it.
Thanks in advance for any help available.
but the node that's supposed to be seeding thinks it has 0% of the file, and so it doesn't seed.
If you create a metadata file (.torrent) with tool A and then want to seed it with tool B then you need to point B to both the metadata and the data (the content files) itself.
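For example, with ctorrent (assuming I remember its -s option correctly, i.e. it sets the save path that gets hash-checked before seeding starts):

    # on the node that already holds the data:
    ctorrent -s /path/to/existing/data mydata.torrent

With qbittorrent-nox the equivalent is pointing the save path (the "Save files to location" setting) at the directory that already contains the files.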
I know it is a different issue now, and might require a different topic, but I'm hoping you might have ideas.
You should create a new question which will have more room for you to provide details.
So this is embarrassing: I might have had it working for a while now, but I did change my implementation since I started. I just re-checked, and the files I was transferring were corrupted on one of my earlier tries, and I had been using them ever since.
So to sum up this is what worked for me if anybody else ends up needing the same setup:
I create torrents using "transmission-create /path/to/file/or/directory/to/be/torrented -o /path/to/output/directory/output_file_name.torrent" (this is because qbittorrent-nox doesn't provide a tool that I could find to create torrents)
I run the torrent on the computer with the actual files so it will seed using "qbittorrent-nox ~/path/to/torrent/file/name_of_file.torrent"
I copy the .torrent file to all nodes and run "qbittorrent-nox ~/path/to/torrent/file/name_of_file.torrent" to start downloading
qbittorrent settings I needed to configure:
In "Downloads" change "Save files to location" to the location of the data in the node that is going to be seeding #otherwise that node wont know it has the files specified in the torrent and wont seed them.
To avoid issues with the torrents sometimes starting as queued and requiring a "force resume" (though this doesn't appear to have fixed the problem 100%):
In "Speed" tab uncheck "Enable bandwidth management (uTP)"
uncheck "Apply rate limit to uTP connections"
In "BitTorrent" tab uncheck "Torrent Queueing"
Thanks for all the help, and I'm sorry I hassled people for no reason at some point.
Sorry if a similar question has been posed before. There are a lot of deployment questions but none seemed to address my problem.
Anyway, I'm working with ASP.NET and C#, using Visual Studio.
The Organization I'm working in is changing rapidly. There are a lot of projects coming in the pipeline that will require multiple code changes and iterative deployments over the next few months. While working, these changes are always 'on the forefront', so sometimes I have to code certain parts of the same program multiple times.
Since these projects are all staggered, I can't just make one sweeping change all at once; I have to deploy and redeploy the same program multiple times, using only the changes that are required for that deployment.
If this is confusing, here's a simple example:
Application is being used on an Intranet. This application calls our Database, using Driver A.
There are two environments, test and production.
Certain Stored procedures have to be called with parameters that register 'Test' to allow certain other applications to run even with bad data (for testing purposes).
When deploying applications, these stored procedures have to be modified to remove the Test parameters.
We have an operating system upgrade that allows us to move to a much faster Driver B, but it requires changes to the code to use Driver B.
So that's two wholly different deployments where some code must be changed for Deployment 1 and other code must be changed for Deployment 2.
Currently I'm just using Notepad for an overall change list, plus regular debugging breakpoints and a multitude of in-code comments, and then I manually slog through the code to make sure that everything is changed. With hundreds of thousands of lines of code over multiple files, classes, objects, etc., this gets pretty tedious, and there is a good chance of missing something (causing it to break) or pushing the wrong changes (causing it to either break or allow bad data).
Is there a tool that could help in this situation? Preferably one that lets me discern what needs to change for Deployment A and what needs to change for Deployment B. I'm also open to hearing other schools of thought (tips are definitely accepted!).
Sure, I understand your problem.
I would suggest a couple of things:
Installers: Why not consider an installer? There are plenty of them, e.g. InstallShield, WiX, and plain MSI.
These installers give you the flexibility to update exactly the files you need to update, i.e. full control.
But you need to choose the one that suits you best; I have worked a lot with MSI and WiX, so I know this can solve your problem, but it's your call.
Publish: I haven't played around much with this; I have only done website publishes. However, I know it can do wonders, so try it as well.
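One concrete feature of the Publish route that maps well onto your example is web.config transforms: you keep a Web.Test.config and a Web.Release.config next to Web.config, and publishing with the matching build configuration rewrites only the parts that differ per deployment. A rough sketch (all names made up):

    <!-- Web.Release.config: applied on top of Web.config when publishing the Release configuration -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <!-- swap the connection string / driver used in production -->
        <add name="MainDb"
             connectionString="Data Source=PRODSERVER;Initial Catalog=AppDb;Integrated Security=True"
             providerName="System.Data.SqlClient"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
      <appSettings>
        <!-- turn off the 'Test' stored procedure parameters outside the test environment -->
        <add key="UseTestParameters" value="false"
             xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
      </appSettings>
    </configuration>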
I'm creating a module that requires a few things to be done (once only) when the module is installed. There may be a number of things that need to be done, but the most basic thing that I need to do is make an API call to a server to let the external server know that the module was installed, and to get a few updated configuration items.
I read this question on Stack Overflow; however, in my situation I truly am interested in executing code that has nothing to do with the database, fixtures, updating tables, etc. Also, just to be clear, this module has no effect on the front end. FYI, I've also read this spectacular article by Alan Storm, but it really only drives home the point in my mind that the install/upgrade scripts are not for executing arbitrary PHP.
In my mind, I have several possible ways to accomplish this:
I do what I fear is not a best practice and add some PHP to my setup/install script to execute this call.
I create some sort of cron job that will execute the task I need once only (not sure how this would work, but it seems like it might be a "creative" solution); of course, if cron is not set up properly then this will fail, which is not good.
I create a core_config_data flag ('mynamespace/mymodule/initialized') that I set once my script has run, and I check it in every area of the adminhtml that my module touches (CMS/Pages and my own custom adminhtml controller). This seems like a good solution except for all of the extra overhead of checking this core_config_data setting every time CMS/Pages or my controller is hit. The GOOD thing about this solution is that if something were to fail with my API call, I can set the flag to false and it will run again, display the appropriate message, and continue to run until it succeeds (or have additional logic that stops the initialization code after XX attempts).
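To illustrate, the check I have in mind for option 3 would be something like this (config path as above; where exactly it lives -- preDispatch() of my controller, an observer, etc. -- is still up in the air):

    // runs on every relevant adminhtml request until the flag is set
    if (!Mage::getStoreConfigFlag('mynamespace/mymodule/initialized')) {
        // retry the API call here; on success, set the flag so we never check again
        Mage::getModel('core/config')->saveConfig('mynamespace/mymodule/initialized', 1);
        Mage::app()->getCacheInstance()->cleanType('config');
    }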
Are any of these options the "best" way, and is there any sort of precedent for this somewhere, such as a respected extension or from Magento themselves?
Thanks in advance for your time!
You raise an interesting question.
At the moment, I am not aware of a way to execute arbitrary PHP on module installation; the obvious method (rightly or wrongly) would be to use the installer setup/upgrade script, as per option 1 of your question.
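If you do go that way, a minimal sketch of a data install script that makes the call and records the result might look like the following (the file name, URL, and reuse of your config path are only placeholders):

    <?php
    // data-install-0.1.0.php -- runs once when the module is first installed
    /** @var Mage_Core_Model_Resource_Setup $installer */
    $installer = $this;
    $installer->startSetup();

    try {
        // placeholder endpoint for the "module was installed" registration call
        $client = new Varien_Http_Client('https://example.com/api/register');
        $client->setParameterGet('store', Mage::getBaseUrl());
        $response = $client->request(Varien_Http_Client::GET);

        if ($response->isSuccessful()) {
            // remember that the call went through (same flag as option 3 in your question)
            Mage::getModel('core/config')->saveConfig('mynamespace/mymodule/initialized', 1);
        }
    } catch (Exception $e) {
        // leave the flag unset so a later attempt can retry
        Mage::logException($e);
    }

    $installer->endSetup();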
Options 2 and 3 seem like a more resource-intensive approach, i.e. needlessly checking on every page load (cached or not).
There is also the approach of using ./pear to install your module (assuming you packaged it using Magento). I had a very quick look through ./downloader/pearlib/php/pearmage.php but didn't see any lines which execute code (as opposed to copying files). I would have imagined this is the best place to execute something on install (other than option 1 mentioned above).
But, I've never looked into this, so I'm fairly interested in other possible answers.