I've spent hours trying to figure out the answer to this and just continue to come up empty-handed. I've set up an XMPP server through OpenFire that is fully functional. My goal in creating the server was to provide an alert system for when an event completes on my server. For example, when one of my renders finishes rendering (which takes hours, sometimes days), it has the option of running a command when it's done. This command would then run a .bat file telling a theoretical program to send a message, via the broadcast plugin in OpenFire, to all parties involved in the render. So it needs to be able to receive parameters such as %N for the name of the render and %L for its label.
I've located two programs that do exactly what I'm looking for, but one does not work (and, from the sound of the comments, may never have worked) and the second is seemingly Linux-only. The render server is Windows, as is the OpenFire server, so naturally it would not work. Here are the links, though, so you can get an idea.
http://thwack.solarwinds.com/media/40/orion-npm-content/general/136769/xmpp-command-line-client/
http://manpages.ubuntu.com/manpages/jaunty/man1/sendxmpp.1.html
Basically the command I want to push is identical to that of the first link.
xmppalert.exe -m "%N is complete." %L@broadcast.myserver
This would broadcast to everyone in the label's group that the named render is complete.
If anyone has any idea how to get either of the above programs working, knows of another way, or simply has a better idea of how to accomplish what I'm trying to do, please let me know. This is something that has been eating at me for two days now.
Thanks.
You can take a look at PoshXMPP, which allows you to use XMPP from PowerShell.
http://poshxmpp.codeplex.com/
Alex
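If PowerShell turns out to be a dead end, another route is a small Python script standing in for the hypothetical xmppalert.exe. This is only a rough sketch using the slixmpp library (not one of the programs linked above); the sender JID and password are placeholders, and the -m flag is dropped for brevity:

# xmppalert.py - sketch: send one message, then disconnect (slixmpp, Python 3)
# usage: python xmppalert.py "%N is complete." %L@broadcast.myserver
import sys
import slixmpp

class OneShotSender(slixmpp.ClientXMPP):
    def __init__(self, jid, password, recipient, body):
        super().__init__(jid, password)
        self.recipient = recipient
        self.body = body
        self.add_event_handler('session_start', self.start)

    async def start(self, event):
        # announce presence, fetch the roster, send, and bail out
        self.send_presence()
        await self.get_roster()
        self.send_message(mto=self.recipient, mbody=self.body, mtype='chat')
        self.disconnect()

if __name__ == '__main__':
    body, recipient = sys.argv[1], sys.argv[2]
    # 'alerts@myserver' / 'secret' are placeholder credentials
    xmpp = OneShotSender('alerts@myserver', 'secret', recipient, body)
    xmpp.connect()
    xmpp.process(forever=False)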
I have something that is sending an SNMP SET command to my server. I can see the packet in Wireshark, so I know I'm getting it. Once I get this packet I need to decode it and perform an operation (using a script). I can't believe I'm the first person that needs to do this, but I have googled for hours and found no one else with this use case. I've seen utilities that let me do an SNMP GET, but as the SET doesn't actually set anything on my server, there is nothing to GET. Traps don't seem helpful either, since the message isn't labeled as a trap. Is there a way to convert the SET to a trap once my server gets it, or is there a better method? My server is Windows, but if I have to create a Linux VM to make this easier, I'm all ears. As of now I'm thinking PowerShell, but if there is an easy way in Go, C#, etc., I would totally do it.
I am attempting to receive an SNMP SET and use it as a trigger for running a script.
You are 100% correct that you are not the first person to ask this question. The answer depends on which SNMP agent you have deployed. Many people have had success with Net-SNMP, and if you want to invoke a shell script from the SNMP agent, see this tutorial. Good luck with your project.
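If the agent ends up being Net-SNMP, the usual hook for this is the pass directive in snmpd.conf, which hands GET and SET requests for a chosen subtree to an external program. A minimal sketch (the OID and script path are placeholders):

# snmpd.conf: delegate this subtree to an external script
# for a SET, Net-SNMP invokes: handler.sh -s <OID> <TYPE> <VALUE>
pass .1.3.6.1.4.1.99999 /bin/sh /path/to/handler.sh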
I ended up using Python's pysnmp to build an agent that would receive the SET/GET requests. I had to compile my own MIBs for the PDU I was emulating, but it got the job done. I also looked into Net-SNMP, but I'm more familiar with Python and found many helpful examples and good documentation.
I would like to flatten my use case, but at the moment I have two scripts: one that is the agent using pysnmp, and one that does a GET request to see what the value is and sets off an ssh script. I wish I could hook into the SNMP SET function, i.e. when the var is written, but for now I have a working setup. If anyone wants me to post my code, I can.
Copied a lot of code from the second example here:
https://pysnmp.readthedocs.io/en/latest/examples/v3arch/asyncore/agent/cmdrsp/agent-side-mib-implementations.html
Learned about MIB compiling and used the mibdump.py tool. Note that if you don't provide the proper file, it downloads one automatically from the old default host, which might be compromised... so be careful.
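For reference, a stripped-down sketch of the agent side, based on the pysnmp example linked above. The community string, OID and script path are placeholders, and the setValue hook may need adjusting for your pysnmp version:

import subprocess
from pysnmp.entity import engine, config
from pysnmp.entity.rfc3413 import cmdrsp, context
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.proto.api import v2c

snmpEngine = engine.SnmpEngine()

# listen on the standard SNMP port (needs privileges; use e.g. 1161 otherwise)
config.addTransport(snmpEngine, udp.domainName,
                    udp.UdpTransport().openServerMode(('0.0.0.0', 161)))

# SNMPv2c community, with read and write access to the subtree
config.addV1System(snmpEngine, 'my-area', 'public')
config.addVacmUser(snmpEngine, 2, 'my-area', 'noAuthNoPriv',
                   (1, 3, 6), (1, 3, 6))

snmpContext = context.SnmpContext(snmpEngine)
mibBuilder = snmpContext.getMibInstrum().getMibBuilder()
MibScalar, MibScalarInstance = mibBuilder.importSymbols(
    'SNMPv2-SMI', 'MibScalar', 'MibScalarInstance')

class ScriptTrigger(MibScalarInstance):
    # called when a SET on this OID commits; kick off the script here
    def setValue(self, value, name, idx):
        subprocess.call(['/path/to/my_script.sh', str(value)])
        return MibScalarInstance.setValue(self, value, name, idx)

# a made-up private-enterprise OID for the writable scalar
mibBuilder.exportSymbols(
    '__MY-MIB',
    MibScalar((1, 3, 6, 1, 4, 1, 99999, 1),
              v2c.OctetString()).setMaxAccess('readwrite'),
    ScriptTrigger((1, 3, 6, 1, 4, 1, 99999, 1), (0,), v2c.OctetString()))

# respond to GET and SET
cmdrsp.GetCommandResponder(snmpEngine, snmpContext)
cmdrsp.SetCommandResponder(snmpEngine, snmpContext)

snmpEngine.transportDispatcher.jobStarted(1)
try:
    snmpEngine.transportDispatcher.runDispatcher()
except Exception:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise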
I am using Streamlink to help some of my technically challenged older friends watch streams from selected sites that webcast LIVE two or three times a week and are ingested into YouTube. In between webcasts, it would be nice to show the user when the next one will begin via the app's Status page.
The platform is a Raspberry Pi 3 B+. I have modified the YouTube plugin to allow/prohibit non-live streams: if '--youtube-live-required' is on the command line, then only LIVE streams will play. This prevents the LIVE webcast from restarting after it has ended, and also prevents videos that YouTube randomly selects from playing. I have also applied a soon-to-be-released patch that fixes a breaking change YouTube made recently. I mention these so you know that I have at least a minimal understanding of the Streamlink code and am not looking for a totally free ride. But for some reason, I cannot get my head around how to add a feature to get the 'scheduledStartTime' value from the youtube.py plugin. I am hoping someone with a deep understanding of the Streamlink code can toss me a clue or two.
Once the 'scheduledStartTime' value is obtained (it is in epoch notation), a custom module will send that value via socketio to the onboard Python server, which can then massage the data and push it to the Status page of connected clients.
Within an infinite loop, Popen starts Streamlink. The output of Popen is PIPEd and observed in order to learn what is happening, and that info is then sent to the server, again using socketio. It is within this loop that the 'scheduledStartTime' data would be gleaned (I think).
How do I solve the problem?
I have a solution to this problem that I am not very proud of, but it solves the problem and I can finally close this project. Also, it turns out that this solution did not have to utilize the Streamlink youtube.py plugin, but since it fetches the contents of the URL of interest anyway, I decided to hack the plugin and keep all of this business in one place.
In a nutshell, a simple regex gets the value of scheduledStartTime IF it is present in the fetched URL contents. The hack: that value is printed out as the string 'SCHEDULE START TIME:epoch time value', which surfaces through Streamlink via the Popen PIPE, which a custom module polls for such information. Socket.io then sends the info to the onboard server, which sends a massaged version of it to the app's Status page (Ionic framework, TypeScript, etc.). Works. Simple. Ugly. Done.
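For the curious, the two halves of the hack look roughly like this. These are approximations, not the exact plugin code; page_text stands in for whatever variable holds the fetched URL contents:

# inside the hacked youtube.py plugin: print the value so it surfaces on stdout
import re

match = re.search(r'"scheduledStartTime"\s*:\s*"(\d+)"', page_text)
if match:
    print('SCHEDULE START TIME:%s' % match.group(1))

# in the custom module: poll the Popen PIPE for the marker line
# (assumes the Popen was opened in text mode)
for line in proc.stdout:
    if line.startswith('SCHEDULE START TIME:'):
        epoch = int(line.split(':', 1)[1])
        # hand the value to the server via socketio here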
I have done something silly and written a script for a website that does an AJAX check every 2 seconds. In this case it's hitting WordPress and its admin-ajax.php file. This essentially burned up all the CPU power of the server and made every site on it run really slowly.
After a lot of detective work, I finally found the script and stopped it, so it doesn't happen on new loads of that website. But looking at my Apache log, I can see that it is still running in one browser somewhere.
Is there a way for me to stop that browser from making the AJAX call, or perhaps block it from my server? Or will I just have to wait until that browser is refreshed or closed?
Try using netstat or something similar over ssh to detect the IP and port of the unknown browser. You could also try rebooting the server so that it loses the connection.
PS: It's pretty hard to point you in the right direction without any logs or evidence to make sure this question is answered correctly.
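Once you have the IP from the access log, you could also return 403s for just that client at the Apache level while you wait. With Apache 2.4 the config would look something like this (the address is a placeholder):

<Location "/wp-admin/admin-ajax.php">
    <RequireAll>
        Require all granted
        # shut out the one client that is still polling
        Require not ip 203.0.113.45
    </RequireAll>
</Location>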
I'm trying to upload several hundred files to 10+ different servers. I previously accomplished this using FileZilla, but I'm trying to make it go using just common command-line tools and shell scripts so that it isn't dependent on working from a particular host.
Right now I have a shell script that takes a list of servers (in ftp://user:pass@host.com format) and spawns a new background instance of 'ftp ftp://user:pass@host.com < batch.file' for each server.
This works in principle, but as soon as the connection to a given server times out/resets/gets interrupted, it breaks. While all the other transfers keep going, I have no way of resuming whichever transfer(s) have been interrupted. The only way to know if this has happened is to check each receiving server by hand. This sucks!
Right now I'm looking at wput and lftp, but these would require installation on whichever host I want to run the upload from. Any suggestions on how to accomplish this in a simpler way?
I would recommend using rsync. It's really good at transferring only the data that has changed during a transfer. Much more efficient than FTP! More info on how to resume interrupted connections, with an example, can be found here. Hope that helps!
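For what it's worth, the key option is --partial, which keeps partially transferred files so that simply re-running the same command resumes them. A sketch (paths and host are placeholders, and the remote end needs rsync reachable over ssh):

# re-runnable upload; an interrupted run picks up where it left off
rsync --archive --partial --progress /local/files/ user@host.com:/remote/path/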
I'm working on a consumer web app that needs to do a long running background process that is tied to each customer request. By long running, I mean anywhere between 1 and 3 minutes.
Here is an example flow. The object/widget doesn't really matter.
1. Customer comes to the site and specifies the object/widget they are looking for.
2. We search/clean/filter for widgets matching some initial criteria. <-- long running process
3. Customer further configures more detail about the widget they are looking for.
4. When the long running process is complete, the customer is able to complete the last few steps before conversion.
Steps 3 and 4 aren't really important. I just mention them because we can buy some time while we are doing the long running process.
The environment we are working in is a LAMP stack, currently using PHP. It doesn't seem like a good design to have the long-running process take up an Apache thread in mod_php (or a FastCGI process). The Apache layer of our app should be focused on serving up content, not data processing, IMO.
A few questions:
Is our thinking right in that we should separate this "long running" part out of the apache/web app layer?
Is there a standard/typical way to break this out under Linux/Apache/MySQL/PHP (we're open to using a different language for the processing if appropriate)?
Any suggestions on how to go about breaking it out? E.g., do we create a daemon that churns through a FIFO queue?
Edit: Just to clarify, only about 1/4 of the long-running process is database-centric. We're working on optimizing that part. There is some work that we could potentially do, but we are limited in the amount we can do right now.
Thanks!
Consider providing the search results via AJAX from a web service instead of your application. Presumably you could offload this to another server and let your web application deal with the content as you desire.
Just curious: 1-3 minutes seems like a long time for a lookup query. Have you looked at indexes on the columns you are querying to improve the speed? Or do you need to do some algorithmic process -- perhaps you could perform some of this offline and prepopulate some common searches with hints?
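For instance, if the search filters on a handful of columns, a composite index covering them can cut the query time dramatically (the table and column names here are made up):

CREATE INDEX idx_widget_search ON widgets (category, color, price);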
As Jonnii suggested, you can start a child process to carry out background processing. However, this needs to be done with some care:
Make sure that any parameters passed through are escaped correctly
Ensure that more than one copy of the process does not run at once
If several copies of the process can run at once, there's nothing stopping a (not even malicious, just impatient) user from hitting reload on the page which kicks it off, eventually starting so many copies that the machine runs out of RAM and grinds to a halt.
So you can use a subprocess, but do it carefully, in a controlled manner, and test it properly.
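On the "only one copy at once" point, a simple way to enforce it on Linux is an exclusive, non-blocking file lock taken at the top of the worker. A sketch in Python (the lock path is arbitrary; the same trick works via flock() in PHP):

import fcntl
import sys

# take an exclusive, non-blocking lock; a second copy fails immediately
lock = open('/tmp/widget-worker.lock', 'w')
try:
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    sys.exit('another worker is already running')

# ... long-running work goes here; the lock is released when the process exits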
Another option is to have a daemon permanently running, waiting for requests, which processes them and then records the results somewhere (perhaps in a database).
This is the poor man's solution:
exec ("/usr/bin/php long_running_process.php > /dev/null &");
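The output redirection and the trailing & are what make this non-blocking; without them, exec() would wait for the script to finish before returning.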
Alternatively you could:
Insert a row into your database with details of the background request, which a daemon can then read and process (see the sketch below).
Write a message to a message queue, which a daemon then reads and processes.
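A rough sketch of the first option as a standalone worker. The jobs table, its columns and the connection details are all made up for illustration; it's written in Python with pymysql since the processing language is open, and the same loop is easy to write in PHP:

import time
import pymysql

# illustrative credentials and schema: jobs(id, params, status)
db = pymysql.connect(host='localhost', user='app', password='secret',
                     database='app', autocommit=True)

def process(job_id, params):
    # the actual long-running search/clean/filter work goes here
    pass

# single worker assumed; multiple workers would need SELECT ... FOR UPDATE
while True:
    with db.cursor() as cur:
        cur.execute("SELECT id, params FROM jobs WHERE status = 'pending' "
                    "ORDER BY id LIMIT 1")
        row = cur.fetchone()
    if row is None:
        time.sleep(2)  # nothing queued; don't hammer the database
        continue
    job_id, params = row
    with db.cursor() as cur:
        cur.execute("UPDATE jobs SET status = 'running' WHERE id = %s", (job_id,))
    process(job_id, params)
    with db.cursor() as cur:
        cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))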
Here's some discussion on the Java version of this problem.
See java: what are the best techniques for communicating with a batch server
Two important things you might do:
Switch to Java and use JMS.
Read up on JMS but use another queue manager. Unix named pipes, for instance, might be an acceptable implementation.
Java servlets can do background processing. You could do something similar in any web technology with threading support; I don't know about PHP though.
Not a complete answer, but I would think of using AJAX and passing the 2nd step to something that's faster than PHP (C, C++, C#), then having a PHP function pick the results off of some stack, most likely just a database.