Use Photoshop actions on the server - client-server

I need to run a Photoshop action (a kind of macro) that I have already recorded, on a server.
This server will have a number of clients that can send photos to it.
The server should process each photo according to the recorded action, then return the processed photo and save it in the database.
Any suggestions?

So, I used OpenCV to solve my problem.
It works fine now.
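For illustration, here is a minimal sketch of what the server-side processing might look like with OpenCV in C++. The actual steps depend entirely on what the recorded Photoshop action does; the resize and blur below are just placeholder operations, and the file paths come from the command line.

```cpp
// Minimal sketch: reproduce a simple "action" (resize + blur) with OpenCV.
// The real steps depend on what the recorded Photoshop action does;
// resize and blur here are only placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 3) {
        std::cerr << "usage: process <input> <output>\n";
        return 1;
    }

    cv::Mat src = cv::imread(argv[1]);   // photo received from a client
    if (src.empty()) {
        std::cerr << "could not read " << argv[1] << "\n";
        return 1;
    }

    cv::Mat resized, processed;
    cv::resize(src, resized, cv::Size(), 0.5, 0.5);           // step 1 of the "action"
    cv::GaussianBlur(resized, processed, cv::Size(5, 5), 0);  // step 2 of the "action"

    cv::imwrite(argv[2], processed);      // result to be stored in the DB
    return 0;
}
```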

Related

Why does a change in ISP affect the download progress?

I was downloading a file on my PC using my mobile hotspot from Sim_1, and 50% of the download was completed. When I changed the mobile data source to Sim_2, the download began from the beginning. :(
Is this problem caused by the change in ISP or maybe due to some other issues?
PS: Both Sim_1 and Sim_2 are of different networks.
I think that in order to resume a download, both the server the resource is being fetched from and the client need to support it.
They would have to agree on the progress made and resume the download from the point in the byte stream that the client has reached. This might be achievable in HTTP using partial requests with the Range header (the server advertises support via Accept-Ranges): https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
Essentially, whether your download manager can change networks and resume from where it left off depends on the resource being requested, since the server providing it needs to support resuming. If that server is only capable of sending the full file from start to end, then that is the best your client can hope for.
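To make the mechanism concrete, here is a rough sketch of such a range request using Qt (chosen only because Qt comes up later in this thread). The URL, file name, and resume offset are all placeholders, and it only works if the server answers with 206 Partial Content.

```cpp
// Sketch: resuming a download with an HTTP Range request using Qt.
// The URL and the resume offset are placeholders; the server must reply
// with "206 Partial Content" (it usually advertises Accept-Ranges: bytes)
// for the resume to work.
#include <QCoreApplication>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QFile>

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);

    const qint64 alreadyDownloaded = 52428800;   // bytes we already have on disk

    QNetworkRequest request(QUrl("https://example.com/bigfile.zip"));
    request.setRawHeader("Range",
                         "bytes=" + QByteArray::number(alreadyDownloaded) + "-");

    QNetworkAccessManager manager;
    QNetworkReply* reply = manager.get(request);

    QObject::connect(reply, &QNetworkReply::finished, [&]() {
        // 206 means the server honoured the range; 200 means it started over.
        int status = reply->attribute(QNetworkRequest::HttpStatusCodeAttribute).toInt();
        if (status == 206) {
            QFile file("bigfile.zip");
            file.open(QIODevice::Append);        // append to the partial file
            file.write(reply->readAll());
        }
        reply->deleteLater();
        app.quit();
    });

    return app.exec();
}
```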

How to read data from an Ajax web service with Qt?

I would like to process some data in a Qt application. This data can be found on a web page which uses Ajax to dynamically update itself.
For example, the page itself is www.example.com, and it uses Ajax to load data from www.example.com/data, which is a plain text file. If I view www.example.com in a browser, I can clearly see when the data is updated.
The brute force solution would be to just call the QWebView's load(QUrl("www.example.com/data")) every couple of seconds, or every time its loadFinished() signal is emitted, but that would be a waste of bandwidth, as I would be downloading the same data over and over. The time between updates could theoretically be a few seconds, but it could also be minutes, hours, or longer.
Is there a possibility to only reload the data when the page is updated?
The traditional AJAX model uses the following sequence of events:
Browser opens connection
Browser sends request
Server sends response
Server closes connection
Because the connection is closed, there is no way for the server to notify your browser if any data have changed. In order to get this information, you have no option but to query the server periodically.
As you mentioned in your question, this is not very efficient since you can waste a lot of bandwidth if nothing changes for a long while.
WebSockets are a more up-to-date technology that tries to overcome this inefficiency, and Qt has a module (Qt WebSockets) that caters for this.
Unfortunately, it's not universal yet, so if you want to use WebSocket technology with a third-party server, you need traditional AJAX code to fall back on in case WebSockets are not supported.
EDIT:
Unfortunately, WebSockets are not the golden solution. It's still up to the server to have been programmed to send out notifications of changes. If the server does not have this feature, it won't matter if you're using WebSockets or traditional AJAX, you'll still have to keep querying for changes.
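As a rough illustration of the WebSocket route, here is a minimal client sketch using the Qt WebSockets module (QT += websockets in the .pro file). The ws:// URL is hypothetical, and as the edit above says, it only helps if the server is actually written to push updates over that socket.

```cpp
// Sketch: receiving pushed updates over a WebSocket with Qt WebSockets.
// The ws:// URL is hypothetical; the server has to be programmed to push
// change notifications over this socket.
#include <QCoreApplication>
#include <QWebSocket>
#include <QDebug>

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);

    QWebSocket socket;

    QObject::connect(&socket, &QWebSocket::connected, [&]() {
        qDebug() << "connected, waiting for server-pushed updates";
    });

    QObject::connect(&socket, &QWebSocket::textMessageReceived,
                     [](const QString& message) {
        // Called only when the server actually sends something,
        // so there is no polling and no re-downloading of unchanged data.
        qDebug() << "update received:" << message;
    });

    socket.open(QUrl("ws://www.example.com/data-updates"));

    return app.exec();
}
```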

Sending automated alerts through an XMPP server via the command line? (Windows)

I've spent hours trying to figure out the answer to this and continue to come up empty-handed. I've set up an XMPP server through OpenFire that is fully functional. My goal in creating the server was to put in place an alert system for when an event is completed on my server. For example, when one of my renders is finished rendering (it takes hours, sometimes days), it has the option of running a command when it's finished. This command would then run a .bat file telling a theoretical program to send a message, via the broadcast plugin in OpenFire, to all parties involved in the render. So it needs to be able to receive parameters such as %N for the name of the render and %L for its label.
I've located two programs that do exactly what I'm looking for, but one does not work (and from the sound of the comments may never have worked), and the second is seemingly Linux-only. The render server is Windows, as is the OpenFire server, so naturally it would not work. Here are the links, though, so you can get an idea.
http://thwack.solarwinds.com/media/40/orion-npm-content/general/136769/xmpp-command-line-client/
http://manpages.ubuntu.com/manpages/jaunty/man1/sendxmpp.1.html
Basically the command I want to push is identical to that of the first link.
xmppalert.exe -m "%N is complete." %L#broadcast.myserver
This would broadcast to everyone in the label's group that the named render is complete.
If anyone has any idea how to get either of the above programs working, knows of another way, or simply has a better idea of how to accomplish what I'm trying to do, please let me know. This is something that has been eating at me for two days now.
Thanks.
You can take a look at PoshXMPP, which allows you to use XMPP from PowerShell.
http://poshxmpp.codeplex.com/
Alex

Google Reader Like Web Application (SmartGWT) (GWT)

I need to write a web application like Google Reader (using SmartGWT).
Instead of RSS feeds I will show log files, which update in real time. I think I can start a timer and ask the server every minute whether there are any new logs. Is this the right way to do this?
Do I have to use WebSockets? Do they work in all modern browsers?
"I think I can start a timer and ask the server every minute whether there are any new logs. Is this the right way to do this?"
Without using server push, this is the way to go. You typically want to query the server with the timestamp of the last received log entry. This way you only send the diff since the last poll.
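To make that concrete, here is a rough sketch of the poll-with-last-timestamp idea. It is shown in Qt/C++ purely for illustration; in GWT you would use com.google.gwt.user.client.Timer and an RPC or REST call instead. The URL and the "since" query parameter are made up.

```cpp
// Sketch of polling with the timestamp of the last received log entry,
// so the server only has to return the diff. URL and "since" parameter
// are placeholders for illustration.
#include <QCoreApplication>
#include <QDateTime>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QTimer>
#include <QUrlQuery>
#include <QDebug>

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);

    QNetworkAccessManager manager;
    qint64 lastSeen = 0;   // timestamp of the newest log entry we have seen

    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [&]() {
        // Ask only for entries newer than what we already have (the diff).
        QUrl url("https://example.com/logs");
        QUrlQuery query;
        query.addQueryItem("since", QString::number(lastSeen));
        url.setQuery(query);

        QNetworkReply* reply = manager.get(QNetworkRequest(url));
        QObject::connect(reply, &QNetworkReply::finished, [&lastSeen, reply]() {
            qDebug() << reply->readAll();
            // In a real client the new "lastSeen" would come from the response.
            lastSeen = QDateTime::currentSecsSinceEpoch();
            reply->deleteLater();
        });
    });

    timer.start(60 * 1000);   // poll once a minute, as the question suggests
    return app.exec();
}
```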
See here for some more information on GWT and push (which is actually pull). Or check out stream-hub (and the pimped stock watcher example) if you want to go for server push.

Handle Unstable Internet Connection in Server-Client App

What technology can I use to manage an unstable internet connection in a server-client app? I know mainly PHP (+ Zend Framework) and am learning C# & ASP.NET MVC. I've heard WCF/MSMQ is something that can help... but how? Is there something PHP (which I am more familiar with) can do? It is also good to know a .NET alternative if it's better.
The background:
Clients (plural) will connect to the server DB to do CRUD operations, but if the internet connection fails this will not be possible. So how do I fix this?
The solution used now is to have local databases on each client. At the end of the day, all clients upload to the server, and in the morning they download a "consolidated" DB from the server. This is not foolproof, as the upload/download may still fail; and considering the large amounts of data transferred, it actually increases the chances of failure.
UPDATE: Is there a PHP/Zend Framework/MySQL replacement for MSMQ/WCF?
WCF can help, because it supports various technologies for reliable message transfer.
One thing that might help you is to have the clients make their data changes locally, then upload those changes to a reliable message queue. You would not upload all changes in a single transaction. You might upload 10 at a time, possibly one at a time. As the uploaded messages are processed on the server, the server would write the transaction results to another queue, unique to each client. After the upload (or maybe at the same time), the client would check that queue to see what the result of each upload was. If the result was a success, then the client can remove that change from its local database. If the result was a failure, then the client should try uploading it again.
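As a rough, transport-agnostic sketch of that upload/acknowledge loop, here is what the client-side logic might look like. uploadChange() and fetchResult() are hypothetical stand-ins for whatever transport is actually used (MSMQ/WCF, an HTTP endpoint, and so on); here they just simulate a successful round trip.

```cpp
// Sketch of the per-change upload/acknowledge loop described above.
// uploadChange() and fetchResult() are hypothetical stand-ins for the real
// transport; they only simulate a successful round trip here.
#include <deque>
#include <iostream>
#include <string>

struct Change {
    std::string id;
    std::string payload;   // serialized CRUD operation recorded while offline
};

// Stand-in: send one change to the server's inbound queue.
// Returns false if the link failed.
bool uploadChange(const Change& change) {
    std::cout << "uploading change " << change.id << "\n";
    return true;
}

// Stand-in: read the server's per-client result queue for this change.
// Returns "ok", "failed", or "" if no result is available yet.
std::string fetchResult(const std::string& changeId) {
    return "ok";
}

void syncPendingChanges(std::deque<Change>& pending) {
    while (!pending.empty()) {
        const Change& change = pending.front();

        if (!uploadChange(change)) {
            break;                    // link is down: stop and retry later
        }
        if (fetchResult(change.id) == "ok") {
            pending.pop_front();      // confirmed on the server, drop the local copy
        } else {
            break;                    // failed or no answer yet: keep it and retry later
        }
    }
}

int main() {
    std::deque<Change> pending = {
        {"1", "INSERT ..."}, {"2", "UPDATE ..."}, {"3", "DELETE ..."}
    };
    syncPendingChanges(pending);
    std::cout << pending.size() << " change(s) still waiting\n";
    return 0;
}
```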
Of course, you should always be careful that your attempts at error recovery don't make things worse. Too much retry traffic on a bad link may very well cause more traffic, which may itself need recovery, etc.
And, of course, the ultimate solution is to move towards links that are more reliable. Not necessarily faster, but just more reliable.
