SCCM remote control [CMRCViewer] connect on timeout

Hi everyone,
Those of you who know SCCM Remote Control know it does a pretty good job, but there are little things here and there that could be better.
In this case, I'd like to configure Configuration Manager to allow an incoming remote connection if no one responds to the connection request and it times out. Any ideas how this could be done? For the record, I'd rather not resort to a registry hack.
[Screenshot: SCCM - Client Settings - Remote Tools]
Many thanks,
James

Related

My Windows FTP server can't be accessed remotely from some networks

I have set up a Windows 2003 FTP server and use Chilkat to connect to it from inside my customized application. The application is developed in VB6 with FTP support from Chilkat. It runs at different places around the city and connects to my FTP server. From some networks, like Idea NetSetter / BSNL, the application is unable to access the FTP server and transfer files. It works perfectly on other networks.
Thanks in advance.
Regards,
Sam
This is likely to be a firewall issue at the client end. FTP is often blocked by firewalls.
Just as well: FTP has its problems, making it a less than ideal choice. There are better options such as SFTP or FTPS, but support for those is limited in Windows and you'll have to buy both server and client pieces to use one of them.
Fewer firewalls block HTTP and HTTPS, though some are finicky enough to block traffic that doesn't look like web browsing. Still, your odds of success go up substantially.
An obvious choice might be to use WebDAV. IIS supports WebDAV, and it is pretty easy to write simple WebDAV client logic in VB6 on top of one of the many HTTP components available. I'd probably use XmlHttpRequest or WinHttpRequest for that. A search ought to turn up several VB6 classes written to wrap one of them for WebDAV client operations. You can also buy WebDAV client libraries.
Stick to HTTPS (which means you need a server certificate for IIS) and you won't have passwords going over the network in the clear. Even if you use plain HTTP you'll be no worse off than with FTP, and it'll work through the vast majority of firewalls except those that specifically block non-browsing HTTP requests.
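For a sense of how little client logic this needs, here is a minimal sketch of a WebDAV-style upload in Python rather than VB6; the host, path, and credentials are placeholders, not real values:

```python
# Minimal WebDAV-style upload over HTTPS using only the standard library.
# The host, path, and credentials below are placeholders, not real values.
import base64
import http.client

HOST = "files.example.com"           # hypothetical IIS/WebDAV host
REMOTE_PATH = "/uploads/report.csv"  # hypothetical target path
USER, PASSWORD = "appuser", "secret"

with open("report.csv", "rb") as f:
    body = f.read()

auth = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
conn = http.client.HTTPSConnection(HOST)
conn.request(
    "PUT", REMOTE_PATH, body,
    headers={
        "Authorization": "Basic " + auth,
        "Content-Type": "application/octet-stream",
    },
)
resp = conn.getresponse()
print(resp.status, resp.reason)  # expect 201 Created on first upload
conn.close()
```

The VB6/WinHttpRequest version is the same idea: issue an HTTPS PUT to the target path with authentication and the file bytes as the body, then check the status code.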
This could be a firewall configuration issue on the client or the server. You're not going to be able to do much about the client, but for the server it may depend on whether you're doing active or passive FTP connections.
If you are doing active connections, make sure ports 20 and 21 are open.
If you're doing passive connections, you may want to check out this article about configuring the PassivePortRange for the Server 2003 FTP service: http://support.microsoft.com/?id=555022.
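The active/passive distinction also shows up directly on the client side; as a quick illustration in Python's ftplib (the host and credentials are placeholders):

```python
# Active vs. passive FTP from the client side (Python ftplib).
# Host and credentials are placeholders.
from ftplib import FTP

ftp = FTP("ftp.example.com")
ftp.login("user", "password")

ftp.set_pasv(True)   # passive: the client opens the data connection
                     # (the server must allow its configured passive port range)
ftp.retrlines("LIST")

ftp.set_pasv(False)  # active: the server connects back to the client on port 20,
                     # which many client-side firewalls/NATs will block
ftp.retrlines("LIST")

ftp.quit()
```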

OpenFire, HTTP-BIND and performance

I'm looking into getting an Openfire server started and setting up a Strophe.js client to connect to it. My concern is that using http-bind might be costly in terms of performance versus making a straight XMPP connection.
Can anyone tell me whether my concern is relevant or not? And if so, to what extent?
The alternative would be to use a flash proxy for all communication with OpenFire.
Thank you
BOSH is more verbose than normal XMPP, especially when idle. An idle BOSH connection might be about 2 HTTP requests per minute, while a normal connection can sit idle for hours or even days without sending a single packet (in theory, in practice you'll have pings and keepalives to combat NATs and broken firewalls).
But, the only real way to know is to benchmark. Depending on your use case, and what your clients are (will be) doing, the difference might be negligible, or not.
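To make the overhead concrete, every BOSH round trip is a full HTTP request wrapping an XML body envelope. A session-creation request looks roughly like the sketch below (Python; the endpoint URL and domain are placeholders, attributes per XEP-0124):

```python
# Minimal BOSH session-creation request (XEP-0124), sent as a plain HTTP POST.
# The endpoint URL and XMPP domain are placeholders.
import random
import urllib.request

BOSH_URL = "http://chat.example.com:7070/http-bind/"  # typical Openfire http-bind port
rid = random.randint(1_000_000, 9_999_999)            # request id, incremented per request

body = (
    f"<body rid='{rid}' to='example.com' wait='60' hold='1' "
    "ver='1.6' xml:lang='en' "
    "xmlns='http://jabber.org/protocol/httpbind'/>"
)

req = urllib.request.Request(
    BOSH_URL,
    data=body.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # the response carries the sid used by later requests
```

Multiply that header-plus-envelope cost by the polling rate of an idle session and you get the extra traffic compared with a raw socket.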
Basics:
Socket - essentially no traffic on an idle session.
HTTP - requests even on an idle session.
I doubt that you will have 1M users at once, but if you are aiming for that, then a connectionless protocol like HTTP will be much better, as I'm not sure any OS can support that volume of open sockets.
Also, you can tie your Openfire servers together into a farm, and you'll get nice scalability there.
We used Openfire and BOSH with about 400 concurrent users in the same MUC channel.
What we noticed is that Openfire leaks memory. We had about 1.5-2 GB of memory in use and got constant out-of-memory exceptions.
Also, the BOSH implementation in Openfire is pretty bad. We then switched to Punjab, which was better but couldn't solve the Openfire issue.
We're now using ejabberd with its built-in http-bind implementation and it scales pretty well. Load on the server running ejabberd is nearly zero.
At the moment we face the problem that our 5 web servers, which we use to handle the chat load, are sometimes overloaded at about 200 connected users.
I'm trying to use WebSockets now, but it seems that doesn't work yet.
Maybe routing the http-bind traffic not through an Apache rewrite rule but directly through a load balancer/proxy would solve the issue, but I couldn't find a way to do this at the moment.
Hope this helps.
I ended up using node.js and http://code.google.com/p/node-xmpp-bosh, as I faced some difficulties connecting directly to Openfire via BOSH.
I have a production site running with node.js configured to proxy all BOSH requests, and it works like a charm (around 50 concurrent users). The only downside so far: in the Openfire admin console you will not see the actual IP addresses of the connected clients; only the local server address shows up, since Openfire gets the connection from the node.js server.

Overcoming limitations of firewalls

If a regular internet user wishes to reach a TCP service on their own computer without going through the hassle of firewall/NAT configuration, I think I'm right in saying that the 'best' way to do this is to have a third party in the middle that will accept connections from both the user's home computer and their travelling computer and act as a proxy.
But how exactly is this achieved? Obviously the travelling computer just contacts the proxy server whenever it wants information, but how is this then relayed back to the home computer? Does the home computer keep a constant connection open to the proxy that allows bidirectional data flow?
If this is the case, how would I go about designing a Ruby/Sinatra server that keeps track of these permanent connections and then forwards a travelling computer's queries onward? (Assume that the home computer's service can make whatever calls are necessary to establish the link.)
Thanks guys!
EDIT
I think I over-generalised. I'm forwarding HTTP requests (or at least, the requests coming from the travelling computer will be HTTP-based), so I figured it made sense to use Sinatra to capture the requests from the traveller. My problem, though, is how to keep an open connection from the home computer to the proxy so I can forward the requests immediately.
I know persistent HTTP connections can be done, but they're a little convoluted; would I be better off having the home computer continually maintain a lower-level connection to the proxy and push the requests over that?
I think your general approach will work - relay messages from one computer to the other by having the travelling computer send requests to the proxy and having the home computer ask the proxy for new information.
If you want more continuous data flow, you may not want to use Sinatra - specifically for receiving the data from the travelling computer. Check out EventMachine - https://github.com/eventmachine/eventmachine/wiki
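To make the "home computer keeps a connection open to the proxy" idea concrete, here is a rough long-polling sketch of the home-side agent, in Python rather than Ruby (the relay endpoints /next and /answer are hypothetical, and there is no authentication or error handling):

```python
# Home-computer agent: keeps asking a public relay for pending requests,
# runs each one against the local service, and posts the result back.
# The relay URL, its /next and /answer endpoints, and the local port are
# placeholders; a real version needs authentication, TLS, and error handling.
import time
import urllib.request

RELAY = "http://relay.example.com"   # hypothetical public relay (the Sinatra app's role)
LOCAL = "http://127.0.0.1:8080"      # service running on the home computer

while True:
    try:
        # The relay holds this request open until a traveller's request
        # arrives (long polling), then returns "<job id> <path>".
        with urllib.request.urlopen(f"{RELAY}/next", timeout=70) as resp:
            job_id, path = resp.read().decode().split(" ", 1)
    except OSError:
        time.sleep(1)                # timed out or relay unreachable; poll again
        continue

    # Replay the traveller's request against the local service.
    with urllib.request.urlopen(LOCAL + path) as local_resp:
        body = local_resp.read()

    # Push the response back to the relay, keyed by job id.
    answer = urllib.request.Request(f"{RELAY}/answer/{job_id}", data=body, method="POST")
    urllib.request.urlopen(answer)
```

The relay's job is then just to pair a traveller's incoming request with the agent's waiting /next call and hand the /answer body back; that held-open-connection part is where an evented server like EventMachine copes better than a plain single-threaded Sinatra process.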

FTP over Satellite/High Latency connections

I use FTP on a daily basis to work on multiple websites, but when I try to work from home, my darned satellite internet has a latency of about 1000 ms. (It's craptastic service, I know, but there are no alternatives where I live.) So I was wondering if there is a way I can connect to my web server and transfer files in a way that accommodates this latency.
FTP "works", but it communicates very, very slowly, and it's a nightmare with multiple files. It takes the connection about 10-15 seconds to start the transfer, and another 5 seconds after the transfer is done. The transfer itself goes as fast as expected, but the handshake does not, as the server and client seem to need a lot of back-and-forth to negotiate the transfer. Worse, it seems to repeat this handshake for every individual file, which certainly doesn't help.
Is there any way I can tweak my FTP setup to make it work better over a high-latency connection? If not, are there any other protocols or transfer services I might be able to use that could handle such an issue? It's the main fault I find with my ISP, and there's not a lot I've been able to find that I can do about it...
Thanks
Sounds like a good case for using UDP rather than TCP-based protocols - e.g. uftp
A quote from the linked site: "especially useful for data distribution over a satellite link (with two way communication), where the inherent delay makes any TCP based communication terribly inefficient".
A few options:
Sneaker-net. Use a USB key.
SCP. I'm almost positive it'll only authenticate/handshake once.
Tunnelling over SSH. The poor man's VPN. You'll be able to tunnel FTP or anything you like over the SSH connection. It'll be as fast as you're going to get and is very secure to boot.
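To illustrate the "authenticate once, then reuse the session" point for SCP/SFTP, here is a minimal sketch in Python using the paramiko library (host, credentials, and paths are placeholders):

```python
# SFTP over a single SSH session: one handshake/authentication, then any
# number of file transfers reuse the same connection.
# Host, credentials, and paths are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("webserver.example.com", username="deploy", password="secret")

sftp = client.open_sftp()
for name in ("index.html", "style.css", "app.js"):
    # Each put() reuses the already-established, authenticated session,
    # so there is no per-file reconnection or login handshake.
    sftp.put(name, f"/var/www/site/{name}")

sftp.close()
client.close()
```

Each transfer still costs a few round trips to open and close the remote file, but the expensive connection setup is paid once, which is where FTP was losing its 10-15 seconds per file.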

How do I monitor what commands my FTP application is sending to an FTP server

Is there a way to monitor the FTP port so that I can know what commands my FTP application is sending to a FTP server?
I am using a closed-source FTP client application which is not working with a closed-source FTP server application. The client and the server are not communicating well with each other, and I would like to find out why. I want to reverse-engineer the client to see what commands it is sending to the server. I previously used a web test tool that let me monitor the content going over HTTP, but I can't seem to find such a tool for FTP. I'd appreciate it if you can help me out, thanks.
Sounds like you need a packet sniffer - assuming your network admins/company policy allow it... I have used Wireshark fairly successfully before.
The core FTP commands should be visible in the packets.
You can use the Wireshark application: http://www.wireshark.org/
It should have decent parsing capabilities for FTP as well as other protocols.
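If a scriptable alternative to Wireshark is useful, here is a small sketch using the scapy library (my suggestion, not something mentioned above) that prints whatever goes over the FTP control channel:

```python
# Print FTP control-channel traffic (commands and replies) as it happens.
# Like any packet sniffer, this needs root/administrator privileges.
from scapy.all import Raw, TCP, sniff

def show_ftp(pkt):
    # Only the control channel (port 21) carries commands such as USER,
    # PASS, PASV, RETR; file contents travel on separate data connections.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        print(pkt[Raw].load.decode("ascii", errors="replace").rstrip())

sniff(filter="tcp port 21", prn=show_ftp, store=False)
```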
Can you configure a proxy in the client? Then you could install an FTP proxy server and use its logging to see what's going on.
There's a proxy server for Linux here: http://frox.sourceforge.net/doc/FAQ.html
Paul.
Do you have access to the FTP server logs? It's likely those commands would be logged there.
If they aren't, your next option would be to configure the server to log them, if you have access.
If that's not an option, or the server does not log such things, then you have to go with either a packet sniffer or a proxy, as suggested by the previous posters.
On Unix, tcpdump might be your friend. Maybe you should first state which OS you're targeting, though.
If you have the ability (often requiring root access) to use a packet sniffer, tcpflow sniffing the TCP control channel will show you the commands and responses going back and forth in an easy-to-read format.
If you don't have such access, tools such as ktrace and strace will allow you to see all data read and written on the socket for this connection, though it will be a little work to extract it.
If you could tell us just what tool you were using for HTTP traffic, that would allow us to look for something similar for FTP traffic.
