I'm looking for a standalone library to access NFS shares.
I am not looking to mount the shares, just to browse them and read files.
Preferably something with a simple API similar to the regular POSIX operations opendir, scandir, read, etc.
Thanks in advance!
Here's a link to an NFS client library; it looks promising. To quote:
The NFS client handles only one connection at a time, but no connection takes
very long.
Read requests must be for under 8000 bytes. This has to do with packet size.
You don't want to know.
Once 256 files are open simultaneously -- by all applications, since the client
does not discriminate between requests in any way -- file handles begin to be
overwritten. The client prints an error.
If the client has problems opening sockets it quits gracefully, including
returning a message over the socket to the application. The exception is if
it is given a bad hostname to mount, in which case it just responds with failure
rather than quitting.
If the formatting of the code looks messed up, it's because the code was written
half on a Mac (tab = 4 spaces).
Here is another link that might explain the limitation of 256 simultaneously open files; see B3 of the FAQ on sourceforge.net...
Edit: Here's a question posted here on Stack Overflow about recursively reading a directory; it could easily be adapted to use scandir...
There is now a libnfs library on github: https://github.com/sahlberg/libnfs
I see it has Debian and FreeBSD packages.
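For illustration, browsing an export with libnfs's synchronous API looks roughly like the sketch below. The server name and export path are placeholders, and signatures have changed between libnfs versions, so check nfsc/libnfs.h for your release:

#include <stdio.h>
#include <nfsc/libnfs.h>

/* Sketch: list the root of an NFS export. "server" and "/export"
   are placeholders. */
int main(void)
{
    struct nfs_context *nfs = nfs_init_context();
    if (nfs == NULL)
        return 1;

    if (nfs_mount(nfs, "server", "/export") != 0) {
        fprintf(stderr, "mount failed: %s\n", nfs_get_error(nfs));
        nfs_destroy_context(nfs);
        return 1;
    }

    struct nfsdir *dir;
    if (nfs_opendir(nfs, "/", &dir) == 0) {            /* analogous to opendir() */
        struct nfsdirent *ent;
        while ((ent = nfs_readdir(nfs, dir)) != NULL)  /* like readdir() */
            printf("%s\n", ent->name);
        nfs_closedir(nfs, dir);
    }

    nfs_umount(nfs);
    nfs_destroy_context(nfs);
    return 0;
}

There are matching nfs_open/nfs_read/nfs_close calls for reading file contents, which is about as close to the POSIX feel you asked for as it gets.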
Is it possible (for example with C++, but it does not really matter) to create a bridge/proxy application to get the data requested by another application? To be more specific, I'm talking about an Adobe Air based game. (I want to create a report with stats based on the data acquired, but that is not actually part of this question.)
Rather than a simple "boolean" answer, please provide a link to an example or documentation. Thanks
It would always be possible, but depending on your target operating system it may require a fair amount of effort, which raises the question: is there a reason you cannot use Fiddler or some packet-sniffing software for your target OS?
You can write a proxy by hand; in Python it can be quite easy. All you have to do is set localhost as the proxy, then forward each request and pass the response back to the calling socket.
I started writing something like this some time ago. The idea was to write a simple replacement for dansguardian.
I've uploaded it on github, so you can take a look and see whether it helps.
I don't remember it well (I started writing it last year), but perhaps with some modification it can fit your requirements.
Conceptually, this is your configuration:
app_client -> [app_channel] -> proxy -> [server_channel] -> app_server
Your proxy starts a server socket, and the app_client connects to it. This is your app_channel. Now your proxy creates a connection to the app_server. This is your server_channel.
Now start two threads: one reads from the app_channel and writes to the server_channel; the other reads from the server_channel and writes to the app_channel.
This will create a transparent connection to the app_server via your proxy. You can extract the data as you wish. If the data is encrypted though, there's very little you can actually do by way of analysis.
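If you write the pump yourself, a minimal sketch of the two threads in C could look like this (the listening port 8888, the app_server address 203.0.113.10, and port 80 are all placeholders; error handling is omitted for brevity):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One direction of the pipe: copy bytes until either side closes. */
struct pump { int from, to; };

static void *pump_thread(void *arg)
{
    struct pump *p = arg;
    char buf[4096];
    ssize_t n;
    while ((n = read(p->from, buf, sizeof(buf))) > 0)
        if (write(p->to, buf, n) != n)
            break;
    shutdown(p->to, SHUT_WR);   /* propagate EOF to the other side */
    return NULL;
}

int main(void)
{
    /* app_channel: the app_client connects to us on localhost. */
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in local = { .sin_family = AF_INET,
                                 .sin_port = htons(8888),
                                 .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    bind(listener, (struct sockaddr *)&local, sizeof(local));
    listen(listener, 1);
    int app_channel = accept(listener, NULL, NULL);

    /* server_channel: we connect onward to the app_server. */
    int server_channel = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in remote = { .sin_family = AF_INET,
                                  .sin_port = htons(80) };
    inet_pton(AF_INET, "203.0.113.10", &remote.sin_addr);
    connect(server_channel, (struct sockaddr *)&remote, sizeof(remote));

    /* One thread per direction. */
    struct pump up   = { app_channel, server_channel };
    struct pump down = { server_channel, app_channel };
    pthread_t t1, t2;
    pthread_create(&t1, NULL, pump_thread, &up);
    pthread_create(&t2, NULL, pump_thread, &down);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Compile with -pthread. Anything you want to log for your stats report can be captured inside pump_thread before the write.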
My first question here on Stack Overflow: what do I need to do so that the SSH SOCKS 5 proxy (SSH2) will allow multiple connections?
What I have noticed is that when I load a page in Firefox (already configured to use the SOCKS 5 proxy), it loads everything one by one. This is visible to the naked eye, and I have also confirmed it through Firebug's NET tab, which logs the connections as they are made.
I have already configured some of the directives on the about:config page, like pipelining, persistent proxy connections, and a few other things. But I still get this sequential loading of resources, which is noticeably very slow.
network.http.pipelining;true
network.http.pipelining.maxrequests;8
network.http.pipelining.ssl;true
network.http.proxy.pipelining;true
network.http.max-persistent-connections-per-proxy;100
network.proxy.socks_remote_dns;true
My ISP sucks: during the day it intentionally breaks connections on a random basis, so it is impossible to accomplish meaningful work without a lot of browser refreshes (hitting the F5 key). That is why I started looking for solutions.
SSH's dynamic port forwarding is the best solution I have found to date, because it offers pretty good compression, which saves a lot of useless traffic, and it is also secure. The only thing remaining is to get multiple connections running through it.
Thanks for all the inputs.
I have had the same thought, and my conclusion is that multiple connections should already be going through the SOCKS proxy. If you run the SSH connection with the -vvv flag (e.g. ssh -vvv -D 1080 user@host), you'll notice it opening different ports for the different requests.
I think it may have something to do with SSH-over-TCP itself; plus, perhaps, some extra inefficiencies and/or bugs in the implementations. Are you using only OpenSSH on Mac OS X / *BSD / Linux, or is this PuTTY on Windows?
Your situation is actually pretty much exactly why SCTP was developed (as a TCP replacement): it has a notion of multiple streams within a single connection.
Hopefully we'll have SSH over SCTP readily available one day. The best part about SCTP is that it still works over IPv4, i.e. it is supposedly mostly a matter of the end hosts supporting it, so, unlike with IPv6, you wouldn't have to wait for your lazy ISP (at least, theoretically).
I have implemented a named pipe server that communicates with multiple named pipe clients. Generally it works, but in some instances the client does not get a valid result from TransactNamedPipe. The GetLastError code returned is 998 (ERROR_NOACCESS, "Invalid access to memory location"), which is weird, because the handle I passed to TransactNamedPipe was valid, straight from CreateFile.
I have implemented the client to retry when it detects an error (unless the pipe server is not alive). For other error codes (997, 230, 231) it works fine. But when it encounters error code 998, no matter how many times it retries, the named pipe server does not respond; in the named pipe server logs, it just says that the client disconnected, but there was no data exchange.
What could be the reason behind this? Is it because the client requests are coming from multiple threads and the named pipe server cannot cope with the (almost) simultaneous requests? I also implemented "locks" to prevent simultaneous requests from the client to the named pipe server, but the error still occurs.
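For reference, the client-side call pattern is roughly the following sketch (the pipe name and payloads are made up for illustration; the real code is spread across threads):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the pipe end; this handle comes back valid. */
    HANDLE pipe = CreateFileW(L"\\\\.\\pipe\\my_pipe",
                              GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    /* TransactNamedPipe requires message-read mode on the client end. */
    DWORD mode = PIPE_READMODE_MESSAGE;
    SetNamedPipeHandleState(pipe, &mode, NULL, NULL);

    char request[] = "ping";
    char reply[512];
    DWORD got = 0;
    if (!TransactNamedPipe(pipe, request, sizeof(request),
                           reply, sizeof(reply), &got, NULL)) {
        /* This is where GetLastError() sometimes returns 998. */
        fprintf(stderr, "TransactNamedPipe failed: %lu\n", GetLastError());
    }
    CloseHandle(pipe);
    return 0;
}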
I have searched the web for named pipe communication with this similar problem, but so far, no results.
Thanks in advance
This is weird, indeed. I updated to the latest Windows SDK, pointed my project to it, and, without any changes to the code, it now works perfectly. It must have been a bug that's already been fixed. I was using the libs that came with VC++ 9.0.
If, for example, the socket in my compiled application is designed to connect to 123.456.789.0,
how do I check that it is connected to 123.456.789.0? Is there a way to do this?
The idea is this: I want to prevent other people from editing my program and changing the address to, for example, 127.0.0.1, which would make it connect through a proxy.
Is there any function/way/trick to check the address after the socket is connected?
Use the getpeername function to retrieve the address of the remote host.
If someone edits your program like you mention, they'll probably alter such a check as well though.
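A minimal sketch, assuming IPv4 and an already-connected socket (the expected_ip parameter is just for illustration):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Returns 1 if the peer matches expected_ip, 0 if not, -1 on error. */
static int peer_matches(int sockfd, const char *expected_ip)
{
    struct sockaddr_in peer;
    socklen_t len = sizeof(peer);

    /* Ask the OS which remote address this socket is really connected to. */
    if (getpeername(sockfd, (struct sockaddr *)&peer, &len) != 0)
        return -1;

    char actual[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &peer.sin_addr, actual, sizeof(actual));
    return strcmp(actual, expected_ip) == 0;
}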
nos's comment about the insecurity of this approach is correct, but incomplete. You wouldn't even need to change the program's code to circumvent your proposed mechanism.
The easiest way around it would be to add an IP alias to one of the machine's network interfaces. Then a program can bind to that interface on the port your program connects to, and the OS's network stack will happily send connections to the attacker's local program, not your remote one.
So, now you say you want to know how to list the computer's interfaces so you can detect this sort of subversion. Your opponent counterattacks, launching your program as a sub-process of theirs after installing a Winsock hook that routes Winsock calls back through the parent process.
We then expect to find you asking how to read the executable code section of a particular DLL loaded into your process space, so you can check that the code is what you expect. Now your opponent drops the Winsock shim, switching to an NDIS layer filter, rewriting packets from your program right before they hit the NIC.
Next we find you looking for someone to tell how to list the drivers installed on a Windows system, so you can check that one of these filters isn't present. Your opponent thinks for about 6 seconds and decides to start screwing with packet routing, selecting one of at least three different attacks I can think of off the top of my head. (No, wait, four.)
I'm not a security expert, yet I've spent five minutes on this and have already beaten your security seven different ways.
Are you doomed? Maybe, maybe not.
Instead of coming up with fixes for the risks you can see, it would be better to post a new question saying what it is you're trying to protect, and have the experts comment on the risks and possible fixes. (Don't add it here. Your question is already answered, correctly, by nos. This is a different question.)
Security is hard. Expertise counts for far more in that discipline than in most other areas of computer science.
I'm trying to make file I/O over a network drive (likely over a WAN or VPN) as reliable as possible for a native C++ Windows app...
What are the possible error conditions that I need to be able to handle?
How can I simulate these error conditions in testing?
How do I get detailed information on a particular error? For example, if fopen() fails, does errno tell me everything I need to know, or do I need to get at the GetLastError() value?
How do I reliably distinguish between "network drive access fully functional but the file doesn't exist" and various problems with the network or server?
One particular error condition I've noticed on my desktop (not specific to the app we're developing) is that sometimes the first attempt to access a file on a network drive fails, but subsequent attempts succeed, presumably because the failure triggers the drive to be reconnected in the background. I don't know what causes this, but it is an example of the kind of error condition I want to handle properly.
EDIT: This is for a legacy distributed application that uses files on network shares for communication between nodes. Some nodes may be unattended, so passing the error on to the end user may not be an option. The long term goal is to switch to a better protocol, but in the short term I'd like to make the file I/O as reliable as possible.
I believe you're approaching this from the wrong perspective. There's little one can do in the application itself to improve what is essentially a network filesystem driver problem, except perhaps implementing the networked I/O yourself. That being said, you would be better off choosing a networked filesystem suited to your needs. Look at this on Wikipedia.
Generally, your application should behave like the file is locally-stored. Don't try too hard to handle network problems. But if your choice of a network filesystem is good, then these problems can be automatically mitigated.
So I'd say you should settle for checking errno in case of errors, and perhaps fall back on local storage if writing a remote file fails (assuming the networked filesystem doesn't handle this itself).
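If you do want to classify failures yourself on Windows, here is a minimal sketch using documented Win32 error codes; the retry count and back-off are arbitrary placeholders, and the set of codes worth retrying will depend on your environment:

#include <windows.h>

typedef enum { OPEN_OK, OPEN_NOT_FOUND, OPEN_NETWORK_ERROR, OPEN_OTHER } open_result;

static open_result open_remote(const wchar_t *path, HANDLE *out)
{
    for (int attempt = 0; attempt < 3; ++attempt) {
        *out = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (*out != INVALID_HANDLE_VALUE)
            return OPEN_OK;

        switch (GetLastError()) {
        case ERROR_FILE_NOT_FOUND:   /* 2 */
        case ERROR_PATH_NOT_FOUND:   /* 3 */
            return OPEN_NOT_FOUND;   /* share reachable, file absent */
        case ERROR_BAD_NETPATH:      /* 53: network path not found */
        case ERROR_UNEXP_NET_ERR:    /* 59: unexpected network error */
        case ERROR_NETNAME_DELETED:  /* 64: share went away */
        case ERROR_SEM_TIMEOUT:      /* 121: transport timed out */
            Sleep(1000);             /* possibly transient: back off, retry */
            continue;
        default:
            return OPEN_OTHER;
        }
    }
    return OPEN_NETWORK_ERROR;
}

This also covers the "first access fails, second succeeds" reconnect behaviour you described, since transient network errors get a short retry instead of being reported immediately.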