How to implement Sandboxie - winapi

From what I've seen, Sandboxie creates a virtual space on the hard disk and only allows sandboxed programs to write there.
How can this be implemented in software?
Which Windows (kernel, shell?) functions need to be overridden?

Software like Sandboxie basically provides a virtual execution environment for (sandboxed) applications. It does this by virtualizing the file system and Registry (read/write/delete/exec operations), among other things. Such tools are also called feather-weight virtual machines, as they provide a virtual machine-like environment for individual applications. You can refer to these pages for more info:
http://sourceforge.net/projects/fvm-rni/ (open source app)
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.1367&rep=rep1&type=pdf (paper)
http://www.ecsl.cs.sunysb.edu/tr/TR224.pdf (paper)

I don't know how Sandboxie does it, but the usual way is through File System Filter Drivers:

A file system filter driver intercepts requests targeted at a file system or another file system filter driver. By intercepting the request before it reaches its intended target, the filter driver can extend or replace functionality provided by the original target of the request.
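
To give a sense of what that involves, here is a minimal minifilter skeleton. This is not Sandboxie's actual code; it merely registers a pre-create callback, which is where a sandbox could inspect or redirect open/create requests into its private store. It requires the Windows Driver Kit, builds as a kernel-mode driver, and needs an INF file and driver signing to install.

```cpp
// Minimal file system minifilter skeleton (requires the WDK).
#include <fltKernel.h>

PFLT_FILTER gFilterHandle = nullptr;

// Called before every IRP_MJ_CREATE reaches the file system. A sandbox
// would rewrite the target name here to point into its virtual store.
FLT_PREOP_CALLBACK_STATUS FLTAPI
PreCreate(PFLT_CALLBACK_DATA Data, PCFLT_RELATED_OBJECTS FltObjects,
          PVOID* CompletionContext)
{
    UNREFERENCED_PARAMETER(Data);
    UNREFERENCED_PARAMETER(FltObjects);
    UNREFERENCED_PARAMETER(CompletionContext);
    return FLT_PREOP_SUCCESS_NO_CALLBACK; // pass the request through
}

NTSTATUS FLTAPI FilterUnload(FLT_FILTER_UNLOAD_FLAGS Flags)
{
    UNREFERENCED_PARAMETER(Flags);
    FltUnregisterFilter(gFilterHandle);
    return STATUS_SUCCESS;
}

const FLT_OPERATION_REGISTRATION Callbacks[] = {
    { IRP_MJ_CREATE, 0, PreCreate, nullptr },
    { IRP_MJ_OPERATION_END }              // terminator
};

const FLT_REGISTRATION FilterRegistration = {
    sizeof(FLT_REGISTRATION),             // Size
    FLT_REGISTRATION_VERSION,             // Version
    0,                                    // Flags
    nullptr,                              // Context registration
    Callbacks,                            // Operation callbacks
    FilterUnload,                         // Unload routine
    // remaining optional callbacks default to null
};

extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject,
                                PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    NTSTATUS status = FltRegisterFilter(DriverObject, &FilterRegistration,
                                        &gFilterHandle);
    if (NT_SUCCESS(status)) {
        status = FltStartFiltering(gFilterHandle);
        if (!NT_SUCCESS(status)) FltUnregisterFilter(gFilterHandle);
    }
    return status;
}
```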

Related

How to create a software-implemented drive

There are some applications (let us call them providers), which (when running) provide a virtual file and directory structure under a new drive letter. Access requests from other processes to those files and directories are served by the provider.
One example of such provider could be the Google Drive for Windows (the new one, not the old Backup and Sync), which maps the contents of your Google Drive to a chosen drive letter.
I thought there should be some simple user-mode API that would allow my app to provide a new drive and serve the contents of its files and directories. I assumed many applications use such an API, but I cannot find it. The closest I could find are IFS (installable file system) drivers and file system filter drivers, but those are kernel-mode and seem too complex. They just don't seem designed for this task.
So, what API should I use to make a simple software-implemented drive?
In addition to the suggestions in the comments, there is also now the Projected File System, which allows software to provide a drive-like interface through callbacks rather than by creating an actual disk image. It is my understanding that Projected FS is how, for instance, SQL Server does its table-backed files interface.
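
To give a flavor of the ProjFS API, here is a hedged minimal sketch of a provider that projects an empty tree. PrjMarkDirectoryAsPlaceholder and PrjStartVirtualizing are the real entry points; the root path is an assumption, and a real provider would fill in the enumeration and file-data callbacks with actual content.

```cpp
// Minimal Projected File System provider sketch (Windows 10 1809+).
// Link with ProjectedFSLib.lib and Ole32.lib; the "Windows Projected
// File System" optional feature must be enabled. C:\VirtRoot is an
// assumed, pre-existing empty directory.
#include <windows.h>
#include <projectedfslib.h>
#include <cstdio>

// Stub callbacks: this provider projects an empty directory tree.
static HRESULT CALLBACK StartDirEnum(const PRJ_CALLBACK_DATA*, const GUID*) { return S_OK; }
static HRESULT CALLBACK EndDirEnum(const PRJ_CALLBACK_DATA*, const GUID*)   { return S_OK; }
static HRESULT CALLBACK GetDirEnum(const PRJ_CALLBACK_DATA*, const GUID*,
                                   PCWSTR, PRJ_DIR_ENTRY_BUFFER_HANDLE)
{
    return S_OK; // returning no entries: the directory appears empty
}
static HRESULT CALLBACK GetPlaceholderInfo(const PRJ_CALLBACK_DATA*)
{
    return HRESULT_FROM_WIN32(ERROR_FILE_NOT_FOUND); // no virtual files yet
}
static HRESULT CALLBACK GetFileData(const PRJ_CALLBACK_DATA*, UINT64, UINT32)
{
    return HRESULT_FROM_WIN32(ERROR_FILE_NOT_FOUND);
}

int wmain()
{
    PCWSTR root = L"C:\\VirtRoot";
    GUID instanceId;
    CoCreateGuid(&instanceId); // identifies this virtualization instance

    // Tag the backing directory so ProjFS recognizes it as a root.
    HRESULT hr = PrjMarkDirectoryAsPlaceholder(root, nullptr, nullptr, &instanceId);
    if (FAILED(hr)) return 1;

    PRJ_CALLBACKS cb = {};
    cb.StartDirectoryEnumerationCallback = StartDirEnum;
    cb.EndDirectoryEnumerationCallback   = EndDirEnum;
    cb.GetDirectoryEnumerationCallback   = GetDirEnum;
    cb.GetPlaceholderInfoCallback        = GetPlaceholderInfo;
    cb.GetFileDataCallback               = GetFileData;

    PRJ_NAMESPACE_VIRTUALIZATION_CONTEXT ctx = nullptr;
    hr = PrjStartVirtualizing(root, &cb, nullptr, nullptr, &ctx);
    if (FAILED(hr)) return 1;

    wprintf(L"Virtualizing %s - press Enter to stop\n", root);
    getchar();
    PrjStopVirtualizing(ctx);
    return 0;
}
```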

How to detect Windows file closures locally and on network drives

I'm working on a Win32-based document management system that employs an automatic check-in/check-out model. The model it currently uses for tracking documents in use (monitoring the processes of the applications that open the documents) is not particularly robust, so I'm researching alternatives.
Check outs are easy as the DocMgt application is responsible for launching the other application (Word, Adobe, Notepad etc) and passing it the document.
It's the automatic check-in requirement that is more difficult. When the user closes the document in Word/Adobe/Notepad, ideally the DocMgt system would be notified automatically so it can check the updated document back in.
To complicate things further the document is likely to be stored on a network drive not a local drive.
Anyone got any tips on API calls, techniques or architectures to support this sort of functionality?
I'm not expecting a magic three-line solution; the research I've done so far leads me to believe this is far from a trivial problem and will require significant work to implement. I'm interested in all suggestions, whether for a full or partial solution.
What you describe is a common task. It is perfectly doable, though not without its share of hassle. Here I assume that the files are closed on the computer where your code can run (even if the files are stored on the mounted network share).
There exist two approaches to controlling the files when they are used: the filter and the virtual filesystem.
The filter sits in the middle, between the process and the filesystem (any filesystem: local, network, or fully virtual) and intercepts file requests going to that filesystem. This requires that the filter code run on the computer through which the requests pass (a requirement that seems to be met in your scenario).
The virtual filesystem is an endpoint for the requests that come from the applications. When you implement the virtual filesystem, you handle all requests, so you always fully control the lifetime of the files. As the filesystem is virtual, you are free to keep the files anywhere including the real disk (local or network) or even in the cloud.
The benefit of the filter approach is that you can control individual files residing on real disks, while a virtual filesystem can be mounted only as a new drive letter or into an empty directory on an NTFS drive, which is not always feasible. At the same time, sitting in the middle, the filter is somewhat more restricted in what it can do, and the files can be altered while the filter is not running. Finally, filters are more complicated and potentially error-prone, as they sit in the middle and must play nicely with other filters and with endpoints.
I don't have specific recommendations, but if the separate drive letter is an option, I would recommend the virtual filesystem.
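
As a much lighter-weight illustration (not a replacement for a filter or virtual filesystem), a user-mode watcher can combine ReadDirectoryChangesW with an exclusive-open probe to infer that an editor has closed a document. The paths below are hypothetical, and the sharing probe is only a heuristic: some applications close and reopen files during saves, and directory notifications can be unreliable on network shares.

```cpp
// User-mode sketch: watch a directory for writes, then test for
// exclusive access to guess that the editing application has closed
// the file. Heuristic only; paths are assumptions.
#include <windows.h>
#include <cstdio>

// Returns true once the file can be opened with no sharing, i.e. no
// other process currently holds it open.
bool IsClosed(const wchar_t* path)
{
    HANDLE h = CreateFileW(path, GENERIC_READ, 0 /* no sharing */,
                           nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) return false;
    CloseHandle(h);
    return true;
}

int wmain()
{
    const wchar_t* dir = L"C:\\Docs";              // hypothetical
    const wchar_t* doc = L"C:\\Docs\\report.docx"; // hypothetical

    HANDLE hDir = CreateFileW(dir, FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              nullptr, OPEN_EXISTING,
                              FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (hDir == INVALID_HANDLE_VALUE) return 1;

    alignas(DWORD) BYTE buf[4096];
    DWORD bytes = 0;
    // Blocks until something in the directory is written to.
    while (ReadDirectoryChangesW(hDir, buf, sizeof(buf), FALSE,
                                 FILE_NOTIFY_CHANGE_LAST_WRITE,
                                 &bytes, nullptr, nullptr)) {
        // A write happened; if the document can now be opened
        // exclusively, assume the editor has closed it.
        if (IsClosed(doc)) {
            wprintf(L"%s closed - trigger check-in\n", doc);
            break;
        }
    }
    CloseHandle(hDir);
    return 0;
}
```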
Our company developed (and continues to maintain for the new owner) two products, CBFS Filter and CBFS Connect, which let you create a filter and a virtual filesystem respectively, all in user mode. These products are used in many software titles, including some document management systems (which is close to what you do). You will find both products on their website.

Running an untrusted application on Linux in a sandbox

We have a device running Linux and we need to run untrusted applications on this. We are trying to alleviate the following security concerns -
The untrusted application should not be able to adversely affect the core OS data and binaries
The untrusted application should not be able to adversely affect another application's data and binaries
The untrusted application should not be able to consume excessive CPU, memory or disk and cause a DoS/resource-starvation situation for the core OS or the other applications
From the untrusted application standpoint, it only needs to be able to read and write to its own directory and maybe the mounted USB drive
We are thinking of using one of the following approaches -
Approach 1 - Use SELinux as a sandbox
Is this possible? I have read a bit about SELinux and it looks complicated in terms of setting up a policy file, enforcing it at runtime, etc. Can SELinux do this: restrict the untrusted application to just reading/writing its own directory, and also set quota limits?
Approach 2 - Create a new sandbox on our own
During install time
Create a new app user for each untrusted application
Stamp the entire application directory and files with permissions so that only the application user can read and write
Set quotas for the application user using ulimit/quota
During run time, launch the untrusted application using
Close all open file descriptors/handles
Use chroot to set the root to the application directory
Launch the application under the context of the application user
Thoughts on the above? Is one approach more secure than the other? Is there another approach that might work better? Moving to Android is not an option for us, so we cannot use the sandboxing features that Android provides natively...
Let me know
Thanks,
SELinux is a set of rules that are applied a bit like user permissions, only more complex. You can use it to assign each process a domain and allow or deny nearly any access: to files, the network, or other processes/threads. In that way it can be used as a kind of sandbox. However, you have to prepare a rule set for each process, or write a script that sets up the rules before the sandboxed application runs.
If you want to control CPU consumption: SELinux has no CPU scheduler, because every rule produces just one of two logical results, 'allow' or 'deny'. For limiting CPU consumption I recommend cgroups.
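
For example, here is a hedged sketch of confining a process with cgroups v2 before exec'ing the untrusted binary. The group name, limits, and binary path are assumptions; it requires a cgroup2 mount at /sys/fs/cgroup and root (or delegated) privileges.

```cpp
// Sketch: create a cgroup v2 group with memory and CPU limits, move
// ourselves into it, then exec the untrusted binary (which inherits
// the limits). All names and limits below are illustrative.
#include <fstream>
#include <string>
#include <unistd.h>
#include <sys/stat.h>

int main()
{
    const std::string cg = "/sys/fs/cgroup/untrusted_app"; // hypothetical group
    mkdir(cg.c_str(), 0755);

    // Cap memory at 64 MiB and CPU at 20% of one core (20ms per 100ms).
    std::ofstream(cg + "/memory.max") << 64 * 1024 * 1024;
    std::ofstream(cg + "/cpu.max") << "20000 100000";

    // Move the current process into the group; children inherit it.
    std::ofstream(cg + "/cgroup.procs") << getpid();

    // Replace ourselves with the untrusted application (path assumed).
    execl("/opt/untrusted/app", "app", (char*)nullptr);
    return 1; // exec failed
}
```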
The Legato project uses higher-level sandboxing: it contains applications with chroot and bind mounts. A key feature is a formal declarative API, so application components can talk to system-service components under a managed security configuration. Services and applications can be added and removed as needed, as well as updated over the air. Application memory usage, processor share, storage, etc. are also closely managed. It claims to make application development easier.
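
The question's "Approach 2" launcher can be sketched in a few POSIX calls. This is a minimal illustration under assumed paths and IDs, not a complete sandbox: chroot alone is escapable by root processes, so dropping privileges (and ideally adding namespaces/seccomp on top) matters.

```cpp
// Sketch of the "Approach 2" launcher: close inherited descriptors,
// chroot into the application directory, drop to the per-app user,
// then exec. Paths and the uid/gid (1500) are assumptions.
#include <unistd.h>
#include <sys/resource.h>

int main()
{
    // 1. Close all inherited file descriptors above stderr.
    struct rlimit rl;
    getrlimit(RLIMIT_NOFILE, &rl);
    for (int fd = 3; fd < (int)rl.rlim_cur; ++fd) close(fd);

    // 2. Confine the process to the application directory.
    if (chroot("/apps/untrusted1") != 0) return 1;
    if (chdir("/") != 0) return 1;

    // 3. Drop privileges to the dedicated app user (gid before uid).
    if (setgid(1500) != 0 || setuid(1500) != 0) return 1;

    // 4. Launch the untrusted binary, now rooted inside the sandbox.
    execl("/bin/app", "app", (char*)nullptr);
    return 1; // exec failed
}
```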

What is a good framework for deploying a portable HTML/JavaScript Windows application?

I need to deploy an application onto some Windows machines for purposes of data collection from a group of people (i.e. the application will be used to gather responses to a series of survey questions). The process is interactive, alternating between displays of text and images with specific timing requirements. I have put together a prototype application using HTML and JavaScript that implements the survey. However, there are some unique constraints on the deployment environment that have me stuck:
While the machine is Internet-connected, the client requires that the survey application must run fully local to the PC that it runs on. Therefore, sending the survey results to a remote server is not permissible. Obviously, saving to a local file from a Web browser is typically not permitted for security reasons.
Installation of applications onto the machines that will run the survey is not permitted.
The configuration of the machines is not known specifically a priori, but I can assume some recent version of Windows with IE8+.
The "no remote access" requirement was a late comer, and has thrown a wrench into the plan of just writing a simple Web application that could post results to an HTTP server. I'm now looking for the easiest way forward. Two main approaches come to mind:
Use a GUI framework that provides a control that can display HTML/JavaScript; running a full-blown application on the PC would allow me to save the results to the filesystem. I've never done this, but it seems like in this day and age it shouldn't be too difficult. This would allow me to reuse much of my existing prototype implementation, but I would need some way of transferring the results (which would be stored in a JavaScript data structure) outside of the Web control to where the rest of the application could access it.
Reimplement the entire application using some GUI framework (I've used PyQt successfully before, although not on Windows). This approach is obviously less desirable than #1 due to the lack of reuse. However, it may be necessary if #1 isn't feasible.
Any recommendations for the best way to go? Ideally, I'm looking for a solution that can be run in a "portable" manner from a USB thumbdrive or similar.
Have you looked at HTML Applications (HTA)? They work in IE5+ and can use Windows Script Host to write to local drives and UNC shares...
Maybe you can use a portable web server with a server-side scripting language, for example Mongoose (http://code.google.com/p/mongoose/), which can run PHP, CGI, etc. scripts. Then simply create a script that saves a file to the hard drive, and leave the rest of the application as it is.
Use a script to start the web server, and perhaps a portable web browser like K-Meleon (http://kmeleon.sourceforge.net/), which is highly configurable, to open the application. Or open the system browser at your localhost URL.
The only problem may be that the user has to allow the server through the firewall the first time it runs.

How can Windows API calls to an application/service be monitored?

My company is looking at implementing a new VPN solution, but we require that the connection be maintained programmatically by our software. The VPN solution consists of a background service that seems to manage the physical connection and a command line/GUI utility that initiates the request to connect/disconnect. I am looking for a way to "spy" on the API calls between the front-end utility and the back-end service so that our software can make the same calls to the service. Are there any recommended software solutions or methods to do this?
Typically, communications between a front-end application and back-end service are done through some form of IPC (sockets, named pipes, etc.) or through custom messages sent through the Service Control Manager. You'll probably need to find out which method this solution uses, and work from there - though if it's encrypted communication over a socket, this could be difficult.
As Harper Shelby said, it could be very difficult, but you may start with Filemon, which can tell you when certain processes create or write to files; Regmon, which can do the same for registry reads and writes; and Wireshark to monitor the network traffic. This can get you some data, but even then, it may be too difficult to interpret in a way that would allow you to make the same calls.
I don't understand why you want to replace the utility, instead of simply running the utility from your application.
Anyway, you can run "dumpbin /imports whatevertheutilitynameis.exe" to see the static list of API function names to which the utility is linked; this doesn't show the sequence in which they're called, nor the parameter values.
You can then use a system debugger (e.g. WinICE or whatever its more modern equivalent might be) to set breakpoints on these APIs, so that you break into the debugger (and can then inspect parameter values) when the utility invokes them.
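
If you can inject code into the utility's process, another common technique is in-process API hooking, for example with Microsoft Detours (https://github.com/microsoft/Detours). Here is a hedged sketch that logs calls to one API; CreateFileW is just an illustrative target, not necessarily what the VPN utility uses.

```cpp
// Sketch: intercept a Win32 API in-process with Detours, built as a
// DLL and injected into the target utility. Logs calls and forwards
// them to the real API.
#include <windows.h>
#include <detours.h>
#include <cstdio>

static HANDLE (WINAPI* TrueCreateFileW)(LPCWSTR, DWORD, DWORD,
    LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileW;

HANDLE WINAPI HookedCreateFileW(LPCWSTR name, DWORD access, DWORD share,
    LPSECURITY_ATTRIBUTES sa, DWORD disp, DWORD flags, HANDLE tmpl)
{
    // Log the call, then forward to the real API.
    fwprintf(stderr, L"CreateFileW(%s)\n", name);
    return TrueCreateFileW(name, access, share, sa, disp, flags, tmpl);
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
        DetourTransactionCommit();
    } else if (reason == DLL_PROCESS_DETACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
        DetourTransactionCommit();
    }
    return TRUE;
}
```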
You might be able to glean some information using tools such as Spy++ to look at Windows messages. Debugging/tracing tools (WinDbg, etc.) may allow you to see API calls as they happen. The Sysinternals tools can show you system usage at some level of detail.
Although I would recommend against this for the most part -- is it possible to contact the solution provider and get documentation? One reason is fragility: if a vendor is not expecting users to rely on that aspect of the interface, they are more likely to change it without notice.
