MS Access (2016): Better to hard-code a network path or use the storage location of the database when deploying?

I am using MS Access 2016. I have a project which in part creates and moves folders around. I am currently using Application.CurrentPath & [form control value] & "\" (etc.) to store the created folders (and to move them when users are done). When I deploy I want to split the db so I can work on the front-end without disturbing users, and so that multiple users can work simultaneously. The back-end will be stored at a shared network location. Users will get a copy of the front-end on their PC to speed things up.
If I do it this way, the folders the solution creates will end up in the same directory as each user's local front-end (no bueno).
Should I change the code to hard-code some network location (which makes moving it a pain), or change the code to refer to the back-end location? If I refer to the back-end location, what does that code look like?
Many thanks in advance!
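One way to refer to the back-end location is to read it from the Connect property of a linked table in the front-end. A hedged VBA sketch (assuming the front-end links at least one table to the back-end; "tblAnyLinkedTable" is a placeholder name):

' Hedged sketch: derive the back-end folder from a linked table's Connect
' property. "tblAnyLinkedTable" is a placeholder for any table linked to
' the back-end.
Public Function BackEndFolder() As String
    Dim strConnect As String
    strConnect = CurrentDb.TableDefs("tblAnyLinkedTable").Connect
    ' Connect typically looks like ";DATABASE=\\Server\Share\backend.accdb"
    strConnect = Mid$(strConnect, InStr(strConnect, "DATABASE=") + Len("DATABASE="))
    BackEndFolder = Left$(strConnect, InStrRev(strConnect, "\"))
End Function

The folder code could then build paths as BackEndFolder() & [form control value] & "\", so moving the back-end later requires no code change.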

Related

What is the easiest way to migrate file permissions (SMB/AD)?

I botched a DC's AD/DNS pretty badly over the course of several years (of learning experiences), to the point where I could no longer join or leave the domain with clients. I have a NAS that used to plug into AD via SMB, and that is how all the users (my family) used to access their files.
I have recreated my infrastructure from scratch on Windows Server 2016, following best practices this time around. Is there any way to easily migrate those permissions to the equivalent users in the new domain/forest?
Could I possibly recreate the SIDs/GUIDs of the new users to match the old ones? I'm assuming not, because they contain a string that is unique to each Windows installation.
Could I possibly do this from the NAS side without having to go through each individual's files to change ownership?
Thank you.
One tool you can use to translate permissions from the original SIDs to the new SIDs is Microsoft's SubInACL.
SubInACL needs to be told which old SID corresponds to which new SID or username; it can then run the translation across all data on the NAS server. For example:
subinacl /subdirectories "Z:\*.*" /replace=S-1-5-1-2-3-4-5=NEWDOMAIN\newuser
How long the translation takes depends on the number of files and folders; if it's tens of thousands, expect hours.
There are also other tools, such as SetACL, or the PowerShell cmdlets Get-Acl/Set-Acl.
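For example, a hedged Get-Acl/Set-Acl sketch (the old SID and the new account are placeholders; test on a copy of the data first):

# Replace ACEs that belong to an old (orphaned) SID with the new account.
# On files from a dead domain, IdentityReference shows up as the raw SID string.
$oldSid = 'S-1-5-21-1111111111-2222222222-3333333333-1001'   # placeholder
$newAccount = 'NEWDOMAIN\newuser'                            # placeholder

Get-ChildItem -Path 'Z:\' -Recurse | ForEach-Object {
    $acl = Get-Acl -Path $_.FullName
    $changed = $false
    foreach ($ace in @($acl.Access)) {
        if ($ace.IdentityReference.Value -eq $oldSid) {
            $newRule = New-Object System.Security.AccessControl.FileSystemAccessRule(
                $newAccount, $ace.FileSystemRights, $ace.InheritanceFlags,
                $ace.PropagationFlags, $ace.AccessControlType)
            $acl.RemoveAccessRule($ace) | Out-Null
            $acl.AddAccessRule($newRule)
            $changed = $true
        }
    }
    if ($changed) { Set-Acl -Path $_.FullName -AclObject $acl }
}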
You cannot recreate objects with the original SIDs and GUIDs unless you restore the AD infrastructure, or clone/migrate the original identities into new ones that carry the original SID in the sidHistory attribute.
So if you're already running a domain controller with the NAS in a newly created forest, and the old forest suffered from the issues you wanted fixed, that option would probably be much more painful; it's easier to go for SID translation.

ACCESS_DENIED_ERROR when using NetFileClose API

I have a Windows network in which many files are shared across many users with full control. I have a folder on my system shared to everyone, so whenever I access it by machine name (Run → \\Servername) from another system, I can see the shared folder and open/write files in it.
But my requirement is to close any files on my system that are open over the network. So I used NetFileEnum to list the IDs of all open files, so that I can close them using the NetFileClose API.
The problem is that NetFileEnum returns what look like invalid junk IDs, such as 111092900 and -1100100090, which I can't use to close the files from another machine. So I listed the files opened over the network with the net file command, noted an ID (say 43), and hard-coded it into my call: NetFileClose("Servername", 43); But when I executed it, I got ACCESS_DENIED_ERROR. If the same code runs on the server itself, it closes the files successfully. I have given all users full control on the share.
Why the ACCESS_DENIED_ERROR, and why does NetFileEnum return invalid IDs? Is anything else needed for these APIs to work? How can I use them properly to close files opened over the network?
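For what it's worth, the Win32 documentation requires the caller of NetFileEnum/NetFileClose to be a member of the Administrators or Server Operators group on the target server; share permissions are irrelevant, which would explain the ACCESS_DENIED_ERROR. The "junk" IDs are likely valid unsigned DWORDs being printed as signed integers. A minimal C sketch (the server name is a placeholder):

// Hedged sketch: enumerate files opened over the network on a server and
// close them. Per the docs, the caller must be an Administrator or Server
// Operator ON THE SERVER, or these calls fail with ERROR_ACCESS_DENIED.
#include <windows.h>
#include <lm.h>
#include <stdio.h>
#pragma comment(lib, "netapi32.lib")

int main(void)
{
    FILE_INFO_3 *buf = NULL;
    DWORD entriesRead = 0, totalEntries = 0;
    DWORD_PTR resume = 0;

    // L"\\\\Servername" is a placeholder for the target machine.
    NET_API_STATUS status = NetFileEnum(L"\\\\Servername", NULL, NULL, 3,
                                        (LPBYTE *)&buf, MAX_PREFERRED_LENGTH,
                                        &entriesRead, &totalEntries, &resume);
    if (status != NERR_Success) {
        printf("NetFileEnum failed: %lu\n", status);
        return 1;
    }
    for (DWORD i = 0; i < entriesRead; i++) {
        // fi3_id is an unsigned DWORD; printing it as a signed int is what
        // makes some IDs look like negative "junk" values.
        wprintf(L"id=%lu path=%s user=%s\n",
                buf[i].fi3_id, buf[i].fi3_pathname, buf[i].fi3_username);
        NetFileClose(L"\\\\Servername", buf[i].fi3_id);  // real code should filter first
    }
    NetApiBufferFree(buf);
    return 0;
}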

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I manually set write permissions for the WebRole user RD001..., it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the surrounding stack, so I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in that script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure the data survives a reboot.
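For illustration only, such a startup task would be declared in ServiceDefinition.csdef and could call icacls; the role name, script name, and granted right below are placeholders, and the caveats above still apply:

<WebRole name="MyWebRole">
  <Startup>
    <Task commandLine="setperms.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WebRole>

rem setperms.cmd: grant modify rights on the log folder.
rem %RoleRoot% avoids hard-coding the E:\ drive letter mentioned below.
icacls "%RoleRoot%\approot\framework\log" /grant "Everyone:(OI)(CI)M"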
Recommendation: Don't write local, read below ...
EDIT: Got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config, then access it from PHP using a DLL reference, and you will have access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
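For reference, the local-storage allocation described in the first link looks roughly like this in ServiceDefinition.csdef (role and resource names are placeholders):

<WebRole name="MyWebRole">
  <LocalResources>
    <!-- Scratch space only: not persisted across instance rebuilds. -->
    <LocalStorage name="LogStorage" sizeInMB="100" cleanOnRoleRecycle="false" />
  </LocalResources>
</WebRole>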
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
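In the meantime, a rough sketch of the blob-storage variant using the phpazure SDK linked below; the account name, key, container, and even the exact class and method names should be treated as assumptions to verify against the SDK docs:

<?php
// Rough sketch, not verified: push the log file to Azure blob storage
// with the phpazure SDK instead of relying on the local disk.
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

// Placeholder account name and key.
$storageClient = new Microsoft_WindowsAzure_Storage_Blob(
    'blob.core.windows.net', 'youraccount', 'yourkey');
$storageClient->createContainerIfNotExists('logs');

// Upload the local log file as a blob named dev.log.
$storageClient->putBlob('logs', 'dev.log', 'E:\\approot\\framework\\log\\dev.log');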
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you control (Azure Storage, an external call-out, etc.).

What is my Eclipse-RCP application storing in $HOME/.eclipse, and how do I prevent it?

When I run my Eclipse RCP application, it creates a whole lot of directories in my $HOME/.eclipse directory. What is this?
I don't want the files there; how can I prevent them from being created there? The rationale for this: the application must run very clean and leave files in only one specific location (not $HOME/.eclipse).
I figured it was controlled by osgi.instance.area, so I tried setting that to different values (a directory, #none, #noDefault, etc.), but I can't stop the application from creating directories in $HOME/.eclipse. -data and other arguments work as expected.
On my system the only thing that is stored in .eclipse is the Equinox Secure Storage. Here is the blurb on the doc page for that:
By default, secure storage is located in your home directory. On Windows that typically resolves to "C:\Documents and Settings\<user name>\.eclipse\org.eclipse.equinox.security". This location is selected to allow multiple Eclipse-based applications to share the same secure storage.
If you would like to modify the location of the default secure storage, you can use the "-eclipse.keyring <file path>" runtime option. The <file path> is the path to the file used to persist the secure storage data.
Here is the online reference.
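For example, assuming your launcher is named myapp (the keyring path is an arbitrary placeholder):

myapp -eclipse.keyring /var/myapp/keyring.txt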

Good way to demo a classic ASP web site

What is the best way to save data in session variables in a classic ASP web site?
I am maintaining a classic ASP web site and want to allow my users to demo all functionality of the site; this means allowing them to delete records.
The closest example I have seen so far is the Telerik controls demo, where they save the dataset in session state on first load and allow the user to manipulate the data.
How can I achieve the same in ASP with an MS Access backend?
If you want to persist the state over multiple pages (e.g. to demo your complete application), then it's a bit tricky.
I would suggest copying the MDB file for each session and using the copied version. This would ensure that every session uses its own data.
1. Create a version of your Access DB to be used as a fresh template for each user.
2. On session start, copy the template and name it after the user's session ID.
3. Use the individual MDB for that session.
Note: The only drawback I can see is that you need to remove the unused MDB files, as they can pile up after a while. You could do it with a scheduled task, or even on session start, before you create the new copy.
I am not sure what you can use to check whether a copy is still in use, but check the file's creation date; the LDB lock file may help as well (if it does not exist, the database is not open).
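A minimal classic ASP sketch of the copy-per-session approach (paths and file names are placeholders):

<%
' Copy the template MDB for this session if it does not exist yet,
' then remember a connection string pointing at the private copy.
Dim fso, src, dst
Set fso = Server.CreateObject("Scripting.FileSystemObject")
src = Server.MapPath("/templates/demo.mdb")
dst = Server.MapPath("/tempdb/demo_" & Session.SessionID & ".mdb")
If Not fso.FileExists(dst) Then fso.CopyFile src, dst
Session("ConnString") = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & dst
%>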
You can store a connection string, or even an object, in a session variable, as long as you remember what kind of variable you stored when you retrieve it. I have never stored a dataset in a session variable, but I have stored a lot of arrays in them, so you can use the ADO GetRows method to load a complete result set into a session variable.
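For example (a hedged sketch; the table and column names are placeholders):

<%
' Load a result set into a 2-D array with GetRows and park it in the session.
Dim conn, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open Session("ConnString")
Set rs = conn.Execute("SELECT ID, Name FROM Customers")
If Not rs.EOF Then Session("DemoData") = rs.GetRows()
rs.Close
conn.Close
%>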
How big is the Access database? If your database is small enough (relative to the server capacity, expected number of users, and so forth) then I like the idea of using a fresh copy of the database for each user that runs the demo.
With this approach, you simplify your possible code paths. Otherwise this "are we in demo mode or not?" logic will permeate a heck of a lot of your code.
I'd do it like this...
1. When the user begins the demo, make a copy of the Access DB for that user to use. If your db is foo.mdb, copy it to /tempdb/foo_1234567890.mdb, where 1234567890 is the user's session ID.
2. Alter the user's connection string to point to the fresh database copy. From this point on, your app can operate like "normal" with no further modifications.
3. Have a scheduled task that deletes all files in /tempdb with last-modified times more than __ hours in the past. If you don't have the ability to schedule tasks on the server (perhaps you're in a shared hosting environment, etc.), then you could do this at the same time you do step #1.
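If the cleanup has to happen alongside step #1, a hedged classic ASP sketch (the 24-hour cutoff is an arbitrary placeholder):

<%
' Prune stale demo copies in /tempdb while creating a new one.
Dim fso, folder, f
Set fso = Server.CreateObject("Scripting.FileSystemObject")
Set folder = fso.GetFolder(Server.MapPath("/tempdb"))
For Each f In folder.Files
    If DateDiff("h", f.DateLastModified, Now()) > 24 Then
        On Error Resume Next   ' a locked MDB/LDB is still in use; skip it
        f.Delete
        On Error GoTo 0
    End If
Next
%>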
