I'm writing an updater for an MS Access-based application that has several linked .accdb files, which could be in various locations on the hard disk. We need to replace older .accdb files with newer ones periodically, but we won't necessarily know where they are, since the customer can move them around.
Since the updater operates outside of the Access code, I haven't found a good way to programmatically determine the locations of all linked .accdb files given a front-end file.
Is the only way to do this from inside MS Access itself? Or does anyone have a more clever way of determining the locations of these files?
Thanks in advance!
Assuming you know:
the linking .accdb
the names of the linked tables (in the sample they are Foo and Bar)
You can query the MSysObjects table via ODBC or OleDB e.g.
SELECT Database
FROM MSysObjects
WHERE ForeignName IN ("Foo", "Bar")
There might also be an Automation approach, but that will depend on what the updater is written in.
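If the updater can use ODBC directly, a minimal sketch in C against the ODBC API might look like the following. The front-end path and table names are placeholders, and note that some .accdb files deny reads on MSysObjects to outside callers until read permission has been granted from inside Access:

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;
    /* Hypothetical front-end path; compile with odbc32.lib */
    SQLCHAR conn[] = "Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
                     "Dbq=C:\\Apps\\FrontEnd.accdb;";
    SQLCHAR dbPath[512];
    SQLLEN len;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    if (SQL_SUCCEEDED(SQLDriverConnectA(dbc, NULL, conn, SQL_NTS,
                                        NULL, 0, NULL, SQL_DRIVER_NOPROMPT))) {
        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        /* For a linked table, the Database column holds the back-end path */
        SQLExecDirectA(stmt,
            (SQLCHAR *)"SELECT Database FROM MSysObjects "
                       "WHERE ForeignName IN ('Foo', 'Bar')", SQL_NTS);
        while (SQL_SUCCEEDED(SQLFetch(stmt))) {
            SQLGetData(stmt, 1, SQL_C_CHAR, dbPath, sizeof(dbPath), &len);
            printf("%s\n", dbPath);
        }
        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
    }
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}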
I built a very simple minifilter driver as part of a lesson on minifilters. I've also read the minifilter documentation that Microsoft provides in the form of a PDF, as well as this reference. These guides explain how to set up a context and an instance, but they do not explain why one would use a context and/or instance, or what they are for. My very small filter driver uses NULL for both context and instance and still operates, so I am wondering what the use case for these constructs is.
There are many reasons why you would want to use contexts for files, volumes, etc. Certainly filters and even file systems could operate without them, but the performance would be really bad.
Imagine this scenario: you are an AV (antivirus) vendor and want to scan files to check whether they contain malicious code.
You register your minifilter and callbacks, and now you are being called and need to make a decision on a file as it is opened.
There are a few steps involved:
You query the file name and security context
You read the file contents
Alternatively, hash the file with SHA-256 to see if it matches an entry in your AV database, for example
You check whether the file is digitally signed, also as part of your screening
You parse the file's PE header, if it has one, to see what kind of file or executable it is, to help you make your decision
You apply your policy to the file based on all of the information above
Now let's assume the file is clean and goes away. If you cannot hold on to the information you just learned about the file, the next time the file is opened you will have to redo all of that work. Your performance will suck, and your OS will slowly crash and burn to the ground.
This is where contexts come in handy.
Now that you have all this information about the file, you store all of it in your context that is then associated with this file. Next time you see the file you simply query its context and have all the information you need.
Of course, some things will need to be updated; for example, if you notice the file has changed, you mark it as dirty and update as needed on the next Create or Cleanup callback.
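To make the context part concrete, here is a heavily trimmed sketch of a post-create callback in C. The AV_STREAM_CONTEXT layout and the scanning steps are invented for illustration, and a real filter would also have to declare the context type in the FLT_CONTEXT_REGISTRATION array it passes to FltRegisterFilter:

#include <fltKernel.h>

typedef struct _AV_STREAM_CONTEXT {
    BOOLEAN Scanned;     /* have we already scanned this stream? */
    BOOLEAN Infected;    /* verdict from the last scan */
    UCHAR   Sha256[32];  /* hash computed on the first open */
} AV_STREAM_CONTEXT, *PAV_STREAM_CONTEXT;

FLT_POSTOP_CALLBACK_STATUS
AvPostCreate(PFLT_CALLBACK_DATA Data, PCFLT_RELATED_OBJECTS FltObjects,
             PVOID CompletionContext, FLT_POST_OPERATION_FLAGS Flags)
{
    PAV_STREAM_CONTEXT ctx = NULL;
    NTSTATUS status;

    UNREFERENCED_PARAMETER(Data);
    UNREFERENCED_PARAMETER(CompletionContext);
    UNREFERENCED_PARAMETER(Flags);

    /* Fast path: a context attached on an earlier open means we can
       reuse the cached verdict instead of re-scanning the file. */
    status = FltGetStreamContext(FltObjects->Instance, FltObjects->FileObject,
                                 (PFLT_CONTEXT *)&ctx);
    if (NT_SUCCESS(status)) {
        /* ... act on ctx->Infected ... */
        FltReleaseContext(ctx);
        return FLT_POSTOP_FINISHED_PROCESSING;
    }

    /* First time we see this stream: allocate, fill, and attach a context. */
    status = FltAllocateContext(FltObjects->Filter, FLT_STREAM_CONTEXT,
                                sizeof(AV_STREAM_CONTEXT), PagedPool,
                                (PFLT_CONTEXT *)&ctx);
    if (NT_SUCCESS(status)) {
        RtlZeroMemory(ctx, sizeof(AV_STREAM_CONTEXT));
        /* ... scan the file once, fill in Scanned/Infected/Sha256 ... */
        FltSetStreamContext(FltObjects->Instance, FltObjects->FileObject,
                            FLT_SET_CONTEXT_KEEP_IF_EXISTS, ctx, NULL);
        FltReleaseContext(ctx);
    }
    return FLT_POSTOP_FINISHED_PROCESSING;
}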
Alternatively, you could use a cache: when the file is closed for good and the Filter Manager wants to free the context you have associated with it, you save the contents yourself.
Then, the next time the file is opened, you look up the saved information (NTFS supports unique file IDs for this), associate it with the file again, and immediately know everything you need to know about that file.
This is only one usage, but from here you can think of many more scenarios where contexts are useful.
We are migrating our app to Windows 7. The program generates log files to help us with support, and it also saves a number of dictionary and settings files that are useful to the user, though the user will rarely, if ever, want to interact with these files outside of our application. They can, though, because they are CSV files. On the first pass I used the APPDATA\LOCAL\OURAPPLICATION folder as the destination. Now I am wondering whether it should be PROGRAMDATA\OURAPPLICATION.
I actually think the first choice is better, because everything I have read suggests that the PROGRAMDATA folder should be considered untouchable by the user, but as I am not a programmer I am not sure.
I hope this is the right place to ask this question.
The key point to consider is the scope of the data. If you are storing data that is associated with a specific user, use APPDATA; if you are storing data that is global to your program, use PROGRAMDATA.
Both APPDATA and PROGRAMDATA are hidden folders so the intent is for users not to be poking around in there (not that they couldn't if they wanted to).
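For illustration, here is a minimal C sketch that resolves both locations with SHGetKnownFolderPath (Vista and later, so it covers Windows 7; the OurApplication subfolder name is a placeholder):

#include <windows.h>
#include <shlobj.h>   /* SHGetKnownFolderPath; link shell32.lib, ole32.lib */
#include <stdio.h>

int main(void)
{
    PWSTR perUser = NULL, allUsers = NULL;

    /* Per-user, non-roaming data, e.g. C:\Users\<name>\AppData\Local */
    if (SUCCEEDED(SHGetKnownFolderPath(&FOLDERID_LocalAppData, 0, NULL, &perUser)))
        wprintf(L"per-user:  %ls\\OurApplication\n", perUser);

    /* Machine-wide data shared by all users, e.g. C:\ProgramData */
    if (SUCCEEDED(SHGetKnownFolderPath(&FOLDERID_ProgramData, 0, NULL, &allUsers)))
        wprintf(L"all users: %ls\\OurApplication\n", allUsers);

    CoTaskMemFree(perUser);
    CoTaskMemFree(allUsers);
    return 0;
}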
I'm attempting to open some database files used by a legacy application that I know almost nothing about. The databases appear to come in pairs of .bin and .idx files, for example Cust.bin and Cust.idx.
I have never seen this type of database before and wasn't able to find anything useful through Google. I also don't know what language or tool the developer used for this app; it seems that he used the default generic icon for his published executable.
Can anyone tell me anything about this application, what type of database it uses and how I might open the database myself?
The program that was using this database was a custom application written by a former consultant.
I never did figure out what type of database he was using, or how to open it properly. But I did manage to extract all the data out of it. I opened the file in EditPad and found that all records had fixed-length fields. With this knowledge I was able to easily write a small application to parse all the binary data and export everything to .csv files.
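For anyone in the same spot, a sketch of that parser in C might look like this. The 64-byte record size, field offsets, and column names are all invented for illustration; the real layout has to be worked out by inspecting the file, as described above:

#include <stdio.h>
#include <string.h>

#define RECORD_SIZE 64   /* hypothetical record length */

/* Copy a fixed-width field, trim trailing spaces/NULs, print as a CSV cell. */
static void put_field(const unsigned char *rec, int off, int width, int last)
{
    char buf[128];
    int n = width;
    memcpy(buf, rec + off, width);
    while (n > 0 && (buf[n - 1] == ' ' || buf[n - 1] == '\0')) n--;
    buf[n] = '\0';
    printf(last ? "%s\n" : "%s,", buf);
}

int main(void)
{
    unsigned char rec[RECORD_SIZE];
    FILE *f = fopen("Cust.bin", "rb");
    if (!f) { perror("Cust.bin"); return 1; }

    printf("Name,Address,Phone\n");            /* hypothetical columns */
    while (fread(rec, 1, RECORD_SIZE, f) == RECORD_SIZE) {
        put_field(rec,  0, 30, 0);             /* Name:    bytes 0-29  */
        put_field(rec, 30, 24, 0);             /* Address: bytes 30-53 */
        put_field(rec, 54, 10, 1);             /* Phone:   bytes 54-63 */
    }
    fclose(f);
    return 0;
}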
So I was ultimately able to get the data. Woot!
I am trying to devise a method for loading DLLs from a common location shared by various products, so that a directory structure like the following avoids file replication:
INSTALLDIR/Product1/bin
INSTALLDIR/Product2/bin
...
INSTALLDIR/ProductN/bin
Instead of replicating the DLLs in each product's bin directory above, I can create a DLL repository/directory, 'DLLrepo', in INSTALLDIR and make all product executables load from it. I am thinking of doing this by creating, in each product's bin directory, a hard link to each DLL in 'DLLrepo'. This addresses platforms from Windows XP onward, whereas the 'probing' method can only address Windows Server 2008 and above.
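For concreteness, each per-product link would be created with the Win32 CreateHardLink API (available since Windows 2000, so XP is covered); the paths below are placeholders:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* First argument is the new link to create, second is the existing file.
       Both must live on the same NTFS volume. */
    if (CreateHardLinkW(L"C:\\INSTALLDIR\\Product1\\bin\\common.dll",
                        L"C:\\INSTALLDIR\\DLLrepo\\common.dll", NULL))
        puts("link created");
    else
        printf("CreateHardLink failed: %lu\n", GetLastError());
    return 0;
}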
I'd like to get your opinion on whether this approach looks like a reasonable solution.
When we create a hard link to a file, Explorer and the DIR command don't report a valid size for the folder containing the link: they count the full data size of the linked file in the total size of the directory. This is a known issue in Windows, if I am not wrong. Is there any utility that I can use to verify the actual folder size? Is it possible to use 'chkdsk' on a directory path? Another thing I would like to know is how to get the list of links created on a file's data.
"When we create a hard link to a file, Explorer and the DIR command don't report a valid size for the folder containing the link: they count the full data size of the linked file in the total size of the directory. This is a known issue in Windows, if I am not wrong. Is there any utility that I can use to verify the actual folder size?"
I can provide an answer, of sorts, for this part of the question. When you create file hard links, there's not really any concept of which "file" is the original. Each of them points to the space on disk that the data occupies, and modifying the file via any one of these references affects the data seen when accessing it via any other hard link. As such, it's less a known "issue" and more a case of "this is how it works".
Consequently, there's no way to verify the "actual folder size" unless you look at the size of the highest common parent of the folders that contain the links. At that point you can single-count each hard-linked file to get an accurate idea of the space used on disk.
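Regarding the last part of the question, listing the links that share one file's data: on Vista and later you can enumerate them with FindFirstFileNameW/FindNextFileNameW, and on older systems you can at least get the link count from the nNumberOfLinks field that GetFileInformationByHandle returns. A minimal sketch, with a placeholder path:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    WCHAR name[MAX_PATH];
    DWORD len = MAX_PATH;
    HANDLE h = FindFirstFileNameW(L"C:\\INSTALLDIR\\DLLrepo\\common.dll",
                                  0, &len, name);
    if (h == INVALID_HANDLE_VALUE) {
        printf("FindFirstFileName failed: %lu\n", GetLastError());
        return 1;
    }
    do {
        /* Each name is one hard link; names are volume-relative,
           so the drive letter is prepended here for readability. */
        wprintf(L"C:%ls\n", name);
        len = MAX_PATH;
    } while (FindNextFileNameW(h, &len, name));
    FindClose(h);
    return 0;
}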
Simple question... I have a VS 2005 solution that encompasses several Reporting Services projects. Currently, each project has its own shared data source, which makes changing the database target very tedious.
Is there a way to share the data source across the entire solution (i.e., have all the projects in the solution use a data source defined in one place)?
I thought I could create a project that just held one data source item and then make all of the other projects dependent upon it; however, the shared data source in the new project does not appear in the other projects for me to select.
Help! I have looked around the web for info, but not much is available. There must be a simple solution to this.
Thanks
I am sorry; I somehow overlooked your question when I posted the same one myself.
Nonetheless, a technique I am using is described in an answer to it. It feels a little shady and underhanded, but it seems to be working so far:
Make a new report project to hold your shared data source. I called mine Data Source.
Copy your shared data source (let's pretend it's called My Shared Data Source) to that new project.
If necessary, copy My Shared Data Source to each actual report project and link things up the way you want. But probably you're already set up like this.
Close Visual Studio, both to make sure all changes are saved to the file system and to make sure it doesn't end up clobbering the "backstage" edits we make next.
In plain old Windows Explorer (or whatever), delete the My Shared Data Source.rds file from every project folder except Data Source's.
Using a text editor or XML-file editor, edit each project's .rptproj file to change the text of the Project.DataSources.ProjectItem.FullPath element from My Shared Data Source.rds to ..\Data Source\My Shared Data Source.rds.
Now each project still has its own reference to a data source, but all those references happen to point to the same underlying physical file, and thus they all share one data source specification.
According to this post by Paul Turley, it appears as if this is not possible. You'll have to copy the data source into each project. The good news is that if you deploy them to the same location, only one data source should exist on the server.
This may not be what you're thinking of, but when I'm writing an app consisting of several distinct applications accessing the same data, I usually take one of two approaches:
Write all of my data access logic into a Class Library project and reference it from the other projects.
Write my data access logic into a Web Service library and add a web reference.
I usually go for option 2 if the data I am accessing is likely to be used in future development, such as accessing company-wide customer lists, etc.