What does MIBCC.EXE do, exactly?
In some documentation I read that it is "the SNMP MIB Compiler", but I don't understand what that means.
Can you give an example of what it does?
Compiling a New or Updated MIB File by Using Mibcc.exe:
As explained earlier, the SNMP-related branches of the MIB tree are located in the internet branch of the tree. The internet branch contains public branches that are defined by the IETF and private branches that are defined by large organizations. When an organization creates its own subset of MIB branches and objects, or updates an existing MIB file, the new or updated MIB file must be created in compliance with SMI-prescribed data types.
If your organization adds a new MIB file or updates an existing one, use the Mibcc.exe tool to compile the MIB file so that the SNMP Management API (Mgmtapi.dll) can use the MIB objects in the new or updated MIB file. After you compile the MIB file, you can reference objects by their text object identifiers instead of their numeric object identifiers. The ASN.1 language is used to define the formats of the protocol data units (PDUs) that are exchanged by SNMP entities and to define the objects that are managed through SNMP. Mibcc.exe converts the ASN.1 MIB description into the binary Mib.bin file, which the Management API then uses to map text-based object names to numeric object identifiers.
You can find Mibcc.exe in the C:\Program Files\Resource Kit folder when you install the Windows Server 2003 Resource Kit companion CD. The Mib.bin file is located in systemroot on Windows Server 2003.
http://technet.microsoft.com/en-us/library/cc783142(WS.10).aspx
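For example, a compile run might look roughly like this (the file names below are placeholders and the exact command-line options vary by Resource Kit version, so check the tool's built-in help rather than relying on this literally):

C:\Program Files\Resource Kit> mibcc smi.mib mib_ii.mib mycompany.mib

The MIB files are listed in dependency order (the SMI definitions first, then the standard MIB-II objects, then your own enterprise MIB), and the result is an updated Mib.bin that the Management API loads to resolve text names such as sysUpTime to their numeric object identifiers.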
(Windows platform)
I have a complex local configuration file (a Dart file) full of objects, enums, etc. that I'd like to parse and load as objects.
Note: the data is full of nested complex objects; I don't want to spend ten years trying to convert it to JSON for no benefit, possibly ending up with something unserializable.
So basically the program loads and opens its default configuration file:
loadedConfig = default embedded config.
I want to read, write, and maintain a local Dart asset file and load its contents as loadedConfig.
There is plenty of information on converting to and from JSON, but there is no apparent information on loading a plain Dart (text) file and deserializing it into Dart objects. The closest I've found is storing simple numbers or lists of strings.
It really makes sense to have a local Dart file as a config file on a static platform.
Does anyone have the wisdom on how to do this, or a better unrelated approach? Cheers.
I tried to play around with Projected File System to implement a user mode ram drive (previously I had used Dokan). I have two questions:
Is this a read-only projection? I could not find any notification sent to me when opening the file from, say, Notepad and writing to it.
Is the file actually created on the disk once I use PrjWriteFileData()? From what I have understood, yes.
In that case what would be any useful thing that one could do with this library if there is no writing to the projected files? It seems to me that the only useful thing is to initially create a directory tree from somewhere else (say, a remote repo), but nothing beyond that. Dokan still seems the way to go.
The short answer:
It's not read-only but you can't write your files directly to a "source" filesystem via a projected one.
The WriteFileData method is used for populating placeholder files on the "scratch" (projected) file system, so it doesn't affect the "source" file system.
The long answer:
As stated in the comment by @zett42, ProjFS was mainly designed for a remote Git file system. The main goal of any file versioning system is to handle multiple versions of files. From this a question arises: do we need to overwrite the file inside a remote repository on every ProjFS file write? That would be disastrous. When working with Git you always write files locally, and they are not synced until you push the changes to a remote repository.
When you enumerate files, nothing is written to the local file system. From the ProjFS documentation:
When a provider first creates a virtualization root it is empty on the local system. That is, none of the items in the backing data store have yet been cached to disk.
Only after the file is opened does ProjFS create a "placeholder" for it in the local file system; I assume that it's a file with a special structure (not a real one).
As files and directories under the virtualization root are opened, the provider creates placeholders on disk, and as files are read the placeholders are hydrated with contents.
What "hydrated" is mean? Most likely, it represents a special data structure partially filled with real data. I would imaginge a placeholder as a sponge partially filled with data.
As items are opened, ProjFS requests information from the provider to allow placeholders for those items to be created in the local file system. As item contents are accessed, ProjFS requests those contents from the provider. The result is that from the user's perspective, virtualized files and directories appear similar to normal files and directories that already reside on the local file system.
Only after a file is updated (modified) does it stop being a placeholder - it becomes a "full file/directory":
For files: The file's content (primary data stream) has been modified. The file is no longer a cache of its state in the provider's store. Files that have been created on the local file system (i.e. that do not exist in the provider's store at all) are also considered to be full files.
For directories: Directories that have been created on the local file system (i.e. that do not exist in the provider's store at all) are considered to be full directories. A directory that was created on disk as a placeholder never becomes a full directory.
This means that on the first write the placeholder is replaced by the real file in the local FS. But how do we keep a "remote" file in sync with a modified one? (1)
When the provider calls PrjWritePlaceholderInfo to write the placeholder information, it supplies the ContentID in the VersionInfo member of the placeholderInfo argument. The provider should then record that a placeholder for that file or directory was created in this view.
Notice "The provider should then record that a placeholder for that file". It means that in order to sync the file later with a correct view representation we have to remember with which version a modified file is associated. Imagine we are in a git repository and we change the branch. In this case, we may update one file multiple times in different branches. Now, why and when the provider calls PrjWritePlaceholderInfo?
... These placeholders represent the state of the backing store at the time they were created. These cached items, combined with the items projected by the provider in enumerations, constitute the client's "view" of the backing store. From time to time the provider may wish to update the client's view, whether because of changes in the backing store, or because of explicit action taken by the user to change their view.
Once again, imagine switching branches in a git repository; you have to update a file if it's different in the other branch. Continuing with question (1): imagine you want to make a "push" from a particular branch. First of all, you have to know which files are modified. If you didn't record the placeholder info when the file was modified, you won't be able to do this correctly (at least for the git repository example).
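As a small bookkeeping sketch (the class and method names below are my own illustration, not part of the ProjFS API): record the ContentID you supplied in the placeholder's VersionInfo, keyed by relative path, so that a later modification can be matched to the version of the backing store it was hydrated from.

using System.Collections.Generic;

class PlaceholderBook
{
    // Maps a relative path under the virtualization root to the ContentID
    // that was supplied in that placeholder's VersionInfo.
    private readonly Dictionary<string, byte[]> versions = new Dictionary<string, byte[]>();

    public void Record(string relativePath, byte[] contentId)
    {
        versions[relativePath] = contentId;
    }

    public bool TryGetVersion(string relativePath, out byte[] contentId)
    {
        return versions.TryGetValue(relativePath, out contentId);
    }
}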
Remember that a placeholder is replaced by a real file on modification? ProjFS has an OnNotifyFileHandleClosedFileModifiedOrDeleted event. Here is the signature of the callback:
public void NotifyFileHandleClosedFileModifiedOrDeletedCallback(
string relativePath,
bool isDirectory,
bool isFileModified,
bool isFileDeleted,
uint triggeringProcessId,
string triggeringProcessImageFileName)
The most important parameter for us here is relativePath. It contains the name of the modified file inside the "scratch" (projected) file system. At this point you also know that the file is a real file (not a placeholder) and that it has been written to disk (that is, you won't be able to intercept the call before the file is written). Now you may copy it to the desired location (or do it later) - it depends on your goals.
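As a minimal sketch of what such a handler could do (everything except the callback signature above - the class, the field names, and the copy-back policy - is an assumption on my part, not something prescribed by ProjFS): once the notification arrives, the file under the virtualization root is already a full file on disk, so the provider can copy it back to its own backing store.

using System.IO;

class SyncBackProvider
{
    // Hypothetical fields: the virtualization ("scratch") root and the backing ("source") store.
    private readonly string scratchRoot;
    private readonly string backingRoot;

    public SyncBackProvider(string scratchRoot, string backingRoot)
    {
        this.scratchRoot = scratchRoot;
        this.backingRoot = backingRoot;
    }

    public void NotifyFileHandleClosedFileModifiedOrDeletedCallback(
        string relativePath,
        bool isDirectory,
        bool isFileModified,
        bool isFileDeleted,
        uint triggeringProcessId,
        string triggeringProcessImageFileName)
    {
        // Only react to files whose contents were actually changed (and still exist).
        if (isDirectory || !isFileModified || isFileDeleted)
            return;

        // The file under the virtualization root is now a "full file" on disk,
        // so it can simply be copied back to the backing store (or queued for a later sync).
        string localPath = Path.Combine(scratchRoot, relativePath);
        string backingPath = Path.Combine(backingRoot, relativePath);

        Directory.CreateDirectory(Path.GetDirectoryName(backingPath));
        File.Copy(localPath, backingPath, overwrite: true);
    }
}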
Answering question #2: it seems like PrjWriteFileData is used only for populating the "scratch" file system, and you cannot use it for updating the "source" file system.
Applications:
As for applications, you can still implement a remote file system (instead of using Dokan), but all writes will be cached locally instead of being written directly to a remote location. A couple of use-case ideas:
Distributed File Systems
Online Drive Client
A File System "Dispatcher" (for example, you may write your files in different folders depending on particular conditions)
A File Versioning System (for example, you may preserve different versions of the same file after a modification)
Mirroring data from your app to a file system (for example, you can "project" a text file with indentations to folders, sub-folders and files)
P.S.: I'm not aware of any undocumented APIs, but from my point of view (according to the documentation) we cannot use ProjFS for purposes like a ramdisk, or write files directly to the "source" file system without writing them to the "local" file system first.
I wrote a simple note-taking program that's nothing more than a dictionary mapping a key to a value, i.e.
$ hlp -key age -value 25
$ hlp age
25
and it just stores the information in a JSON file hard-coded to ~/.hlp.json. But I was wondering whether there is some standard location where I should be putting this file. Is there a standard location for databases like this?
A useful resource here is the hier(7) man page. (http://linux.die.net/man)
Data that is only going to be used by you belongs in $HOME, traditionally hosted under /home.
For something that is used to support the system itself, you'd be using /var. For applications that are just hosted on the system, you'd use /var/opt.
If the application is something big that could be replicated or moved to another system, you'd create a separate filesystem with a mount point outside any of those listed in hier(7). This could be a filesystem mounted from a SAN or NAS, which would help the mobility of the application.
Once you actually need to access the data from different machines, you'd have to move it to a network-accessible key/value store or SQL database.
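As a rough illustration of that guidance (the hlp paths here are hypothetical placements, and the snippet assumes a Unix-like system):

using System;
using System.IO;

static class DataPaths
{
    // Per the hier(7) guidance above: per-user data lives under $HOME,
    // while data for a system-wide installation would go under /var/opt/<package>.
    public static string HlpDataFile(bool perUser)
    {
        if (perUser)
        {
            string home = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
            return Path.Combine(home, ".hlp.json");
        }
        return "/var/opt/hlp/hlp.json";
    }
}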
I am very new to SNMP and I need to get "system uptime" using our own enterprise OID.
I have already obtained an IANA number and created a MIB file.
The problem is that when I use the snmpget command with our OID, I get an "object not found" error at the command prompt, although when I do an snmptranslate on our object, I get the exact OID of that object.
If any additional information is required from my side, please let me know.
When you use snmpget, an SNMP request is made via IP to an SNMP agent on a remote (or local) host to return a specific piece of data. A MIB is used to describe in human readable terms, what that data is and where to find it. On the other hand, snmptranslate is a tool used to parse a given MIB. It parses a local MIB file, and doesn't make any contact with an agent.
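To make the difference concrete (the module name, object name, and numeric OID below are placeholders for your own enterprise values):

$ snmptranslate -On -m +MY-ENTERPRISE-MIB MY-ENTERPRISE-MIB::myUptime.0
.1.3.6.1.4.1.99999.1.1.0
$ snmpget -v2c -c public -m +MY-ENTERPRISE-MIB localhost MY-ENTERPRISE-MIB::myUptime.0

The first command only parses MIB files on the local disk and prints the numeric OID, so it can succeed even when no agent knows about the object. The second command sends an actual GET request to the agent on localhost, and it will keep failing until that agent has been extended to serve the OID.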
Since you mentioned creating a new MIB, I assume you're trying to add new functionality to an SNMP agent. To do this, the agent must be extended. If you're using Net-SNMP, there are a few options, including compiling new source code into the agent, using a sub-agent, and using external scripts via the pass and pass_persist protocols (a minimal pass example follows the links below). Take a look at:
http://www.net-snmp.org
http://vincent.bernat.im/en/blog/2012-extending-netsnmp.html
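As a small illustration of the pass approach (the enterprise OID, script path, and use of /proc/uptime below are placeholders, not something your MIB requires): in snmpd.conf you hand a subtree over to an external command,

pass .1.3.6.1.4.1.99999 /usr/local/bin/my_uptime.sh

and the command answers a GET by printing three lines: the OID, the type, and the value.

#!/bin/sh
# snmpd invokes this as "my_uptime.sh -g <oid>" for a GET (and "-n <oid>" for GETNEXT).
if [ "$1" = "-g" ]; then
    echo "$2"                                        # the OID being queried
    echo timeticks                                   # the SNMP type
    awk '{ printf "%d\n", $1 * 100 }' /proc/uptime   # uptime in hundredths of a second
fi

After restarting snmpd, an snmpget against that OID should return the uptime value instead of an error.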
For an application I'm writing, I want to programmatically find out what computer on the network a file came from. How can I best accomplish this?
Do I need to monitor network transactions or is this data stored somewhere in Windows?
When a file is copied to the local system, Windows does not keep any record of where it was copied from. So unless the application that created it saved such information in the file, it will be lost.
With file auditing, file and directory operations can be tracked, but I don't think that will include the source path for file copies (just who created the file and when).
Yes, it seems like you would either need to detect the file transfer by intercepting network traffic, or, if you have the ability to alter the file in some way, use public-key cryptography to sign files with a machine-specific key before they are transferred.
Create a service on either the destination computer or on the file-hosting computers that adds records to an Alternate Data Stream attached to each file, much the way Windows handles the zone information (Zone.Identifier) for files downloaded from the internet.
You can have a background process on machine A which "tags" each file as having come from machine A on such-and-such a date and time. Then, when machine B downloads the file, it can see the tag from A, assuming both are using NTFS file systems. Or, if you can't have a process on the server, you can use NTFS streams on the "client" side via packet-sniffing methods, as others have described. The bonus here is that future file copies will retain the data as long as they stay between NTFS systems.
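For example, on an NTFS volume you can attach a small tag to a file as an alternate data stream and read it back from a command prompt (the stream name origin and the tag text are just illustrations):

echo copied-from=MACHINE-A> C:\share\report.docx:origin
more < C:\share\report.docx:origin

Tools that only look at the main data stream never see the tag, but as long as the file stays on NTFS volumes (including after a copy to machine B) the stream travels with it, which is the same mechanism Windows uses for the zone information on downloaded files.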
Alternative: require that all file transfers be done through a web portal (as opposed to network drag-and-drop), which gives you built-in logging, or through some other type of file-retrieval proxy. Do you have control over procedures like this?