I used the variant datatype to get a list of VisualSVN_PermissionEntry objects from a WMI query and stored it in a SAFEARRAY.
Now the SAFEARRAY contains the list of VisualSVN_PermissionEntry objects. How can I read the values from the objects in the SAFEARRAY?
Thanks...
I read that in order to populate binary values in an INSERT query you need to create a PreparedStatement and then use the setBytes() API to set the byte array as the binary parameter.
My problem is that when I do this, I get "data exception: String data, right truncation".
I read that this can happen if you supply a value larger than the declared column size. But here I am using a very small byte array ("s".getBytes()).
I also tried setBinaryStream(), but with the same result!
I also tried setting a null value. I still get the same error.
The length of the VARBINARY or LONGVARBINARY column must be large enough to accept the data you are inserting. Your CREATE TABLE statement can use VARBINARY as the column type, which allows up to 16MB per data item.
If you declare the column as BINARY without a length, it defaults to BINARY(1), which means only one byte is allowed.
I am trying to construct a Recordset object with multiple Recordsets inside it, using the .NextRecordset method, but I am having trouble. Specifically, executing .NextRecordset raises the error "Current provider does not support returning multiple recordsets from a single execution."
Dim oRs As ADODB.Recordset
Set oRs = New ADODB.Recordset
oRs.CursorLocation = adUseClient
oRs.Fields.Append "hello1", adVarChar, 100, adFldUpdatable
oRs.Fields.Append "hello2", adVarChar, 100, adFldUpdatable
oRs.Open , , adOpenStatic, adLockOptimistic
oRs.AddNew
oRs.Fields("hello1") = "234"
oRs.Fields("hello2") = "234"
Set oRs = oRs.NextRecordset ' BLOWS UP
' Add some columns + rows to this recordset
The additional complication is that I need to do this in C# (via Interop), but I'd be happy to first figure it out in VB6.
So, is it possible to do what I want?
I think the closest thing to what you seem to want is a hierarchical Recordset in ADO. These can go multiple levels deep, or just two levels (a Recordset containing Chapter fields), as in your request:
Regardless of which way the parent Recordset is formed, it will contain a chapter column that is used to relate it to a child Recordset. If you wish, the parent Recordset may also have columns that contain aggregates (SUM, MIN, MAX, and so on) over the child rows. Both the parent and the child Recordset may have columns which contain an expression on the row in the Recordset, as well as columns which are new and initially empty.
You can nest hierarchical Recordset objects to any depth (that is, create child Recordset objects of child Recordset objects, and so on).
You can access the Recordset components of the shaped Recordset programmatically or through an appropriate visual control.
The key to this is using the Data Shaping Service, an OLEDB Provider which "rides on top of" your underlying Provider (even if only the local Cursor Service Provider implied when using client-side cursors).
Some description and a crude example can be found in the How To Create Hierarchical Recordsets Programmatically Support article.
More details and reference material can be found at Data Shaping, including the SQL-like language used to define Shape commands.
Or are you asking about paged Recordsets, as in PageSize Property (ADO) instead?
I need an efficient data structure to generate IDs. The IDs should be able to be released using a method in the data structure. After an ID was released it can be generated again. The data structure must always retrieve the lowest unused ID.
What efficient data structure can be used for this?
Can't you just increment an integer and return that, with appropriate concurrency control? When someone releases an integer back, store it in a separate sorted data structure. If the list of returned integers is empty, then generating an ID is as simple as read, increment, write, return. If the list of returned integers is not empty, then just read, return, and remove the first integer from that list.
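The counter-plus-sorted-structure idea above can be sketched in Java with a `PriorityQueue` as the min-heap of released IDs; the class and method names here (`IdPool`, `acquire`, `release`) are made up for illustration:

```java
import java.util.PriorityQueue;

// A counter hands out fresh IDs; a min-heap holds released IDs so that
// acquire() always returns the lowest unused ID. Released IDs are always
// smaller than the counter, so polling the heap first is correct.
public class IdPool {
    private int next = 0;                                                 // next never-used ID
    private final PriorityQueue<Integer> released = new PriorityQueue<>(); // freed IDs, smallest first

    public int acquire() {
        if (!released.isEmpty()) {
            return released.poll(); // reuse the smallest released ID, O(log n)
        }
        return next++;              // otherwise hand out a fresh one, O(1)
    }

    public void release(int id) {
        released.add(id);           // O(log n) insert keeps the heap ordered
    }

    public static void main(String[] args) {
        IdPool pool = new IdPool();
        pool.acquire();                     // 0
        int b = pool.acquire();             // 1
        pool.acquire();                     // 2
        pool.release(0);
        pool.release(b);
        System.out.println(pool.acquire()); // 0 — lowest released ID comes back first
        System.out.println(pool.acquire()); // 1
        System.out.println(pool.acquire()); // 3 — heap empty again, counter resumes
    }
}
```

Note this sketch does not guard against double-releasing the same ID; a `TreeSet` instead of a `PriorityQueue` would give that for free at the same asymptotic cost.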
Many times I have come across a situation where there is a loop, and a new object is constructed at the beginning of the loop and added to a collection. For example, in pseudocode:
iterating over a resultset do
create an object
set instance data in object to some resultset data
put object in collection
next
How about this approach instead?
create an object
iterating over a resultset do
set instance data in object to some resultset data
put object in collection
next
What are the pros and cons of the two approaches? Which is faster? Is there a better way than either?
P.S.: I don't know what tags to use. Pardon me.
Depending on the language you are using to implement this, you will get different results.
In some languages, collections store copies of objects, so the first option will do what you expect: a new object is created each iteration and appended to the collection with its own values.
iterating over a resultset do
create an object
set instance data in object to some resultset data
put object in collection
next
But if the language uses reference semantics and you try the second method:
create an object (x)
iterate over resultset do
set instance data in object (x) to resultset data (returns a reference to x with updated data)
put object in collection (puts a reference to x in collection)
next
After iterating over the resultset, you will end up with a collection full of references to the same object, whose values are whatever was assigned last.
Your final pseudocode should be more like:
create an object
iterating over a resultset do
set instance data in object to some resultset data
**clone** object into collection
next
Otherwise you would simply end up with many references to your final object, since you kept modifying the same object you added references to, and you don't really save any object space or creation time. The second method is perfectly valid if for some reason you prefer not to re-initialize on every iteration, such as when you change fewer members per loop than you would otherwise have to initialize. Just make sure your object's copy constructor is robust.
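The difference can be demonstrated in a reference-semantics language like Java; the `Row` class and values here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Reusing one mutable object and adding it repeatedly stores N references
// to the same instance; creating a fresh object per iteration stores N
// independent values.
public class LoopDemo {
    static class Row {
        int value;
        Row(int value) { this.value = value; }
    }

    public static void main(String[] args) {
        int[] resultSet = {10, 20, 30}; // stand-in for resultset rows

        // Second approach, naive: one object, mutated each iteration.
        List<Row> shared = new ArrayList<>();
        Row row = new Row(0);
        for (int v : resultSet) {
            row.value = v;
            shared.add(row); // the same reference is added three times
        }
        // Every element now shows the last value written.
        System.out.println(shared.get(0).value + " " + shared.get(1).value); // 30 30

        // First approach: a fresh object per iteration.
        List<Row> distinct = new ArrayList<>();
        for (int v : resultSet) {
            distinct.add(new Row(v)); // each element keeps its own value
        }
        System.out.println(distinct.get(0).value + " " + distinct.get(1).value); // 10 20
    }
}
```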
I've got a text file, with about 200,000 lines. Each line represents an object with multiple properties. I only search through one of the properties (the unique ID) of the objects. If the unique ID I'm looking for is the same as the current object's unique ID, I'm gonna read the rest of the object's values.
Right now, each time I search for an object, I just read the whole text file line by line, create an object for each line and see if it's the object I'm looking for - which is basically the most inefficient way to do the search. I would like to read all those objects into memory, so I can later search through them more efficiently.
The question is, what's the most efficient way to perform such a search? Is a 200,000-entries NSArray a good way to do this (I doubt it)? How about an NSSet? With an NSSet, is it possible to only search for one property of the objects?
Thanks for any help!
-- Ry
@yngvedh is correct in that an NSDictionary has O(1) lookup time (as is expected for a map structure). However, after doing some testing, you can see that NSSet also has O(1) lookup time. Here's the basic test I did to come up with that: http://pastie.org/933070
Basically, I create 1,000,000 strings, then time how long it takes me to retrieve 100,000 random ones from both the dictionary and the set. When I run this a few times, the set actually appears to be faster...
dict lookup: 0.174897
set lookup: 0.166058
---------------------
dict lookup: 0.171486
set lookup: 0.165325
---------------------
dict lookup: 0.170934
set lookup: 0.164638
---------------------
dict lookup: 0.172619
set lookup: 0.172966
In your particular case, I'm not sure either of these will be what you want. You say that you want all of these objects in memory, but do you really need them all, or do you just need a few of them? If it's the latter, then I would probably read through the file and create an object-ID-to-file-offset mapping (i.e., remember where each object ID is in the file). Then you could look up which ones you want and use the file offset to jump to the right spot in the file, parse that line, and move on. This is a job for NSFileHandle.
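The offset-index idea can be sketched in Java (standing in for NSFileHandle; the `id,rest` line format and the file contents are assumptions for the demo):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// One pass over the file records the byte offset where each line starts,
// keyed by the line's ID field. A lookup then seeks straight to that line
// instead of rescanning the whole file.
public class OffsetIndex {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("objects", ".txt");
        Files.writeString(file, "a1,alpha\nb2,beta\nc3,gamma\n");

        Map<String, Long> index = new HashMap<>();
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            // Pass 1: map each ID to the offset of its line.
            long offset = raf.getFilePointer();
            String line;
            while ((line = raf.readLine()) != null) {
                String id = line.substring(0, line.indexOf(','));
                index.put(id, offset);
                offset = raf.getFilePointer();
            }

            // Lookup: jump to the stored offset and parse only that line.
            raf.seek(index.get("b2"));
            System.out.println(raf.readLine()); // b2,beta
        }
    }
}
```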
Use NSDictionary to map from IDs to objects. That is, use the ID as the key and the object as the value. NSDictionary is the only collection class which supports efficient key lookup (or key lookup at all).
Dictionaries are a different kind of collection than the other collection classes. It is an associative collection (maps IDs to objects in your case) whereas the others are simply containers for multiple objects. NSSet holds unordered unique objects and NSArray holds ordered objects (may hold duplicates).
UPDATE:
To avoid reallocations as you read the entries, use the dictionaryWithCapacity: method. If you know the (approximate) number of entries prior to reading them you can use it to preallocate a big enough dictionary.
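The same pattern in Java terms, with `HashMap` standing in for NSDictionary (the `Item` class and its fields are illustrative, not taken from the question):

```java
import java.util.HashMap;
import java.util.Map;

// Map each object's unique ID to the object itself, so lookup is O(1)
// average case instead of a linear scan over 200,000 entries.
public class LookupDemo {
    static class Item {
        final String id;
        final String payload;
        Item(String id, String payload) { this.id = id; this.payload = payload; }
    }

    public static void main(String[] args) {
        // Pre-sizing the map is the analogue of dictionaryWithCapacity:,
        // avoiding rehashes while loading a known number of entries.
        Map<String, Item> byId = new HashMap<>(200_000);

        byId.put("u42", new Item("u42", "some data"));
        byId.put("u7",  new Item("u7",  "other data"));

        // Constant-time lookup by key, instead of scanning every entry.
        System.out.println(byId.get("u42").payload); // some data
    }
}
```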
200,000 objects sounds like you might run into memory constraints, depending on the size of the objects and your target environment. One other thing you may want to consider is converting the data into an SQLite database and indexing the columns you want to look up on. This would provide a good compromise between efficiency and resource consumption, since you would not have to load the full set into memory.