I've found that when the ido completion list contains hundreds of items, it becomes too slow to offer completion suggestions.
I want to keep ido's usual fast response. Any suggestions on how to fix this?
I use ido to complete Unicode character names, of which there are upwards of 30,000. Ido was quite slow until I set ido-enable-flex-matching to nil for this single operation, and it immediately became essentially as fast as for any other matching operation. Maybe this tactic can help you, too.
In a nutshell, I did this:
(let* (;; Flex matching is what makes ido crawl on very large
       ;; candidate lists, so disable it for this one call only.
       (ido-enable-flex-matching nil)
       (input (ido-completing-read prompt (ucs-names))))
  ;; ...
  )
I want to make a tool similar to zerofree for Linux. I want to do it by allocating a big file without zeroing it, looking for nonzero blocks, and rewriting them.
With admin privileges this is possible; uTorrent can do it: http://www.netcheif.com/Articles/uTorrent/html/AppendixA_02_12.html#diskio.no_zero , but it's closed source.
I am not sure this answers your question (need), but such a tool already exists. You might have a look at fsutil.exe, the Fsutil command-line tool. This tool has huge potential for exploring the internal structures of NTFS files and can also create a file of any size (without the need to zero it manually). Hope that helps.
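For example (the path and byte count are placeholders), creating a 1 GB file this way is nearly instant, because NTFS just tracks the valid-data length instead of physically writing zeros:

fsutil file createnew C:\temp\bigfile.bin 1073741824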
I wrote a tool: https://github.com/basinilya/winzerofree . It uses SetFileValidData() as @RaymondChen suggested.
You should try SetFilePointerEx. The documentation notes:
Note that it is not an error to set the file pointer to a position beyond the end of the file.
So after you create the file, call SetFilePointerEx, then SetEndOfFile (or WriteFile/WriteFileEx), and close the file; the size should be increased.
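A minimal C sketch of that sequence (the path and size are placeholders, and error handling is abbreviated):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Hypothetical path and size, for illustration only.
    HANDLE h = CreateFileW(L"C:\\temp\\bigfile.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }

    LARGE_INTEGER size;
    size.QuadPart = 1LL << 30;  // 1 GiB

    // Moving the pointer past EOF is not an error; SetEndOfFile then
    // extends the file to that position without writing any data.
    if (!SetFilePointerEx(h, size, NULL, FILE_BEGIN) || !SetEndOfFile(h)) {
        fprintf(stderr, "extend failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    CloseHandle(h);
    return 0;
}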
EDIT
Raymond's suggested SetFileValidData is also a good solution, but it requires a privilege, so it shouldn't be used casually by just anyone.
My solution works best on NTFS, because NTFS supports a feature known as the initialized size: after using SetFilePointerEx the data won't actually be initialized to zeros, but any attempt to read the uninitialized data will return zeros.
To sum up: if NTFS, use SetFilePointerEx; if FAT (not very likely), use SetFileValidData.
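For completeness, a hedged sketch of the SetFileValidData() route (path and size are placeholders; the process token must hold SeManageVolumePrivilege, which normally means running elevated):

#include <windows.h>

// Enable SeManageVolumePrivilege on the current process token;
// without it SetFileValidData() fails with ERROR_PRIVILEGE_NOT_HELD.
static BOOL enable_manage_volume(void)
{
    HANDLE tok;
    TOKEN_PRIVILEGES tp = { 1 };
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &tok))
        return FALSE;
    LookupPrivilegeValueW(NULL, L"SeManageVolumePrivilege",
                          &tp.Privileges[0].Luid);
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    BOOL ok = AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL)
              && GetLastError() == ERROR_SUCCESS;
    CloseHandle(tok);
    return ok;
}

int main(void)
{
    if (!enable_manage_volume()) return 1;  // typically needs elevation

    HANDLE h = CreateFileW(L"C:\\temp\\bigfile.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER size;
    size.QuadPart = 1LL << 30;              // 1 GiB, as in the sketch above
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);

    // Skips the zero-fill entirely: whatever was on disk in those
    // clusters becomes readable, which is exactly the security caveat.
    SetFileValidData(h, size.QuadPart);
    CloseHandle(h);
    return 0;
}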
This is so wrong.
I want to perform a large copy operation; moving 250 GB from my laptop hard drive to an external drive.
OS X Lion claims this will take about five hours.
After a couple of hours of chugging, it reports that one particular file could not be copied (for whatever reason; I cannot remember and I don't have the patience to repeat the experiment at the moment).
And on that note it bails.
I am frankly left aghast.
That this problem persists in this day and age is to me scarcely believable. I remember hitting up against the same scenario 20 years back with Windows 3.1.
How hard would it be for the folks at Apple (or Microsoft, for that matter) to implement file copying in such a way that it skips over failures, writing a list of failed operations on the fly to stderr? And how much more useful would that implementation be? (Both these questions are rhetorical, by the way; they're simply an expression of my utter bewilderment. Please don't answer them except by means of comments or supplements to an answer to the actual question, which follows.)
More to the point (and this is my actual question), how can I implement this myself in OS X?
PS I'm open to all solutions here: programmatic / scripting / third-party software
I hear and understand your rant, but this is bordering on being a SuperUser-type question rather than a programming question (saved only by the fact that you said you would like to implement this yourself).
From the description, it sounds like the Finder bailed when it couldn't copy one particular file (my guess is that it was looking for admin and/or root permission for some privileged folder).
For massive copies like this, you can use the Terminal command line:
e.g.
cp
or
sudo cp
with options like "-R" (which continues copying even if errors are detected -- unless you're using "legacy" mode) or "-n" (don't copy if the file already exists at the destination). You can see all the possible options by typing in "man cp" at the Terminal command line.
If you really wanted to do this programmatically, there are options in NSWorkspace (the performFileOperation:source:destination:files:tag: method; documentation linked for you, look at the NSWorkspaceCopyOperation constant). You can also do more low-level stuff via NSFileManager and its copyItemAtPath:toPath:error: method, but that's really getting into brute-force approaches.
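If you do want to roll it yourself, here is a minimal sketch in plain C using only POSIX calls (so it builds on OS X as-is); the source and destination roots are placeholders, and symlinks, ownership, and resource forks are deliberately glossed over. It walks the tree, copies what it can, and writes a SKIP line to stderr for anything that fails, without ever aborting:

#include <errno.h>
#include <fcntl.h>
#include <ftw.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

// Hypothetical source and destination roots, for illustration only.
static const char *g_src = "/Users/me/Documents";
static const char *g_dst = "/Volumes/External/Backup";

static int copy_entry(const char *path, const struct stat *sb,
                      int typeflag, struct FTW *ftwbuf)
{
    (void)ftwbuf;
    char dst[4096];
    snprintf(dst, sizeof dst, "%s%s", g_dst, path + strlen(g_src));

    if (typeflag == FTW_D) {
        // Recreate the directory; tolerate "already exists".
        if (mkdir(dst, sb->st_mode) != 0 && errno != EEXIST)
            fprintf(stderr, "SKIP dir  %s: %s\n", path, strerror(errno));
        return 0;                         // keep walking regardless
    }
    if (typeflag != FTW_F) {
        fprintf(stderr, "SKIP other %s\n", path);
        return 0;
    }

    int in = open(path, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, sb->st_mode);
    if (in < 0 || out < 0) {
        fprintf(stderr, "SKIP file %s: %s\n", path, strerror(errno));
    } else {
        char buf[1 << 16];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, (size_t)n) != n) { n = -1; break; }
        if (n < 0)
            fprintf(stderr, "SKIP file %s: %s\n", path, strerror(errno));
    }
    if (in >= 0)  close(in);
    if (out >= 0) close(out);
    return 0;                             // never abort the whole walk
}

int main(void)
{
    // nftw visits a directory before its contents, so destination
    // directories exist before the files inside them are copied.
    return nftw(g_src, copy_entry, 32, FTW_PHYS);
}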
I want to make a console display with printf, where I periodically get input on 3 channels, and I want to print lines like:
Channel1 Last_message_1
Channel2 Last_message_2
Channel3 Last_message_3
and when a new message comes in on channel 2, I want to overwrite that part of the console, like:
Channel1 Last_message_1
Channel2 New_message_2
Channel3 Last_message_3
I know this sort of thing can be done with printf, but I don't remember how. Any pointers?
This post might be useful:
print to screen from c console application overwriting current line
in particular, answer #2 (not the selected answer)
As far as I know you can only change the last line with printf, and here you want to change any line, so I think you will need to look into ncurses.
You can't do this portably with printf. If your console supports it, you can send it ANSI control codes to position the cursor -- but the ANSI control codes are fairly clumsy, and quite a few "consoles" just don't support them, in which case you'll get a lot of extra garbage with the data you're trying to produce.
That leaves using something that's at least theoretically non-portable. If portability still matters, my immediate choice among those would probably be ncurses -- it has a reasonably decent design, is fairly easy to use, and is reasonably portable.
If I were sure portability didn't matter at all and I were writing (for example) purely for Windows, it'd be worth considering the native console functions. It's arguable that this is rarely a good tradeoff, though -- you lose all portability and gain only a little speed, etc.
printf ( "\033[2;1H"); // move to 2nd line
I am trying to obtain the CPU utilization of each of the (up to 200) threads in my (Delphi XE) application. To prepare for this I pass to PdhExpandWildCardPath a string '\Thread(myappname/*)\% Processor Time'. However (on Win7/64) the buffer returned from this function returns a string for every thread running in the system, in other words it seems to have treated the input as if it were '\Thread(*/*)\% Processor Time'. This was unexpected. The same happens when I subsequently expand a string to get 'ID Thread'.
Obviously I can filter the resulting strings on the application name and only add the counters I need, but this requires many hundreds of substring scans. Have I misinterpreted how the wildcards work?
Late, but I've hit the same wall, maybe someone else needs it:
Here it is: '\Thread(myappname*)\% Processor Time'
Especially useful with ProcessNameFormat set to 2 and ThreadNameFormat set to 2 in 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PerfProc\Performance'
For ProcessNameFormat = 2, see the link; the same applies to ThreadNameFormat, although I couldn't find any kind of documentation for it.
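To illustrate, a hedged C sketch of expanding that wildcard ("myappname" stands in for your EXE's base name; the same PdhExpandWildCardPathW call can be made from Delphi via the pdh.dll imports):

#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>
#pragma comment(lib, "pdh.lib")   // MSVC; otherwise link pdh.lib manually

int main(void)
{
    const wchar_t *wild = L"\\Thread(myappname*)\\% Processor Time";
    DWORD len = 0;

    // First call with a NULL buffer just reports the required length.
    PDH_STATUS st = PdhExpandWildCardPathW(NULL, wild, NULL, &len, 0);
    if (st != PDH_MORE_DATA) {
        fprintf(stderr, "PdhExpandWildCardPathW: 0x%lx\n", (unsigned long)st);
        return 1;
    }

    wchar_t *paths = calloc(len, sizeof(wchar_t));
    st = PdhExpandWildCardPathW(NULL, wild, paths, &len, 0);
    if (st != ERROR_SUCCESS) {
        fprintf(stderr, "PdhExpandWildCardPathW: 0x%lx\n", (unsigned long)st);
        free(paths);
        return 1;
    }

    // The result is a double-NUL-terminated list (MULTI_SZ): one fully
    // expanded counter path per thread of the named process.
    for (const wchar_t *p = paths; *p != L'\0'; p += wcslen(p) + 1)
        wprintf(L"%ls\n", p);

    free(paths);
    return 0;
}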
I am using VB6 SP6
This code has worked correctly for years, but I am now having a problem on a Win7-to-Win7 network. It also works correctly on an XP-to-Win7 network.
Open file For Random As ChannelNum Len = 90
' the file is on the other computer on the network
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
' (MyAcFile is a UDT less than 90 bytes long)
' ... other code that does not reference the file or RecNum ...
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
Close ChannelNum
The second record overwrites the first.
We had a similar problem in the past with OpportunisticLocking, so we turn that off at install, along with some other keys that cause data errors on Windows networks.
However, we have had no problems like this for years, so I think Microsoft has added some new "better" option that they think will "improve" networking.
Thanks for your help
I doubt there is any "bug" here except in your approach. The file metadata that LOF() interrogates is not meant to be updated immediately by simple writes. A delay seems like a silly idea: it is prone to occasional failure unless a very long delay is used, and it saps performance at best. Even close/reopen can be iffy: VB6's Close statement is an async operation. That's why the Reset statement exists.
This is also why things like FlushFileBuffers() and SetEndOfFile() exist at the API level. They are also relatively expensive operations from a performance standpoint.
Track your records yourself. Only rely on LOF() if necessary after you first open the file.
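For reference, a minimal C sketch of the API-level flush mentioned above (the UNC path and 90-byte record are placeholders mirroring the question; from VB6 you would Declare and call the same Win32 functions):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Hypothetical UNC path to the shared file.
    HANDLE h = CreateFileW(L"\\\\server\\share\\records.dat",
                           GENERIC_WRITE, FILE_SHARE_READ, NULL,
                           OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    char record[90] = "example record";
    DWORD written = 0;

    // FlushFileBuffers forces the written data (and the metadata a
    // later size query reads) out of the cache -- at a real cost.
    if (!WriteFile(h, record, sizeof record, &written, NULL) ||
        !FlushFileBuffers(h))
        fprintf(stderr, "write/flush failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}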
Hmmm... is file (as per the Open statement at the top of the code sample) a UNC filename, or something like X:\ where X is a mapped drive? And are you incrementing RecNum? Judging by the code, RecNum is unchanged, and hence the second Put appears to overwrite the first record... Sorry for sounding, um, no pun intended, basic... It would help to show some more code here.
Hope this helps,
Best regards,
Tom.
It can be just a timing issue. In some runs your LOF() call returns more up-to-date information than in others. The file system API is asynchronous; for example, when a write function is called, the increased size is not reflected immediately.
In short: your code has exposed an old bug, which is just easier to reproduce on Windows 7.
The cheapest way to fix the bug: you may decide to add a delay (it may need to be a significant delay, say 5 seconds).
A more elaborate fix is to force the size update by closing and reopening the file.