Getting a "No hashes loaded" error in hashcat on Windows

I'm a beginner in cryptography and I was trying to crack a list of MD5 hashes using hashcat 6.2.5.
The problems I faced were:
my cmd didn't recognize hashcat64.exe as a command, but it accepted hashcat.exe
my text files don't visually show the .txt extension, but they are indeed .txt files when checked via Properties or the full path
when cracking the list of hashes, I used the -m 0 and -a 0 options, but execution gave the "Token length exception / No hashes loaded" error
How can I resolve this issue?
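For reference, the invocation I am attempting looks roughly like this (the file and wordlist names are placeholders); each line of hashes.txt is meant to hold a single 32-character hexadecimal MD5:

hashes.txt:
8743b52063cd84097a65d1633f5c74f5
0cc175b9c0f1b6a831c399e269772661

Command:
hashcat.exe -m 0 -a 0 hashes.txt wordlist.txt

From what I have read so far, the Token length exception usually means a line in the hash file does not parse as a plain 32-character MD5, for example because of a byte-order mark, UTF-16 encoding, trailing whitespace, or a user:hash prefix (which would need the --username option), but I have not been able to confirm which of these applies in my case.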

Related

ASC2EPH ASCII to Binary conversion

Trying to run ASC2EPH.exe to convert the linked ASCII DE405 files to a binary JPLEPH file. I am getting an error when running ASC2EPH < infile.405 in CMD (see error code). Any help appreciated.
I have run this with Simply Fortran and Force v3, both with the same result. NRECL is set to 1 for PC.

how to determine if a file is completely downloaded using kqueue?

I want to implement a function which monitors a directory and performs some action when a new file is downloaded from the Internet, but I find it difficult to determine whether the file has been completely downloaded. Is there a way to do that?
A hash reflects the exact state of a file: compare the hash of the downloaded copy with the hash of the original, and if they are identical you know the file has downloaded successfully.
md5 (native to BSD) is available, but it can only be run on a local file.
If you are retrieving the remote file via HTTP, there is no way to get its hash without downloading it first (whether to STDOUT or piped to a file, using wget -O- or curl).
If the file host publishes a second file containing the MD5 hash of the file being downloaded, then the hash of the local download can be compared against the hash provided by the file provider.
To do anything more sophisticated requires a comprehensive program to be written, such as the combination of this question and its accepted answer:
Python Compare local and remote file MD5 Hash
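As a minimal sketch of that comparison (the URL and file names are illustrative, and it assumes the provider's .md5 file contains only the digest):

# fetch the file and the published digest
curl -O https://example.com/file.dmg
curl -O https://example.com/file.dmg.md5

# BSD md5 with -q prints only the digest (use md5sum on Linux)
local_sum=$(md5 -q file.dmg)
remote_sum=$(cat file.dmg.md5)

if [ "$local_sum" = "$remote_sum" ]; then
    echo "download verified"
else
    echo "hash mismatch - download incomplete or corrupted"
fi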
Besides MD5, there is a simpler way to do this:
A partially downloaded file usually has a temporary filename and is renamed to its original filename once the download completes. You can make your program ignore, or monitor only, certain filename extensions, as in the sketch below.
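A rough shell sketch of that idea (the extensions and polling interval are illustrative; a kqueue-based watcher would perform the equivalent checks when it receives a directory event):

#!/bin/sh
f="$1"

# skip files that still carry a known temporary extension
case "$f" in
    *.part|*.crdownload|*.download) exit 0 ;;
esac

# crude fallback: wait until the size stops changing (BSD stat syntax)
prev=-1
size=$(stat -f %z "$f")
while [ "$size" != "$prev" ]; do
    prev=$size
    sleep 2
    size=$(stat -f %z "$f")
done
echo "$f appears to be fully downloaded"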

Basic Usage of generate_appcast tool of Sparkle Updater

Since macOS 11.3 broke my Perl script which I have been using to generate Sparkle appcasts for the last 12 years, I decided to instead start using the generate_appcast tool which has since been provided with Sparkle. Invoking generate_appcast with no arguments, I get some brief documentation which I interpret to mean that I should provide two arguments:
a -f followed by the path to my Sparkle private key file
the path to a directory of several recent versions of my app, all zipped
So I created a new directory and copied zip archives of the three most recent versions of my app into it. Those are the .zip archives, notarized by Apple, which I upload to my site for users to download.
Then I ran this command:
Air2:~ jk$ generate_appcast -f /path/to/My_Sparkle_priv.pem /path/to/directory/of/zips
The result:
Warning: Private key not found in the Keychain (-25300). Please run the generate_keys tool
Error generating appcast from directory /path/to/My_Sparkle_priv.pem
Error Domain=NSCocoaErrorDomain Code=256 "The file “My_Sparkle_priv.pem” couldn’t be opened." UserInfo={NSUserStringVariant=(
Folder
), NSURL=file:///path/to/My_Sparkle_priv.pem/, NSFilePath=/path/to/My_Sparkle_priv.pem, NSUnderlyingError=0x13a637e10 {Error Domain=NSPOSIXErrorDomain Code=20 "Not a directory"}}
Apparently it is not recognizing the key file I provided, and also oddly implies that it expects a directory instead of a regular file. In the brief documentation, there is an example marked [DEPRECATED] which omits the -f before the path to the key file, so I tried that but got the same result. I also tried putting the path to the zips first, but that result was even worse.
My key file is, I think, a pretty standard .pem text file that begins with the line -----BEGIN DSA PRIVATE KEY----- followed by 1133 ASCII characters, etc.
Where did I miss the boat?
Astonishingly, this seems to be due to an obvious programming error in the Sparkle generate_appcast Swift source code. In attempting to remove the elements at indices N and N+1 from an array of command-line arguments, the code removes element N and then removes element N+1, which of course removes the original elements N and N+2 instead. After I fixed this programming error, the problem was solved.
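A minimal sketch of that kind of index-shift bug, with illustrative values rather than the actual Sparkle source:

var args = ["generate_appcast", "-f", "/path/to/key.pem", "/path/to/archives"]
let n = 1    // index of "-f"

// Buggy order: after the first removal everything shifts left one place,
// so the second call removes what was originally at index n + 2.
args.remove(at: n)        // removes "-f"
args.remove(at: n + 1)    // meant to remove the key path, actually removes "/path/to/archives"

// Correct: remove the higher index first (or call remove(at: n) twice).
// args.remove(at: n + 1)
// args.remove(at: n)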
After I do some more head-scratching and maybe consulting with others smarter than me, I shall submit a pull request or whatever to the Sparkle project next week.

Reinstalling packages from a list generated by command: ado dir

I am recovering Stata following a Windows upgrade. I have a list of my packages generated from ado dir in the following format:
[1] package mdesc from http://fmwww.bc.edu/RePEc/bocode/m
'MDESC': module to tabulate prevalence of missing values
[2] package univar from http://fmwww.bc.edu/RePEc/bocode/u
'UNIVAR': module to generate univariate summary with box-and-whiskers plot
[3] package tabmiss from http://www.ats.ucla.edu/stat/stata/ado/analysis
tabmiss. Shows tabulation of number of missing and non-missing values
I have many packages and would like to reinstall them without having to specify each directory/URL via net cd. While using net cd together with net install, or ssc install with package names, in a loop is trivial (as below), it would seem that an automated method for this task should be available.
net cd http://www.ats.ucla.edu/stat/stata/ado/analysis
local ucla tabmiss csgof powerlog ldfbeta
foreach x of local ucla {
    net install `x'
}
To my knowledge, there is no built-in or automated method of tracking and managing your installed packages outside of what is available through ado or net.
I would also tend to agree with @Nick Cox that this task seems strange, and I can't imagine how a new Stata install or reinstall could know what was installed previously, but I find the question interesting for other reasons.
The main reason being for users who have Stata installed on multiple machines who need the same packages on both machines. I faced a similar issue when I purchased a new computer and installed Stata but wanted all of the packages I use to be available as well. Outside of moving the ado directory or selected contents I'm not aware of any quick solution.
Here it would be possible to use the output of ado dir on one machine to determine what you need to install on a second machine with a new Stata install.
The method you propose using a foreach loop could save you time over typing in or copy/pasting a lot of package names and URLs. At the same time, however, this is only beneficial if you have many packages from only a few repositories, because you will need to net cd to each URL, as you show in your example.
An alternative solution is the programmatic solution. As you know, ado dir will list each installed package, the URL and a short description of the package. Using this, a log file, and the built in I/O functionality, a short program could be written to automate the process and dynamically build a do file that contains the commands to install the already installed packages.
The code below generates a do file containing commands (in this case, net describe package, from(url)) for each package I have installed on my computer.
clear *

* capture the output of -ado dir- in a temporary text log
tempfile log1
log using "`log1'", text name(mylog)
ado dir
log close mylog

* read the log back line by line
tempname logfile
file open `logfile' using "`log1'", read
file read `logfile' line

* do file that will hold the dynamically built commands
file open dfh using "path/to/your/dofile.do", write replace

local pckage "package"

while r(eof) == 0 {
    if `: list pckage in line' {
        local packageName : word 3 of `line'
        local dirName : word 5 of `line'
        di "`packageName' `dirName'"
        file write dfh "net describe `packageName', from(`dirName')"
        file write dfh _newline
    }
    file read `logfile' line
}

file close `logfile'
file close dfh
In the above code, I create a temp file to write a .txt log file to and store the contents of ado dir in that file.
Then, I open the log file using file open and read it line by line in the while loop.
Above the loop, I'm creating a do file at /path/to/your/dofile.do to hold the output of the loop - the dynamically created commands relating to the installed packages on my machine.
The loop will iterate so long as r(eof) = 0, where r(eof) is an end of file marker. I use an if statement to sort out lines of the log file which contain the word package, as I'm only interested in those lines with the package name and URL in them.
Inside of the if block, I parse the local macro line to pull the package name and the URL/directory name.
This is important: this section of code assumes that the 3rd and 5th words of each line will always be the package name and the URL, respectively. Confirm this from the output of ado dir before executing.
You will also need to change the command that is being written to the file handle dfh inside of the loop to what you want (net install, etc) when you are ready to execute.
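For example, with the three packages listed in the question and net install substituted for net describe, the generated do file would contain lines like:

net install mdesc, from(http://fmwww.bc.edu/RePEc/bocode/m)
net install univar, from(http://fmwww.bc.edu/RePEc/bocode/u)
net install tabmiss, from(http://www.ats.ucla.edu/stat/stata/ado/analysis)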
For more help on using file, locals, and tempfiles execute any of the following in Stata:
help file
help extended_fcn
help macrolists
There may be nicer ways to parse the contents of ado dir but this has worked for me. And of course I'd always advise that you take the time to understand what the code is doing so that you can make any necessary tweaks to fit your particular situation.

How to overcome the limit when linking multiple object files in a makefile

I am new to the makefile concept, so could anyone please give me an example of how to overcome the problem of linking too many object files? I am getting the error "fatal error U1095" in my makefile.
Assuming Windows (where the command line that NMAKE can pass to a tool is limited in length):
I suggest you use response files for LINK.EXE and/or CL.EXE, etc.
LINK.EXE #response.tmp
You can store all of the command-line parameters in that text file without hitting the command-line length limit.
Update: MSDN calls them Command Files.
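As a rough NMAKE sketch (the target and object names are illustrative), an inline file lets NMAKE generate the response file for you, so the object list never has to appear on the command line itself:

# many object files; listing them directly on the link line would overflow the limit
OBJS = main.obj parser.obj util.obj

app.exe: $(OBJS)
    link /OUT:app.exe @<<
$(OBJS)
<<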
