Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 6 years ago.
Wondering if I am overlooking something obvious.
I am trying to use
wget -rl 0 -A "*.fna.gz" ftp://ftp.ncbi.nlm.nih.gov/genomes/genbank/bacteria/Acinetobacter_nosocomialis/all_assembly_versions
to download every file matching *.fna.gz from all the directories under ftp://ftp.ncbi.nlm.nih.gov/genomes/genbank/bacteria/Acinetobacter_nosocomialis/all_assembly_versions/.
If you visit the link above, you will see a list of directories whose names start with GCA. I want all the files in those directories that match *.fna.gz, but the command downloads nothing. Is wget failing to recognize the GCA* entries as directories, or is there something wrong with my wget command?
I am suspicious because when I try to download the directories with FileZilla I get:
GCA_000248315.2_ASM24831v2: Not a regular file
Error: Critical file transfer error
These are not directories but links to somewhere else. There is no information in the file listing which gives the type of the target file, i.e. if directory or plain file or whatever. Thus wget will probably assume plain file and not follow it.
Apparently this isn't working as expected because of a bug on the server, which displays symbolic links to directories as if they were ordinary files. Thus, as @Steffen Ullrich mentioned, "There is no information in the file listing which gives the type of the target file, i.e. if directory or plain file or whatever. Thus wget will probably assume plain file and not follow it." Thanks to codesquid_ on the FileZilla IRC for the clarification.
Follow-up question regarding a workaround at https://stackoverflow.com/questions/35307325/recursive-wget-cant-get-files-within-symbolic-link-directories
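One way around symlinked directories that the listing reports as plain files is to fetch the listing yourself, pick out the GCA_* names, and point wget at each one explicitly. A minimal sketch — the listing format and the live-connection loop at the end are assumptions, and the sample listing below stands in for the real server response:

```shell
#!/bin/sh
# Extract GCA_* entry names from a long-format FTP listing.
# Field 9 of "ls -l"-style output is the entry name; for symlinks the
# "-> target" part comes after it, so $9 is still the name.
list_gca_dirs() {
    awk '{print $9}' | grep '^GCA_'
}

# Abridged stand-in for the server's listing (format is an assumption):
sample='lrwxrwxrwx 1 ftp ftp  10 Jan 01  2016 GCA_000248315.2_ASM24831v2 -> ../../some/target
-rw-r--r-- 1 ftp ftp 512 Jan 01  2016 README.txt'

printf '%s\n' "$sample" | list_gca_dirs

# With a live connection you would then descend into each name explicitly:
#   BASE='ftp://ftp.ncbi.nlm.nih.gov/genomes/genbank/bacteria/Acinetobacter_nosocomialis/all_assembly_versions'
#   for d in $(curl -s "$BASE/" | list_gca_dirs); do
#       wget -r -l 1 -A "*.fna.gz" "$BASE/$d/"
#   done
```

Because each subdirectory is named on the command line, wget never has to decide whether the symlink is a directory.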
Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed last month.
Ayo,
I'm using FileZilla over FTP and recently noticed that a few of my old directories are still on the server. I'm unable to delete those old directories because of one single file inside; whenever I try to delete it, the server responds: 550 Forbidden filename
Does anyone know how to fix that?
thanks!
There are a couple of potential workarounds for this; I believe the error can be caused by "invisible" characters in the filename.
First, try to rename the file (ideally in a shell: mv 3.jpg deleteme.jpg), then delete the renamed file.
Second, you can try creating a PHP file in the directory that contains the problem file, with this code:
<?php
unlink('3.jpg');
?>
Then load the PHP file in a browser, or execute it in a shell:
> php myphpfile.php
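If shell access is available, the rename trick works without PHP: match the problem name with a glob so the invisible character never has to be typed. A sketch — the simulated filename is an assumption; here a hidden tab stands in for whatever character the real file contains:

```shell
#!/bin/sh
# Simulate a filename with an invisible character (a tab before ".jpg").
touch "$(printf '3\t.jpg')"

# Rename via a glob match so we never type the hidden character, then delete.
for f in 3*.jpg; do
    mv "$f" deleteme.jpg
done
rm deleteme.jpg
```

The same idea is why the PHP unlink works: the script names the file programmatically instead of relying on what you can type into the client.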
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 1 year ago.
Can anyone explain this to me? As the screenshot shows, I have some .csv files in the current directory. The commands dir .csv and dir ".csv" do not work. However, dir **.csv, dir ?*.csv, and dir *.csv all work; each of them lists the files I am looking for. Why?
In case you are unfamiliar with the aliasing dxiv is talking about: aliasing in PowerShell is essentially giving a command a nickname. There is no command named dir in PowerShell; it is only an alias for the command Get-ChildItem.
The documentation for that command is here:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.1
If you read through it, you'll see that you need to specify a wildcard as a stand-in for the filename before the extension.
Generally speaking, the reason you need wildcards for things is to tell the language you're using that there should be something before it. By typing .csv you are searching for files that are literally called '.csv'. No more, no less. The wildcard in *.csv says that it should look for anything ending with '.csv'.
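The same rule holds in any glob-based shell, not just PowerShell. A quick sketch (the directory and file names are made up for the demo):

```shell
#!/bin/sh
# ".csv" matches only a file literally named ".csv";
# "*.csv" matches any name that ends in ".csv".
mkdir -p glob-demo && cd glob-demo
touch a.csv b.csv

ls *.csv                                  # lists a.csv and b.csv
ls .csv 2>/dev/null || echo "no file literally named .csv"
```

In both shells the wildcard is expanded against actual filenames, so a pattern with no `*` or `?` can only ever match one exact name.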
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
For work I had a task that required recursively copying the files of a nested directory, and I discovered the forfiles command in the Windows cmd shell.
It worked properly, and now I wonder: how does the command distinguish between a file and a directory?
If every file had an extension like .jpg, .png, or .xls, I could understand it, but some of my files have no extension, and it still did its job.
As I'm used to Linux, I tried to google the source code, but Windows applications aren't open source. If anybody can explain how it works, it would be very interesting to know.
PS: Why did this get downvoted? It's a general question.
The command will eventually call the Windows FindFirstFile/FindNextFile functions. Those return a WIN32_FIND_DATA structure which may contain a FILE_ATTRIBUTE_DIRECTORY flag. If that flag is not set, it's a file.
Internally there is quite a difference between a file and a directory, and it's no surprise that typical file/directory handling commands know about this. The fact that a file doesn't have an extension (or that a directory is called "directory.jpeg") does not cause any confusion within those commands.
If you check forfiles' "man page" (forfiles /?), you might see that the /C switch gives you access to the #isdir variable, which can tell you the difference: are you dealing with a directory (value:TRUE) or a file (value:FALSE)?
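The same distinction is visible from a POSIX shell, where test -d and test -f read the same kind of metadata flag; the name and extension play no part. A sketch (the names below are made up for the demo):

```shell
#!/bin/sh
# The "is a directory" bit lives in the entry's metadata, not its name.
mkdir -p directory.jpeg     # a directory with a file-looking name
touch plainfile             # a regular file with no extension

[ -d directory.jpeg ] && echo "directory.jpeg is a directory"
[ -f plainfile ]      && echo "plainfile is a regular file"
```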
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
I was editing an emacs file abc, and prior to saving, had a crash. There is now a file .#abc, and I would like to find out what is in that file, to perhaps recover what I was working on.
I know the file is there because when I type
ls -a
it lists
.#abc
However, when I type
more ".#abc"
or simply
more .#abc
I get the error
.#abc: No such file or directory
The same error occurs with cp in place of more.
How do I see what is in that file? Why does ls list it and then other commands can't find the file?
(Is .#abc actually an alias file? If so, how would I know that? And how, nevertheless, do I see the content of it, even if this is only what it is an alias to?)
[Note: I do not want to use emacs to try to find out what is in the file or restore it, because the situation is somewhat more complicated than described: the above is all occurring inside a Time Machine backup, which I need to access because of an emacs autosave overwrite problem on the primary file. I don't want to have the same problem occur on the backup of the autosave file!]
This is all on Mac OS X 10.8.4.
Whereas backup files end with a tilde ~ and auto-save files are wrapped in number signs (#abc#), lock files start with a dot and number sign .#. A lock file is actually a dangling symbolic link whose target encodes user@host.pid, which is why ls lists it but more and cp report "No such file or directory": they try to follow the link to a target that doesn't exist. It holds no content of its own, so your unsaved edits are not in it; look for an auto-save file #abc# instead:
http://www.gnu.org/software/emacs/manual/html_node/elisp/File-Locks.html
Creation of lock-files can be disabled with the following setting:
(setq create-lockfiles nil)
https://stackoverflow.com/a/12974060/2112489
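You can see the mechanism for yourself by creating a dangling symlink of the same shape. This is a simulation, not a real Emacs lock; the user@host.pid target below is an assumption about the format:

```shell
#!/bin/sh
# Simulate an Emacs lock "file": a symlink whose target encodes who holds
# the lock, pointing at a path that does not exist.
rm -f '.#abc'
ln -s 'user@host.12345' '.#abc'

ls '.#abc'                 # ls lists the link itself, so it appears to exist
readlink '.#abc'           # prints user@host.12345 -- the lock info
cat '.#abc' 2>/dev/null || echo "cannot open: target does not exist"
```

ls operates on the link entry itself, while more, cat, and cp dereference it first; the dereference fails, producing the confusing "No such file or directory".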
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
Hey guys, I'm using Ubuntu and I'm having an issue getting the ln command to find my files.
Currently I have a folder called myName.2; within it are three folders: notes, assignments and web.
Within notes I created three files called linux.txt, unix.txt and shell.txt. I have now changed into the folder web and want to create hard links to these files here, so for example I type the command
ln /home/admin/3000/Assignment1/myName.2/notes/linux.txt
however, the terminal tells me:
ln: accessing `/home/admin/3000/Assignment1/myName.2/notes/linux.txt': No such file or directory
I went to the properties of the linux.txt file and copied and pasted the path straight from there.
Any hints would be much appreciated, thanks!
It's easy to make typos, and simpler to use relative paths. Try:
ln ../notes/linux.txt
from inside the web directory.
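Putting it together, a self-contained sketch of the layout and the hard link (paths are simplified from the question):

```shell
#!/bin/sh
# Recreate a simplified version of the layout from the question.
mkdir -p myName.2/notes myName.2/web
echo 'linux notes' > myName.2/notes/linux.txt

cd myName.2/web
ln ../notes/linux.txt        # creates web/linux.txt, same inode as the original

[ linux.txt -ef ../notes/linux.txt ] && echo "same file (hard link)"
```

A relative path is resolved from your current directory, so it cannot silently point somewhere else the way a mistyped absolute path can; if ln still fails, the source path itself is wrong.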