I have created an image of my SD card containing Raspbian using the dd tool. This gave me a 16.09 GB image file. The actual data on there is about 5 GB, and I want to truncate the image so I can store/clone it onto an 8 GB SD card.
All the help I found required Linux to do this (tools like resize2fs don't appear to be available for OS X; at least, I can't find them with Homebrew).
What tool(s) can I use to remove the 'empty' space from my .img file?
If you just created it using dd, it is just a dump with no filesystem-specific interpretation, so you can just as easily use dd to get the first 6 GB or so:
dd if=Existing16GBimage.img of=New6GBimage.img bs=1m count=6000
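A blind size cutoff is only safe if everything in the image actually ends before that point. One way to check is to look at where the last partition ends (a sketch, assuming GNU fdisk, e.g. on a Linux box or from Homebrew's util-linux; the sector numbers below are made up):
fdisk -l Existing16GBimage.img
# Suppose it reports the last partition ending at sector 10485759.
# Sectors are 512 bytes, so copy (10485759 + 1) sectors:
dd if=Existing16GBimage.img of=New5GBimage.img bs=512 count=10485760
Note that bs=1m (lowercase) is the BSD dd syntax used on OS X; GNU dd spells it bs=1M.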
I have this shell (bash) command on Linux. I want to make this happen in a Windows .bat (CMD) script.
Linux command:
export STATUS="$(cat STATUS_FILE)-${STATUS_CODE:-0}"
STATUS_FILE contains the status of the current server, for example "running".
STATUS_CODE is the status code of the server. If it is given, it is greater than 1; if this environment variable is not set, the default is 0.
So the output and STATUS will be: running-0 or running-2
I know environment variables translate to %STATUS_CODE%, but how do I read the status file without the newline (one line of text, stripped if any) and assign it, together with the code, to a new variable called STATUS?
This worked for me for now (with a default of 0 added for when STATUS_CODE is not set):
set /p FSTATUS=<STATUS_FILE
if not defined STATUS_CODE set STATUS_CODE=0
set STATUS=%FSTATUS%-%STATUS_CODE%
Is there any way/tool in Windows 7 to see the total size of the files of a particular type? For example, if I have a directory with 5 files, where 2 files are .jpg, 1 is a .log file, and 2 are .docx files, it should report something like below:
jpg - 2 files - total size - 10 MB
log - 1 file - total size - 5 KB
docx - 2 files - total size - 45 MB
Is there a way to do this in Linux (e.g. some form of ls or grep)? If there is, it is probably supported by Cygwin.
In other words, you could install Cygwin and then run something like the 'find' command shown here: https://askubuntu.com/questions/558979/how-to-display-disk-usage-by-file-type.
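For example, something along these lines sums file count and size per extension (a sketch assuming the GNU find and awk that ship with Cygwin; file names containing '|' would break it):
find . -type f -printf '%s|%f\n' |
awk -F'|' '{
  n = split($2, parts, ".")
  ext = (n > 1) ? tolower(parts[n]) : "(none)"
  count[ext]++; bytes[ext] += $1
}
END {
  for (e in count)
    printf "%-10s %5d file(s) %14d bytes\n", e, count[e], bytes[e]
}'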
Also, if you put the Cygwin executable directory in your PATH environment variable, you can run all of the Cygwin commands from a Windows command prompt.
And if you just want a good way to see where all of your disk space is being used, there are a number of good tools for that. I personally like SpaceSniffer.
You can start a command window and use dir; its summary line reports the number of matching files and their total size. For example:
dir *.txt
I would like to run tcpdump so that it starts a new file each time the current file reaches 2 GB.
From what I know from another post, it's not possible to generate files bigger than 2 GB.
That's the tcpdump I'm currently looking at:
tcpdump -C 100 -W 2048 -w /tmp/example.pcap
It should create a new pcap file (example.pcap00, example.pcap01, ...) every 2 GB, but it doesn't. Probably because I'm trying to write to an external disk. So I think I need to create the files before tcpdump writes data into them.
How can I do that?
It should create new files with 2 GB of pcap data each until the 1 TB HD is full. So I cannot really use the -C option, because I don't know how much I need in advance.
What's the best way to solve my problem?
From what I know from another post, it's not possible to generate files bigger than 2 GB.
That depends on the OS on which you're running, whether you're running on a 64-bit machine (for some OSes; for OS X and *BSD, it doesn't matter), the version of libpcap tcpdump is using, and how that version of libpcap was built.
tcpdump -C 100 -W 2048 -w /tmp/example.pcap
Which means "change the file you're writing to when the file gets bigger than 100 million bytes, and have no more than 2048 files". (No, -W doesn't specify the maximum file size.)
It should create a new pcap file (example.pcap00, example.pcap01, ...) every 2 GB,
No, every 100 million bytes. Read the fine manual page.
but it doesn't. Probably because I'm trying to write to an external disk.
Why would the external disk have anything to do with this?
If "it doesn't", does that mean "it doesn't create new files, it just keeps writing to the old file" or "it reports an error and quits after writing to the first file"? If it's the latter, you might want to see the answer to this question.
I'm running a script in the terminal and it is supposed to produce a long output, but for some reason the terminal is only showing me the end of the result and I cannot scroll up far enough to see the complete result. Is there a way to save all the terminal instructions and results until I type clear?
The script I'm using has a loop, so if I redirect the output to a file I need to append the output of each iteration.
Depending on your system, the size of the terminal buffer may be fixed and hence you may not be able to scroll far enough to see the full output.
A good alternative would be to output your program/script to a text file using:
./nameofprogram > text_file.txt
Otherwise you will have to find a way to increase the number of scrollback lines. In some terminal applications you can go to Edit > Profiles > Edit > Scrolling tab and adjust your settings.
You can either redirect the output of your script to a file:
script > file
(Be careful to choose a file that does not already exist, otherwise its content will be overwritten.)
Or you can buffer the output with less:
script | less
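If you want to watch the output live and keep a copy at the same time, tee does both (a sketch; myscript.sh is a placeholder for your script, and 2>&1 also captures stderr):
# -a appends, so repeated runs (or loop iterations) accumulate in one log:
./myscript.sh 2>&1 | tee -a output.log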
I have a small script which creates a backup every 2 hours. Now I would like to delete the old ones. I know find can do this, but I want something more advanced.
I want to keep:
all backups from the last 24 hours
4 backups from the last 5 days
1 backup from the last 14 days
everything older than 14 days can be deleted
Could you tell me how to do this with a bash shell script on Debian? I couldn't find anything about this via Google.
Do not reinvent the wheel. Take a look at rsnapshot. Unless you want to use this as a learning exercise, I see no reason to spend time on a problem that has already been solved.
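To give a sense of how such a policy maps onto rsnapshot, here is a hypothetical excerpt of its config (the level names and counts are assumptions that must match your cron schedule and only approximate your "4 backups in 5 days" rule; see rsnapshot(1) for the exact semantics):
# /etc/rsnapshot.conf excerpt (fields must be separated by tabs)
retain	hourly	12	# taken every 2 hours, so roughly the last 24 hours
retain	daily	5	# one per day for the last 5 days
retain	weekly	2	# one per week, covering ~14 days
Each retain level is driven by a matching cron entry (e.g. running rsnapshot hourly every 2 hours); rsnapshot then rotates and expires the old snapshots for you.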