I'm new to Ruby... and Shoes... and programming, but here is my problem:
I made a timer which logs the timed amount to a txt file. It also keeps an all-time running total in a separate txt file. It works as I want it to...
I tried packaging it:
If I package just the .rb file it doesn't work; it only works if I package the entire folder, including the txt files.
This working copy seems to operate without the txt files (they are somehow built in). Is there a way to package this so I still have access to the associated txt files? (Maybe it has something to do with the paths...)
thanks.
The Shoes packager's behaviour is sometimes ugly. I think you're using Windows, so I'll try to explain what seems to happen:
You have a bundled Shoes app (a standalone .exe file). Every time you start it by double-clicking, it extracts itself into a new temp dir (located under c:\tmp\tempFileDirectory). So it is a NEW temporary directory every time!
The current path is also set to this temp directory, which includes the txt files you bundled into the app, in their original state. If you change the content of the files while the app is running and then restart it, your changes are gone, because the newly created temp dir contains a fresh copy of the original txt files. So it is a BAD idea to put your data files (txt in your case, or SQLite database files, or config files, ...) into the bundled app.
Better way:
create a "hidden" folder (folder's name should start with an ".") in the user's home folder. On windows it should be something like "c:\Users\YourName". Create there everything you need, this directory won't be temporal, so you can access it everytime without problems. This should general be a better solution when programming desktop stuff, not just while using shoes.
Related
I'm looking for a solution that checks for duplicate filenames when I'm downloading files, specifically through Firefox on Windows 10. I know that this check comes standard for files in the same directory, but as the volume of files scales up, it's getting harder and harder to find what I'm looking for among the files I've downloaded.
But since Firefox doesn't have an option to scan subdirectories when saving files (nor can I find a Firefox add-on that does something like it), I'm looking for any alternative solution that achieves what I'm after: something that notifies me that I'm attempting to download (or have just downloaded) a file whose name already exists in a subdirectory of a given folder, whether via an add-on or some kind of application or script that can run in the background. Preferably, I would like it to check the folders inside those subfolders as well.
My memory is terrible, so I opted to keep everything in the same folder so I would immediately get the warning when attempting to download a file I'd already downloaded. But that folder now contains far too many files for me to realistically sift through to find a particular file I'm looking for.
I would like to sort these files into subfolders of the folder where I currently store my downloads, while keeping the ability to immediately tell whether or not I'm about to download something I've already downloaded. All I need is a check for whether the same filename exists when a file is about to be created (which is already a feature) - but covering the subdirectories as well. I do not need any functionality to actually view all the files in each subfolder in the same window.
Here's the scenario: We have a computer running Windows 10 which has a directory that's backed up nightly. The backups are done with a batch file using Robocopy and scheduled via Windows. The parameters are set so that the backup always adds new files and edits to existing files into the destination, but never deletes files from the destination that have been deleted in the source. It essentially archives all files which are in the source directory at the end of each day.
Here's the tricky part. The source directory is very large, and occasionally someone finds a duplicate file (or several duplicates of a file) in it. When that happens, we need to delete all but one copy of the file, and then we need to access the backup directory manually, locate the file there, and do the same. This is tedious and time-consuming as it's not rare for someone to notice an entire subdirectory full of files that exist 5+ times each.
What we're looking for is a way to scan the source directory and all subdirectories inside for duplicate files and remove all but one copy of them, and then a way to reflect that into the destination. I've assumed that we will not be able to use Robocopy to reflect the changes in the destination due to the nature of the backup script it's running, but we do have the ability to run any third-party software on the destination directory as well, essentially running an action in both directories to clean each of them of duplicate files.
On that note, I'm not against using third-party tools to make this cleaner or more efficient, I'm just not aware of any.
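One way to approach the scanning half of this is a content-hash pass over the tree: hash every file, group by hash, and keep one file per group. A minimal sketch in Ruby (the directory argument and the keep-the-first-sorted-path policy are assumptions for illustration; it only prints by default):

    require "digest"
    require "find"

    root = ARGV.fetch(0, ".")   # directory tree to scan

    # Group every file under root by its content hash.
    groups = Hash.new { |h, k| h[k] = [] }
    Find.find(root) do |path|
      groups[Digest::SHA256.file(path).hexdigest] << path if File.file?(path)
    end

    # For each group of identical files, keep one and list the rest.
    groups.each_value do |paths|
      next if paths.size < 2
      keep, *extras = paths.sort
      puts "keep      #{keep}"
      extras.each do |dup|
        puts "duplicate #{dup}"
        # File.delete(dup)   # enable only after reviewing the output
      end
    end

Running the same script against both the source and the destination with the same keep policy would clean each side consistently, since identical content hashes to the same value in both trees.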
There is one way to solve this. I was also suffering from this problem, but I found out how to do it with a batch file.
There are mainly two commands:
xcopy
robocopy
For your needs here, xcopy will be helpful.
xcopy will back up your specific file or folder; even if you changed only some megabytes of data, it will copy the new or changed data without re-copying everything that was backed up before.
How to do it:
Open Notepad and type
xcopy "source file" "destination" /y /e /d /c /f /h /i /z /j
and then save the file with a ".bat" extension.
For more details, see the url below:
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/xcopy
I'm making a command-line tool for Mac that does some work on .strings files. Part of the process is making a backup of the .strings files it will operate on so that, in case something gets messed up, the user's files will still be safe somewhere for retrieval.
What directory is, in your opinion, best for saving such data? I'm assuming a temp directory of sorts, but I'm not sure where that should be. If this were an application in the macOS sense, I could store the data in the application folder. However, this is a command-line tool which will be installed in /usr/local/bin, following the UNIX way of doing things, and it would feel "wrong" to put that data in there.
I have a web folder on a Mac (running MAMP Pro) to which files are added on the fly. Its URL is as follows:
http://abc.com/folder/
I have another Windows machine that should constantly watch this folder and download any new file that gets dumped into this web folder to c:\macfolder\ (the files are always TSVs).
I know I can use wget to fetch the files and run whatever program does this via the Windows scheduler to watch constantly, but what's the best way to watch this folder for new files?
Thanks
P.S. I don't know what the best tags for this question are. Help me out with that too... :P
Since the directory is already mounted on Windows, your question appears to be a duplicate of this (and related) one - assuming you're OK with using C# to build such a tool:
Monitoring a directory for new file creation without FileSystemWatcher
If you're not so keen on a C# tool, there are command-line solutions like this one:
batch file to monitor additions to download folder
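Whichever route you take, the underlying idea in those answers is the same: remember which files you have already seen and poll for anything new. A rough sketch of that loop, in Ruby purely for illustration (the folder path and interval are placeholders):

    WATCH_DIR = "C:/macfolder"   # folder to watch (placeholder)
    INTERVAL  = 5                # seconds between scans

    # Start from the current contents so only later additions are reported.
    seen = Dir.glob(File.join(WATCH_DIR, "**", "*")).select { |p| File.file?(p) }

    loop do
      sleep INTERVAL
      current = Dir.glob(File.join(WATCH_DIR, "**", "*")).select { |p| File.file?(p) }
      (current - seen).each { |path| puts "New file: #{path}" }
      seen = current
    end

Polling like this trades a little latency for simplicity; the FileSystemWatcher approach in the linked C# answer reacts immediately instead, but has its own reliability caveats.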
For an application, I would like to store a collection of files together and have them appear in the filesystem as a single file so it's easy to manage. I am currently storing everything in a folder.
I would like to keep things accessible so you can manually edit the inner contents if necessary.
One way to do this would be to create a zip archive and give it a custom extension other than .zip. Then it appears as its own file type, and if needed you can unpack and access the contents, but for normal use it stays hidden.
I can't seem to find a convenient way to do this. Boost and zlib can do the compression but don't work with archives. I found libzip, but I have a hard time understanding how to use it, and it seems to me that it only reads/writes zip archives without doing the actual compression.
Is there a more convenient way to tackle this?
Can you call system functions to create an archive on OS X from C++ / Carbon?
Is there another way to make a folder appear as a single file?
On OS X, you can create Document Packages (similar to application bundles), which are treated as a single file in the Finder but are really just directories with some internal structure.
Apple does not zip these packages, but they do provide the functionality you describe, and they can be created and accessed through Core Foundation using CFBundleRef.
From the documentation:
... The important thing to remember about creating a document package is that it is just a directory. As long as the type of the document package is registered (as described in “Registering Your Document Type”), all you have to do is create a directory with the appropriate filename extension. (The Finder uses the filename extension as its cue to treat the directory as a package.) You can create the directory (and create any files you want to put inside that directory) using the standard BSD file system routines ...
As a first step, simply rename the folder and add the extension .bundle, e.g. Myappdir.bundle.
That will show the whole folder as one file with a Lego-like bundle icon.
The next step is to create an Info.plist file inside it; a sketch follows below.
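A minimal sketch of that layout, written in Ruby purely for brevity - any language's mkdir and file-write calls do the same job, and the bundle name and plist keys here are illustrative values, not requirements from the answer above:

    require "fileutils"

    bundle = "Myappdir.bundle"
    FileUtils.mkdir_p(bundle)   # the "package" is just a directory

    # A bare-bones Info.plist inside the bundle.
    File.write(File.join(bundle, "Info.plist"), <<~PLIST)
      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
        <key>CFBundleName</key>
        <string>Myappdir</string>
        <key>CFBundlePackageType</key>
        <string>BNDL</string>
      </dict>
      </plist>
    PLIST

Once the Finder recognizes the extension, it shows the directory as a single file, while Terminal and file APIs can still reach inside it.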